International Organization for Standardization: ISO/TR 10017:2003(E)

An overview of the International Organization for Standardization, its role and scope. Identification of potential needs for statistical methods. Characteristics of descriptive statistics, design of experiments, regression analysis, sampling and modelling.


Contents

Foreword

Introduction

1 Scope

2 Normative references

3 Identification of potential needs for statistical techniques

4 Descriptions of statistical techniques identified

4.1 General

4.2 Descriptive statistics

4.3 Design of experiments (DOE)

4.4 Hypothesis testing

4.5 Measurement analysis

4.6 Process capability analysis

4.7 Regression analysis

4.8 Reliability analysis

4.9 Sampling

4.10 Simulation

4.11 Statistical process control (SPC) charts

4.12 Statistical tolerancing

4.13 Time series analysis

Bibliography

Foreword

ISO (the International Organization for Standardization) is a worldwide federation of national standards bodies (ISO member bodies). The work of preparing International Standards is normally carried out through ISO technical committees. Each member body interested in a subject for which a technical committee has been established has the right to be represented on that committee. International organizations, governmental and non-governmental, in liaison with ISO, also take part in the work. ISO collaborates closely with the International Electrotechnical Commission (IEC) on all matters of electrotechnical standardization.

International Standards are drafted in accordance with the rules given in the ISO/IEC Directives, Part 2.

The main task of technical committees is to prepare International Standards. Draft International Standards adopted by the technical committees are circulated to the member bodies for voting. Publication as an International Standard requires approval by at least 75 % of the member bodies casting a vote.

In exceptional circumstances, when a technical committee has collected data of a different kind from that which is normally published as an International Standard ("state of the art", for example), it may decide by a simple majority vote of its participating members to publish a Technical Report. A Technical Report is entirely informative in nature and does not have to be reviewed until the data it provides are considered to be no longer valid or useful.

Attention is drawn to the possibility that some of the elements of this document may be the subject of patent rights. ISO shall not be held responsible for identifying any or all such patent rights.

ISO/TR 10017 was prepared by Technical Committee ISO/TC 176, Quality management and quality assurance, Subcommittee SC 3, Supporting technologies.

This second edition cancels and replaces the first edition (ISO/TR 10017:1999) and is now based on ISO 9001:2000.

This Technical Report might be updated to reflect future revisions of ISO 9001. Comments on the contents of this Technical Report may be sent to ISO Central Secretariat for consideration in a future revision.

Introduction

The purpose of this Technical Report is to assist an organization in identifying statistical techniques that can be useful in developing, implementing, maintaining and improving a quality management system in compliance with the requirements of ISO 9001:2000.

In this context, the usefulness of statistical techniques follows from the variability that may be observed in the behaviour and outcome of practically all processes, even under conditions of apparent stability. Such variability can be observed in the quantifiable characteristics of products and processes, and can be seen to exist at various stages over the total life cycle of products, from market research to customer service and final disposal.

Statistical techniques can help to measure, describe, analyse, interpret and model such variability, even with a relatively limited amount of data. Statistical analysis of such data may provide a better understanding of the nature, extent and causes of variability. This could help to solve and even prevent problems that could result from such variability.

Statistical techniques can thus allow better use of available data to assist in decision making, and thereby help to continually improve the quality of products and processes to achieve customer satisfaction. These techniques are applicable to a wide spectrum of activities, such as market research, design, development, production, verification, installation and servicing.

This Technical Report is intended to guide and assist an organization in considering and selecting statistical techniques appropriate to the needs of the organization. The criteria for determining the need for statistical techniques, and the appropriateness of the technique(s) selected, remain the prerogative of the organization.

The statistical techniques described in this Technical Report are also applicable to other standards in the ISO 9000 family, in particular ISO 9004:2000.

1. Scope

This Technical Report provides guidance on the selection of appropriate statistical techniques that may be useful to an organization in developing, implementing, maintaining and improving a quality management system in compliance with ISO 9001. This is done by examining those requirements of ISO 9001 that involve the use of quantitative data, and then identifying and describing the statistical techniques that can be useful when applied to such data.

The list of statistical techniques cited in this Technical Report is neither complete nor exhaustive, and does not preclude the use of any other techniques (statistical or otherwise) that are deemed to be beneficial to the organization. Furthermore, this Technical Report does not attempt to prescribe which statistical technique(s) are to be used; nor does it attempt to advise on how the technique(s) are to be implemented.

This Technical Report is not intended for contractual, regulatory or certification/registration purposes. It is not intended to be used as a mandatory checklist for compliance with ISO 9001:2000 requirements. The justification for using statistical techniques is that their application would help to improve the effectiveness of the quality management system.

NOTE 1 The terms "statistical techniques" and "statistical methods" are often used interchangeably.

NOTE 2 References in this Technical Report to "product" are applicable to the generic product categories of service, software, hardware and processed materials, or a combination thereof, in accordance with the definition of "product" in ISO 9000:2000.

2. Normative references

The following referenced documents are indispensable for the application of this document. For dated references, only the edition cited applies. For undated references, the latest edition of the referenced document (including any amendments) applies.

ISO 9001:2000, Quality management systems -- Requirements

3. Identification of potential needs for statistical techniques

The need for quantitative data that may reasonably be associated with the implementation of the clauses and sub-clauses of ISO 9001 is identified in Table 1. Listed against the need for quantitative data thus identified are one or more statistical techniques that could be of potential benefit to the organization, when appropriately applied to such data.

NOTE Statistical techniques can be usefully applied to qualitative data, if such data can be converted into quantitative data.

Where no need for quantitative data could be readily associated with a clause or subclause of ISO 9001, no statistical technique is identified.

The statistical techniques cited in this Technical Report are limited to those that are well known. Likewise, only relatively straightforward applications of statistical techniques are identified here.

Each of the statistical techniques noted below is described briefly in Clause 4, to assist the organization to assess the relevance and value of the statistical techniques cited, and to help determine whether or not the organization should use them in a specific context.

Table 1 -- Needs involving quantitative data and supporting statistical technique(s)

Clause/subclause of ISO 9001:2000

Needs involving the use of quantitative data

Statistical technique(s)

4 Quality management system

4.1 General requirements

See Introduction to this Technical Report

4.2 Documentation requirements

4.2.1 General

None identified

4.2.2 Quality manual

None identified

4.2.3 Control of documents

None identified

4.2.4 Control of records

None identified

5 Management responsibility

5.1 Management commitment

None identified

5.2 Customer focus

Need to determine customer requirements

Need to assess customer satisfaction

See 7.2.2 in this table

See 8.2.1 in this table

5.3 Quality policy

None identified

5.4 Planning

5.4.1 Quality objectives

None identified

5.4.2 Quality management system planning

None identified

5.5 Responsibility, authority and communication

5.5.1 Responsibility and authority

None identified

5.5.2 Management representative

None identified

5.5.3 Internal communication

None identified

5.6 Management review

5.6.1 General

None identified

5.6.2 Review input a) results of audits

Need to obtain and evaluate audit data

Descriptive statistics; sampling

b) customer feedback

Need to obtain and assess customer feedback

Descriptive statistics; sampling

c) process performance and product conformity

Need to assess process performance and product conformity

Descriptive statistics; process capability analysis; sampling; SPC charts

d) status of preventive and corrective actions

Need to obtain and evaluate data from preventive and corrective actions

Descriptive statistics

5.6.3 Review output

None identified

6 Resource management

6.1 Provision of resources

None identified

6.2 Human resources

6.2.1 General

None identified

6.2.2 Competence, awareness and training

6.2.2 a)

None identified

6.2.2 b)

None identified

6.2.2 c) evaluate the effectiveness of the actions taken

Need to assess competence, and effectiveness of training

Descriptive statistics, sampling

6.2.2 d)

None identified

6.2.2 e)

None identified

6.3 Infrastructure

None identified

6.4 Work environment

Need to monitor the work environment

Descriptive statistics, SPC charts

7 Product realization

7.1 Planning of product realization

None identified

7.2 Customer-related processes

7.2.1 Determination of requirements related to the product

None identified

7.2.2 Review of requirements related to the product

Need to assess the organization's ability to meet defined requirements

Descriptive statistics, measurement analysis, process capability analysis, sampling, statistical tolerancing

7.2.3 Customer communication

None identified

7.3 Design and development

7.3.1 Design and development planning

None identified

7.3.2 Design and development inputs

None identified

7.3.3 Design and development outputs

Need to verify that design outputs satisfy input requirements

Descriptive statistics, design of experiments, hypothesis testing, measurement analysis, regression analysis, reliability analysis, sampling, simulation, time series analysis

7.3.4 Design and development review

None identified

7.3.5 Design and development verification

Need to verify that design outputs satisfy input requirements

Descriptive statistics, design of experiments, hypothesis testing, measurement analysis, process capability analysis, regression analysis, reliability analysis, sampling, simulation, time series analysis

7.3.6 Design and development validation

Need to validate that product meets stated use and needs

Descriptive statistics, design of experiments, hypothesis testing, measurement analysis, process capability analysis, regression analysis, reliability analysis, sampling, simulation

7.3.7 Control of design and development changes

Need to evaluate, verify and validate effect of design changes

Descriptive statistics, design of experiments, hypothesis testing, measurement analysis, process capability analysis, regression analysis, reliability analysis, sampling, simulation

7.4 Purchasing

7.4.1 Purchasing process

Need to ensure that purchased product conforms to specified purchase requirements

Need to evaluate suppliers' ability to supply product to meet the organization's requirements

Descriptive statistics, hypothesis testing, measurement analysis, process capability analysis, regression analysis, reliability analysis, sampling

Descriptive statistics, design of experiments, process capability analysis, regression analysis, sampling

7.4.2 Purchasing information

None identified

7.4.3 Verification of purchased product

Need to establish and implement inspection and other activities to ensure that purchased product meets specified requirements

Descriptive statistics, hypothesis testing, measurement analysis, process capability analysis, reliability analysis, sampling

7.5 Production and service provision

7.5.1 Control of production and service provision

Need to monitor and control production and service activity

Descriptive statistics, measurement analysis, process capability analysis, regression analysis, reliability analysis, sampling, SPC charts, time series analysis

7.5.2 Validation of processes for production and service provision

Need to validate, monitor, and control processes whose output cannot be readily measured

Descriptive statistics, process capability analysis, regression analysis, sampling, SPC charts, time series analysis

7.5.3 Identification and traceability

None identified

7.5.4 Customer property

Need to verify characteristics of customer property

Descriptive statistics, sampling

7.5.5 Preservation of product

Need to monitor the effect of handling, packaging and storage on product quality

Descriptive statistics, regression analysis, reliability analysis, sampling, SPC charts, time series analysis

7.6 Control of monitoring and measuring devices

Need to ensure that monitoring and measurement processes and equipment are consistent with requirements

Need to assess the validity of previous measurements, where required

Descriptive statistics, measurement analysis, process capability analysis, regression analysis, sampling, SPC charts, statistical tolerancing, time series analysis

Descriptive statistics, hypothesis testing, measurement analysis, regression analysis, sampling, statistical tolerancing, time series analysis

8 Measurement, analysis and improvement

8.1 General

None identified

8.2 Monitoring and measurement

8.2.1 Customer satisfaction

Need to monitor and analyse information pertaining to customer perception

Descriptive statistics, sampling

8.2.3 Monitoring and measurement of processes

Need to monitor and measure quality management system processes, to demonstrate the ability of the processes to achieve planned results

Descriptive statistics, design of experiments, hypothesis testing, measurement analysis, process capability analysis, sampling, SPC charts, time series analysis

8.2.4 Monitoring and measurement of product

Need to monitor and measure product characteristics at appropriate stages of realization to verify that requirements are met

Descriptive statistics, design of experiments, hypothesis testing, measurement analysis, process capability analysis, regression analysis, reliability analysis, sampling, SPC charts, time series analysis

4. Descriptions of statistical techniques identified

4.1 General

The following statistical techniques, or families of techniques, that might help an organization to meet its needs, are identified in Table 1:

-- descriptive statistics;

-- design of experiments;

-- hypothesis testing;

-- measurement analysis;

-- process capability analysis;

-- regression analysis;

-- reliability analysis;

-- sampling;

-- simulation;

-- statistical process control (SPC) charts;

-- statistical tolerancing;

-- time series analysis.

Of the various statistical techniques listed above, descriptive statistics (which includes graphical methods) deserves particular note, as it constitutes an important component of many of the other techniques.

As stated earlier, the criteria used in selecting the techniques listed above are that the techniques are well known and widely used, and their application has resulted in benefit to users.

The choice of technique and the manner of its application will depend on the circumstances and purpose of the exercise, which will differ from case to case.

A brief description of each statistical technique, or family of techniques, is provided in 4.2 to 4.13. The descriptions are intended to assist a lay reader to assess the potential applicability and benefit of using the statistical techniques in implementing the requirements of a quality management system.

The actual application of statistical techniques cited here will require more guidance and expertise than is provided by this Technical Report. There is a large body of information on statistical techniques available in the public domain, such as textbooks, journals, reports, industry handbooks and other sources of information, which may assist the organization in the effective use of statistical techniques. However, it is beyond the scope of this Technical Report to cite these sources, and the search for such information is left to individual initiative.

Listed in the Bibliography are ISO and IEC Standards and Technical Reports related to statistical techniques. They are cited here for information; this Technical Report does not specify compliance with them.

4.2 Descriptive statistics

4.2.1 What it is

The term descriptive statistics refers to procedures for summarizing and presenting quantitative data in a manner that reveals the characteristics of the distribution of data.

The characteristics of data that are typically of interest are their central value (most often described by the average), and spread or dispersion (usually measured by the range or standard deviation). Another characteristic of interest is the distribution of the data, for which there are quantitative measures that describe the shape of the distribution (such as the degree of "skewness", which describes its asymmetry).

The information provided by descriptive statistics can often be conveyed readily and effectively by a variety of graphical methods, which include relatively simple displays of data such as

-- a trend chart (also called a "run chart"), which is a plot of a characteristic of interest over a period of time, to observe its behaviour over time,

-- a scatter plot, which helps to assess the relationship between two variables by plotting one variable on the x-axis and the corresponding value of the other on the y-axis, and

-- a histogram, which depicts the distribution of values of a characteristic of interest.
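The summary measures and the histogram described above can be sketched in a few lines. The following illustration uses only the Python standard library; the data values are invented for illustration.

```python
from statistics import mean, stdev

data = [9.8, 10.1, 10.0, 10.4, 9.7, 10.2, 9.9, 10.3, 10.0, 10.1]

# Central value and spread
avg = mean(data)
rng = max(data) - min(data)            # range
sd = stdev(data)                       # sample standard deviation

# Skewness (moment coefficient g1), a quantitative measure of asymmetry
n = len(data)
m2 = sum((x - avg) ** 2 for x in data) / n
m3 = sum((x - avg) ** 3 for x in data) / n
skew = m3 / m2 ** 1.5

print(f"mean={avg:.2f}  range={rng:.2f}  std dev={sd:.3f}  skewness={skew:.3f}")

# A crude text histogram of the same data
edges = [9.6, 9.8, 10.0, 10.2, 10.4, 10.6]
counts = []
for left, right in zip(edges, edges[1:]):
    c = sum(1 for x in data if left <= x < right)
    counts.append(c)
    print(f"[{left:.1f}, {right:.1f})  {'#' * c}")
```

A symmetric sample such as this one yields a skewness near zero; a long upper tail would give a positive value, a long lower tail a negative one.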

There is a wide array of graphical methods that can aid the interpretation and analysis of data. These range from the relatively simple tools described above (and others such as bar-charts and pie-charts), to techniques of a more complex nature, including those involving specialised scaling (such as probability plots), and graphics involving multiple dimensions and variables.

Graphical methods are useful in that they can often reveal unusual features of the data that may not be readily detected in quantitative analysis. They have extensive use in data analysis when exploring or verifying relationships between variables, and in estimating the parameters that describe such relationships. Also, they have an important application in summarizing and presenting complex data or data relationships in an effective manner, especially for non-specialist audiences.

Descriptive statistics (including graphical methods) are implicitly invoked in many of the statistical techniques cited in this Technical Report, and should be regarded as a fundamental component of statistical analysis.

4.2.2 What it is used for

Descriptive statistics is used for summarizing and characterizing data. It is usually the initial step in the analysis of quantitative data, and often constitutes the first step towards the use of other statistical procedures.

The characteristics of sample data may serve as a basis for making inferences regarding the characteristics of populations from which the samples were drawn, with a prescribed margin of error and level of confidence.

4.2.3 Benefits

Descriptive statistics offers an efficient and relatively simple way of summarizing and characterizing data, and also offers a convenient way of presenting such information. In particular, graphical methods are a very effective way of presenting data, and communicating information.

Descriptive statistics is potentially applicable to all situations that involve the use of data. It can aid the analysis and interpretation of data, and is a valuable aid in decision making.

4.2.4 Limitations and cautions

Descriptive statistics provides quantitative measures of the characteristics (such as the average and standard deviation) of sample data. However these measures are subject to the limitations of the sample size and sampling method employed. Also, these quantitative measures cannot be assumed to be valid estimates of characteristics of the population from which the sample was drawn, unless the underlying statistical assumptions are satisfied.

4.2.5 Examples of applications

Descriptive statistics has useful application in almost all areas where quantitative data are collected. It can provide information about the product, process or some other aspect of the quality management system, and may be used in management reviews. Some examples of such applications are as follows:

-- summarizing key measures of product characteristics (such as the central value and spread);

-- describing the performance of some process parameter, such as oven temperature;

-- characterizing delivery time or response time in the service industry;

-- summarizing data from customer surveys, such as customer satisfaction or dissatisfaction;

-- illustrating measurement data, such as equipment calibration data;

-- displaying the distribution of a process characteristic by a histogram, against the specification limits for that characteristic;

-- displaying product performance results over a period of time by means of a trend chart;

-- assessing the possible relationship between a process variable (e.g. temperature) and yield by a scatter plot.

4.3 Design of experiments (DOE)

4.3.1 What it is

Design of experiments refers to investigations carried out in a planned manner, and which rely on a statistical assessment of results to reach conclusions at a stated level of confidence.

DOE typically involves inducing change(s) to the system under investigation, and statistically assessing the effect of such change on the system. Its objective may be to validate some characteristic(s) of a system, or it may be to investigate the influence of one or more factors on some characteristic(s) of a system.

The specific arrangement and manner in which the experiments are to be carried out constitute the design of the experiment, and such design is governed by the objective of the exercise and the conditions under which the experiments are to be conducted.

There are several techniques that may be used to analyse experiment data. These range from analytical techniques, such as the "analysis of variance" (ANOVA), to those more graphical in nature, such as "probability plots".
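As an illustration of the arithmetic underlying such analyses, the following sketch estimates the main effects and the interaction for a hypothetical 2 x 2 factorial experiment. The factor names and yield figures are invented; factor levels are coded -1/+1 as is conventional for such designs.

```python
# Hypothetical full-factorial runs: (temperature level, pressure level, yield)
runs = [
    (-1, -1, 52.0),
    (+1, -1, 60.0),
    (-1, +1, 54.0),
    (+1, +1, 68.0),
]
n = len(runs)

def main_effect(index):
    """Average yield at the factor's high level minus average at its low level."""
    hi = [r[2] for r in runs if r[index] == +1]
    lo = [r[2] for r in runs if r[index] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effect_temp = main_effect(0)
effect_pres = main_effect(1)

# Interaction effect: contrast formed from the product of the coded levels
interaction = sum(t * p * y for t, p, y in runs) / (n / 2)

print(f"temperature effect: {effect_temp:+.1f}")
print(f"pressure effect:    {effect_pres:+.1f}")
print(f"interaction T x P:  {interaction:+.1f}")
```

A full analysis of variance would additionally compare these effects against an estimate of experimental error (from replication) before declaring any of them significant; that step is omitted here for brevity.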

4.3.2 What it is used for

DOE may be used for evaluating some characteristic of a product, process or system, for the purpose of validation against a specified standard, or for comparative assessment of several systems.

DOE is particularly useful for investigating complex systems whose outcome may be influenced by a potentially large number of factors. The objective of the experiment may be to maximize or optimize a characteristic of interest, or to reduce its variability. DOE may be used to identify the more influential factors in a system, the magnitude of their influence, and the relationships (i.e. interactions), if any, between the factors. The findings may be used to facilitate the design and development of a product or process, or to control or improve an existing system.

The information from a designed experiment may be used to formulate a mathematical model that describes the system characteristic(s) of interest as a function of the influential factors. With certain limitations (cited briefly in 4.3.4), such a model may be used for purposes of prediction.

4.3.3 Benefits

When estimating or validating a characteristic of interest, there is a need to assure that the results obtained are not simply due to chance variation. This applies to assessments made against some prescribed standard, and to an even greater degree in comparing two or more systems. DOE allows such assessments to be made with a prescribed level of confidence.

A major advantage of DOE is its relative efficiency and economy in investigating the effects of multiple factors in a process, as compared to investigating each factor individually. Also, its ability to identify the interactions between certain factors can lead to a deeper understanding of the process. Such benefits are especially pronounced when dealing with complex processes (i.e. processes that involve a large number of potentially influential factors).

Finally, when investigating a system there is the risk of incorrectly assuming causality where there may be only chance correlation between two or more variables. The risk of such error can be reduced through the use of sound principles of experiment design.

4.3.4 Limitations and cautions

Some level of inherent variation (often aptly described as "noise") is present in all systems, and this can sometimes cloud the results of investigations and lead to incorrect conclusions. Other potential sources of error include the confounding effect of unknown (or simply unrecognised) factors that may be present, or the confounding effect of dependencies between the various factors in a system. The risk posed by such errors can be mitigated by well designed experiments through, for example, the choice of sample size or by other considerations in the design of the experiment. These risks can never be eliminated and therefore should be borne in mind when forming conclusions.

Also, strictly speaking, the experiment findings are valid only for the factors and the range of values considered in the experiment. Therefore, caution should be exercised in extrapolating (or interpolating) much beyond the range of values considered in the experiment.

Finally, the theory of DOE makes certain fundamental assumptions (such as the existence of a canonical relationship between a mathematical model and the physical reality being studied) whose validity or adequacy are subject to debate.

4.3.5 Examples of applications

A familiar application of DOE is in assessing products or processes as, for example, in validating the effect of medical treatment, or in assessing the relative effectiveness of several types of treatment. Industrial examples of such applications include validation tests of products against some specified performance standards.

DOE is widely used to identify the influential factors in complex processes and thereby to control or improve the mean value, or reduce the variability, of some characteristic of interest (such as process yield, product strength, durability, noise level). Such experiments are frequently encountered in the production, for example, of electronic components, automobiles and chemicals. They are also widely used in areas as diverse as agriculture and medicine. The scope of applications remains potentially vast.

4.4 Hypothesis testing

4.4.1 What it is

Hypothesis testing is a statistical procedure to determine, with a prescribed level of risk, if a set of data (typically from a sample) is compatible with a given hypothesis. The hypothesis may pertain to an assumption of a particular statistical distribution or model, or it may pertain to the value of some parameter of a distribution (such as its mean value).

The procedure for hypothesis testing involves assessing the evidence (in the form of data) to decide whether a given hypothesis regarding a statistical model or parameter should or should not be rejected.

The hypothesis test is explicitly or implicitly invoked in many of the statistical techniques cited in this Technical Report, such as sampling, SPC charts, design of experiments, regression analysis and measurement analysis.

4.4.2 What it is used for

Hypothesis testing is widely used to enable one to conclude, at a stated level of confidence, whether or not a hypothesis regarding a parameter of a population (as estimated from a sample) is valid. The procedure may therefore be applied to test whether or not a population parameter meets a particular standard; or it may be used to test for differences in two or more populations. It is thus useful in decision making.

Hypothesis testing is also used for testing model assumptions, such as whether or not the distribution of a population is normal, or whether sample data are random.

The hypothesis test procedure may also be used to determine the range of values (termed the "confidence interval") which can be said to contain, at a stated confidence level, the true value of the parameter in question.
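As an illustration, the following sketch tests a sample mean against a target value and derives the corresponding confidence interval. It assumes, for simplicity, that the population standard deviation is known, so the normal (z) distribution can be used; the filling-process scenario, sample values and target are invented.

```python
import math
from statistics import mean

# Hypothetical sample from a filling process; target mean 500.0 ml.
# The process standard deviation is assumed known (2.0 ml).
sample = [498.7, 501.2, 499.5, 497.9, 500.4, 498.1, 499.0, 498.3]
target, sigma = 500.0, 2.0

n = len(sample)
xbar = mean(sample)
se = sigma / math.sqrt(n)              # standard error of the mean

# Test H0: mu = target against H1: mu != target
z = (xbar - target) / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 95 % confidence interval for the population mean
half_width = 1.96 * se
ci = (xbar - half_width, xbar + half_width)

print(f"sample mean = {xbar:.2f}, z = {z:.2f}, p = {p_value:.3f}")
print(f"95 % confidence interval: ({ci[0]:.2f}, {ci[1]:.2f})")
```

Here the p-value exceeds 0.05, so the hypothesis that the process mean equals the target would not be rejected at the 95 % confidence level; consistently, the confidence interval contains the target value.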

4.4.3 Benefits

Hypothesis testing allows an assertion to be made about some parameter of a population, with a known level of confidence.

As such, it can be of assistance in making decisions that depend on the parameter.

Hypothesis testing can similarly allow assertions to be made regarding the nature of the distribution of a population, as well as properties of the sample data itself.

4.4.4 Limitations and cautions

To ensure the validity of conclusions reached from hypothesis testing, it is essential that the underlying statistical assumptions are adequately satisfied, notably that the samples are independently and randomly drawn.

Furthermore, the level of confidence with which the conclusion can be made is governed by the sample size.

At a theoretical level, there is some debate regarding how a hypothesis test can be used to make valid inferences.

4.4.5 Examples of applications

Hypothesis testing has general application when an assertion must be made about a parameter or the distribution of one or more populations (as estimated by a sample) or in assessing the sample data itself.

For example, the procedure may be used in the following ways:

-- to test whether the mean (or standard deviation) of a population meets a given value, such as a target or a standard;

-- to test whether the means of two (or more) populations are different, as when comparing different batches of components;

-- to test that the proportion of a population with defects does not exceed a given value;

-- to test for differences in the proportion of defective units in the outputs of two processes;

-- to test whether the sample data have been randomly drawn from a single population;

-- to test if the distribution of a population is normal;

-- to test whether an observation in a sample is an "outlier", i.e. an extreme value of questionable validity;

-- to test if there has been an improvement in some product or process characteristic;

-- to determine the sample size required to accept or reject a hypothesis, at a stated level of confidence;

-- using sample data, to determine a confidence interval within which the true population average might lie.
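The sample-size determination mentioned in the list above can be sketched with a common textbook formula: the number of observations needed so that a 95 % confidence interval for a mean has a given half-width, assuming the population standard deviation is known. All figures are invented.

```python
import math

sigma = 2.0   # assumed known population standard deviation
E = 0.5       # desired half-width of the confidence interval
z = 1.96      # normal quantile for 95 % confidence

# n must satisfy z * sigma / sqrt(n) <= E
n = math.ceil((z * sigma / E) ** 2)
print(f"required sample size: {n}")
```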

4.5 Measurement analysis

4.5.1 What it is

Measurement analysis (also referred to as "measurement uncertainty analysis" or "measurement system analysis") is a set of procedures to evaluate the uncertainty of measurement systems under the range of conditions in which the system operates. Measurement errors may be analysed using the same methods as those used to analyse product characteristics.

4.5.2 What it is used for

Measurement uncertainty should be taken into account whenever data are collected. Measurement analysis is used for assessing, at a prescribed level of confidence, whether the measurement system is suitable for its intended purpose.

It is used for quantifying variation from various sources such as variation due to the appraiser (i.e. the person taking the measurement), or variation from the measurement process or from the measurement instrument itself.

It is also used to describe the variation due to the measurement system as a proportion of the total process variation, or the total allowable variation.

4.5.3 Benefits

Measurement analysis provides a quantitative and cost-effective way of selecting a measurement instrument, or for deciding whether the instrument is capable of assessing the product or process parameter being examined.

Measurement analysis provides a basis for comparing and reconciling differences in measurement, by quantifying variation from various sources in measurement systems themselves.
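The decomposition described above can be sketched as follows. This is an illustrative simplification only (repeatability from a single appraiser, no reproducibility component); the parts and readings are hypothetical:

```python
from statistics import pvariance

# Hypothetical study: each of three parts measured three times
# with the same instrument by the same appraiser.
measurements = {
    "part1": [10.02, 10.04, 10.03],
    "part2": [10.21, 10.19, 10.22],
    "part3": [9.85, 9.88, 9.86],
}

# Repeatability: pooled within-part variance (instrument variation)
within = [pvariance(v) for v in measurements.values()]
var_measurement = sum(within) / len(within)

# Total observed variation: variance across all readings
all_readings = [x for v in measurements.values() for x in v]
var_total = pvariance(all_readings)

# Share of total variation attributable to the measurement system
measurement_share = var_measurement / var_total
```

A full measurement system study would also separate appraiser (reproducibility) effects, typically via analysis of variance.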

4.5.4 Limitations and cautions

In all but the simplest cases, measurement analysis needs to be conducted by trained specialists.

Unless care and expertise are used in its application, the results of measurement analysis could encourage false and potentially costly over-optimism, both in the measurement results and in the acceptability of the product. Conversely, over-pessimism can result in the unnecessary replacement of adequate measurement systems.

4.5.5 Examples of applications

4.5.5.1 Measurement uncertainty determination

The quantification of measurement uncertainties can serve to support an organization's assurance to its customers (internal or external) that its measurement processes are capable of adequately measuring the quality level to be achieved.

Measurement uncertainty analysis can often highlight variability in areas that are critical to product quality, and hence guide an organization in allocating resources in such areas to improve or maintain quality.

4.5.5.2 Selection of new instruments

Measurement analysis can help guide the choice of a new instrument by examining the proportion of variation that is associated with the instrument.

4.5.5.3 Determination of the characteristics of a particular method (trueness, precision, repeatability, reproducibility, etc.)

This allows the selection of the most appropriate measurement method(s) to be used in support of assuring product quality.

It may also allow an organization to balance the cost and effectiveness of various measurement methods against their effect on product quality.

4.5.5.4 Proficiency testing

An organization's measurement system may be assessed and quantified by comparing its measurement results with those obtained from other measurement systems.

Also, in addition to providing assurance to customers, this may help an organization to improve its methods or the training of its staff with regard to measurement analysis.

4.6 Process capability analysis

4.6.1 What it is

Process capability analysis is the examination of the inherent variability and distribution of a process, in order to estimate its ability to produce output that conforms to the range of variation permitted by specifications.

When the data are measurable variables (of the product or process), the inherent variability of the process is stated in terms of the "spread" of the process when it is in a state of statistical control (see 4.11), and is usually measured as six standard deviations (6σ) of the process distribution.

If the process data are a normally distributed ("bell shaped") variable, this spread will (in theory) encompass 99,73 % of the population.

Process capability can be conveniently expressed as an index, which relates the actual process variability to the tolerance permitted by specifications.

A widely used capability index for variable data is Cp (the ratio of the total tolerance to 6σ), which is a measure of the theoretical capability of a process that is perfectly centred between the specification limits.

Another widely used index is Cpk, which describes the actual capability of a process which may or may not be centred; Cpk is especially applicable to situations involving one-sided specifications.

Other capability indices have been devised to account for long- and short-term variability better and for variation around the intended process target value.

When the process data involve "attributes" (e.g. percent nonconforming, or the number of nonconformities), process capability is stated as the average proportion of nonconforming units, or the average rate of nonconformities.
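The indices described above can be computed directly from the in-control process mean and standard deviation. A minimal sketch, assuming a normally distributed characteristic; the process figures and specification limits are illustrative only:

```python
from statistics import NormalDist

def capability_indices(mu, sigma, lsl, usl):
    """Cp and Cpk for a normally distributed characteristic,
    given the in-control process mean (mu) and standard deviation (sigma)."""
    cp = (usl - lsl) / (6 * sigma)               # potential (centred) capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # actual capability
    # expected fraction of output outside the specification limits
    nd = NormalDist(mu, sigma)
    frac_nc = nd.cdf(lsl) + (1 - nd.cdf(usl))
    return cp, cpk, frac_nc

# Illustrative process: mean 10.1, sigma 0.1, specification 10.0 +/- 0.4
cp, cpk, frac_nc = capability_indices(10.1, 0.1, lsl=9.6, usl=10.4)
```

Here the off-centre mean makes Cpk lower than Cp, and the nonconforming fraction comes almost entirely from the upper limit; as cautioned in 4.6.4, such estimates depend on the normality assumption.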

4.6.2 What it is used for

Process capability analysis is used to assess the ability of a process to produce outputs that consistently conform to specifications, and to estimate the amount of nonconforming product that can be expected.

This concept may be applied to assessing the capability of any sub-set of a process, such as a specific machine.

The analysis of "machine capability" may be used, for example, to evaluate specific equipment or to assess its contribution to overall process capability.

4.6.3 Benefits

Process capability analysis provides an assessment of the inherent variability of a process and an estimate of the percentage of nonconforming items that can be expected. This enables the organization to estimate the costs of nonconformance, and can help guide decisions regarding process improvement. Setting minimum standards for process capability can guide the organization in selecting processes and equipment that should produce acceptable product.

4.6.4 Limitations and cautions

The concept of capability strictly applies to a process in a state of statistical control. Therefore, process capability analysis should be performed in conjunction with control methods to provide ongoing verification of control.

Estimates of the percentage of nonconforming product are subject to assumptions of normality. When strict normality is not realized in practice, such estimates should be treated with caution, especially in the case of processes with high capability ratios.

Capability indices can be misleading when the process distribution is substantially not normal. Estimates of the percentage of nonconforming units should be based on methods of analysis developed for appropriate distributions for such data. Likewise, in the case of processes that are subject to systematic assignable causes of variation, such as tool wear, specialized approaches should be used to calculate and interpret capability.

4.6.5 Examples of applications

Process capability is used to establish rational engineering specifications for manufactured products by ensuring that component variations are consistent with allowable tolerance build-ups in the assembled product. Conversely, when tight tolerances are necessary, component manufacturers are required to achieve specified levels of process capability to ensure high yields and minimum waste.

High process capability goals (e.g. Cp ≥ 2) are sometimes used at the component and subsystem level to achieve desired cumulative quality and reliability of complex systems.

Machine capability analysis is used to assess the ability of a machine to produce or perform to stated requirements. This is helpful in making purchase or repair decisions.

Automotive, aerospace, electronics, food, pharmaceutical and medical device manufacturers routinely use process capability as a major criterion to assess suppliers and products. This allows the manufacturer to minimize direct inspection of purchased products and materials.

Some companies in manufacturing and service industries track process capability indices to identify the need for process improvements, or to verify the effectiveness of such improvements.

4.7 Regression analysis

4.7.1 What it is

Regression analysis relates the behaviour of a characteristic of interest (usually called the "response variable") with potentially causal factors (usually called "explanatory variables"). Such a relationship is specified by a model that can come from science, economics, engineering, etc., or it can be derived empirically. The objective is to help understand the potential cause of variation in the response, and to explain how much each factor contributes to that variation. This is achieved by statistically relating variation in the response variable with variation in the explanatory variables, and obtaining the best fit by minimizing the deviations between the predicted and the actual response.

4.7.2 What it is used for

Regression analysis allows the user to do the following:

-- to test hypotheses about the influence of potential explanatory variables on the response, and to use this information to describe the estimated change in the response for a given change in an explanatory variable;

-- to predict the value of the response variable, for specific values of the explanatory variables;

-- to predict (at a stated level of confidence) the range of values within which the response is expected to lie, given specific values for the explanatory variables;

-- to estimate the direction and degree of association between the response variable and an explanatory variable (although such an association does not imply causation). Such information might be used, for example, to determine the effect of changing a factor such as temperature on process yield, while other factors are held constant.
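The second and fourth uses above (prediction and estimation of the direction and degree of association) can be sketched for the simplest case, one explanatory variable fitted by ordinary least squares. The temperature/yield data below are purely illustrative:

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = a + b*x
    (single explanatory variable)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx       # estimated slope: change in response per unit of x
    a = my - b * mx     # estimated intercept
    return a, b

# Illustrative data: process yield (%) against temperature (degrees C)
temp = [150, 160, 170, 180, 190]
yield_pct = [80.1, 82.3, 83.8, 86.0, 88.2]

a, b = fit_line(temp, yield_pct)
predicted = a + b * 175   # predicted yield at 175 degrees C
```

The sign and magnitude of the slope b estimate the direction and degree of association; as noted in 4.7.2, such an association does not by itself imply causation.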

4.7.3 Benefits

Regression analysis can provide insight into the relationship between various factors and the response of interest, and such insight can help guide decisions related to the process under study and ultimately improve the process.

The insight yielded by regression analysis follows from its ability to describe patterns in response data concisely, compare different but related subsets of data, and analyse potential cause-and-effect relationships. When the relationships are modelled well, regression analysis can provide an estimate of the relative magnitudes of the effect of explanatory variables, as well as the relative strengths of those variables. This information is potentially valuable in controlling or improving process outcomes.

Regression analysis can also provide estimates of the magnitude and source of influences on the response that come from factors that are either unmeasured or omitted in the analysis. This information may be used to improve the measuring system or the process.

Regression analysis may be used to predict the value of the response variable, for given values of one or more explanatory variables; likewise it may be used to forecast the effect of changes in explanatory variables on an existing or predicted response. It may be useful to conduct such analyses before investing time or money in a problem when the effectiveness of the action is not known.

4.7.4 Limitations and cautions

When modelling a process, skill is required in specifying a suitable regression model (e.g. linear, exponential, multivariate), and in using diagnostics to improve the model. The presence of omitted variables, measurement error(s), and other sources of unexplained variation in the response can complicate modelling. Specific assumptions behind the regression model in question, and characteristics of the available data, determine what estimation technique is appropriate in a regression analysis problem.

A problem sometimes encountered in developing a regression model is the presence of data whose validity is questionable. The validity of such data should be investigated where possible, since the inclusion or omission of the data from the analysis could influence the estimates of the model parameters, and thereby the response.

Simplifying the model, by minimizing the number of explanatory variables, is important in modelling. The inclusion of unnecessary variables can cloud the influence of explanatory variables and reduce the precision of model predictions. However, omitting an important explanatory variable may seriously limit the model and the usefulness of the results.

4.7.5 Examples of applications

Regression analysis is used to model production characteristics such as yield, throughput, quality of performance, cycle time, probability of failing a test or inspection, and various patterns of deficiencies in processes. Regression analysis is used to identify the most important factors in those processes, and the magnitude and nature of their contribution to variation in the characteristic of interest.

Regression analysis is used to predict the outcomes from an experiment, or from controlled prospective or retrospective study of variation in materials or production conditions.

Regression analysis is used to verify the substitution of one measurement method by another, for example, replacing a destructive or time-consuming method by a non-destructive or time-saving one.

Examples of applications of non-linear regression include modelling drug concentrations as functions of time and weight of respondents; modelling chemical reactions as a function of time, temperature and pressure.

4.8 Reliability analysis

4.8.1 What it is

Reliability analysis is the application of engineering and analytical methods to the assessment, prediction and assurance of problem-free performance over time of a product or system under study 2).

The techniques used in reliability analysis often require the use of statistical methods to deal with uncertainties, random characteristics or probabilities of occurrence (of failures, etc.) over time. Such analysis generally involves the use of appropriate statistical models to characterize variables of interest, such as the time-to-failure or time-between-failures. The parameters of these statistical models are estimated from empirical data obtained from laboratory or factory testing or from field operation.

Reliability analysis encompasses other techniques (such as fault mode and effect analysis) which focus on the physical nature and causes of failure, and the prevention or reduction of failures.

4.8.2 What it is used for

Reliability analysis is used for the following purposes:

-- to verify that specified reliability measures are met, on the basis of data from a test of limited duration and involving a specified number of test units;

-- to predict the probability of problem-free operation, or other measures of reliability such as the failure rate or the mean-time-between-failures of components or systems;

-- to model failure patterns and operating scenarios of product or service performance;

-- to provide statistical data on design parameters, such as stress and strength, useful for probabilistic design;

-- to identify critical or high-risk components and the probable failure modes and mechanisms, and to support the search for causes and preventive measures.

The statistical techniques employed in reliability analysis allow statistical confidence levels to be attached to the estimates of the parameters of reliability models that are developed, and to predictions made using such models.
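A minimal sketch of the second use above (predicting the probability of problem-free operation) under the simplest common model, a constant failure rate (exponential time-to-failure). The test data are hypothetical, and a full analysis would also attach confidence limits to the estimates:

```python
import math

def exponential_mtbf(times_to_failure):
    """Maximum-likelihood MTBF estimate under an exponential
    time-to-failure model (constant failure rate)."""
    mtbf = sum(times_to_failure) / len(times_to_failure)
    rate = 1.0 / mtbf   # estimated failure rate (lambda)
    return mtbf, rate

def reliability_at(t, rate):
    """Probability of problem-free operation up to time t
    under the exponential model: R(t) = exp(-lambda * t)."""
    return math.exp(-rate * t)

# Illustrative test data: hours to failure of six units
hours = [520, 1310, 760, 2100, 940, 1570]
mtbf, rate = exponential_mtbf(hours)
r_500 = reliability_at(500, rate)   # probability of surviving 500 h
```

Where failure rates are not constant (e.g. wearout), a Weibull or other model would be fitted instead, consistent with the cautions in 4.8.4.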

4.8.3 Benefits

Reliability analysis provides a quantitative measure of product and service performance against failures or service interruptions. Reliability activities are closely associated with the containment of risk in system operation. Reliability is often an influencing factor in the perception of product or service quality, and in customer satisfaction.

The benefits of using statistical techniques in reliability analysis include

-- the ability to predict and quantify the likelihood of failure and other reliability measures within stated confidence limits,

-- the insights to guide decisions regarding different design alternatives using different redundancy and mitigation strategies,

-- the development of objective acceptance or rejection criteria for performing compliance tests to demonstrate that reliability requirements are met,

-- the capability to plan optimal preventive maintenance and replacement schedules based on the reliability analysis of product performance, service and wearout data, and

-- the possibility of improving design to achieve a reliability objective economically.

2) Reliability analysis is closely related to the wider field of "dependability", which also includes maintainability and availability. These, and other related techniques and approaches, are defined and discussed in the IEC publications cited in the Bibliography.

4.8.4 Limitations and cautions

A basic assumption of reliability analysis is that the performance of a system under study can be reasonably characterized by a statistical distribution. The accuracy of reliability estimates will therefore depend on the validity of this assumption.

The complexity of reliability analysis is compounded when multiple failure modes are present, which may or may not conform to the same statistical distribution. Also, when the number of failures observed in a reliability test is small, this can severely affect the statistical confidence and precision attached to estimates of reliability.

The conditions under which the reliability test is conducted are critically important, particularly when the test involves some form of "accelerated stress" (i.e. stress that is significantly greater than that which the product will experience in normal usage). It may be difficult to determine the relationship between the failures observed under test and product performance under normal operating conditions, and this will add to the uncertainty of reliability predictions.

4.8.5 Examples of applications

Typical examples of applications of reliability analysis include

-- verification that components or products can meet stated reliability requirements,

-- projection of product life cycle cost based on reliability analysis of data from tests at new product introduction,

-- guidance on decisions to make or buy off-the-shelf products, based on the analysis of their reliability, and the estimated effect on delivery targets and downstream costs related to projected failures,

-- projection of software product maturity based on test results, quality improvement and reliability growth, and establishing software release targets compatible with market requirements.

