Measurement topics

Four classic areas to consider for measurement

Pre-production

Gauge studies: classic, destructive, vision systems, sensors

Machine set-up

Capability Studies

 

Production

Sample Size and frequency estimation

Establishing the best way to chart a process; variables and attributes

Estimating the parts per million (ppm) defective

 

Post-production

Acceptance sampling schemes

Sorting quarantined stock

 

Improvement work

Trial planning and analysis

Surveys: employee, customer satisfaction

Communicating the status quo in pictures that are understood

 

Sample size = ƒ {change, variability, confidence}


All measured quantities are uncertain.
With every estimate ask for the range.

 

Graphical summary of Basic Stats from Minitab, showing confidence limits around the mean and median.

We collect data to understand something. Deming said the most important thing to understand was variation.

All too often qualiticians collect data because that is how it has always been done.

 

Why collect it?

  • In case something goes wrong and I need evidence to show I am not to blame  -  WRONG

  • To understand if the process has changed  -  CORRECT

  • To understand which parameters truly influence a process  -  CORRECT

  • To communicate unambiguously with our colleagues  -  CORRECT

 

 

Once we know why we need data (the objective of collecting it), we can plan what samples to collect and how to collect them. We will decide which variability we need to measure, and how much data we need to be confident in the decision we must make.

 

Process control data is increasingly collected and analysed automatically. The output is a message telling the operators to carry out a specific action. Computer systems are big data-collection networks with standard outputs. The qualitician's job is to define the chart that best suits the operators' needs. This must be reviewed regularly (parameters change, and sample sizes and frequencies change) as customer needs change and the process is improved.
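Where such charts are defined for operators, the limits themselves are simple to compute. Here is a minimal Python sketch of an individuals (I) chart, using the standard moving-range method; the function name and the sample weights are illustrative, not from any real line:

```python
from statistics import mean

def i_chart_limits(values):
    """Centre line and 3-sigma limits for an individuals (I) chart.

    Short-term variation is estimated from the average moving range;
    2.66 is the standard SPC constant 3/d2, with d2 = 1.128 for
    moving ranges of span 2.
    """
    centre = mean(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = mean(moving_ranges)
    return centre - 2.66 * mr_bar, centre, centre + 2.66 * mr_bar

# Illustrative fill weights (mg)
lcl, cl, ucl = i_chart_limits([465.1, 467.3, 466.2, 468.0, 466.9])
```

Any point falling outside (lcl, ucl) would trigger the kind of operator message described above.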

 

Other quality data should be planned to answer a question; often what is the root cause of a problem. Then the qualitician uses a program like Minitab to analyse, prepare pictures to suit the audience, and provide clear evidence of which actions are required to improve the process.

 

When we measure a sample

…  we are trying to make predictions about the population from which the sample was taken; predictions about its mean [m] and its variability [s]. Then a comparison can be made, either with a process standard or with another population.

 

We are not too interested in the sample itself. Indeed, often the sample is scrapped after measurement. It is the size of the sample that defines whether we have reliable information about the accuracy and the precision of the process.

 

 

The basics about sample sizes

My recent experience has shown that many industrial companies are wasting money by measuring too many samples, or the wrong parameter; processes change, requiring a review of the old control plans.

 

Remember, when deciding how many to check:

sample size     n         the number of results we have.

change          c         in the process which we wish to detect, and which is important for us to understand.

variability     s         of the process (not used for proportions of attributes).

confidence      αβ        we need in our predictions, m and s, being correct.

 

 

       n = ƒ [c, s, αβ]    ALSO    c = ƒ [n, s, αβ]    AND    αβ = ƒ [n, c, s]

 

Use Minitab's sample size calculator for reviewing control checks (both variable and attribute) and for estimating what is needed for a trial (both simple and multi-variable).

Stat > Power and Sample Size
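The same relationship, n = ƒ [c, s, αβ], can also be sketched outside Minitab. The Python function below uses the standard normal-approximation formula for detecting a shift in a mean; the function name and defaults are my own, and Minitab's calculator (which iterates on the t distribution) will give slightly different answers:

```python
from math import ceil
from statistics import NormalDist

def sample_size(change, sigma, alpha=0.05, power=0.90):
    """n needed to detect a shift `change` in the mean of a process
    with standard deviation `sigma`, using a two-sided test.

    alpha = risk of a false alarm; power = 1 - beta = chance of
    detecting a real shift of the stated size.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(((z_alpha + z_beta) * sigma / change) ** 2)

# e.g. detect a 1 mg shift in a process with s = 3.03 mg
n = sample_size(change=1.0, sigma=3.03)
```

Note how n grows with the square of s/c: halving the change you must detect quadruples the sample size.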

 

 

Confidence limits

All measurements are estimates. Around every result there is a band of uncertainty about the true value, so we must know how wide this band is.

For example: I weigh 120 samples in a capability study and calculate the average. Because there is error in the gauge, because the operator is inconsistent, because it is only a sample and not the whole population, the true average of the process output may be more or less than the calculated average. A similar logic applies to the calculation of the variability, the gauge R&R% and the capability indices.

 

The average weight of the samples is 466.78 mg.

Based on our knowledge from the samples measured, the process average could be as low as 466.23 mg, or as high as 467.32 mg.
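These limits can be roughly reproduced from the three ingredients alone (average, standard deviation, sample size). A minimal Python sketch, assuming a normal approximation (Minitab uses the t distribution, so its limits are fractionally wider); the function name is illustrative:

```python
from math import sqrt
from statistics import NormalDist

def mean_confidence_limits(xbar, s, n, confidence=0.95):
    """Two-sided confidence limits for a population mean,
    normal approximation (adequate for a sample of this size)."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    half_width = z * s / sqrt(n)
    return xbar - half_width, xbar + half_width

# 120 samples, average 466.78 mg, standard deviation 3.03 mg
low, high = mean_confidence_limits(466.78, 3.03, 120)
```

The returned limits are close to those quoted above; the small difference comes from the normal-versus-t approximation.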

 

The variability (expressed as a standard deviation) is 3.03 mg.

It could be as low as 2.69 mg, or as high as 3.47 mg.

 

The R&R of the gauge used to weigh the product (estimated during a previous study where 24 results were used) was 17%.

It could be as low as 13%, it could be as high as 23%.

 

Knowing the confidence limits (CLs) around a statistic can help us to make better decisions and avoid mistakes.

We may decide it is necessary to take more samples in order to reduce the CLs before a final decision is made.

 

When given a statistic, always ask for the range (confidence limit) around the result.

A larger sample reduces the range of the possible statistic.

 

This topic is covered in greater depth in the following RK:QM courses:

  • Advanced SPC

  • Sample sizes

  • Minitab: fast-track introduction for the packaging industry