Gauge studies check that the results provided by gauges are:
ACCURATE – BIAS: calibration checks that the results are centred on the true dimension
PRECISE – REPEATABILITY: measures variability within the gauge
        – REPRODUCIBILITY: measures variability between people / gauges
These two are combined to provide the R&R% result.
Attribute checks, vision systems, and sensors are all gauge systems which have to be checked to ensure their results are reliable. Whereas the variable gauge studies above use variable measurements, this group of gauges is assessed using their α and β errors.
An R&R study
R&R is the statistic we use to measure the capability of a gauge; it is the ratio of:
variability : product tolerance
band of uncertainty : tolerance of the product being measured
i.e. 6 standard deviations ÷ tolerance
Until the mid 2000s, the packaging industry used 5.15 sds (99% of all results) as the element of variability. This has generally changed to 6 sds (99.7% of all results), reflecting the higher expectations placed on measurement systems.
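The ratio above can be written as a one-line calculation. A minimal sketch (the function name and the example figures are illustrative, not from any standard):

```python
def rr_percent(sigma_gauge, tolerance, k=6.0):
    """Gauge R&R% = (k * gauge standard deviation) / tolerance * 100.

    k = 6.0 spans 99.7% of readings (current practice);
    k = 5.15 spans 99% (the pre-mid-2000s convention).
    """
    return k * sigma_gauge / tolerance * 100.0

# A gauge with sigma = 0.005 mm against a 0.30 mm wide tolerance:
print(round(rr_percent(0.005, 0.30), 1))   # -> 10.0
```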
Gauges used in trials
If you are checking a gauge's suitability to measure product produced in a trial, the tolerance is replaced with the effect to be detected. Typically this will be ⅙ of the tolerance, which illustrates the challenge of finding a suitable gauge for a trial.
The rule of thumb used to decide the acceptability of gauge capability is:
R&R% > 30%: unacceptable
20% < R&R% ≤ 30%: needs improvement
10% < R&R% ≤ 20%: acceptable
R&R% ≤ 10%: world class
Many gauges have an R&R% of 5% today.
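The bands above can be expressed as a small helper. The boundary handling (which band a value of exactly 10%, 20%, or 30% falls into) is a choice made here, not stated in the text:

```python
def rr_verdict(rr_pct):
    """Map an R&R% value onto the acceptance bands quoted above."""
    if rr_pct > 30:
        return "unacceptable"
    if rr_pct > 20:
        return "needs improvement"
    if rr_pct > 10:
        return "acceptable"
    return "world class"

print(rr_verdict(5))    # -> world class
print(rr_verdict(25))   # -> needs improvement
```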
For a classic gauge study (and also a destructive gauge study):
Stat ► Quality Tools ► Gage Study ► Gage R&R Study (crossed)
For an inter-plant/lab gauge study:
Stat ► Quality Tools ► Gage Study ► Gage R&R Study (nested)
Attribute gauges and vision systems
Four possible outcomes
There are four possible outcomes when a vision system or a sensor gauge checks a piece of product:
1 – good product accepted
2 – good product rejected (the α error)
3 – defective product accepted (the β error)
4 – defective product rejected
1 and 4 are correct decisions; 2 and 3 are incorrect decisions.
The capability of such gauging systems (equivalent of R&R%) is defined by the α and β errors.
Frequently, production will adjust the system in reaction to high levels of good product being rejected, which increases the level of defective product delivered; and vice versa.
The solution is to improve the system, not adjust it.
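Given a challenge study in which the true state of every piece is known, the two errors can be estimated directly from the counts. A sketch with hypothetical figures:

```python
def alpha_beta(good_accepted, good_rejected, bad_accepted, bad_rejected):
    """Estimate attribute-gauge error rates from a challenge study
    where each piece's true state is known.
    alpha: fraction of good pieces wrongly rejected.
    beta:  fraction of defective pieces wrongly accepted.
    """
    alpha = good_rejected / (good_accepted + good_rejected)
    beta = bad_accepted / (bad_accepted + bad_rejected)
    return alpha, beta

# Hypothetical study: 500 known-good and 50 known-defective pieces
a, b = alpha_beta(good_accepted=480, good_rejected=20,
                  bad_accepted=3, bad_rejected=47)
print(a, b)   # -> 0.04 0.06
```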
The methodology for estimating the performance is included in the RK:QM course Advanced SPC.
An outline procedure for measuring the capability of attribute gauges, so that their performance can be compared, can be seen in the document:
rkqm attribute gauge cap procedure.pdf
Capability of attribute gauges
The dilemma of an attribute gauge system:
Where do we set the reject value?
Set at 50, we reject all defectives but also reject high volumes of acceptable product.
Set at 60, we accept all good product but also accept high volumes of defective product.
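The trade-off can be made concrete with two overlapping reading distributions. The distributions below are entirely hypothetical (good product assumed to read around 48, defective around 62, both with sd 4), with a piece rejected when its reading exceeds the reject value:

```python
import math

def normal_cdf(x, mu, sigma):
    """P(reading <= x) for a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Assumed distributions: good ~ N(48, 4), defective ~ N(62, 4)
for reject_value in (50, 60):
    alpha = 1.0 - normal_cdf(reject_value, 48, 4)  # good product rejected
    beta = normal_cdf(reject_value, 62, 4)         # defective product accepted
    print(reject_value, round(alpha, 3), round(beta, 3))
```

Moving the reject value from 50 to 60 swaps a large α for a large β; no threshold makes both small while the distributions overlap, which is why the system itself must be improved.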
Go/no-go gauges
The use of "go/no-go" gauges in a quality control plan for volume production processes is dangerous because the results are likely to suggest the process is producing good product when, in fact, significant amounts of defective product are being made.
The gauge was introduced at a time when defect levels were between 5% and 10%. Today, industry expects 0.03%. Its origins are in the engineering industry, where such gauges were used for 100% inspection, not sampling.
There are two problems with the go/no-go gauge.
1 – Our inability to analyse the data for precision and trend.
2 – The very large quantity of samples required, typically 10,000.
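One way to see where a figure of roughly 10,000 comes from: at today's 0.03% defect level, the sample needed for a high chance of seeing even one defective piece is about that size. The 95% confidence level used below is an assumption for illustration:

```python
import math

def n_for_detection(p_defect, confidence=0.95):
    """Smallest sample size giving `confidence` probability of seeing
    at least one defective when the true defect rate is p_defect."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_defect))

print(n_for_detection(0.0003))   # roughly 10,000 pieces
```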
Prior to the regular use of computers, a system of analysis known as Xbar + R was used to estimate the R&R% of a gauge. A sample size of 10 pieces measured twice by 2 operators was generally used.
Today, the system of analysis used is ANOVA. The rule of thumb for this system is 6 pieces measured twice by three operators.
As Minitab provides the confidence intervals around the R&R%, the sample size can be honed by increasing the number of samples if the REPEATABILITY limits are too wide. It is normally advised that all the operators who use the gauge take part in the study.
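For reference, the variance-component arithmetic behind a crossed ANOVA study can be sketched from scratch. This is an illustration of the standard two-way ANOVA calculation, not Minitab's own code, and the data below are invented:

```python
import math
from statistics import mean

def anova_grr(data, tolerance):
    """Crossed gauge R&R% via two-way ANOVA variance components.
    data[operator][part] is a list of repeat readings."""
    ops = list(data)
    parts = list(data[ops[0]])
    o, p = len(ops), len(parts)
    r = len(data[ops[0]][parts[0]])

    grand = mean(x for op in ops for pt in parts for x in data[op][pt])
    op_means = {op: mean(x for pt in parts for x in data[op][pt]) for op in ops}
    part_means = {pt: mean(x for op in ops for x in data[op][pt]) for pt in parts}
    cell_means = {(op, pt): mean(data[op][pt]) for op in ops for pt in parts}

    # Sums of squares for operator, part, interaction, and repeatability
    ss_op = p * r * sum((op_means[op] - grand) ** 2 for op in ops)
    ss_int = r * sum((cell_means[op, pt] - op_means[op]
                      - part_means[pt] + grand) ** 2
                     for op in ops for pt in parts)
    ss_rep = sum((x - cell_means[op, pt]) ** 2
                 for op in ops for pt in parts for x in data[op][pt])

    ms_op = ss_op / (o - 1)
    ms_int = ss_int / ((o - 1) * (p - 1))
    ms_rep = ss_rep / (o * p * (r - 1))

    var_repeat = ms_rep                               # within-gauge
    var_int = max(0.0, (ms_int - ms_rep) / r)         # operator*part
    var_oper = max(0.0, (ms_op - ms_int) / (p * r))   # between operators
    var_grr = var_repeat + var_int + var_oper
    return 6 * math.sqrt(var_grr) / tolerance * 100

# Invented mini-study: 3 operators x 2 parts x 2 trials (a real study
# would follow the 6-piece rule of thumb above)
data = {
    "A": {"p1": [10.1, 10.0], "p2": [12.0, 12.1]},
    "B": {"p1": [10.2, 10.1], "p2": [12.2, 12.1]},
    "C": {"p1": [10.0, 10.1], "p2": [12.0, 12.0]},
}
print(round(anova_grr(data, tolerance=4.0), 1))
```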