
Supplement to Chapter 17 Statistical process control (SPC)

Table S17.1 Type I and type II errors in SPC

                        Actual process state
Decision            In control          Out of control
Stop process        Type I error        Correct decision
Leave alone         Correct decision    Type II error

[Figure: control chart showing the upper control limit and lower control limit either side of the process mean]

are to the population mean, the higher the likelihood of investigating and trying to rectify a process which is actually problem-free. If the control limits are set at two standard deviations, the chance of a type I error increases to about 5 per cent. If the limits are set at one standard deviation, the chance of a type I error increases to about 32 per cent. When the control limits are placed at ±3 standard deviations away from the mean of the distribution which describes 'normal' variation in the process, they are called the upper control limit (UCL) and lower control limit (LCL).
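The type I error percentages quoted above follow directly from the normal distribution: they are the two-tailed probability that an in-control process produces a point outside ±k standard deviations. A minimal sketch, using only the standard library (the function name `type_one_error` is our own):

```python
# Type I error probability when control limits are set at +/- k standard
# deviations from the process mean, assuming normally distributed variation.
from math import erf, sqrt

def type_one_error(k: float) -> float:
    """Probability that an in-control (normal) process falls outside +/- k sigma."""
    phi = 0.5 * (1 + erf(k / sqrt(2)))   # standard normal CDF at k
    return 2 * (1 - phi)                 # two-tailed probability

for k in (1, 2, 3):
    print(f"limits at +/-{k} sigma: type I error = {type_one_error(k):.2%}")
# limits at +/-1 sigma: about 32%; +/-2 sigma: about 4.6%; +/-3 sigma: about 0.27%
```

This reproduces the figures in the text: roughly 32 per cent at one standard deviation, about 5 per cent at two, and well under 1 per cent at the conventional ±3 limits.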

Critical commentary

When its originators first described SPC more than half a century ago, the key issue was only to decide whether a process was 'in control' or not. Now, we expect SPC to reflect common sense as well as statistical elegance, and to promote continuous operations improvement. This is why two (related) criticisms have been levelled at the traditional approach to SPC. The first is that SPC seems to assume that any values of process performance which lie within the control limits are equally acceptable, while any values outside the limits are not. However, surely a value close to the process average or 'target' value will be more acceptable than one only just within the control limits. For example, a service engineer arriving only 1 minute late is a far better 'performance' than one arriving 59 minutes late, even if the control limits are 'quoted time ± one hour'. Also, arriving 59 minutes late would be almost as bad as arriving 61 minutes late! Second, a process always within its control limits may not be deteriorating, but neither is it improving. So rather than seeing control limits as fixed, it would be better to view them as a reflection of how the process is being improved. We should expect any improving process to have progressively narrowing control limits.

Quality loss function

The Taguchi loss function

Genichi Taguchi proposed a resolution of both the criticisms of SPC described in the critical commentary box.¹² He suggested that the central issue was the first problem – namely that the consequences of being 'off-target' (that is, deviating from the required process average performance) were inadequately described by simple control limits. Instead, he proposed a quality loss function (QLF) – a mathematical function which includes all the costs of poor quality. These include wastage, repair, inspection, service, warranty and, generally, what he termed 'loss to society' costs. This loss function is expressed as follows:

L = D²C

where
L = total loss to society costs
D = deviation from target performance
C = a constant.
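The formula above can be sketched in a few lines. The value of the constant C here is hypothetical; in practice it would be calibrated from the cost incurred at some known deviation (for example, the repair cost at the point where performance becomes unacceptable):

```python
# A minimal sketch of Taguchi's quality loss function, L = D^2 * C.
# The constant C below is an illustrative assumption, not a real calibration.

def quality_loss(deviation: float, c: float) -> float:
    """Loss-to-society cost for a given deviation from target performance."""
    return deviation ** 2 * c

# Example calibration: if a deviation of 10 units is known to cost 500,
# then C = 500 / 10^2 = 5.
C = 5.0
print(quality_loss(2, C))   # small deviation -> small loss: 20.0
print(quality_loss(10, C))  # deviation at the calibration point: 500.0
```

Note how the quadratic form means that doubling the deviation quadruples the loss, rather than loss jumping from zero to 'failure' at a control limit.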

Target-oriented quality

Figure S17.5 illustrates the difference between the conventional and Taguchi approaches to interpreting process variability. The more graduated approach of the QLF also answers the second problem raised in the critical commentary box. With losses increasing quadratically as performance deviates from target, there is a natural tendency to progressively reduce process variability. This is sometimes called a target-oriented quality philosophy.
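The contrast between the two philosophies can be made concrete with the chapter's service-engineer example (control limits of 'quoted time ± one hour'). The cost constant below is an assumption chosen purely for illustration:

```python
# Conventional (control-limit) view vs Taguchi (quadratic-loss) view,
# using the service-engineer example: limits of quoted time +/- one hour.
# The cost constant C is a hypothetical scaling factor for illustration.

LIMIT_MIN = 60   # control limit: one hour either side of the quoted time
C = 0.01         # assumed cost per squared minute of deviation

def conventional_view(deviation_min: float) -> str:
    # Inside the limits, every outcome counts the same; outside, it 'fails'.
    return "acceptable" if abs(deviation_min) <= LIMIT_MIN else "unacceptable"

def taguchi_loss(deviation_min: float) -> float:
    # Loss rises smoothly with deviation: no cliff at the control limit.
    return C * deviation_min ** 2

for d in (1, 59, 61):
    print(f"{d:>3} min late: conventional = {conventional_view(d):>12}, "
          f"Taguchi loss = {taguchi_loss(d):.2f}")
```

The output shows the point made in the critical commentary: the conventional view rates 1 minute late and 59 minutes late as equally 'acceptable' and flips verdict entirely at 61 minutes, whereas the Taguchi loss for 59 minutes late (34.81) is nearly identical to that for 61 minutes late (37.21) and vastly worse than for 1 minute late (0.01).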
