One of the first lessons of the SixSigma methodology
is to CenterTheProcess. That is, we want parts that are too short
to be just as likely as parts that are too long.
Contexts where centering the process makes sense:
- Both an upper and a lower specification limit are well-defined.
- Inspection costs are significant.
- The rework cycle-time is significant.
- The ProcessVariation? can be described by a NormalDistribution?.
- The cost of being too small is similar to the cost of being too big. The costs of (sometimes) being out of spec include scrap, rework, inspections, and other measures to deal with the problem. So even if big parts can be cut down to spec, and small parts cannot, the costs of being too big may still be similar to the costs of being too small.
- Two real-world examples are given on the SixSigma page.
Contexts where centering the process does not make sense:
- Only one specification limit is well-defined.
- Inspection costs are only incurred when this specification limit comes into play.
- Rework and other costs are only incurred when this specification limit comes into play.
Examples where centering the process does not make sense:
- In TheNetherlands, there is an 'e' symbol on food packaging, which signifies that at least a certain percentage of packages (say 85%) contain at least as much as the label says. So in this case, too little is worse than too much. The same can be said for other products as well. In general, there are numerous cases where either too short is much worse than too long, or vice versa. --AalbertTorsius.
- Often the costs, or at least the costs of the consequences, are radically different: a part with a hole too small won't let the inserted part fit at all, so the part needs to be rejected; if the hole is too big, the parts can still be assembled but bad things can happen later - possibly a complete failure leading to tragic results.
- Some processes are one-sided and can't be centered, for example, server response time should be less than 3 seconds.
See Taguchi Methods (the Taguchi loss function) for a much more thorough discussion.
If the ProcessVariation? is described by a NormalDistribution?,
shifting the ProcessCenter? by one StandardDeviation will increase the out-of-spec frequency at one end by an order of magnitude, and reduce the out-of-spec frequency at the other end by an order of magnitude. So roughly, a 100-fold difference in cost between the two ends can justify a one StandardDeviation shift of the ProcessCenter?.
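A minimal sketch of that claim (assuming spec limits of +/-3 and a StandardDeviation of 1, as in the example below; Python, using math.erfc for the normal tail area):

 import math

 def upper_tail(z):
     # Fraction of a standard normal distribution lying above z.
     return 0.5 * math.erfc(z / math.sqrt(2.0))

 sigma, lower_spec, upper_spec = 1.0, -3.0, 3.0
 for mean in (0.0, -1.0):   # centered, then shifted down one StandardDeviation
     too_small = upper_tail((mean - lower_spec) / sigma)
     too_large = upper_tail((upper_spec - mean) / sigma)
     print(f"mean={mean:+.1f}  too small={too_small:.4%}  too large={too_large:.6%}")

With these particular numbers the too-small tail grows by a factor of about 17 while the too-large tail shrinks by a factor of about 43; the exact factors depend on how far the spec limits sit from the ProcessCenter?, so "an order of magnitude" is only a rough figure.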
The necessary cost ratio increases as the ProcessCapability? increases. In other words, the wider the spec limits and the tighter the StandardDeviation, the more tightly you want to CenterTheProcess.
This is incorrect. The process and the specification are two different things. Shifting the process will have an unknown effect, in general, on the material with respect to the specification. If you have a tightly controlled process, you can shift many standard deviations and still stay within the specification limits. This is part of the argument in favor of continually reducing variation.
A hypothetical example, using absurdly precise probabilities:
- A process has a StandardDeviation of 1.
- The upper spec limit is 3.
- The lower spec limit is -3.
- The ProcessVariation? follows a NormalDistribution?.
- It is much worse to be too large (>3) than too small (<-3).
First, let's see what happens if we
CenterTheProcess.
- On a bad day (for small parts), the ProcessAverage? is -1.5.
  - 7% of the parts are too small (<-3).
  - 0.0003% of the parts are too large (>3).
- On a good day, the ProcessAverage? is 0.
  - 0.14% of the parts are too small (<-3).
  - 0.14% of the parts are too large (>3).
- On a bad day (for large parts), the ProcessAverage? is 1.5.
  - 0.0003% of the parts are too small (<-3).
  - 7% of the parts are too large (>3).
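These percentages can be checked with a short sketch (assuming the same StandardDeviation of 1, spec limits of +/-3, and bad days that move the ProcessAverage? by +/-1.5; the tails helper is just an illustrative name):

 import math

 def tails(mean, sigma=1.0, lower=-3.0, upper=3.0):
     # Out-of-spec fractions (too small, too large) for a normal process.
     q = lambda z: 0.5 * math.erfc(z / math.sqrt(2.0))
     return q((mean - lower) / sigma), q((upper - mean) / sigma)

 for day, mean in (("bad (small)", -1.5), ("good", 0.0), ("bad (large)", 1.5)):
     too_small, too_large = tails(mean)
     print(f"{day:12s} too small={too_small:.4%}  too large={too_large:.4%}")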
Next, let's see what happens if we shift the process average one StandardDeviation away from the expensive (too-large) end.
- On a bad day (for small parts), the ProcessAverage? is -2.5.
  - 31% of the parts are too small (<-3).
  - 0.000002% of the parts are too large (>3).
- On a good day, the ProcessAverage? is -1.
  - 2% of the parts are too small (<-3).
  - 0.003% of the parts are too large (>3).
- On a bad day (for large parts), the ProcessAverage? is 0.5.
  - 0.02% of the parts are too small (<-3).
  - 0.6% of the parts are too large (>3).
Let's see how much of a difference in cost is needed to justify the shift. Assume that the one
StandardDeviation shift is about optimum. Then the change in total cost from shifting a little further should be close to zero, comparing bad days. (Since most of the expensive defects happen on bad days.)
- On a bad day (for small parts), the ProcessAverage? is -2.6.
  - 34% of the parts are too small (<-3).
  - 0.000001% of the parts are too large (>3).
- On a good day, the ProcessAverage? is -1.1.
  - 3% of the parts are too small (<-3).
  - 0.002% of the parts are too large (>3).
- On a bad day (for large parts), the ProcessAverage? is 0.4.
  - 0.03% of the parts are too small (<-3).
  - 0.5% of the parts are too large (>3).
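The same sketch, with the nominal ProcessAverage? moved down by 1.0 and then 1.1 StandardDeviations (bad days again move it by +/-1.5):

 import math

 def tails(mean, sigma=1.0, lower=-3.0, upper=3.0):
     # Out-of-spec fractions (too small, too large) for a normal process.
     q = lambda z: 0.5 * math.erfc(z / math.sqrt(2.0))
     return q((mean - lower) / sigma), q((upper - mean) / sigma)

 for shift in (1.0, 1.1):
     print(f"shift = {shift} StandardDeviations")
     for day, mean in (("bad (small)", -shift - 1.5),
                       ("good", -shift),
                       ("bad (large)", -shift + 1.5)):
         too_small, too_large = tails(mean)
         print(f"  {day:12s} too small={too_small:.2%}  too large={too_large:.7%}")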
Assume a too-small part costs $1, and a too-large part costs $x. (Yes, the theory is based on costs that increase the further you are from the target, but this is just a simple trade-off analysis.)
When the shift is one StandardDeviation, the cost for two bad days (one of each type) is:
0.621% * $x + 30.85% * $1
When the shift is 1.1 StandardDeviations?, the corresponding cost is:
0.466% * $x + 34.45% * $1
For these costs to be equal,
0.155% * $x = 3.6% * $1
x = 23
For spec limits of +/-3, x = 23 (and the shift cuts scrap costs by 70%).
For spec limits of +/-4, x = 190 (and the shift cuts scrap costs by 90%).
For spec limits of +/-5, x = 1500.
For spec limits of +/-6, x = 12000.
Again, these values are absurdly precise.
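The marginal trade-off can be sketched in the same way for each set of spec limits, comparing the bad days for shifts of 1.0 and 1.1 StandardDeviations (the helper names are illustrative; the break-even ratio is where the extra small-part scrap exactly cancels the savings on large parts):

 import math

 def upper_tail(z):
     # Fraction of a standard normal distribution lying above z.
     return 0.5 * math.erfc(z / math.sqrt(2.0))

 def bad_day_fractions(limit, shift):
     # Out-of-spec fractions on the two bad days, when the nominal center
     # sits 'shift' StandardDeviations below zero and bad days move it by 1.5.
     too_small = upper_tail(limit - shift - 1.5)   # bad day for small parts
     too_large = upper_tail(limit + shift - 1.5)   # bad day for large parts
     return too_small, too_large

 for limit in (3, 4, 5, 6):
     s1_small, s1_large = bad_day_fractions(limit, 1.0)
     s2_small, s2_large = bad_day_fractions(limit, 1.1)
     # Break-even: (extra small-part scrap) * $1 = (large-part savings) * $x
     x = (s2_small - s1_small) / (s1_large - s2_large)
     print(f"spec limits +/-{limit}: break-even x ~ {x:.0f}")

This prints ratios of roughly 23, 190, 1550, and 12600, consistent with the rounded values above.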
Discussion:
The DistributionOfAllStatistics page illustrates the NormalDistribution?.
See also: SixSigma
CategoryManufacturing
CategoryStatistics