
Variation Reduction Part 1

By Bob Sproull

Review

In my last post we completed our discussion of the basic differences between being an optimizer and being a satisficer.  We also demonstrated how a satisficer makes step-wise, incremental improvements, rather than waiting, as the optimizer does, for the perfect solution before implementing anything.  In today’s post we will begin a new series of posts on variation reduction.

Introduction

One of the keys to optimizing the performance of your processes is having an understanding of the nature and sources of variability.  Like anything else, if you don’t understand the basics of something, you simply won’t be able to improve it.  In the next series of posts, I’m going to discuss several key points related to process and system variability.

So what, exactly, is variability?  For a formal definition, we might turn to Hopp and Spearman [1], who define variability as the quality of non-uniformity of a class of entities. For example, a group of individuals who all weigh exactly the same has no variability in weight, while another group with vastly different weights is highly variable in this regard.  In manufacturing systems, there are many attributes in which variability might be of interest.  Physical dimensions, process times, machine failure and repair times, quality measures, temperatures, material hardness, setup times, and so on are all examples of characteristics that are prone to non-uniformity, or variation.

Variability is closely associated with (but not identical to) randomness.  To fully understand the causes and effects of variability, one must therefore understand the concept of randomness and the related subject of probability, both of which are beyond the scope of this series of posts. What we can do here to better define variability is to better understand its measures and classes.

In order to analyze variability effectively, we need a way to quantify it.  The basic measure of variation is referred to as the variance and is denoted by the symbol σ².  Variance is a measure of absolute variation, as is the standard deviation, σ, defined as the square root of the variance. Sigma (σ) is the most commonly used term when considering the variation of something, but it may not be the best term to use, because absolute variability is less important than relative variability.  For example, a standard deviation of 10 micrometers (µm) would be extremely low if we were measuring the length of bolts with a nominal length of two inches, but that same 10 µm would represent a very high level of variation for line widths on a chip whose mean width is five micrometers.

A reasonable relative measure of the variability of a random variable is the standard deviation, σ, divided by the mean, µ, which is referred to as the coefficient of variation (CV). The formula for CV is:

CV = σ / µ
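To make the contrast between absolute and relative variability concrete, here is a minimal Python sketch of the bolt and chip example above (the unit conversion is standard; the variable names are my own):

# Same absolute variation (sigma = 10 micrometers) against two very different means:
# a bolt with a nominal length of two inches and a chip line width of five micrometers.

MICROMETERS_PER_INCH = 25_400         # 1 inch = 25,400 micrometers

sigma = 10.0                          # standard deviation, in micrometers
bolt_mean = 2 * MICROMETERS_PER_INCH  # 50,800 micrometers
chip_mean = 5.0                       # micrometers

print(f"Bolt CV = {sigma / bolt_mean:.5f}")  # ~0.0002, negligible relative variation
print(f"Chip CV = {sigma / chip_mean:.2f}")  # 2.00, very high relative variation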

When we are evaluating the variability of a production system, we can use the following criteria to judge the class of variability:

Variability Class    Coefficient of Variation    Typical Situation
Low (LV)             CV < 0.75                   Process time without outages
Moderate (MV)        0.75 ≤ CV < 1.33            Process times with short adjustments (e.g., setups)
High (HV)            CV ≥ 1.33                   Process times with long outages (e.g., failures)

When we think about processing times, we tend to consider only the time that the machine or operator actually spends working on the job (i.e., not including failures or setups), and these times tend to be normally distributed.  If, for example, the average process time was 20 minutes and the standard deviation was 6.3 minutes, then CV = 6.3/20 = 0.315, and the process would be considered low variability (LV).  Most LV processes follow a normal probability distribution.  Now suppose the mean processing time was still 20 minutes, but the standard deviation was 30 minutes.  The value of CV = 30/20 = 1.5, so this process would be considered highly variable (HV).
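For illustration, here is a small Python sketch that encodes the classification table above and applies it to these two examples (the function name is a hypothetical helper of my own):

def variability_class(cv):
    # Class boundaries taken from the classification table above
    if cv < 0.75:
        return "LV"   # low variability
    if cv < 1.33:
        return "MV"   # moderate variability
    return "HV"       # high variability

print(variability_class(6.3 / 20))  # CV = 0.315 -> LV
print(variability_class(30 / 20))   # CV = 1.5   -> HV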

You may be wondering why we care whether a process is LV, MV, or HV.  Suppose, for example, that you have identified a constraint with an average process time of 30 minutes and a standard deviation of 10 minutes.  The calculated coefficient of variation, CV = 10/30 = 0.33, so the constraint would be considered low variability (LV).  Now suppose that the non-constraint operation feeding the constraint has an average processing time of half that of the constraint, 15 minutes, but a standard deviation of 30 minutes.  Its CV = 30/15 = 2.0, making it an HV process.  According to the table above, a value of 2.0 suggests that this process probably has long failure outages, which could starve the constraint!  So when developing your plan of attack for reducing variation, the coefficient of variation tells you to include any non-constraint processes feeding the constraint that are classified as HV, as in the screen sketched below.
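Here is a self-contained Python sketch of that kind of screen (the operation names and data layout are hypothetical; the thresholds come from the table above):

def variability_class(cv):
    # Same class boundaries as in the table above
    return "LV" if cv < 0.75 else "MV" if cv < 1.33 else "HV"

# Hypothetical operations: (name, mean process time, standard deviation), in minutes
operations = [
    ("constraint", 30.0, 10.0),   # CV = 0.33 -> LV
    ("feeder_op",  15.0, 30.0),   # CV = 2.00 -> HV, could starve the constraint
]

for name, mean, sigma in operations:
    cv = sigma / mean
    label = variability_class(cv)
    note = "  <-- include in the variation-reduction plan" if label == "HV" else ""
    print(f"{name}: CV = {cv:.2f} ({label}){note}")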


Next Time

In my next post, we’ll discuss some of the more common causes of moderate to high variability, as well as present more about variability in general.  As always, if you have any questions or comments about any of my posts, leave me a message and I will respond.

Until next time.

Bob Sproull

References:

[1] Wallace J. Hopp and Mark L. Spearman, Factory Physics: Foundations of Manufacturing Management, 2nd Edition, Irwin McGraw-Hill, 2001.

Bob Sproull

About the author

Bob Sproull has helped businesses across the manufacturing spectrum improve their operations for more than 40 years.
