Optimize Existing Processes to Achieve Six Sigma Capability

By Dave Harrold, CONTROL ENGINEERING March 1, 1999

KEY WORDS

Process control & instrumentation

Productivity, measurement & control

Quality assurance

Statistical quality control

Sidebars: Six Sigma for Complex Systems

This is the second of a two part series. Part one appeared in the January issue and addressed designing processes to achieve Six Sigma capability.

A well-designed, well-executed system keeps a process under control at an existing performance level. Control charts of the process can identify, and even predict, when the process is becoming unstable. Operational procedures specify actions necessary to stabilize and bring an out-of-control process back into control. However, one thing a control system cannot do is improve the capability of a process. A process can only be improved when the significant characteristics influencing final output are:

Identified;

Understood;

Optimized; and

Maintained at optimum level.

In Six Sigma (6σ) vernacular, this is referred to as Measure, Analyze, Improve, and Control (MAIC).

Achieving 6σ capability is not easy, and is seldom accomplished on the first attempt to optimize a process. Companies serious about achieving 6σ capability allow Black Belt practitioners, trained in the use of statistical methods and tools, to apply their specialized skills to defined improvement opportunities. A 6σ guru’s first MAIC pass at a project finds the “low-hanging fruit,” but each subsequent pass requires greater diligence, more experience, and better tools.

Will the effort pay off?

AlliedSignal credits 6σ principles with saving $1.5 billion since 1994, and projects additional savings of $600 million in 1999.

Jack Welch, chairman and CEO of General Electric (Fairfield, Conn.), expects 6σ to yield $750 million in net benefits for 1998.

To reap such benefits requires understanding four MAIC phases and eight key tools (see related diagram).

Within the four MAIC phases, 12 sequenced steps ensure a thorough examination of the process:

Select the critical to quality characteristics;

Define the required performance standards;

Validate measurement system, methods, and procedures;

Establish the current process’s capability;

Define upper and lower performance limits;

Identify sources of variation;

Screen potential causes of variation to identify the vital few variables needing control;

Discover variation relationships for the vital variables;

Establish operating tolerances on each of the vital variables;

Validate the measurement system’s ability to produce repeatable data;

Determine the capability of the process to control the vital variables; and

Implement statistical process control on the vital variables.

Tools make the difference

Like statistical process control (SPC) and statistical quality control (SQC), 6σ relies on SPC/SQC tools. What 6σ methods add are further tools, discipline, and the know-how to improve processes.

Three metrics are used to measure process performance. The potential process capability (Cp) and process capability (Cpk) indexes assess current process capability; the instability index (St) examines a process across time.

Cp index is the ratio of specification width to process spread: Cp = (Upper Specification Limit − Lower Specification Limit) ÷ (6 × Sigma).

Cp index assumes an ideal state when the process is centered between specification limits and therefore represents what the process could become when it reaches the design target.

In real life, very few processes are centered on the desirable target. An off-target process should be “penalized” for shifting from where it should be. Cpk is the index to measure real process capability with the off-target shift applied.

Cpk is defined as Cpk = Cp(1 − k), where k = |Target − Process mean| ÷ (½ × Specification width).
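
To make the arithmetic concrete, here is a minimal Python sketch (not from the article; the data and specification limits are hypothetical) that computes Cp and Cpk as defined above, assuming the target sits mid-specification:

```python
import numpy as np

def cp_cpk(data, usl, lsl):
    """Compute Cp and Cpk as defined above (assumes roughly normal data)."""
    mu = data.mean()
    sigma = data.std(ddof=1)                   # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)             # spec width over process spread
    target = (usl + lsl) / 2                   # assume target is mid-spec
    k = abs(target - mu) / ((usl - lsl) / 2)   # off-target "penalty"
    return cp, cp * (1 - k)

# Hypothetical, slightly off-center process with spec limits 9.0-11.0
rng = np.random.default_rng(0)
data = rng.normal(loc=10.2, scale=0.15, size=500)
cp, cpk = cp_cpk(data, usl=11.0, lsl=9.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")       # Cpk < Cp: the mean is off target
```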

As defined, Cp and Cpk are appropriate for processes with normal distribution, but not all processes are normally distributed. Without knowledge of the shape of the distribution, using Cpk may incorrectly influence the assessment and lead to bad business decisions (See Non-normal distribution curve).

There are three techniques for handling non-normal distributions (a sketch of the first follows the list):

Transform the data and specification limits into a normal distribution;

Use an empirical model to fit the non-normal data and develop an estimate of the percentage of products outside the specification limits. The nonconforming percentage is then related to an equivalent capability index for a process having a normal distribution; or

Intelligently use the Cpk index with full knowledge of the shape of process distribution to assess capability trends.
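
As an illustration of the first technique, here is a minimal sketch using a Box-Cox transformation, one common choice that the article does not specifically prescribe; the data and specification limits are hypothetical, and the limits must be transformed with the same lambda as the data:

```python
import numpy as np
from scipy import stats, special

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=0.4, size=500)  # skewed, non-normal data

# Find the Box-Cox lambda that best normalizes the data.
transformed, lam = stats.boxcox(data)

# Transform the specification limits with the same lambda so that
# capability indexes can be computed in the transformed space.
usl_t = special.boxcox(3.0, lam)
lsl_t = special.boxcox(0.4, lam)

sigma = transformed.std(ddof=1)
cp = (usl_t - lsl_t) / (6 * sigma)
print(f"lambda = {lam:.2f}, Cp in transformed space = {cp:.2f}")
```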

Cp and Cpk measure performance relative to process specification, but do not consider process consistency across time. The instability index provides a method for determining whether assignable causes are present across time in a process and is calculated as: St = (Number of out-of-control data points ÷ Total number of data points) × 100. If all points are out of control, St is 100; if all are in control, St is zero. Once the data points consistently yield an St of zero, the control limits should be recalculated to reflect the tightened distribution.
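
A minimal sketch of the St calculation, flagging a point as out of control when it falls outside the usual 3-sigma control limits (the article does not spell out the out-of-control rules; real charts typically add run rules):

```python
import numpy as np

def instability_index(points, center, sigma):
    """St = (out-of-control points / total points) * 100.
    A point is 'out of control' if it falls outside the
    3-sigma control limits; run rules are omitted here."""
    ucl, lcl = center + 3 * sigma, center - 3 * sigma
    out = np.sum((points > ucl) | (points < lcl))
    return 100.0 * out / len(points)

rng = np.random.default_rng(2)
pts = rng.normal(50.0, 2.0, size=200)
print(f"St = {instability_index(pts, center=50.0, sigma=2.0):.1f}%")
```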

Cause-and-effect analysis is a technique for identifying possible causes of a problem or effect and is as valuable for identifying desirable effects as it is for problem solving.

Validating the measurement system before using it for capability studies is an important part of the 6σ process. Excessive variation in the measurement system may mask important variations in the process targeted for improvement and make achieving high process capability impossible, no matter how much the process is improved.

Measurement system capability must be assessed and periodically revalidated using appropriate statistical studies for accuracy, repeatability, reproducibility, stability, and linearity.

Good measurement systems demonstrate the following characteristics (a simple screening sketch follows the list):

Variations are due to common, not special causes;

Total variability of the measurement system must be small compared with both the process variability and specification limits; and

Scale increments of the measuring device must not be larger than one-tenth of either the process variability or the specification limits, whichever is smaller.
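
The sketch below turns the second and third characteristics into simple screening checks. Only the rule-of-ten resolution check comes directly from the list above; the 10% thresholds for the variability comparisons are an assumption added for illustration:

```python
def measurement_system_checks(gage_sigma, gage_resolution,
                              process_sigma, usl, lsl):
    """Screening checks for a measurement system (hypothetical
    thresholds: gage spread under ~10% of process spread / tolerance)."""
    tolerance = usl - lsl
    return {
        # total gage variability small vs. process variability
        "small vs process": 6 * gage_sigma <= 0.10 * (6 * process_sigma),
        # total gage variability small vs. specification width
        "small vs tolerance": 6 * gage_sigma <= 0.10 * tolerance,
        # scale increments no larger than 1/10 of the smaller of
        # process variability and specification width
        "rule of ten": gage_resolution <= 0.10 * min(6 * process_sigma,
                                                     tolerance),
    }

print(measurement_system_checks(gage_sigma=0.01, gage_resolution=0.05,
                                process_sigma=0.15, usl=11.0, lsl=9.0))
```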

Seldom does a process have just one variable influencing process capability, nor does it have several hundred. Armed with a cause-and-effect diagram, the control engineer’s goal is to screen and prioritize each cause to identify the significant few variables influencing process capability and product quality. Analyzing multiple variables can draw on any number of tools, including separation-of-variables techniques, design of experiments, and calculus.
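
As one illustration of screening with design of experiments, the sketch below estimates main effects from a two-level full factorial design run against a made-up process function; in practice a Black Belt would use fractional designs, replication, and formal analysis of variance:

```python
import itertools
import numpy as np

def simulated_process(temp, pressure, speed):
    """Hypothetical stand-in for running the real process."""
    return 3.0 * temp + 0.2 * pressure + 0.05 * speed + np.random.normal(0, 0.1)

# Two-level full factorial design in coded units (-1 / +1).
factors = ["temp", "pressure", "speed"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))
np.random.seed(3)
response = np.array([simulated_process(*run) for run in design])

# Main effect of each factor: mean response at +1 minus mean at -1.
# Large effects flag the 'vital few' variables worth controlling.
for i, name in enumerate(factors):
    effect = (response[design[:, i] == 1].mean()
              - response[design[:, i] == -1].mean())
    print(f"{name:8s} main effect: {effect:+.2f}")
```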

An effective 6σ process control system requires consideration of potential failure modes of each process and product characteristic. Knowledge gained is used to develop control system plans including considerations of how to control important variables and characteristics, how to react when the process is unstable, and how to improve the process.

Failure mode and effects analysis (FMEA) is an analytical technique that must be used in all significant processes to assure potential failure modes have been considered and addressed. An FMEA should be a summary of what could go wrong with the process, the potential impact of each failure, and how to deal with or correct the problem.

An FMEA normally serves the following purposes (a prioritization sketch follows the list):

Identifies known or potential process failure modes;

Assesses the potential risk effects of the failures to internal and external customers;

Identifies potential manufacturing, assembly, or service process causes and control actions; and

Detects process variables that must be controlled to reduce similar failures.
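
One standard way to prioritize the failure modes an FMEA surfaces, though the article does not detail it, is the risk priority number (RPN = severity × occurrence × detection, each scored 1 to 10). A minimal sketch with made-up entries:

```python
# Each failure mode is scored 1-10 for severity, occurrence, and
# detection (10 = hardest to detect); the entries are hypothetical.
fmea = [
    {"mode": "sensor drift",      "sev": 7, "occ": 4, "det": 6},
    {"mode": "valve sticks open", "sev": 9, "occ": 2, "det": 3},
    {"mode": "operator misreads", "sev": 4, "occ": 6, "det": 5},
]

for row in fmea:
    row["rpn"] = row["sev"] * row["occ"] * row["det"]

# Address the highest risk priority numbers first.
for row in sorted(fmea, key=lambda r: r["rpn"], reverse=True):
    print(f'{row["mode"]:20s} RPN = {row["rpn"]}')
```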

In keeping with the intent of the 6σ process, a plan must be generated to successfully control a process across time. Typical elements of a control plan include:

A detailed explanation of the process to be controlled;

Identification of the equipment, techniques, methods, tools, etc. used in the process;

A list of all significant input and output variables and characteristics to be controlled;

Identification of measurement equipment, techniques, methods, tools, etc. used to obtain data;

Statement of the minimum number of units, samples, or readings and the minimum acceptable frequency of monitoring;

Definition of techniques to be used to analyze the collected data; and

Instructions on specific actions to take when a process is out of control.

Charts can tell a story

Charting of data is important when analyzing the process for improvement opportunities and as a means to maintain long-term process control. Control charts provide a visual representation of how a process varies over time or from unit to unit. Control charts come in two types: charts for attributes and charts for variables.

Attribute control charts include the fraction defective in a subgroup (p chart), the number of defects in a subgroup (c chart), and the number of defects per unit (u chart).

Variable control charts monitor process or product characteristics across time and include average (X-bar), range (R), standard deviation (s), and moving-average charts.
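
As an example of a variable control chart, this sketch computes X-bar and R chart limits from hypothetical subgroup data, using the standard control-chart constants for subgroups of five (A2 = 0.577, D3 = 0, D4 = 2.114):

```python
import numpy as np

# Standard control-chart constants for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

rng = np.random.default_rng(4)
subgroups = rng.normal(10.0, 0.2, size=(25, 5))  # 25 subgroups of 5 readings

xbar = subgroups.mean(axis=1)                          # subgroup averages
ranges = subgroups.max(axis=1) - subgroups.min(axis=1) # subgroup ranges
xbarbar, rbar = xbar.mean(), ranges.mean()

print(f"X-bar chart: CL={xbarbar:.3f}, "
      f"UCL={xbarbar + A2 * rbar:.3f}, LCL={xbarbar - A2 * rbar:.3f}")
print(f"R chart:     CL={rbar:.3f}, UCL={D4 * rbar:.3f}, LCL={D3 * rbar:.3f}")
```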

Embarking on a journey to achieve Six Sigma capabilities across an entire enterprise is no “walk in the park.” Achieving and maintaining a long-term competitive advantage requires being an excellent company, delivering excellent products, every day. There may be other ways to achieve excellence, but Six Sigma is the one adopted by corporate giants like AlliedSignal, General Electric, and Motorola.

Six Sigma for Complex Systems

Statistical Process Control (SPC) enjoys wide usage in industry as an effective tool for process improvement. Six Sigma expands upon SPC methods developed over several decades to provide more rigorous quality tools. While SPC is used in discrete-parts manufacturing as well as continuous and batch manufacturing processes, association of Six Sigma design with SPC is clearer and more easily understood in process applications.

“However, achieving Six-Sigma quality in complex, low-volume (or even one-of-a-kind) manufacturing is a more complicated endeavor,” says Charles Becker, senior master black belt-technology at GE Industrial Systems (Salem, Va.). As an example, Dr. Becker cites the manufacture of a large ac motor and its associated variable-speed drive for a steel rolling mill. “This application requires more intricate, but equally rigorous use of statistical tools, supplemented with solid systems engineering and computer modeling.”

Quality must start with an understanding of the product functions and features most critical to customer satisfaction, the so-called critical to quality items (CTQs). The Quality Function Deployment (QFD) process, an integral part of Six Sigma, systematically translates and prioritizes these customer-level CTQs into clear, quantifiable design objectives. For example, a customer CTQ such as “precise and stable control of steel thickness” in the rolling mill translates into a quantified speed-regulation objective, with an allowable speed error, for the motor and drive system.

Designers then use system engineering principles to evaluate and trade off the contributions of various subsystems in achieving these design targets, thus “flowing down” the requirements to the subsystem level, explains Dr. Becker. In the rolling mill example, some fraction of the allowable speed error would be allocated to the motor tachometer, while the rest is assigned to the drive electronics.

Six-Sigma design is an iterative process. It is repeated at the subassembly and component levels until the designers can determine target values and allowable errors (variation) for a Six-Sigma design of critical individual piece parts and assembly processes (for example, accuracy of the master clock oscillator in the drive control board). “Analytical and Monte Carlo computer models, as well as design of experiments, are used to develop predictive equations (transfer functions) that ultimately relate piece part precision to the drive’s ability to regulate speed, hence thickness of metal in the rolling mill,” says Dr. Becker.
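
A toy Monte Carlo flow-down of this kind might look like the sketch below; the transfer function, component tolerances, and specification are entirely hypothetical stand-ins for the detailed models Dr. Becker describes:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100_000

# Hypothetical component variations (e.g., clock oscillator error,
# tachometer gain error) drawn from their assumed distributions.
clock_ppm = rng.normal(0.0, 5.0, N)    # master clock error, ppm
tach_gain = rng.normal(0.0, 0.02, N)   # tachometer gain error, %

# Hypothetical transfer function relating component errors to the
# customer-observable CTQ (speed regulation error, %).
speed_err = 1e-4 * clock_ppm + 0.8 * tach_gain

# Rough sigma level, assuming a centered, zero-mean error distribution.
spec = 0.10   # allowable speed error, % (made-up specification)
sigma_level = spec / speed_err.std(ddof=1)
print(f"speed error sigma = {speed_err.std(ddof=1):.4f}%, "
      f"spec corresponds to {sigma_level:.1f} sigma")
```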

Armed with transfer functions relating variation in component parts and processes to customer-observable CTQs, the design team can optimize the overall design within the capabilities of the manufacturing process and parts suppliers. The ability to analyze and trade off capability in one part or subassembly against that in another, while still ensuring Six-Sigma performance at the customer CTQ level, is crucial to designing the highest quality products at market-driven prices. In the case of the steel mill drive, a lower cost or more reliable system may result from trading off tachometer precision for improved drive electronics and software.

It would be impractical (even unnecessary) to perform this kind of analysis for each of hundreds or thousands of parts in a large, complex system. It’s a matter of pinpointing where to concentrate the Six-Sigma tasks. “The art of Six-Sigma design relies heavily on engineering experience, augmented by statistical verification, to identify and analyze those items most important to final product performance,” adds Dr. Becker.

—Frank J. Bartos, fbartos@cahners.com