Measure Phase Process Capability


1 Measure Phase Process Capability
Now we will continue in the Measure Phase with “Process Capability”.

2 Process Capability
Measure Phase roadmap:
- Welcome to Measure
- Process Discovery
- Six Sigma Statistics
- Measurement System Analysis
- Process Capability

This module covers:
- Continuous Capability
- Concept of Stability
- Attribute Capability
- Wrap Up & Action Items

Within this module we are going to go through Stability and its effect on a process, as well as how to measure the Capability of a process.

3 Understanding Process Capability
Process Capability is the inherent ability of a process to meet the expectations of the customer without any additional effort*. It provides insight as to whether the process has:
- a Centering issue (relative to specification limits)
- a Variation issue
- a combination of Centering and Variation issues
- inappropriate specification limits
It also provides a baseline metric for improvement.

This is the definition of Process Capability. We will now begin to learn how to assess it.
*Effort: time, money, manpower, technology and manipulation

4 Capability as a Statistical Problem
Our statistical problem: what is the probability of our process producing a defect?
- Define a Practical Problem
- Create a Statistical Problem
- Correct the Statistical Problem
- Apply the Correction to the Practical Problem

Simply put, Six Sigma always starts with a practical problem, translates it into a statistical problem, corrects the statistical problem and then applies the correction back to the practical problem. We will revisit this concept over and over, especially in the Analyze Phase when determining sample size.

5 Capability Analysis Numerically Compares the VOP to the VOC
Capability Analysis provides a quantitative assessment of your process's ability to meet the requirements placed on it. It is traditionally used for assessing the outputs of a process, in other words comparing the Voice of the Process to the Voice of the Customer. However, you can use the same technique to assess the capability of the inputs going into the process: they are, after all, outputs from some previous process, and you have expectations, specifications or requirements for their performance. Capability Analysis gives you a metric that describes how well the process performs, and you can convert this metric to a sigma score if you so desire. You will learn in this lesson how the variation width of a given process output compares with the specification width established for that output. This ratio, the specification width divided by the output variation width, is what is known as capability. Since the specification is an essential part of this assessment, a rigorous understanding of the validity and accuracy of the specification is vitally important. This is why it is important to perform a RUMBA-type analysis on process inputs and outputs.

6 Process Output Categories
(Figure: distributions plotted against the LSL, USL, Average and Target, showing three categories: capable and on target; off target, so center the process; incapable, so reduce the spread.)

Two output behaviors determine how well we meet our customer or process output expectations: the amount of variation present in the output, and how well the output is centered relative to the requirements. If the amount of variation is larger than the difference between the upper spec limit and the lower spec limit, our product or service output will always produce defects; it will not be capable of meeting the customer or process output requirements.

As you have learned, variation exists in everything. There will always be variability in every process output. You can't eliminate it completely, but you can minimize it and control it. You can tolerate variability if it is relatively small compared to the requirements, the process demonstrates long-term stability (in other words, the variability is predictable), and the process performance is on target, meaning the average value is near the middle value of the requirements.

The output from a process is either capable or not capable, and centered or not centered. The degree of capability and/or centering determines the number of defects generated. If the process is not capable, you must find a way to reduce the variation. If it is not centered, you obviously must find a way to shift the performance. But what do you do if it is both incapable and not centered? It depends, but most of the time you must minimize and get control of the variation first, because high variation creates high uncertainty: you can't be sure whether your efforts to move the average are valid. Of course, if a simple adjustment shifts the average to where you want it, you would do that before addressing the variation.

7 Problem Solving Options – Shift the Mean
(Figure: distribution shifted toward the target between the LSL and USL.)

Our efforts in a Six Sigma project are to examine a process that is performing at a level less than desired, then shift the Mean of its performance such that all outputs fall within an acceptable range. Shifting the Mean involves finding the variables that will shift the process over to the target. This is usually the easiest option.

8 Problem Solving Options – Reduce Variation
(Figure: narrowed distribution between the LSL and USL.)

Reducing the variation means fewer of our outputs fall far from the target. Our objective is to reduce variation of the inputs in order to stabilize the output. This is typically not so easy to accomplish, and it is a frequent goal of Six Sigma projects.

9 Problem Solving Options – Shift Mean & Reduce Variation
(Figure: distribution both shifted toward the target and narrowed between the LSL and USL.)

The combination of shifting the Mean and reducing variation occurs often in Six Sigma projects; it is their primary objective.

10 Problem Solving Options – Move the Specification Limits
(Figure: specification limits widened around the distribution.)

Obviously this implies making the specification limits wider, not narrower. Customers usually do not go for this option, but if they do, it's the easiest!

11 Capability Studies
Capability Studies:
- Are intended to be regular, periodic estimations of a process's ability to meet its requirements.
- Can be conducted on both Discrete and Continuous Data.
- Are most meaningful when conducted on stable, predictable processes.
- Are commonly reported as a Sigma Level, which is optimal (short-term) performance.
- Require a thorough understanding of the following:
  - Customer's or business's specification limits
  - Nature of long-term vs. short-term data
  - Mean and Standard Deviation of the process
  - Assessment of the Normality of the data (Continuous Data only)
  - Procedure for determining Sigma Level

A stable process is one that is consistent over time. Time Series Plots are one way to check for stability; Control Charts are another. Your process may not be stable at this time. One of the purposes of the Measure Phase is to identify the many possible X's behind the defects seen, gather data and plot it to see if there are any patterns that suggest what to work on first. When performing Capability Analysis, try to get as much data as possible, back as far in time as possible, over a reference frame that is generally representative of your process.

12 Steps to Capability
1. Select Output for Improvement
2. Verify Customer Requirements
3. Validate Specification Limits
4. Collect Sample Data
5. Determine Data Type (LT or ST)
6. Check Data for Normality
7. Calculate Z-Score, PPM, Yield, Capability (Cp, Cpk, Pp, Ppk)

Remember to follow the steps to capability.

13 Verifying the Specifications
Questions to consider:
- What is the source of the specifications?
  - Customer requirements (VOC)
  - Business requirements (target, benchmark)
  - Compliance requirements (regulations)
  - Design requirements (blueprint, system)
- Are they current? Likely to change?
- Are they understood and agreed upon?
  - Operational definitions
  - Deployed to the work force

Specifications must be verified before completing the Capability Analysis. It doesn't mean that you will be able to change them, but on occasion some internal specifications have been made much tighter than the customer wants.

14 Data Collection
Capability Studies should include "all" observations (100% sampling) for a specified period.

Long-term data:
- Is collected across a broader inference space (monthly, quarterly; across multiple shifts, machines, operators, etc.).
- Is subject to both common and special causes of variation.
- Is more representative of process performance over a period of time.
- Typically consists of at least 100–200 data points.

Short-term data:
- Is collected across a narrow inference space (daily, weekly; for one shift, machine, operator, etc.).
- Is potentially free of special cause variation.
- Often reflects the optimal performance level.
- Typically consists of 30–50 data points.

You must know whether the data collected from process outputs is a short-term or a long-term representation of how well the process performs. There are several reasons for this, but for now we will focus on it from the perspective of assessing the capability of the process. To help you understand short-term vs. long-term data, we will start with a manufacturing example. In this scenario the manufacturer is filling bottles with a certain amount of fluid. Assume that the product is built in lots. Each lot is built using a particular vendor of the bottle, by a particular shift and set of employees, and on one of many manufacturing lines. The next lot could be from a different vendor, employees, line, shift, etc. Each lot is sampled as it leaves the manufacturing facility on its way to the warehouse, giving lot-by-lot performance data for the amount of fill. Each lot has its own variability and average: the within-lot variability looks reasonable, while the average varies from lot to lot. What the customer eventually experiences is the amount of fluid in each bottle across the full variability of all the lots. It can now be seen and stated that the long-term variability will always be greater than the short-term variability.

15 Baseline Performance
Process Baseline: the average, long-term performance level of a process when all input variables are unconstrained.

(Figure: a "road" graphic in which the center line is the target and the edges are the upper and lower spec limits; four short-term distributions combine into the long-term baseline.)

Here is another way to look at long-term and short-term performance. You again see representative performance in short-term snapshots, which together make up the larger long-term performance. Process baseline is a term you will use frequently to describe the output performance of a process. Whenever you hear the word "baseline" it automatically implies long-term performance. Not using long-term data to describe the baseline performance would be dangerous. For example, if you reported a process performance baseline based on distribution 3 in the graphic, you would mislead yourself and others into thinking the process had excellent on-target performance. If you used distribution 2, you would be led to believe that the average performance was near the USL and that most of the output of the process was above the spec limit. To avoid these problems, always use long-term data to report the baseline.

How do you know if the data you have is short-term or long-term? A somewhat technical interpretation of long-term data is that the process has had the opportunity to experience most of the sources of variation that can impact it. Remembering that the outputs are a function of the inputs, what we are saying is that most of the combinations of the inputs, each with its full range of variation, have been experienced by the process. You may use the following situations as guidelines.

Short-term data is a "snapshot" of process performance and is characterized by conditions such as:
- One shift
- One line
- One batch
- One employee
- One type of service
- One or only a few suppliers

Long-term data is a "video" of process performance and is characterized by conditions such as:
- Many shifts
- Many batches
- Many employees
- Many services and lines
- Many suppliers

Long-term variation is larger than short-term variation because of material differences, fluctuations in temperature and humidity, different people performing the work, multiple suppliers providing materials, equipment wear, etc. As a general rule, short-term data consist of 20 to 30 data points over a relatively short period of time and long-term data consist of 100 to 200 data points over an extended period of time. Do not be misled by the volume of product or service produced: data representing one day of a process that produces 100,000 widgets a day is still short-term performance, while data representing a process that produces 20 widgets a day over a 3-month period is long-term performance.

While we have used a manufacturing example to explain all this, it is exactly the same for a service or administrative type of process. In these processes there are still different people, different shifts, different workloads, differences in the way inputs come into the process, different software, computers, temperatures, etc. The exact same concepts and rules apply. You should now appreciate why, when we report process performance, we need to know what the data represents. Using such data we will now demonstrate how to calculate process capability and then show how it is used.

16 Components of Variation
Even stable processes will drift and shift over time, by as much as 1.5 Standard Deviations on average.

(Figure: short term = within-group variation; long term = between-group variation plus within-group variation, i.e. overall variation.)

There are many ways to look at the difference between short-term and long-term data. First, keep in mind that you never have purely short-term or purely long-term data; it is always something in between. Short-term data basically represents your "entitlement" situation: you are controlling all the controllable sources of variation. Long-term data includes (in theory) all the variation one can expect to see in the process. Usually what we have is something in between. It is a judgment call to decide which type of data you have, and it varies depending on what you are trying to do with it and what you want to learn from it. In general, one or more months of data are probably more long-term than short-term; two weeks or less is probably more like short-term data.

17 Sum of the Squares Formulas
The total variation in a process output $Y$ over time splits into a between-group component (the shift, or drift, of subgroup means over time) and a within-group component (short-term precision):

$$SS_{total} = SS_{between} + SS_{within}$$

These are the equations describing the sums of squares, which are the basis for the calculations used in capability. No, you do not need to memorize them or even really understand them. They are built into MINITAB™ for the processing of data.
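
To make the decomposition concrete, here is a minimal Python sketch with made-up subgroup data (four subgroups of five observations, standing in for the lot-by-lot snapshots discussed earlier); it verifies that the within and between sums of squares add up to the total:

```python
import numpy as np

# Illustrative (made-up) data: 4 subgroups of 5 measurements each.
subgroups = np.array([
    [600.1, 599.8, 600.3, 599.9, 600.0],
    [600.6, 600.4, 600.8, 600.5, 600.7],
    [599.5, 599.7, 599.4, 599.6, 599.8],
    [600.2, 600.0, 600.1, 599.9, 600.3],
])

grand_mean = subgroups.mean()
group_means = subgroups.mean(axis=1)
n = subgroups.shape[1]                     # observations per subgroup

ss_total = ((subgroups - grand_mean) ** 2).sum()
ss_within = ((subgroups - group_means[:, None]) ** 2).sum()
ss_between = n * ((group_means - grand_mean) ** 2).sum()

# The identity SS_total = SS_between + SS_within holds exactly.
print(round(ss_total, 6) == round(ss_between + ss_within, 6))  # True
```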

18 Stability
A Stable Process is consistent over time. Time Series Plots and Control Charts are the typical graphs used to determine stability. At this point in the Measure Phase there is no reason to assume the process is stable.

Stability is established by plotting data in a Time Series Plot or a Control Chart. If the data on the Control Chart go out of control, the process is not stable. Performing a Capability Study at this point effectively draws a line in the sand. If, however, the process is stable, short-term data provide a more reliable estimate of true Process Capability. Looking at the Time Series Plot shown on this slide, where would you look to determine the entitlement of this process? The circled region has much tighter variation. We would consider this the process entitlement, meaning that if we could find the X's that are causing the instability, this is the best the process can perform in the short term. The idea is that since we have done it for some time, we should be able to do it again. This does not mean that this is the best this process will ever be able to do.

19 Measures of Capability
- Cp and Pp describe Process Potential (Entitlement): what is possible if your process is perfectly centered, the best your process can be. This is the "hope".
- Cpk and Ppk describe Process Capability relative to specification limits: the reality of your process performance, how the process is actually running. This is the "reality".

Mathematically, Cpk and Ppk are calculated the same way, as are Cp and Pp. The only difference is the source of the data: short-term for Cp and Cpk, long-term for Pp and Ppk.

20 Capability Formulas
With LSL and USL the lower and upper specification limits, $\bar{x}$ the sample Mean and $s$ the sample Standard Deviation:

$$C_p = \frac{USL - LSL}{6s} \qquad C_{pk} = \min\left(\frac{USL - \bar{x}}{3s},\; \frac{\bar{x} - LSL}{3s}\right)$$

Cp compares the width of the specification to six times the sample Standard Deviation; Cpk compares the distance from the Mean to each specification limit against three times the sample Standard Deviation. Consider the "k" the penalty for being off center.
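
The formulas translate directly into code. A minimal Python sketch (the function name and example data are ours, for illustration only):

```python
import numpy as np

def cp_cpk(data, lsl, usl):
    """Cp and Cpk from the formulas above, using the sample standard
    deviation. Substitute a within-subgroup (short-term) sigma for
    Cp/Cpk or the overall sigma for Pp/Ppk as appropriate."""
    x = np.asarray(data, dtype=float)
    mean, s = x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6 * s)
    cpk = min((usl - mean) / (3 * s), (mean - lsl) / (3 * s))
    return cp, cpk

# Example with made-up camshaft-like lengths (mm):
# cp, cpk = cp_cpk([599.4, 600.2, 600.8, 599.9, 600.1], 598, 602)
```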

21 MINITAB™ Example
Open the worksheet "Camshaft.mtw". There are two columns of data showing the length of camshafts from two different suppliers. Check the Normality of each supplier: Stat > Basic Statistics > Normality Test. Judging by the P-values, the data look Normal, since P is greater than .05 in each case. In order to use Process Capability as a predictive statistic, the data must be Normal for the tool we are using in MINITAB™.
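
If you do not have MINITAB™ at hand, the same check can be sketched in Python. MINITAB™'s normality test defaults to Anderson-Darling; the Shapiro-Wilk test below is a substitute that returns a P-value directly, and the data are simulated stand-ins for one supplier's camshaft lengths:

```python
import numpy as np
from scipy import stats

# Simulated stand-in for one supplier's camshaft lengths (mm).
rng = np.random.default_rng(1)
lengths = rng.normal(loc=600, scale=0.6, size=100)

# Shapiro-Wilk gives a P-value directly; the decision rule
# (P > .05 -> treat as Normal) is the same idea as in MINITAB.
stat, p = stats.shapiro(lengths)
print(f"P = {p:.3f} -> {'looks Normal' if p > 0.05 else 'not Normal'}")
```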

22 MINITAB™ Example
Create a Capability Analysis for both suppliers, assuming long-term data: Stat > Quality Tools > Capability Analysis (Normal). Note that the subgroup size for this example is 5, with LSL = 598 and USL = 602.

At this point we are only attempting to get a baseline number that we can compare against at the end of problem solving. We are not using it to predict quality; we want a snapshot. DO NOT try to make your process STABLE BEFORE working on it! Your process is a project because there is something wrong with it, so go figure it out; don't bother playing around with stability.
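
Roughly what the MINITAB™ report computes can be sketched in Python: the short-term sigma comes from variation within the subgroups of 5 (here via a pooled standard deviation, ignoring MINITAB™'s unbiasing constants), while the long-term sigma is the overall sample standard deviation:

```python
import numpy as np

def capability_report(data, subgroup_size, lsl, usl):
    """Cp/Cpk from within-subgroup (short-term) sigma, Pp/Ppk from
    the overall sigma -- a rough sketch of the MINITAB report."""
    x = np.asarray(data, dtype=float)
    k = len(x) // subgroup_size
    groups = x[:k * subgroup_size].reshape(k, subgroup_size)
    mean = x.mean()
    sigma_within = np.sqrt(groups.var(axis=1, ddof=1).mean())  # pooled
    sigma_overall = x.std(ddof=1)
    out = {}
    for label, s in (("Cp/Cpk", sigma_within), ("Pp/Ppk", sigma_overall)):
        out[label] = ((usl - lsl) / (6 * s),
                      min((usl - mean) / (3 * s), (mean - lsl) / (3 * s)))
    return out
```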

23 MINITAB™ Example
The process Mean for Supplier 1 falls short of the target (600), and the left tail of the distribution falls outside the lower specification limit. From a practical standpoint, what does this mean? You will have camshafts that do not meet the lower specification of 598 mm. Next we look at the Cp index, which tells us whether we can produce units within the tolerance limits. Supplier 1's Cp index is .66, which tells us they need to reduce the process variation and work on centering. Look at the PPM levels. What do they tell us?

24 MINITAB™ Example
The process Mean for Supplier 2 is very close to the target, although both tails of the distribution fall outside the specification limits. The Cpk index is very similar to Supplier 1's, but here it implies that we need to work on reducing variation. Comparing Supplier 1 and Supplier 2 relative to Cpk vs. Ppk, we see that Supplier 2's process is more prone to shifting over time; that could be a risk to be concerned about. Again, compare the PPM levels. What do they tell us? Hint: look at PPM < LSL. So what do we do? Looking only at the Means, you might claim that Supplier 2 is the best. However, Supplier 1 has greater potential, as depicted by the Cp measure, and it will likely be easier to move their Mean than to deal with the variation issues of Supplier 2. Therefore we will work with Supplier 1.

25 MINITAB™ Example
Stat > Quality Tools > Capability Analysis > Normal… > Options… > Benchmark Z's (sigma level)

MINITAB™ has a selection to calculate Benchmark Z's, or Sigma Levels, along with the Cp and Pp statistics. Selecting it makes the graph display the "Sigma Level" of your process! Generate the new capability graphs for both suppliers and compare the Z values, or Sigma Levels.
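
For intuition, the Benchmark Z idea can be sketched as follows: fold the expected defect proportions in both tails into one total, then find the Z value with that much area beyond it. This is our reading of the calculation, not MINITAB™'s exact implementation:

```python
from scipy.stats import norm

def z_bench(mean, sigma, lsl, usl):
    # Total probability of a defect in either tail of the fitted Normal,
    # expressed as a single equivalent sigma level.
    p_defect = norm.cdf(lsl, mean, sigma) + norm.sf(usl, mean, sigma)
    return norm.ppf(1 - p_defect)

# e.g. z_bench(600.5, 0.7, 598, 602) -> one sigma-level number
```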

26 MINITAB™ Example
The overall long-term sigma level is 1.85 for Supplier 1. You should also note that it has the potential to be 1.99 sigma as the process stands in its current state.

27 MINITAB™ Example
The overall long-term sigma level is 1.39 for Supplier 2. You should also note that its potential is also 1.39 sigma as the process stands in its current state.

28 Example Short Term
With short-term data, do one of the following:
- Option 1: Enter "Subgroup size:" = total number of samples.
- Option 2: Go to "Options" and turn off "Within subgroup analysis".

The MINITAB™ default assumes long-term data. Many times you will have short-term data; be sure to adjust MINITAB™ using Option 1 or Option 2 as shown here to ensure a proper analysis. For Option 1, enter the subgroup size as the total number of data points in your short-term study. For Option 2, turn off the within-subgroup analysis found inside the Options selection. This example uses data from the column "Bi modal" in the MINITAB™ worksheet "GraphingData.mtw".

29 Continuous Variable Caveats
Capability indices assume Normally Distributed data. Always perform a Normality test before assessing Capability.

Well, this is one way to lie with statistics… When used as a predictive model, capability makes assumptions about the shape of the data. When data are Non-normal, the model's assumptions don't hold, and it would be inappropriate to use it for prediction. It's actually good news to have data that look like this, because your project work will be easy! Why? Clearly there is something occurring in the process that should be fairly obvious and is causing these two very distinct distributions to occur. Go take a look at each of the distributions individually and determine what is causing this. DON'T fuss or worry about Normality at this point; get out to the process and see what is going on. Here in the Measure Phase, stick with observed performance unless your data are Normal. There are ways to deal with Non-normal data for predictive capability, but we'll look at those once you have removed some of the special causes from the process. Remember, here in the Measure Phase we get a snapshot of what we're dealing with; at this point don't worry about predictability, we'll eventually get there.

30 Capability Steps
1. Select Output for Improvement
2. Verify Customer Requirements
3. Validate Specification Limits
4. Collect Sample Data
5. Determine Data Type (LT or ST)
6. Check Data for Normality
7. Calculate Z-Score, PPM, Yield, Capability (Cp, Cpk, Pp, Ppk)

We can follow the steps for calculating capability for Continuous Data until we reach the question about data Normality. When performing a Capability Study on Attribute Data we hit a wall at step 6: Attribute Data is not considered Normal, so we use a different mathematical method to estimate capability.

31 Attribute Capability Steps
1. Select Output for Improvement
2. Verify Customer Requirements
3. Validate Specification Limits
4. Collect Sample Data
5. Calculate DPU
6. Find Z-Score
7. Convert Z-Score to Cp & Cpk

Notice the difference when we come to step 5.

32 Z Scores
A Z Score is a measure of the distance, in Standard Deviations, of a sample from the Mean. Given an average of 50 with a Standard Deviation of 3, what proportion falls beyond the upper spec limit of 54? The Z Score effectively transforms the actual data into standard Normal units; by referring to a standard Z table you can estimate the area under the Normal curve.
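
The example works out as follows in Python (the PPM figure quoted on the next slide comes from the table-rounded Z of 1.33):

```python
from scipy.stats import norm

mean, sd, usl = 50, 3, 54
z = (usl - mean) / sd            # (54 - 50) / 3 = 1.33
p_beyond = norm.sf(round(z, 2))  # area above the table-rounded Z
print(f"Z = {z:.2f}, proportion = {p_beyond:.4f}, "
      f"PPM = {p_beyond * 1e6:,.0f}")  # ~0.0918, ~91,760 PPM
```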

33 Z Table
In our case we look up the proportion for the Z Score of 1.33, i.e. (54 − 50) / 3. This means that approximately 9.1% of our data falls beyond the upper spec limit of 54. If we are interested in determining parts per million defective, we simply multiply the proportion by one million: in this case there are 91,760 parts per million defective.

34 Attribute Capability
Attribute data is always long-term, in the shifted condition, since it requires so many samples to get a good estimate with reasonable confidence. Short-term Capability is typically what is reported, so a shifting method is employed to estimate short-term Capability: to convert long-term to short-term, add 1.5 (ZST = ZLT + 1.5); to convert short-term to long-term, subtract 1.5.

Sigma Level   Short-Term DPMO   Long-Term DPMO
     1            158,655.3         691,462.5
     2             22,750.1         308,537.5
     3              1,350.0          66,807.2
     4                 31.7           6,209.7
     5                  0.3             232.7
     6                  0.0               3.4

Stable processes can shift and drift by as much as 1.5 Standard Deviations. Want the theory behind the 1.5? Google it! It doesn't matter.
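
The table can be regenerated from the one-sided Normal tail using the 1.5-sigma shift convention; a short Python sketch:

```python
from scipy.stats import norm

SHIFT = 1.5  # conventional allowance for long-term drift

def dpmo(z):
    """One-sided defects per million opportunities at a given Z."""
    return norm.sf(z) * 1e6

for level in range(1, 7):
    # Short-term (centered) vs. long-term (shifted by 1.5 sigma).
    print(f"{level}  {dpmo(level):12.1f}  {dpmo(level - SHIFT):12.1f}")
# Reproduces the table above, e.g. level 4 -> 31.7 and 6209.7.
```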

35 Attribute Capability
By viewing these formulas you can see the relationship between them: dividing Z short-term by 3 gives Cpk, and dividing Z long-term by 3 gives Ppk. Some people like to use the Sigma Level (MINITAB™ reports this as "Z-bench"); others like to use Cpk and Ppk. If you are using Cpk and Ppk, you can easily translate them into a Z Score or Sigma Level by multiplying by 3.

36 Attribute Capability Example
A customer service group is interested in estimating the Capability of their call center. A total of 20,000 calls came in during the month, but 2,666 of them "dropped" before they were answered (the caller hung up!). Results of the call center data set: Samples = 20,000; Defects = 2,666. We will use this example to demonstrate the capability of a customer service call group.

37 Attribute Capability Example
Follow these steps to determine your Process Capability:
1. Calculate DPU: DPU = 2,666 / 20,000 = 0.1333
2. Look up the DPU value on the Z Table to find the Z-Score: ZLT = 1.11
3. Convert ZLT to ZST: ZST = ZLT + 1.5 = 1.11 + 1.5 = 2.61
4. Convert the Z-Score to Cpk and Ppk

Remember that DPU, Defects Per Unit, is the total number of errors or defects counted in a process or service. DPU is calculated by dividing the total number of defects by the number of units or products.
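
Putting the steps together in Python (norm.isf replaces the manual Z-table lookup; Ppk simply follows the Z/3 rule from the previous slide):

```python
from scipy.stats import norm

samples, defects = 20_000, 2_666
dpu = defects / samples        # 0.1333

z_lt = norm.isf(dpu)           # inverse Z-table lookup: ~1.11
z_st = z_lt + 1.5              # shift long-term to short-term: ~2.61

cpk = z_st / 3                 # ~0.87
ppk = z_lt / 3                 # ~0.37
print(f"DPU={dpu:.4f}  Zlt={z_lt:.2f}  Zst={z_st:.2f}  "
      f"Cpk={cpk:.2f}  Ppk={ppk:.2f}")
```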

38 Attribute Capability Example
Converting the Z-Score: Cpk = ZST / 3 = 2.61 / 3 = .87.

"Cpk" is an index (a simple number) which measures how close a process is running to its specification limits, relative to the natural variability of the process. A Cpk of at least 1.33, roughly 4 sigma or better, is desired. The Cpk of .87 above corresponds to about 2.61 sigma, or an 87% yield. If you want to know how variation will affect the ability of your process to meet customer requirements (CTQ's), use Cpk. If you just want to know how much variation the process exhibits, a Ppk measurement is fine. Remember that Cpk represents the short-term capability of the process and Ppk represents the long-term capability. With the 1.5 shift, the long-term Ppk of this process will be worse than its short-term Cpk.

39 Summary
At this point, you should be able to:
- Estimate Capability for Continuous Data
- Estimate Capability for Attribute Data
- Describe the impact of Non-normal Data on the analysis presented in this module for Continuous Capability

Don't get hung up on capability; right now use it as a benchmark. You'll show the improved capability when you get to the Control Phase. Get an accurate snapshot and move on!
