Attribute Measurement Systems Analysis (MSA)
Prepared by: Mr. Prashant S. Kshirsagar (Sr. Manager, QA Dept.)
◦ Introduce the basic concepts of an attribute measurement systems analysis (MSA).
◦ Understand operational definitions for inspection and evaluation.
◦ Define attribute MSA terms.
◦ Define the procedure for conducting an attribute MSA.
◦ Demonstrate a trial run of an attribute MSA.
A measurement systems analysis is an evaluation of the efficacy of a measurement system. The purpose of Measurement System Analysis is to qualify a measurement system for use by quantifying its accuracy, precision, and stability. It is applicable to both continuous and attribute data.
The most problematic measurement system issues arise when attribute data depend on human judgment, such as good/bad or pass/fail calls. This is because it is very difficult for all inspectors to apply the same operational definition of what is “good” and what is “bad.”
When the measurement system yields no numeric values, only judgments such as pass/fail, the tool used for this kind of analysis is called attribute gage R&R. R&R stands for repeatability and reproducibility. Repeatability: the variation in measurements obtained with one measurement instrument when used several times by one appraiser while measuring the identical characteristic on the same part. Reproducibility: the variation in the average of the measurements made by different appraisers using the same measuring instrument when measuring the identical characteristic on the same part.
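To make these two terms concrete, here is a minimal Python sketch (the data layout, values, and function names are our own illustration, not from the presentation) that checks repeatability and reproducibility for attribute calls:

```python
# Minimal sketch: attribute repeatability/reproducibility checks.
# Assumes results[appraiser][part] is a list of 0/1 calls, one per trial.

results = {
    "A": {1: [1, 1, 1], 2: [1, 0, 1]},   # illustrative data only
    "B": {1: [1, 1, 1], 2: [0, 0, 0]},
    "C": {1: [1, 1, 1], 2: [1, 1, 1]},
}

def repeatability(calls_by_part):
    """Share of parts where one appraiser agrees with themselves on all trials."""
    consistent = sum(1 for trials in calls_by_part.values() if len(set(trials)) == 1)
    return consistent / len(calls_by_part)

def reproducibility(results):
    """Share of parts where all appraisers agree with each other on every trial."""
    parts = next(iter(results.values())).keys()
    agree = 0
    for p in parts:
        calls = {c for appraiser in results.values() for c in appraiser[p]}
        agree += 1 if len(calls) == 1 else 0
    return agree / len(parts)

for name, calls in results.items():
    print(f"Appraiser {name} repeatability: {repeatability(calls):.0%}")
print(f"Reproducibility (all appraisers, all trials): {reproducibility(results):.0%}")
```

With attribute data, “variation” reduces to disagreement: an appraiser who flips between 1 and 0 on the same part has a repeatability problem, while appraisers who disagree with each other have a reproducibility problem.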
Operational definitions are used to evaluate product features and make accept/reject decisions. Mandatory criteria for establishing and using operational definitions include:
A) Criteria that can be applied to an object (or a group of objects) and that precisely describe what is acceptable and unacceptable.
B) A written description of the process for collecting data, including the method by which accept/reject decisions will be made.
C) A review of the accept/reject criteria with the people who will do the inspections, to ensure that the requirements are understood.
◦ Select at least 20 parts to be evaluated during the study. At least 5 of the parts should be defective in some way; if larger sample sizes are used, include at least 25% defective parts.
◦ Take care when selecting defective parts: if possible, select parts that are only slightly beyond the specification limits or acceptance standards.
◦ Label each part with proper identification.
◦ Three inspectors evaluate each part three times (three trials); a fourth person records the data.
◦ Record each observation as 1 or 0, where 1 is OK and 0 is not OK.
◦ Randomize the order of inspection after each group of inspections to minimize the risk that an inspector will remember previous accept/reject decisions (a randomization sketch follows below).
◦ The inspectors must work independently and must not discuss their accept/reject decisions with each other.
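To support the randomization step, here is a minimal sketch (part labels and appraiser names are assumed for illustration) that reshuffles the presentation order for every appraiser and trial:

```python
import random

# Minimal sketch: generate a randomized inspection order for each
# appraiser and trial so appraisers cannot anticipate the next part.
parts = list(range(1, 21))           # 20 labeled parts
appraisers = ["A", "B", "C"]
trials = 3

for trial in range(1, trials + 1):
    for appraiser in appraisers:
        order = parts[:]
        random.shuffle(order)        # fresh random order per appraiser/trial
        print(f"Trial {trial}, appraiser {appraiser}: {order}")
```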
The data recorder may use a table similar to the one given below (1 = OK, 0 = Not OK; A1 means appraiser A, trial 1, and so on):

Part  A1 A2 A3  B1 B2 B3  C1 C2 C3
  1    1  1  1   1  1  1   1  1  1
  2    1  1  1   1  1  1   1  1  1
  3    1  1  1   1  1  1   1  1  1
  4    0  0  0   0  0  0   0  0  0
  5    1  1  1   1  1  1   1  1  1
  6    1  1  1   1  1  1   1  1  1
  7    1  1  1   1  1  1   1  1  1
  8    1  1  1   1  1  1   1  1  1
  9    1  1  1   1  1  1   1  1  1
 10    1  1  1   1  1  1   1  1  1
 11    1  1  1   1  1  1   1  1  1
 12    1  1  1   1  1  1   1  1  1
 13    1  1  1   1  1  1   1  1  1
 14    1  1  1   1  1  1   1  1  1
 15    1  1  1   1  1  1   1  1  1
 16    1  1  1   1  1  1   1  1  1
 17    1  1  1   1  1  1   1  1  1
 18    1  1  1   1  1  1   1  1  1
 19    1  1  1   1  1  1   1  1  1
 20    1  1  1   1  1  1   1  1  1
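The recorded table can also be tallied programmatically. Here is a minimal sketch (our own encoding of the table above) that flags any part on which the nine calls are not unanimous:

```python
# The table above encoded as 0/1 calls (1 = OK, 0 = Not OK).
# Each row: part number -> nine calls (A trials 1-3, B trials 1-3, C trials 1-3).
calls = {part: [1] * 9 for part in range(1, 21)}
calls[4] = [0] * 9                     # part 4 was rejected on every trial

for part, row in calls.items():
    if len(set(row)) > 1:              # mixed 0s and 1s -> disagreement
        print(f"Part {part}: calls disagree -> {row}")
# In this data set every part receives a unanimous call, so nothing prints.
```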
Type 1 errors: a good part is rejected. Type 1 errors increase manufacturing costs: incremental labor and material expenses are needed to re-inspect, repair, or dispose of the suspect parts. Type 1 errors are also called “producer’s risk” or alpha errors.
Type 2 errors: a bad part is accepted. A Type 2 error may occur because the inspector was poorly trained, or rushed through the inspection and inadvertently overlooked a small defect on the part. When Type 2 errors occur, defects slip through the containment net and are shipped to the customer. Because Type 2 errors put the customer at risk of receiving defective parts, the customer may raise a complaint! Type 2 errors are sometimes called “consumer’s risk” or beta errors.
What is effectiveness? The effectiveness of an inspection process is its rate of correct calls.
◦ Correct call (Cc): the number of times the operator(s) classify a sample correctly, a good sample as good or a bad sample as bad.

Effectiveness = (number of correct evaluations) / (total number of opportunities)
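As a hypothetical worked example: with 3 appraisers, 20 parts, and 3 trials there are 3 × 20 × 3 = 180 opportunities; if 171 evaluations are correct, Effectiveness = 171 / 180 = 95%.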
What is a false alarm? False alarm (Fa): the number of times the operator(s) identify a good sample as a bad one. The probability of a false alarm, also known as Type I error or producer’s risk, is given by:

Fa (false alarm rate) = (number of false alarms) / (number of opportunities on non-defective items)
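As a hypothetical worked example: if the study contains 15 good parts, each inspected 9 times (135 opportunities on non-defective items), and 7 of those calls are “not OK,” then Fa = 7 / 135 ≈ 5.2%.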
What is the miss rate? A miss is a defective item that is classified as non-defective. Miss rate (Mr): the number of times the operator(s) identify a bad sample as a good one. The probability of a miss, also known as Type II error or consumer’s risk, is given by:

Mr (miss rate) = (number of misses) / (number of opportunities on defective items)
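As a hypothetical worked example: if the study contains 5 defective parts, each inspected 9 times (45 opportunities on defective items), and 2 of those calls are “OK,” then Mr = 2 / 45 ≈ 4.4%.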
Acceptability criteria: If all measurement results agree, the gage is acceptable. If the measurement results do not agree, the gage cannot be accepted; it must be improved and re-evaluated.
◦ Effectiveness: less than 80% is not acceptable.
◦ Miss rate: greater than 5% is not acceptable.
◦ False alarm rate: greater than 10% is not acceptable.
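Here is a minimal sketch (the function name is our own, and the counts reuse the hypothetical numbers from the worked examples above) that scores a study against these three thresholds:

```python
# Score an attribute MSA study against the acceptability thresholds above.
def attribute_msa_verdict(correct, total, misses, defective_opps,
                          false_alarms, good_opps):
    effectiveness = correct / total
    miss_rate = misses / defective_opps
    false_alarm_rate = false_alarms / good_opps
    acceptable = (effectiveness >= 0.80
                  and miss_rate <= 0.05
                  and false_alarm_rate <= 0.10)
    return effectiveness, miss_rate, false_alarm_rate, acceptable

eff, mr, fa, ok = attribute_msa_verdict(
    correct=171, total=180,            # all evaluations
    misses=2, defective_opps=45,       # calls made on defective parts
    false_alarms=7, good_opps=135,     # calls made on good parts
)
print(f"Effectiveness {eff:.1%}, miss rate {mr:.1%}, false alarm rate {fa:.1%}")
print("Gage acceptable" if ok else "Gage must be improved and re-evaluated")
```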
What could have caused the poor agreement? What should be done to improve the measurement system? What should be done to improve consistency? Brainstorm these questions!
If any of the decisions disagree, the measurement system may need improvement. Improvement actions include: reworking the gage, retraining the inspectors, clarifying the accept/reject criteria, and adding more lighting. After implementing the improvement actions, repeat the study. If the error cannot be eliminated, take appropriate corrective action, such as switching to a new measurement system, adding redundant inspections, or conducting a more extensive study.
Exercise