Chapter 8: Measuring and Controlling Quality
Managing for Quality and Performance Excellence, 10e, © 2017 Cengage Publishing
Quality Profile: MESA Products, Inc. MESA designs, manufactures, and installs cathodic protection systems that control the corrosion of metal surfaces in underground and submerged structures. The company implemented a comprehensive quality assurance specification for product acceptance, resulting in a significant improvement in quality. A variety of tools help to improve performance, including lean manufacturing, ISO 9000, and the Baldrige criteria. A monthly balanced report card helps the company review its performance and find ways to improve.
Quality Profile: Operations Management International Operates and maintains more than 160 public- and private-sector wastewater and water treatment facilities. Key enablers for its Quality as a Business Strategy leadership system are the company's Linkage of Process Model, which defines relationships among processes, and its Family of Measures, a balanced scorecard of 20 integrated metrics. Team charters for improvement projects specify which of OMI's more than 150 critical processes are involved, the metrics for evaluation, costs, required resources, and other information vital to the project's success.
Measurement for Quality Control Measurement is the act of collecting data to quantify the values of product, service, process, and other business metrics. Measures and indicators refer to the numerical results obtained from measurement. The term indicator is often used for measurements that are not a direct or exclusive measure of performance. Good measures should be SMART: Simple, Measurable, Actionable, Related (to customer and operational requirements), and Timely.
Dashboards … summaries of key performance measures, typically consisting of a small set of measures (five or six) that provide a quick summary of process performance. Dashboards often use graphs, charts, and other visual aids to communicate key measures and alert workers and managers when performance is not where it should be.
Common Quality Measurements A unit of work is the output of a process or an individual process step. A nonconformance is any defect or error associated with a unit of work. In manufacturing we often use the term defect, and in service applications, we generally use the term error to describe a nonconformance. A nonconforming unit of work is one that has one or more defects or errors.
Types of Quality Control Measures An attribute measurement characterizes the presence or absence of nonconformances in a unit of work, or the number of nonconformances in a unit of work. Attribute measurements often are collected by visual inspection and expressed as proportions and counts. Variable measurements apply to dimensional quantities such as length, weight, and time, or any value on a continuous scale of measurement. Variable measurements are generally expressed with statistical measures such as averages and standard deviations.
Attribute Measurements In manufacturing it is common to use the terms "proportion defective" and "defects per unit" (DPU) in these formulas: proportion defective = (number of nonconforming units)/(total number of units inspected), and DPU = (total number of defects)/(total number of units).
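Both measures are simple ratios. A minimal sketch in Python (the function names and example counts are illustrative, not from the text):

    def proportion_defective(nonconforming_units, total_units):
        # Fraction of inspected units that have one or more nonconformances.
        return nonconforming_units / total_units

    def defects_per_unit(total_defects, total_units):
        # Average number of nonconformances per unit of work.
        return total_defects / total_units

    # Example: 1,000 units inspected; 24 are nonconforming, with 31 total defects.
    print(proportion_defective(24, 1000))   # 0.024
    print(defects_per_unit(31, 1000))       # 0.031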
Defect Classification
1. Critical defect: one that judgment and experience indicate will surely result in hazardous or unsafe conditions for individuals using, maintaining, or depending on the product, or will prevent proper performance of the product.
2. Major defect: one that is not critical but is likely to result in failure or to materially reduce the usability of the unit for its intended purpose.
3. Minor defect: one not likely to materially reduce the usability of the item for its intended purpose, nor to have any bearing on the effective use or operation of the unit.
Throughput Yield (TY) …the proportion of units that have no nonconformances (the number of units with no defects divided by the total number of units produced)
Rolled Throughput Yield (RTY) …the proportion of conforming units that results from a series of process steps. Mathematically, it is the product of the yields from each process step.
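For example, if three sequential steps have throughput yields of 0.98, 0.95, and 0.99, then RTY = 0.98 × 0.95 × 0.99 ≈ 0.9217. A minimal sketch in Python (the step yields are illustrative):

    import math

    # Rolled throughput yield: the product of each step's throughput yield.
    step_yields = [0.98, 0.95, 0.99]   # illustrative values
    rty = math.prod(step_yields)
    print(round(rty, 4))               # 0.9217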
DPMO Defects per million opportunities (DPMO) = (number of defects discovered)/(number of opportunities for error) × 1,000,000, where the number of opportunities for error is the number of units times the opportunities per unit. In services, the term often used as an analogy to DPMO is errors per million opportunities (EPMO).
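A minimal sketch of the DPMO calculation (the counts and the number of opportunities per unit are illustrative; opportunities for error must be defined for the specific process):

    # DPMO: defects discovered per million opportunities for error.
    defects = 15                  # defects discovered (illustrative)
    units = 1_000                 # units produced
    opportunities_per_unit = 4    # defined opportunities for error per unit
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    print(dpmo)                   # 3750.0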
Cost of Quality Measures Cost of quality (COQ) is the cost of avoiding poor quality, or the costs incurred as a result of poor quality. COQ provides a basis for identifying improvement opportunities and gauging the success of improvement programs, and it translates quality problems into the "language" of upper management: the language of money.
Quality Cost Classification
Prevention – investments made to keep nonconforming products from occurring and reaching the customer.
Appraisal – costs associated with efforts to ensure conformance to requirements, generally through measurement and analysis of data to detect nonconformances.
Internal failure – costs of unsatisfactory quality found before the delivery of a product to the customer.
External failure – costs incurred after poor-quality products reach the customer.
Measurement System Evaluation Observed variation in process output stems from the natural variation that occurs in the output itself as well as from the measurement system. The total observed variation in production output is the sum of the true process variation (which is what we actually want to measure) plus the variation due to measurement: σ²_total = σ²_process + σ²_measurement.
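Under this additive-variance model, the true process variation can be backed out of the observed total once the measurement variation is known. A minimal sketch (the standard deviations are illustrative):

    import math

    # sigma^2_total = sigma^2_process + sigma^2_measurement
    sigma_total = 2.5            # observed std. dev. of output (illustrative)
    sigma_measurement = 1.0      # std. dev. attributable to the measurement system
    sigma_process = math.sqrt(sigma_total**2 - sigma_measurement**2)
    print(round(sigma_process, 3))   # 2.291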
Errors in Manual Inspection Complexity: The number of defects caught by an inspector decreases with more parts and less orderly arrangement. Defect rate: When the product defect rate is low, inspectors tend to miss more defects than when the defect rate is higher. Inspection rate: The inspector’s performance degrades rapidly as the inspection rate increases.
Metrology …the science of measurement, broadly defined as the collection of people, equipment, facilities, methods, and procedures used to assure the correctness or adequacy of measurements. National and international trade requires an infrastructure of weights and measures organizations that assure uniform and accurate measures used in trade; national or regional measurement standards laboratories; standards development organizations; and accredited, internationally recognized calibration and testing laboratories.
Accuracy and Precision Accuracy is defined as the difference between the true value and the observed average of a measurement. Accuracy is measured as the amount of error in a measurement in proportion to the total size of the measurement. Precision is defined as the closeness of repeated measurements to each other. Precision relates to the variance of repeated measurements.
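A sketch of how these two definitions differ in practice (the true value and repeated measurements are illustrative): accuracy is estimated from the bias of the average, precision from the spread of repeated measurements.

    import statistics

    true_value = 10.00
    measurements = [10.02, 9.98, 10.05, 10.01, 9.99]    # illustrative repeats

    bias = statistics.mean(measurements) - true_value   # accuracy (error of the average)
    spread = statistics.stdev(measurements)             # precision (closeness of repeats)
    print(round(bias, 3), round(spread, 3))             # 0.01 0.027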
Calibration …the process of verifying the capability and performance of an item of measuring and test equipment compared to traceable measurement standards. The National Institute of Standards and Technology (NIST) maintains national measurement standards. NIST calibrates the reference-level standards of those organizations requiring the highest level of accuracy. These organizations calibrate their own working-level standards and those of other metrology laboratories. These working-level standards are used to calibrate the measuring instruments used in the field. Many organizations must ensure traceability by keeping records showing that their own measuring equipment has been calibrated by laboratories or testing facilities whose measurements can be related to appropriate standards.
Repeatability and Reproducibility Analysis Repeatability (equipment variation, EV) – variation in multiple measurements of the same item by an individual using the same instrument. Reproducibility (appraiser variation, AV) – variation in measurements of the same item made by different individuals using the same instrument. A repeatability and reproducibility (R&R) study uses statistical analysis to quantify the variation in a measurement system.
R&R Studies
1. Select m operators and n parts.
2. Calibrate the measuring instrument.
3. Have each operator measure each part in random order, for r trials.
4. Compute key statistics to quantify repeatability and reproducibility.
R&R Evaluation A measurement system is adequate if R&R is low relative to the total variation, or equivalently, if the part variation is much greater than the measurement system variation. Common guidelines:
Under 10% error – the measurement system is OK.
10–30% error – may be OK, depending on the application.
Over 30% error – unacceptable.
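A minimal sketch of the evaluation step, assuming the repeatability (EV) and reproducibility (AV) standard deviations have already been estimated from the study data (the numeric estimates are illustrative):

    import math

    def percent_rr(ev, av, total_sd):
        # R&R combines equipment (EV) and appraiser (AV) variation.
        rr = math.sqrt(ev**2 + av**2)
        return 100 * rr / total_sd

    def evaluate(pct):
        if pct < 10:
            return "OK"
        if pct <= 30:
            return "may be OK, depending on the application"
        return "unacceptable"

    pct = percent_rr(ev=0.8, av=0.5, total_sd=4.0)
    print(round(pct, 1), evaluate(pct))   # 23.6 may be OK, depending on the application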
Statistical Perspectives of R&R Because variances, not standard deviations, are additive, the proper way to express the results as a percentage of the total is to use the ratio of variances from formula (8.16), that is, % of total variation = (σ²_R&R / σ²_total) × 100.
Process Capability Measurement Process capability is the ability of a process to produce output that conforms to specifications. A process capability study is a carefully planned study designed to yield specific information about the performance of a process under specified operating conditions. Typical questions include:
Where is the process centered?
How much variability exists in the process?
Is the performance relative to specifications acceptable?
What proportion of output will be expected to meet specifications?
What factors contribute to variability?
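To answer the question of what proportion of output can be expected to meet specifications, a minimal sketch assuming normally distributed output (the specification limits and process estimates are illustrative):

    from statistics import NormalDist

    mean, sd = 10.02, 0.05      # estimated process center and variability
    lsl, usl = 9.90, 10.10      # lower and upper specification limits

    dist = NormalDist(mean, sd)
    within_spec = dist.cdf(usl) - dist.cdf(lsl)
    print(round(within_spec, 4))   # 0.937 -> about 93.7% of output within spec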
Types of Process Capability Studies
Process characterization study – how a process performs under actual operating conditions.
Peak performance study – how a process performs under ideal conditions.
Component variability study – the relative contribution of different sources of variation (e.g., process factors, measurement system).