THE INTERNATIONAL HARMONISED PROTOCOL FOR THE PROFICIENCY TESTING OF (CHEMICAL) ANALYTICAL LABORATORIES INTERNATIONAL UNION OF PURE AND APPLIED CHEMISTRY ANALYTICAL, APPLIED, CLINICAL, INORGANIC AND PHYSICAL CHEMISTRY DIVISIONS INTERDIVISIONAL WORKING PARTY FOR HARMONISATION OF QUALITY ASSURANCE SCHEMES FOR ANALYTICAL LABORATORIES
CONTENTS
Preface
1. Introduction
2. Definitions
3. Organisation (Protocol) of Proficiency Testing Schemes
3.1 Framework
3.2 Organisation
3.3 Test Materials
3.4 Frequency of Test Sample Distribution
3.5 Establishing the Assigned Result
3.6 Choice of Analytical Method
3.7 Assessment of Performance
3.8 Performance Criteria
3.9 Reporting of Results
3.10 Liaison with Participants
3.11 Collusion and Falsification of Results
3.12 Repeatability
4. Generalised Statistical Procedure for the Analysis of Results - Approaches to Data Analysis in Proficiency Testing
4.1 Estimates of Assigned Value
4.2 Formation of a z-Score
4.3 Interpretation of z-Scores
4.4 An Alternative Score
4.5 Combination of Results of a Laboratory Within a Round of the Trial
4.6 Running Scores
4.7 Classification, Ranking and Other Assessment of Proficiency Data
5. An Outline Example of How Assigned Values and Target Values May Be Specified and Used
6. References
Appendix I: Suggested Headings for a Quality Manual for the Organisation of Proficiency Testing Schemes (not necessarily in this order)
Appendix II: A Recommended Procedure for Testing a Material for Sufficient Homogeneity
Appendix III: Combination of Results of a Laboratory Within One Round of a Trial
Appendix IV: Calculation of Running Scores
Appendix V: An Alternative Scoring Procedure for Proficiency Testing Schemes
Appendix VI: An Outline Example of How Assigned Values and Target Values May Be Specified and Used
2. DEFINITIONS AND TERMINOLOGY USED IN PROTOCOL
2.1 Proficiency Testing Scheme Methods of checking laboratory testing performance by means of interlaboratory tests. 2.2 Internal Quality Control (IQC) The set of procedures undertaken by the laboratory for continuous monitoring of operations and results in order to decide whether the results are reliable enough to be released; ...
2.3 Quality Assurance Programme/System The sum total of a laboratory's activities aimed at achieving the required standard of analysis. ... 2.4 Testing Laboratory A laboratory that measures, examines, tests, calibrates or otherwise determines the characteristics or performance of materials or products. ...
2.5 Reference Material (RM) A material or substance one or more of whose property values are sufficiently homogeneous and well established to be used for the calibration of an apparatus, the assessment of a measurement method, or for assigning values to materials. 2.6 Certified Reference Material (CRM) A reference material, accompanied by a certificate, one or more of whose property values are certified by a procedure which establishes traceability to an accurate realisation of the unit in which the property values are expressed, and for which each certified value is accompanied by an uncertainty at a stated level of confidence. ...
2.7 True Value The actual concentration of the analyte in the matrix. 2.8 Assigned value The value to be used as the true value by the proficiency test coordinator in the statistical treatment of results. It is the best available estimate of the true value of the analyte in the matrix.
2.9 Target Value for Standard Deviation A numerical value for the standard deviation of a measurement result, which has been designated as a goal for measurement quality. 2.10 Interlaboratory Test Comparisons Organisation, performance and evaluation of tests on the same items or materials on identical portions of an effectively homogeneous material, by two or more different laboratories in accordance with predetermined conditions.
2.11 Coordinator The organisation with responsibility for coordinating all of the activities involved in the operation of a proficiency testing scheme. 2.12 Accuracy The closeness of agreement between a test result and the accepted reference value. ...
2.13 Trueness The closeness of agreement between the average value obtained from a large series of test results and an accepted reference value. NOTE - The measure of trueness is usually expressed in terms of bias. 2.14 Bias The difference between the expectation of the test results and an accepted reference value. NOTE - Bias is a systematic error as contrasted to random error. ...
2.15 Laboratory bias The difference between the expectation of the test results from a particular laboratory and an accepted reference value. 2.16 Bias of the measurement method The difference between the expectation of test results obtained from all laboratories using that method and an accepted reference value. ...
2.17 Laboratory component of bias The difference between the laboratory bias and the bias of the measurement method. NOTES 1. The laboratory component of bias is specific to a given laboratory and the conditions of measurement within the laboratory, and it may also differ at different levels of the test. 2. The laboratory component of bias is relative to the overall average result, not the true or reference value.
2.18 Precision The closeness of agreement between independent test results obtained under prescribed conditions. NOTES 1. Precision depends only on the distribution of random errors and does not relate to the accepted reference value. 2. The measure of precision is usually expressed in terms of imprecision and computed as a standard deviation of the test results. Higher imprecision is reflected by a larger standard deviation. 3. 'Independent test results' means results obtained in a manner not influenced by any previous result on the same or similar material.
3. ORGANISATION (PROTOCOL) OF PROFICIENCY TESTING SCHEMES
3.1 Framework Test samples must be distributed on a regular basis to the participants who are required to return results within a given time. The results will be subject to statistical analysis by the coordinator and participants will be promptly notified of their performance. Advice will be available to poor performers and all participants will be kept fully informed of the progress of the scheme. Participants will be identified in Reports by code only.
3.1 Framework The structure of the scheme for any one analyte or round in a series should be as follows:
1. coordinator organises preparation, homogeneity testing and validation of test material;
2. coordinator distributes test samples on a regular schedule;
3. participants analyse test portions and report results centrally;
4. results subjected to statistical analysis; performance of laboratories assessed;
5. participants notified of their performance;
6. advice available for poor performers, on request;
7. coordinator reviews performance of scheme;
8. next round commences.
3.2 Organisation Day-to-day running of the scheme will be the responsibility of the coordinator. The coordinator must document all practices and procedures... Preparation of test materials will either be contracted out or undertaken by the coordinator. The laboratory preparing the test material should have demonstrable experience in the area of analysis being tested. It is essential for the coordinator to retain control over the assessment of performance, as this will help to maintain the credibility of the scheme. Overall direction of the scheme should be overseen by a small advisory panel having representatives ... from, for example, the coordinator, contract laboratories (if any), appropriate professional bodies, participants and end-users of analytical data.
3.3 Test Materials The test materials to be distributed in the scheme must be generally similar in type to the materials that are routinely analysed (in respect of the composition of the matrix and the concentration range or quantity of the analyte). It is important that they are of acceptable homogeneity and stability. The assigned value will not be disclosed to the participants until after the results have been collated. The bulk material prepared for the proficiency test must be sufficiently homogeneous for each analyte so that all laboratories receive test samples that do not differ significantly in analyte concentration. The coordinator must clearly state the procedure used to establish the homogeneity of the test material; the between-sample standard deviation should be less than 0.3 times the target value for the standard deviation.
3.3 Test Materials Where possible the coordinating laboratory should also provide evidence that the test material is sufficiently stable to ensure that it will not undergo any significant change throughout the duration of the proficiency test. Prior to distribution of the test samples, therefore, the stability of the matrix and the analytes it contains must be determined by carrying out analyses after the material has been stored for an appropriate period of time. The storage conditions, most especially of time and temperature, used in the stability trials must represent those conditions likely to be encountered during the entire duration of the proficiency test. Stability trials must therefore take account of the transport of the test samples to participating laboratories as well as the conditions encountered purely in a laboratory environment. ...
3.3 Test Materials Ideally the quality checks on the samples referred to above should be performed by a different laboratory from that which prepared the sample, although it is recognised that this may cause difficulties to the coordinating organisation. ... Coordinators should consider any hazards that the test materials might pose and take appropriate action to advise any party that might be at risk (e.g. test material distributors, testing laboratories etc.) of the potential hazard involved.
3.4 Frequency of Test Sample Distribution ...depends upon a number of factors, of which the most important are:
i) the difficulty of executing effective analytical quality control;
ii) the laboratory throughput of test samples;
iii) the consistency of the results from previous rounds;
iv) the cost/benefit of the scheme;
v) the availability of suitable material for proficiency testing schemes.
3.5 Establishing the Assigned Value The coordinator should give details of how the assigned value was obtained, where possible with a statement of its traceability and its uncertainty. Possible approaches ... 3.5.1 Consensus value from expert laboratories This value is the consensus of a group of expert laboratories that achieve agreement by the careful execution of recognised reference methods... The consensus value will normally be a robust mean or the mode.
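The protocol leaves the exact robust-mean procedure to the coordinator. As a rough illustration only, here is a minimal sketch of one common approach, iterated winsorisation at μ ± k·s; the function name, the constant k = 1.5 and the fixed MAD-based scale are assumptions of this sketch, not requirements of the protocol (a full Huber "Algorithm A" would also update the scale estimate).

```python
import statistics

def robust_mean(results, k=1.5, tol=1e-9, max_iter=100):
    """Illustrative robust mean: winsorise results at mu +/- k*s and
    re-average until the estimate stabilises. The scale s is held fixed
    at a MAD-based estimate for simplicity."""
    mu = statistics.median(results)
    mad = statistics.median([abs(x - mu) for x in results])
    s = 1.4826 * mad or 1e-12   # guard against a zero MAD
    for _ in range(max_iter):
        clipped = [min(max(x, mu - k * s), mu + k * s) for x in results]
        new_mu = sum(clipped) / len(clipped)
        if abs(new_mu - mu) < tol:
            break
        mu = new_mu
    return mu
```

With results [9.8, 10.0, 10.1, 10.2, 15.0] the plain mean (11.02) is dragged upwards by the outlier, while the robust mean stays near 10.1, which is why a robust estimator is preferred for consensus values.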
3.5.2 Formulation ... addition of a known amount or concentration of analyte to a base material containing none. Problems might arise with the use of formulation:
(a) there is a need to ensure that the base material is effectively free from the analyte, or that the residual analyte concentration is accurately known;
(b) it may be difficult to mix the analyte homogeneously into the base material where this is required;
(c) the added analyte may be more loosely bonded than, or in a different chemical form from, that found in the typical materials that the test materials represent. ...
3.5.3 Direct comparison with certified reference materials ... test material is analysed along with appropriate certified reference materials by a suitable method under repeatability conditions. In effect the method is calibrated with the CRMs, providing direct traceability and an uncertainty for the value assigned to the test material. The CRMs must have both the appropriate matrix and an analyte concentration range that spans, or is close to, that of the test material. ...
3.5.4 Consensus of participants A value often advocated for the assigned value is the consensus (usually a robust mean or the mode) of the results of all of the participants in the round of the test. This value is clearly the cheapest and easiest, but... ...there are a number of drawbacks to the consensus of participants. At a fundamental level it is difficult to see how traceability or an uncertainty could be attributed to such a value, unless all of the participants were using the same reference method. Other objections ... against the consensus are: (a) there may be no real consensus amongst the participants; (b) the consensus may be biased by the general use of faulty methodology. ...
3.5.5 Choice between methods The choice between methods of evaluating the assigned value depends on circumstances and is the responsibility of the organising agency. ... Empirical methods are used when the analyte is ill-defined chemically ... e.g. the determination of "fat": the true result is produced by a correct execution of the method, and the analyte content is defined only if the method is simultaneously specified. Special problems arise in proficiency trials when a choice of such methods is available ... no valid consensus may be evident ... To overcome this problem:
(i) a separate assigned value is produced for each empirical method used;
(ii) participants are instructed to use a prescribed method;
(iii) participants are warned that a bias may result from using an empirical method different from that used to obtain the consensus.
3.6 Choice of Analytical Method Participants will be able to use the analytical method of their choice except when instructed otherwise to adopt a specified method. Methods used must be validated by appropriate means, e.g. collaborative trial, reference method etc. ... 3.7 Assessment of Performance Laboratories will be assessed on the difference between their result and the assigned value. A performance score will be calculated for each laboratory, using the statistical scheme ... 3.8 Performance Criteria For each analyte in a round a criterion for the performance score may be set ... against which the performance score obtained by a laboratory can be judged. The performance criterion ... should be set so as to ensure that the analytical data routinely produced by the laboratory are of a quality that is adequate for their intended purpose, not necessarily ... the highest level.
3.9 Reporting of Results ... should be clear and comprehensive, and include data on the distribution of results from all laboratories together with each participant's performance score. The test results as used by the coordinator should also be displayed, to enable participants to check that their data have been correctly entered. Reports should be made available as quickly as possible after the return of results to the coordinating organisation. 3.10 Liaison with Participants Participants should be provided with a detailed information pack on joining the scheme ... advised immediately of any changes in scheme design or operation. Advice should be available to poor performers. Participants who consider that their performance assessment is in error must be able to refer the matter ... Feedback from laboratories should be encouraged, so that participants actively contribute to the development of the scheme.
3.11 Collusion and Falsification of Results ... there may be a tendency among participants to provide a falsely optimistic impression of their capabilities. For example, collusion may take place between laboratories, so that truly independent data are not submitted. ... Proficiency testing schemes should be designed to ensure that there is as little collusion and falsification as possible. ... instructions to participants should make it clear that collusion is contrary to professional scientific conduct and serves only to nullify the benefits of proficiency testing to customers, accreditation bodies and analysts alike. ... reasonable measures should be taken by the coordinators to prevent collusion, ... but it is the responsibility of the participating laboratories to avoid it.
3.12 Repeatability Procedures used by laboratories participating in proficiency testing schemes should simulate those used in routine sample analysis. Thus, duplicate determinations on proficiency test samples should be carried out only if this is the norm for routine work. The result to be reported should be in the same form (e.g., number of significant figures) as that normally reported to the customer. ... Duplication in proficiency tests to obtain a measure of repeatability ... should be allowed as a possibility, but is not a requirement of this protocol.
4. GENERALISED STATISTICAL PROCEDURE FOR THE ANALYSIS OF RESULTS
4.1 Estimates of Assigned Value The first stage in producing a score from a result x (a single measurement of analyte concentration (or amount) in a test material) is obtaining an estimate of the bias, which is defined as: bias estimate = x - X where X is the true value.
4.2 Formation of a z-score Most proficiency testing schemes proceed by comparing the bias estimate ... with a target value for standard deviation that forms the criterion of performance. An obvious approach is to form the z-score given by
z = (x − X) / σ
where σ is the target value for standard deviation. A fixed value for σ is preferable and has the advantage that the z-scores derived from it can be compared from round to round to demonstrate general trends for a laboratory or a group of laboratories. ... The value chosen for σ can be arrived at in several ways:
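Applied to a whole round, the z-score above is a one-line calculation per laboratory. A minimal sketch (the function name and the dict-keyed-by-laboratory-code convention are illustrative, not part of the protocol):

```python
def z_scores(results, assigned_value, target_sd):
    """z = (x - X) / sigma for each reported result x.

    results : dict mapping laboratory code -> reported result.
    Returns a dict mapping laboratory code -> z-score."""
    if target_sd <= 0:
        raise ValueError("target standard deviation must be positive")
    return {lab: (x - assigned_value) / target_sd
            for lab, x in results.items()}
```

For example, with X = 10.0 and σ = 0.5, a reported result of 10.5 scores z = 1.0 and a result of 8.9 scores z = −2.2.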
4.2 Formation of a z-score 4.2.1 By Perception The value of σ could be fixed arbitrarily, with a value based on a perception of how laboratories perform. The problem with this criterion is that both perceptions and laboratory performance may change with time. The value of σ therefore may need to be changed occasionally, disturbing the continuity of the scoring scheme. However, there is some evidence that laboratory performance responds favourably to a stepwise increase in performance requirements. 4.2.2 By Prescription The value of σ could be an estimate of the precision required for a specific task of data interpretation. This is the most satisfactory type of criterion, if it can be formulated, because it relates directly to the required information content of the data. Unless the concentration range is very small, σ should be specified as a function of concentration.
4.2 Formation of a z-score 4.2.3 By Reference to Validated Methodology Where a standard method is prescribed for the analysis, σ could be obtained by interpolation from the standard deviation of reproducibility obtained during appropriate collaborative trials. 4.2.4 By Reference to a Generalised Model The value of σ could be derived from a general model of precision, such as the "Horwitz Curve". However, while this model provides a general picture of reproducibility, substantial deviation from it may be experienced for particular methods. It could be used if no specific information is available.
4.3 Interpretation of z-scores If X and σ were good estimates of the population mean and standard deviation, and the underlying distribution were normal, then z would be approximately normally distributed with a mean of zero and unit standard deviation. An analytical system can be described as "well behaved" when it complies with these conditions. Under these circumstances an absolute value of z (|z|) greater than three suggests poor performance. Because z is standardised, it can be usefully compared between all analytes, test materials and analytical methods. Values of z obtained from diverse materials and concentration ranges can ... be combined to give a composite score for a laboratory in one round of a proficiency test. ... the meaning of z-scores can be immediately appreciated, i.e. values of |z| < 2 would be very common and values of |z| > 3 would be very rare in well behaved systems. ...
4.4 An alternative score An alternative type of scoring, here called Q-scoring, is based not on the standardised value but on the relative bias, namely
Q = (x − X) / X
... Although not recommended in this protocol, a number of sectors, for example occupational hygiene, use this type of approach. The scoring does have the disadvantage that the significance of any result is not immediately apparent. 4.5 Combination of results of a laboratory within a round of the trial ... participants want a single figure of merit that will summarise the overall performance of the laboratory within a round. ... appropriate for the assessment of long term trends. ... general use of combination scores is not recommended ...
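The Q-score and its stated drawback can be sketched directly from the formula; the protocol gives only the expression Q = (x − X)/X, so the function name and the zero-value guard below are assumptions of this sketch:

```python
def q_score(x, assigned_value):
    """Q = (x - X) / X: relative-bias score (not recommended by the protocol).

    Unlike z, Q is not standardised, so its magnitude carries no fixed
    probabilistic interpretation across analytes or concentration levels."""
    if assigned_value == 0:
        raise ValueError("Q-score is undefined for an assigned value of zero")
    return (x - assigned_value) / assigned_value
```

For example, a result of 11.0 against an assigned value of 10.0 gives Q = 0.1 (a 10% relative bias), but whether 10% is acceptable depends entirely on the analyte and concentration, which is precisely the disadvantage noted above.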
4.6 Running Scores ... for some purposes it may be useful to have a more general indicator of the performance of a laboratory over time. 4.7 Classification, ranking and other assessment of proficiency data Classification is not the primary aim of proficiency testing. However, it is possible that accreditation agencies will use proficiency test results for this purpose, so it is essential that any classification used should be statistically well-founded.
4.7 Classification, ranking and other assessment of proficiency data 4.7.1 Classification ... it would be possible to classify scores as:
|z| ≤ 2 Satisfactory
2 < |z| < 3 Questionable
|z| ≥ 3 Unsatisfactory
... the knowledge of the relevant probabilities rests on assumptions that may not be fulfilled: (i) that the appropriate values of X and σ have been used; and (ii) that the underlying distribution of analytical errors is normal, apart from outliers. ... classification is not recommended in proficiency tests. "Decision limits" based on z-scores may be used as an alternative where necessary. 4.7.2 Ranking ... a ranked list is used for encouraging better performance in poorly ranked laboratories by providing a comparison among the participants. However, ranking is not recommended as it is an inefficient use of the information available and may be open to misinterpretation. A histogram is a more effective method of presenting the same data.
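The three bands in 4.7.1 translate directly into code; as the text stresses, such labels are best treated as decision limits triggering review rather than grades. A minimal sketch (the function name and label strings are illustrative):

```python
def classify_z(z):
    """Band a z-score using the |z| <= 2 / 2 < |z| < 3 / |z| >= 3 limits.

    The protocol advises against formal classification, so these labels
    should be read as decision limits, not as a ranking of laboratories."""
    az = abs(z)
    if az <= 2:
        return "satisfactory"
    if az < 3:
        return "questionable"
    return "unsatisfactory"
```

Note the boundary conventions: |z| = 2 is still satisfactory, while |z| = 3 is already unsatisfactory, matching the inequalities above.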
APPENDIX II: A RECOMMENDED PROCEDURE FOR TESTING A MATERIAL FOR SUFFICIENT HOMOGENEITY
1. Prepare the whole of the bulk material, by an appropriate method, in a form that is thought to be homogeneous.
2. Divide the material into the containers that will be used for dispatch to the participants.
3. Select a minimum (n) of 10 containers strictly at random.
4. Separately homogenise the contents of each of the n selected containers and take two test portions from each.
5. Analyse the 2n test portions in a random order under repeatability conditions by an appropriate method. The analytical method used must be sufficiently precise to allow a satisfactory estimation of Ss.
6. Form an estimate (Ss²) of the sampling variance and an estimate (Sa²) of the analytical variance by one-way analysis of variance, without exclusion of outliers.
7. Report the values of x, Ss, Sa, n and the result of the F-test.
8. If σ is the target value for standard deviation for the proficiency test at analyte concentration x, the value of Ss/σ should be less than 0.3 for sufficient homogeneity.
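Steps 4-8 amount to a one-way ANOVA on duplicate results. A minimal sketch under the protocol's design of two test portions per container (the function name and return convention are assumptions of this sketch):

```python
def homogeneity_check(duplicates, target_sd):
    """One-way ANOVA homogeneity test on duplicates from n containers.

    duplicates : list of (a, b) result pairs, one pair per container.
    target_sd  : target value for standard deviation (sigma).
    Returns (Ss, Sa, ok): between-sample SD, analytical SD, and whether
    the Ss / sigma < 0.3 criterion for sufficient homogeneity is met."""
    n = len(duplicates)
    means = [(a + b) / 2 for a, b in duplicates]
    grand = sum(means) / n
    # within-sample (analytical) mean square, from duplicate differences
    msw = sum((a - b) ** 2 for a, b in duplicates) / (2 * n)
    # between-sample mean square (each container mean is based on 2 results)
    msb = 2 * sum((m - grand) ** 2 for m in means) / (n - 1)
    sa = msw ** 0.5
    ss = (max(msb - msw, 0.0) / 2) ** 0.5   # Ss^2 = (MSB - MSW) / 2
    return ss, sa, ss / target_sd < 0.3
```

The `max(..., 0.0)` guards the case where the between-sample mean square falls below the within-sample mean square, in which case the sampling variance estimate is taken as zero.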
EXAMPLE - Copper in Soya Flour (µg g-1)
[Table of duplicate copper results for the selected samples omitted in this transcript.]
Grand mean = 10.02
Analysis of variance: observed F = 3.78; the critical value of F (p=0.05, v1=11, v2=12) is 2.72, so there are significant differences between samples.
Sa = 0.25
Ss = ((MSB − MSW)/2)^1/2 = 0.29, where MSB and MSW are the between- and within-sample mean squares.
σ = 1.1 (this is an example of a target value for standard deviation and is not derived from the data)
Ss/σ = 0.29/1.1 = 0.26 < 0.3
Although there are significant differences between samples (F-test), the material is sufficiently homogeneous for the purpose of the proficiency trial, as Ss/σ = 0.26 is less than the maximum recommended value of 0.3.
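Plugging the quoted figures into the criterion reproduces the example's conclusion; the numbers below are taken from the example as given, not recomputed from the (omitted) raw data:

```python
# Figures quoted in the copper-in-soya-flour example
F_obs, F_crit = 3.78, 2.72        # observed F vs critical F (p=0.05, v1=11, v2=12)
Sa, Ss, sigma = 0.25, 0.29, 1.1   # analytical SD, sampling SD, target SD

significant_heterogeneity = F_obs > F_crit  # True: F-test detects differences
ratio = Ss / sigma                          # 0.2636...
fit_for_purpose = ratio < 0.3               # True: still sufficiently homogeneous
```

The point of the example is that both flags can be true at once: statistically detectable heterogeneity does not disqualify the material so long as Ss/σ stays below 0.3.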
APPENDIX VI: AN OUTLINE EXAMPLE OF HOW ASSIGNED VALUES AND TARGET VALUES MAY BE SPECIFIED AND USED
1. Scheme The example requires that there will be four distributions of materials per year, dispatched by post on the Monday of the first full working week of January, April, July and October. Results must reach the organisers by the last day of the respective month. A statistical analysis of the results will be dispatched to participants within a set number of weeks of the closing dates. This example considers the results from one particular circulation of the test materials for the determination of the analytes.
2. Testing for Sufficient Homogeneity In accordance with the procedure described in Appendix II ...
3. Analyses required The analyses required in each round will be: (i) hexachlorobenzene in an oil; (ii) Kjeldahl nitrogen in a cereal product.
4. Methods of Analysis and Reporting of Results No method is specified, but the target values were determined using a standard method, and participants must provide an outline of the method actually used, or give a reference to a documented method. Participants must report a single result, in the same form as would be provided for a client. Individual reported values are given in the Tables.
5. Assigned Values
5.1 Hexachlorobenzene in Oil Take the estimate of the assigned analyte concentration X for the batch of material as the robust mean of the results of six expert laboratories. [Table of reference laboratory results (µg/kg) omitted in this transcript.] X is ... µg/kg; traceability was obtained using a reference method calibrated with in-house reference standards, and the uncertainty on the assigned value was determined to be 10 µg/kg from a detailed assessment of this method by the reference laboratories.
5.2 Kjeldahl Nitrogen in a Cereal Product Take the assigned value of analyte concentration X for the batch of material as the median of the results from all laboratories.
6. Target Values for Standard Deviation
6.1 Hexachlorobenzene in Oil In the example used in this Appendix the %RSDR value has been calculated from the Horwitz equation (RSDR in % = 2^(1 − 0.5 log10 C), with C the concentration expressed as a mass fraction). The target value for the standard deviation (σ) will therefore be:
σ1 = (RSDR/100) × X µg/kg
6.2 Kjeldahl Nitrogen in a Cereal Product In the examples used in this Appendix the %RSDR value has been calculated from published collaborative trials. The target value for the standard deviation (σ) is given by:
σ2 = (RSDR/100) × X g/100g
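A sketch of the Horwitz calculation used for σ1 (the function names are assumptions of this sketch; the concentration C is taken as a dimensionless mass fraction, the usual convention for the Horwitz equation, so 1 mg/kg corresponds to C = 1e-6):

```python
import math

def horwitz_rsd_percent(c):
    """Horwitz curve: predicted reproducibility RSD (%) at concentration c,
    where c is a dimensionless mass fraction (1 mg/kg -> c = 1e-6)."""
    return 2 ** (1 - 0.5 * math.log10(c))

def horwitz_target_sd(c):
    """Target standard deviation sigma = c * RSD_R / 100, in the same
    mass-fraction units as c."""
    return c * horwitz_rsd_percent(c) / 100
```

At 1 mg/kg (c = 1e-6) the predicted RSDR is 16%, giving a target standard deviation of 0.16 mg/kg.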
7. Statistical Analysis of Results of the Test
7.1 Hexachlorobenzene in Oil: Formation of z-Scores Calculate z = (x − X) / σ1 for each individual result (x), using the values of X and σ1 derived above.
7.2 Kjeldahl Nitrogen in a Cereal Product: Formation of z-Scores Calculate z = (x − X) / σ2 for each individual result (x), using the values of X and σ2 derived above.
8. Display of Results
8.1 z-Score Tables The individual results for hexachlorobenzene pesticide in oil and for Kjeldahl nitrogen in a cereal product, together with the associated z-scores, are displayed in tabular form...
8.2 Histograms for z-Scores The z-scores for hexachlorobenzene pesticide in oil and for Kjeldahl nitrogen in a cereal product are displayed as bar-charts...
Tables
Figures
9. Decision Limits Results with an absolute value for z of less than 2 will be regarded as satisfactory. Remedial action will be recommended when any of the z-scores exceeds an absolute value of 3.0. In this example, such results are: Laboratories 005, 008, 012 and 014 for hexachlorobenzene pesticide in oil, and Laboratory 008 for Kjeldahl nitrogen in a cereal product.