/CITAC 2002-1/
Identification, Measurement and Decision in Analytical Chemistry
Steve Ellison, LGC, England
Introduction
– What is identification? Why does it matter?
– Where does measurement fit in?
– Quality in identification
– How sure are you? Characterising uncertainty and method performance
What is identification?
“Classification according to specific criteria”
– “Above” or “Below” a limit
– “Within spec.”
– “Red”
– Classification into ranges (<10, >10)
– Molecular species by NMR, IR, MS…
– Material or ingredient (“Rubber”, “Fat”…)
– Origin or authenticity
Why does it matter?
Classification contributes to decisions, and decisions cost money:
– Incorrect batch rejection incurs reprocessing costs
– Incorrect acceptance risks litigation and lost business
– False positives may generate spurious prosecutions
Costs are directly related to false-classification probabilities: know the probabilities and you can optimise the cost.
Where does measurement fit in?
Measurement contributes to most identifications:
– Comparison with limits
– Consistency of values (wavelength, mass, sequence length…)
But not all:
– Relative pattern identification (?)
– Colour matching by eye
– Identity parades…
Interpretation against limits
[Figure: measurement results (a)–(g) plotted against a limit]
Controlling identification
– Good practice guidance
– Stated criteria
– Trained staff
– Controlled and calibrated instruments – traceability!
– Validated methods
– … etc.
How sure are you?
– Does measurement uncertainty apply?
– If not, what does?
Does measurement uncertainty apply?
NO – at least, not for the ‘classification’ result.
Uncertainty and classification
When is the limit exceeded?
[Figure: results (a)–(g) with uncertainty intervals straddling the limit]
Uncertainty in the measurement result contributes to uncertainty about classification.
Uncertainty and classification
– Uncertainty in the measurement result contributes to uncertainty about classification
– Uncertainties in test conditions lead to uncertainty in classification
– Uncertainties should be controlled so that they have little effect on the test result
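The link between measurement uncertainty and classification uncertainty can be made concrete. A minimal sketch, assuming the measurement result has a normal uncertainty distribution with standard uncertainty u (the numbers are illustrative, not from the talk):

```python
from math import erf, sqrt

def p_exceeds_limit(x: float, u: float, limit: float) -> float:
    """Probability that the true value exceeds `limit`, given a result x
    with standard uncertainty u, assuming a normal distribution."""
    z = (limit - x) / u
    # P(true value > limit) = 1 - Phi(z), via the error function
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# A result exactly on the limit is a 50:50 call:
print(p_exceeds_limit(10.0, 0.5, 10.0))   # 0.5
# A result one standard uncertainty above the limit:
print(p_exceeds_limit(10.5, 0.5, 10.0))   # ~0.84
```

Results well clear of the limit classify with near-certainty; results within a couple of standard uncertainties of it, like cases (c)–(e) in the figure, carry an appreciable probability of false classification.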
Characterising ‘uncertainty’ in identification
– False response rates: what is a false response rate? How is it determined?
– Alternative expressions of method performance or uncertainty
False response rates
[Table: 2×2 classification of results – observed (positive/negative) against actual (positive/negative)]
False negative rates
– Fraction of observed negatives which are false
– Fraction of true positives reading negative*
– Fraction of all results which are incorrectly read as negative$
*AOAC definition (clinical)
$The one that affects costs directly
False response rates: example*
[2×2 table; counts reconstructed from the rates quoted on the next slide:]
              Actual +   Actual −
Observed +       14          1
Observed −        2         15
*Nortestosterone in urine: screening method validation data. “Actual” rates from confirmatory test.
False negative rates: example
– Fraction of observed negatives which are false: 2/17 = 12%
– Fraction of true positives reading negative: 2/16 = 13%
– Fraction of all results incorrectly read as negative: 2/32 = 6%
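The three definitions are easy to confuse, so here they are computed side by side. The confusion-matrix counts are inferred from the rates on this slide (2/17, 2/16, 2/32 imply 14 true positives, 1 false positive, 15 true negatives):

```python
# Counts inferred from the slide's quoted rates (not given explicitly in the deck):
tp, fn, fp, tn = 14, 2, 1, 15
total = tp + fn + fp + tn   # 32 results

# 1. Fraction of observed negatives which are false
fnr_of_observed_neg = fn / (fn + tn)   # 2/17, about 12%
# 2. Fraction of true positives reading negative (AOAC, clinical)
fnr_of_actual_pos = fn / (fn + tp)     # 2/16 = 12.5%
# 3. Fraction of all results incorrectly read as negative
fnr_of_all = fn / total                # 2/32 = 6.25%
```

Same two false negatives, three different "false negative rates" – which is the point of the slide.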
False response rates: example*
[Table: observed results from EMIT screening against actual (confirmed) results]
*EMIT test for cocaine in urine: Ferrara et al., J. Anal. Toxicol., 1994, 18, 278
False negative rates: example
– Fraction of observed negatives which are false: 7/429 = 1.6%
– Fraction of true positives reading negative: 7/126 = 5.6%
– Fraction of all results incorrectly read as negative: 7/522 = 1.3%
False response rates – how much data?
Observed: 7/126 (5.6%)
95% confidence interval (binomial): 1.6% to 9.5%
– 95% CI width is proportional to 1/√n_obs
– Needs a LOT of false responses for precise figures
– But false responses are rare for good methods…
Most useful direct studies are ‘worst case’ or near 50% false-response levels.
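The quoted interval can be reproduced closely with the normal approximation to the binomial (the slide presumably used an exact binomial interval; the approximate one below differs only in the last digit):

```python
from math import sqrt

def binomial_ci_95(k: int, n: int) -> tuple[float, float]:
    """Approximate 95% confidence interval for a proportion k/n
    (normal approximation, clipped to [0, 1])."""
    p = k / n
    half = 1.96 * sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

lo, hi = binomial_ci_95(7, 126)
print(f"{lo:.1%} to {hi:.1%}")   # roughly 1.6% to 9.6%
```

Since the half-width scales as 1/√n, halving the interval requires four times the data – hence the slide's point that precise false-response figures demand a lot of (rare) false responses.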
False responses: estimation from thresholds
False responses: from probabilities
Spectroscopic identification study – S.L.R. Ellison, S.L. Gregory, Anal. Chim. Acta, 1998, 370, 181
– Calculated chance FT-IR match probabilities
– Probabilities based on “match-binning” – hits within a set distance
– Required the hypergeometric distribution (n matches of m taken from a population)
– Compared with actual hits on an IR database
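The hypergeometric distribution mentioned here has a standard closed form, sketched below with the Python standard library. The example numbers are purely illustrative, not taken from the study:

```python
from math import comb

def hypergeom_pmf(k: int, population: int, successes: int, draws: int) -> float:
    """P(exactly k matches) when `draws` items are taken without replacement
    from `population` items of which `successes` would match by chance."""
    return (comb(successes, k)
            * comb(population - successes, draws - k)
            / comb(population, draws))

# Illustrative only: chance of exactly 2 coincidental hits when 5 spectra
# are compared against a library of 100 containing 10 chance-matching entries.
p = hypergeom_pmf(2, population=100, successes=10, draws=5)
```

Unlike the binomial, this accounts for sampling without replacement from a finite library, which matters when the number of potential chance matches is small relative to the database.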
False responses: from probabilities
Theoretical predictions are very sensitive to the probability assumptions:
– 10% changes in p make large differences to the predictions
– Best performance was within a factor of 3–10
– (Improved over binomial probabilities by >10⁶)
Probability information must be excellent for good predictions.
False response rates from databases
Most spectral databases contain one of each material – most populations do not!
Population data must account for sub-populations
– cf. DNA profiles for racially inhomogeneous populations
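The sub-population point can be illustrated numerically: averaging match frequencies over a mixed population can badly understate the chance-match risk for items from the sub-population where matches are common. The weights and frequencies below are invented for illustration:

```python
# Two hypothetical sub-populations with different chance-match frequencies:
weights = [0.7, 0.3]        # sub-population proportions (assumed)
match_freq = [0.01, 0.10]   # within-group chance-match frequency (assumed)

# Naive estimate treating the population as homogeneous:
p_avg = sum(w * f for w, f in zip(weights, match_freq))   # 0.037

# For an item actually from the second sub-population, the true chance-match
# probability is 0.10 – nearly 3x the homogeneous estimate:
print(match_freq[1] / p_avg)
```

This is the same structural problem flagged for DNA profiles: a database built with one entry per material encodes a uniform population that real samples rarely come from.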
Alternative performance indicators
Conclusions
– Classification needs control to save money
– INPUT uncertainties need control
– False response rates and derived measures are useful performance indicators
  – Definitions vary and make a (big) difference
  – Sufficient data are hard to get except at carefully chosen analyte levels
  – Databases are suspect unless built for the purpose
  – Theoretical predictions are usable with great care
– Unwise to expect precise numbers!
Best practice
– Consider the costs of false responses
– Control qualitative test conditions via traceable calibration of equipment
– Check the most critical false response rate – preferably both
– Use ‘worst-case’ and likely-interferent studies to show the limits of method performance
– Use APPROPRIATE population data
– Report with caution – particularly on probability estimates