
1 CCI Firearms and Toolmark Examiner Academy Workshop on Current Firearms and Toolmark Research Pushing Out the Frontiers of Forensic Science

2 Outline (Morning-ish): Introduction and the Daubert Standard; Confocal Microscopy; Focus Variation Microscopy; Interferometric Microscopy; Surface Data/Filtering

3 Outline (Afternoon-ish): Similarity scores and cross-correlation functions; known match/known non-match similarity score histograms; false positives/false negatives/error rates; multivariate discrimination of toolmarks; measures of "match quality" confidence; posterior error rate/random match probability; lessons learned in conducting a successful research project

4 Introduction DNA profiling is the most successful application of statistics in forensic science. Responsible for the current interest in "raising standards" in other branches of forensics…?? There are no protocols for the application of statistics to the comparison of tool marks. Our goal: application of objective, numerical, computational pattern comparison to tool marks. Caution: statistics is not a panacea!!!!

5 The Daubert Standard Daubert (1993): judges are the "gatekeepers" of scientific evidence and must determine if the science is reliable. Has empirical testing been done (falsifiability)? Has the science been subject to peer review? Are there known error rates? Is there general acceptance? The Federal Government and 26(-ish) states are "Daubert states".

6 Tool Mark Comparison Microscope

7 G. Petillo 4 mm

8 Known Match Comparisons 5/8” Consecutively manufactured chisels G. Petillo

9 Known NON Match Comparisons 5/8” Consecutively manufactured chisels G. Petillo

10 4 mm 600 um 5/8” Consecutively manufactured chisels

11 Confocal Microscope Marvin Minsky: first confocal microscope

12 Confocal Microscopes

13

14 In focus light Out of focus light Tool mark surface (profile of a striation pattern) Focal plane for objective Sample stage Objective lens Illumination aperture Source Confocal pinhole Detector

15 Rastering pattern of laser confocal Nipkow disk sweeps many pinholes

16 Programmable array Illumination/Detection Get any illumination/detection pattern

17 Sample stage Scan stage in “z”-direction Objective’s focal plane

18 Sample stage Scan stage in “z”-direction Detector Objective’s focal plane

40 Detector For each detector pixel: record the "axial response" as the stage is moved along the z-direction. The point on the surface corresponding to the pixel is in maximum focus here
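The per-pixel height recovery described above can be sketched in a few lines. This is a toy NumPy illustration (not any vendor's algorithm); the z-stack, the Gaussian axial response, and the 2x2 "surface" are all invented for the example.

```python
import numpy as np

# Toy sketch: recover a height map from a simulated confocal z-stack.
# stack[k, i, j] = detector intensity at stage position z_vals[k] for pixel (i, j).
z_vals = np.linspace(0.0, 10.0, 101)              # stage positions (um), 0.1 um steps
true_height = np.array([[2.0, 5.0], [7.5, 4.0]])  # invented 2x2 surface (um)

# Simulate an axial response that peaks where the surface is in focus.
stack = np.exp(-(z_vals[:, None, None] - true_height[None, :, :])**2 / 0.5)

# Height estimate: the z at which each pixel's axial response is maximal.
height_map = z_vals[np.argmax(stack, axis=0)]
```

Because the true heights fall on the 0.1 um stage grid, the argmax recovers them exactly; a real instrument interpolates the axial response peak to get sub-step accuracy.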

41 Increasing surface height All-in-Focus 2D Image Overlay confocal “z-stack”

42 3D confocal image of portion of chisel striation pattern

43 Confocal Microscope Trivia Use high-NA objectives for best results. Small working distances. Flanks up to ~70°. Cost ~$150K–$250K (FTI IBIS ~$1M). Get a vibration isolation table for your instrument, ~$7K. Set up in a (dry) basement if possible. Accuracy down to ±10 nm. Optical slice thickness = [formula shown on slide]

44 Confocal Microscope Trivia Some manufacturers: Olympus LEXT (laser); Zeiss CSM (white light), LSM (laser); Nanofocus µsurf series (white light); Sensofar/Leica Plu series/DCM (white light)

45 Focus Variation Microscope Scherer and Prantl. "Low-res" common focus variation mic ~ ±1 µm

46 In focus light Out of focus light Tool mark surface (profile of a striation pattern) Focal plane for objective Sample stage Objective lens Source Detector

47 Cutaway Alicona, GmbH

48 Sample stage Scan stage in “z”-direction Objective’s focal plane

49 Detector For Each Detector Pixel: Record the “axial response” as stage is moved along the z-direction Point on surface corresponding to pixel is in maximum focus here

50 Focus Determination: For the detector pixel of interest, compute the standard deviation (sd) of the grey values of the pixels in its neighborhood. A pixel in focus sits in a neighborhood with a large sd

51 Focus Variation Microscope Trivia Use high-NA objectives for best results. Can use external light. Large working distances. Flanks up to ~75°. Cost ~$200K–$250K; $80K models WON'T have the vertical resolution needed for forensic work. Get a vibration isolation table for your instrument, ~$7K. Set up in a (dry) basement if possible. Accuracy down to ±10 nm

52 Focus Variation Microscope Trivia Some manufacturers: Alicona IFM (can get optional rotational stage); Sensofar/Leica S neox/DCM

53 Interferometer Incoming wave split Path lengths equal Recombine in-phase Fixed mirror Movable mirror recombine

54 Interferometer Incoming wave split Path lengths NOT equal Recombine out-of-phase Fixed mirror Movable mirror recombine

55 Interferometric Height Measurement The basic idea: each surface point is a "fixed mirror". Move a reference mirror in the objective. Split beams recombine in and out of phase. Constructive interference occurs when surface points are in the focal plane. Infer the surface heights from where constructive interference occurs

56 Interferometric Microscope James Wyant: early interferometric microscope for surface metrology (Wyant); modern interferometric microscope for surface metrology

57 Tool mark surface (profile of a striation pattern) Focal plane for objective Sample stage Objective lens Camera (Detector) Source Microscope Configuration Piezo Reference mirror Beam-splitter Scan objective for Interference in “z”-direction Path lengths equal Point in focus

58 Tool mark surface (profile of a striation pattern) Sample stage Objective lens Camera (Detector) Source Microscope Configuration Piezo Reference mirror Beam-splitter Scan objective for interference in "z"-direction Path lengths un-equal Point out of focus Focal plane for objective

59 Interference Objectives Mirau objective ~ 10X – 100X Michelson objective ~ 2X – 10X Linnik objective + 100X

60 Detector For each detector pixel: record each pixel's interference pattern as the objective is scanned. The point on the surface corresponding to the pixel is in maximum focus here

61 Interference patterns: Sample stage Scan objective for interference in "z"-direction

62 Fringes Bruker NSD Fringe Pattern / Surface

63 Turn Fringes Into A Surface Intensity I(z) for each detector pixel: Fourier transform I(z) to get q(k). Compute surface heights from the phase of q(k) around the mean fringe wavenumber k0, h ∝ arg[q(k)]/k0 (deGroot)
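A one-pixel toy of this phase method can make the idea concrete. Everything numeric here (scan range, fringe frequency, the height of 0.2 um) is invented; note the single-frequency phase only recovers height modulo the fringe period, which is why practical methods (deGroot's frequency-domain analysis) use the phase slope across many wavenumbers.

```python
import numpy as np

# Toy sketch: for one pixel, Fourier transform the interference signal I(z)
# and read the surface height off the phase at the fringe frequency.
n = 1024
z = np.linspace(0.0, 10.0, n, endpoint=False)  # scan positions (um)
k0 = 2 * np.pi * 2.0                           # fringe wavenumber: 2 cycles/um
h_true = 0.2                                   # invented height, within one fringe period
I = 1.0 + np.cos(k0 * (z - h_true))            # simulated interference signal

q = np.fft.rfft(I)                             # q(k): Fourier transform of I(z)
peak = np.argmax(np.abs(q[1:])) + 1            # dominant fringe bin (skip DC)
phase = np.angle(q[peak])                      # arg[q(k)] at the fringe frequency
h_est = -phase / k0                            # height from the phase (mod fringe period)
```

With 2 cycles/um over a 10 um scan, the fringe lands exactly in FFT bin 20 and the recovered height matches the simulated 0.2 um.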

64 Interferometry Trivia Use high-NA objectives for best results. Small working distances. Flanks up to ~25°. Cost ~$200K–$250K. Get a vibration isolation table for your instrument, ~$7K. Set up in a (dry) basement if possible. Comes in two modes: VSI, accuracy ±10 nm; PSI, accuracy below 1 nm

65 Interferometry Trivia Some manufacturers: Bruker (acquired WYKO/Veeco); Taylor Hobson; Sensofar/Leica S neox/DCM

66 Surface Data Land Engraved Area: [table of surface heights in µm, values ranging ~37.74–38.06]. Points are "double precision": 64 bits/point. BIG FILES!

67 Surface Data Land Engraved Area: [table of detector grey levels, 16-bit values ranging ~16556–16695]. Points are detector grey levels: 16 bits/point. Smaller files; convert to µm in RAM

68 Surface Data Trivia Different systems use different storage formats. Be aware if writing custom apps: ASK COMPANY FOR FILE FORMAT! Alicona saves surface data as doubles: HUGE FILES! Zeiss saves surface data as 16-bit grey levels with a conversion factor. Others?? 24- and 32-bit detectors now?? Need to standardize the file format! X3D (Zhang, Brubaker); Digital Surf .sur (Petraco)

69 Think of a toolmark surface as being made up of a series of waves Surface Filtering

70 Examine different scales by “blocking out” (filtering) some of the sinusoids Surface Filtering “Low Pass” filter blocks high frequencies and passes low frequencies (long wavelengths)

71 Examine different scales by “blocking out” (filtering) some of the sinusoids Surface Filtering “High Pass” filter blocks low frequencies and passes high frequencies (short wavelengths)

72 Surface Filtering Trivia Wavelength "cutoffs": a "high-pass" filter and a "low-pass" filter cut wavelength ranges. Short wavelengths passed: roughness. Medium wavelengths passed: waviness. Long wavelengths passed: form

73 Band-pass filter: Select narrow wavelength bands to keep. High-pass/Low-pass combinations (Filter banks) Wavelets are great at doing this Surface Filtering
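The low-pass/high-pass split above can be sketched with a hard FFT cutoff. This is a toy stand-in for the smooth Gaussian filters used in real surface metrology (e.g. ISO 16610); the profile, its two sinusoids, and the cutoff of 20 cycles are invented.

```python
import numpy as np

# Toy sketch: separate a profile into long-wavelength "waviness" and
# short-wavelength "roughness" by zeroing FFT coefficients.
n = 1024
x = np.linspace(0, 1, n, endpoint=False)
profile = np.sin(2*np.pi*3*x) + 0.2*np.sin(2*np.pi*80*x)  # slow wave + fine detail

spec = np.fft.rfft(profile)
freqs = np.fft.rfftfreq(n, d=1.0/n)    # cycles over the trace length

cutoff = 20                            # invented cutoff frequency
low = spec.copy();  low[freqs > cutoff] = 0    # low-pass -> waviness/form
high = spec.copy(); high[freqs <= cutoff] = 0  # high-pass -> roughness

waviness = np.fft.irfft(low, n)
roughness = np.fft.irfft(high, n)
```

The two filtered profiles sum back to the original, which is exactly the "filter bank" idea: band-pass outputs partition the wavelength content of the surface.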

74 Statistics Weapon Mark Association – What measurement techniques can be used to obtain data for toolmarks? – What statistical methods should be used? How do we measure a degree of confidence for an association, i.e. a “match”? What are the identification error rates for different methods of identification?

75 Why R? R is not a black box: code available for review; totally transparent! R is maintained by a professional group of statisticians and computational scientists. From very simple to state-of-the-art procedures available. Very good graphics for exhibits and papers. R is extensible (it is a full scripting language). Coding/syntax similar to MATLAB. Easy to link to C/C++ routines

76 Why R? Where to get information on R: R: http://www.r-project.org/ (just need the base). RStudio: http://rstudio.org/ (a great IDE for R; works on all platforms; sometimes slows down performance…). CRAN: http://cran.r-project.org/ (library repository for R; click on Search on the left of the website to search for packages/info on packages)

77 Finding our way around R/RStudio

78 Common Computational Practice Gauge similarity between tool marks with one number. A similarity "metric" is a function which measures "sameness". Only requirement: s(A,B) = s(B,A). There are an INFINITE number of ways to measure similarity!! Often the max CCF is used.

79 Cross-correlation
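The max-CCF similarity score can be sketched directly: slide one profile past the other, compute the normalized cross-correlation at each lag, and keep the maximum. The `max_ccf` helper and the sine-wave "profiles" below are invented for illustration; real comparisons run over thousands of lag units, as the slides note.

```python
import numpy as np

# Toy sketch of the max-CCF score. Identical profiles score 1.0 at zero lag;
# a shifted copy of the same mark still scores ~1.0 at the right lag.
def max_ccf(a, b, max_lag):
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:len(a) + lag], b[-lag:]
        x = x - x.mean()
        y = y - y.mean()
        denom = np.sqrt((x * x).sum() * (y * y).sum())
        if denom > 0:
            best = max(best, float((x * y).sum() / denom))
    return best

t = np.linspace(0, 4 * np.pi, 400)
p = np.sin(t)            # invented "profile"
q = np.roll(p, 25)       # the same mark, shifted by 25 samples
```

Note the score satisfies the one requirement on the slide, s(A,B) = s(B,A), since the lag search is symmetric.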

80

81 KNM can sometimes have high max-ccf… max-ccf: 0.751

82 Cross-Correlation Glock primer shear: each profile ~2+ mm; lag over 2000 units (~0.8 mm). Max CCF distributions: scores from "known non-matches" and scores from "known matches". We thought: Ehhhhhh…….

83 Multivariate Feature Vectors Random variables: all measurements have an associated "randomness" component. Randomness: patternless, unstructured, typical, total ignorance (Chaitin, Claude). For an experiment/observation, put many measurements together into a list. A collection of random variables in a list is called a random vector. Also called: observation vectors, feature vectors

84 Potential feature vectors for surface metrology Entire surfaces *Surface profiles Surface/profile parameters Surface/profile Fourier transform or wavelet coefficients Translation/rotation/scale invariant surface (image) moments Multivariate Feature Vectors

85 Mean total profile: Mean waviness profile: Waviness profile Barcode representation

86 Toolmarks (screwdriver striation profiles) form database Biasotti-Murdock Dictionary Consecutive Matching Striae (CMS)-Space

87 Some Important Terms Latent variable: a weighted combination of experimental variables into a new "synthetic" variable. Also called: scores, components, or factors. The weights are called loadings. Most latent variables we will study are linear combinations of the experimental variables and loadings: the dot product between an observation vector and a loading vector gives a score

88 Principal Component Analysis PCA is a rotation of the reference frame. Gives new PC directions; relative importance given by PC variance

89 Principal Component Analysis Technically, PCA is an eigenvalue problem: diagonalize some version of S (the covariance matrix) or R to get the PCs, yielding a matrix of PC "loadings" and a matrix of PC variances. For a data frame of p variables, there are p possible PCs. PC variance ≅ PC importance: dimension reduction. Scores are the data projected into the space of the PCs retained. Scores plots, either 2D or 3D
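The eigenvalue view of PCA can be sketched in a few lines of NumPy. The toy data matrix below is invented, with one deliberately redundant column to mimic the redundancy of profile data; diagonalizing the covariance matrix gives the loadings and PC variances, and projecting gives the scores.

```python
import numpy as np

# Toy sketch: PCA as an eigenvalue problem on the covariance matrix S.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
X[:, 1] = 3.0 * X[:, 0] + 0.01 * rng.normal(size=100)  # redundant variable

Xc = X - X.mean(axis=0)                  # center the data
S = np.cov(Xc, rowvar=False)             # covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)     # diagonalize S
order = np.argsort(eigvals)[::-1]        # sort PCs by variance, descending
pc_var = eigvals[order]                  # PC variances (importance)
loadings = eigvecs[:, order]             # matrix of PC loadings

scores = Xc @ loadings                   # data projected into PC space
explained = pc_var / pc_var.sum()        # fraction of variance per PC
```

Because column 1 is nearly a multiple of column 0, the first PC soaks up most of the variance: exactly the redundancy-removal role PCA plays for highly redundant profile data.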

90 Setup for Multivariate Analysis Need a data matrix to do machine learning. Represent each profile as a vector of values {-4.62, -4.60, -4.58,...}; each profile or surface is a row in the data matrix. Typical length is ~4000 points/profile; 2D surfaces are far longer. This is a HIGHLY REDUNDANT representation of surface data. PCA can remove much of the redundancy and make discrimination computations far more tractable

91 How many PCs should we use to represent the data?? No unique answer FIRST we need an algorithm to I.D. a toolmark to a tool ~45% variance retained 3D PCA of 1740 real and simulated mean profiles of striation patterns from 58 screwdrivers :

92 Support Vector Machines Support Vector Machines (SVM) determine efficient association rules in the absence of specific knowledge of probability densities. SVM decision boundary

93 Support Vector Machines SVM computed as an optimization over "Lagrange multipliers": a quadratic optimization problem. Convex => SVMs unique, unlike NNs. k(xi, xj): kernel function; "warps" data space and helps to find separations. Many forms depending on application: linear, rbf usually. C: penalty parameter; controls the margin of error between groups that are not perfectly separable: 0.1 to 10 usually

94 Support Vector Machines The SVM decision rule is given as the equation for a plane in "kernel space". Multi-group classification handled by "voting"
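A minimal sketch of an SVM decision rule: instead of the quadratic program over Lagrange multipliers described above, this toy trains a linear-kernel SVM by sub-gradient descent on the hinge loss, and classifies with the plane sign(w·x + b). The data, learning rate, and `train_linear_svm` helper are all invented for illustration.

```python
import numpy as np

# Toy linear SVM via hinge-loss sub-gradient descent (Pegasos-style),
# standing in for the usual quadratic-programming solution.
def train_linear_svm(X, y, C=1.0, epochs=200, lr=0.01):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):              # labels yi in {-1, +1}
            if yi * (xi @ w + b) < 1:         # inside margin: hinge gradient
                w = w - lr * (w / C - yi * xi)
                b = b + lr * yi
            else:                             # outside margin: only shrink w
                w = w - lr * (w / C)
    return w, b

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 0.5, (30, 2)),  # invented two-class toy data
               rng.normal(2, 0.5, (30, 2))])
y = np.array([-1] * 30 + [1] * 30)

w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)                     # the "plane" decision rule
```

For more than two tools, the one-vs-one "voting" scheme on the slide just trains one such rule per pair of classes and lets the pairwise winners vote.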

95 How many Principal Components should we use? PCA-SVM With 7 PCs, expect ~3% error rate With 13 PCs, expect ~1% error rate

96 Canonical Variate Analysis This supervised technique is called Linear Discriminant Analysis (LDA) in R; also called Fisher linear discriminant analysis. CVA is closely related to linear Bayes-Gaussian discriminant analysis. Works on a principle similar to PCA: look for "interesting" directions in data space. CVA: find directions in space which best separate groups. Technically: find directions which maximize the ratio of between-group to within-group variation

97 Canonical Variate Analysis Project on PC1: not necessarily good group separation! Project on CV1: good group separation! Note: there are #groups − 1 or p CVs, whichever is smaller

98 Canonical Variate Analysis Use the between-group to within-group covariance matrix, W⁻¹B, to find the directions of best group separation (CVA loadings, Acv). CVA can be used for dimension reduction. Caution! These "dimensions" are not at right angles (i.e. not orthogonal), so CVA plots can be distorted from reality; always check loading angles! Caution! CVA will not work well with very correlated data

99 Canonical Variate Analysis A distance metric is used in CVA to assign the group I.D. of an unknown data point. If the data is Gaussian and the group covariance structures are the same, then CVA classification is the same as Bayes-Gaussian classification.
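The W⁻¹B eigenproblem behind CVA can be sketched directly. The three toy groups below are invented; W is the pooled within-group scatter and B the between-group scatter, and the eigenvectors of W⁻¹B are the CVA loadings Acv.

```python
import numpy as np

# Toy sketch of CVA: eigendecompose W^-1 B to find directions that maximize
# between-group relative to within-group variation.
rng = np.random.default_rng(4)
groups = [rng.normal(m, 0.3, size=(50, 3))
          for m in ([0, 0, 0], [2, 0, 0], [0, 2, 0])]   # invented 3 groups
X = np.vstack(groups)
labels = np.repeat([0, 1, 2], 50)

grand = X.mean(axis=0)
W = np.zeros((3, 3))      # within-group scatter
B = np.zeros((3, 3))      # between-group scatter
for g in range(3):
    Xg = X[labels == g]
    mg = Xg.mean(axis=0)
    W += (Xg - mg).T @ (Xg - mg)
    d = (mg - grand)[:, None]
    B += len(Xg) * (d @ d.T)

eigvals, A_cv = np.linalg.eig(np.linalg.inv(W) @ B)     # CVA loadings
order = np.argsort(eigvals.real)[::-1]
cv_scores = (X - grand) @ A_cv.real[:, order]           # data in CV space
```

With 3 groups in 3 dimensions, only #groups − 1 = 2 eigenvalues are non-zero, matching the slide's note on the number of useful CVs; and since W⁻¹B is not symmetric, the loadings need not be orthogonal, which is the distortion caution above.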

100 2D/3D-CVA scores plots of RB screwdrivers 2D CVA3D CVA Canonical Variate Analysis

101 2D scores plots of RB screwdrivers: PCA vs. CVA 2D PCA of striation pattern mean profiles2D CVA of striation pattern mean profiles

102 Error Rate Estimation Discriminant functions are trained on a finite set of data. How much fitting should we do? What should the model's dimension be? The model must be used to identify a piece of evidence (data) it was not trained with. Accurate estimates of the error rates of a decision model are critical in forensic science applications. The simplest is the apparent error rate: the error rate on the training set. A lousy estimate, but better than nothing

103 Error Rate Estimation Cross-validation: hold out chunks of the data set for testing. Known since the 1940s; most common is hold-one-out. Bootstrap: random selection of observed data (with replacement). Known since the 1970s; can yield confidence intervals around the error rate estimate. The best: small training set, BIG test set
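The bootstrap interval idea can be sketched with a simple percentile bootstrap. The error indicators below are invented (3 errors out of 720 comparisons, loosely echoing the sample size on the next slide); the published "refined bootstrap" estimate involves more than this toy.

```python
import numpy as np

# Toy sketch: percentile-bootstrap confidence interval for an error rate.
rng = np.random.default_rng(5)
errors = np.zeros(720, dtype=int)   # invented: 720 I.D. attempts...
errors[:3] = 1                      # ...3 of which were misidentifications

# Resample the per-item outcomes with replacement; each resample gives
# one plausible error rate.
boot_rates = np.array([
    rng.choice(errors, size=errors.size, replace=True).mean()
    for _ in range(2000)
])

point = errors.mean()                             # apparent error rate
lo, hi = np.percentile(boot_rates, [2.5, 97.5])   # 95% percentile interval
```

With so few errors the lower bound sits at 0%, which is why bootstrap intervals for small error counts tend to look like [0%, small], as on the next slide.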

104 18D PCA-SVM Primer Shear I.D. Model, 2000 Bootstrap Resamples Refined bootstrapped I.D. error rate for primer shear striation patterns = 0.35%; 95% C.I. = [0%, 0.83%] (sample size = 720 real and simulated profiles)

105 How good of a "match" is it? Conformal Prediction (Vovk) Data should be IID, but that's it. Cumulative # of errors vs. sequence of unknown observation vectors: 80% confidence, 20% error, slope = 0.2; 95% confidence, 5% error, slope = 0.05; 99% confidence, 1% error, slope = 0.01. Can give a judge or jury an easy-to-understand measure of the reliability of a classification result. This is an orthodox "frequentist" approach with roots in Algorithmic Information Theory. Confidence on a scale of 0%–100%. Testable claim: the long-run I.D. error rate should be the chosen significance level

106 How Conformal Prediction works for us (Vovk) Given a "bag" of observations with known identities and one observation of unknown identity: estimate how "wrong" labelings are for each observation with a non-conformity score ("wrong-iness"); for us, one-vs-one SVMs. Looking at the "wrong-iness" of the known observations in the bag: does labeling-i for the unknown have an unusual amount of "wrong-iness"? If not: p(possible-ID_i) ≥ chosen level of significance, so put ID_i in the (1 − α)×100% confidence interval

107 Conformal Prediction 14D PCA-SVM decision model for screwdriver striation patterns. Theoretical (long-run) error rate: 5%; empirical error rate: 5.3%. For 95%-CPT (PCA-SVM), confidence intervals will not contain the correct I.D. 5% of the time in the long run. Straightforward validation/explanation picture for court

108 Conformal Prediction Drawbacks CPT is an interval method: it can (and does) produce multi-label I.D. intervals. A "correct" I.D. is an interval with all labels; this doesn't happen often in practice… Empty intervals count as "errors". Well…, what if the "correct" answer isn't in the database? An "open-set" problem, which Champod, Gantz and Saunders have pointed out. Must be run in "on-line" mode for LRG; we noticed this in practice after 500+ I.D. attempts run in "off-line" mode

109 How good of a "match" is it? Efron Empirical Bayes' An I.D. is output for each questioned toolmark: this is a computer "match". What's the probability it is truly not a "match"? There is a similar problem in genomics for detecting disease from microarray data; they use data and Bayes' theorem to get an estimate. No disease in genomics = not a true "match" for toolmarks

110 Empirical Bayes' We use Efron's machinery for the "empirical Bayes' two-groups model" (Efron). Surprisingly simple! Use binned data to do a Poisson regression. Some notation: S−, truly no association, the null hypothesis; S+, truly an association, the non-null hypothesis; z, a score derived from a machine learning task to I.D. an unknown pattern with a group. z is a Gaussian random variate under the null

111 Empirical Bayes' From Bayes' Theorem we can get (Efron) the estimated probability of not a true "match" given the algorithm's output z-score associated with its "match". Names: posterior error probability (PEP) (Kall); local false discovery rate (lfdr) (Efron). Suggested interpretation for casework: the estimated "believability" of a machine-made association. We agree with Gelman and Shalizi (Gelman): "…posterior model probabilities …[are]… useful as tools for prediction and for understanding structure in data, as long as these probabilities are not taken too seriously."

112 Empirical Bayes' Use an SVM to get the KM and KNM "Platt-score" distributions (Platt, e1071) on a "Training" set. Bootstrap procedure to get an estimate of the KNM distribution of "Platt-scores"; use this to get p-values/z-values on a "Validation" set. Inspired by Storey and Tibshirani's null estimation method (Storey). From the fit to the z-score histogram by Efron's method, get the "mixture" density. What's the point?? The z-density given KNM should be Gaussian, and we get an estimate of the prior for KNM. We can test the fits!
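A toy version of the two-groups model can show how the lfdr falls out of binned z-scores. Everything here is simulated (a standard-Gaussian null mixed with a shifted non-null group, with the prior pi0 taken as known); Efron's locfdr instead fits the null and the mixture density by Poisson regression on the bin counts.

```python
import numpy as np

# Toy two-groups sketch: lfdr(z) = pi0 * f0(z) / f(z), with f estimated
# from a histogram of z-scores and f0 the theoretical null density.
rng = np.random.default_rng(6)
pi0 = 0.9                                        # assumed prior Pr(S-)
z = np.concatenate([rng.normal(0, 1, 9000),      # truly no association (null)
                    rng.normal(4, 1, 1000)])     # true associations (non-null)

edges = np.linspace(-5, 9, 141)
counts, _ = np.histogram(z, bins=edges)
centers = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

f = counts / (z.size * width)                    # binned mixture density f(z)
f0 = np.exp(-centers**2 / 2) / np.sqrt(2*np.pi)  # theoretical null N(0,1)

with np.errstate(divide="ignore", invalid="ignore"):
    lfdr = np.clip(pi0 * f0 / f, 0, 1)           # local false discovery rate
```

Near z = 0 the lfdr is close to 1 (almost certainly not a true match), while near z = 4 it is small: the "believability" reading of the previous slide.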

113 Posterior Association Probability: Believability Curve 12D PCA-SVM locfdr fit for Glock primer shear patterns, ±2 standard errors

114 Bayesian over-dispersed Poisson with intercept on test set; Bayesian Poisson with intercept on test set; Poisson (Efron) on test set; Bayesian Poisson on test set

115 Bayes Factors/Likelihood Ratios In the "Forensic Bayesian Framework", the likelihood ratio is the measure of the weight of evidence. LRs are called Bayes factors by most statisticians. LRs give the measure of support the "evidence" lends to the "prosecution hypothesis" vs. the "defense hypothesis". From Bayes' Theorem:

116 Bayes Factors/Likelihood Ratios Once the "fits" for the empirical Bayes method are obtained, it is easy to compute the corresponding likelihood ratios. Using the identity Pr(S+|z) = 1 − lfdr(z), the likelihood ratio can be computed as LR(z) = [(1 − lfdr(z))/lfdr(z)] × [Pr(S−)/Pr(S+)]
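The posterior-to-LR conversion above is one line of arithmetic: posterior odds divided by prior odds. The function name and the example numbers (lfdr = 0.01, Pr(S−) = 0.9) are invented for illustration.

```python
# Toy sketch: convert an empirical-Bayes posterior (the lfdr) and a prior
# into a likelihood ratio, LR = posterior odds / prior odds of association.
def likelihood_ratio(lfdr, prior_null):
    post_odds = (1.0 - lfdr) / lfdr                 # Pr(S+|z) / Pr(S-|z)
    prior_odds = prior_null / (1.0 - prior_null)    # Pr(S-) / Pr(S+)
    return post_odds * prior_odds

# Example: lfdr = 0.01 (1% chance it's not a true match) with Pr(S-) = 0.9
lr = likelihood_ratio(0.01, 0.9)   # (0.99/0.01) * (0.9/0.1) = 891
```

Note how sensitive the LR is to the prior Pr(S−), which is exactly the interpretation worry raised two slides later.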

117 Bayes Factors/Likelihood Ratios Using the fit posteriors and priors, we can obtain the likelihood ratios (Tippett, Ramos). Known match LR values; known non-match LR values

118 Empirical Bayes': Some Things That Bother Me Need a lot of z-scores: big data sets in forensic science largely don't exist. z-scores should be fairly independent: especially necessary for interval estimates around the lfdr (Efron). Requires "binning" in an arbitrary number of intervals. Also suffers from the "open-set" problem. Interpretation of the prior probability for this application: should Pr(S−) be 1, or very close to it? How close?

119 How to Carry Out a “Successful” Research Project The Synergy Between Practitioners and Academia

120 Collaboration Practitioners: Think about what questions you want to be able to answer with data BEFORE experimentation. Write down proposed questions/design. Be aware that the questions you want answers to MAY NOT have answers: what can you answer?? Be aware that a typical research project takes 1–2 years to complete

121 Collaboration Practitioners: Research projects are NOT just for interns! Interns typically need tremendous supervision for scientific/applied statistical research Take a college course on statistics/experimental design Rate-my-professor is your friend! Visit local university/company websites to look for the outside expertise you may need. Visit the department, go to some seminars

122 Collaboration Academics/Research consultants: Be aware that practitioners cannot just publish whenever and whatever they want: long internal review processes! COMMUNICATION!!!!! Listen carefully to the needs/questions of collaborating practitioners. Negotiate the project design: what kind of results can be achieved within a reasonable amount of time? Hold regular face-to-face meetings if possible

123 Collaboration Academics/Research consultants: Applied research is not just for undergraduates/high-school interns! Visit the crime lab!!!!! Watch the practitioners do their job; learn the tools they use day to day. Microscopy!!!!! Use their accumulated experience to help guide your design/desired outcomes. What do they focus on??

124 Fire Debris Analysis Casework Liquid gasoline samples recovered during investigation: unknown history, subjected to various real-world conditions. If an individual sample can be discriminated from the larger group, this can be of forensic interest. Gas chromatography commonly used to ID gasoline. Peak comparisons of chromatograms are difficult and time consuming. Does "eye-balling" satisfy Daubert, or even Frye.....????

125 2D PCA 97.3% variance retained Avg. LDA HOO correct classification rate: 83%

126 2D CVA Avg. LDA HOO correct classification rate: 92%

127 Accidental Patterns on Footwear Shoe prints contain marks and patterns due to various circumstances that can be used to distinguish one shoe print from another. How reliable are the accidental patterns for identifying particular shoes?

128 3D PCA 59.7% of variance Facial Recognition Approach to Accidental Pattern Identification

129 Tool marks Like shoes, tools can leave marks which can be used in identification Class characteristics Subclass characteristics Individual characteristics

130 Standard Striation Patterns Made with ¼’’ Slotted Screwdriver Measure lines and grooves with ImageJ Translate ImageJ data to a feature vector that can be processed

131 A, 2, #2 Bromberg, Lucky C, 8, #4 Bromberg, Lucky LEA Striations

132 Questioned Documents: Photocopier Identification Mordente, Gestring, Tytell Photocopy of a blank sheet of paper

133 Dust: Where does it come from? Any matter or substance, both natural and synthetic, reduced into minute bits, pieces, smears, and residues, encountered as trace aggregates. Our environments! Evidence! N. Petraco

134 Where can you find it? Everywhere: house, work, outdoors, vehicle. N. Petraco

135 Analyze Results 3D PCA-Clustering can show potential for discrimination

136

137 Bayes Net for Dust in Authentication Case

138 References
Bolton-King, Evans, Smith, Painter, Allsop, Cranton. AFTE J 42(1), 23, 2010.
Artigas. In: Optical Measurement of Surface Topography. Leach ed. Springer, 2011.
Helmli. In: Optical Measurement of Surface Topography. Leach ed. Springer, 2011.
deGroot. In: Optical Measurement of Surface Topography. Leach ed. Springer, 2011.
Efron, B. (2010). Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction. New York: Cambridge University Press.
Gambino C., McLaughlin P., Kuo L., Kammerman F., Shenkin S., Diaczuk P., Petraco N., Hamby J. and Petraco N.D.K. "Forensic Surface Metrology: Tool Mark Evidence", Scanning 27(1-3), 1-7 (2011).
JAGS: "A program for analysis of Bayesian hierarchical models using Markov Chain Monte Carlo simulation", Version 3.3.0. http://mcmc-jags.sourceforge.net/
Kall L., Storey J. D., MacCross M. J. and Noble W. S. (2008). Posterior error probabilities and false discovery rates: two sides of the same coin. J Proteome Research, 7(1), 40-44.

139 References
locfdr R package (2011). "Computation of local false discovery rates", Version 1.1-7. http://cran.r-project.org/web/packages/locfdr/index.html
Moran B. "A Report on the AFTE Theory of Identification and Range of Conclusions for Tool Mark Identification and Resulting Approaches To Casework," AFTE Journal, Vol. 34, No. 2, 2002, pp. 227-35.
Petraco N. D. K., Chan H., De Forest P. R., Diaczuk P., Gambino C., Hamby J., Kammerman F., Kammrath B. W., Kubic T. A., Kuo L., McLaughlin P., Petillo G., Petraco N., Phelps E., Pizzola P. A., Purcell D. K. and Shenkin P. "Final Report: Application of Machine Learning to Toolmarks: Statistically Based Methods for Impression Pattern Comparisons". National Institute of Justice, Grant Report: 2009-DN-BX-K041; 2012.
Petraco N. D. K., Kuo L., Chan H., Phelps E., Gambino C., McLaughlin P., Kammerman F., Diaczuk P., Shenkin P., Petraco N. and Hamby J. "Estimates of Striation Pattern Identification Error Rates by Algorithmic Methods", AFTE J., In Press, 2013.
Petraco N. D. K., Zoon P., Baiker M., Kammerman F., Gambino C. "Stochastic and Deterministic Striation Pattern Simulation". In preparation, 2013.
Platt J. C. "Probabilities for SV Machines". In: Advances in Large Margin Classifiers. Eds: Smola A. J., Bartlett P., Scholkopf B., and Schuurmans D. MIT Press, 2000.
Plummer M. "JAGS: A Program for Analysis of Bayesian Graphical Models Using Gibbs Sampling", Proceedings of the 3rd International Workshop on Distributed Statistical Computing (DSC 2003), March 20-22, Vienna, Austria.
Stan Development Team (2013). "Stan: A C++ Library for Probability and Sampling", Version 1.3. http://mc-stan.org/
Storey J. D. and Tibshirani R. "Statistical significance for genome wide studies". PNAS 2003;100(16):9440-9445.
Vovk V., Gammerman A., and Shafer G. (2005). Algorithmic Learning in a Random World. 1st ed. Springer, New York.

140 References
Tippett CF, Emerson VJ, Fereday MJ, Lawton F, Richardson A, Jones LT, Lampert SM. "The Evidential Value of the Comparison of Paint Flakes from Sources other than Vehicles", J Forensic Sci Soc 1968;8(2-3):61-65.
Ramos D, Gonzalez-Rodriguez J, Zadora G, Aitken C. "Information-Theoretical Assessment of the Performance of Likelihood Ratio Computation Methods", J Forensic Sci 2013;58(6):1503-1518.

141 Acknowledgements Professor Chris Saunders (SDSU) Professor Christoph Champod (Lausanne) Alan Zheng (NIST) Research Team: Dr. Martin Baiker Ms. Helen Chan Ms. Julie Cohen Mr. Peter Diaczuk Dr. Peter De Forest Mr. Antonio Del Valle Ms. Carol Gambino Dr. James Hamby Ms. Alison Hartwell, Esq. Dr. Thomas Kubic, Esq. Ms. Loretta Kuo Ms. Frani Kammerman Dr. Brooke Kammrath Mr. Chris Lucky Off. Patrick McLaughlin Dr. Linton Mohammed Mr. John Murdock Mr. Nicholas Petraco Dr. Dale Purcel Ms. Stephanie Pollut Dr. Peter Pizzola Dr. Graham Rankin Dr. Jacqueline Speir Dr. Peter Shenkin Mr. Chris Singh Mr. Peter Tytell Mr. Todd Weller Ms. Elizabeth Willie Dr. Peter Zoon

142 Website: Data, codes, reprints and preprints: toolmarkstatistics.no-ip.org/ npetraco@gmail.com

