Shape-based Quantification of 3D Face Data for Craniofacial Research
Katarzyna Wilamowska, General Exam, Department of Computer Science & Engineering, University of Washington, 2008
Hello, and thank you for coming. Today I'm going to tell you about some of the preliminary work I've done in ... and propose some future directions for my work.
22q11.2 Deletion Syndrome
Motivation
Help detect disease more accurately; describe useful features.
There is a disease that is hard to classify (unless you are an expert). The final goal is to connect visible features to genetic code (phenotype to genotype). We have to start somewhere: help detect the disease more accurately, and in the process find and describe useful features, leading toward the overall goal.
22q11.2 Deletion Syndrome (22q11.2DS)
Also known as velo-cardio-facial syndrome (VCFS), it affects approximately 1 in 4000 individuals in the US and is one of the most common genetic deletions today. Early detection is important because the syndrome involves cardiac anomalies, mild to moderate immune deficiencies, and learning disabilities, all of which impact development and survival. Physician diagnosis is based on visual and non-visual cues, confirmed with a genetic test: FISH (fluorescence in situ hybridization).
22q11.2 Deletion Syndrome Has Subtle Facial Features
One of the things we know about the syndrome is that its facial features are subtle, with variable penetrance of any specific feature. A small mouth, for example: some affected individuals have a very small mouth, while in others it is much less noticeable.
In fact, there are over 180 features tied to 22q11.2DS.
No one individual is affected by all features, and no feature is present in all affected individuals.
Experts Looking at Photos
Becker et al. 2004: 14 affected, 10 control; one photo at infancy and one beyond 2 years of age. Goal: improve the accuracy of genetic-testing referrals.

Profession          #    Sensitivity  Specificity  Accuracy
Geneticist          9    0.72         0.51         0.63
Speech Pathologist  13   0.52         -            0.64
Surgeon             10   0.64         0.50         0.58

Becker and colleagues published a study of how accurately likely care providers identify affected individuals from facial photographs, ending with a call to arms: identify features that will yield optimal referrals. Sensitivity (recall) is the proportion of actual affected correctly labeled as affected; specificity is the proportion of actual control correctly labeled as control.
Research Objective
Develop a successful methodology to (1) classify individuals affected by 22q11.2 deletion syndrome and (2) quantify the degree of dysmorphology in facial features, with minimal human involvement as a design consideration. Objective 2 can be used for phenotype-genotype studies, leading to an understanding of the etiology of cranial malformations and the pathogenesis of VCFS, and informing on the genetic control necessary for normal craniofacial development. (Dysmorphology: malformation; strictly, the study of malformations.)
Related Literature: Medical Craniofacial Assessment
Calipers and manual landmarks; CT, MRI, ultrasound, and stereoscopic imaging. These approaches require time-consuming human involvement. Because the original caliper measurements are intrusive, the field has moved toward imaging systems. Note that normative data is mostly Caucasian, and the features we are looking for are smaller than the variance between different ethnicities.
Related Literature: Computer Vision Craniofacial Analysis
1D waveforms (Fourier, cosine, wavelets); 2D images and landmarks (eigenfaces, approaches similar to the 1D ones, landmark models and placement); 3D morphable models (Hutton), new representations, and landmarks; and hybrid 2D/3D systems (geodesic faces, which make a 2D texture from 3D data, or systems that combine photo, range-image, and infrared modalities). The focus of this literature is biometric authentication and recognition. That is interesting work whose techniques can be leveraged, but it does not directly apply to the medical setting, so infrastructure had to be built from scratch: biometrics tries to obtain exact measurements, while we want to categorize degrees of dysmorphology. These are two different tasks, and the devil is in the details.
Recognition of Dysmorphic Faces: Feature Extraction Using Wavelets
Boehringer et al. 2006:
1. Input facial photographs (256x256, grayscale, trimmed)
2. Place a predefined pattern of points on each face
3. Describe each point by Gabor wavelets of different sizes and spatial orientations, totaling 40 coefficients per point
4. Run principal component analysis on those features
5. Classify
Recognition of Dysmorphic Faces: Results
Classifiers: LDA, SVM, kNN, JV. Simultaneous classification: 76% with manual point labeling, dropping to 52% with automatic face localization. Pairwise classification: 89%-100%, comparing different diseases against each other (no control group). LDA did best.
Dense Surface Models
Hutton 2004; Hammond et al. 2005; Hammond 2007:
1. Manually place 9-20 landmarks
2. Compute mean landmarks using Procrustes (least-squares) alignment
3. Warp each face to the mean landmarks using thin-plate splines (TPS)
4. Create a dense correspondence to a base mesh: for each vertex on the base mesh, find the closest point on every other mesh; then transfer the connectivity of the base mesh to all other meshes, giving a complete correspondence between all the meshes
5. Trim the surfaces
6. Unwarp the meshes (TPS)
7. Compute the average shape for each group (status, age, sex)
8. Run principal component analysis
9. Classify
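The mean-landmark step in this pipeline is ordinary (generalized) Procrustes alignment. A minimal sketch, not the authors' implementation; the function names are mine, and scaling is omitted:

```python
import numpy as np

def procrustes_align(X, Y):
    """Rigidly align landmark set Y to X by least squares (rotation + translation)."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    U, _, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt
    if np.linalg.det(R) < 0:   # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    return Yc @ R + X.mean(axis=0)

def mean_landmarks(shapes, iters=5):
    """Generalized Procrustes: iteratively align all shapes to their running mean."""
    mean = shapes[0]
    for _ in range(iters):
        aligned = [procrustes_align(mean, s) for s in shapes]
        mean = np.mean(aligned, axis=0)
    return mean
```

The dense-correspondence and TPS-warping steps sit on top of these mean landmarks.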
Dense Surface Models: 22q11.2DS Results
Data: 60 VCFS, 130 control. Classifiers: SVM, closest mean, logistic regression, neural networks, decision trees. Best 22q11.2 results: sensitivity 0.83, specificity 0.92.
Data: 115 VCFS, 185 control. Classifier: linear discriminant analysis. Accuracy: face 0.94, eyes 0.83, nose 0.87, mouth 0.85.
These results are very good, but they rely on hand labeling of the data, which I do not do: without the need for hand-labeled data, my method is more broadly applicable. (I calculated an accuracy of 0.89 for their first test.)
Comparison to Previous Work
Data representation: Boehringer et al. use 2D photographs; Hutton et al. use 3D meshes; my work uses 2.5D depth images and curved lines.
Control data: no for Boehringer et al.; yes for Hutton et al. and my work.
Data labeling: manual in the prior work; none [automatic] in my work.
Clean up (my work): empirically determined threshold.
Final goal: Boehringer et al. separate diseases; Hutton et al. measure distance from average; I pursue facial features.
My work is more broadly applicable, since it does not rely on landmarking of faces; and while Hutton's dense correspondence focuses his work on difference-from-average results, I am pursuing interesting facial features.
Data Preprocessing
Data was collected using a 3dMD stereoscopic imaging system: 12 cameras on 4 stands, with a 1.5-millisecond capture speed, resulting in a raw 3D image. Cleanup is needed, and the heads must be aligned to the same orientation.
Data Preprocessing: Pose Alignment, 1st Attempt
Goal: align each head to the same orientation. Solution: hand align with Iterative Closest Point (ICP) assistance. I started with a hand-align approach assisted by ICP, but ICP would often misalign heads, and the approach became impractical beyond the first dozen heads. (Figure panels: hand aligned; after ICP.)
Data Preprocessing: Pose Alignment, 2nd Attempt
Goal: align each head to the same orientation. Solution: align using the 1st principal component from PCA. This is the standard approach, but it fell short because the data captured for each head is too diverse.
Data Preprocessing: Pose Alignment, Final Solution
Goal: align each head to the same orientation. Solution: automatically calculate the three rotation angles (yaw, roll, pitch) necessary to achieve the goal. Any sort of analysis requires aligning the heads, and due to the fact that 22q11.2DS has such a subtle presentation, we did not want to use morphing that would distort the shape of the face. We looked at rigid alignment and PCA; neither worked well, which motivated this approach.
Data Preprocessing: Yaw and Roll Alignment
Use the symmetry of the face: for each candidate yaw or roll rotation, rasterize the head (take the 3D mesh and interpolate it to the pixel values of a 2D depth image), mirror the image, and choose the rotation angle that minimizes the absolute difference between the image and its mirror.
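The rasterize-mirror-minimize loop described above can be sketched as follows. This is an illustrative simplification assuming a point-cloud input and a brute-force search over candidate angles; the function names and grid size are mine, not the thesis implementation:

```python
import numpy as np

def rasterize(points, size=32):
    """Interpolate 3D points to a 2D depth image: each pixel keeps the
    largest depth (z) value among the points that land on it."""
    img = np.zeros((size, size))
    xy = points[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    idx = np.rint((xy - lo) / (hi - lo) * (size - 1)).astype(int)
    for (i, j), z in zip(idx, points[:, 2]):
        img[j, i] = max(img[j, i], z)
    return img

def asymmetry(points):
    """Sum of absolute differences between the depth image and its mirror."""
    img = rasterize(points)
    return float(np.abs(img - img[:, ::-1]).sum())

def rotate_roll(points, angle):
    """Rotate the point cloud about the viewing (z) axis."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ R.T

def best_roll(points, angles):
    """Choose the roll angle whose depth image is most left-right symmetric."""
    return min(angles, key=lambda a: asymmetry(rotate_roll(points, a)))
```

The same symmetry score, with a rotation about the vertical axis, gives the yaw correction.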
Data Preprocessing: Pitch Alignment
Since the top and bottom of the head are not symmetrical, a different approach is needed: minimize the difference between chin height and forehead height (the relevant profile points being the forehead, nose, upper lip, lower lip, and chin).
Data Preprocessing: Alignment Results
Better results are obtained when yaw and roll are aligned together, in an iterative process (usually 2 iterations are enough for "convergence"). For the symmetry-based alignment, only about 1% of heads needed correction. For pitch, 15% needed human intervention, but it is the same type of intervention each time, so this could likely be cleaned up; pitch rotation can fall into a local minimum due to the top of the head (over-rotated heads recover after the 1st iteration).
Data Representation
Three representations are used: the 3D snapshot, the 2.5D depth image, and curved lines.
Data Representation: 3D Snapshot
The 3D snapshot is a 2D photograph of the 3D mesh, with no depth information. Two variants: the whole head, and a cutoff at the ears.
Data Representation: 2.5D Depth Map
The 2.5D depth map discretizes the mesh to represent all 3 dimensions as pixel information, with depth encoded as illumination. Two variants: the whole head, and a cutoff at the ears.
Data Representation: Curved Lines
Lastly, we wanted to look at profiles of the face and whether there was any information there (for example, hooded eyes or poor cheek tone). Two variants: the whole head, and a cutoff at the ears.
Curved Lines Detail
A single profile line made sense, but was there also information in the horizontal lines, or in a grid pattern?
Experiment Setup
53 affected and 136 control individuals, age range 10 months to 39 years. Data is labeled with status, gender, and age. Goal: classify each individual as affected or control.
System Diagram
The pipeline: clean up, automatic alignment, PCA (similar to the work on eigenfaces), use of PCA for facial feature selection and quantification, and classification with naive Bayes.
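The classification stage of this pipeline (PCA coefficients fed to naive Bayes) can be sketched roughly as below. This assumes a Gaussian naive Bayes over the PCA coefficients, which is one plausible reading of the diagram rather than the exact implementation, and all names are mine:

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA: return the data mean and the top-k principal directions."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_transform(X, mean, components):
    """Project data onto the principal directions (eigenface-style coefficients)."""
    return (X - mean) @ components.T

class GaussianNB:
    """Minimal Gaussian naive Bayes over the PCA coefficients."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.stats = [(X[y == c].mean(axis=0),
                       X[y == c].var(axis=0) + 1e-6,   # variance floor for stability
                       np.mean(y == c)) for c in self.classes]
        return self

    def predict(self, X):
        scores = [np.log(prior)
                  - 0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)
                  for mu, var, prior in self.stats]
        return self.classes[np.argmax(scores, axis=0)]
```

In practice each face (depth image or curved-line set) would be flattened into one row of `X` after cleanup and alignment.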
Experiments: Component Selection
Using Select Attributes in WEKA, the grosser features (such as the differences between a 10-month-old, a 10-year-old, and a 30-year-old) are picked up most strongly by PCA. The basic methodology of PCA does not directly apply here: the signal for VCFS is buried in noise.
Statistical Measures
With the confusion-matrix counts of true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN) for the disease test:
Recall measures the proportion of actual affected who are correctly labeled as affected.
Precision measures the proportion of labeled affected who are actually affected.
Specificity measures the proportion of actual control who are correctly labeled as control.
F1 is the harmonic mean of precision and recall, with equal weights.
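The four measures follow directly from the confusion-matrix counts; a minimal sketch (the function name is mine):

```python
def metrics(tp, fp, fn, tn):
    """Summary statistics from confusion-matrix counts."""
    recall = tp / (tp + fn)        # actual affected correctly labeled affected
    precision = tp / (tp + fp)     # labeled affected actually affected
    specificity = tn / (tn + fp)   # actual control correctly labeled control
    f1 = 2 * precision * recall / (precision + recall)
    return {"recall": recall, "precision": precision,
            "specificity": specificity, "f1": f1}
```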
Classifiers used
Results: Balancing Data Sets

Name   #total (#affected)  Description
ALL    189 (53)            data collected from Children's Hospital
AS106  106 (53)            each affected matched by gender, then closest age
W86    86 (43)             only affected labeled white, matched by gender, then closest age

The disease features are fairly subtle, so confounds such as ethnicity must be removed. With a more coherent group we are able to pick up on the disease features, beating the experts who worked from 2D photos and performing about the same as the experts from Children's (at different places on the precision-recall curve).
Results: 3D Snapshot vs. 2.5D
The next experiment compares the 3D snapshot (the human point of view) with the 2.5D depth map. All representations perform about the same in F-measure, but the 3D snapshot and 2.5D sit at different points on the precision-recall curve: the 3D snapshot has higher precision but finds fewer of the affected individuals, while 2.5D finds more of the affected individuals at the cost of lower precision. Different applications might prefer one or the other; we prefer 2.5D because it misses fewer of the affected individuals (and is also more similar to the expert results).
Results: Curved Lines
Seven lines: there is less information in the side of the face. A few lines do better than the whole image, which may be a matter of noise; the lines might be really good at picking out global features such as a bump in the forehead or low-tone cheeks.
Expert Survey
Three experts quantify the degree of the facial features. From the process of completing the survey and from its debriefing we gain new insights into the facial features, for example the curve of the forehead.
Comparison to Experts
Measured by F-measure, precision, recall, and accuracy. (Mark's data is incomplete, missing one person.)
Proposal for Continued Work
Global features: distance from average; approximation using ellipsoids; creating new texture information; assessing facial asymmetry.
Local features: automatic 3D landmarks; local features.
Global Feature: Distance from Average
Data sets: curved lines and 2.5D depth images, separated by status, sex, and age. This is a similar approach to Hutton's, but no landmarks are necessary.
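For aligned 2.5D depth images on a common grid, the proposed distance-from-average feature could look something like the sketch below. This is my own minimal illustration of the idea, under the assumption that alignment and resampling have already happened; the proposal itself leaves the details open:

```python
import numpy as np

def distance_from_average(depth_images):
    """Mean absolute per-pixel distance of each aligned depth image
    from the group-average face (one score per subject)."""
    stack = np.asarray(depth_images, dtype=float)
    average_face = stack.mean(axis=0)
    return [float(np.abs(img - average_face).mean()) for img in stack]
```

The same computation applies per group (status, sex, age) by averaging within each group before taking distances.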
Global Feature: Creating New Texture Information
Candidates computed from the average face: Gaussian curvature, azimuth and elevation of surface normals, and geodesic information. We do not have texture information due to human-subjects requirements, but perhaps we can generate a texture that will enhance classification.
Contributions
A fully automatic system; a facial pose alignment method; different data representations for classification; and classification of 22q11.2DS-affected individuals that rivals the experts.