3D Shape Analysis for Quantification, Classification and Retrieval


1 3D Shape Analysis for Quantification, Classification and Retrieval
Indriyati Atmosukarto, PhD Defense. Advisor: Prof. Linda Shapiro. (Slide figures: 3D mesh object, 2D salient map, 2D histogram, GP tree.)

2 General Motivation Increasing number of 3D objects available
Want to store, index, classify, and retrieve objects automatically. Need a 3D object descriptor that captures both global and local shape characteristics. Advances in the technology for digital acquisition of 3D models have led to an increase in the number of 3D objects available; 3D objects are now commonly used in many areas, such as games, manufacturing, cultural heritage, and medicine, to name a few.

3 Medical Motivation Researchers at Seattle Children's use CT scans and 3D surface meshes to investigate head-shape dysmorphologies due to craniofacial disorders. They want to represent, analyze, and quantify variants from 3D head shapes. A specific example of the use of 3D objects in the medical field is the work being pursued at Seattle Children's Hospital, where researchers aspire to develop new computational techniques that can represent, quantify, and analyze variants of biological morphology from 3D head shapes.

4 22q11.2 Deletion Syndrome (22q11.2DS)
Caused by a genetic deletion. Associated with cardiac anomalies and learning disabilities. Multiple subtle physical manifestations; assessment is subjective. My work is motivated by collaborations on two research studies with Seattle Children's. The first is a study of children with 22q11.2 deletion syndrome, a genetic disease. Early detection is desirable because people with 22q11.2DS tend to have cardiac anomalies and some spectrum of learning disabilities. The condition is associated with multiple subtle physical manifestations, such as a bulbous nasal tip, heavy brow line, and hooded eyes, and experts have difficulty diagnosing and quantifying it.

5 Deformational Plagiocephaly
Flattening of the head caused by pressure. Associated with delayed neurocognitive development. Assessment is subjective and inconsistent; an objective and repeatable severity quantification method is needed. The second research study concerns children with deformational plagiocephaly (DP). DP is a deformation of the head characterized by persistent flattening on one side, resulting in an asymmetric head shape. It is caused by persistent pressure on the head before or after birth, for example from infants sleeping on their backs or from muscle tightness. These figures show heads of infants with plagiocephaly. Early diagnosis is desirable because some research suggests that the change in head shape may affect brain development and thus lead to delayed neurocognitive development. (Figure labels: Plagiocephaly, Normal, Brachycephaly.)

6 Objective Investigate new methodologies for representing 3D shapes
Representations flexible enough to generalize from specific medical tasks to general 3D object tasks. Develop and test methods for 3D shape classification, retrieval, and quantification. Because we want to represent the various shapes due to deformities, as well as shapes for other purposes, we investigate new methodologies for representing 3D objects and 3D shapes. Most existing 3D shape descriptors have been developed and tested only on general 3D object datasets, while those designed for medical purposes usually must satisfy a specific medical application and dataset. The objective of this work is to develop 3D shape representation methodologies that are flexible enough to generalize from specific medical tasks to general 3D object tasks. These methodologies are developed and tested on different tasks, including 3D shape classification, retrieval, and quantification.

7 Outline Related Literature Datasets Base Framework 3D Shape Analysis
Conclusion. This is the outline of my talk. I will first briefly discuss 3D object descriptors in the computer vision literature and mention some specific craniofacial assessment methods in the medical literature. Next, I will talk about the datasets that were used to develop and test our methodologies. I will then describe our methodologies, starting with the base framework used for feature extraction, before diving into the details of our 3D object descriptors. I will then present our learning framework for quantification using genetic programming (GP), and lastly conclude my talk.

8 Shape Retrieval Evaluation Contest (SHREC)
Benchmark with a common test set and queries. Objective: evaluate the effectiveness of 3D shape retrieval algorithms. No descriptor performs best for all tasks. 3D shape analysis has received increased attention in the past few years, and as a result researchers organized an annual 3D shape retrieval contest called SHREC, an initiative by the 3D shape community to introduce a benchmark to the area. Releasing a common test set and queries allows direct comparison of different algorithms. The contest results showed that no single descriptor performs best for all kinds of retrieval tasks; each descriptor has its own strengths and weaknesses for the different queries and tasks.

9 3D Object Descriptors
Feature-based (e.g., shape distributions): + compact; - not discriminative.
Graph-based (e.g., skeleton): + handles articulated objects; - computationally expensive.
View-based (e.g., Light Field Descriptor): + best in SHREC; - computationally expensive.
There are three broad categories of 3D object representation: feature-based, graph-based, and view-based methods. I discussed each category in detail at my proposal presentation, so here I will just summarize their strengths and weaknesses. Feature-based methods represent a shape as a point in a high-dimensional space; two shapes are similar if they are close in that space. Feature-based methods are further categorized into four types: global features, global feature distributions, spatial maps, and local features. Graph-based descriptors attempt to extract geometric meaning from a 3D shape using a graph that shows how shape components are linked together; for example, Sundar et al. use a skeletal graph that encodes geometric and topological information as a shape descriptor. These methods are not very popular because, even though they perform very well for matching articulated objects, they are computationally expensive.

10 Deformational Plagiocephaly Measurements
Anthropometric landmarks: physical measurements using calipers. Template matching: landmark photographs (Kelly et al. 1999). Subjective, time-consuming, intrusive. Anthropometric landmarks identify certain points on the head that are commonly found across all head shapes; calipers are used to obtain measurements such as ratios. The second approach is template matching: qualitatively match the shape of the patient's skull to a set of generic image templates containing skulls with varying degrees of deformation. The expert finds the most similar template and assigns the corresponding score. The shortcoming of these methods is that they are intrusive, subjective, and time-consuming, so in my current work I propose a quantification method that does not require any markers and is less intrusive and subjective. Existing measures: Cranial Index (CI) and Oblique Cranial Length Ratio (OCLR) (Hutchison et al. 2005).
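As a rough sketch of these two clinical measures, assuming their standard definitions (CI as head width relative to head length, OCLR as the longer oblique cranial diagonal relative to the shorter, both as percentages; the function names are illustrative):

```python
def cranial_index(head_width_mm: float, head_length_mm: float) -> float:
    """Cranial Index (CI): head width as a percentage of head length."""
    return 100.0 * head_width_mm / head_length_mm

def oblique_cranial_length_ratio(diag_a_mm: float, diag_b_mm: float) -> float:
    """OCLR (Hutchison et al. 2005): longer oblique cranial diagonal as a
    percentage of the shorter; 100 indicates a symmetric head."""
    longer, shorter = max(diag_a_mm, diag_b_mm), min(diag_a_mm, diag_b_mm)
    return 100.0 * longer / shorter
```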

11 22q11.2DS Measurements Anthropometric landmark
2D template landmarks + PCA; 3D mean landmarks + PCA. Boehringer et al.: Gabor wavelets + PCA to analyze 10 facial dysmorphologies, with manual landmarks. Boehringer et al. applied landmarks to 2D images of the faces and used Gabor wavelets and Principal Component Analysis (PCA) to transform and describe the dataset. Hutton et al. used a Dense Surface Model: they manually placed landmarks on each of the 3D surface meshes, used them to align the faces to a mean face, and applied PCA to describe the dataset (align to average face + PCA). Our method avoids the use of landmarks, and the descriptors we developed are flexible enough to generalize from describing our medical datasets to analyzing general 3D objects.

12 Outline Related Literature Datasets Base Framework 3D Shape Analysis
Conclusion

13 Datasets 22q11.2DS Deformational Plagiocephaly Heads SHREC
Similar overall shapes with subtle distinctions (22q11.2DS, Deformational Plagiocephaly); dissimilar shapes (Heads, SHREC). We obtained four datasets to develop and test the different shape analysis methodologies in this thesis. Each dataset has different characteristics that help explore different properties of the methodologies.

14 22q11.2DS Dataset Dataset: 189 (53 affected / 136 controls), 86 (43 affected / 43 controls)
Assessed by craniofacial experts. Nine facial features that characterize the disease were selected. The 22q11.2DS dataset contains 3D face models of individuals affected and unaffected by 22q11.2DS. The models were collected at the Craniofacial Center at Seattle Children's Hospital using the 3dMD imaging system, and an automated pose normalization system was used to normalize the faces. Ground truth for 22q11.2DS was determined through laboratory confirmation, while ground truth for the facial features was obtained through a survey in which craniofacial experts assessed the 3D models. For this thesis we focus on nine facial features.

15 Deformational Plagiocephaly Dataset
Assessed by craniofacial experts; five different affected areas of the head. The deformational plagiocephaly dataset contains 3D head models of individuals affected and unaffected by deformational plagiocephaly. The 3D head models were obtained and processed using a pipeline similar to that of the 22q11.2DS dataset.

16 Heads Dataset 15 original objects - 7 classes
Each object was randomly morphed to enlarge the dataset. The heads dataset contains head shapes of different classes of animals, including humans.

17 SHREC Dataset 425 objects - 39 classes

18 Outline Related Literature Datasets Base Framework 3D Shape Analysis
Conclusion. Now I will talk about the base framework developed to extract features from the 3D models.

19 Base Framework Feature extraction → feature aggregation → 3D shape analysis
Low-level features: curvature, surface normals, azimuth-elevation angles. Input: surface mesh (example: smoothed curvature). Feature aggregation: histograms over a neighborhood radius. The input to our base framework is a 3D model represented as a surface mesh. The framework consists of two phases. The first phase, low-level feature extraction, applies a low-level operator to every point on the surface mesh; afterwards, every point has a low-level feature value depending on the operator used. Low-level features we tested include Gaussian curvature, Besl-Jain curvature, surface normals, and the azimuth-elevation angles of the normals. The second phase, mid-level feature aggregation, computes a vector of values over a given neighborhood of every point p on the surface mesh; the neighborhood size is chosen empirically. The aggregation results are then used to construct the different 3D object representations.
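A minimal sketch of the two phases, assuming vertices and unit surface normals are already available as numpy arrays; the neighborhood radius and bin counts are illustrative, not the thesis' exact settings:

```python
import numpy as np
from scipy.spatial import cKDTree

def azimuth_elevation(normals):
    """Phase 1: low-level feature per point -- azimuth and elevation
    angles of each unit surface normal (n x 3 array)."""
    azimuth = np.arctan2(normals[:, 1], normals[:, 0])    # [-pi, pi]
    elevation = np.arcsin(np.clip(normals[:, 2], -1, 1))  # [-pi/2, pi/2]
    return azimuth, elevation

def local_histograms(vertices, normals, radius=0.1, bins=8):
    """Phase 2: mid-level aggregation -- for every point, a 2D histogram
    of the azimuth/elevation angles within a spherical neighborhood."""
    az, el = azimuth_elevation(normals)
    tree = cKDTree(vertices)
    feats = np.empty((len(vertices), bins * bins))
    for i, neighbors in enumerate(tree.query_ball_point(vertices, r=radius)):
        h, _, _ = np.histogram2d(az[neighbors], el[neighbors], bins=bins,
                                 range=[[-np.pi, np.pi], [-np.pi/2, np.pi/2]])
        feats[i] = (h / max(h.sum(), 1)).ravel()  # normalized, flattened
    return feats
```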

20 3D Shape Analysis
Learning salient points; 2D longitude-latitude salient map; 2D azimuth-elevation histogram; learning 3D shape quantification. Our 3D shape analysis is divided into three parts. The first descriptor is the 2D longitude-latitude salient map, which is constructed by first identifying interesting, or salient, points on the 3D objects and then using the patterns of these points to build the descriptor. The second descriptor is the 2D azimuth-elevation histogram. Lastly, I will present how we use genetic programming to learn quantifications of shape.

21 3D Shape Analysis
Learning salient points; 2D longitude-latitude salient map; 2D azimuth-elevation histogram; learning 3D shape quantification. Let me start by explaining how we learn salient points.

22 Learning Salient Points
Salient points are application dependent; an SVM classifier learns the characteristics of salient points. The saliency methodology was developed to be specifically applicable to the classification of craniofacial disorders, but also appropriate for general use in 3D shape classification. To find salient points on a 3D object, we use a learning approach: a salient point classifier is trained on a set of marked training points on 3D objects, provided by experts for a particular application. Histograms of low-level features of the training points, obtained using the base framework, are used to train the classifier. For a particular application, the classifier learns the characteristics of salient points on the surfaces of 3D objects from that domain. (Training data → SVM classifier → saliency prediction model.)
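A sketch of the training step with scikit-learn, assuming `feats` holds the per-point neighborhood histograms from the base framework and `labels` marks expert-identified salient (1) versus non-salient (0) points; the names are illustrative:

```python
from sklearn.svm import SVC

# feats: (n_points, n_bins) histogram features; labels: 0/1 expert marks
clf = SVC(kernel="rbf", probability=True)   # probability=True yields a
clf.fit(feats, labels)                      # confidence score per point

# Predict saliency on a new mesh: confidence in [0, 1] for class "salient"
saliency_scores = clf.predict_proba(new_feats)[:, 1]
```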

23 Learning Salient Points
22q11.2DS: training on a subset of craniofacial landmarks. Deformational plagiocephaly: training points marked on flat areas of the head. The salient point classifier for 22q11.2DS was trained on a subset of craniofacial anthropometric landmarks marked on 3D head objects; the points were selected to be well-defined points that both experts and non-experts could easily identify. The training set consisted of human heads selected from the Heads database. The figure on the right shows an example of manually marked salient points on the training data. Histograms of low-level features of the training points were then used to train the SVM classifier to learn the characteristics of the salient points.

24 Learning Salient Points – General 3D Objects
Training on craniofacial landmarks on different classes of heads; predicted salient points. We also used the craniofacial landmarks to learn salient points on the SHREC dataset, which contains general 3D objects; in addition to human faces, we added heads of other animals to the training set. The figure shows the predicted salient points on the objects, colored by classifier confidence score.

25 3D Shape Analysis
Learning salient points; 2D longitude-latitude salient map; 2D azimuth-elevation histogram; learning 3D shape quantification. Now that we have identified the salient points for each application, we construct our first 3D object signature.

26 2D Longitude-Latitude Salient Map
Salient point pattern projection → classification, retrieval, salient views. Using the salient point patterns obtained with the learning approach I previously described, we construct the 3D object signature by projecting the patterns onto a 2D plane. We tested our salient map on classification and retrieval tasks. In addition, we can use the salient points to obtain 2D salient views that improve an existing view-based descriptor, the Light Field Descriptor (LFD), and use it for retrieval.

27 Salient Point Pattern Projection
Input 3D mesh → predicted saliency → discretized saliency → 2D map signature. Discretize saliency according to score; map onto a 2D plane via a longitude-latitude transformation. Before mapping the salient points onto the 2D plane, the continuous classifier confidence score assigned by the saliency prediction model is discretized into a number of bins, and each salient point is assigned a label based on the bin into which its confidence score falls. The signature is the pattern of salient points mapped onto the 2D plane via the longitude-latitude transformation: for each point we compute its longitude θ and latitude φ, and the final signature map is obtained by binning the longitude-latitude values of the points into a fixed number of bins (we used 50x50).
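The slide's formula is not reproduced in the transcript; a common longitude-latitude transformation for a pose-normalized, centered mesh looks like the following sketch (the axis conventions and the per-cell aggregation are assumptions):

```python
import numpy as np

def longitude_latitude_map(points, saliency_labels, n_bins=50):
    """Project labeled salient points onto a 2D longitude-latitude map.
    Assumes points are centered at the object's centroid."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    theta = np.arctan2(y, x)                 # longitude in [-pi, pi]
    phi = np.arcsin(np.clip(z / r, -1, 1))   # latitude in [-pi/2, pi/2]

    # 50x50 map; here each cell simply accumulates the discretized
    # saliency labels of the points that fall into it.
    sig, _, _ = np.histogram2d(theta, phi, bins=n_bins,
                               range=[[-np.pi, np.pi], [-np.pi/2, np.pi/2]],
                               weights=saliency_labels)
    return sig
```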

28 Classification using 2D Map
By creating the signature, it is now possible to classify the 3D objects in a given database. These are the classification results of using the 2D map on our medical datasets, with an SVM classifier. The classification accuracy of the map signature was compared against state-of-the-art, best-performing descriptors: LFD (Light Field Descriptor), SPH (Spherical Harmonics), D2 (Shape Distribution), and AAD (Angle Histogram).

29 Retrieval using 2D Map Retrieval on SHREC
Next, we tested our salient map on retrieval tasks; here I show the retrieval results on the SHREC dataset. Retrieval performance was measured using the average normalized rank of relevant results, where a score of 0 is best. Our signature performs better than global feature methods like D2 and AAD, but slightly worse than LFD and SPH. Note that LFD is not the best performer for all classes; our descriptor performs better on some classes, for example the eyeglasses class.

30 Salient Views Goal: improve LFD by selecting only the 2D salient views to describe a 3D object, i.e., the views that are discernible and useful in describing the object. In this section, I will talk about how we use our salient points to improve an existing descriptor. In the related literature, I mentioned LFD, the state-of-the-art view-based descriptor and best performer in SHREC; however, it is very computationally expensive, because it extracts features from 100 2D silhouette views and measures the similarity between two 3D objects by finding the best correspondence between their sets of 2D views. Our hypothesis is that salient views are the views that are discernible and most useful in describing a 3D object.

31 Salient Views Silhouettes with contour salient points; greedy clustering
Surface normal vector ⊥ camera view direction. We define salient views as the 2D views, or silhouettes, that contain contour salient points. A point is a contour salient point if it appears on the contour of the object, i.e., the surface normal at the point is perpendicular to the camera view direction; perpendicularity is determined by thresholding the dot product of the surface normal and the camera view direction. Salient points identified by the learning approach are quite dense and form regions, so a greedy clustering algorithm is applied to reduce the number of points and produce a sparser placement. Results show the repeatability of the salient point learning and clustering algorithm.
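A sketch of the contour test, assuming unit-length vectors; the threshold value is illustrative:

```python
import numpy as np

def is_contour_salient(normal, view_dir, eps=0.1):
    """A salient point lies on the silhouette contour for a given view if
    its surface normal is (nearly) perpendicular to the view direction."""
    return abs(np.dot(normal, view_dir)) < eps
```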

32 Selecting Salient Views
Accumulate the number of contour salient points per view; sort views by this count; select the top K salient views, or the top K distinct salient views (DSV). For each possible camera view point (100 in total), the algorithm accumulates the number of contour salient points visible from that view point, sorts the 100 view points by that count, and selects the final top K salient views to construct the descriptor (see the sketch below). Another variant of the algorithm selects the top K distinct salient views, i.e., views that are not too similar to each other.
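A sketch of the selection step, reusing the contour test above; `view_dirs` holds the 100 camera directions and `normals` the salient-point normals (both assumed unit length):

```python
import numpy as np

def select_salient_views(normals, view_dirs, k=12, eps=0.1):
    """Rank views by how many contour salient points they expose; keep top K."""
    # dots[p, v] = |normal_p . view_v|; a salient point counts toward view v
    # when its normal is ~perpendicular to that view direction
    dots = np.abs(normals @ view_dirs.T)          # (n_points, n_views)
    counts = (dots < eps).sum(axis=0)             # per-view contour count
    ranked = np.argsort(counts)[::-1]             # best views first
    return ranked[:k]
```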

33 Salient Views - Number of Views
Distinct Salient Views vs. Light Field Descriptor. Average score: (DSV) vs. (LFD); number of views: ~12 (DSV) vs. 100 (LFD). The third experiment compares the retrieval score when using the maximum number of distinct salient views against the retrieval score of the existing LFD method. This table shows the average retrieval scores for some of the classes in the SHREC database for the two methods; the overall average scores do not differ much, and retrieval results differ only slightly in rank. However, our method greatly reduces the computation time for descriptor construction.

34 Salient Views - Runtime
Bottleneck: the feature extraction step. Feature extraction runtime comparison: a 15-fold speed-up compared to LFD, reducing the number of views to 10%. The feature extraction phase is the bottleneck of the whole descriptor computation: a setup step reads and normalizes the 3D objects, the 2D silhouette views are rendered, and the descriptor is constructed from the rendered views. The last experiment compares the runtime of our method to the existing LFD method; the results show that feature extraction using the salient views provides a 15-fold speed-up compared to using all 100 views.

35 3D Shape Analysis
Learning salient points; 2D longitude-latitude salient map; 2D azimuth-elevation histogram; learning 3D shape quantification. I will now talk about our second descriptor: the global 2D azimuth-elevation histogram.

36 Global 2D Azimuth-Elevation Angle Histogram
3D shape quantification for deformational plagiocephaly; classification of 22q11.2DS. The azimuth-elevation angles are one of the low-level features we tested in the base framework. We test this descriptor on our two medical applications: deformational plagiocephaly and 22q11.2DS.

37 3D Shape Quantification for Deformational Plagiocephaly
Discretize azimuth-elevation angles into a 2D histogram. Hypothesis: flat parts of the head will create high-valued bins. As I mentioned, assessing deformational plagiocephaly is a very subjective process; the goal here is to construct a shape signature that represents the flatness of the head and then define severity scores that quantify the deformation. The descriptor is constructed by computing the azimuth and elevation angles of the surface normal vector at every point on the head and discretizing the computed angles into a small number of bins; the value in each histogram bin is the count of how many mesh points have that particular azimuth-elevation combination. The underlying assumption is that the surface normals of 3D points lying on flat surfaces of the head mesh have a much more homogeneous set of angles than the normals of points lying on rounder surfaces; as a result, the flat parts of the head produce high-valued bins, or peaks, in the 2D histogram. The hypothesis is that the bin values can be used to differentiate plagiocephalic from typical head shapes.
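A sketch of the global descriptor, reusing the angle computation from the base framework sketch; here the 2D histogram is taken over all points of the head mesh rather than a local neighborhood, and the bin count and normalization are illustrative assumptions:

```python
import numpy as np

def global_azimuth_elevation_histogram(normals, bins=12):
    """Global 2D histogram of surface-normal azimuth/elevation angles.
    Flat regions concentrate many normals into a few bins (peaks)."""
    az = np.arctan2(normals[:, 1], normals[:, 0])
    el = np.arcsin(np.clip(normals[:, 2], -1, 1))
    hist, _, _ = np.histogram2d(az, el, bins=bins,
                                range=[[-np.pi, np.pi], [-np.pi/2, np.pi/2]])
    return hist / hist.sum()   # normalized so meshes of different
                               # resolutions are comparable
```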

38 Shape Severity Scores for Posterior Plagiocephaly
Left Posterior Flatness Score (LPFS); Right Posterior Flatness Score (RPFS); Asymmetry Score (AS) = RPFS - LPFS; Absolute Asymmetry Score (AAS) = |AS|. After constructing the 2D histogram, we can define shape severity scores. Here we focus on posterior plagiocephaly, the asymmetric flattening at the back of the head. Since the heads are pose normalized, histogram bins correspond to consistent areas of the head, so severity scores can be defined over selected bins. We define the four severity scores listed above, as sketched below: LPFS is the sum of the histogram bins (highlighted in red) that cover the left back portion of the head, and RPFS is the sum of the corresponding bins covering the right back portion.
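A sketch of the score computation, assuming the bin index sets that cover the left and right posterior regions of the pose-normalized heads are known; the index sets below are placeholders, not the thesis' actual bins:

```python
def posterior_flatness_scores(hist, left_bins, right_bins):
    """hist: 2D azimuth-elevation histogram; left_bins / right_bins:
    lists of (azimuth_idx, elevation_idx) covering each posterior region."""
    lpfs = sum(hist[i, j] for i, j in left_bins)    # Left Posterior Flatness
    rpfs = sum(hist[i, j] for i, j in right_bins)   # Right Posterior Flatness
    asym = rpfs - lpfs                              # Asymmetry Score (AS)
    return lpfs, rpfs, asym, abs(asym)              # AAS = |AS|
```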

39 Classification of Posterior Plagiocephaly
Absolute Asymmetry Score (AAS) vs. Oblique Cranial Length Ratio (OCLR). We performed a number of experiments using the shape severity scores; all the results are in my thesis. Here I present the results of using the Absolute Asymmetry Score to classify posterior plagiocephaly, compared against OCLR.

40 Classification of Posterior Plagiocephaly (cont.)
This graph shows how AAS and OCLR classify the data: each dot represents a patient, colored by the expert severity scores (those scored 0 are in blue). A threshold value is automatically computed to maximize the combination of sensitivity and specificity for classifying posterior plagiocephaly. All DP cases with a severe rating were above the threshold, and the AAS produced the more accurate classification. Misclassified controls: OCLR 8 vs. AAS 2.

41 Classification of Deformational Plagiocephaly
Treat the 2D histogram as a feature vector; classify five plagiocephaly conditions. In addition to defining severity scores, we can use the whole 2D histogram as a feature vector to classify the data. Using the 2D azimuth-elevation histogram as a feature vector with an SVM classifier, we could classify the five different plagiocephaly conditions.

42 Classification of 22q11.2DS Treat the 2D histogram as a feature vector
We also used the 2D histogram to analyze the 22q11.2DS data. This figure shows the projection of the histogram bins back onto the face; there is a visible difference in the pattern between individuals with 22q11.2DS and controls. All experiments were performed using an SVM classifier with 10-fold cross-validation.

43 Classification of 22q11.2DS Facial Features
This table shows the classification accuracies for the nine facial dysmorphologies associated with 22q11.2DS, using the global histogram method constructed with different numbers of bins.

44 3D Shape Analysis
Learning salient points; 2D longitude-latitude salient map; 2D azimuth-elevation histogram; learning 3D shape quantification. This last section covers the use of genetic programming for learning 3D shape quantifications and investigates its application to analyzing craniofacial disorders.

45 Learning 3D Shape Quantification
Analyze 22q11.2DS and its nine associated facial features. Goal: quantify the different shape variations in the different facial abnormalities. For this part, we focus the analysis on 22q11.2DS and its associated facial features; the goal is to quantify, i.e., produce a numeric representation of, the shape variation that manifests in each facial abnormality in individuals with 22q11.2DS. This figure shows an overview of the learning quantification framework. The framework begins by selecting the facial region most pertinent to a given facial abnormality. Features, in the form of a 2D histogram of the azimuth-elevation angles of surface normals, are extracted from the selected region. Adaboost then selects the most discriminative features from the selected region, and genetic programming combines the selected features to produce the quantification of the given facial abnormality.

46 Learning 3D Shape Quantification - Facial Region Selection
Focus on three facial areas: midface, nose, mouth; regions are selected manually. The nine facial abnormalities cover three areas of the face: midface, nose, and mouth. The facial regions are selected manually, as the general area of each abnormality is known: the nose is extracted using a trapezium bounding box, the mouth region using a rectangular bounding box, and the midface region using a bounding box that covers the middle of the face.

47 Learning 3D Shape Quantification - 2D Azimuth-Elevation Histogram
Use the azimuth-elevation angles of the surface normal vectors of the points in the selected region. Once the facial region pertinent to a given abnormality has been selected, features are extracted from it: the 3D facial shape variations in the region are represented using a 2D histogram of the azimuth-elevation angles of the surface normals of the region's 3D points, the same signature presented earlier. We could treat the vector of histogram bin values as a feature vector and use it directly for classification. However, rather than a linear combination of the histogram bins, we determine the bins that are most important and most discriminative in classifying the different facial abnormalities, and then use genetic programming to find the best way to combine those discriminative bin values into a quantification of each abnormality.

48 Learning 3D Shape Quantification - Feature Selection
Determine the most discriminative bins using Adaboost learning; obtain positional information about the important regions of the face. To determine the histogram bins that are most discriminative in classifying and quantifying a given facial abnormality, Adaboost learning is used to select the bins that optimize classification performance. The figure shows the selected bin values of the 2D histogram for classifying midface hypoplasia. Positional information about which points in the selected region contribute to the selected bins can be projected back onto the face, showing the correlation between the features and facial areas.
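A sketch of bin selection with scikit-learn's AdaBoost; its default base learner is a depth-1 decision stump, so each boosting round effectively picks one histogram bin, and the used bins can be read back from the feature importances (the thesis' exact selection rule may differ):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# X: (n_faces, n_bins) flattened region histograms; y: expert 0/1 labels
ada = AdaBoostClassifier(n_estimators=50)  # default base learner is a
ada.fit(X, y)                              # depth-1 stump: one bin per round

# Histogram bins the stumps actually used, ranked by importance
order = np.argsort(ada.feature_importances_)[::-1]
selected_bins = order[ada.feature_importances_[order] > 0]
```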

49 Learning 3D Shape Quantification - Feature Combination
Use Genetic Programming (GP) to evolve a mathematical expression: start with a random population; individuals are evaluated with a fitness measure; the best individuals reproduce to form a new population. We did some preliminary experiments combining the selected discriminative bin values with weighted linear methods, but the results were unsatisfactory and highlighted that the best combination is probably non-linear. Genetic programming is therefore used to combine the values of the selected discriminative histogram bins into mathematical expressions that quantify the shape variation of the different facial abnormalities. Instead of thinking in terms of genes or DNA strings, think in terms of programs: GP follows the theory of evolution by evolving individual computer programs under a survival-of-the-fittest approach, using genetic operators such as mutation and crossover to generate the new individuals of a population. Starting with a random population, every individual is evaluated; the individuals that perform best according to a given fitness measure reproduce with one another to form a new population, and after a number of iterations, the best individual across all populations is selected as the final solution.

50 Learning 3D Shape Quantification - Genetic Programming
Individual: tree structure. Terminals, e.g., constants and variables: 3, 5, x, y, ...; function set, e.g., +, -, *, ...; fitness measure, e.g., sum of squares. Example tree: *(5, +(x, y)), i.e., 5*(x+y). A quick summary of GP: the fundamental elements of an individual are its genes, which form a code. An individual program is a tree-like structure with two types of genes: functions and terminals. Terminals are the leaves of the tree, while functions are nodes with children; a function's children provide the arguments for the function.
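A minimal sketch of such an individual as a Python expression tree; the random tree growth and the crossover/mutation machinery of a full GP system are omitted:

```python
import operator

FUNCTIONS = {"+": operator.add, "-": operator.sub, "*": operator.mul,
             "min": min, "max": max}

def evaluate(node, env):
    """node: ("fn", left, right) for an internal function node, or a
    terminal that is either a constant or a variable name found in env."""
    if isinstance(node, tuple):
        fn, left, right = node
        return FUNCTIONS[fn](evaluate(left, env), evaluate(right, env))
    return env.get(node, node)   # variable -> its value, constant -> itself

# The slide's example tree: 5*(x+y)
tree = ("*", 5, ("+", "x", "y"))
print(evaluate(tree, {"x": 2, "y": 3}))   # -> 25
```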

51 Learning 3D Shape Quantification - Feature Combination
22q11.2DS dataset, assessed by craniofacial experts; ground truth is the union of expert scores. Goal: classify individuals according to a given facial abnormality. Our analysis focuses on the 22q11.2DS dataset. Experiments were performed on the balanced 22q11.2DS set of 86 individuals, each assessed by three craniofacial experts who assigned scores to their facial features. The ground truth for each individual is a binary label (0 or 1), computed as the union of the expert scores.

52 Learning 3D Shape Quantification - Feature Combination
Individual terminals: the selected histogram bins. Function set: +, -, *, min, max, sqrt, log, 2x, 5x, 10x. Fitness measure: F1-measure. The fitness function measures the performance of an individual using the F-measure, which gives a good balance between precision and recall; the best individual in the population is the one with the maximum F-measure. The approach produces the tree structure of the best individual, which can then be translated into a mathematical expression, e.g., X6 + X7 + (max(X7,X6)-sin(X8) + (X6+X6)).
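A sketch of the fitness evaluation, reusing the `evaluate` function from the GP sketch above and assuming an individual's expression value is thresholded to produce a binary prediction (the thresholding scheme is an assumption; scikit-learn's f1_score implements the standard F-measure):

```python
import numpy as np
from sklearn.metrics import f1_score

def fitness(tree, bin_vectors, labels, threshold=0.5):
    """Evaluate one GP individual: run its expression on every face's
    selected-bin values (dicts like {"X6": 0.12, ...}) and score the
    thresholded outputs against the expert labels with F1."""
    outputs = np.array([evaluate(tree, env) for env in bin_vectors])
    predictions = (outputs > threshold).astype(int)
    return f1_score(labels, predictions)   # higher is fitter
```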

53 Learning 3D Shape Quantification - Experiment 1
Objective: investigate function sets.
Combo1 = {+,-,*,min,max}
Combo2 = {+,-,*,min,max,sqrt,log2,log10}
Combo3 = {+,-,*,min,max,2x,5x,10x,20x,50x,100x}
Combo4 = {+,-,*,min,max,sqrt,log2,log10,2x,5x,10x,20x,50x,100x}
The objective of the first set of experiments is to analyze the classification performance of the GP quantification method using different function sets; four combinations of functions are compared. The last two function sets were chosen to introduce weighting into the mathematical expressions. Each GP function set combination was run a total of 10 times.

54 Learning 3D Shape Quantification - Experiment 1
Best F-measure out of 10 runs. This table shows the best F-measure out of the 10 runs for each GP function set combination, with the best-performing function set for each facial abnormality highlighted in bold. The results show that for most facial abnormalities the simpler function sets, Combo1 and Combo2, which do not include the weighted operations, perform better than the weighted GP function sets.

55 Tree Structure for Quantifying Midface Hypoplasia
Tree structure of the best-performing GP mathematical expression for quantifying midface hypoplasia: ((X7-X7) + (X6+(((X6+X6)-X7)+(X7-X2)))+X7))+(X9-5X9+X7+X7), where the variables Xi are the selected histogram bins.

56 Learning 3D Shape Quantification - Experiment 2
Objective: compare local facial shape descriptors. The second experiment compares three local facial shape descriptors: (1) the full 2D histogram of azimuth-elevation angles of the points in the selected region, (2) the selected 2D histogram bin values as a feature vector, and (3) the genetic programming quantification. The results show that for all facial abnormalities, the GP approach to quantifying and classifying the abnormalities performs best; using the selected 2D histogram bin values performs worse than GP but better than using the full 2D histogram of all points in the selected facial region.

57 Learning 3D Shape Quantification - Experiment 3
Objective: compare GP to global approaches. The third experiment compares the region-based GP approach for quantifying and classifying the different facial abnormalities against two global approaches to representing the face and the various facial features.

58 Learning 3D Shape Quantification - Experiment 4
Objective: predict 22q11.2DS. This experiment analyzes how the learned GP quantifications perform in predicting 22q11.2DS for the individuals in the dataset. The best-performing mathematical expression obtained by the GP approach for each of the nine facial abnormalities is evaluated, the resulting values are concatenated into a nine-dimensional feature vector, and that vector is used to classify each individual in the dataset.

59 Outline Related Literature Datasets Base Framework 3D Shape Analysis
Conclusion. I will now conclude the talk with the contributions of this thesis and some future directions.

60 Contributions
General methodology for 3D shape analysis. Learning approach to detect salient points. 3D object signatures: the 2D longitude-latitude salient map and the 2D histogram of azimuth-elevation angles. Methodology for quantification of craniofacial disorders.

61 Future Directions
Analyze other craniofacial disorders, e.g., cleft lip/palate and craniofacial microsomia. Association of shape changes over time and pre/post-op. Genotype–phenotype disease association. Translate 3D shape quantifications into plain English.

62 Acknowledgements
PhD committee members: Linda Shapiro, James Brinkley, Maya Gupta, Mark Ganther, Steve Seitz. Collaborators at the Seattle Children's Hospital Craniofacial Center: Michael Cunningham, Matthew Speltz, Brent Collett, Carrie Heike, Christa Novak. Research group. This research is supported by the National Science Foundation under grant number DBI

63 Publications
[1] 3D Head Shape Quantification for Infants with and without Deformational Plagiocephaly. I. Atmosukarto, L. G. Shapiro, J. R. Starr, C. L. Heike, B. Collett, M. L. Cunningham, M. L. Speltz. Accepted for publication in The Cleft Palate-Craniofacial Journal, 2009.
[2] 3D Object Classification using Salient Point Patterns With Application to Craniofacial Research. I. Atmosukarto, K. Wilamowska, C. Heike, L. G. Shapiro. Accepted for publication in Pattern Recognition, 2009.
[3] The Use of Genetic Programming for Learning 3D Craniofacial Shape Quantification. I. Atmosukarto, L. G. Shapiro, C. Heike. In International Conference on Pattern Recognition, 2010.
[4] 3D Object Retrieval Using Salient Views. I. Atmosukarto, L. G. Shapiro. In ACM Multimedia Information Retrieval, 2010.
[5] Shape-Based Classification of 3D Head Data. L. Shapiro, K. Wilamowska, I. Atmosukarto, J. Wu, C. Heike, M. Speltz, M. Cunningham. In International Conference on Image Analysis and Processing, 2009.
[6] Automatic 3D Shape Severity Quantification and Localization for Deformational Plagiocephaly. I. Atmosukarto, L. Shapiro, M. Cunningham, M. Speltz. In Proc. SPIE Medical Imaging: Image Processing, 2009.
[7] A Learning Approach to 3D Object Classification. I. Atmosukarto, L. Shapiro. In Proc. S+SSPR, 2008.
[8] A Salient-Point Signature for 3D Object Retrieval. I. Atmosukarto, L. G. Shapiro. In Proc. ACM Multimedia Information Retrieval, 2008.

