1
Texture Analysis for Pulmonary Nodules Interpretation and Retrieval
By Ryan Datteri, MedIX program. Advisors: Dr. Daniela Raicu, Dr. Jacob Furst
2
Motivation
Increase the accuracy of medical diagnostics
Give radiologists comparable and similar images
Make diagnostics less subjective
Discover the best texture features to use for pulmonary nodule retrieval
3
BRISC Content Based Image Retrieval System (CBIRS)
Four retrieval methods implemented: Gabor filters, Markov random fields, local co-occurrence, and global co-occurrence
All methods are texture based
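Of the four models, the Gabor approach is the easiest to illustrate in a few lines. The sketch below is generic, not BRISC's actual implementation; the kernel size, wavelengths, orientations, and the mean/std summary of each filter response are all illustrative assumptions.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Build a real-valued Gabor kernel: a cosine carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by the filter orientation theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength)
    return envelope * carrier

def gabor_features(image, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4),
                   wavelengths=(4, 8)):
    """Mean/std of filter responses over a small bank of orientations and scales."""
    feats = []
    for theta in thetas:
        for lam in wavelengths:
            k = gabor_kernel(9, lam, theta, sigma=lam / 2.0)
            # Valid-mode 2-D correlation via explicit sliding windows.
            h, w = k.shape
            windows = np.lib.stride_tricks.sliding_window_view(image, (h, w))
            response = np.einsum('ijkl,kl->ij', windows, k)
            feats.extend([response.mean(), response.std()])
    return np.array(feats)
```

Each nodule image then yields a fixed-length texture vector (here 16 values) that can be compared with the vector similarity measures described later.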
4
Diagram of BRISC
5
Texture Retrieval of Gray-Level Images
Nodules vary widely in shape and size
Use texture to retrieve similar nodules
Precision = (# of retrieved instances of the query nodule) / (# of retrieved images)
6
Similarity Measures
For histogram comparison: Jeffrey divergence, chi-square
For vector comparison: Euclidean, Manhattan, Chebyshev
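The five measures translate directly into code. A minimal NumPy sketch, assuming histograms are normalized 1-D arrays and feature vectors are same-length arrays; the zero-bin guards are an implementation assumption:

```python
import numpy as np

def euclidean(a, b):
    return float(np.sqrt(np.sum((a - b) ** 2)))

def manhattan(a, b):
    return float(np.sum(np.abs(a - b)))

def chebyshev(a, b):
    return float(np.max(np.abs(a - b)))

def chi_square(p, q):
    # Chi-square histogram distance; skip bins that are empty in both histograms.
    denom = p + q
    mask = denom > 0
    return float(np.sum((p[mask] - q[mask]) ** 2 / denom[mask]))

def jeffrey_divergence(p, q):
    # Symmetrised KL divergence against the bin-wise mean m = (p + q) / 2.
    m = (p + q) / 2.0
    mask = (p > 0) & (q > 0)
    return float(np.sum(p[mask] * np.log(p[mask] / m[mask]) +
                        q[mask] * np.log(q[mask] / m[mask])))
```

All five return 0 for identical inputs and grow as the inputs diverge, so any of them can rank the database against a query.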
7
Local Co-occurrence Variables
Intensity bins: allow statistical relevance to appear in the co-occurrence matrices
Histogram bins: used to compute the similarity measure
Window size: the neighborhood over which co-occurrence is computed around each relevant pixel
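The interplay of intensity binning and window size can be sketched as follows. This is a minimal illustration, not BRISC's exact implementation: the single (0, 1) pixel offset, the assumed [0, 256) intensity range, and the boundary handling are all assumptions.

```python
import numpy as np

def local_cooccurrence(image, center, window=5, levels=64, offset=(0, 1)):
    """Co-occurrence matrix of binned intensities in a window around one pixel."""
    # Quantise intensities into `levels` bins (assumes values in [0, 256)).
    binned = (image.astype(np.int64) * levels) // 256
    half = window // 2
    r, c = center
    patch = binned[r - half:r + half + 1, c - half:c + half + 1]
    dr, dc = offset
    mat = np.zeros((levels, levels))
    # Count how often intensity bin pairs co-occur at the given offset.
    for i in range(patch.shape[0]):
        for j in range(patch.shape[1]):
            ii, jj = i + dr, j + dc
            if 0 <= ii < patch.shape[0] and 0 <= jj < patch.shape[1]:
                mat[patch[i, j], patch[ii, jj]] += 1
    total = mat.sum()
    return mat / total if total else mat
```

A smaller window gives a more local texture description; fewer intensity bins make each matrix cell statistically better populated, which is the trade-off the testing slide explores.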
8
Local Co-occurrence Testing
Window sizes: 5x5, 7x7, 9x9, 11x11, and 13x13
Number of intensity bins: 64, 128, 256, and 512
Number of histogram bins: 8, 16, 32, 64, 80, and 96
9
Best Found Attributes
2,020 images queried over 149 unique nodules
Window size = 5x5
Number of histogram bins = 96
Intensity binning = 64 bins
10
LIDC Diagram
[Diagram: a case/patient contains nodules; each nodule spans several slices; each slice carries per-radiologist contours and ratings]
11
Evaluation Method #1
Precision = (# of retrieved instances of the query nodule) / (# of retrieved images)
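This definition is a one-liner in code. The `relevance` mapping from image id to nodule id is a hypothetical helper structure, assumed here so that "instance of the query nodule" can be checked:

```python
def precision_at_k(retrieved, query_nodule_id, relevance):
    """Precision = relevant retrieved / total retrieved.

    `relevance` maps an image id to its nodule id; a retrieved image is
    relevant if it comes from the same nodule as the query."""
    if not retrieved:
        return 0.0
    hits = sum(1 for img in retrieved if relevance[img] == query_nodule_id)
    return hits / len(retrieved)
```

For example, retrieving four images of which two belong to the query nodule gives a precision of 0.5.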
12
Results: Local Co-occurrence
13
Results: Local vs. Global
14
Texture Model Comparison
15
Image Size and Radiologist Agreement
Image size: see how the texture models perform on larger images
Radiologist agreement: see which texture models are more attuned to radiologists’ perceptions
16
Image Size
17
Radiologist Agreement
18
Combination of Features
Used all texture models: Gabor filters, Markov random fields, local co-occurrence, global co-occurrence
The optimal similarity measure was used for each model
19
Combination of Features Results
20
Principal Component Analysis
Reduces the number of features
Combined the pixel-based texture models: Gabor filters, MRF, local co-occurrence
Kept the components that accounted for 99% of the variability in the data
Reduced the data to 25 components from 3,712 features (0.67% of the original size)
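The 99%-variability reduction can be sketched with an SVD-based PCA. This is a generic implementation, not necessarily the one used in the experiments:

```python
import numpy as np

def pca_reduce(X, variance=0.99):
    """Project rows of X onto the fewest principal components that
    retain the given fraction of total variance."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data; squared singular values are component variances.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / np.sum(S ** 2)
    # Smallest k whose cumulative explained variance reaches the threshold.
    k = int(np.searchsorted(np.cumsum(explained), variance) + 1)
    return Xc @ Vt[:k].T, k
```

On a 2,020 x 3,712 feature matrix, the same call would return the reduced sample matrix together with the number of retained components (25 in the reported experiment).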
21
PCA Results: 1 Item Retrieved
22
PCA Results
23
PCA^2
Performed PCA on each individual texture model
Reduced feature set = components that accounted for 99% of the variability
Applied PCA again to the resulting data set
Reduced the data to 119 components from 3,712 features (3.2% of the original feature set size)
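The two-stage scheme can be sketched as PCA applied per texture model and then again to the concatenation of the per-model components. A generic sketch, with the 99% threshold as described above and the list of per-model feature blocks as an assumed input layout:

```python
import numpy as np

def pca_99(X):
    """Project onto the smallest set of components covering 99% of variance."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    cum = np.cumsum(S ** 2) / np.sum(S ** 2)
    k = int(np.searchsorted(cum, 0.99) + 1)
    return Xc @ Vt[:k].T

def pca_squared(feature_blocks):
    """Stage 1: PCA per texture model; stage 2: PCA on the concatenation."""
    reduced = [pca_99(block) for block in feature_blocks]
    return pca_99(np.hstack(reduced))
```

Running PCA per model first lets each texture model keep its own dominant structure before the models compete in the second, combined reduction.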
24
PCA^2 Results
25
Evaluation Method #2
Evaluate using radiologists’ annotations; results should relate to radiologists’ perceptions
Sort the database by Jaccard similarity on the radiologists’ annotations
Sort another list using a texture model
Assign scores from n down to 1 (n to the first item in each list, n - 1 to the second, etc.)
26
#2 Continued
Use a small n: radiologists will only look at a small number of retrieved results
Each item's assigned score is multiplied by the score of the same image in the other list
Best case occurs when the lists are exactly the same; worst case, with no matching item in either list, scores 0
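Putting the two slides together, one plausible reading of the scoring scheme looks like this. Normalizing by the identical-lists best case is an assumption, added so the result reads as a percentage like the values reported later:

```python
def eval2_score(gt_list, model_list, n):
    """Top-n rank agreement between a ground-truth ordering and a model ordering.

    Each list's top n items get scores n, n-1, ..., 1; an item contributes the
    product of its scores in both lists (0 if it is missing from either top n)."""
    gt_rank = {item: n - i for i, item in enumerate(gt_list[:n])}
    total = 0
    for i, item in enumerate(model_list[:n]):
        total += (n - i) * gt_rank.get(item, 0)
    # Best case: identical lists give the sum of squared scores.
    best = sum((n - i) ** 2 for i in range(n))
    return total / best
```

Identical top-n lists score 1.0; fully disjoint lists score 0, matching the best- and worst-case bullets above.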
27
Diagram of Evaluation Method #2
Annotation ground-truth example: results in a value of 379 and 98.44% precision
‘Distance’ for the left list = value computed from Jaccard similarity on the radiologists’ annotations
‘Distance’ for the right list = value from the similarity measure used for a specific texture model
Score = reversed rank
28
Texture Models Evaluated Using Method #2
29
Texture Models: Evaluation Methods #1 vs. #2
Ranking under Method #1: 1. All Features, 2. Gabor, 3. Markov, 4. Local, 5. Global
Ranking under Method #2: 1. All Features, 2. Markov, 3. Gabor, 4. Local, 5. Global
The texture models rank in nearly the same order under both methods, which reaffirms Evaluation Method #1
This suggests some correlation between radiologists’ perceptions and our texture models
30
Radiologist Agreement
31
Similarity Measures Evaluated Using the Second Method
              Global    Local     Gabor     Markov
Euclidean     25.75%    30.34%    26.40%
Manhattan     25.61%
Chebyshev     25.43%
Jeffrey Div.  30.25%    30.81%    24.40%
Chi-Square                        31.55%    32.68%
32
Conclusions
Local co-occurrence performs better than global co-occurrence, and comparably to Gabor and Markov
The combination of textures performed the best
PCA on all features resulted in 0.67% of the original features with a 3% loss in precision
PCA on each separate texture model, then on the combination of the resulting components, had a 1% loss in precision with fewer features
The evaluation methods are valid; Evaluation #2 allows comparison, but not ‘how’
33
Conclusions Texture Models Evaluation Method 1 Evaluation Method 2
Texture Models Evaluation Method 1 Evaluation Method 2 Radiologist Annotation (Eval. #2) 4 Agree % Behind The Best 1 All Features 91.14 33.64 70.14 0% 2 MRF 87.00 32.68 70.42 -4.82% 3 Gabor 88.00 31.55 67.17 -8.20% 4 Local Co-occurrence 64.21 30.25 70.89 -29.57% 11/17/2018
37
Next Steps
Combine all features with the radiologists’ annotations and see how the results improve
Incorporate relevance feedback
Clustered searching
Textons
Develop the ‘how’ for Evaluation #2
38
References
1. M. Lam, T. Disney, M. Pham, D. Raicu, J. Furst, R. Susomboon, “Content-Based Image Retrieval for Pulmonary Computed Tomography Nodule Images”, SPIE Medical Imaging Conference, San Diego, CA, February 2007.
2. Cancer Facts and Figures, American Cancer Society, 2006.
3. What Are the Key Statistics about Lung Cancer?, American Cancer Society.
4. A. Corboy, W. Tsang, D. Raicu, J. Furst, “Texture-Based Image Retrieval for Computerized Tomography Databases”, IEEE Transactions on Information Technology in Biomedicine, 2006.
5. A. Materka and M. Strzelecki, “Texture Analysis Methods – A Review”, tech. rep., Technical University of Lodz, Institute of Electronics, COST B11 report.
6. T. Andrysiak and M. Choras, “Image Retrieval Based on Hierarchical Gabor Filters”, International Journal of Applied Computer Science 15(4), 2005.
7. C. Chen, L. Pau, and P. Wang (eds.), The Handbook of Pattern Recognition and Computer Vision (2nd Edition), World Scientific Publishing Company, 1998.
8. J. Malik, S. Belongie, J. Shi, and T. Leung, “Textons, Contours and Regions: Cue Integration in Image Segmentation”, Proc. IEEE Intl. Conf. Computer Vision, Vol. 2, Corfu, Greece.
9. C. Wei, C. Li, R. Wilson, “A General Framework for Content-Based Medical Image Retrieval with its Application to Mammograms”, Proc. SPIE Medical Imaging 2005, April 2005.
10. S. K. Kinoshita, P. M. de Azevedo-Marques, R. R. Pereira Júnior, J. A. H. Rodrigues, and R. M. Rangayyan, “Content-Based Retrieval of Mammograms Using Visual Features Related to Breast Density Patterns”, Journal of Digital Imaging, 2007.
11. C.-R. Shyu, C. Brodley, A. Kak, A. Kosaka, A. M. Aisen, and L. S. Broderick, “ASSERT: A Physician-in-the-Loop Content-Based Retrieval System for HRCT Image Databases”, Computer Vision and Image Understanding 75, July/August 1999.
39
Special Thanks
DePaul University MedIX Program
National Science Foundation
Dr. Raicu
Dr. Furst
40
Questions?