
ANALYSIS OF A LOCALLY VARYING INTENSITY TEMPLATE FOR SEGMENTATION OF KIDNEYS IN CT IMAGES MANJARI I RAO UNIVERSITY OF NORTH CAROLINA AT CHAPEL HILL

OVERVIEW
- Introduction
- Background
- Materials and Methods
- Results
- Conclusions and Discussion

External Beam Radiation Therapy (image types: CT, MRI, ultrasound, PET, etc.)
- Simulation
- Treatment Planning

Segmentation
- Segmentation is the process of extracting information from an image such that the output contains much less information than the original, but the information it retains is far more relevant to the task at hand.
- Medical image segmentation: extraction of anatomical structures such as the kidney.

Segmentation in Treatment Planning
- Manual segmentation
- Computer-based models
- Combination techniques

Virtual Simulation
- Combination of simulation and treatment planning
- Detect tumor sites from different viewpoints
- Design irradiation beams and their orientation
- Calculate dose distribution
- Presence of the patient not required

Manual Segmentation - Limitations
- Dependent on the experience of the operator
- Inter-operator and intra-operator variability
- Difficult to identify 3D structures on a 2D slice
- Complexity is increased by the presence of infiltrating tumor and disease

OVERVIEW
- Introduction
- Background
- Materials and Methods
- Results
- Conclusions and Discussion

Shape Representation Using Medial Models
- A medial axis is the locus of centers of spheres that are bitangent to the surface of the object being represented
- The center point of each such sphere is a point on the medial surface
- M-reps are a class of medial models

Object Representation via M-reps
- An m-rep consists of a grid of medial atoms, each made of a hub and a pair of spokes
- Each atom in an m-rep defines the following:
  - A medial position x in the mesh
  - Two vectors of length r, where r is the radius of the bitangent sphere inscribed in the object; these vectors are the spokes, which point to the two points of tangency
  - The angle θ formed between each of the spokes and the angular bisector b
  - A frame F = (n, b, b⊥), where n is normal to both b and b⊥, defining the tangent plane to the medial sheet at x
  - The curvature η of the object about the point
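To make the atom parameterization concrete, the following is a minimal Python sketch of a single medial atom. It is illustrative only (the class name and the exact spoke construction are assumptions), not the m-rep software used in the study.

```python
import numpy as np

class MedialAtom:
    """One medial atom: hub position x, spoke length r, spoke angle theta,
    and the frame (n, b, b_perp)."""
    def __init__(self, x, r, theta, n, b):
        self.x = np.asarray(x, dtype=float)    # medial (hub) position
        self.r = float(r)                      # radius of the bitangent sphere / spoke length
        self.theta = float(theta)              # angle between each spoke and the bisector b
        self.n = np.asarray(n, dtype=float)    # normal to the medial sheet at x
        self.b = np.asarray(b, dtype=float)    # angular bisector of the two spokes

    def spokes(self):
        # The two unit spoke directions straddle the medial sheet,
        # at +theta and -theta from the bisector b (assumed construction).
        return (np.cos(self.theta) * self.b + np.sin(self.theta) * self.n,
                np.cos(self.theta) * self.b - np.sin(self.theta) * self.n)

    def boundary_points(self):
        # Implied boundary points: hub plus r times each spoke direction.
        s_plus, s_minus = self.spokes()
        return self.x + self.r * s_plus, self.x + self.r * s_minus

    def frame(self):
        # Completes the frame F = (n, b, b_perp).
        return self.n, self.b, np.cross(self.n, self.b)
```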

Object Representation via M-reps
- M-reps define an object-based intrinsic coordinate system (u, v, t, τ), where:
  - u and v represent the row and column corresponding to the position of the atom in the medial mesh
  - t indicates which side of the medial locus the point lies on; t = -1 or +1 for internal medial points, and it varies around the crest from -1 through 0 at the boundary to +1
  - τ measures the distance along the spokes from the boundary, with τ > 0 outside and τ < 0 inside the boundary, and τ = -1 at the medial locus

Object Representation via M-reps
- The medial locus is a curve for objects in 2D and a sheet for objects in 3D
- It is sparsely sampled to produce an approximate surface of the m-rep, the “implied boundary”

Segmentation using M-reps
- Transformation of the whole kidney model based on both a similarity transform and Principal Geodesic Analysis (PGA): translation, rotation, and scaling of the entire model, plus shape variation along the principal modes of variation of the mean model
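A much-simplified sketch of this model stage is given below, using a PCA-style linearization in place of true PGA (which operates on the curved space of medial atoms). All names, shapes, and the linear form are assumptions for illustration.

```python
import numpy as np

def deform_model(mean_hubs, modes, coeffs, scale, rotation, translation):
    """Apply a linearized shape variation followed by a similarity transform.

    mean_hubs   : (N, 3) hub positions of the mean model
    modes       : (K, N, 3) principal modes of variation (PCA-style stand-in for PGA)
    coeffs      : (K,) shape coefficients, one per mode
    scale       : global scale factor
    rotation    : (3, 3) rotation matrix
    translation : (3,) translation vector
    """
    shaped = mean_hubs + np.tensordot(coeffs, modes, axes=1)  # mean + sum_k c_k * mode_k
    return scale * shaped @ rotation.T + translation          # similarity transform
```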

Segmentation using M-reps
- Deformation of each medial atom in the model: translation, rotation, and scaling of each individual atom of the medial mesh

Segmentation using M-reps
- Atom deformation in this step is based on:
  - Geometric typicality: computed by relating an atom’s coordinates to those predicted by the stage immediately preceding the current deformation stage and by its neighbors at the current stage
  - Geometry-to-image match: computed from a template defined in a ‘collar’ region about the boundary
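The sketch below shows one plausible way the two terms combine into a single objective optimized during atom deformation. The Mahalanobis-style typicality term, the normalized-correlation match, and the weight alpha are all assumptions; the study's exact formulation is not reproduced here.

```python
import numpy as np

def geometric_typicality(atom_params, predicted_params, precision):
    """Penalty for deviating from the prediction made by the previous stage
    and by neighboring atoms (hypothetical Mahalanobis form)."""
    d = atom_params - predicted_params
    return -0.5 * d @ precision @ d

def image_match(profile, template):
    """Normalized correlation between the sampled image profile and the
    intensity template defined in the collar about the boundary."""
    p = (profile - profile.mean()) / (profile.std() + 1e-9)
    t = (template - template.mean()) / (template.std() + 1e-9)
    return float(np.dot(p, t)) / len(p)

def objective(atom_params, predicted_params, precision, profile, template, alpha=0.5):
    # Weighted combination maximized during atom deformation; alpha is assumed.
    return alpha * geometric_typicality(atom_params, predicted_params, precision) \
         + (1 - alpha) * image_match(profile, template)
```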

Segmentation using M-reps
- Fine-scale surface refinement: each surface tile of the implied surface is shifted along its normal to optimize the objective function. This scale was not used in the present study.

Kidney Segmentation
- The kidney sits in a crowded soft-tissue environment, and most soft tissue does not contrast well against its background
- The kidney is at high risk from radiation exposure during treatment of abdominal and pelvic tumors, so segmenting it is important during treatment planning
- It is a simple organ that can be represented by a single-figure m-rep

Kidney as an object of interest for this study
- Variation of intensities along the boundary of the kidney
- Examples

[Figure: axial view and coronal view]

OVERVIEW
- Introduction
- Background
- Materials and Methods
- Results
- Conclusions and Discussion

Training and Target Images
- Training images:
  - To generate a mean kidney m-rep model
  - To generate intensity profiles for the locally varying template

Training Images
- CT images: Siemens Somatom Plus 4 scanner
- Raster resolution (number of pixels per slice): 512 × 512
- Pixel size: ranged from 0.098 mm² to 0.156 mm²

Criteria for selection of Training Images
- Presence of the whole kidney(s)
- Position of the patient during the scan

Criteria for selection of Training Images
- Absence of contrast agent
- Absence of tumor, disease, or kidney stones

Criteria for selection of Training Images
- No more than moderate motion artifacts

Criteria for selection of Training Images
- Margin of at least 2 cm on the superior and inferior edges of the kidney
- Slice thickness less than or equal to 5 mm

OVERVIEW
- Introduction
- Background
- Materials and Methods
- Results
- Conclusions and Discussion

Image Match Based on a ‘Template’
- A template is a target pattern matched at different locations in an image; the response is maximal when the intensity values of the image pixels correlate with the values specified by the template at the same locations
- The template is determined by training the model on a population of user-approved segmentations, the “answers”

Image Match Based on a ‘Template’
- For an m-rep, the template is defined in the collar region about the boundary
- In this study, the width of this region ranged from -0.3r to 0.3r, with 0 at the boundary
- The template thus corresponds to a Gaussian centered about the boundary with a standard deviation of 0.15
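A small sketch of this baseline template and its match score follows, assuming the 11-point profile sampling described later and a simple mean-subtracted correlation response; the exact normalization used in the study is not specified here.

```python
import numpy as np

def gaussian_template(n_samples=11, collar=0.3, sigma=0.15):
    """Gaussian centered on the boundary, sampled across the collar (-0.3r to +0.3r)."""
    tau = np.linspace(-collar, collar, n_samples)   # signed position along the normal, in units of r
    g = np.exp(-0.5 * (tau / sigma) ** 2)
    return g / np.linalg.norm(g)                    # unit-normalize so responses are comparable

def template_response(profile, template):
    """Assumed match score: correlation of the mean-subtracted image profile with the template."""
    p = np.asarray(profile, dtype=float)
    return float(np.dot(p - p.mean(), template))
```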

Locally Varying Intensity Template
- A function of figural position along the boundary of an m-rep
- Generated from a population of training images
- Uses several ‘profile types’, whereas the Gaussian template has only one

Training
- Manual segmentation of kidneys from the training data set
- Conversion of the resulting contours into blurred binary images
- Deformation of an m-rep model into the blurred binary images to generate a mean model with principal geometric modes of variation via PGA
- Segmentation of the training kidneys by deforming the mean model into the gray-scale training images

Training
- Generation of intensity profiles at many points on the surface of the segmented kidneys
- Classification of each intensity profile into one of three categories, based on the best match among three analytic filter types
- Final step: segmentation of kidneys in the target images using the locally varying intensity template to compute image match

Manual segmentation
- Software used for radiation treatment planning, with tools similar to drawing tools
- Contours were traced on each cross-sectional slice
- Intensity windowing was used to enhance the quality of the displayed image

Manual Slice-by-slice contouring using “Anastruct-Editor”

Generation of Training Binary Images
- Output of hand-segmentation: a series of contour ‘stacks’
- Converted to binary images (two intensity values: 0 for black and 1 for white)
- A Gaussian smoothing operator was used to blur the images; the σ of the operator was approximately 1 mm
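A hedged sketch of this blurring step is shown below using scipy.ndimage; it assumes the contour stacks have already been rasterized into a 0/1 volume and converts the ~1 mm sigma to voxel units with the image spacing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_binary(binary_volume, spacing_mm, sigma_mm=1.0):
    """Blur a rasterized binary segmentation with a Gaussian of sigma ~1 mm."""
    binary_volume = np.asarray(binary_volume, dtype=float)   # 0 = background, 1 = kidney
    sigma_voxels = [sigma_mm / s for s in spacing_mm]         # per-axis sigma in voxel units
    return gaussian_filter(binary_volume, sigma=sigma_voxels)
```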

Generation of the mean kidney m-rep
- A single-figure mesh of 15 atoms (5 rows × 3 columns) was fit to each of the training blurred binary images
- The mean model and the corresponding principal modes of variation were obtained from this population of m-reps
- The initial m-rep was replaced by the mean, and the process was iterated until convergence

Generation of the Locally Varying Kidney Template
- Initial analytic filters:
  - Light-to-dark: higher intensities in the kidney than in surrounding structures
  - Dark-to-light: lower intensities in the kidney than in surrounding structures
  - Notch: similar intensities in the kidney and surrounding structures, with a narrow dark region in between
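An illustrative construction of the three filter types, sampled at 11 points across the collar, is given below. The exact analytic forms used in the study are not stated, so these step- and notch-shaped profiles are assumptions.

```python
import numpy as np

def analytic_filters(n_samples=11):
    """Three assumed analytic filter profiles, each unit-normalized."""
    t = np.linspace(-1.0, 1.0, n_samples)                # inside (<0) to outside (>0) of the boundary
    light_to_dark = -np.tanh(4 * t)                      # bright kidney against a darker background
    dark_to_light = np.tanh(4 * t)                       # dark kidney against a brighter background
    notch = 1.0 - 2.0 * np.exp(-0.5 * (t / 0.25) ** 2)   # similar intensities with a narrow dark gap
    filters = {"light_to_dark": light_to_dark,
               "dark_to_light": dark_to_light,
               "notch": notch}
    return {name: f / np.linalg.norm(f) for name, f in filters.items()}  # comparable dot products
```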

Initial Analytic Filters
[Plot: intensity vs. points along the normal for the light-to-dark, dark-to-light, and notch filters]

Image corresponding to the light-to-dark filter

Image corresponding to the dark-to-light filter

Image corresponding to the notch filter

Generation of Profiles for the Template
- The end points of the spokes of the outer atoms in the medial lattice of an m-rep are linked together to form a quadrangular mesh surface
- This surface was subdivided to provide a natural framework for defining the boundary at 2562 points on the surface of the m-rep

Generation of Profiles for the Template
- At each of the 2562 boundary points, a normal was drawn (extending from -0.3r to +0.3r, with 0 at the boundary, as defined by the collar)
- Each normal was sampled at 11 points
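The following is a hedged sketch of sampling one such intensity profile with scipy.ndimage, using trilinear interpolation. It assumes the boundary point, normal direction, and radius r are supplied by the m-rep surface and expressed in voxel coordinates.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_profile(image, point, normal, r, n_samples=11, collar=0.3):
    """Sample the image along the boundary normal at one surface point."""
    normal = np.asarray(normal, dtype=float)
    normal = normal / np.linalg.norm(normal)
    offsets = np.linspace(-collar * r, collar * r, n_samples)   # -0.3r ... +0.3r, 0 at the boundary
    coords = np.asarray(point, dtype=float)[None, :] + offsets[:, None] * normal[None, :]
    return map_coordinates(image, coords.T, order=1)             # (n_samples,) intensity profile
```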

Generation of Profiles for the Template
- Segmentation of the training blurred binary images with the mean kidney m-rep
- Generation of ‘profiles’ for each m-rep in this population, using intensity information from the corresponding gray-scale images

Generation of the Locally Varying Kidney Template
- The response of a profile to each of the three analytic filters is given by the dot product X · Y = Σ x_i y_i, i = 1, 2, …, n, where X is the vector representing the profile and Y is the vector representing the filter
- The highest response among the three filters for each point determined the filter-type classification for that point
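A minimal sketch of this classification step: each profile is assigned the filter type with the highest dot-product response. Here `filters` is a dict of unit-normalized filter vectors, as in the earlier sketch.

```python
import numpy as np

def classify_profile(profile, filters):
    """Return the winning filter type and the full set of dot-product responses."""
    profile = np.asarray(profile, dtype=float)
    responses = {name: float(np.dot(profile, f)) for name, f in filters.items()}
    return max(responses, key=responses.get), responses
```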

Classification of the profiles according to responses
[Figure: profiles classified as light-to-dark, dark-to-light, or notch]

Converged Mean Filters
[Plot: intensity vs. points along the normal for the converged light-to-dark, dark-to-light, and notch filters]

Filter distribution on the surface of the left and right kidneys in the templates
[Figure: left kidney and right kidney]

Segmentation of Target Images

[Figure: axial view and coronal view]

Segmentation of Target Images
[Figure: axial view and coronal view]

Accuracy Evaluation
- VALMET: a software package for voxel-based comparison of segmentations
  - Volume overlap: the intersection of the two volumes divided by their union
  - Maximum surface distance (Hausdorff distance): the largest distance between the two surfaces (not symmetric)
  - Mean absolute surface distance: how much, on average, the two surfaces differ (also not symmetric)
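VALMET itself is the package used in the study; purely to illustrate the three definitions above, here is a NumPy/SciPy sketch for two binary masks on the same voxel grid. The surface-extraction step and directed (one-sided) distances are assumptions consistent with the "not symmetric" note.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface_voxels(mask):
    """Boundary voxels of a binary mask: voxels removed by one erosion."""
    return mask & ~binary_erosion(mask)

def directed_surface_distances(a, b, spacing_mm=(1.0, 1.0, 1.0)):
    """Distances from every surface voxel of `a` to the nearest surface voxel of `b`
    (directed, hence not symmetric)."""
    d_to_b = distance_transform_edt(~surface_voxels(b), sampling=spacing_mm)
    return d_to_b[surface_voxels(a)]

def valmet_style_metrics(a, b, spacing_mm=(1.0, 1.0, 1.0)):
    a, b = a.astype(bool), b.astype(bool)
    overlap = (a & b).sum() / float((a | b).sum())       # volume overlap: intersection / union
    d = directed_surface_distances(a, b, spacing_mm)
    return overlap, d.max(), d.mean()                    # overlap, max surface distance, mean surface distance
```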

OVERVIEW
- Introduction
- Background
- Materials and Methods
- Results
- Conclusions and Discussion

Comparison using VALMET – Gaussian vs. Locally Varying Template
- 24 target cases (12 left and 12 right kidneys)
- Segmentations were compared to human segmentations performed by two experts
- Average increase in volume overlap over all cases: 1.3%
- Mean surface separation between human segmentations and the m-rep segmentation: cm for the Gaussian template and 0.32 cm for the locally varying template

Comparison using VALMET – Gaussian vs. Locally Varying Template
[Chart: mean surface separation, Rater A vs. m-rep and Rater B vs. m-rep]

Results (Contd.)
[Figure: axial view and coronal view]

Results (Contd.)
[Figure: axial view and coronal view]

Results (Contd.) – ‘Error’ in Human Segmentation
[Figure: sagittal view and coronal view]

Results (Contd.)
[Figure: coronal view and sagittal view]

OVERVIEW
- Introduction
- Background
- Materials and Methods
- Results
- Conclusions and Discussion

Analysis of the Results
- Locally varying template-based segmentation showed improvement in 65% of the cases
- A greater degree of automation was achieved in the entire segmentation process

Analysis of the Results
[Figure: coronal view and axial view]

Analysis of the Results
[Figure: axial view and coronal view]

Instances of Human Intervention
[Figure: axial view and coronal view]

Instances of Human Intervention
[Figure: axial view and coronal view]

Future Directions
- More than three filter types to define the template
- Statistical evaluation of the variation of profiles along the boundary
- Templates based on ‘weighted’ profiles, with responses that are neighbor-dependent
- A higher-density m-rep model