Tracking Tubular Shaped Objects
CISMM: Computer Integrated Systems for Microscopy and Manipulation
Project Lead: Russell M. Taylor II
Investigators: Yonatan Fridman, Stephen Pizer
Collaborators: Dorothy Erie, Garrett Matthews, Martin Guthold
October 2002

Overview

Cores (defined under "The Mathematics of Cores" below) are used to automatically segment, or outline, objects in grayscale images (Fig. 1). A core provides geometric information about the desired object: at each point along the core we know its location, the radius of the object, and the orientation of the core tangent direction.

An object's core also provides a complete representation of the object (Fig. 2): for each core point, place a disk whose center is that core point and whose radius is the one indicated there. The union of these disks is exactly the desired object (see the sketch below).

Figure 1: A grayscale image of a strand of DNA segmented using cores. The red curve is the core; the blue curves are the DNA edges implied by the core location and radius.

Figure 2: Reconstruction of a tube from its core.
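To make the disk-union reconstruction concrete, here is a minimal sketch (not part of the original poster) that rasterizes a binary object mask from a sampled core. The (x, y, r) sample format, the image shape, and the function name are assumptions made for illustration.

```python
import numpy as np

def reconstruct_from_core(core_samples, shape):
    """Rasterize the union of disks implied by sampled core points.

    core_samples: iterable of (x, y, r) triples -- core point location and
                  object radius at that point (illustrative format).
    shape:        (rows, cols) of the output binary mask.
    """
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape, dtype=bool)
    for x, y, r in core_samples:
        # A pixel belongs to the object if it falls inside any core disk.
        mask |= (xx - x) ** 2 + (yy - y) ** 2 <= r ** 2
    return mask

# Example: a synthetic core running diagonally with slowly varying radius.
core = [(20 + i, 20 + i, 5 + 0.05 * i) for i in range(60)]
tube = reconstruct_from_core(core, shape=(100, 100))
```

Overlaying the circle implied by each (x, y, r) sample on the original image gives the kind of core-plus-edges display shown in Fig. 1.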

The Mathematics of Cores

A core is a medial axis (backbone) of an object in a blurred image. Why blur the image?
- To blur out noise so that the object is easier to locate.
- So that spurious bumps and dimples on the object boundary don't lead to wiggles in the backbone.

Figure 4: Blurring a noisy image.

Tracking Tubes in 2D Using Cores

A core is computed using a marching algorithm (a rough code sketch follows this list):
- Start by manually estimating a core point at one end of the object and placing a derivative of a 2D Gaussian at two "sail" points. Each Gaussian derivative acts as an edge detector: when convolved with the image, it gives a strong response if it is aligned with an object edge.
- Refine the estimated core point by simultaneously optimizing the two Gaussian derivatives' fits to the image with respect to location (x, y), radius (r), and orientation (t).
- March along the core by taking a small step in the tangent direction and repeating the optimization.

Figure 5: Computing a core; the parameters being optimized are the core point location (x, y), the radius r, and the tangent direction t.

Note: 2D tube tracking software is available for download from the web page given below.
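Here is a rough, non-authoritative sketch of that marching loop, assuming a grayscale image stored in a NumPy array and a bright tube on a dark background. The blur scale, step size, optimizer choice, and all function names are illustrative assumptions, not the poster's actual implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates
from scipy.optimize import minimize

def sail_response(grad_row, grad_col, x, y, r, theta):
    """Summed edge response at the two sail tips of a candidate core point.

    The sails sit at distance r from (x, y), perpendicular to the tangent
    direction theta.  At each tip we take the derivative of the blurred image
    along the outward sail direction; for a bright tube on a dark background
    this is negative at the boundary, so its negation is large when both
    sails lie on the object edges.
    """
    nx, ny = -np.sin(theta), np.cos(theta)           # unit normal to the tangent
    resp = 0.0
    for s in (+1.0, -1.0):                           # the two sails
        px, py = x + s * r * nx, y + s * r * ny      # sail tip position
        dcol = map_coordinates(grad_col, [[py], [px]], order=1)[0]
        drow = map_coordinates(grad_row, [[py], [px]], order=1)[0]
        resp += -s * (dcol * nx + drow * ny)         # outward derivative, negated
    return resp

def track_core(image, x0, y0, r0, theta0, sigma=2.0, step=1.0, n_steps=200):
    """March along the core: refine (x, y, r, theta), step along the tangent, repeat."""
    # Derivative-of-Gaussian filtering, done once and reused for every evaluation.
    grad_row = gaussian_filter(image.astype(float), sigma, order=(1, 0))
    grad_col = gaussian_filter(image.astype(float), sigma, order=(0, 1))
    x, y, r, theta = float(x0), float(y0), float(r0), float(theta0)
    core = []
    for _ in range(n_steps):
        # Refine the current estimate by maximizing the sail response.
        res = minimize(lambda p: -sail_response(grad_row, grad_col, *p),
                       x0=[x, y, r, theta], method="Nelder-Mead")
        x, y, r, theta = res.x
        core.append((x, y, r, theta))
        # Take a small step in the tangent direction before the next refinement.
        x, y = x + step * np.cos(theta), y + step * np.sin(theta)
    return core
```

In a fuller implementation the blur scale would typically be tied to the current radius estimate rather than fixed, but the structure of the loop (optimize, step along the tangent, repeat) is the part the poster describes.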
Extending Tube Tracking to 3D

Cores can also be extended to 3D for segmenting objects in volume images. Instead of using a core point with two sails to locate the object of interest, each core point now has a set of radially symmetric sails (Fig. 6). This representation makes the limiting assumption that the object of interest is perfectly tubular (i.e., an extended object with circular cross-sections). A one-dimensional derivative of a three-dimensional Gaussian is placed at each sail tip, where the derivative is taken in the direction of the sail. Due to the added information provided by examining the image at more than two sail locations, the 3D algorithm is more robust to noise than the 2D algorithm. Figure 7 shows an accurately located core in an extremely noisy synthetic 3D image of a tube. A sketch of this radially symmetric sail response appears after the references.

Figure 6: A representation of a 3D tube detector. x represents the coordinates of the core point (x, y, z) and t represents the core tangent direction.

Figure 7: Slice of a noisy 3D image of a tube whose axis lies entirely in the displayed plane. The red curve is the core of the tube.

Applications

Problem: Determine the bending angle induced by the heating of a bilayer strand (Fig. 3a).
Solution: Compute the difference between the orientations of the core tangents at the two ends of the strand. The blue lines show the tangents of interest; the red curve shows the computed core.

Problem: Determine how a strand of DNA is bound to a protein (Fig. 3b).
Solution: Find the DNA bending angle at the protein by analyzing the core's tangent directions (blue arrows); determine whether the DNA wraps around the protein by comparing the expected length of the DNA to the computed length of its core.

Figure 3a: A heated carbon nanotube with an aluminum layer.

Figure 3b: A protein (purple) bound to a strand of DNA.

Acknowledgments

This work builds on other work done in MIDAG, including that of Aylward, Bullitt, Eberly, Fritsch, Furst, Morse, and Pizer. In particular, Aylward and Bullitt [1], [2] use a multi-scale image intensity ridge traversal method in which position and width are searched for separately and orientation is defined implicitly. See [3] for more information on the mathematics of cores.

References

1. Aylward SR, Bullitt E (2002). Initialization, noise, singularities, and scale in height ridge traversal for tubular object centerline extraction. IEEE Transactions on Medical Imaging, 21.
2. Aylward SR, Pizer SM, Bullitt E, Eberly D (1996). Intensity ridge and widths for tubular object segmentation and description. IEEE Workshop on Mathematical Methods in Biomedical Image Analysis.
3. Pizer SM, Eberly D, Morse BS, Fritsch DS (1998). Zoom-invariant vision of figural shape: The mathematics of cores. Computer Vision and Image Understanding, 69.
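As noted under "Extending Tube Tracking to 3D", here is a rough sketch of the radially symmetric sail response for a single candidate 3D core point. The [z, y, x] volume indexing, the fixed blur scale, the use of eight sails, and the function name are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def tube_response_3d(volume, center, tangent, radius, n_sails=8, sigma=2.0):
    """Radially symmetric sail response for one candidate 3D core point.

    volume:  3D NumPy array indexed [z, y, x] (assumed convention).
    center:  candidate core point (x, y, z).
    tangent: core tangent direction t.
    radius:  candidate tube radius r.
    Sails are spaced evenly around the circle of radius r in the plane
    perpendicular to t; at each sail tip the derivative of the blurred volume
    is taken along the outward sail direction.
    """
    volume = np.asarray(volume, dtype=float)
    t = np.asarray(tangent, dtype=float)
    t /= np.linalg.norm(t)
    # Two unit vectors spanning the plane perpendicular to the tangent.
    helper = np.array([1.0, 0.0, 0.0]) if abs(t[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(t, helper)
    u /= np.linalg.norm(u)
    v = np.cross(t, u)
    # Derivative-of-Gaussian filtering of the volume along each axis.
    gz = gaussian_filter(volume, sigma, order=(1, 0, 0))
    gy = gaussian_filter(volume, sigma, order=(0, 1, 0))
    gx = gaussian_filter(volume, sigma, order=(0, 0, 1))
    c = np.asarray(center, dtype=float)
    resp = 0.0
    for k in range(n_sails):
        phi = 2.0 * np.pi * k / n_sails
        sail_dir = np.cos(phi) * u + np.sin(phi) * v        # outward radial direction
        tip = c + radius * sail_dir                          # sail tip in (x, y, z)
        coords = [[tip[2]], [tip[1]], [tip[0]]]              # reorder to [z, y, x]
        grad = np.array([map_coordinates(gx, coords, order=1)[0],
                         map_coordinates(gy, coords, order=1)[0],
                         map_coordinates(gz, coords, order=1)[0]])
        # For a bright tube on a dark background the outward derivative is
        # negative at the boundary, so negate it to get a maximizable response.
        resp += -np.dot(grad, sail_dir)
    return resp / n_sails
```

While marching, this response would be maximized over the core point, radius, and tangent direction exactly as in the 2D sketch, with the blurred gradient volumes computed once up front rather than per call.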