The Implementation of Markerless Image-based 3D Features Tracking System Lu Zhang Feb. 15, 2005.



Motivations

Objective: Find more efficient algorithms to implement 3D volume tracking based on 2D image sequences.

Problems in this topic:
1. Huge datasets (for only one data file: 128*128*)
2. Computation time

Applications:
1. On sensors
2. On robotics

Outline

Previous work
- Flowchart of the system
- Algorithms

Current work
- Improved algorithms
- Comparisons

Future work
- Problems unsolved
- How to enhance computation speed

Previous Work

The original image-based 2D dataset. Size: 512*512*40*(R, G, B).

Flowchart and modules:
Input images -> Segmentation -> Feature extraction -> Classification -> Graph building
Feature extraction produces the basic feature classes; graph building produces a directed acyclic graph.

Previous Work: Algorithms

Module 1: Segmentation
- Global thresholding. Problem: a single threshold is applied to all image sequences.
- Iterative region growing method [1]
(The slide shows a before-vs-after comparison of the segmented image sequences.)

Previous Work: Feature Extraction

Module 2: Feature Extraction
After gaining region information from the segmentation stage, we can browse each region to find basic features:
- Areas: the count of all pixels in the region.
- Center of gravity: the center of all points in one region.

Output fields from the feature extraction module: timeID, viewID, mx, my, areas, label.
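The two basic features above can be sketched as follows; this is a minimal illustration, assuming the segmentation stage delivers an integer label image (0 = background), which is not stated on the slide:

```python
import numpy as np

def region_features(labels):
    """Compute area and center of gravity for each labeled region.

    labels: 2D integer array from the segmentation stage, where 0 is
            background and 1..N are region labels (an assumption here).
    Returns {label: (area, (my, mx))}.
    """
    features = {}
    for lab in np.unique(labels):
        if lab == 0:                        # skip background
            continue
        ys, xs = np.nonzero(labels == lab)
        area = ys.size                      # count of all pixels in the region
        my, mx = ys.mean(), xs.mean()       # center of gravity
        features[int(lab)] = (int(area), (float(my), float(mx)))
    return features
```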

Classification / Feature Tracking

Module 3: Classification
- One assumption: the time between successive datasets is small, so we can assume the difference between a pair of views does not vary too much.
- Euclidean distance classifier

Current Work: Improvement

Module 1: Segmentation. Optimal thresholding with the isodata algorithm:
1. Segment the image into two parts using a starting threshold value.
2. Calculate the mean (mf,0) of the foreground pixels and the mean (mb,0) of the background pixels.
3. Compute a new threshold value as the average of these two sample means.
4. Repeat, based upon the new threshold, until the threshold value no longer changes.
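The four steps above can be sketched directly; the starting threshold (the global mean) and the convergence tolerance `eps` are assumptions, since the slide does not specify them:

```python
import numpy as np

def isodata_threshold(img, t0=None, eps=0.5):
    """Isodata (iterative optimal) thresholding.

    Starts from t0 (default: global mean of img), then repeatedly
    replaces the threshold with the average of the foreground and
    background means until it stops changing by more than eps.
    """
    t = float(img.mean()) if t0 is None else float(t0)
    while True:
        fg = img[img > t]     # foreground pixels under current threshold
        bg = img[img <= t]    # background pixels
        if fg.size == 0 or bg.size == 0:
            return t          # degenerate split; keep current threshold
        t_new = (fg.mean() + bg.mean()) / 2.0
        if abs(t_new - t) < eps:
            return t_new
        t = t_new
```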

Previous Work: Algorithms

Module 1: Segmentation
- Region growing. Purpose: to separate overlapping objects. Algorithm: region growing based on the Marr-Hildreth and Sobel edge detectors.
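Edge-constrained region growing can be sketched as below. This is not the exact Marr-Hildreth/Sobel variant from the slide: here the edge map is assumed to be a precomputed boolean mask, and the homogeneity test (intensity within `tol` of the seed) is an assumed simplification.

```python
from collections import deque

import numpy as np

def grow_region(img, seed, edge_mask, tol=10.0):
    """Grow a region from `seed` by BFS over 4-connected neighbors,
    adding pixels that are close in intensity to the seed and are
    not marked as edges in `edge_mask` (boolean, same shape as img)."""
    h, w = img.shape
    seed_val = float(img[seed])
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and not edge_mask[ny, nx]
                    and abs(float(img[ny, nx]) - seed_val) <= tol):
                region[ny, nx] = True
                queue.append((ny, nx))
    return region
```

Because growth stops at edge pixels, two objects whose intensities overlap can still be kept apart by the edge map between them, which is the point of combining region growing with edge detection.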

Current Work: Feature Extraction

Feature Extraction
- Diameter: the distance between the two points on the boundary of the region whose mutual distance is maximum.
- Major axis of the region: the line that minimizes the sum of squared distances from the region's points to the line.

These two features are relatively robust, and the second feature, the major axis, can help detect the reflection part on objects.

Current Work: Feature Extraction

Feature Extraction
- Compute the major axis: PCA.
- Diameter:
1. Rotate the X-Y coordinates so that the new X-axis lies along the major axis.
2. Divide the 2D plane into four quadrants and find the furthest point in each.
3. Calculate the Euclidean distances between these points.
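The PCA-plus-rotation idea above can be sketched as follows. As a simplification (an assumption, not the slide's exact method), the diameter is taken as the extent along the rotated x-axis rather than via the four-quadrant search:

```python
import numpy as np

def major_axis_and_diameter(ys, xs):
    """Return (angle, diameter) of a region given its pixel coordinates.

    PCA: the major axis is the eigenvector of the covariance matrix
    with the largest eigenvalue. The points are then rotated so the
    major axis becomes the new x-axis, and the diameter is estimated
    as the span of the rotated x-coordinates.
    """
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                     # center the region
    cov = np.cov(pts.T)
    evals, evecs = np.linalg.eigh(cov)
    major = evecs[:, np.argmax(evals)]          # major-axis direction
    angle = np.arctan2(major[1], major[0])
    c, s = np.cos(-angle), np.sin(-angle)       # rotate by -angle
    rot = pts @ np.array([[c, -s], [s, c]]).T
    diameter = rot[:, 0].max() - rot[:, 0].min()
    return float(angle), float(diameter)
```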

Current Work: Feature Extraction

Experiment results from the diameter detector.

Current Work: Feature Extraction

Experiment results from the feature extraction module. Output fields: timeID, viewID, mx, my, R, G, B, areas, diameter, angle, label.

Current Work: Feature Extraction

Module 2: Feature Extraction. Problem solved: reflections.
According to the experiment result on the right, for some large objects the reflections, which come from the distance transformation when we pre-projected the 3D objects onto the 2D image plane, are detected as separate objects.
Algorithm: use the property of the major axis. Because the regions belong to the same object, their major axes should be parallel, or at least have similar angles.
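The parallel-axis test above can be sketched as a small predicate; the angle tolerance `tol_deg` is an assumed parameter, and angles are compared modulo 180 degrees because an axis has no direction:

```python
def same_object(angle_a, angle_b, tol_deg=10.0):
    """Return True when two regions' major-axis angles (in degrees)
    are nearly parallel, so a region and its reflection can be merged
    back into one object."""
    diff = abs(angle_a - angle_b) % 180.0
    diff = min(diff, 180.0 - diff)   # axis at 178 deg ~ axis at 2 deg
    return diff <= tol_deg
```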

Current Work: Classification / Feature Tracking

Classification method: Euclidean distance classifier.

Evolution in time-varying images. There are five possible changes of regions between a pair of views:
- Continuation: a feature continues from the dataset at t1 to the next dataset at t2.
- Creation: a new feature appears in t2.
- Dissipation: a feature weakens and becomes part of the background.
- Bifurcation: a feature in t1 separates into two or more features in t2.
- Amalgamation: two or more features merge from one time step to the next.
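A minimal sketch of the Euclidean distance classifier over region centroids, relying on the earlier assumption that regions move little between successive datasets; the `max_dist` cutoff and the dict layout are assumptions, and an unmatched current region corresponds to the "creation" case:

```python
import math

def match_regions(prev, curr, max_dist=20.0):
    """Match each current region to the nearest previous centroid.

    prev, curr: dicts {label: (mx, my)} of region centroids.
    Returns {curr_label: prev_label or None}; None marks a creation.
    """
    matches = {}
    for cl, (cx, cy) in curr.items():
        best, best_d = None, max_dist
        for pl, (px, py) in prev.items():
            d = math.hypot(cx - px, cy - py)   # Euclidean distance
            if d < best_d:
                best, best_d = pl, d
        matches[cl] = best
    return matches
```

Bifurcation and amalgamation would show up here as two current regions matching one previous region, or vice versa, which a fuller implementation would detect by inverting this mapping.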

Current Work

Output from the classification module. A new class preserves the output dataset from the classification module: class LabelTrack. It preserves the following information:
1. viewID: camera position; we move the camera around the object in order to restore the 3D object.
2. timeID: time order; for each camera position we take several time-varying images.
3. classID: class number after correspondence computation between a pair of images in time order.
4. label: the original region number before correspondence computation.
5. R, G, B: the color information for each pixel.
6. Coordinates x, y: the 2D coordinates of the projection of the 3D object.
7. Forward pointer: preserves the labeling information of the previous dataset.
8. Backward pointer: preserves the labeling information of the next dataset.
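The eight fields above can be sketched as a record type; the field names and types here are assumptions read off the slide, not the original declaration:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class LabelTrack:
    """One tracked region from the classification module's output."""
    view_id: int                  # camera position around the object
    time_id: int                  # time order within one camera position
    class_id: int                 # class number after correspondence
    label: int                    # original region number before correspondence
    rgb: Tuple[int, int, int]     # color information
    coords: List[Tuple[int, int]] = field(default_factory=list)  # projected 2D pixels
    forward: Optional["LabelTrack"] = None   # labeling info of the previous dataset
    backward: Optional["LabelTrack"] = None  # labeling info of the next dataset
```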

Future Work: Speed Enhancement

The importance of computation time: the size of my dataset is 512*512*24*40 (time orders) * N (camera positions). In [5], the computation time for 128^3*10 is 7 minutes. In the previous work, I used 4 minutes for 512*512*24*40. In the current work, most I/O operations have been removed, yet the computation time is still around 5 minutes; most of it is consumed by the Marr-Hildreth edge detector.

REFERENCES
[1] Snyder and Cowart, "An Iterative Approach to Region Growing", IEEE Transactions on PAMI, 1983.
[2] Wesley E. Snyder and Hairong Qi, "Machine Vision", Cambridge.
[3] Richard O. Duda, Peter E. Hart, and David G. Stork, "Pattern Classification", Prentice Hall.
[4] Rafael Gonzalez and Richard Woods, "Digital Image Processing", 2nd ed., Prentice Hall.
[5] D. Silver and Xin Wang, "Volume Tracking", Proceedings of Visualization '96, Oct. 27 - Nov. 1, 1996.

Thanks! Any questions?