Tracking Turbulent 3D Features Lu Zhang Nov. 10, 2005.


Motivations Introduction Visualization techniques can help scientists identify observed phenomena in both scientific simulations and practical settings. Applications Storms, hurricanes, ocean waves, clouds… Common features: multiple modes of evolution, time-varying, huge datasets, non-rigid

Outline Segmentation and region growing  Thresholding  Region growing Feature extraction  Different features Classification and feature tracking  Tracking methods  Classes and structures

Overview The original dataset Flowchart and modules Input images Segmentation Feature extraction Classification Graph building Basic feature classes Directed acyclic graph

Segmentation and Region growing Thresholding Global thresholding vs. optimal thresholding Region growing method Iterative region growing method [1]
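The two segmentation steps above can be sketched in a few lines of Python. This is a minimal illustration, not the iterative algorithm of [1]: a fixed global threshold binarizes the image, and a breadth-first flood fill grows one region from a seed. Function names and the nested-list image representation are assumptions for the sketch.

```python
from collections import deque

def global_threshold(image, t):
    """Binarize a 2D grayscale image (list of lists) with a fixed global threshold."""
    return [[1 if px >= t else 0 for px in row] for row in image]

def region_grow(mask, seed, connectivity=4):
    """Grow a region of foreground pixels (value 1) from a seed by BFS."""
    h, w = len(mask), len(mask[0])
    sy, sx = seed
    if mask[sy][sx] != 1:
        return set()
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if connectivity == 8:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    region, frontier = {seed}, deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for dy, dx in offsets:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] == 1 and (ny, nx) not in region:
                region.add((ny, nx))
                frontier.append((ny, nx))
    return region
```

An optimal threshold (e.g. Otsu's method) would replace the fixed `t`; the region-growing loop stays the same.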

Segmentation and Region growing Region growing Basic features: timeID viewID x y R G B

Feature extraction Feature structure After obtaining region information from the segmentation stage, we can traverse each region to compute its basic features  Area – the count of all pixels in the region.  Center of gravity – the centroid of all points in the region.  Diameter – the maximum distance between any two points on the boundary of the region.  Perimeter – the number of pixels under each edge label.  Fourier descriptors – the Fourier transform of the boundary points.
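These per-region features can be sketched as follows (Fourier descriptors omitted for brevity). A hypothetical helper `region_features` takes a region as a set of (x, y) pixel coordinates; the brute-force diameter and the 4-neighbour perimeter are simplifications, since a real implementation would work on boundary points and edge labels.

```python
import math

def region_features(pixels):
    """Compute basic region features from a set of (x, y) pixel coordinates."""
    pts = list(pixels)
    pset = set(pts)
    area = len(pts)
    # Center of gravity: mean of all points in the region.
    mx = sum(x for x, _ in pts) / area
    my = sum(y for _, y in pts) / area
    # Diameter: maximum pairwise distance (brute force over all pixels;
    # restricting to boundary points gives the same value faster).
    diameter = max(math.dist(p, q) for p in pts for q in pts)
    # Perimeter: pixels with at least one 4-neighbour outside the region.
    perimeter = sum(
        1 for (x, y) in pts
        if any((x + dx, y + dy) not in pset
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
    )
    return {"area": area, "center": (mx, my),
            "diameter": diameter, "perimeter": perimeter}
```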

Feature extraction Output from the feature extraction module viewID mx my area labeling timeID …

Classification / Feature tracking Classification After the feature extraction module, we obtain a list of feature information for each region in the different views. One assumption Because the views are strictly time-ordered, we can assume that a pair of consecutive views does not differ too much.

Classification / Feature tracking Evolution in time-varying images There are five possible changes of a region between a pair of views. Continuation: a feature continues from the dataset at t1 to the next dataset at t2 Creation: a new feature appears in t2 Dissipation: a feature weakens and becomes part of the background Bifurcation: a feature in t1 separates into two or more features in t2 Amalgamation: two or more features merge from one time step to the next
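The five event types above can be derived mechanically once correspondences between t1 and t2 are known. A minimal sketch, assuming correspondences are given as (id_t1, id_t2) pairs with None on one side for unmatched features (the function name and encoding are illustrative, not from the talk):

```python
from collections import Counter

def classify_events(matches):
    """Classify region changes between two time steps.

    `matches` is a list of (id_t1, id_t2) correspondence pairs; either side
    may be None for an unmatched feature.
    """
    fwd = Counter(a for a, b in matches if a is not None and b is not None)
    bwd = Counter(b for a, b in matches if a is not None and b is not None)
    events = {}
    for a, b in matches:
        if a is None:
            events[("t2", b)] = "creation"        # new feature appears in t2
        elif b is None:
            events[("t1", a)] = "dissipation"     # feature fades into background
        elif fwd[a] > 1:
            events[("t1", a)] = "bifurcation"     # one t1 feature -> several in t2
        elif bwd[b] > 1:
            events[("t2", b)] = "amalgamation"    # several t1 features -> one in t2
        else:
            events[("t1", a)] = "continuation"    # one-to-one match
    return events
```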

Classification / Feature tracking Classification Several pattern recognition methods can be used here, e.g.  Euclidean distance classifier  KNN classifier: find the k-nearest-neighbor feature clusters in dataset t1 and dataset t2
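The Euclidean distance classifier reduces to nearest-neighbour matching on feature vectors. A sketch, assuming each feature is summarized by a numeric vector such as (mx, my, area); the function name and dict-based interface are assumptions:

```python
import math

def match_features(feats_t1, feats_t2):
    """Match each t1 feature to its nearest t2 feature by Euclidean distance
    between feature vectors (e.g. centroid x, centroid y, area)."""
    return [
        (fid1, min(feats_t2, key=lambda fid2: math.dist(v1, feats_t2[fid2])))
        for fid1, v1 in feats_t1.items()
    ]
```

A KNN variant would keep the k smallest distances per feature instead of only the minimum, then vote among the candidates.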

Classification / Feature tracking Output from the Classification module I create a new class, LabelTrack, to preserve the output dataset from the Classification module. It preserves the following information: 1. viewID: camera position; we move the camera around the object in order to reconstruct the 3D object. 2. timeID: time order; for each camera position we take several time-varying images. 3. classID: class number after the correspondence computation between a pair of images in time order. 4. Label: the original region number before the correspondence computation. 5. R, G, B: the color information for each pixel. 6. Coordinates x, y: the 2D coordinates of the projection of the 3D object. 7. Forward pointer: preserves the labeling information of the previous dataset. 8. Backward pointer: preserves the labeling information of the next dataset.
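The LabelTrack record described above can be sketched as a Python dataclass. Field names follow the slide; the exact types and the list representation of the forward/backward pointers are assumptions, since the talk does not give the class body.

```python
from dataclasses import dataclass, field

@dataclass
class LabelTrack:
    """One tracked region from the Classification module (fields per the talk)."""
    viewID: int        # camera position index
    timeID: int        # time step within this camera position
    classID: int       # class number after correspondence computation
    label: int         # original region number before correspondence
    rgb: tuple         # (R, G, B) color information
    xy: tuple          # 2D coordinates of the projection of the 3D object
    forward: list = field(default_factory=list)   # labeling info, previous dataset
    backward: list = field(default_factory=list)  # labeling info, next dataset
```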

Computation Time The importance of computation time Size of the dataset: 512×512×24×40 (time steps) × N (camera positions). In [5], the resolution is 128³ with a computation time of 40 minutes. In my project, processing 512×512×24×40 takes 3 minutes. Because this is the framework of the whole project, there are many I/O operations for inspecting intermediate results. My final target is 1 minute per camera position.

REFERENCES [1] Snyder and Cowart, "An Iterative Approach to Region Growing", IEEE Transactions on PAMI, 1983. [2] Wesley E. Snyder and Hairong Qi, "Machine Vision", Cambridge University Press. [3] Richard O. Duda, Peter E. Hart, David G. Stork, "Pattern Classification", Prentice Hall. [4] Rafael Gonzalez and Richard Woods, "Digital Image Processing", 2nd ed., Prentice Hall. [5] D. Silver and Xin Wang, "Volume Tracking", Proceedings of Visualization '96, Oct. 27–Nov. 1, 1996.

Thanks Any questions?