Rigid and Non-Rigid Classification Using Interactive Perception
Bryan Willimon, Stan Birchfield, Ian Walker
Department of Electrical and Computer Engineering, Clemson University
IROS 2010

What is Interactive Perception?
Interactive perception is the concept of gathering information about a particular object through interaction with it. Raccoons and cats use this technique to learn about their environment using their front paws.

What is Interactive Perception?
The information gathered either:
- complements information obtained through vision, or
- adds new information that cannot be determined through vision alone.

Previous Related Work on Interactive Perception
- P. Fitzpatrick. First Contact: An Active Vision Approach to Segmentation. IROS 2003.
  Complementing vision: segmentation through image differencing.
- D. Katz and O. Brock. Manipulating Articulated Objects with Interactive Perception. ICRA 2008.
  Adding new information: learning about prismatic and revolute joints on planar rigid objects.
Previous work focused on rigid objects.

Goal of Our Approach
Isolate the object → classify the object → learn about the object

Color Histogram Labeling
- Use the color values (RGB) of the object to create a 3-D histogram.
- Each histogram is normalized by the number of pixels in the object to create a probability distribution.
- Each histogram is then compared against the histograms of previously seen objects using histogram intersection to find a match (see the sketch below).
- The white (object) area is found using the same technique as in graph-based segmentation and is used as a binary mask to locate the object in the image.
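As a rough illustration of this step, the sketch below builds a normalized 3-D RGB histogram and compares two histograms with histogram intersection. The bin count of 8 per channel and the function names are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Normalized 3-D RGB histogram from an N x 3 array of object pixels."""
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    return hist / pixels.shape[0]  # normalize to a probability distribution

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: sum of bin-wise minima of two normalized histograms."""
    return np.minimum(h1, h2).sum()

# Matching: score a query object against all previously seen objects and
# take the database object with the highest intersection score, e.g.:
# best = max(database_hists, key=lambda h: histogram_intersection(query_hist, h))
```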

Skeletonization
- Use the binary mask from the previous step to create a skeleton of the object.
- The skeleton is a single-pixel-wide medial representation of the region, obtained by iterative thinning (the prairie-fire analogy: the region burns inward from its boundary until only the skeleton remains).
[Figure: skeleton evolving over thinning iterations 1, 3, 5, 7, 9, 10, 11, 13, 15, 17, and 47]
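A minimal sketch of this step, assuming scikit-image's standard iterative-thinning skeletonizer stands in for the authors' own implementation:

```python
import numpy as np
from skimage.morphology import skeletonize

# Toy binary mask standing in for the segmented object (True = object pixel)
binary_mask = np.zeros((50, 50), dtype=bool)
binary_mask[10:40, 20:30] = True

# Iterative thinning ("prairie fire") reduces the region
# to a one-pixel-wide skeleton (True on skeleton pixels).
skeleton = skeletonize(binary_mask)
```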

Monitoring Object Interaction
- Use KLT feature points to track the movement of the object as the robot interacts with it.
- Only feature points on the object are considered; all other points are disregarded.
- The distance between every pair of feature points is computed every f_length frames (f_length = 5).
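A hedged sketch of the tracking step using OpenCV's pyramidal Lucas-Kanade (KLT) tracker; the detector parameters are assumptions, and restricting features to the object is done here with an optional detection mask:

```python
import cv2
import numpy as np

F_LENGTH = 5  # compare feature positions every 5 frames, as stated on the slide

def init_features(gray, object_mask=None):
    """Detect corner features; object_mask (uint8) restricts them to the object."""
    return cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01,
                                   minDistance=7, mask=object_mask)

def track_step(prev_gray, curr_gray, prev_pts):
    """Track features one frame with pyramidal Lucas-Kanade; drop lost points."""
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    return curr_pts[status.ravel() == 1].reshape(-1, 2)

def pairwise_distances(pts):
    """N x N matrix of Euclidean distances between all feature pairs."""
    return np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
```

The pairwise distances would be recorded once per F_LENGTH frames and compared against the distances from F_LENGTH frames earlier, as described on the next slide.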

Monitoring Object Interaction (cont.)
- Idea: features on the same part keep a roughly constant pairwise distance, while the distance between features on different parts varies as the object moves.
- Features are separated into groups by measuring how much each pairwise distance changed after f_length frames:
  - If the distance between two features changes by less than a threshold, they belong to the same group.
  - Otherwise, they belong to different groups.
- Separate groups correspond to separate parts of the object (see the grouping sketch below).
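One way to realize this grouping, sketched below, is to treat "distance stayed constant" as an edge between two features and take connected components; both the connected-components step and the threshold value are assumptions about how the slide's pairwise rule is turned into groups:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def group_features(dists_before, dists_after, threshold=3.0):
    """Group features whose pairwise distance stayed roughly constant.

    dists_before, dists_after: N x N pairwise-distance matrices taken
    f_length frames apart. The 3-pixel threshold is an assumed value.
    """
    stable = np.abs(dists_after - dists_before) < threshold  # rigid-link test
    n_groups, labels = connected_components(csr_matrix(stable.astype(int)),
                                            directed=False)
    return n_groups, labels  # labels[i] = group id of feature i
```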

Labeling Revolute Joints Using Motion
- For each feature group, create an ellipse that encapsulates all of its features.
- Calculate the major axis of the ellipse using PCA.
- The end points of the major axis correspond to a revolute joint and the end point of the extremity.
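A minimal sketch of the PCA step: the major-axis direction is the covariance eigenvector with the largest eigenvalue, and the axis endpoints come from the extreme projections of the group's features onto that direction (the extent convention is an assumption):

```python
import numpy as np

def major_axis_endpoints(points):
    """Endpoints of the major axis of one feature group's enclosing ellipse.

    points: N x 2 array of (x, y) feature positions in one group.
    """
    mean = points.mean(axis=0)
    centered = points - mean
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    direction = eigvecs[:, np.argmax(eigvals)]  # principal (major-axis) direction
    proj = centered @ direction                 # signed extent along that direction
    return mean + proj.min() * direction, mean + proj.max() * direction
```

Per the slide, one of the two returned endpoints is taken as the revolute joint and the other as the tip of the extremity.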

Labeling Revolute Joints Using Motion (cont.)
- Using the skeleton, locate intersection points and end points:
  - Intersection points (red) = rigid or non-rigid joints
  - End points (green) = interaction points
- Interaction points are the locations the robot uses to "push" or "poke" the object.
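Both point types can be read off a one-pixel-wide skeleton by counting 8-connected neighbors, as in the sketch below; the neighbor-count rule is a common convention and an assumption here, not a detail from the paper:

```python
import numpy as np
from scipy.ndimage import convolve

def skeleton_points(skeleton):
    """Classify skeleton pixels by neighbor count.

    skeleton: 2-D boolean array, True on skeleton pixels.
    Returns boolean maps of end points (1 neighbor) and
    intersection points (3 or more neighbors).
    """
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbors = convolve(skeleton.astype(int), kernel, mode='constant')
    end_points = skeleton & (neighbors == 1)      # green: interaction points
    intersections = skeleton & (neighbors >= 3)   # red: candidate joints
    return end_points, intersections
```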

Labeling Revolute Joints Using Motion (cont.)
- Map the revolute joint estimated from the ellipse's major axis to the actual joint in the skeleton.
- After multiple interactions by the robot, a final skeleton is created with the revolute joints labeled (red).
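A simple way to sketch this mapping is nearest-neighbor snapping of the motion-estimated joint onto the skeleton's intersection points; whether the authors use exactly this rule is an assumption:

```python
import numpy as np

def snap_joint_to_skeleton(estimated_joint, intersection_points):
    """Map a motion-estimated joint to the nearest skeleton intersection.

    estimated_joint: (x, y); intersection_points: M x 2 array of the
    skeleton intersections found in the previous step.
    """
    d = np.linalg.norm(intersection_points - np.asarray(estimated_joint), axis=1)
    return intersection_points[np.argmin(d)]
```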

Experimental Results
- Sorting using socks and shoes
- Articulated rigid object: pliers
- Classification experiment: toys

Results: Articulated Rigid Object (Pliers)
Comparing objects of the same type with those of similar work*: the pliers from our results compared to the shears in their results.
[Figure: the revolute joint found by our approach vs. the Katz-Brock approach]
*D. Katz and O. Brock. Manipulating Articulated Objects with Interactive Perception. ICRA 2008.

Results: Classification Experiment (Toys)
Final skeleton used for classification.
[Figure: the eight toys (1-8) used in the experiment, with their final skeletons]

Results: Classification Experiment (cont.)
Classification experiment without use of the skeleton (rows = query image, columns = database image).
[Figure: query-vs-database match matrix; one toy is misclassified]

Results: Classification Experiment (cont.)
Classification experiment with use of the skeleton (rows = query image, columns = database image).
[Figure: query-vs-database match matrix; the misclassification is corrected]

Results: Sorting Using Socks and Shoes
[Figure: sorting results for socks and shoes]

Results: Sorting (cont.)
Classification experiment without use of the skeleton.
[Figure: sorting result showing a misclassification]

Results: Sorting (cont.)
Classification experiment with use of the skeleton.
[Figure: sorting result; the misclassification is corrected]

Conclusion
- The results demonstrated that our approach provides a way to classify rigid and non-rigid objects and label them for sorting and/or pairing purposes.
- Most previous work considers only planar rigid objects.
- This approach builds on and goes beyond previous work in the scope of "interactive perception":
  - Through interaction we gather more information, such as the object's skeleton, color, and movable joints.
  - Other works only segment the object or find revolute and prismatic joints.

Future Work
- Create a 3-D environment instead of a 2-D environment.
- Modify the classification area to allow interactions from more than two directions.
- Improve the robot's gripper for more robust grasping.
- Enhance the classification algorithm and learning strategy.
- Use more characteristics to properly label a wider range of objects.

Questions?