1
Rigid and Non-Rigid Classification Using Interactive Perception
Bryan Willimon, Stan Birchfield, Ian Walker
Department of Electrical and Computer Engineering, Clemson University
IROS 2010
2
What is Interactive Perception? Interactive perception is the concept of gathering information about a particular object by physically interacting with it. Raccoons and cats use this technique to learn about their environment with their front paws.
3
What is Interactive Perception? The information gathered either complements information obtained through vision, or adds new information that cannot be determined through vision alone.
4
Previous Related Work on Interactive Perception
P. Fitzpatrick. First Contact: An Active Vision Approach to Segmentation. IROS 2003.
Complementing: segmentation through image differencing.
D. Katz and O. Brock. Manipulating Articulated Objects with Interactive Perception. ICRA 2008.
Adding new information: learning about prismatic and revolute joints on planar rigid objects.
Previous work focused on rigid objects.
5
Goal of Our Approach: isolate the object, classify the object, and learn about the object.
6
Color Histogram Labeling
Use the color values (RGB) of the object to create a 3-D histogram.
Each histogram is normalized by the number of pixels in the object, giving a probability distribution.
Each histogram is then compared with the histograms of previously seen objects using histogram intersection to find a match.
The white (background) area is found with the same technique as in graph-based segmentation and is used as a binary mask to locate the object in the image.
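A minimal sketch of this step in Python, assuming an 8-bin-per-channel RGB histogram (the slides do not specify the bin count) and a NumPy image with a precomputed binary object mask:

```python
import numpy as np

def rgb_histogram(image, mask, bins=8):
    """Normalized 3-D RGB histogram over the masked object pixels."""
    pixels = image[mask > 0]  # (N, 3) RGB values belonging to the object
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    return hist / hist.sum()  # normalize into a probability distribution

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical color distributions."""
    return np.minimum(h1, h2).sum()
```

Each new object's histogram would be intersected with every stored histogram, and the best-scoring match above some threshold would supply the object's label.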
7
Skeletonization
Use the binary mask from the previous step to create a skeleton of the object.
The skeleton is a single-pixel-wide medial-axis representation of the region (prairie-fire analogy: the boundary "burns" inward until only the skeleton remains).
[Figure: thinning iterations 1, 3, 5, 7, 9, 10, 11, 13, 15, 17, and 47]
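The slides do not name the specific thinning algorithm, so scikit-image's skeletonize (a standard iterative-thinning implementation of the prairie-fire idea) stands in here:

```python
from skimage.morphology import skeletonize

def compute_skeleton(binary_mask):
    """Thin the object mask down to a one-pixel-wide skeleton.
    Each thinning iteration 'burns' one layer off the boundary,
    matching the prairie-fire analogy on the slide."""
    return skeletonize(binary_mask.astype(bool))
```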
8
Monitoring Object Interaction
Use KLT feature points to track the movement of the object as the robot interacts with it.
Only feature points on the object are kept; all other points are disregarded.
The distance between each pair of feature points is calculated every f_length frames (f_length = 5).
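A sketch of the tracking step using OpenCV's pyramidal Lucas–Kanade (KLT) tracker; the helper names below are illustrative, not from the paper:

```python
import cv2
import numpy as np

F_LENGTH = 5  # from the slide: distances are re-measured every f_length = 5 frames

def track_step(prev_gray, curr_gray, prev_pts):
    """One KLT step; prev_pts is float32 of shape (N, 1, 2), e.g. from
    cv2.goodFeaturesToTrack. Keeps only successfully tracked points."""
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                      prev_pts, None)
    good = status.ravel() == 1
    return prev_pts[good], curr_pts[good]

def pairwise_distances(pts):
    """Euclidean distance between every pair of feature points."""
    p = pts.reshape(-1, 2)
    return np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
```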
9
Monitoring Object Interaction (cont.)
Idea: features on the same part keep a roughly constant pairwise distance, while features on different parts have a varying pairwise distance.
Features are separated into groups by measuring how much each pairwise distance changes after f_length frames: if the distance between two features changes by less than a threshold, they are placed in the same group; otherwise they are placed in different groups.
Separate groups correspond to separate parts of the object.
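A sketch of this grouping rule as union-find over the change in pairwise distances; the threshold value is an assumption, since the slides do not give one:

```python
import numpy as np

def group_features(d_before, d_after, threshold=3.0):
    """Group features whose pairwise distance changed by less than
    `threshold` pixels over f_length frames (threshold is assumed).
    d_before and d_after are (N, N) pairwise-distance matrices."""
    n = d_before.shape[0]
    parent = list(range(n))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if abs(d_after[i, j] - d_before[i, j]) < threshold:
                parent[find(i)] = find(j)  # same rigid part

    return np.array([find(i) for i in range(n)])  # group label per feature
```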
10
Labeling Revolute Joints Using Motion
For each feature group, create an ellipse that encapsulates all of its features.
Calculate the major axis of the ellipse using PCA.
The two end points of the major axis correspond to a revolute joint and the tip of the extremity.
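A sketch of the major-axis computation via PCA, implemented here as an SVD of the centered feature coordinates of one group:

```python
import numpy as np

def major_axis_endpoints(pts):
    """pts is (N, 2): the feature coordinates of one group.
    The first principal component is the ellipse's major axis;
    projecting the points onto it yields the two extreme endpoints
    (candidate revolute joint and tip of the extremity)."""
    mean = pts.mean(axis=0)
    centered = pts - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]                # first principal direction
    proj = centered @ axis      # scalar position along the major axis
    return mean + proj.min() * axis, mean + proj.max() * axis
```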
11
Labeling Revolute Joints Using Motion (cont.)
Using the skeleton, locate intersection points and end points.
Intersection points (red) = rigid or non-rigid joints.
End points (green) = interaction points, the locations the robot uses to "push" or "poke" the object.
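A sketch of this point detection using the standard neighbour-counting rule on a one-pixel-wide skeleton; the slides do not state the exact rule used in the paper:

```python
import numpy as np
from scipy.ndimage import convolve

def skeleton_points(skeleton):
    """Classify skeleton pixels by their 8-neighbour count:
    1 neighbour   -> end point (green, interaction point)
    3+ neighbours -> intersection point (red, candidate joint)."""
    sk = skeleton.astype(np.uint8)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbours = convolve(sk, kernel, mode='constant')
    end_points = np.argwhere((sk == 1) & (neighbours == 1))
    intersections = np.argwhere((sk == 1) & (neighbours >= 3))
    return end_points, intersections
```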
12
Labeling Revolute Joints Using Motion (cont.)
Map each revolute joint estimated from the major axis of an ellipse to the corresponding joint in the skeleton.
After multiple interactions by the robot, a final skeleton is created with the revolute joints labeled (red).
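A sketch of the mapping step, assuming the estimated joint is simply snapped to the nearest skeleton intersection point; the paper's exact mapping rule is not given on the slide:

```python
import numpy as np

def snap_joint_to_skeleton(joint_estimate, intersections):
    """Return the skeleton intersection closest to the ellipse-axis
    joint estimate. Both inputs must use the same coordinate
    convention, either (row, col) or (x, y)."""
    d = np.linalg.norm(intersections - joint_estimate, axis=1)
    return intersections[np.argmin(d)]
```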
13
Experimental Results
Sorting using socks and shoes
Articulated rigid object: pliers
Classification experiment: toys
14
Results: Articulated Rigid Object (Pliers)
Comparing objects of the same type with those of similar work*: pliers from our results compared to shears from their results.
[Figure: revolute joint found by our approach vs. the Katz–Brock approach]
*D. Katz and O. Brock. Manipulating Articulated Objects with Interactive Perception. ICRA 2008.
15
Results: Classification Experiment (Toys), cont.
Final skeletons used for classification:
16
[Figure: toys 1–4]
17
[Figure: toys 5–8]
18
Results: Classification Experiment (Toys), cont.
Classification experiment without use of the skeleton (rows = query image, columns = database image): a misclassification occurs.
19
Results: Classification Experiment (Toys), cont.
Classification experiment with use of the skeleton (rows = query image, columns = database image): the misclassification is corrected.
20
Results: Sorting Using Socks and Shoes (cont.)
[Figure: items 1–5]
21
Classification experiment without use of the skeleton: a misclassification occurs.
22
Results: Sorting Using Socks and Shoes (cont.)
Classification experiment with use of the skeleton: the misclassification is corrected.
23
Conclusion
The results demonstrate that our approach can classify rigid and non-rigid objects and label them for sorting and/or pairing purposes.
Most previous work considers only planar rigid objects; this approach builds on and extends prior work in the scope of "interactive perception".
We gather more information through interaction, such as the object's skeleton, color, and movable joints, whereas other works only segment the object or find revolute and prismatic joints.
24
Future Work
Create a 3-D environment instead of a 2-D environment.
Modify the classification area to allow interaction from more than two directions.
Improve the robot's gripper for more robust grasping.
Enhance the classification algorithm and learning strategy.
Use more characteristics to properly label a wider range of objects.
25
Questions?