Proprioceptive Perception for Object Weight Classification
Jivko Sinapov, Kaijen Hsiao and Radu Bogdan Rusu


What is Proprioception? “It is the sense that indicates whether the body is moving with required effort, as well as where the various parts of the body are located in relation to each other.” - Wikipedia

Why Proprioception?

Full vs. Empty

Why Proprioception? Hard vs. Soft

Lifting: gravity, effort, etc.

Pushing: friction, mass, etc.

Squeezing: compliance, flexibility

Power, “Play and Exploration in Children and Animals”, 2000

Related Work: Proprioception. “Learning Haptic Representations of Objects” [Natale et al., 2004]

Related Work: Proprioception. Proprioceptive Object Recognition [Bergquist et al., 2009]

Perception Problem for PR2: Is the bottle full or empty?

General Approach:
1) Let the robot experience what full and empty bottles “feel” like
2) Use prior experience to classify new bottles as either full or empty

Behavior: Power, “Play and Exploration in Children and Animals”, 2000

Behaviors: 1) Unsupported Holding, 2) Lifting

Data Representation. Behavior Execution: [J_i, E_i, C_i]. Recorded Data: joint positions J_i, joint efforts E_i, and class label C_i ∈ {full, empty}
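The recorded tuple [J_i, E_i, C_i] can be sketched as a simple record; the field and type names below are illustrative, not from the authors' code:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BehaviorExecution:
    """One recorded behavior execution [J_i, E_i, C_i].
    At test time the class label is unknown, i.e. [J_i, E_i, ?]."""
    joint_positions: List[List[float]]  # J_i: arm joint angles over time
    joint_efforts: List[List[float]]    # E_i: measured joint efforts over time
    label: Optional[str] = None         # C_i: "full" or "empty" (None at test time)

# A toy training example and a toy test query
train_ex = BehaviorExecution([[0.1, 0.2]], [[5.0, 4.8]], "full")
query = BehaviorExecution([[0.1, 0.2]], [[4.9, 4.7]])
```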

Example Recorded Joint Efforts of Left Arm:

Classification Procedure: [J_i, E_i, ?] → Feature Extraction → Recognition Model → Pr(‘full’), Pr(‘empty’)

Recognition Model. Given X = [J_i, E_i, ?]:
1) Find the N closest neighbors to X in joint-feature space
2) Train a classifier C on those N neighbors that maps effort features to the class label
3) Use the trained classifier C to label X
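The three steps above can be sketched as a local ("lazy") classifier. This is a minimal Python illustration with a nearest-centroid rule standing in for the k-NN/SVM/C4.5 classifiers the talk actually evaluates; the data layout and function names are assumptions:

```python
import math

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(query_joints, query_efforts, data, n_neighbors=3):
    """Local recognition model:
    1) find the N closest neighbors to the query in joint-feature space,
    2) train a classifier on those neighbors mapping efforts -> label
       (here: per-class effort centroids),
    3) use it to label the query.
    """
    # Step 1: N nearest neighbors by joint-space distance
    neighbors = sorted(data, key=lambda d: dist(query_joints, d["joints"]))[:n_neighbors]
    # Step 2: per-class centroid of the neighbors' effort features
    centroids = {}
    for label in {d["label"] for d in neighbors}:
        efforts = [d["efforts"] for d in neighbors if d["label"] == label]
        centroids[label] = [sum(col) / len(col) for col in zip(*efforts)]
    # Step 3: label the query by its closest effort centroid
    return min(centroids, key=lambda lbl: dist(query_efforts, centroids[lbl]))
```

A toy usage: full bottles produce larger holding efforts, so a query with high effort lands in the "full" class.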

Training Procedure
Objects: five bottles, each tested in full and empty states.
Procedure:
1) Place object on the table
2) The robot grasps it and performs the current behavior (either hold or lift) at a random position in space
3) The robot puts the object back down on the table at a random position; repeat
Each behavior was performed 100 times on each bottle in both full and empty states: a total of 2 x 5 x 100 x 2 = 2000 behavior executions

Evaluation
5-fold cross-validation: at each iteration, data from 4 of the five bottles is used for training, and the remaining bottle is used for testing.
Three classification algorithms evaluated:
- K-Nearest Neighbors
- Support Vector Machine (quadratic kernel)
- C4.5 Tree
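The bottle-wise cross-validation can be sketched as below; `evaluate` is a hypothetical stand-in for training and testing one of the three classifiers on a train/test split:

```python
def cross_validate(trials, evaluate):
    """Bottle-wise 5-fold CV: each fold holds out one bottle's trials
    for testing and trains on the trials of the other four bottles.
    `trials` maps bottle id -> list of examples;
    `evaluate(train, test)` returns an accuracy in [0, 1].
    """
    scores = []
    for held_out in trials:
        test = trials[held_out]
        train = [ex for b, exs in trials.items() if b != held_out for ex in exs]
        scores.append(evaluate(train, test))
    return sum(scores) / len(scores)  # mean accuracy over the 5 folds
```

Holding out whole bottles (rather than random trials) tests generalization to a bottle the robot has never handled, which is the harder and more realistic condition.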

Chance Accuracy: 50%

Can the robot boost recognition rate by applying a behavior multiple times?
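One plausible way to fuse repeated executions of a behavior (an assumption for illustration, not necessarily the talk's combination rule) is naive-Bayes-style multiplication of the per-trial class probabilities, which sharpens the decision as trials accumulate:

```python
def combine_trials(prob_full_per_trial):
    """Fuse independent trials by multiplying class likelihoods.
    Input: the recognition model's Pr('full') from each trial.
    Output: the combined posterior Pr('full') over all trials.
    """
    p_full, p_empty = 1.0, 1.0
    for p in prob_full_per_trial:
        p_full *= p            # evidence for 'full'
        p_empty *= (1.0 - p)   # evidence for 'empty'
    return p_full / (p_full + p_empty)
```

With a weak single-trial classifier at Pr('full') = 0.6, three consistent trials already push the combined posterior above 0.77, illustrating why repeating the behavior boosts the recognition rate.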

How much training data is necessary?

[Figure: recognition accuracy vs. amount of training data, for the lift behavior]

Application to Regression. Given X = [J_i, E_i, ?]:
1) Find the N closest neighbors to X in joint-feature space
2) Train a regression model C on those N neighbors that maps effort features to the object's weight
3) Use the trained regression model C to estimate the weight of X
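The regression variant follows the same local scheme. In this sketch, a plain average of the neighbors' known weights stands in for the trained regression model, and the data layout is assumed:

```python
import math

def estimate_weight(query_joints, data, n_neighbors=3):
    """Local regression: find the N nearest neighbors in joint-feature
    space, then predict the query's weight from those neighbors
    (here: their mean weight, standing in for a fitted regressor)."""
    d = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    neighbors = sorted(data, key=lambda ex: d(query_joints, ex["joints"]))[:n_neighbors]
    return sum(ex["weight"] for ex in neighbors) / len(neighbors)
```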

Regression Results
Mean Abs. Error = lbs
Chance error = lbs

Application to Sorting Task
Sorting task:
- Place empty bottles in the trash
- Move full bottles to the other side of the table

Application to Sorting Task

Sorting Task: video

Application to a new recognition task Full or empty?

Behavior: slide object across table. 40 trials with the full box and 40 trials with the empty box. Recognition Accuracy: % (all three algorithms)

Sliding task: video

Conclusion
- Behavior-grounded approach to proprioceptive perception
- Implemented as a ROS package: -
- This work has been submitted to ICRA 2011

Future Work
- More advanced proprioceptive feature extraction
- Multi-modal object perception: auditory, 3D, tactile