Synergy-Based, Data-Driven Generation of Object-Specific Grasps for Anthropomorphic Hands
Julia Starke, Christian Eichmann, Simon Ottenhaus and Tamim Asfour

We are going to explore methods in dimensionality reduction as a control strategy for operating robotic hands, in particular hands that resemble the ones we humans use every day.

Shape and Grasp Manipulation
- artificial hands have traditionally faced control difficulties when trying to mimic the capabilities of the human hand
- the challenge is a consequence of differing hand kinematics and physiological constraints
- Starke directs her focus on anthropomorphic hands: hands that resemble the anatomical structure of the human hand

Underactuated Robotic Hands
- full actuation implies one actuator for each degree of freedom; a fully actuated anthropomorphic hand would need at least 20 DOF
- an underactuated hand simplifies control complexity by driving the same number of joints with fewer actuators, at some cost in dexterity
- underactuation provides the passive compliance required to grasp objects with a variety of geometries
- let's look at a few examples

Hand Synergies
- hand synergies represent the intrinsic coupling by which a hand operates
- the apparent control variables are the angles of each finger joint; this joint state space is high-dimensional
- experiments in neurophysiology show that, in the context of grasping, the joints do not move independently of one another
- the lower-dimensional space X that can approximate the original joint state space Y as f(X) is the space of hand synergies

Discovering Synergies with PCA
- Principal Component Analysis (PCA) is a conventional unsupervised learning technique used to discover synergies in a dataset of grasps on various objects
- PCA finds the lower-dimensional space that maximizes the variance of the original dataset when projected into that space
- the first two principal components capture about 80% of the variance in human grasps (see the sketch below)
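
A minimal sketch of this idea with scikit-learn; the random `grasps` matrix below is a stand-in for the recorded joint angles (this is not the paper's dataset, it just makes the sketch runnable):

```python
import numpy as np
from sklearn.decomposition import PCA

grasps = np.random.rand(300, 16)                  # placeholder joint-angle data

pca = PCA(n_components=2)
synergies = pca.fit_transform(grasps)             # each grasp as 2 synergy values
print(pca.explained_variance_ratio_.sum())        # variance captured by 2 components
reconstructed = pca.inverse_transform(synergies)  # back to 16 joint values
```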

Non-Linear Dimensionality Reduction
- Starke suggests that the correlations between finger motions are highly non-linear
- one such non-linear method is the Gaussian Process Latent Variable Model (GPLVM)
- GPLVM outperforms PCA in the reconstruction and generation of grasps (a sketch follows below)
- non-linear methods, however, introduce further complexity into the selection of control parameters to achieve a desired grasp
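
A hedged sketch of fitting a GPLVM with the GPy library; this is not the authors' code, GPy is just one common implementation, and `grasps` is the same placeholder joint-angle matrix as in the PCA sketch above:

```python
import GPy

gplvm = GPy.models.GPLVM(grasps, input_dim=2)  # 2-D non-linear latent space
gplvm.optimize(messages=False)                 # maximize the GP likelihood
latent = gplvm.X                               # learned latent coordinates
```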

Starke's Main Contribution
- clusters corresponding to grasp types and/or object geometries can be found in the original high-dimensional space
- projecting these grasps into a lower-dimensional space does not preserve these clusters
- sampling the latent space for a specific grasp type with a specific object size is therefore nearly impossible
- this work discovers a subspace of synergies in which grasps cluster by grasp type and object size, allowing for object-specific grasp generation

Data Acquisition with the CyberGlove III

Data Acquisition Procedure
- each of the five grasp types is performed with ten objects, three times each per subject
- the objects are selected from the YCB Object Set and the KIT Object Set
- procedure per trial: Flat Hand Calibration Pose → Grasp Object → Record Static Grasp Pose → Flat Hand Calibration Pose

Deep Autoencoder Requirements
- the latent space should allow for low-dimensional encoding of grasp poses
- the latent space should be encoded in a way that allows intuitive generation of different grasp types for a given object size
- vanilla VAEs and AEs are able to reduce dimensionality but fail to cluster grasp types together in the lower-dimensional space
- additional constraints are therefore added to the loss function to capture the importance of preserving grasp-type clusters

Deep Autoencoder Architecture
- original space: Y ⊂ R^16
- latent space: Z ⊂ R^3
- encoder g: R^16 → R^3
- decoder h: R^4 → R^16 (the latent code plus the object size)
- g(y) = z, h(z, object_size) = y_hat
(a code sketch follows below)
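
The following PyTorch sketch mirrors the dimensions stated above; the hidden-layer widths and activations are assumptions, since the slide only fixes the 16-D input, the 3-D latent code, and the extra object-size input to the decoder:

```python
import torch
import torch.nn as nn

class GraspAutoencoder(nn.Module):
    def __init__(self, joint_dim=16, latent_dim=3, hidden=32):
        super().__init__()
        # Encoder g: R^16 -> R^3
        self.encoder = nn.Sequential(
            nn.Linear(joint_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )
        # Decoder h: R^4 -> R^16 (latent code concatenated with object size)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, joint_dim),
        )

    def forward(self, y, object_size):
        z = self.encoder(y)                                   # g(y) = z
        y_hat = self.decoder(torch.cat([z, object_size], 1))  # h(z, size) = y_hat
        return z, y_hat

model = GraspAutoencoder()
```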

Deep Autoencoder Loss
- mean square error: minimize the sum of the squared differences between ground truth and prediction
- add a penalizing term for pairs belonging to the same grasp type: the MSE between their encoded representations
- subtract a corresponding term for pairs belonging to different grasp types, rewarding separation between clusters
- Loss = alpha * MSE(y, y_hat) + beta * MSE(g(y), g(x_same)) - gamma * MSE(g(y), g(x_diff)), where x_same and x_diff are grasps of the same and of a different grasp type as y
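
A sketch of that composite loss in PyTorch; the weights alpha, beta, gamma are placeholders, as the talk does not state their values:

```python
import torch.nn.functional as F

def grasp_loss(model, y, y_hat, z, y_same, y_diff,
               alpha=1.0, beta=0.1, gamma=0.1):
    recon = F.mse_loss(y_hat, y)                    # reconstruction term
    attract = F.mse_loss(z, model.encoder(y_same))  # pull same-type codes together
    repel = F.mse_loss(z, model.encoder(y_diff))    # push different-type codes apart
    return alpha * recon + beta * attract - gamma * repel
```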

Deep Autoencoder Training
- Gaussian noise with a standard deviation of 0.1 is added to the input of the AE during training to avoid overfitting and to increase the size of the grasp-type clusters
- the decoder is provided with the relative size of the object (i.e. object size normalized by the size of the subject's hand)
- 90/10 train/test split with stratified 10-fold cross-validation
- 10 iterations of independent training
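
One training step under this scheme, continuing the sketches above; the optimizer, learning rate, and the stand-in batches are assumptions:

```python
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

y = torch.rand(64, 16)              # stand-in batch of grasp poses
y_same = torch.rand(64, 16)         # grasps of the same type as y
y_diff = torch.rand(64, 16)         # grasps of a different type
relative_size = torch.rand(64, 1)   # object size / subject hand size

noisy_y = y + 0.1 * torch.randn_like(y)   # Gaussian noise, sigma = 0.1
z, y_hat = model(noisy_y, relative_size)
loss = grasp_loss(model, y, y_hat, z, y_same, y_diff)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```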

Deep Autoencoder Performance
- outperforms PCA by 5° of reconstruction error
- introducing object size as a parameter to the decoder reduces the overall reproduction error by 26%
- a latent space with 3 parameters yields only marginal gains in performance over 2 parameters, but the distribution of grasp types is more distinct in 3D
- the reversed AE architecture outperforms the conventional mirrored architecture

Deep Autoencoder Grasp Generation

Deep Autoencoder Latent Distribution
- First Synergy (latent parameter 1): the only parameter that controls the thumb in its full range of motion (needed for pinch grasps); mainly controls flexion of the index, middle, and ring fingers (needed for lateral grasps)
- Second Synergy (latent parameter 2): mainly controls the finger adduction joints (i.e. the spread of the fingers needed for spherical and disk grasps)
- Third Synergy (latent parameter 3): fine-tunes the proximal interphalangeal joints (needed for cylindrical grasps)
(a decoding sketch follows below)
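
Continuing the sketch above, generating a grasp then amounts to choosing the three synergy values and a relative object size and decoding; the numbers below are hypothetical:

```python
z = torch.tensor([[0.8, -0.2, 0.1]])   # hypothetical synergy settings
size = torch.tensor([[0.6]])           # hypothetical relative object size
with torch.no_grad():
    grasp_pose = model.decoder(torch.cat([z, size], dim=1))  # 16 joint values
```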

References
Starke, J., Eichmann, C., Ottenhaus, S., & Asfour, T. (2018). Synergy-Based, Data-Driven Generation of Object-Specific Grasps for Anthropomorphic Hands. IEEE-RAS International Conference on Humanoid Robots (Humanoids), 327-333. doi:10.1109/HUMANOIDS.2018.8624990