Manipulation in Human Environments
Aaron Edsinger & Charlie Kemp
Humanoid Robotics Group, MIT CSAIL
Domo
29 DOF
6-DOF Series Elastic Actuator (SEA) arms
4-DOF SEA hands
2-DOF SEA neck
Active vision head
Stereo cameras
Gyroscope
Senses joint angle and torque
15-node Linux cluster
Manipulation in Human Environments
Human environments are designed to match our cognitive and physical abilities
Work with everyday objects
Collaborate with people
Perform useful tasks
Applications
Aging in place
Cooperative manufacturing
Household chores
Three Themes
Let the body do the thinking
Collaborative manipulation
Task-relevant features
Let the Body Do the Thinking
Design:
Passive compliance
Force control
Human morphology
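Force control with adjustable stiffness underlies these design choices. Below is a minimal sketch of SEA torque sensing plus a virtual-spring stiffness command; the spring constant and gains are illustrative assumptions, not Domo's actual parameters.

```python
# Minimal sketch of series-elastic-actuator (SEA) force control.
# All constants are illustrative assumptions, not Domo's parameters.

K_SPRING = 300.0    # N*m/rad, physical spring stiffness (assumed)
KP, KD = 5.0, 0.1   # torque-loop gains (assumed)

def measured_torque(theta_motor: float, theta_joint: float) -> float:
    """Spring deflection between motor and joint yields joint torque."""
    return K_SPRING * (theta_motor - theta_joint)

def torque_step(tau_desired: float, theta_motor: float, theta_joint: float,
                prev_error: float, dt: float) -> tuple:
    """One PD step on torque error; returns (motor command, new error)."""
    error = tau_desired - measured_torque(theta_motor, theta_joint)
    command = KP * error + KD * (error - prev_error) / dt
    return command, error

def virtual_spring(theta: float, theta_rest: float, k_virtual: float) -> float:
    """Stiffness modulation: desired torque from a virtual spring whose
    stiffness k_virtual can be lowered on the fly for compliant motion."""
    return -k_virtual * (theta - theta_rest)
```

Because torque is read from the spring's deflection, the same element that provides passive compliance also serves as the force sensor.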
Let the Body Do the Thinking
Compensatory behaviors:
Reduce uncertainty
Modulate arm stiffness
Aid perception (motion, visibility)
Test assumptions (explore)
Let the Body Do the Thinking
Collaborative Manipulation
Complementary actions
A person can simplify perception and action for the robot
The robot can provide intuitive cues for the person
Requires matching the robot to our social interface
Collaborative Manipulation
Social amplification
Collaborative Manipulation
A third arm: hold a flashlight, fixture a part
Extend our physical abilities: carry groceries, open a jar
Expand our workspace: place dishes in a cabinet, hand a tool, reach a shelf
Task-Relevant Features
What is important? What is irrelevant?
*Distinct from object detection/recognition.
Structure in Human Environments
Donald Norman, The Design of Everyday Things
Structure in Human Environments
Human environments are constrained to match our cognitive and physical abilities
Sensing from above
Flat surfaces
Objects sized for human hands
Objects designed for human use
Why Are Tool Tips Common?
A single, localized interface to the world
Physical isolation helps avoid irrelevant contact
Helps perception
Helps control
Tool Tip Detection
Combined visual + motor detection method
Kinematic estimate
Visual model
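One plausible way to combine the two cues, sketched below: weight each visual candidate by a Gaussian prior centered on the kinematically predicted tip location. The function name, sigma, and candidate format are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical fusion of a kinematic tip prediction with visual candidates:
# each candidate's detector score is discounted by its distance from the
# kinematic estimate.
import math

def fuse_tip_estimate(kinematic_px, candidates, sigma=20.0):
    """kinematic_px: (x, y) pixel predicted from the arm's kinematic model.
    candidates: list of ((x, y), score) pairs from the visual detector.
    Returns the (x, y) of the candidate with the best combined score."""
    def combined(c):
        (x, y), score = c
        dx, dy = x - kinematic_px[0], y - kinematic_px[1]
        prior = math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
        return score * prior
    return max(candidates, key=combined)[0]
```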
Mean Pixel Error for Automatic and Hand-Labeled Tip Detection
Mean Pixel Error for Hand-Labeled, Multi-Scale, and Point Detectors
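For reference, the metric behind both comparisons is simply the mean Euclidean distance in pixels between predicted and hand-labeled tip locations; the array shapes below are assumed.

```python
# Mean pixel error: average Euclidean distance between predicted tip
# locations and hand-labeled ground truth.
import numpy as np

def mean_pixel_error(predicted: np.ndarray, labeled: np.ndarray) -> float:
    """predicted, labeled: (N, 2) arrays of (x, y) tip locations."""
    return float(np.mean(np.linalg.norm(predicted - labeled, axis=1)))
```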
Model-Free Insertion
Active tip perception
Arm stiffness modulation
Human interaction
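A toy decision rule for how stiffness modulation might interact with tip perception during insertion: stiff, visually servoed motion in free space, then low stiffness once contact is sensed so compliance absorbs alignment error. The threshold and command strings are purely hypothetical.

```python
# Hypothetical insertion policy: one decision per control step.
def insertion_policy(tip_error_px: float, contact_force_n: float):
    CONTACT_THRESHOLD_N = 2.0  # assumed contact-detection threshold
    if abs(contact_force_n) > CONTACT_THRESHOLD_N:
        # In contact: comply with the constraint while pushing inward.
        return ("low_stiffness", "push_along_tool_axis")
    # Free space: stiff, accurate visual servoing of the tip.
    return ("high_stiffness", f"servo_tip_by({tip_error_px:+.1f}px)")
```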
Other Examples
Circular openings
Handles
Contact surfaces
Gravity alignment
Future: Generalize What You've Learned
Across objects: perceptually map tasks across objects; key features map to key features
Across manipulators: motor equivalence; manipulator details may be irrelevant
RSS 2006 Workshop: Manipulation for Human Environments
Robotics: Science and Systems, University of Pennsylvania, August 19th, 2006
manipulation.csail.mit.edu/rss06
Summary
Importance of task-relevant features
Example of the tool tip:
Large set of hand tools
Robust detection (visual + motor)
Kinematic estimate
Visual model
In Progress
Perform a variety of tasks: insertion, pouring, brushing
Learning from Demonstration
The Detector Responds To
Fast motion
Convexity
Video from eye camera → motion-weighted edge map → multi-scale histogram (medial axis, Hough transform for circles) → local maxima
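A loose sketch of this pipeline using OpenCV. For brevity it substitutes multi-scale Gaussian voting for the medial-axis/Hough-circle histogram, and all thresholds and kernel sizes are guesses rather than the values used on Domo.

```python
# Sketch: eye-camera video -> motion-weighted edge map -> multi-scale
# voting -> local maxima as tool-tip candidates.
import cv2
import numpy as np

def detect_tip_candidates(frame: np.ndarray, prev: np.ndarray):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    # Edge map weighted by interframe motion, so the fast-moving tool
    # dominates the static background.
    edges = cv2.Canny(gray, 50, 150).astype(np.float32) / 255.0
    motion = cv2.absdiff(gray, prev_gray).astype(np.float32) / 255.0
    weighted = edges * motion

    # Multi-scale voting (a simplification of the medial-axis/Hough
    # histogram): blur at several scales and accumulate, so convex,
    # localized structures produce strong peaks.
    accum = np.zeros_like(weighted)
    for ksize in (5, 11, 21):
        accum += cv2.GaussianBlur(weighted, (ksize, ksize), 0)

    # Local maxima of the accumulator are tip candidates.
    dilated = cv2.dilate(accum, np.ones((15, 15), np.uint8))
    peaks = (accum == dilated) & (accum > 0.5 * accum.max())
    ys, xs = np.nonzero(peaks)
    return list(zip(xs.tolist(), ys.tolist()))
```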
Defining Characteristics
Geometric: isolated, distal, localized, convex
Cultural/design: far from the natural grasp location; long distance relative to hand size
Other Task-Relevant Features?
Detecting the Tip
Include Scale and Convexity
A Distinct Perceptual Problem
Not object recognition
How should the object be used?
Distinct methods and features
Use the Hand's Frame
The tool is rigidly grasped, so its tip is fixed in the hand's coordinate frame
Combine weak evidence across frames
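A sketch of that accumulation step: transform each per-frame detection into the hand's frame and take a robust average. The 3D detections and the 4x4 hand poses are assumed inputs (e.g., from stereo triangulation and the kinematic model).

```python
# Accumulate weak per-frame tip detections in the hand's frame. Since the
# tool is rigidly grasped, the tip is a fixed point there, so noisy
# detections can be averaged.
import numpy as np

def tip_in_hand_frame(detections_cam, hand_poses_cam):
    """detections_cam: list of 3D tip detections in the camera frame.
    hand_poses_cam: list of 4x4 camera-from-hand transforms, one per frame.
    Returns the median tip position in the hand frame."""
    points = []
    for p_cam, T_cam_hand in zip(detections_cam, hand_poses_cam):
        T_hand_cam = np.linalg.inv(T_cam_hand)
        p = T_hand_cam @ np.append(p_cam, 1.0)  # homogeneous transform
        points.append(p[:3])
    # Median is robust to occasional bad detections.
    return np.median(np.array(points), axis=0)
```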
Acquire a Visual Model