Hybrid Position-Based Visual Servoing

Hybrid Position-Based Visual Servoing
Visual Perception and Robotic Manipulation, Springer Tracts in Advanced Robotics, Chapter 6
Geoffrey Taylor and Lindsay Kleeman
Intelligent Robotics Research Centre (IRRC), Department of Electrical and Computer Systems Engineering, Monash University, Australia

Overview
- Motivation for hybrid visual servoing
- Visual measurements and online calibration
- Kinematic measurements
- Implementation of controller and IEKF
- Experimental comparison of hybrid visual servoing with existing techniques

Motivation
Manipulation tasks for a humanoid robot are characterized by:
- Autonomous planning from internal models
- Arbitrarily large initial pose error
- Background clutter and occluding obstacles
- Cheap sensors → camera model errors
- Light, compliant limbs → kinematic calibration errors
Metalman: upper-torso humanoid hand-eye system

Visual Servoing
Image-based visual servoing (IBVS):
- Robust to calibration errors if the target image is known
- Depth of the target must be estimated
- Large pose error can cause an unpredictable trajectory
Position-based visual servoing (PBVS):
- Allows 3D trajectory planning
- Sensitive to calibration errors
- End-effector may leave the field of view
- Relies on linear approximations (affine cameras, etc.)
Deng et al. (2002) suggest there is little practical difference between visual servoing schemes.
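For reference (these control laws are not on the original slide; this follows the standard Chaumette–Hutchinson tutorial formulation), the two schemes can be summarized as:

\[
\text{IBVS:}\quad v_c = -\lambda\,\widehat{L}_s^{+}\,(s - s^*),
\qquad
\text{PBVS:}\quad v_c = -\lambda\,e, \quad e = (t - t^*,\ \theta u),
\]

where $s$ is a vector of image features with desired value $s^*$, $\widehat{L}_s^{+}$ is the pseudo-inverse of an estimated interaction matrix, and $(\theta, u)$ is the angle-axis form of the rotation error. IBVS regulates error directly in the image, while PBVS regulates a reconstructed 3D pose, which is why PBVS inherits the camera-model sensitivity listed above.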

Conventional PBVS
Endpoint open-loop (EOL):
- Controller observes only the target
- End-effector pose is estimated using the kinematic model and a calibrated hand-eye transformation
- Not affected by occlusion of the end-effector
Endpoint closed-loop (ECL):
- Controller observes both the target and the end-effector
- Less sensitive to kinematic calibration errors, but fails when the end-effector is obscured
- Accuracy depends on the camera model and the 3D pose reconstruction method

Proposed Scheme
Hybrid position-based visual servoing using fusion of visual and kinematic measurements:
- Visual measurements provide accurate positioning
- Kinematic measurements provide robustness to occlusions and clutter
- End-effector pose is estimated from the fused measurements using an Iterated Extended Kalman Filter (IEKF)
- Additional state variables are included for on-line calibration of the camera and kinematic models
Hybrid PBVS has the benefits of both EOL and ECL control and the deficiencies of neither.

Coordinate Frames
[Figure: coordinate frame assignments for the EOL, ECL, and hybrid configurations]

PBVS Controller
Conventional approach (Hutchinson et al., 1999):
- The control error is the pose error between the estimated and desired end-effector poses
- ${}^{W}H_{E}$, the end-effector pose in the world frame, is estimated by visual/kinematic fusion in the IEKF
- A proportional velocity control signal drives the pose error to zero
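The slide's equations were images and are not reproduced; a minimal sketch of the implied law, assuming the pose error $e$ is the 6-vector (translation, angle-axis rotation) extracted from the error transform between the estimated pose ${}^{W}H_{E}$ and the goal pose ${}^{W}H_{E}^{*}$:

\[
e \;\leftarrow\; \left({}^{W}H_{E}^{*}\right)^{-1} {}^{W}H_{E},
\qquad
u = -\lambda\, e, \quad \lambda > 0,
\]

so that, under ideal estimation, the pose error decays exponentially toward zero.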

Implementation

Visual Measurements
The gripper is tracked using active LED features, represented by an internal point model.
[Figure: 3D gripper model points $G_i$ projected through the camera centre to image-plane measurements $g_i$]
IEKF measurement model: each model point is projected into the image plane to predict its measurement.
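A plausible form of the measurement model (the original equation was an image; this sketch assumes a standard pinhole projection): each gripper model point $G_i$, expressed in the end-effector frame, is mapped through the estimated pose and projected:

\[
g_i = \pi\!\left(K\;{}^{C}H_{W}\,{}^{W}H_{E}\,G_i\right) + w_i,
\]

where $K$ is the intrinsic matrix, ${}^{C}H_{W}$ the world-to-camera transform, $\pi$ the perspective projection, and $w_i$ the measurement noise. The IEKF relinearizes this model about the current state estimate at each iteration.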

Camera Model Errors
In a practical system, the stereo baseline and verge angle may not be known precisely.
[Figure: stereo reconstruction geometry with left/right camera centres and image planes; an erroneous baseline (2b vs. 2b*) yields a scaled reconstruction, and an erroneous verge angle yields an affine reconstruction]

Camera Model Errors
How does a scale error affect pose estimation? Consider the case of translation only, by $T_E$: comparing the predicted measurements with the actual measurements yields a relationship between the actual and estimated pose. The key consequence: the estimated pose for different objects in the same position, with the same scale error, is different!
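A sketch of why this happens (my reconstruction; the slide's equations were images): suppose the stereo reconstruction is scaled by an unknown factor $s$, so a point at true position $X$ is reconstructed at $sX$. For a pure translation $T_E$ of a model with points $G_i$, the best-fit translation $\hat{T}_E$ to the scaled observations satisfies

\[
G_i + \hat{T}_E \approx s\,(G_i + T_E)
\quad\Longrightarrow\quad
\hat{T}_E = s\,T_E + (s - 1)\,\bar{G},
\]

where $\bar{G}$ is the centroid of the model points. The estimate depends on the model itself through $\bar{G}$, so two different objects at the same true pose yield different pose estimates, exactly as the slide states.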

Camera Model Errors Scale error will cause non-convergence of PBVS! Although the estimated gripper and object frames align, the actual frames are not aligned.

Visual Measurements
To remove these model errors, the scale term is estimated by the IEKF using a modified measurement equation. The scale estimate requires four observed points, with at least one in each stereo field.
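One plausible parameterization of the modified measurement equation (an assumption on my part; the original was an image): treat the stereo half-baseline $b$ as scaled by a state $s$, so that, for a non-verged rig whose cameras share the rig-frame orientation,

\[
g_i^{L,R} = \pi\!\left(K\left({}^{C}X_i \pm s\,b\,\hat{x}\right)\right) + w_i^{L,R},
\qquad
{}^{C}X_i = {}^{C}H_{W}\,{}^{W}H_{E}\,G_i,
\]

with the sign selecting the left or right camera. A baseline scale barely changes a single monocular projection but directly changes stereo disparity, which is consistent with the slide's requirement that points be observed in both stereo fields for the scale to be estimable.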

Kinematic Model
- The kinematic measurement from the PUMA is ${}^{B}H_{E}$, the end-effector pose in the base frame
- Measurement prediction (for the IEKF) follows from frame composition: ${}^{B}H_{E} = {}^{B}H_{W}\;{}^{W}H_{E}$
- The hand-eye transformation ${}^{B}H_{W}$ is treated as a dynamic bias and estimated in the IEKF
- Estimating ${}^{B}H_{W}$ requires visual estimation of ${}^{W}H_{E}$, so it is dropped from the state vector when the gripper is obscured

Kalman Filter
- Kalman filter state vector: pose, velocity, and calibration parameters
- Measurement vector: visual + kinematic measurements
- Dynamic models: a constant velocity model for the pose and a static model for the calibration parameters
- Initial state is taken from kinematic measurements
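A heavily simplified sketch of this filter structure (scalar position and velocity plus one static calibration bias, linear measurements; the actual system runs an iterated EKF over 6-DOF pose, velocity, and camera/kinematic calibration states):

```python
import numpy as np

dt = 0.02                          # filter period (assumed value)
F = np.array([[1.0, dt, 0.0],      # constant-velocity model for pose
              [0.0, 1.0, 0.0],     # velocity carried forward
              [0.0, 0.0, 1.0]])    # static model for calibration bias
Q = np.diag([1e-4, 1e-3, 1e-8])    # process noise: calibration nearly static

H = np.array([[1.0, 0.0, 0.0],     # visual measurement: pose directly
              [1.0, 0.0, 1.0]])    # kinematic measurement: pose + hand-eye bias
R = np.diag([1e-2, 1e-3])          # measurement noise covariances

x = np.array([0.0, 0.0, 0.0])      # initial state (from kinematics in the real system)
P = np.eye(3)

def kf_step(x, P, z):
    # Predict with the dynamic model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with fused visual + kinematic measurements
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

for z_vis, z_kin in [(0.11, 0.16), (0.24, 0.28), (0.29, 0.35)]:
    x, P = kf_step(x, P, np.array([z_vis, z_kin]))
print(x)  # fused estimate of pose, velocity, and kinematic bias
```

Fusing both measurement rows in a single update is what lets the bias state separate the kinematic offset from the true pose, mirroring the visual/kinematic fusion described above.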

Constraints
- Three points are required for visual pose recovery
- Stereo measurements are required for scale estimation
- LED association requires multiple observed LEDs
- Estimation of ${}^{B}H_{W}$ requires visual observations
Use a hierarchy of estimators ($n_L$, $n_R$ = number of observed points in the left/right image), as sketched in the code below:
- $n_L, n_R < 3$: EOL control, no estimation of K1 or ${}^{B}H_{W}$
- $n_L \geq 3$ xor $n_R \geq 3$: hybrid control, no estimation of K1
- $n_L, n_R \geq 3$: hybrid control (visual + kinematic)
Excluded state variables are discarded by setting the corresponding rows and columns of the Jacobian to zero.
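A small sketch of how this hierarchy might be realized (hypothetical helper names; only the thresholds and the Jacobian-zeroing trick come from the slide):

```python
import numpy as np

def select_mode(n_left: int, n_right: int) -> str:
    """Pick the estimator tier from the LED count in each camera."""
    pose_l, pose_r = n_left >= 3, n_right >= 3
    if not (pose_l or pose_r):
        return "EOL"            # kinematics only: freeze K1 and hand-eye bias
    if pose_l != pose_r:        # xor: only one camera sees enough points
        return "HYBRID_MONO"    # hybrid control, scale K1 frozen
    return "HYBRID_STEREO"      # full visual + kinematic fusion

def mask_jacobian(Hj: np.ndarray, frozen_states: list) -> np.ndarray:
    """Exclude state variables from an update by zeroing their Jacobian
    columns, so the filter leaves their estimates untouched."""
    Hj = Hj.copy()
    Hj[:, frozen_states] = 0.0
    return Hj
```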

LED Measurement
- LED centroids are measured using a red colour filter
- Measured and model LEDs are associated using a global matching procedure
- Robust global matching requires ≥ 3 LEDs
[Figure: predicted vs. observed LED positions on the image plane]
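The slide does not specify the matching algorithm; one standard way to implement a global (rather than greedy nearest-neighbour) association is Hungarian assignment on image-plane distances, sketched here:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_leds(predicted: np.ndarray, observed: np.ndarray, gate: float = 15.0):
    """Globally associate predicted model LEDs with observed centroids.
    predicted: (N, 2) pixel positions projected from the gripper model.
    observed:  (M, 2) centroids from the red colour filter.
    Returns (model_idx, obs_idx) pairs whose distance is within the gate."""
    cost = np.linalg.norm(predicted[:, None, :] - observed[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)   # minimizes total distance
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]
```

Because the assignment is solved jointly over all pairs, a single spurious blob cannot steal a match from a correct LED, which helps explain why robust matching needs several LEDs.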

Experimental Results
Positioning experiment:
- Align the midpoint between thumb and forefinger at coloured marker A
- Align the thumb and forefinger on the line between A and B
Accuracy evaluation:
- Translation error: distance between the thumb/forefinger midpoint and A
- Orientation error: angle between the line joining thumb and forefinger and the line joining A and B

Positioning Accuracy
[Images: hybrid controller, initial and final poses (right camera only)]

Positioning Accuracy
[Images: ECL and EOL controllers, final poses (right camera only)]

Positioning Accuracy
Accuracy was measured over 5 trials per controller.

Tracking Robustness
- Initial pose: gripper outside the field of view (EOL control)
- Gripper enters the field of view (hybrid control, stereo)
- Final pose: gripper obscured (hybrid control, mono)

Tracking Robustness
[Plots: translational component of the pose error and the estimated scale (camera calibration parameter) over time, with phases labelled EOL, hybrid stereo, and hybrid mono]

Baseline Error
Error introduced into the calibrated baseline: the baseline was scaled by factors between 0.7 and 1.5.
[Plots: hybrid PBVS performance in the presence of baseline error]

Verge Error
Error introduced into the calibrated verge angle: offsets between −6 and +8 degrees.
[Plots: hybrid PBVS performance in the presence of verge error]

Servoing Task

Conclusions
We have proposed a hybrid PBVS scheme to solve problems arising in real-world tasks:
- Kinematic measurements overcome occlusions
- Visual measurements improve accuracy and overcome calibration errors
Experimental results verify the increased accuracy and robustness compared with conventional methods.