Visual Servo Control Tutorial Part 1: Basic Approaches Chayatat Ratanasawanya December 2, 2009 Ref: Article by Francois Chaumette & Seth Hutchinson.

Overview
Introduction
Basic components of visual servoing
Image-based visual servo (IBVS)
Position-based visual servo (PBVS)
Stability analysis
Conclusion
Questions/comments

Introduction
Visual servo (VS) control – the use of computer vision data to control the motion of a robot. It relies on techniques from image processing, computer vision, and control theory. Two camera configurations:
▫Eye-in-hand: the camera is mounted on a robot manipulator or on a mobile robot.
▫Eye-to-hand: the camera is fixed in the workspace.

Basic components of VS
Error function e(t) = s(m(t), a) − s*
▫The goal is to minimize the error e.
Design of s:
▫Consists of a set of features that are readily available in the image data (IBVS), or
▫Consists of a set of 3D parameters, which must be estimated from image measurements (PBVS)

Basic components of VS (Cont’d)
Interaction matrix (feature Jacobian) Le, which relates the time variation of s to the camera velocity: ṡ = Le vc
Design of the controller
▫Can be done quite simply once s is selected
▫The most straightforward approach is to design a velocity controller: vc = −λ L̂e⁺ e
In practice, it is impossible to know Le or Le⁺ perfectly, so an approximation L̂e⁺ must be used.
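The velocity controller can be sketched in a few lines. This is a minimal illustration with NumPy (the function name and gain value are illustrative, not from the article):

```python
import numpy as np

def velocity_control(L_e_hat, s, s_star, lam=0.5):
    """Classical visual-servo velocity control law: v_c = -lambda * L_e_hat^+ e.

    L_e_hat : (k, 6) approximation of the interaction matrix
    s, s_star : length-k current and desired feature vectors
    lam : positive control gain lambda
    Returns the 6-vector camera velocity (v_x, v_y, v_z, w_x, w_y, w_z).
    """
    e = s - s_star                              # feature error e = s - s*
    return -lam * np.linalg.pinv(L_e_hat) @ e   # v_c = -lambda * pinv(L_e_hat) @ e
```

With a perfect estimate of Le, this law drives the error exponentially to zero (ė = −λe).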

Image-based visual servo (IBVS)
The classical IBVS schemes use the image-plane coordinates of a set of points to define s.
m – the pixel coordinates of a set of image points.
a – the camera intrinsic parameters.
For a 3D point X = (X, Y, Z) in the camera frame, the perspective projection model gives the normalized image point x = (x, y) = (X/Z, Y/Z), observed at pixel coordinates (u, v). The interaction matrix of the point is

Lx = [ −1/Z    0     x/Z    xy      −(1+x²)    y ]
     [   0    −1/Z   y/Z    1+y²    −xy       −x ]

Image-based visual servo (IBVS)
To control the 6 DOF of the camera, at least three points are necessary. If the feature vector is chosen as x = (x1, x2, x3), the Jacobian matrix is obtained by stacking the three point interaction matrices. However, more than 3 points are usually considered, because there exist configurations for which Lx is singular, and with only three points it is not possible to differentiate between the global minima (distinct camera poses for which e = 0) when they exist.

IBVS: Estimating the interaction matrix
1.L̂e⁺ = Le⁺, if Le is known; i.e., if the current depth Z of each point is available.
2.L̂e⁺ = Le*⁺, the constant value of Le⁺ at the desired position; only the desired depth Z* of each point is needed.
3.L̂e⁺ = ((Le + Le*)/2)⁺, the mean of the current and desired interaction matrices; like case 1, this requires the current Z.
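These three choices can be made concrete with a short sketch (NumPy; the helper names are illustrative, and the point interaction matrix is the standard one for normalized image coordinates):

```python
import numpy as np

def L_point(x, y, z):
    """2x6 interaction matrix of a normalized image point (x, y) at depth z."""
    return np.array([
        [-1.0 / z,      0.0, x / z,  x * y,     -(1 + x * x),  y],
        [     0.0, -1.0 / z, y / z,  1 + y * y,       -x * y, -x],
    ])

def L_e_hat_pinv(points_cur, points_des, scheme="mean"):
    """Pseudo-inverse of the interaction-matrix estimate for the control law.

    points_cur, points_des : lists of (x, y, z) triples at the current and
    desired camera poses. scheme selects the estimate:
      "current" - case 1, L_e at the current pose (needs current depths z)
      "desired" - case 2, constant L_e* computed at the desired pose
      "mean"    - case 3, (L_e + L_e*)/2, which also needs the current z
    """
    L_cur = np.vstack([L_point(*p) for p in points_cur])   # stacked (2N)x6
    L_des = np.vstack([L_point(*p) for p in points_des])
    L = {"current": L_cur, "desired": L_des, "mean": 0.5 * (L_cur + L_des)}[scheme]
    return np.linalg.pinv(L)
```

With N points the stacked matrix is (2N)×6, so N ≥ 3 is needed for full rank, and N = 4 is a common choice to avoid the singular and ambiguous configurations noted above.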

Example of IBVS positioning

IBVS result: case 1

IBVS result: case 2

IBVS result: case 3

IBVS with a stereovision system
A straightforward extension of the IBVS approach: if a 3D point is visible in both the left and right images, its coordinates in the two images can be used as visual features. Since the 3D coordinates of any point observed in both images can be estimated easily by triangulation, it is also possible, and quite natural, to use these 3D coordinates in the feature set s.
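For a rectified stereo pair, the triangulation step is just the classic disparity relation; a minimal sketch (the focal length and baseline parameters are assumptions for illustration):

```python
import numpy as np

def triangulate(x_l, y_l, x_r, baseline, focal=1.0):
    """Recover the 3D point (X, Y, Z) in the left camera frame from a
    rectified stereo pair, using the disparity relation Z = f*b / (x_l - x_r).

    x_l, y_l : coordinates of the point in the left image
    x_r      : its x coordinate in the right image (same y after rectification)
    baseline : distance b between the two optical centers
    focal    : focal length f
    """
    disparity = x_l - x_r                  # horizontal disparity
    Z = focal * baseline / disparity       # depth from disparity
    return np.array([x_l * Z / focal, y_l * Z / focal, Z])
```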

Position-based visual servo (PBVS)
PBVS schemes use the pose of the camera w.r.t. some reference coordinate frame to define s. Computing that pose from a set of measurements in an image requires the camera intrinsic parameters and the 3D model of the object observed.
m – the pixel coordinates of a set of image points.
a – the camera intrinsic parameters and the 3D model of the object.

PBVS: definition of s
s = (t, θu), where t is a translation vector and θu is the angle/axis parameterization of the rotation.
1.If t is defined relative to the object frame, we have s = (t_o, θu) and s* = (t_o*, 0), where t_o and t_o* are the coordinates of the object frame origin in the current and desired camera frames.
Following the developments presented earlier, determining Le and the estimate of its inverse yields the control law.

PBVS result: case 1

PBVS: definition of s
s = (t, θu)
2.If t is defined as the position of the current camera frame relative to the desired one, we have s* = 0 and e = s; the corresponding control law decouples the translational and rotational motions.
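The θu parameterization and the resulting decoupled control law can be sketched as follows. This is a minimal illustration of the scheme where R and t give the pose of the current camera frame expressed in the desired frame; the function names and gain are assumptions:

```python
import numpy as np

def theta_u(R):
    """Angle-axis vector theta*u of a rotation matrix R (theta in [0, pi))."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)                      # no rotation error
    u = np.array([R[2, 1] - R[1, 2],            # axis from the skew-symmetric
                  R[0, 2] - R[2, 0],            # part of R
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * u

def pbvs_control(R, t, lam=0.5):
    """Decoupled PBVS law: v_c = -lambda * R.T @ t, omega_c = -lambda * theta*u.

    R, t : rotation and translation of the current camera frame expressed
    in the desired camera frame. Returns (v_c, omega_c).
    """
    return -lam * R.T @ t, -lam * theta_u(R)
```

Because translation and rotation are handled by independent terms, the camera translation converges along a straight line under this scheme.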

PBVS result: case 2

Stability analysis - IBVS
Local asymptotic stability can be ensured when the number of visual features in the vector s is greater than 6, but local minima (configurations where v_c = 0 while e ≠ e*) may then exist.
Global asymptotic stability cannot be guaranteed.
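The sufficient condition behind these statements is positivity of the product of the true interaction matrix and the estimate used in the controller; a quick numerical check of the local condition can be sketched as (illustrative helper, matrices chosen for the example only):

```python
import numpy as np

def local_stability_condition(L_e, L_e_hat):
    """Check the sufficient local-stability condition that the 6x6 product
    pinv(L_e_hat) @ L_e is positive definite, via its symmetric part."""
    M = np.linalg.pinv(L_e_hat) @ L_e
    sym = 0.5 * (M + M.T)                         # symmetric part of M
    return bool(np.all(np.linalg.eigvalsh(sym) > 0.0))
```

A perfect estimate (L̂e = Le) trivially satisfies the condition; a grossly wrong estimate (e.g., sign-flipped) violates it, and the control law would then diverge.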

Stability analysis - PBVS
Global asymptotic stability is achievable when all pose parameters are estimated perfectly.
Robustness: small errors in computing the positions of points in the image can lead to pose errors that significantly affect the accuracy and the stability of the system.

Conclusion
Is IBVS or PBVS better? There are performance trade-offs.
Stability: neither strategy provides perfect properties.
Correct estimation of the 3D parameters is important for IBVS, but crucial for PBVS.
In PBVS, the vision sensor is treated as a 3D sensor, so pose-estimation errors enter the control loop directly.
In IBVS, the vision sensor is treated as a 2D sensor; the scheme is therefore robust to calibration errors and image noise.

Questions/comments