UAV pose estimation using POSIT algorithm


Chayatat Ratanasawanya, Min He
July 21, 2010

Overview
- The goal
- Experimental setup
- POSIT algorithm
- Homogeneous transformation
- Optitrack system
- Result calculation
- Results
- Conclusion
- Questions/comments

Experimental goal
To determine the pose of the UAV from images taken by the on-board camera, using the POSIT algorithm as part of the process. The calculated pose is compared to the reading from the Optitrack system.

Test setup

Test setup
1. Move the Q-ball around the test area to 17 different locations; the Q-ball pose is different in each location.
2. At each location, take a picture with the on-board camera and record the Optitrack pose reading.
3. Process the pictures offline using the POSIT algorithm.
4. Calculate the Q-ball pose using homogeneous transformations and inverse kinematics.
5. Compare the results to the Optitrack readings.

POSIT algorithm
Developers: Daniel DeMenthon & Philip David (University of Maryland). The algorithm determines the pose of an object relative to the camera from a set of 2D image points.
Inputs:
- Image coordinates of at least 4 non-coplanar feature points
- 3D world coordinates of the same points
- Camera intrinsic parameters
Outputs:
- Rotation matrix of the object w.r.t. the camera
- Translation of the object w.r.t. the camera
Reference: http://www.cfar.umd.edu/~daniel/SoftPOSIT.txt
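Since the slide lists only the algorithm's inputs and outputs, here is a minimal sketch of that same contract in Python, assuming OpenCV is available. cv2.solvePnP (with the EPnP solver, which accepts the minimum of 4 points) is used as a modern stand-in for the original POSIT implementation; all point values, camera parameters, and variable names are illustrative.

```python
# Minimal pose-from-points sketch. cv2.solvePnP stands in for POSIT here;
# the geometry, intrinsics, and names below are illustrative only.
import numpy as np
import cv2

# 3D coordinates of at least 4 non-coplanar feature points (object frame, cm)
object_points = np.array([[0.0,  0.0,  0.0],
                          [10.0, 0.0,  0.0],
                          [0.0, 10.0,  0.0],
                          [0.0,  0.0, 10.0]], dtype=np.float64)

# Matching 2D image coordinates (pixels), e.g. from feature detection
image_points = np.array([[320.0, 240.0],
                         [400.0, 238.0],
                         [322.0, 160.0],
                         [318.0, 243.0]], dtype=np.float64)

# Camera intrinsic parameters: focal lengths and principal point
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(4)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the object w.r.t. the camera
print(R, tvec)              # tvec: translation of the object w.r.t. the camera
```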

Homogeneous transformation
A homogeneous transformation is a matrix which shows how one coordinate frame is related to another. It is used to convert the location of a point between two frames.
[Figure: frames A and C, offset by a translation (dx, dy, dz)]
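For reference, the standard matrix form (not written out on the slide) stacks the rotation R of frame C with respect to frame A and the origin offset d = (dx, dy, dz):

```latex
{}^{A}T_{C} =
\begin{bmatrix}
R_{3\times 3} & \mathbf{d} \\
\mathbf{0}_{1\times 3} & 1
\end{bmatrix},
\qquad
{}^{A}\mathbf{p} = {}^{A}T_{C}\,{}^{C}\mathbf{p}
\quad \text{(points } \mathbf{p} \text{ in homogeneous coordinates)}
```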

Homogeneous transformation
Multiplication: transforms chain across an intermediate frame, e.g. combining CTB and BTA relates frame A to frame C.
Inverse: the inverse matrix reverses the direction of the relation between two frames.
[Figure: frames A, B, C with transforms BTA and CTB]
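Written out, the two operations from the slide are the standard results (using the notation above):

```latex
% Composition through an intermediate frame B:
{}^{C}T_{A} = {}^{C}T_{B}\,{}^{B}T_{A}

% Closed-form inverse, exploiting R^{-1} = R^{\mathsf{T}} for rotations:
\left({}^{A}T_{C}\right)^{-1} = {}^{C}T_{A} =
\begin{bmatrix}
R^{\mathsf{T}} & -R^{\mathsf{T}}\mathbf{d} \\
\mathbf{0}_{1\times 3} & 1
\end{bmatrix}
```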

Forward kinematics
The process of deriving the transformation matrix from a known transformation (rotation and translation) between two frames.
[Figure: frame A rotated by ψ and θ and translated by (dx, dy, dz) to reach frame C]
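A small NumPy sketch of this step, assuming (as the slide's figure suggests) a yaw ψ about z, a pitch θ about y, and a translation (dx, dy, dz); the angle values, offsets, and function names are illustrative:

```python
# Forward kinematics sketch: build a 4x4 homogeneous transform from a
# known rotation (yaw psi about z, pitch theta about y) and translation.
import numpy as np

def rot_z(psi):
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def homogeneous(R, d):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = d
    return T

# Frame C = frame A rotated by psi then theta, then translated by (dx, dy, dz)
psi, theta = np.radians(30.0), np.radians(10.0)  # illustrative angles
d = np.array([1.0, 2.0, 0.5])                    # illustrative offset
A_T_C = homogeneous(rot_z(psi) @ rot_y(theta), d)
print(A_T_C)
```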

Inverse kinematics
The process of deriving the transformation (rotation and translation) between two frames from a known transformation matrix: the translation is read off directly, and the rotation angles are recovered with the inverse kinematics formulas.

Inverse kinematics formulas
[Figure: rotation angles ψ and θ shown relative to the x, y, z axes]
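The formulas themselves did not survive extraction. Assuming the ZYX (yaw ψ, pitch θ, roll φ) Euler convention, which matches the roll/yaw/pitch angles reported in the results, the standard extraction from the rotation entries r_ij of the transform would be:

```latex
\theta = \operatorname{atan2}\!\left(-r_{31},\ \sqrt{r_{11}^{2}+r_{21}^{2}}\right),\qquad
\psi = \operatorname{atan2}\!\left(r_{21},\ r_{11}\right),\qquad
\phi = \operatorname{atan2}\!\left(r_{32},\ r_{33}\right)
```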

Optitrack system
A motion capture system: it tracks the movement of IR reflectors attached to an object in the workspace using six IR cameras. The origin of the workspace (world) coordinates has to be set up during system calibration.
- Point cloud mode: gives the x, y, z coordinates of each individual reflector in a group
- Trackable mode: gives the pose of an object defined by a group of reflectors

Q-ball trackable object

Result calculation
Step 1: POSIT gives CTB, the pose of the box frame B relative to the camera frame C.
[Figure: world frame W, box frame B, camera frame C, Q-ball frame Q; CTB from POSIT]

Result calculation
Step 2: WTB, the pose of the box frame B in the world frame W, is known from the setup.
[Figure: the same four frames; WTB highlighted]

Result calculation
Step 3: QTC, the pose of the camera frame C relative to the Q-ball frame Q, is defined by the camera mounting on the Q-ball.
[Figure: the same four frames; QTC highlighted]

Result calculation
Step 4: the three transforms CTB (from POSIT), WTB, and QTC are combined to obtain the Q-ball pose in the world frame; the inverse kinematics formulas then give its translation and rotation angles.
[Figure: the same four frames with all three transforms]
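A sketch of the full chain, under the assumption that the slides' notation XTY means "frame Y expressed in frame X" (so the Q-ball pose in the world frame is WTB · inv(CTB) · inv(QTC)); the identity matrices below are placeholders for measured values:

```python
# Chain the three transforms to get the Q-ball pose in the world frame.
# Assumes XTY maps Y-frame coordinates into frame X, so:
#   W_T_Q = W_T_B @ inv(C_T_B) @ inv(Q_T_C)
import numpy as np

def invert_homogeneous(T):
    """Closed-form inverse of a 4x4 homogeneous transform."""
    R, d = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ d
    return Ti

def euler_zyx(T):
    """Roll/pitch/yaw (ZYX convention, degrees) from a transform."""
    R = T[:3, :3]
    pitch = np.arctan2(-R[2, 0], np.hypot(R[0, 0], R[1, 0]))
    yaw = np.arctan2(R[1, 0], R[0, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return np.degrees([roll, pitch, yaw])

# Placeholder inputs: W_T_B and Q_T_C measured offline, C_T_B from POSIT
W_T_B = np.eye(4)
Q_T_C = np.eye(4)
C_T_B = np.eye(4)

W_T_Q = W_T_B @ invert_homogeneous(C_T_B) @ invert_homogeneous(Q_T_C)
translation = W_T_Q[:3, 3]           # Q-ball position in the world frame
roll, pitch, yaw = euler_zyx(W_T_Q)  # Q-ball attitude in the world frame
```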

Results

Exp |      Optitrack measurements of Q-ball pose      |       Calculation of Q-ball pose using POSIT
    |       x       y       z    roll     yaw   pitch |        x        y        z     roll      yaw    pitch
    |    (cm)    (cm)    (cm)   (deg)   (deg)   (deg) |     (cm)     (cm)     (cm)    (deg)    (deg)    (deg)
  1 |  -55.92   78.27  -33.51  -6.432  -11.43  -8.805 | -56.9388  88.6125 -31.7526  -2.1859 -11.8127 -12.1935
  2 |  -46.26   63.91  -32.31      -1  -9.369 -0.6592 | -46.1600  71.5917 -31.5440   0.8890  -8.4284  -3.5944
  3 |  -66.03   66.32   17.57  -10.48  -20.46   0.028 | -63.2006  51.1865  15.7156  -9.7964 -18.8214   7.8211
  4 |   3.569   103.7  -40.82  -5.253   0.423  -11.02 |  18.7217 109.0871 -47.2341  -6.7481   5.8695 -14.7075
  5 |   32.92   103.2   -54.8  -1.326   20.29  -11.55 |  29.4440 109.7598 -55.9636  -5.4079  17.9145 -15.6576
  6 |   61.71   103.1  -65.01   12.11   31.45  -12.75 |  62.8314 109.1026 -70.7933   3.4818  34.5913 -12.1920
  7 |    82.7   102.2  -70.74    13.1   41.74  -13.25 |  85.1300 104.3310 -78.3997   5.1808  47.0439  -6.5138
  8 |    23.1   103.5  -40.13   9.587   16.73  -12.26 |  34.8409 108.5184 -48.2129   4.2641  23.8152 -12.6185
  9 |  -53.57   92.41  -52.75  -5.341  -16.65  -12.19 | -49.9633  98.3089 -54.4165  -0.7202 -15.7037 -14.3379
 10 |  -46.95   92.62  -72.83  -1.814  -25.09  -13.02 | -45.4508  86.6950 -71.7959   3.3547 -23.8781 -10.2785
 11 |  -41.74   92.17  -65.05   1.685  -10.55  -10.03 | -38.9600  98.1515 -65.5449   3.8103  -8.2466 -13.2455
 12 |  -62.84   103.6  -9.212   1.721   -10.9  -11.78 | -68.0221 105.1375 -11.2639   5.5375 -11.9621 -13.3388
 13 |  -59.93   100.9  -2.383 -0.6124  -5.947  0.5845 | -66.7789 103.5136   5.9820  -1.1543  -7.3512   0.6478
 14 |  -54.06   66.81   -5.23     6.3  -9.559   11.55 | -53.8803  83.6447  -2.1709   3.7949  -9.1901   5.5188
 15 |  -60.56   76.76  -18.05  0.1576   -24.6   1.029 | -60.5642  76.0576 -18.4691  -0.3815 -24.2316   1.3763
 16 |  -53.01   76.78  -26.84 -0.9526  -23.04   0.681 | -56.0662  83.8747 -28.5519   1.6016 -23.9035  -2.3402
 17 |  -43.46   79.25  -40.21  -8.653  -10.15   -9.22 | -46.0110  87.4031 -41.7292  -4.6202 -12.5088 -11.3917

Results: Translational DOF

Results: Rotational DOF

Results: Error
Maximum errors:
- x: 15 cm
- y: 17 cm
- z: 8 cm
- roll: 8.5 deg
- yaw: 7 deg
- pitch: 8 deg
Sources of error:
- Optitrack measurement accuracy of 4 cm
- The imaginary c.g. of the Q-ball trackable object does not correspond exactly to the c.g. of the Q-ball used to define QTC
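For reproducibility, a small sketch of how these maxima can be recomputed from the results table (only the first three experiments are typed out here; the remaining rows extend the arrays the same way):

```python
# Recompute the per-axis maximum absolute error |POSIT - Optitrack|.
import numpy as np

optitrack = np.array([
    [-55.92, 78.27, -33.51, -6.432, -11.43, -8.805],
    [-46.26, 63.91, -32.31, -1.000, -9.369, -0.6592],
    [-66.03, 66.32,  17.57, -10.48, -20.46,  0.028],
])
posit = np.array([
    [-56.9388, 88.6125, -31.7526, -2.1859, -11.8127, -12.1935],
    [-46.1600, 71.5917, -31.5440,  0.8890,  -8.4284,  -3.5944],
    [-63.2006, 51.1865,  15.7156, -9.7964, -18.8214,   7.8211],
])

labels = ["x (cm)", "y (cm)", "z (cm)", "roll (deg)", "yaw (deg)", "pitch (deg)"]
max_err = np.abs(posit - optitrack).max(axis=0)
for name, e in zip(labels, max_err):
    print(f"max |error| {name}: {e:.2f}")
```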

Conclusion
- The POSIT algorithm can be used to estimate the pose of the UAV offline.
- A 3D object of known dimensions must be in the scene.
- At least 4 non-coplanar feature points must be seen in the image.
- The position of the object in the world frame must be known (transform WTB).

Summary
- Experimental goal and setup
- POSIT algorithm
- Results of POSIT & homogeneous transformation
- Optitrack system
- How to calculate the result
- Test results

Thank you