Active Vision Sensor Planning of CardEye Platform
Sherif Rashad, Emir Dizdarevic, Ahmed Eid, Chuck Sites and Aly Farag (Researchers)
Sponsor: US Army Mounted Maneuver BattleSpace Lab
www.cvip.uofl.edu

Objective The main objective of this project is to design and implement an active sensor planning algorithm for the CardEye platform. For this system, generalized camera parameters such as position, orientation, and optical settings must be determined according to the new position of the robot arm, so that the arm's features are within the field of view of the CardEye cameras and in focus.

System Overview The robot arm and the CardEye head are coordinated through a supercomputer:
- The robot arm transmits its current coordinates to the supercomputer over the serial port.
- The sensor planning module on the supercomputer reads the robot coordinates.
- The planned parameters are sent to the CardEye to adjust the cameras' settings to the new configuration.
- The CardEye active vision system sends the captured images of the robot arm back to be displayed.
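As a concrete illustration of the serial link above, here is a minimal Python sketch assuming a pyserial-style connection. The port names, baud rate, and comma-separated message format are assumptions for illustration, not the actual CardEye protocol.

```python
# Minimal sketch of the serial link between the robot arm, the supercomputer,
# and the CardEye head. Port names, baud rate, and the comma-separated message
# format are ASSUMPTIONS for illustration; the real protocol is not specified.
import serial  # pyserial


def read_robot_coordinates(port="/dev/ttyS0", baud=9600):
    """Read one 'x,y,z' line of robot coordinates from the serial port."""
    with serial.Serial(port, baudrate=baud, timeout=1.0) as link:
        line = link.readline().decode("ascii").strip()
    x, y, z = (float(v) for v in line.split(","))
    return x, y, z


def send_camera_settings(t, alpha, gamma, port="/dev/ttyS1", baud=9600):
    """Send the planned translation, vergence, and field-of-view settings."""
    msg = f"{t:.4f},{alpha:.4f},{gamma:.4f}\n".encode("ascii")
    with serial.Serial(port, baudrate=baud, timeout=1.0) as link:
        link.write(msg)
```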

CardEye Platform This platform uses an agile trinocular vision head that contains three CCD cameras (C', C'', C''') whose lenses provide automated zoom and focus. The cameras are placed at equal distances from each other and can translate (t) along their mounts to change the baseline distance. At the same time, the cameras can rotate towards each other to fixate on a point in space by changing the vergence angle (α). The target is assumed to be inside a sphere of radius R.

Geometry for Sensor Planning
[Figure: the three cameras C', C'', C''' sit at distance t from the origin O of the XYZ frame; each optical axis points at the fixation point G, the center of the sphere of radius R that contains the object, at distance d from O; l denotes the camera-to-fixation-point distance |GC'|.]
Camera parameters: translation (t), vergence angle (α), field-of-view angle (γ) (for the zoom setting).

Sensor Planning The system's fixation point is the center G of the sphere, which lies at distance d from the origin along the Z axis. At each step, the sensor planning module knows only the radius R and the distance d (computed from the initial position and the current coordinates of the robot). For suitable planning, we must compute the translation t, the vergence angle α so that all three cameras fixate on the same point in 3D space, and the field-of-view angle γ to set the zoom of the cameras.
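The figure's geometry already determines the remaining parameters once t is chosen: the camera-to-target distance is l = sqrt(t² + d²), the vergence angle rotates the optical axis onto GC', and the view cone must just enclose the sphere. A minimal sketch, assuming α is measured between the optical axis and the Z direction and γ is the full cone angle (these symbol conventions are our assumptions):

```python
import math


def plan_sensors(t, d, R):
    """Plan one camera of the trinocular head from the figure's geometry.

    t: camera translation along its mount (distance from the Z axis) [m]
    d: distance from the origin O to the sphere center G along Z [m]
    R: radius of the sphere containing the target [m]
    Returns (alpha, gamma, l): vergence angle and field-of-view angle in
    radians, and the camera-to-fixation-point distance in meters.
    """
    l = math.hypot(t, d)                       # l = |GC'| = sqrt(t^2 + d^2)
    alpha = math.atan2(t, d)                   # rotate the optical axis onto GC'
    gamma = 2.0 * math.asin(min(1.0, R / l))   # view cone just enclosing the sphere
    return alpha, gamma, l
```

For instance, plan_sensors(0.2, 2.0, 0.4) yields a vergence of roughly 5.7° and a field of view of roughly 23°.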

System Constraints (a) Overlap Constraint
[Figure: two cameras C' and C''' with half-baseline √3·t/2 fixate on G; O' is the baseline midpoint, d' the distance from O' to G, and R the sphere radius.]
By maximizing the overlap angle, the overlap area is also maximized. By decreasing t, we increase the overlap.

System Constraints (b) Disparity Constraint
[Figure: the same camera pair C', C''' with half-baseline √3·t/2 and baseline midpoint O', now with a scene point P near G at distance d' from O'; the total angular disparity is the angle subtended at P by the two cameras.]
By increasing t, more adequate depth information can be recovered from the imaged object.
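The two constraints pull t in opposite directions, which the following sketch makes explicit. It uses the half-baseline √3·t/2 visible in both figures (three cameras spaced 120° on a circle of radius t give a baseline of √3·t) and takes the angle subtended at the target by a camera pair as the disparity measure; the overlap proxy is our assumption standing in for the slide's overlap formula.

```python
import math


def angular_disparity(t, d_prime):
    """Angle subtended at a scene point by the camera pair C', C'''.

    The half-baseline is sqrt(3)*t/2 for three cameras spaced 120 degrees
    on a circle of radius t; d_prime is the distance from the baseline
    midpoint O' to the point. Larger t -> larger disparity -> better depth.
    """
    return 2.0 * math.atan(math.sqrt(3.0) * t / (2.0 * d_prime))


def overlap_proxy(t, d_prime):
    """ASSUMED stand-in for the overlap measure: 1.0 when the two viewing
    directions coincide (t -> 0), decreasing as t grows and the common
    visible part of the sphere shrinks."""
    return math.cos(angular_disparity(t, d_prime) / 2.0)
```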

Analysis of System Constraints
For effective reconstruction, the images must display adequate depth information (which calls for increasing t) and have a fairly large overlap area (which calls for decreasing t). Solution:
1. Analyze the effect of object distance on the overlap and disparity angles and compute the translation t.
2. Normalize the translation values based on the physical range of the system translation.
3. Estimate the system workspace.
4. Repeat step 1 and compute t as a function of object distance d (a sketch of this procedure follows).
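A hedged sketch of steps 1-4: for each object distance, take the largest translation whose overlap stays acceptable, then fit the 2nd-order polynomial t(d) that the next slide tabulates per object size. The translation range, overlap threshold, and workspace bounds below are assumed values, not the CardEye's real limits.

```python
import math
import numpy as np


def overlap_proxy(t, d_prime):  # same ASSUMED proxy as in the previous sketch
    return math.cos(math.atan(math.sqrt(3.0) * t / (2.0 * d_prime)))


T_MIN, T_MAX = 0.05, 0.35   # ASSUMED physical translation range of the head [m]
MIN_OVERLAP = 0.99          # ASSUMED lower bound on the overlap proxy


def best_t(d):
    """Largest t (most disparity) whose overlap stays acceptable; the
    distance from the baseline midpoint to the target is approximated by d."""
    for t in np.linspace(T_MAX, T_MIN, 200):   # scan from large t downward
        if overlap_proxy(t, d) >= MIN_OVERLAP:
            return float(t)
    return T_MIN


# Steps 3-4: tabulate t over the assumed workspace, then fit the 2nd-order
# polynomial t(d) = a2*d^2 + a1*d + a0, mirroring the per-case equations below.
ds = np.linspace(1.2, 7.0, 50)
ts = [best_t(d) for d in ds]
a2, a1, a0 = np.polyfit(ds, ts, deg=2)
```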

Five cases of object size are analyzed, and a solution for t is estimated for each case:
Case 1: 0.2 m < R < 0.3 m, 1.200 m < d < 7 m, t = … d² + … d [m]
Case 2: 0.3 m < R < 0.5 m, 1.925 m < d < 7 m, t = 0.… d² + 0.04702 d [m]
Case 3: 0.5 m < R < 0.7 m, 2.650 m < d < 7 m, t = 0.… d² + 0.05530 d [m]
Case 4: 0.7 m < R < 0.9 m, 3.375 m < d < 7 m, t = 0.… d² + 0.06668 d [m]
Case 5: 0.9 m < R < 1.0 m, 4.100 m < d < 7 m, t = 0.… d² + 0.08380 d [m]
A 2nd-order polynomial in d gives the sensor placement t; a 3rd-order polynomial gives the voltage used to control the zoom lenses.
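In code, the case table reduces to selecting a row by the object radius R and evaluating the quadratic in d, roughly as below. The quadratic coefficients (and Case 1's linear term) are hypothetical placeholders; only the linear terms of Cases 2-5 repeat figures quoted on the slide. The structure, not the numbers, is the point.

```python
# Case table for the sensor placement t(d). The a2 values and Case 1's a1 are
# HYPOTHETICAL placeholders; only the a1 values of Cases 2-5 come from the
# slide. Each row: (R_min, R_max, d_min, a2, a1), with t = a2*d^2 + a1*d [m].
CASES = [
    (0.2, 0.3, 1.200, 0.001, 0.040),    # Case 1: placeholder coefficients
    (0.3, 0.5, 1.925, 0.001, 0.04702),
    (0.5, 0.7, 2.650, 0.001, 0.05530),
    (0.7, 0.9, 3.375, 0.001, 0.06668),
    (0.9, 1.0, 4.100, 0.001, 0.08380),
]


def placement_t(R, d):
    """Select the case by object radius R and evaluate the quadratic in d."""
    for r_min, r_max, d_min, a2, a1 in CASES:
        if r_min < R <= r_max and d_min < d < 7.0:
            return a2 * d * d + a1 * d
    raise ValueError("object outside the planned workspace")
```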

Sample of Results (1) At d = 2.091 m and R = 0.399 m. [Images: the scene before sensor planning and after sensor planning.]

Sample of Results (2) At d = 1.525 m and R = 0.200 m. [Images: the scene before sensor planning and after sensor planning.]