Stereoscopic Imaging for Slow-Moving Autonomous Vehicle By: Alex Norton Advisor: Dr. Huggins February 28, 2012 Senior Project Progress Report Bradley University.

Stereoscopic Imaging for Slow-Moving Autonomous Vehicle By: Alex Norton Advisor: Dr. Huggins February 28, 2012 Senior Project Progress Report Bradley University ECE Department

Presentation Outline
- Review of Proposed Project
  - Project Overview
  - Original Proposed Schedule
- Tasks Completed
  - Webcam setup
  - Calibration mode software
- Remaining Tasks
  - Run mode software
  - Improve existing software
- Revised Schedule

Project Overview
Two horizontally aligned, slightly offset cameras take a pair of images at the same time. By matching corresponding pixels between the two images, the distances to objects can be calculated using triangulation. This depth information can be used to create a 3-D image and a terrain map.
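For a rectified stereo pair, the triangulation reduces to Z = f * B / d, where f is the focal length in pixels, B the baseline between the cameras, and d the pixel disparity of a match. A minimal sketch of that relation; the focal length, 12 cm baseline, and disparities below are illustrative assumptions, not values measured from the project's webcams:

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
# f: focal length in pixels, B: baseline in meters, d: disparity in pixels.
# All numeric values here are illustrative assumptions.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

f_px = 700.0       # assumed focal length in pixels
baseline_m = 0.12  # assumed 12 cm between webcam centers

for d in (70.0, 35.0, 7.0):
    z = depth_from_disparity(f_px, baseline_m, d)
    print(f"disparity {d:5.1f} px -> depth {z:.2f} m")
```

Note that nearer objects produce larger disparities, so depth resolution is best close to the cameras.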

Original Proposed Schedule
Tentative schedule for Spring 2012:

Weeks | Alex Norton | Matthew Foster
1     | Assemble camera setup | Assemble camera setup
2     | Configure calibration rig | Ensure OpenCV runs correctly on lab computers
3     | Begin writing OpenCV code for calibration mode | Begin writing OpenCV code for run mode
4-6   | Continue writing OpenCV code for calibration mode | Continue writing OpenCV code for run mode
7-8   | Test and debug calibration mode code | Continue writing OpenCV code for run mode
9     | Test run mode code with calibrated cameras | Test run mode code with calibrated cameras
10-11 | Debug calibration mode code | Debug run mode code
12-13 | Test and debug complete computer vision code | Test and debug complete computer vision code
14    | Prepare for final presentation | Prepare for final presentation

Tasks Completed: Webcam Setup
- Creates "capture" objects for both webcams
- Takes a set of images each time the "enter" key is pressed
- Displays the saved set of images in two windows
- Saves the images to a specified folder for further image processing

Webcam Setup Output

Necessity of Calibration
- Produces the rotation and translation matrices needed to rectify sets of images
- Rectification makes stereo correspondence more accurate and more efficient
- Failing to calibrate the cameras is one possible reason past groups did not get accurate results

Calibration Mode Software
- Input: a list of image pairs of a chessboard, plus the number of corners along the length and width of the chessboard
- The code reads in the left and right image pairs, finds the chessboard corners, and sets object and image points for every image in which all the corners could be found
- Given this list of found chessboard points, the code calls cvStereoCalibrate() to calibrate the cameras
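The "object points" handed to the calibration routine are the known 3-D positions of the inner chessboard corners in the board's own plane (Z = 0), repeated for every usable image pair. A sketch of how that grid might be built; the 9x6 corner count and 25 mm square size are assumptions for illustration, not the project's actual board:

```python
# Build the 3-D "object points" for one chessboard view: the inner corners
# lie on a planar grid at Z = 0, spaced one square-size apart.
# Corner counts and square size below are illustrative assumptions.

def chessboard_object_points(corners_x, corners_y, square_size):
    points = []
    for row in range(corners_y):
        for col in range(corners_x):
            points.append((col * square_size, row * square_size, 0.0))
    return points

obj_pts = chessboard_object_points(9, 6, 25.0)  # assumed 9x6 corners, 25 mm squares
print(len(obj_pts))        # 54 corners per view
print(obj_pts[0], obj_pts[10])
```

The same grid is paired with the detected pixel coordinates (the "image points") from each camera for every view in which detection succeeded.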

Calibration Mode Software
- Calibration gives the camera matrix M and the distortion vector D for each of the two cameras; it also yields the rotation matrix R, the translation vector T, the essential matrix E, and the fundamental matrix F
- The accuracy of the calibration is assessed by checking how nearly the points in one image lie on the epipolar lines of the other image
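That accuracy check can be sketched as computing, for each correspondence, the distance from a point to the epipolar line l = F x of its partner. For an ideally rectified horizontal pair the fundamental matrix has the simple fixed form used below; the sample point correspondences are made up for illustration:

```python
# Epipolar check: a left-image point x maps to a line l = F @ x in the
# right image; a good calibration puts the matched right point near l.
# F_rect is the ideal fundamental matrix of a rectified horizontal pair;
# the sample correspondences are illustrative assumptions.

def epipolar_distance(F, left_pt, right_pt):
    u, v = left_pt
    a = F[0][0] * u + F[0][1] * v + F[0][2]   # line l = F @ (u, v, 1)
    b = F[1][0] * u + F[1][1] * v + F[1][2]
    c = F[2][0] * u + F[2][1] * v + F[2][2]
    up, vp = right_pt
    return abs(a * up + b * vp + c) / (a * a + b * b) ** 0.5

F_rect = [[0.0, 0.0,  0.0],
          [0.0, 0.0, -1.0],
          [0.0, 1.0,  0.0]]

# Perfectly rectified match: same row -> zero distance.
print(epipolar_distance(F_rect, (320.0, 240.0), (290.0, 240.0)))  # 0.0
# Match 3 pixels off its epipolar line.
print(epipolar_distance(F_rect, (320.0, 240.0), (290.0, 243.0)))  # 3.0
```

Averaging this distance over all chessboard corners gives a single reprojection-style error figure for the calibration.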

Calibration Mode Software
- The code then computes the rectification maps using Bouguet's method via cvStereoRectify()
- The rectified images are computed with cvRemap()
- The disparity map is computed with cvFindStereoCorrespondenceBM()
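Conceptually, the remap step looks up, for every pixel of the rectified output, where that pixel came from in the original image. A toy nearest-neighbor version over a small grayscale grid; cvRemap additionally interpolates between source pixels, and the one-pixel horizontal shift used here is an illustrative stand-in for a real rectification map:

```python
# Toy version of the remap step: dst[y][x] = src[map_y[y][x]][map_x[y][x]].
# cvRemap performs the same lookup with subpixel interpolation; the shift
# map below is an illustrative assumption, not a real rectification map.

def remap_nearest(src, map_x, map_y, fill=0):
    h, w = len(src), len(src[0])
    dst = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = map_x[y][x], map_y[y][x]
            if 0 <= sx < w and 0 <= sy < h:   # out-of-range pixels stay filled
                dst[y][x] = src[sy][sx]
    return dst

src = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
# Shift everything one pixel left: output (x, y) samples source (x + 1, y).
map_x = [[x + 1 for x in range(3)] for _ in range(3)]
map_y = [[y for _ in range(3)] for y in range(3)]
print(remap_nearest(src, map_x, map_y))
```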

Calibration Mode Software
Two methods for stereo rectification:
- Hartley's method: uses the fundamental matrix, does not require the cameras to be calibrated, and produces more distorted images than Bouguet's method
- Bouguet's method: uses the rotation and translation parameters from two calibrated cameras, and also outputs the reprojection matrix Q used to project two-dimensional points into three dimensions
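The role of Q can be sketched as follows: it maps a pixel (x, y) with disparity d to homogeneous coordinates [X, Y, Z, W], which are then divided by W. The intrinsics and baseline below are assumed values, not the project's calibration output, and the sign convention follows OpenCV, where Tx is negative for a camera to the right:

```python
# Reprojection with the Q matrix from stereo rectification:
# [X, Y, Z, W]^T = Q @ [x, y, d, 1]^T, then divide through by W.
# Intrinsics (f, cx, cy) and baseline are illustrative assumptions.

def reproject(Q, x, y, d):
    vec = (x, y, d, 1.0)
    Xh, Yh, Zh, W = (sum(Q[r][c] * vec[c] for c in range(4)) for r in range(4))
    return (Xh / W, Yh / W, Zh / W)

f, cx, cy = 700.0, 320.0, 240.0   # assumed intrinsics (pixels)
Tx = -0.12                        # assumed baseline: right camera 12 cm away

Q = [[1.0, 0.0, 0.0, -cx],
     [0.0, 1.0, 0.0, -cy],
     [0.0, 0.0, 0.0, f],
     [0.0, 0.0, -1.0 / Tx, 0.0]]  # assumes equal principal points cx = cx'

X, Y, Z = reproject(Q, 420.0, 240.0, 70.0)
print(f"X={X:.3f} m  Y={Y:.3f} m  Z={Z:.3f} m")
```

The recovered Z agrees with the plain triangulation formula Z = f * |Tx| / d, which is why Q is the bridge from the disparity map to the terrain map.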

Calibration Mode Software: Matrices
- Rotation matrix R and translation vector T: extrinsic parameters; they bring the right camera into the left camera's coordinate frame, which makes the two image planes coplanar
- Fundamental matrix F: relates a point on the image plane of one camera, in pixels, to the corresponding epipolar line on the image plane of the other camera

Calibration Mode Software: Matrices
- Essential matrix E: relates the physical location of a point P as seen by the left camera to the location of the same point as seen by the right camera
- Camera matrix M and distortion vector D: intrinsic parameters, calculated and used within the calibration function
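The two matrices are tied together by E = [t]x R and F = Kr^-T E Kl^-1, where K is the camera matrix M above (distortion ignored here). A sketch that builds both from an assumed R, t, and K, then verifies the epipolar constraint on a projected 3-D point; the identity rotation, 12 cm horizontal baseline, intrinsics, and test point are all illustrative assumptions:

```python
# E = [t]x @ R links normalized camera coordinates; F = inv(Kr).T @ E @ inv(Kl)
# links pixel coordinates. R, t, K, and the 3-D test point are assumptions.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def skew(t):  # cross-product matrix [t]x
    x, y, z = t
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def inv_K(K):  # closed-form inverse of a zero-skew camera matrix
    f, cx, cy = K[0][0], K[0][2], K[1][2]
    return [[1.0 / f, 0.0, -cx / f], [0.0, 1.0 / f, -cy / f], [0.0, 0.0, 1.0]]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # assumed: no rotation
t = (-0.12, 0.0, 0.0)   # assumed: right camera 12 cm along x
K = [[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]]

E = matmul(skew(t), R)
F = matmul(matmul(transpose(inv_K(K)), E), inv_K(K))

def project(K, X):  # pinhole projection to homogeneous pixel coordinates
    return (K[0][0] * X[0] / X[2] + K[0][2],
            K[1][1] * X[1] / X[2] + K[1][2],
            1.0)

P = (0.3, -0.1, 1.5)                                   # assumed 3-D point
xl = project(K, P)
Pr = tuple(sum(R[i][j] * P[j] for j in range(3)) + t[i] for i in range(3))
xr = project(K, Pr)
residual = sum(xr[i] * sum(F[i][j] * xl[j] for j in range(3)) for i in range(3))
print(abs(residual) < 1e-9)  # True: the correspondence satisfies x_r^T F x_l = 0
```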

Calibration Mode Software Example of a bad chessboard image

Calibration Mode Software Output when bad chessboard images are run through the calibration software

Calibration Mode Software Example of a good chessboard image

Calibration Mode Software Output when good chessboard images are run through the calibration software

Remaining Tasks
- Use triangulation to determine the distances to objects
- Calculate the error in the distance measurements
- Minimize the error in both the camera calibration and the distance measurements
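For the error calculation, differentiating Z = f * B / d shows how a disparity error propagates into depth: |dZ| is approximately (Z^2 / (f * B)) * |dd|, so depth error grows with the square of the distance. A quick numeric sketch; f, B, and the one-pixel disparity uncertainty are assumed values:

```python
# Propagating a disparity error into a depth error for Z = f * B / d:
# dZ/dd = -f * B / d^2, so |delta_Z| ~ (Z^2 / (f * B)) * |delta_d|.
# f, B, and the 1-pixel disparity uncertainty are illustrative assumptions.

def depth_error(f_px, baseline_m, depth_m, disparity_err_px=1.0):
    return depth_m ** 2 / (f_px * baseline_m) * disparity_err_px

f_px, baseline_m = 700.0, 0.12
for z in (1.0, 2.0, 5.0, 10.0):
    err = depth_error(f_px, baseline_m, z)
    print(f"Z = {z:4.1f} m -> depth error ~ {err * 100:.1f} cm per pixel of disparity error")
```

This quadratic growth is one reason minimizing calibration error matters for a vehicle that must judge distances several meters ahead.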

Revised Schedule
Schedule for Spring 2012:

Weeks | Alex Norton and Matthew Foster
7-8   | Test and debug calibration mode code
9-10  | Write OpenCV code for run mode
11-12 | Test and debug run mode code
13    | Test and debug complete computer vision code
14    | Test and debug complete computer vision code; prepare for final presentation

Questions?