Vision-based Registration for AR Presented by Diem Vu Nov 20, 2003.

Papers covered: Markerless Tracking using Planar Structures in the Scene, G. Simon, A.W. Fitzgibbon and A. Zisserman; Calibration-Free Augmented Reality, K.N. Kutulakos and J.R. Vallino, 1998.

Planar-surface tracking. The camera pose can be recovered from a planar homography, and planar structure is common in almost all AR scenarios.

[Figure: world plane with axes x, y, z; the world-to-image homography H_w maps the plane into the first frame, and image-to-image homographies relate successive frames.]

World-to-image homography. Take the tracking plane to be the plane Z = 0. [Figure: world plane axes x, y, z and the homography H_w.]

Projection matrix. A point X = (X, Y, 0) on the plane Z = 0 projects as x = K [R | t] X = K [r_1 r_2 t] (X, Y, 1)^T, so the world-to-image homography is H_w = K [r_1 r_2 t].

If K and H_w are known, then r_1, r_2 and t can be recovered, and hence P. Question: how to compute H_w? Two ways: direct and indirect.
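A minimal numpy sketch of this pose recovery, assuming H_w = K [r_1 r_2 t] up to scale and the world plane Z = 0 (function name and sign convention are illustrative, not from the slides):

```python
import numpy as np

def pose_from_homography(K, Hw):
    """Recover R = [r1 r2 r3] and t from a world-to-image homography
    Hw ~ K [r1 r2 t], assuming the world plane is Z = 0 and K is known."""
    A = np.linalg.inv(K) @ Hw
    # Hw is only defined up to scale; fix the scale so r1 is a unit vector.
    lam = 1.0 / np.linalg.norm(A[:, 0])
    if A[2, 2] < 0:            # choose the sign that puts the plane in front (t_z > 0)
        lam = -lam
    r1 = lam * A[:, 0]
    r2 = lam * A[:, 1]
    t = lam * A[:, 2]
    r3 = np.cross(r1, r2)      # complete the rotation
    R = np.column_stack([r1, r2, r3])
    # Re-orthonormalize: with noisy data r1 and r2 are only approximately orthogonal.
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```

With exact data the input scale cancels, so feeding in any nonzero multiple of H_w returns the same (R, t).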

Direct measurement of H_w. Select four points {x_k} on a rectangle in the scene. Compute the homography H that maps the unit square (0,0), (1,0), (1,1), (0,1) to {x_k}. If the rectangle has aspect ratio s (corners (0,0), (1,0), (1,s), (0,s)), correct for it: H_w = H · diag(1, 1/s, 1).
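The direct measurement can be sketched with the standard four-point DLT; the function names are illustrative, and here s denotes the rectangle's height/width ratio as on the slide:

```python
import numpy as np

def homography_from_unit_square(pts):
    """DLT: the homography H mapping the unit square corners
    (0,0), (1,0), (1,1), (0,1) to the four clicked image points."""
    src = [(0., 0.), (1., 0.), (1., 1.), (0., 1.)]
    A = []
    for (X, Y), (x, y) in zip(src, pts):
        # Each correspondence gives two linear equations in the entries of H.
        A.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y, -x])
        A.append([0, 0, 0, X, Y, 1, -y * X, -y * Y, -y])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)          # null vector of the 8x9 system
    return H / H[2, 2]

def world_homography(pts, s):
    """World-to-image homography for a rectangle of aspect ratio s:
    H_w = H diag(1, 1/s, 1), so world (X, Y) maps through (X, Y/s)."""
    H = homography_from_unit_square(pts)
    return H @ np.diag([1.0, 1.0 / s, 1.0])
```

The diag(1, 1/s, 1) factor first squashes the rectangle (0,0)-(1,s) back to the unit square, after which H carries it into the image.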

Indirect measurement of H_w. [Figure: world plane axes x, y, z.]

Algorithm summary. Compute H_w^0 (direct measurement). For each frame i, compute the frame-to-frame homography H_(i-1→i) (RANSAC). Compute H_w^i = H_(i-1→i) · H_w^(i-1).

Other issues. Could the direct method work with only 2 points? Matching frame i against frame 0 to reduce accumulated error. Estimating the intrinsic parameters K. Hand-off mechanism.

Possible problems? The homography is only defined up to scale. Plain surfaces (no texture) or moving objects in the foreground. Depth order and occlusion. Speed.

Affine virtual object representation. Represent virtual objects so that their projection can be computed as a linear combination of the projections of the fiducial points.

Project a point from its affine coordinates. Under an affine camera, u(q) = u(p_0) + sum_j b_j (u(p_j) - u(p_0)), where p_0..p_3 are the affine basis points and b = (b_1, b_2, b_3) are the point's affine coordinates.
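A minimal sketch of this projection step, assuming an affine camera so that projection commutes with affine combinations (function and variable names are illustrative):

```python
import numpy as np

def project_from_affine_coords(b, basis_proj):
    """Project a point given its affine coordinates b = (b1, b2, b3) and the
    2D projections basis_proj = [u(p0), u(p1), u(p2), u(p3)] of the four
    affine basis points. No camera parameters are needed: under an affine
    camera the projection is the same linear combination as in 3D."""
    u0 = basis_proj[0]
    return u0 + sum(bj * (uj - u0) for bj, uj in zip(b, basis_proj[1:]))
```

This is the key property the paper exploits: once the basis points are tracked in a frame, every virtual point renders from its precomputed b.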

Compute affine coordinates from the point's projections along two viewing directions: each view contributes two linear equations in b, so two views overdetermine the three unknowns.

Algorithm. Set up the affine basis. Locate the object in 2 frames. Compute the affine coordinates for each point. Compute the projection of the object and render it in each frame.
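The "compute affine coordinates" step above can be sketched as a small least-squares solve, assuming affine cameras and two views of the point (names are illustrative; basis_views holds the tracked basis-point projections per view):

```python
import numpy as np

def affine_coords_from_two_views(u_views, basis_views):
    """Solve for the affine coordinates b of a point from its projections.

    u_views:     two 2-vectors, the point's projection in each view.
    basis_views: two 4x2 arrays, projections of the basis points [p0..p3]
                 in each view. Each view gives 2 linear equations in b;
                 the stacked 4x3 system is solved by least squares."""
    A_rows, rhs = [], []
    for u, basis in zip(u_views, basis_views):
        u0 = basis[0]
        # u - u0 = [u1-u0 | u2-u0 | u3-u0] b  in this view
        A_rows.append(np.column_stack([basis[j] - u0 for j in (1, 2, 3)]))
        rhs.append(u - u0)
    b, *_ = np.linalg.lstsq(np.vstack(A_rows), np.concatenate(rhs), rcond=None)
    return b
```

Once b is known, the point can be rendered in any later frame from the tracked basis projections alone, as in the projection step above.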

Camera viewing direction. Let ξ and η be the first and second rows of the 2×3 affine projection matrix Π. The camera viewing direction, expressed in the coordinate frame of the affine basis points, is ζ = ξ × η.

Depth order. Assign each point p = (x, y, z) the z-value w = ζ · p; ordering points by w gives their depth order along the viewing direction.
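A sketch of the depth-ordering step under the same affine-camera assumption (the sort direction is a convention; flip the sign of ζ for the opposite ordering):

```python
import numpy as np

def depth_order(Pi, points):
    """Order 3D points along the camera viewing direction.

    Pi is the 2x3 affine projection matrix with rows xi and eta; the viewing
    direction in the affine basis frame is zeta = xi x eta, and each point
    p receives the scalar z-value w = zeta . p."""
    xi, eta = Pi[0], Pi[1]
    zeta = np.cross(xi, eta)
    w = points @ zeta              # one z-value per point
    return np.argsort(w), w        # indices sorted by increasing w
```

These w values can be written straight into a z-buffer, which is one reason the representation maps well onto existing graphics hardware.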

Advantages. No metric information is needed. Works with existing graphics hardware to accelerate rendering. Can be used to improve tracking.

Limitations. Only affine constraints are enforced (no perspective effects). Metric information is lost.