Mapping: Scaling, Rotation, Translation, Warp

Matching in 2D. Mapping: Scaling, Rotation, Translation, Warp

Introduction How to make and use correspondences between images and maps, images and models, and images and other images. All work is done in two dimensions.

Registration of 2D Data The equation and figure show an invertible mapping between points of a model M and points of an image I. In fact, M and I can be any 2D coordinate spaces, each representing a map, a model, or an image. The mapping is invertible: the inverse functions g-1 and h-1 exist. Image registration is the process by which points of two images of essentially the same scene, taken from similar viewpoints, are geometrically transformed so that corresponding feature points of the two images have the same coordinates after transformation.

Representation of points A 2D point has two coordinates and is conveniently represented as either a row vector P=[x,y] or a column vector P=[x,y]t. Homogeneous coordinates: it is often notationally convenient for computer processing to use homogeneous coordinates for points, especially when affine transformations are used. The homogeneous coordinates of a 2D point P=[x,y]t are [sx,sy,s]t, where s is a nonzero scale factor, commonly 1.0.
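As a quick illustration (a minimal NumPy sketch; the helper names to_homogeneous and to_euclidean are ours, not from the slides), converting a point to homogeneous coordinates and back:

```python
import numpy as np

def to_homogeneous(p, s=1.0):
    """[x, y] -> [s*x, s*y, s]; s is the (nonzero) scale factor."""
    x, y = p
    return np.array([s * x, s * y, s])

def to_euclidean(ph):
    """[sx, sy, s] -> [x, y], dividing out the scale factor."""
    return ph[:2] / ph[2]

p = [3.0, 4.0]
ph = to_homogeneous(p, s=2.0)   # [6. 8. 2.] -- the same point, defined up to scale
print(to_euclidean(ph))         # [3. 4.]
```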

AFFINE MAPPING FUNCTIONS A large class of useful spatial transformations can be represented by multiplication of a matrix and a homogeneous point: scaling, rotation, and translation.

Scaling A common operation is scaling. Uniform scaling changes all coordinates in the same way, or equivalently changes the size of all objects in the same way. The figure shows a 2D point P=[1,2] scaled by a factor of 2 to obtain the new point P'=[2,4].

Scaling Scaling is a linear transformation, meaning that it can be represented in terms of the scale factor applied to the two basis vectors of 2D Euclidean space. For example, [1,2] = 1[1,0] + 2[0,1], and 2[1,2] = 2(1[1,0] + 2[0,1]) = [2,4].
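A short sketch of the same idea in code (our own example, assuming uniform scaling by a single factor s):

```python
import numpy as np

s = 2.0
S = np.array([[s, 0.0],
              [0.0, s]])   # columns are the scaled basis vectors s*[1,0] and s*[0,1]

P = np.array([1.0, 2.0])
print(S @ P)               # [2. 4.]  -- matches 2*[1,2] = [2,4]
```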

Rotation A second common operation is rotation about a point in 2D space. The figure shows a 2D point P=[x,y] rotated by angle θ counterclockwise about the origin to obtain the new point P'=[x',y'].

Rotation As for any linear transformation, we take the columns of the matrix to be the results of the transformation applied to the basis vectors; the transformation of any other vector can then be expressed as a linear combination of the basis vectors.
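A hedged sketch (our construction, assuming the standard counterclockwise convention) that builds the rotation matrix from the images of the basis vectors and checks that its columns stay orthonormal, as discussed on the next slide:

```python
import numpy as np

def rotation_matrix(theta):
    """Columns are the rotated basis vectors R[1,0] and R[0,1]."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

R = rotation_matrix(np.pi / 2)             # 90-degree counterclockwise rotation
print(np.round(R @ np.array([1.0, 0.0])))  # [0. 1.] -- the x-axis maps to the y-axis
print(np.allclose(R.T @ R, np.eye(2)))     # True: the columns are orthonormal
```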

Orthogonal and Orthonormal Transformations Orthogonal: a set of vectors is orthogonal if all pairs of vectors in the set are perpendicular (have scalar product zero). Orthonormal: a set of vectors is orthonormal if it is an orthogonal set and all of its vectors have unit length. A rotation preserves both the length of the basis vectors and their orthogonality.

Translation Often, point coordinates need to be shifted by some constant amount, which is equivalent to changing the origin of the coordinate system. Since translation does not map the origin [0,0] to itself, it cannot be modeled with a simple 2x2 matrix as was done for scaling and rotation: in other words, it is not a linear operation.

Translation The matrix multiplication shown in the equation, using homogeneous coordinates and a 3x3 matrix, can be used to model the translation D of point [x,y] so that [x',y'] = D([x,y]) = [x+x0, y+y0].
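A minimal sketch (the slide's equation is not reproduced in this transcript, so we assume the usual homogeneous form) of translation as a 3x3 matrix acting on a homogeneous point:

```python
import numpy as np

def translation_matrix(x0, y0):
    """3x3 homogeneous matrix mapping [x, y, 1] to [x + x0, y + y0, 1]."""
    return np.array([[1.0, 0.0, x0],
                     [0.0, 1.0, y0],
                     [0.0, 0.0, 1.0]])

D = translation_matrix(3.0, -2.0)
print(D @ np.array([10.0, 5.0, 1.0]))   # [13.  3.  1.]
```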

Exercise: Rotation about a point Give the 3x3 matrix that represents a π/2 rotation of the plane about the point [5,8]. Hint: first derive the matrix D-5,-8 that translates the point [5,8] to the origin of a new coordinate frame. The matrix we want is then the composition D5,8 Rπ/2 D-5,-8. Check that your matrix correctly transforms the points [5,8], [6,8] and [5,9].
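One way to check the exercise (a sketch; the helper names rot3 and trans3 are ours, and a counterclockwise π/2 rotation is assumed):

```python
import numpy as np

def rot3(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def trans3(x0, y0):
    return np.array([[1.0, 0.0, x0], [0.0, 1.0, y0], [0.0, 0.0, 1.0]])

# Rotate about [5,8]: translate [5,8] to the origin, rotate, translate back.
M = trans3(5, 8) @ rot3(np.pi / 2) @ trans3(-5, -8)

for p in ([5, 8], [6, 8], [5, 9]):
    print(p, '->', np.round(M @ np.array([p[0], p[1], 1.0]), 3))
# [5,8] stays fixed; [6,8] -> [5,9]; [5,9] -> [4,8]
```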

Rotation, Scaling and Translation The figure shows a common situation: an image I[r,c] is taken by a square-pixel camera looking perpendicularly down on a planar workspace W[x,y]. The conversion from image coordinates to workspace coordinates can be done by composing a rotation R, a scaling S, and a translation D, denoted wPj = Dx0,y0 Ss Rθ iPj.

Rotation, Scaling and Translation

Rotation, Scaling and Translation For example, suppose we have the control points iP1=[100,60] with wP1=[200,100], and iP2=[380,120] with wP2=[300,200]. Control points are clearly distinguishable and easily measured points used to establish known correspondences between different coordinate spaces. The angle θ is determined independently of the other parameters as follows: θi = arctan((iy2-iy1)/(ix2-ix1)) = arctan(60/280) = 12.09°; θw = arctan((wy2-wy1)/(wx2-wx1)) = arctan(100/100) = 45°; θ = θw - θi = 32.91°. Once θ is determined, all of the sin and cos elements are known, leaving 3 equations and 3 unknowns that can be solved for s and x0, y0.
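A sketch of the computation (our own reconstruction; recovering the scale from the ratio of control-point distances is one reasonable way to resolve the remaining unknowns):

```python
import numpy as np

iP1, wP1 = np.array([100.0, 60.0]), np.array([200.0, 100.0])
iP2, wP2 = np.array([380.0, 120.0]), np.array([300.0, 200.0])

theta_i = np.arctan2(iP2[1] - iP1[1], iP2[0] - iP1[0])   # ~12.09 degrees
theta_w = np.arctan2(wP2[1] - wP1[1], wP2[0] - wP1[0])   # 45 degrees
theta = theta_w - theta_i                                # ~32.91 degrees

# Scale from the ratio of distances between the control points.
s = np.linalg.norm(wP2 - wP1) / np.linalg.norm(iP2 - iP1)

c, sn = np.cos(theta), np.sin(theta)
R = np.array([[c, -sn], [sn, c]])

# Translation that maps iP1 onto wP1 after rotation and scaling.
t = wP1 - s * (R @ iP1)

print(np.degrees(theta), s, t)
print(s * (R @ iP2) + t)   # should reproduce wP2 = [300, 200]
```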

Object Recognition and Location Example (left) Model object and (right) three holes detected in an image

Object Recognition and Location Example We will use the hypothesized correspondences (A, H2) and (B, H3) to deduce the transformation.

Object Recognition and Location Example The direction of the vector from A to B in the model is θ1 = arctan(9.0/8.0) = 0.844 radians. The corresponding vector from H2 to H3 in the image has direction θ2 = arctan(12.0/0.0) = π/2 = 1.571 radians (the vector is vertical). The rotation is thus θ = θ2 - θ1 = 0.727 radians. Using the known matching coordinates for point A in the model and point H2 in the image, we obtain the following system, where u0, v0 are the unknown translation components in the image plane. The two resulting equations readily produce u0 = 15.3 and v0 = -5.95.

Object Recognition and Location Example Having the complete spatial transformation, we can now compute the location in image space of any model point, including the grip points R=[29,19] and Q=[32,12]. As shown next, model point R transforms to image point iR=[24.4, 27.4], and using Q=[32,12] as input to the transformation yields the image location iQ=[31.2, 24.2] for the other grip point.
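A small check (a sketch assuming the transformation is a pure rotation plus translation, i.e. unit scale, as the slides' numbers suggest); small differences from the quoted values come from the rounding of θ, u0, and v0:

```python
import numpy as np

theta, u0, v0 = 0.727, 15.3, -5.95   # rotation and translation found above
c, s = np.cos(theta), np.sin(theta)
T = np.array([[c, -s, u0],
              [s,  c, v0],
              [0., 0., 1.]])

for name, p in (('R', [29.0, 19.0]), ('Q', [32.0, 12.0])):
    x, y, _ = T @ np.array([p[0], p[1], 1.0])
    print(name, '->', round(x, 1), round(y, 1))   # roughly [24.4, 27.4] and [31.2, 24.2]
```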

2D OBJECT RECOGNITION VIA AFFINE MAPPING Methods of recognizing 2D objects through mappings of model points onto image points: the local-feature-focus method, the pose-clustering algorithm, and geometric hashing. These methods are meant to determine whether a given image contains an instance of an object model and, if so, to determine the pose (position and orientation) of the object with respect to the camera.

2D OBJECT RECOGNITION VIA AFFINE MAPPING A 2D model and three matching images of an airplane part

Pose Clustering We have seen that an alignment between model and image features using an RST transformation can be obtained from two matching control points. Obtaining the matching control points automatically may not be easy, owing to ambiguous matching possibilities. The pose-clustering approach computes an RST alignment for every possible pair of control-point correspondences and then checks for a cluster of similar parameter sets.
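A hedged sketch of the idea (our own simplification: every candidate pose is voted into a coarse bin and the largest bin wins; real implementations differ in how the clustering and verification are done):

```python
import numpy as np
from collections import Counter
from itertools import combinations, permutations

def rst_from_pairs(m1, m2, i1, i2):
    """RST parameters (theta, s, tx, ty) aligning model pair (m1, m2) with image pair (i1, i2)."""
    dm, di = m2 - m1, i2 - i1
    theta = np.arctan2(di[1], di[0]) - np.arctan2(dm[1], dm[0])
    theta = (theta + np.pi) % (2 * np.pi) - np.pi          # normalize to [-pi, pi)
    s = np.linalg.norm(di) / np.linalg.norm(dm)
    c, sn = np.cos(theta), np.sin(theta)
    t = i1 - s * (np.array([[c, -sn], [sn, c]]) @ m1)
    return theta, s, t[0], t[1]

def pose_cluster(model_pts, image_pts, bins=(0.1, 0.1, 5.0, 5.0)):
    """Compute an RST pose for every candidate pairing, vote in coarse bins, keep the biggest bin."""
    votes = Counter()
    model_pts = [np.asarray(p, float) for p in model_pts]
    image_pts = [np.asarray(p, float) for p in image_pts]
    for ma, mb in combinations(model_pts, 2):
        for ia, ib in permutations(image_pts, 2):
            pose = rst_from_pairs(ma, mb, ia, ib)
            votes[tuple(int(round(p / b)) for p, b in zip(pose, bins))] += 1
    return votes.most_common(1)[0]   # (binned pose, vote count)
```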

Common line-segment junctions used in matching

Example Pose Detection Problem Assume that pairs of junctions of combined type LX or TY are used in this example.

Example Pose Detection Problem