Digital Imaging and Remote Sensing Laboratory
Automatic Tie-Point and Wire-frame Generation From Oblique Aerial Imagery
Seth Weith-Glushko
Advisor: Carl Salvaggio
Research Proposal, November 7, 2003

Digital Imaging and Remote Sensing Laboratory
Table of Contents
- The Problem
- Specific Aims
- Proposed Solution
  - A description of the individual transforms and algorithms used to make up the proposed algorithms
- Methods
- Timetable
- Budget
- References

Digital Imaging and Remote Sensing Laboratory
The Problem
- Using photogrammetric techniques, tie-point matching algorithms and bundle adjustment algorithms, it is possible to create a 3D model from a series of images of an object
- These images must be taken around an axis (for objects) or ortho-rectified (for landscapes) for the algorithms to work
- We want to be able to use oblique imagery as input to these algorithms
- Oblique imagery provides more pixels to describe the features of a 3D object than an ortho-rectified image would

Digital Imaging and Remote Sensing Laboratory
Specific Aims
- Current algorithms only work on orthographic imagery (for tie-points) and axial imagery (for wire-frame generation)
- The goal is to draw from both photogrammetry and computer graphics to develop two algorithms that work in unison:
  - A tie-point algorithm that works on oblique imagery
  - A bundle adjustment algorithm that can use oblique imagery
- In the future, these algorithms would become part of a suite of algorithms that could generate an accurate 3D model of a scene independent of the type of imagery used

Digital Imaging and Remote Sensing Laboratory
Proposed Solution
- The algorithm relies heavily on the use of INS (Inertial Navigation System) data
- The input is a series of oblique images taken around a common area
- The output is a matrix of matching points and a file in a common 3D format
Processing chain (first stages):
- Input: a series of oblique images around a common area
- Ortho-rectification transform: converts each oblique image into an ortho-rectified image
- Image processing: histogram processing

Digital Imaging and Remote Sensing Laboratory
Proposed Solution (continued)
- Definition of points: use the Laplacian of Gaussian (LoG) filter to find points of high spatial frequency (i.e., edges)
- Point matching algorithm: use point distance comparison, point scale comparison and point angle comparison to find matching points
- Geometric transform: using the matched points, generate a geometric transformation and use the quality of the resulting registration as an indicator of the "goodness" of the points

Digital Imaging and Remote Sensing Laboratory
Proposed Solution (continued)
- Inverse geometric transform: perform an inverse geometric transformation using the previous transformation matrix
- Inverse ortho-rectification transform: convert the ortho-rectified image back into its original oblique form
- Bundle adjustment algorithm: using pairs of matched points, define points in 3D space
- Output: a matrix containing matched point pairs between images

Digital Imaging and Remote Sensing Laboratory
Proposed Solution (continued)
- Interface with 3D system: using software libraries and the defined 3D points, generate an output file
- Output: a 3D wire-frame mesh of the scene

Digital Imaging and Remote Sensing Laboratory
Ortho-rectification Transform
- The ortho-rectification transform converts an oblique image into an ortho-rectified (flat) image by means of a linear equation
- The equation has four unknowns; the fiducial points of an image can be used as the four points needed to solve for the constants
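As a rough illustration only (the proposal's exact equation is not reproduced in this transcript), the sketch below assumes the linear equation is a bilinear polynomial with four constants per output coordinate, solved from the four fiducial-point correspondences:

```python
import numpy as np

def solve_bilinear(src, dst):
    """Solve x' = a0 + a1*x + a2*y + a3*x*y (and likewise for y') from
    four point correspondences, e.g. the image fiducial points.
    src, dst: (4, 2) arrays of (x, y) coordinates."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.column_stack([np.ones(4), src[:, 0], src[:, 1], src[:, 0] * src[:, 1]])
    ax = np.linalg.solve(A, dst[:, 0])  # four constants mapping to x'
    ay = np.linalg.solve(A, dst[:, 1])  # four constants mapping to y'
    return ax, ay

def apply_bilinear(ax, ay, x, y):
    """Map an oblique-image coordinate to its ortho-rectified coordinate."""
    basis = np.array([1.0, x, y, x * y])
    return basis @ ax, basis @ ay
```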

Digital Imaging and Remote Sensing Laboratory
Image Processing
- Image processing needs to be performed because images with dissimilar digital counts affect the point generation operator
- Histogram matching is performed to minimize this dissimilarity
Images courtesy C. Salvaggio
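A minimal sketch of one standard way to implement histogram matching in NumPy (illustrative only; the proposal does not specify an implementation):

```python
import numpy as np

def histogram_match(source, reference):
    """Remap the digital counts of `source` so that its histogram
    approximates that of `reference`. Both inputs are 2D pixel arrays."""
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    # Cumulative distribution functions of both images
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # Map each source value to the reference value with the nearest CDF
    matched_vals = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched_vals[src_idx].reshape(source.shape)
```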

Digital Imaging and Remote Sensing Laboratory
Definition of Points
- To define points, the Laplacian of Gaussian (LoG) operator is used
- Walli found that if there is an edge in an image, a thresholded LoG-filtered image shows a point at that edge
Images courtesy K. Walli
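A minimal sketch of LoG-based point definition using SciPy; the sigma and threshold values are illustrative assumptions, not values taken from Walli or the proposal:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_points(image, sigma=2.0, threshold=0.1):
    """Filter the image with the Laplacian of Gaussian and keep the
    strongest responses as candidate tie points. Returns an (N, 2)
    array of (row, col) coordinates."""
    response = gaussian_laplace(image.astype(float), sigma=sigma)
    magnitude = np.abs(response)
    mask = magnitude > threshold * magnitude.max()
    rows, cols = np.nonzero(mask)
    return np.column_stack([rows, cols])
```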

Digital Imaging and Remote Sensing Laboratory
Point Matching Algorithms
- To match the defined points, three algorithms are primarily used: pixel distance match, scale match and angle match
- Another matching criterion is LoG maxima similarity (25%)

Digital Imaging and Remote Sensing Laboratory
Pixel Distance Match
- This algorithm works by comparing distances between pixels in a matrix
[Figure: example distance matrices and the count of matching elements]
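A minimal sketch of one plausible reading of the pixel distance match: score a candidate pair of points by how many inter-point distances they share within a tolerance. The function name, tolerance and scoring rule are assumptions, not the proposal's exact formulation:

```python
import numpy as np

def distance_match_score(points_a, points_b, i, j, tol=1.0):
    """Count how many inter-point distances around point i in image A
    also occur around point j in image B (within `tol` pixels)."""
    d_a = np.linalg.norm(points_a - points_a[i], axis=1)
    d_b = np.linalg.norm(points_b - points_b[j], axis=1)
    shared = 0
    for d in d_a:
        if d > 0 and np.any(np.abs(d_b - d) < tol):
            shared += 1
    return shared
```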

Digital Imaging and Remote Sensing Laboratory
Scale Match
- This algorithm works by comparing ratios of distances between pixels in a matrix
- Ratios of distances measured from like points are equal
[Figure: example distance matrices and their compared ratios]
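A minimal sketch of one plausible reading of the scale match, which tests whether the ratios of corresponding distances are nearly constant; the sorting step and tolerance are assumptions:

```python
import numpy as np

def scale_match(points_a, points_b, i, j, tol=0.05):
    """Compare ratios of inter-point distances around point i in image A
    and point j in image B. For a true match the ratios should be nearly
    equal, so their spread relative to their mean is small."""
    d_a = np.linalg.norm(points_a - points_a[i], axis=1)
    d_b = np.linalg.norm(points_b - points_b[j], axis=1)
    d_a, d_b = np.sort(d_a[d_a > 0]), np.sort(d_b[d_b > 0])
    n = min(len(d_a), len(d_b))
    ratios = d_a[:n] / d_b[:n]
    return np.std(ratios) < tol * np.mean(ratios)
```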

Digital Imaging and Remote Sensing Laboratory
Angle Match
- This algorithm works by comparing the angles formed by the triangle of three pixels in a matrix
[Table: angles at vertices 1-3 for point set 1 and point set 2]
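A minimal sketch of the angle comparison: compute the interior angles of the triangle formed by three candidate points in each image and require them to agree; the tolerance is an assumption:

```python
import numpy as np

def triangle_angles(p1, p2, p3):
    """Interior angles (radians) of the triangle formed by three points."""
    a = np.linalg.norm(np.subtract(p2, p3))  # side opposite p1
    b = np.linalg.norm(np.subtract(p1, p3))  # side opposite p2
    c = np.linalg.norm(np.subtract(p1, p2))  # side opposite p3
    alpha = np.arccos((b**2 + c**2 - a**2) / (2 * b * c))
    beta = np.arccos((a**2 + c**2 - b**2) / (2 * a * c))
    return alpha, beta, np.pi - alpha - beta

def angle_match(tri_a, tri_b, tol=np.radians(2.0)):
    """True if the corresponding interior angles of two point triples
    agree within `tol` radians."""
    return np.allclose(triangle_angles(*tri_a), triangle_angles(*tri_b), atol=tol)
```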

Digital Imaging and Remote Sensing Laboratory
Geometric Transformation
- Using the matched pixels, solutions to the affine polynomial problem are found
- Using the affine polynomial, one ortho-rectified image is geometrically transformed so that it registers with another ortho-rectified image
- Quality metrics are then computed to determine whether the registration, and hence the set of matched points, is good
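A minimal sketch of fitting the affine polynomial by least squares from the matched points, with a simple RMS residual as a registration quality indicator (the proposal's actual metrics are listed under Methods):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of x' = a0 + a1*x + a2*y and y' = b0 + b1*x + b2*y
    from matched point pairs. src, dst: (N, 2) arrays with N >= 3."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
    coeffs_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    coeffs_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    # RMS residual: a simple indicator of how well the points register
    residuals = np.hstack([A @ coeffs_x - dst[:, 0], A @ coeffs_y - dst[:, 1]])
    rmse = np.sqrt(np.mean(residuals**2))
    return coeffs_x, coeffs_y, rmse
```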

Digital Imaging and Remote Sensing Laboratory
Inverse Transformations
- The matched points are put through an inverse geometric transformation and an inverse ortho-rectification transform to return them to their original oblique pixel form
- A matrix of matched points is output
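Continuing the affine sketch above, a minimal way to invert the fitted transform so that matched points can be mapped back toward their original pixel coordinates; the inverse ortho-rectification transform would be applied analogously:

```python
import numpy as np

def invert_affine(coeffs_x, coeffs_y):
    """Build the inverse of the affine mapping p' = M @ p + t fitted above.
    Returns a function that maps (N, 2) transformed points back to the
    original coordinate frame. Assumes M is invertible."""
    M = np.array([[coeffs_x[1], coeffs_x[2]],
                  [coeffs_y[1], coeffs_y[2]]])
    t = np.array([coeffs_x[0], coeffs_y[0]])
    M_inv = np.linalg.inv(M)

    def inverse(points):
        points = np.asarray(points, dtype=float)
        return (points - t) @ M_inv.T

    return inverse
```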

Digital Imaging and Remote Sensing Laboratory
Bundle Adjustment Algorithm
- Bundle adjustment algorithms allow the mapping of 2D points into 3D space using more than two images taken around a common point
- The algorithm estimates the underlying plane geometry of a scene
Images courtesy M. Pollefeys
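Bundle adjustment itself is an iterative, non-linear refinement; the sketch below shows only the linear (DLT) triangulation step that maps matched 2D points into 3D, and it assumes the 3x4 camera projection matrices are already known (for example, derived from the INS data):

```python
import numpy as np

def triangulate_point(proj_mats, image_points):
    """Linear (DLT) triangulation of one 3D point from its projections in
    two or more images. proj_mats: list of 3x4 projection matrices;
    image_points: matching list of (x, y) pixel coordinates."""
    rows = []
    for P, (x, y) in zip(proj_mats, image_points):
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]              # homogeneous solution: smallest singular vector
    return X[:3] / X[3]     # inhomogeneous 3D coordinates
```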

Digital Imaging and Remote Sensing Laboratory
3D Library Interfacing
- Using the 3D points generated by the bundle adjustment algorithm, a triangle mesh is created that forms the structure of the wire-frame scene
- A texture map is also generated from bundle adjustment and overlaid on the mesh
- The full model is saved to a generic 3D format
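As one example of saving the mesh to a generic 3D format, a minimal Wavefront OBJ writer; OBJ is chosen purely for illustration, since the proposal does not name a specific format or library:

```python
def write_obj(path, vertices, faces):
    """Write a triangle mesh to a Wavefront OBJ file.
    vertices: iterable of (x, y, z) tuples; faces: iterable of (i, j, k)
    0-based vertex indices."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for i, j, k in faces:
            f.write(f"f {i + 1} {j + 1} {k + 1}\n")  # OBJ indices are 1-based
```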

Digital Imaging and Remote Sensing Laboratory
Methods
- Using a programming environment, engineering code will be developed to determine the feasibility of this algorithm
- If it is feasible, quality metrics will be applied to determine its effectiveness
- Visual analysis, absolute mean variance, and deviation from a polynomial model (RMSDE) can be used to check tie-point generation
- Visual analysis and post-photogrammetric analysis can be used to check wire-frame generation

Digital Imaging and Remote Sensing Laboratory
Timetable
- September 1, 2003 – November 15, 2003: Search for previous research, background knowledge
- November 15, 2003 – April 1, 2004: Development of algorithm and engineering code
- April 1, 2004 – May 15, 2004: Complete paper, poster and presentation

Digital Imaging and Remote Sensing Laboratory
Budget
- No money will be required for this project, as the investigator has all of the resources he currently requires
- 2 credits will be required for both Winter and Spring Quarters
- Due to the nature of the contract, most of the work performed will be done on a pay basis
- Flexibility in the experimenter's schedule was required

Digital Imaging and Remote Sensing Laboratory
References
- Honkavaara, Eija and Anton Hogholen. "Automatic Tie Point Extraction in Aerial Triangulation." International Society for Photogrammetry and Remote Sensing, 16th Congress, Vienna, July.
- Moffitt, Francis H. and Edward M. Mikhail. Photogrammetry. 3rd Ed. New York: Harper and Row.
- Pollefeys, M. "3D Geometry from Images." 15 Oct.
- Walli, Karl C. "Multisensor Image Registration Utilizing the LOG Filter and FWT." Diss. Rochester Institute of Technology.
- Wolf, Paul R. Elements of Photogrammetry. 2nd Ed. New York: McGraw-Hill, 1983.

Digital Imaging and Remote Sensing Laboratory
Questions
Are there any?