
Image Preprocessing: Geometric Correction
John R. Jensen, Department of Geography, University of South Carolina, Columbia, South Carolina 29208
Jensen, 2003

Geometric Correction
There are two basic types of geometric correction:
* Image-to-image registration - useful for registering two or more images together when it is not necessary to have the interpreted output in a formal map projection. Image-to-image registration may also be used to perform simple non-quantitative change detection.
* Image-to-map rectification - useful when preparing images and interpreted output for presentation in a rigorous map projection using a known geoid and datum. Especially valuable when performing digital change detection.
Jensen, 2003

Image-to-Map Geometric Rectification
Image-to-map rectification requires two basic operations:
* Spatial Interpolation Using Coordinate Transformation
* Intensity Interpolation
We will focus our attention on image-to-map rectification because: it is the most widely adopted geometric correction methodology, and the image-to-image registration process is very similar.
Jensen, 2003

Spatial Interpolation Using Coordinate Transformations
For moderate distortions in a relatively small area of an image, a 1st-order, six-parameter, affine transformation is sufficient to rectify the imagery to a geographic frame of reference (see the equations below),
where: x and y are positions in the output-rectified image or map, and x' and y' represent corresponding positions in the original input image.
Jensen, 2003
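The transformation equations appear only as a graphic in the original slide; the standard six-parameter form implied by the definitions above is (the coefficient names a_0..a_2 and b_0..b_2 are generic, not taken from the slide):

```latex
% First-order (six-parameter) affine coordinate transformation
\[
\begin{aligned}
x' &= a_0 + a_1 x + a_2 y \\
y' &= b_0 + b_1 x + b_2 y
\end{aligned}
\]
```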

Spatial Interpolation Using Coordinate Transformations
This first-order transformation can model six kinds of distortion in the remote sensor data, including: translation in x and y, scale changes in x and y, skew, and rotation.
Jensen, 2003
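One illustrative way, not shown on the slide, to see how six coefficients absorb exactly these six distortions is to factor the linear part of the affine into rotation, skew, and scale, with the constant terms carrying the translation:

```latex
% Illustrative factoring of the affine's linear part (requires amsmath)
\[
\begin{pmatrix} a_1 & a_2 \\ b_1 & b_2 \end{pmatrix}
  = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
    \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}
    \begin{pmatrix} s_x & 0 \\ 0 & s_y \end{pmatrix},
  \qquad
  \begin{pmatrix} a_0 \\ b_0 \end{pmatrix} = \text{translation}
\]
```

The six free parameters are then theta (rotation), k (skew), s_x and s_y (scale in x and y), and a_0, b_0 (translation in x and y).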

How Different Affine Transformations Fit a Hypothetical Surface
Figure (Jensen, 2003): the original surface compared with 1st-order, 2nd-order, and 3rd-order fits.

Spatial Interpolation Logic
The goal is to fill a matrix that is in a standard map projection with the appropriate values from a non-planimetric image.
Jensen, 2003
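Below is a minimal Python sketch, not from the slide, of this fill logic, assuming the six affine coefficients map output map coordinates (x, y) to input image coordinates (x', y'); the function and variable names are illustrative, and nearest-neighbor sampling is used for simplicity:

```python
import numpy as np

def rectify(input_image, coeffs, out_shape):
    """Fill an output grid in map space by inverse-mapping each cell
    into the original (non-planimetric) image and sampling it there."""
    a0, a1, a2, b0, b1, b2 = coeffs                # six affine parameters
    rows, cols = out_shape
    output = np.zeros(out_shape, dtype=input_image.dtype)
    for y in range(rows):                          # output map row
        for x in range(cols):                      # output map column
            xp = a0 + a1 * x + a2 * y              # x' in the input image
            yp = b0 + b1 * x + b2 * y              # y' in the input image
            i, j = int(round(yp)), int(round(xp))  # nearest input pixel
            if 0 <= i < input_image.shape[0] and 0 <= j < input_image.shape[1]:
                output[y, x] = input_image[i, j]
    return output
```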

Spatial Interpolation Using Coordinate Transformation
All of the original GCPs selected are usually not used to compute the final six-parameter coefficients and constants used to rectify the input image. There is an iterative process that takes place. First, all of the original GCPs (e.g., 20 GCPs) are used to compute an initial set of six coefficients and constants. The root mean squared error (RMSE) associated with each of these initial 20 GCPs is computed and summed. Then, the individual GCPs that contributed the greatest amount of error are determined and deleted. After the first iteration, this might only leave 16 of 20 GCPs. A new set of coefficients is then computed using the 16 GCPs. The process continues until the RMSE reaches a user-specified threshold (e.g., <1 pixel error in the x-direction and <1 pixel error in the y-direction). The goal is to remove the GCPs that introduce the most error into the multiple-regression coefficient computation. When the acceptable threshold is reached, the final coefficients and constants are used to rectify the input image to an output image in a standard map projection as previously discussed.
Jensen, 2003
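Below is a hedged Python sketch, not Jensen's procedure verbatim, of the iterative GCP culling described above, using an ordinary least-squares fit of the six-parameter affine; the names and the stopping rule (mean per-GCP RMS error under a threshold) are illustrative assumptions:

```python
import numpy as np

def fit_affine(map_xy, img_xy):
    """Least-squares fit of x' = a0 + a1*x + a2*y and y' = b0 + b1*x + b2*y."""
    A = np.column_stack([np.ones(len(map_xy)), map_xy[:, 0], map_xy[:, 1]])
    ax, _, _, _ = np.linalg.lstsq(A, img_xy[:, 0], rcond=None)
    ay, _, _, _ = np.linalg.lstsq(A, img_xy[:, 1], rcond=None)
    return ax, ay

def cull_gcps(map_xy, img_xy, threshold=1.0):
    """Iteratively delete the GCP contributing the largest RMS error
    until the overall RMS error falls below the user-specified threshold."""
    keep = np.arange(len(map_xy))
    while True:
        ax, ay = fit_affine(map_xy[keep], img_xy[keep])
        A = np.column_stack([np.ones(len(keep)), map_xy[keep, 0], map_xy[keep, 1]])
        dx = A @ ax - img_xy[keep, 0]              # residual in x'
        dy = A @ ay - img_xy[keep, 1]              # residual in y'
        rms = np.sqrt(dx**2 + dy**2)               # per-GCP RMS error
        if rms.mean() < threshold or len(keep) <= 4:
            return keep, (ax, ay)
        keep = np.delete(keep, np.argmax(rms))     # drop the worst offender
```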

Spatial Interpolation Using Coordinate Transformation
A way to measure the accuracy of a geometric rectification algorithm (actually, its coefficients) is to compute the Root Mean Squared Error (RMS error) for each ground control point using the equation shown below,
where: x_orig and y_orig are the original row and column coordinates of the GCP in the image, and x' and y' are the computed or estimated coordinates in the original image when we utilize the six coefficients. Basically, the closer these paired values are to one another, the more accurate the algorithm (and its coefficients). The square root of the squared deviations represents a measure of the accuracy of each GCP. By computing RMS error for all GCPs, it is possible to (1) see which GCPs contribute the greatest error, and (2) sum all the RMS error.
Jensen, 2003
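The equation itself appears only as a graphic in the original slide; the per-GCP RMS error it describes has the standard form:

```latex
% Root mean squared error for a single ground control point
\[
\mathrm{RMS}_{\mathrm{error}} = \sqrt{(x' - x_{\mathrm{orig}})^2 + (y' - y_{\mathrm{orig}})^2}
\]
```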

Characteristics of Ground Control Points (table, Jensen, 2003)
Columns: Point Number, Order of Points Deleted, Easting on Map (X1), Northing on Map (Y1), X' Pixel, Y' Pixel, Total RMS Error After This Point Is Deleted.
Total RMS error is computed with all 20 GCPs used; if we delete GCP #20, the RMSE will be 8.452.

Image-to-Map Geometric Rectification
Intensity Interpolation: Unfortunately, geometric correction algorithms rarely direct us to go to an integer row and column in the original imagery (e.g., row 2, column 2) to get a brightness value to fill a location in the rectified output image. Rather, the location is usually a floating-point number (e.g., column 2.4, row 2.7). Therefore, intensity interpolation algorithms (often referred to as resampling) are used to obtain a brightness value from the desired location in the original image and then place this value in the output matrix.
Jensen, 2003

Image-to-Map Geometric Rectification
There are several intensity interpolation (resampling) algorithms, including:
* Nearest Neighbor
* Bilinear Interpolation
* Cubic Convolution
Jensen, 2003

Nearest-Neighbor Resampling The brightness value closest to the predicted x’, y’ coordinate is assigned to the output x,y coordinate. Jensen, 2003
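A minimal sketch of nearest-neighbor resampling at a single requested location (the function name and row/column array layout are illustrative assumptions):

```python
import numpy as np

def nearest_neighbor(image, xp, yp):
    """Return the brightness value whose pixel center is closest to (x', y')."""
    col = int(round(xp))      # nearest column
    row = int(round(yp))      # nearest row
    return image[row, col]

# Example: a request at column 2.4, row 2.7 takes the value at row 3, column 2.
```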

Bilinear Interpolation
Assigns output pixel values by interpolating brightness values in two orthogonal directions in the input image. It basically fits a plane to the 4 pixel values nearest to the desired position (x', y') and then computes a new brightness value based on the weighted distances to these points. For example, the distances from the requested (x', y') position at 2.4, 2.7 in the input image to the closest four input pixel coordinates (2,2; 3,2; 2,3; 3,3) are computed. Also, the closer a pixel is to the desired x', y' location, the more weight it will have in the final computation of the average (see the equation below), where Z_k are the surrounding four data point values, and D_k^2 are the distances squared from the point in question (x', y') to these data points.
Jensen, 2003
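The equation is a graphic in the original slide; the distance-weighted average it describes, over the four nearest input pixels, is:

```latex
% Distance-weighted average of the four nearest input pixel values
\[
BV_{wt} = \frac{\sum_{k=1}^{4} Z_k / D_k^2}{\sum_{k=1}^{4} 1 / D_k^2}
\]
```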

Bilinear Interpolation Jensen, 2003

Cubic Convolution
Assigns values to output pixels in much the same manner as bilinear interpolation, except that the weighted values of the 16 pixels surrounding the location of the desired x', y' pixel are used to determine the value of the output pixel (see the equation below), where Z_k are the surrounding sixteen data point values, and D_k^2 are the distances squared from the point in question (x', y') to these data points.
Jensen, 2003
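As with bilinear interpolation, the equation is a graphic in the slide; following the slide's own description (weights from squared distances), the analogous distance-weighted form over the 16 neighbors would be:

```latex
% Distance-weighted average of the sixteen nearest input pixel values
\[
BV_{wt} = \frac{\sum_{k=1}^{16} Z_k / D_k^2}{\sum_{k=1}^{16} 1 / D_k^2}
\]
```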

Cubic Convolution Jensen, 2003