Reverse-Projection Method for Measuring Camera MTF

Reverse-Projection Method for Measuring Camera MTF
Stan Birchfield

MTF measures camera sharpness
- Optical transfer function (OTF): Fourier transform of the impulse response
- Modulation transfer function (MTF): magnitude of the OTF
- Spatial frequency response (SFR): MTF for digital imaging systems (SFR ≈ MTF)
- Slanted edge: simplest target for computing the SFR
- High MTF → sharp camera; low MTF → blurry camera
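As a concrete illustration of these definitions, the MTF can be computed as the normalized magnitude of the Fourier transform of a line spread function (LSF). This is a minimal sketch, not from the talk: the Gaussian LSFs are hypothetical stand-ins for measured edge derivatives, and `mtf_from_lsf` is our own helper name.

```python
import numpy as np

def mtf_from_lsf(lsf):
    """Return spatial frequencies (cycles/sample) and normalized MTF."""
    spectrum = np.abs(np.fft.rfft(lsf))   # magnitude of the Fourier transform
    mtf = spectrum / spectrum[0]          # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(len(lsf))
    return freqs, mtf

# A sharper (narrower) LSF yields a higher MTF at every nonzero frequency.
x = np.arange(-32, 32)
sharp_lsf = np.exp(-x**2 / (2 * 1.0**2))    # hypothetical sharp camera
blurry_lsf = np.exp(-x**2 / (2 * 4.0**2))   # hypothetical blurry camera
f, mtf_sharp = mtf_from_lsf(sharp_lsf)
_, mtf_blurry = mtf_from_lsf(blurry_lsf)
```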

What affects MTF?
- Lens
- Sensor
- Electronics
- Target
- Pose
- Lighting intensity and color
- Color channel
- ROI
- Orientation
- Algorithm (slanted edge)

Variability of results
Same: sensor, target, lighting conditions, pose.
Different: crops from 25 to 50 pixels wide; angles between 5 and 20 degrees.
MTF30 over a set of ~20 cropped images.

Variability of results
Same: sensor, target, lighting conditions, pose.
Different: crops from 25 to 50 pixels wide; angles between 5 and 20 degrees.
A set of ~20 cropped images.

Variability of results
Same: sensor, target, lighting conditions, pose, angle.
Different: crops from 25 to 50 pixels wide.
These are all crops of the same image!

ISO 12233: The standard way to estimate MTF [2000, 2014]
Slanted-edge approach:
1. Define ROI (region of interest), either manually or automatically
2. Compute luminance from RGB
3. Gamma expansion: undo the OECF (opto-electronic conversion function)
4. Find points along the edge
5. Fit a line to the points
6. Project the 2D array onto a 1D array
7. Differentiate
8. Apply a Hamming window
9. Compute the DFT (discrete Fourier transform); the magnitude of the DFT yields an estimate of the MTF
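The pipeline above can be sketched roughly as follows. This is a simplified illustration, not the reference implementation: a plain gamma curve stands in for the full OECF inverse, the ROI is assumed to be a grayscale crop with a single near-vertical edge, and the edge-finding, line-fitting, and projection steps follow the standard variant (per-row centroids, ordinary least squares, forward projection).

```python
import numpy as np

def slanted_edge_mtf(roi, gamma=2.2, oversample=4):
    """Sketch of the slanted-edge SFR pipeline for a grayscale ROI."""
    img = (roi.astype(float) / roi.max()) ** gamma      # undo gamma (OECF stand-in)
    # Step 4: per-row edge location = centroid of the row derivative
    deriv = np.diff(img, axis=1)
    cols = np.arange(deriv.shape[1]) + 0.5
    xs = (np.abs(deriv) * cols).sum(axis=1) / np.abs(deriv).sum(axis=1)
    ys = np.arange(img.shape[0], dtype=float)
    # Step 5: ordinary least squares fit, x = m*y + b
    m, b = np.polyfit(ys, xs, 1)
    # Step 6: forward-project every pixel into an oversampled 1D ESF
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    dist = xx - (m * yy + b)                            # signed distance to edge
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    sums = np.bincount(bins.ravel(), weights=img.ravel())
    esf = sums / np.maximum(counts, 1)                  # empty bins stay 0 here
    # Steps 7-9: differentiate, apply Hamming window, take DFT magnitude
    lsf = np.diff(esf)
    lsf *= np.hamming(len(lsf))
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(len(lsf), d=1.0 / oversample)  # cycles/pixel
    return freqs, mtf
```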

Comparison of the standard and proposed approaches:
- Step 4 (find points along edge): centroid of derivative along rows (standard) vs. Ridler-Calvard segmentation (proposed)
- Step 5 (fit line to points): ordinary least squares (standard) vs. total least squares (proposed)
- Step 6 (project from 2D to 1D): forward projection (standard) vs. reverse projection (proposed)

Step 4: Find points along edge
Standard approach: derivative along rows. For each row, compute a 1D derivative using an FIR filter, then compute the centroid. Local; assumes a near-vertical orientation.
Proposed approach: Ridler-Calvard segmentation. Repeat: m1 ← gray-level mean below t; m2 ← gray-level mean above t; t ← (m1 + m2)/2. Global; works at any orientation.
T. W. Ridler and S. Calvard, "Picture thresholding using an iterative selection method," IEEE Trans. Systems, Man, and Cybernetics, SMC-8:630-632, 1978.
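The iterative threshold update above might look like this in NumPy. A sketch only: the starting threshold (the global mean) and the convergence tolerance are our assumptions, and the input is assumed to contain gray levels on both sides of the threshold.

```python
import numpy as np

def ridler_calvard(gray, tol=0.5):
    """Iterative threshold selection: t <- (mean below t + mean above t) / 2."""
    t = gray.mean()                      # starting point (assumption)
    while True:
        m1 = gray[gray <= t].mean()      # gray-level mean below t
        m2 = gray[gray > t].mean()       # gray-level mean above t
        t_new = 0.5 * (m1 + m2)
        if abs(t_new - t) < tol:         # stop when the threshold settles
            return t_new
        t = t_new
```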

Step 5: Fit line to points
Standard approach: ordinary least squares, y = mx + b (slope-intercept form); minimizes vertical error; affected by orientation.
Proposed approach: total least squares, ax + by + c = 0 (standard form); minimizes perpendicular error; works at any orientation.
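A common way to solve total least squares is via the SVD of the centered points: the line normal (a, b) is the direction of least variance, which minimizes the sum of squared perpendicular distances. This helper is our sketch, not the talk's code; note it handles a perfectly vertical line, where ordinary least squares on y = mx + b breaks down.

```python
import numpy as np

def tls_line(xs, ys):
    """Fit ax + by + c = 0 by total least squares (perpendicular error)."""
    pts = np.column_stack([xs, ys]).astype(float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    a, b = vt[-1]                        # unit normal: direction of least variance
    c = -(a * centroid[0] + b * centroid[1])  # line passes through the centroid
    return a, b, c
```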

Step 6: Project from 2D to 1D
Standard approach: forward projection. For each 2D pixel, project to the nearest 1D bin, then average; some bins may be empty.
Proposed approach: reverse projection. For each 1D bin, project the average of the 2D pixels (maybe interpolated); all bins have values!
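The contrast between the two projections can be illustrated on 1D samples. This sketch simplifies the talk's 2D setting: `forward_project` averages samples into their nearest bin and can leave bins empty (NaN here), while `reverse_project` uses linear interpolation as a stand-in for "average of 2D pixels (maybe interpolated)", so every bin gets a value.

```python
import numpy as np

def forward_project(dist, vals, centers):
    """For each sample, drop it into the nearest bin, then average per bin."""
    esf = np.full(len(centers), np.nan)             # NaN marks empty bins
    idx = np.argmin(np.abs(dist[:, None] - centers[None, :]), axis=1)
    for k in range(len(centers)):
        sel = idx == k
        if sel.any():
            esf[k] = vals[sel].mean()
    return esf

def reverse_project(dist, vals, centers):
    """For each bin, interpolate a value from the samples: no empty bins."""
    order = np.argsort(dist)
    return np.interp(centers, dist[order], vals[order])
```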

Forward vs. reverse projection (all other steps identical): the forward-projection curve exhibits a bump; the reverse-projection curve does not.

What causes the bump? In the forward projection, the period of the noise equals the amount of oversampling; the reverse projection does not show this pattern.

Stability across subsamplings: the reverse projection is more stable.

Experiment 1: Different orientations
Orientations from 0° to 45° in increments of 5° (50×50 crops); a gap appears in the results.
Details: 4:1 contrast ratio, 580 lux, 5540 K, AGC off, JPEG.

MTF curves of reverse projection: consistent across orientations.

Experiment 2: Vertical crop Number of rows: 3 to 50

Experiment 3: Horizontal crop Number of columns: 5 to 50

Experiment 4: Square crop Side length: 5 to 50

MTF curve comparison (50×50 crop, 5° orientation): bilinear interpolation vs. bicubic interpolation vs. no interpolation → interpolation alone does not explain the discrepancy.

Conclusion
- Existing implementations of ISO 12233 exhibit variability: ±0.07 in MTF30 due to crop alone, enough to affect the significant digits.
- Proposed solution: Ridler-Calvard segmentation to find points, total least squares to fit the line, reverse projection to go from 2D to 1D.
- Advantages: stability; invariance to orientation.
- Remaining questions: Is the high-frequency bump caused by forward projection? Is reverse projection more or less accurate? More experiments?

Thanks! Special thanks to Andrew Juenger, Erik Garcia, and Kanzah Qasim.