High Dynamic Range Imaging: Spatially Varying Pixel Exposures
Shree K. Nayar, Tomoo Mitsunaga
CPSC 643 Presentation #2
Brien Flewelling, March 4th, 2009


Overview  HDR Imaging Problem Motivation Methods  Related Work Where it Started Sequential Images Multiple Detectors, Adaptive Pixel Elements

Overview  HDR Imaging using Spatially Varying Pixel Exposure The method  Image Aquisition  Image Reconstruction  Experimental Results  Conclusions and Future Work

High Dynamic Range Imaging: The Idea
 Perceptible intensity values span a range far greater than can be sampled in a single image.
 Using various techniques, estimate the camera response function in order to accurately allocate grayscale bits to energy levels in the scene.

Combining Information from Overexposure and Underexposure
 Consider the projection of scene illumination to be a function of energy rates.
 Bright and dark regions have a higher probability of being over- and underexposed, respectively, in an arbitrary snapshot. Combining various sampling techniques is what allows these regions to be displayed together.

Motivation: Why Do We Care?
 High dynamic range images yield scene representations much closer to what the human eye sees.
 Artistic purposes.
 Vision methods need good "landmarks"; if these fall in over- or underexposed regions, that is problematic.
 In tracking, a region could be overexposed or underexposed from frame to frame.

Methods: How to Extract HDRI Info
 Sequential Exposures: multiple images at various shutter speeds or iris settings
   Solve a subset of pixel correspondences as an array of linear systems
   Solve for the camera response function
   Map the results to the image
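A minimal sketch of the sequential-exposure idea (not the paper's exact pipeline): given registered images at known exposure times and an assumed linear response, each pixel's per-image radiance estimates are merged with a weighting that distrusts dark and saturated values. The `merge_exposures` helper and the hat weighting are illustrative choices; real pipelines first recover the response function (e.g. Debevec-Malik) and invert it.

```python
import numpy as np

def merge_exposures(images, times):
    """Merge registered 8-bit exposures into one radiance map.

    Assumes a linear camera response, so radiance = value / exposure time.
    """
    acc = np.zeros(images[0].shape, dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, times):
        z = img.astype(np.float64)
        # Hat weighting: trust mid-range values, distrust the extremes.
        w = 1.0 - np.abs(z / 255.0 - 0.5) * 2.0
        acc += w * (z / t)       # per-image radiance estimate
        wsum += w
    return acc / np.maximum(wsum, 1e-8)

# Two synthetic exposures of a uniform scene (radiance 10), times 1s and 4s.
imgs = [np.full((2, 2), 10.0), np.full((2, 2), 40.0)]
radiance = merge_exposures(imgs, [1.0, 4.0])
```

Both inputs agree on a radiance of 10 once divided by their exposure times, so the weighted merge returns that value everywhere.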

Methods: How to Extract HDRI Info
 Multiple Image Detectors
   Use optical elements to generate multiple images sampled by different imagers
   The images may have varying sensitivities, resolutions, or exposure times
   More expensive, but handles moving objects better

Multiple Sensor Elements in Each Pixel
 Reduces resolution by a factor of 2
 Simple combination of neighboring elements with different potential-well depths
 A largely disregarded approach: sensor cost is higher and the performance gain is modest

Adaptive Pixel Exposure
 Vary each pixel's sensitivity as a function of the time its potential well takes to fill (a feedback system)
 An interesting and promising approach, but:
   Expensive for large-scale chip designs
   Very sensitive to motion and blur effects in low-light scenes

Related Work: Where It Started
 [Blackwell, 1946] H. R. Blackwell. Contrast thresholds of the human eye. Journal of the Optical Society of America, 36:624–643, 1946. Blackwell studies the variations in illumination that the human eye can perceive in a scene.
 Many patents on HDR CCD sensors in the 1980s
 Sequential methods for HDR image generation in the early 1990s

Related Work: Sequential Exposures
 [Azuma and Morimura, 1996], [Saito, 1995], [Konishi et al., 1995], [Morimura, 1993], [Ikeda, 1998], [Takahashi et al., 1997], [Burt and Kolczynski, 1993], [Madden, 1993], [Tsai, 1994], [Mann and Picard, 1995], [Debevec and Malik, 1997], and [Mitsunaga and Nayar, 1999]
 The final paper extends the estimation to include the radiometric response function of the camera

Related Work: Hardware Solutions
 Multiple Imagers: [Doi et al., 1986], [Saito, 1995], [Saito, 1996], [Kimura, 1998], [Ikeda, 1998]
 Adaptive Pixel Elements: [Street, 1998], [Handy, 1986], [Wen, 1989], [Hamazaki, 1996], [Murakoshi, 1994], [Konishi et al., 1995], and [Brajovic and Kanade, 1996]

Spatially Varying Pixel Exposure
 The SVE (Spatially Varying Exposure) image
 Let a 2x2 array of pixels be subject to exposures e0, e1, e2, e3
 Let this array be repeated as a mask over the entire image
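The repeated 2x2 pattern can be sketched by tiling. The exposure ratios below (powers of 4) are an assumption for illustration, not the values of the paper's prototype:

```python
import numpy as np

def sve_mask(height, width, exposures=(1, 4, 16, 64)):
    """Tile a 2x2 exposure pattern across the image plane.

    e0..e3 are laid out as [[e0, e1], [e2, e3]]; the ratios here are
    illustrative placeholders.
    """
    e0, e1, e2, e3 = exposures
    tile = np.array([[e0, e1], [e2, e3]], dtype=np.float64)
    reps = (height // 2 + 1, width // 2 + 1)
    return np.tile(tile, reps)[:height, :width]

mask = sve_mask(4, 4)
# Every 2x2 block of `mask` repeats the same four exposure levels.
```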

How Does This Increase the Dynamic Range?

How Many Grays? (846)
 K = number of exposure levels: 4
 q = number of quantization levels per pixel: 256
 R = round-off function
 e_k = k-th exposure level
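The count of distinct gray levels can be illustrated by enumerating the union of quantized levels across the K exposures. The exposure ratios below are an assumption, so the resulting count need not match the slide's 846 exactly:

```python
# Count distinct gray levels produced by K exposures of a q-level sensor.
# The round() plays the role of the slide's round-off function R;
# the exposure ratios are illustrative, not the prototype's values.
q = 256                      # quantization levels per pixel
exposures = [1, 4, 16, 64]   # K = 4 exposure levels (assumed ratios)

levels = set()
for e in exposures:
    for m in range(q):
        # Scale each measured level to a common radiance scale.
        levels.add(round(m * max(exposures) / e))

num_grays = len(levels)      # far more than the 256 of a single exposure
```

With these assumed ratios the count comes to 832, more than three times the 256 levels of a single exposure; the slide's figure of 846 depends on the prototype's actual exposure values.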

Spatial Resolution Reduction
 Not a reduction by a factor of 2!
 Low-exposure pixels could be noise-dominated in dim regions
 High-exposure pixels could be saturated in bright regions
 In general, spatial resolution is not significantly reduced

Image Reconstruction by Aggregation
 Simple averaging: convolution with a 2x2 box filter
 Results in a piecewise-linear response, similar to a gamma function with gamma > 1
 Produces good HDR results overall, except at sharp edges
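A minimal sketch of the aggregation step, assuming the measurements have already been normalized by their exposures so each 2x2 SVE neighborhood mixes four differently exposed samples:

```python
import numpy as np

def aggregate(normalized):
    """Reconstruct by averaging each 2x2 neighborhood (a 2x2 box filter).

    `normalized` holds brightness values already divided by their
    exposure; edge rows/columns are handled by replicate padding.
    """
    h, w = normalized.shape
    # 2x2 box filter via shifted sums (no SciPy dependency).
    p = np.pad(normalized, ((0, 1), (0, 1)), mode="edge")
    return (p[:h, :w] + p[1:, :w] + p[:h, 1:] + p[1:, 1:]) / 4.0

img = np.arange(16, dtype=np.float64).reshape(4, 4)
smooth = aggregate(img)
```

Averaging trades a little sharpness for robustness, which matches the slide's caveat about sharp edges.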

Image Reconstruction by Interpolation
 If sharp features are important, the image brightness values M(i,j) are scaled by their exposures to produce M'(i,j)
 Remove all underexposed and saturated pixels
 Determine the "off-grid" points from the remaining "on-grid" points by interpolation; a cubic interpolation kernel is used in the least-squares estimation of the off-grid points

Solving for Off-Grid Values via the Interpolation Kernel
 M: 16x1 vector of on-grid brightness values
 F: 16x49 matrix of cubic interpolation coefficients
 Mo: 49x1 vector of off-grid brightness values, estimated by least squares from M = F·Mo
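The least-squares step can be sketched in generic form. With F of size 16x49 (as on the slide), M = F·Mo has 49 unknowns and only 16 equations, so a minimum-norm solution via the pseudo-inverse is one natural choice; the random F below is a stand-in for the actual cubic-kernel coefficients:

```python
import numpy as np

# Shapes from the slide: 16 on-grid measurements, 49 (7x7) off-grid unknowns.
rng = np.random.default_rng(0)
F = rng.random((16, 49))   # stand-in for the cubic interpolation coefficients
M = rng.random((16, 1))    # on-grid brightness values

# Underdetermined system M = F @ Mo: take the minimum-norm least-squares
# solution via the Moore-Penrose pseudo-inverse.
Mo = np.linalg.pinv(F) @ M
residual = np.linalg.norm(F @ Mo - M)
```

Because F here has full row rank, the reconstruction fits the 16 measurements exactly; with real kernel coefficients and discarded pixels, the fit is a genuine least-squares compromise.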

Experimental Results - Simulation

Results

Future Work
 The prototype was still being developed
 Simulation proved useful in estimating the nonlinear response function; can it also estimate properties of scene objects?
 Can this approach estimate or handle motion blur for moving objects?
 What is the optimal pattern for varying the pixel exposures?