Diffusion Coding Photography for Extended Depth of Field SIGGRAPH 2010 Oliver Cossairt, Changyin Zhou, Shree Nayar Columbia University

Conventional Camera (F/1.8)

Camera Blur Model
Spatial domain: Captured Image = Focused Image ⊗ PSF + Image Noise
Frequency domain: Captured Image = Focused Image × MTF + Image Noise
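The model on this slide as a short numpy sketch (the function name and the 1D simplification are mine, for illustration): blur is a convolution with the PSF in space, equivalently a multiplication by the OTF (whose magnitude is the MTF) in frequency, plus sensor noise.

```python
import numpy as np

def capture(focused, psf, sigma=0.01, rng=None):
    """Camera blur model: captured = focused (*) PSF + noise.
    In the frequency domain this is FOCUSED x OTF + NOISE,
    where |OTF| is the MTF on these slides. 1D for brevity."""
    if rng is None:
        rng = np.random.default_rng(0)
    otf = np.fft.fft(psf, n=focused.size)
    blurred = np.real(np.fft.ifft(np.fft.fft(focused) * otf))
    return blurred + sigma * rng.standard_normal(focused.size)
```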

Deblurring Problems
Problem 1: Low MTF values lead to low SNR in the deblurred (focused) image.
Problem 2: The PSF varies with the depth of the object.
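Why low MTF values hurt: any deblurring filter must divide by the OTF, so frequencies where the MTF is small amplify noise. A minimal Wiener deconvolution sketch (the scalar snr prior is an assumption of mine, not from the talk):

```python
import numpy as np

def wiener_deblur(captured, psf, snr=100.0):
    """Wiener deconvolution. The filter gain ~1/OTF is large wherever
    the MTF is small, which amplifies noise (Problem 1); and if the
    true PSF varies with depth (Problem 2), the wrong PSF is inverted."""
    otf = np.fft.fft(psf, n=captured.size)
    wiener = np.conj(otf) / (np.abs(otf) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(np.fft.fft(captured) * wiener))
```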

Extending Depth of Field: Previous Work
Focus sweep cameras: [Hausler '72] [Nagahara et al. '08]
Wavefront coding cameras: [Dowski and Cathey '95] [Chi and George '01] [Garcia-Guerrero et al. '07]
Other related work: [Levin et al. '07] [Veeraraghavan et al. '07] [Levin et al. '09]

Focus Sweep Camera [Hausler '72] [Nagahara et al. '08]
The focal plane sweeps through the scene during exposure; the final PSF is the sum of the instantaneous PSFs at t = 1, 2, ..., 7.

Focus Sweep Camera [Hausler '72] [Nagahara et al. '08]
The final PSF is the sum of the instantaneous PSFs (t = 1 + t = 2 + ... + t = 7). Points at depth 1 and depth 2 sweep through the same set of instantaneous PSFs, just at different times, so the final PSF and its 2D MTF are nearly invariant to depth [Levin et al. '09].
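A sketch of the depth-invariance argument, with the instantaneous PSF modeled as a normalized box and the radii chosen purely for illustration:

```python
import numpy as np

def focus_sweep_psf(radii, support=101):
    """Final PSF = average of the instantaneous defocus PSFs over the
    sweep (t = 1 ... 7 on the slide). Instantaneous PSFs are modeled
    as normalized boxes of the given radii; 1D for brevity."""
    x = np.arange(support) - support // 2
    psf = np.zeros(support)
    for r in radii:
        inst = (np.abs(x) <= r).astype(float)
        psf += inst / inst.sum()
    return psf / len(radii)

# Points at two depths sweep through the same set of defocus radii,
# only in a different order, so their final PSFs coincide in this model.
psf_near = focus_sweep_psf([0, 4, 8, 12, 16, 20, 24])
psf_far = focus_sweep_psf([24, 20, 16, 12, 8, 4, 0])
```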

Wavefront Coding [Dowski and Cathey '95]
A cubic phase plate placed at the lens makes the PSF nearly invariant to depth; the MTF at each depth is a slice of the ambiguity function in (x, u) light field space [Levin et al. '09].
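For reference, the cubic phase plate of [Dowski and Cathey '95] applies a pupil phase of the form (α sets the coding strength; u, v are normalized pupil coordinates):

\phi(u, v) = \alpha \, (u^3 + v^3), \qquad -1 \le u, v \le 1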

EDOF Camera Comparison
Captured images of a resolution target at several depths, for a focus sweep camera and a wavefront coding camera.

EDOF Camera Comparison
Deblurred images for the focus sweep and wavefront coding cameras.

Deblurring Error vs. Depth
Plot: deblurring error as a function of depth for the focus sweep and wavefront coding cameras, with the noise level shown for reference.

Is it possible to achieve the performance of focus sweep without moving parts?

Optical Diffusers
A diffuser scatters each incoming light ray over a width w according to its scatter function. Shown: a circular diffuser, diffuser sheets, and an SEM image of a diffuser surface.

Diffuser Kernels
Light field space: x is position on the sensor, u is position in the aperture of size A (u ranges over [-A/2, A/2]). Diagrams show the light field of a point source without and with the diffuser.

Diffuser Kernels
The diffuser spreads the light field over a width w along the x dimension.

Diffuser Kernels
Coded light field = light field ⊗ diffuser kernel: the diffuser convolves the light field with the diffuser kernel along x.

Diffusion Coded PSF
Projecting the light field along u onto the sensor gives the camera PSF; projecting the coded light field gives the coded PSF. The result: Coded PSF = Camera PSF ⊗ Scatter function.
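The three slides above in one flatland sketch (axes, sizes, and the box kernel are illustrative choices of mine): the light field of a defocused point is a line whose slope is set by its depth; the diffuser convolves the light field along x; projecting along u yields the sensor PSF.

```python
import numpy as np

def coded_psf(slope, kernel, A=64, nx=257):
    """Flatland light field sketch: L(u, x) of a defocused point is
    the line x = slope * u over the aperture; the diffuser convolves
    L along x; the PSF is the projection of L along u. Hence
    coded PSF = camera PSF (*) scatter function."""
    L = np.zeros((A, nx))
    for i, u in enumerate(np.arange(A) - A // 2):
        xi = int(round(slope * u)) + nx // 2
        if 0 <= xi < nx:
            L[i, xi] = 1.0          # ray from aperture position u lands at x
    for i in range(A):              # diffuser: convolve each row along x
        L[i] = np.convolve(L[i], kernel, mode="same")
    return L.sum(axis=0)            # PSF = projection along u

psf = coded_psf(slope=0.3, kernel=np.ones(9) / 9.0)
```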

Radially Symmetric Light Field For an on-axis, isotropic point source:

Radially Symmetric Diffuser For a radially symmetric diffuser kernel:
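A plausible reading of the statements on these two slides, in notation I am assuming rather than taking from the slides: with radial coordinates ρ in the aperture and r on the sensor, an on-axis isotropic point source gives a light field that depends only on (ρ, r),

\ell(x, y, u, v) = \hat{\ell}(\rho, r), \qquad \rho = \sqrt{u^2 + v^2}, \quad r = \sqrt{x^2 + y^2}

and if the diffuser kernel is also a function of (ρ, r) only, the coding, and hence the coded PSF, reduces to a one-dimensional convolution in the radial coordinate r.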

Radially Symmetric Diffuser PSFs
Camera PSF, scatter function, and coded PSF for a conventional diffuser and for a radially symmetric diffuser, with the PSF (±50 px) and MTF (normalized frequency) shown as a function of depth.

Diffusion Coding Performance
Deblurring error vs. depth for wavefront coding, focus sweep, and diffusion coding (analyzed with both light field and wave optics models), with the noise level for reference. Diffusion coding achieves similar performance to focus sweep without moving parts.

Diffuser Implementation
The diffuser surface profile is designed from the target scatter function [Sales et al. '03]. Shown: the diffuser heightmap (thickness in µm vs. r in mm) and the fabricated diffuser.

Comparison with Prior Work
Deblurring error vs. depth for diffusion coding and the Garcia-Guerrero diffuser. Diffusion coding significantly outperforms prior work.

Diffusion Coding Experiments
Experimental setup: fabricated diffuser, Canon 50mm EF lens, Canon 450D sensor. Measured PSFs across depth, without and with the diffuser. Deblurring uses the BM3D algorithm [Dabov et al. '08].
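The talk's deblurring uses BM3D [Dabov et al. '08]; as a stand-in, the non-blind baseline it regularizes is a 2D Wiener deconvolution with the measured, depth-invariant PSF (this sketch is not the authors' pipeline):

```python
import numpy as np

def deblur2d(img, psf, snr=200.0):
    """Non-blind deconvolution with a measured, depth-invariant PSF.
    Plain Wiener shown; the talk uses BM3D-regularized deblurring.
    psf is assumed to have its peak at index (0, 0)."""
    otf = np.fft.fft2(psf, s=img.shape)
    wiener = np.conj(otf) / (np.abs(otf) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * wiener))
```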

Examples

Conventional Camera f-number = 1.8, exposure time = 16ms

Conventional Camera f-number = 18, exposure time = 16ms

Diffusion Coding Captured f-number = 1.8, exposure time = 16ms

Diffusion Coding Deblurred f-number = 1.8, exposure time = 16ms

Conventional Camera f-number = 1.8, exposure time = 10ms

Diffusion Coding Captured f-number = 1.8, exposure time = 10ms

Diffusion Coding Deblurred f-number = 1.8, exposure time = 10ms

Conventional Camera f-number = 1.8, exposure time = 12.5ms

Diffusion Coding Captured f-number = 1.8, exposure time = 12.5ms

Diffusion Coding Deblurred f-number = 1.8, exposure time = 12.5ms

Conventional Camera f-number = 1.8, exposure time = 16ms

Diffusion Coding Captured f-number = 1.8, exposure time = 16ms

Diffusion Coding Deblurred f-number = 1.8, exposure time = 16ms

Limitations
Compared with a conventional camera, diffusion coding can show loss of image texture, loss of contrast, and occlusion errors.

Conclusions
Diffusion coding theory; radially symmetric diffusers; diffusion coding implementation; diffusion coding examples.