Presentation transcript:

Reinterpretable Imager: Towards Variable Post-Capture Space, Angle & Time Resolution in Photography
Amit Agrawal, Ashok Veeraraghavan and Ramesh Raskar
Mitsubishi Electric Research Labs (MERL) and MIT Media Lab, Cambridge, MA, USA

Captured Photo

Video from Single-Shot (Temporal Frames)

Captured Photo

Rotating Doll in Focus

Captured 2D Photo, reinterpreted post-capture:
– In-focus static scene parts → High-Resolution 2D Image
– Out-of-focus static scene parts → 4D Light Field
– In-focus dynamic scene parts → Video
– Out-of-focus dynamic scene parts → 1D Parallax + Motion

Key Idea
Resolution tradeoff for conventional imaging:
– Fixed before capture (e.g., video camera, light field camera)
– Scene independent
Resolution tradeoff for the Reinterpretable Imager:
– Variable post-capture
– Scene dependent
– Different for different parts of the scene / captured photo

Reinterpretable Imager
Capture a time-varying light field:
– Spatial dimensions = 2
– Angular dimensions = 2
– Temporal dimensions = 1
We capture 4D subsets in a single shot.
A single camera design can act as a still camera, video camera, or light field camera.

Optical Design

Comparison of designs:
– Coded Aperture (Veeraraghavan et al., SIGGRAPH 2007): static mask in the aperture → digital refocusing
– Optical Heterodyning (Veeraraghavan et al., SIGGRAPH 2007): static mask near the sensor → light field capture
– Reinterpretable Imager (this paper): dynamic aperture mask + static near-sensor mask → post-capture resolution control

Reinterpretable Imager
Dynamic aperture mask:
– Pinhole moved across the aperture during the exposure time: single-shot video, light field, or high-resolution image
– Vertical slit moved across the aperture: 1D parallax + motion
Near-sensor mask:
– Similar to Veeraraghavan et al., SIGGRAPH 2007

Implementation
[Prototype photo: camera, motor, wheel with aperture mask, shutter, near-sensor mask]

Related Work
Hand-held single-shot light field cameras:
– Micro-lens array: Ng et al.
– Prisms + lenses: Georgiev et al.
– Mask at sensor: Veeraraghavan et al.
Light field camera + aperture modulation:
– Horstmeyer et al., ICCP 2009
Polarization and spectral multiplexing techniques:
– Assorted Pixels
– Illumination multiplexing: Schechner et al., ICCV 2003

Recovering a Full-Resolution 2D Image
[Diagram: in-focus static scene imaged onto sensor pixel p through the mask]
No mask: pixel value = I(p)
With mask: pixel value = β(p) I(p)

For a static in-focus scene: Captured 2D Photo → Divide by Calibration Photo → High-Resolution Image

Recovering a Full-Resolution 2D Image
For a static in-focus scene:
– Inserting masks == spatially varying image attenuation
– Compensate using a normalization photo β(p)
[Images: Captured Photo, Normalization Photo, Full-Resolution Image, each with in-focus and out-of-focus regions]
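A minimal numpy sketch of this normalization step; the function name, the epsilon clamp, and the commented usage are illustrative assumptions, not the authors' code.

```python
import numpy as np

def recover_full_res(captured, calibration, eps=1e-3):
    """Undo the spatially varying mask attenuation beta(p).

    For a static in-focus scene the sensor records beta(p) * I(p).
    Dividing by a calibration (normalization) photo of a uniform white
    scene, which records beta(p) up to a constant, recovers I(p).
    """
    beta = calibration / calibration.max()   # attenuation map, scaled to [0, 1]
    beta = np.clip(beta, eps, None)          # avoid dividing by ~0 in dark mask regions
    return captured / beta

# hypothetical usage:
# full_res = recover_full_res(captured_photo.astype(float),
#                             calibration_photo.astype(float))
```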

Recovering Light Fields for a Static Scene
Capturing light fields:
– Map angular variations to the spatial dimension (angle-to-space mapping)
For static scenes:
– Mask close to the sensor == captures the light field (heterodyning, Veeraraghavan et al., SIGGRAPH 2007)
– Mask in the aperture == no impact, only loses light
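As a rough illustration of the heterodyne demultiplexing (the angle-to-space mapping above), here is a minimal numpy sketch in the spirit of Veeraraghavan et al. 2007. It is not the authors' implementation: it assumes the photo resolution is an integer multiple of the 3x3 angular resolution and ignores the mask-dependent weighting of the spectral replicas that a calibrated pipeline would apply.

```python
import numpy as np

def heterodyne_light_field(photo, n_u=3, n_v=3):
    """Recover an (n_u x n_v)-angular-resolution 4D light field from one
    mask-modulated photo (heterodyning, simplified sketch)."""
    H, W = photo.shape
    h, w = H // n_u, W // n_v

    # The near-sensor mask lays n_u x n_v spectral replicas of the light
    # field side by side in the 2D spectrum of the captured photo.
    F = np.fft.fftshift(np.fft.fft2(photo))

    # Cut the spectrum into tiles; each tile is one angular-frequency slice
    # of the 4D light-field spectrum.
    LF_spec = np.zeros((n_u, n_v, h, w), dtype=complex)
    for i in range(n_u):
        for j in range(n_v):
            LF_spec[i, j] = F[i * h:(i + 1) * h, j * w:(j + 1) * w]

    # Inverse 4D FFT gives the light field L(u, v, x, y).
    return np.real(np.fft.ifftn(np.fft.ifftshift(LF_spec)))
```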

For static scenes: Captured 2D Photo → Compute 4D Light Field → Digital Refocusing; Sub-Aperture Views == Angular Samples
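Digital refocusing from the recovered sub-aperture views can be done by shift-and-add; a minimal sketch, where the integer-pixel shifts and the shift_per_view parameter are simplifying assumptions (a real implementation would interpolate fractional shifts).

```python
import numpy as np

def refocus(light_field, shift_per_view):
    """Shift-and-add digital refocusing from sub-aperture views.

    light_field: array of shape (n_u, n_v, h, w), one sub-aperture view per
    angular index. shift_per_view sets the pixel disparity per unit of
    angular index; sweeping it refocuses on different depths.
    """
    n_u, n_v, h, w = light_field.shape
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    out = np.zeros((h, w))
    for u in range(n_u):
        for v in range(n_v):
            du = int(round((u - cu) * shift_per_view))
            dv = int(round((v - cv) * shift_per_view))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (n_u * n_v)
```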

Captured Photo (Static Scene)

Reconstructed Sub-Aperture Views (3 by 3 Light Field) Angle

Recovering Video from a Single Shot
In-focus dynamic scene:
– Mask in the aperture maps temporal variations to angular variations
– Mask close to the sensor captures angular variations as a light field
Mapping time to space via: time to angle + angle to space
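The reinterpretation of sub-aperture views as video frames is then just a reordering; a small sketch, where the raster scan order of the pinhole is an assumption (the actual order is fixed by the aperture wheel).

```python
def frames_from_light_field(light_field, scan_order=None):
    """Reinterpret sub-aperture views as video frames.

    With the pinhole swept across the aperture during the exposure, each
    angular sample (u, v) of the recovered light field was exposed at a
    different time, so ordering the sub-aperture views along the pinhole
    trajectory yields the video frames.
    """
    n_u, n_v = light_field.shape[:2]
    if scan_order is None:
        # assumed raster scan of the pinhole across the aperture
        scan_order = [(u, v) for u in range(n_u) for v in range(n_v)]
    return [light_field[u, v] for (u, v) in scan_order]
```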

Capturing In-focus Dynamic Scenes
[Ray diagrams: as the pinhole moves across the aperture, a scene patch at times t1, t2, t3 is imaged through the static near-sensor mask onto different sensor positions]

For a dynamic in-focus scene: Captured 2D Photo → Compute 4D Light Field → Sub-Aperture Views == Temporal Frames

Captured Photo

Reconstructed Sub-Aperture Views (3 by 3 Light Field) Time

Rotating Doll

Reconstructed Sub-Aperture Views (3 by 3 Light Field)

Time For Rotating Doll

Angle For Static Scene Parts

Recovering 1D Parallax + Motion
Vertical slit moved across the aperture:
– Maps angular variations to the vertical dimension
– Maps temporal variations to the horizontal dimension
Captures a dynamic out-of-focus scene:
– However, only 1D out-of-focus blur (bokeh)

For a dynamic out-of-focus scene: Captured 2D Photo → Compute 4D Light Field → Sub-Aperture Views == Temporal Frames (horizontally) and Angular Samples (vertically) → Refocus Using Vertical Views
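For the vertical-slit case, refocusing uses only the vertical angular samples at a fixed time column; a minimal sketch under the same integer-shift assumption as the shift-and-add example above.

```python
import numpy as np

def refocus_vertical(light_field, shift_per_view, t):
    """Refocus one time instant using only the vertical angular samples.

    For the vertical-slit aperture mask, the horizontal index of the
    recovered light field corresponds to time and the vertical index to
    angle, so only 1D (vertical) parallax is available for refocusing.
    t selects the time column.
    """
    n_u = light_field.shape[0]
    cu = (n_u - 1) / 2.0
    out = np.zeros(light_field.shape[2:])
    for u in range(n_u):
        du = int(round((u - cu) * shift_per_view))
        out += np.roll(light_field[u, t], du, axis=0)   # vertical shift only
    return out / n_u
```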

Captured Photo

Reconstructed Sub-Aperture Views (3 by 3 Light Field): Time (horizontal), Angle (vertical)

Digital Refocusing on Moving Rubik’s Cube

Keeping Playing Card in Focus

Captured Photo

Static Object (in-focus)

Static Objects (Out-of-focus)

Moving Object (in depth)

Rotating Object (in focus)

Reconstructed Sub-Aperture Views (3 by 3 Light Field)

All Static and Dynamic Objects are sharp (No focus blur, no motion blur)

Angle For Static Objects

Time Angle For Moving Toy in Middle

Time For Rotating Toy on Right

Refocused on Static Toy (High-Resolution Image)

Digital Refocusing on Static Objects

Digital Refocusing on Toy Moving in Depth

Video Frames of Rotating Toy (in focus)

Limitations
Light loss:
– To get the extra information
– Both at the aperture and at the sensor
– Micro-lenses at the sensor (Ng et al.) for light field capture
Temporal frames:
– Number of frames = maximum angular resolution
– Not independent as in a video camera
– Large motions cause motion blur
– Viewpoint shift
– Ghosting artifacts across sub-aperture views
Does not capture full 5D information (as a video light field camera would)

Future Work
LCDs for modulation:
– Benefit: faster modulation
– Issues: contrast, diffraction
Using computer vision:
– No high/mid-level processing at present
Adaptive (active) sampling

Acknowledgements
Anonymous reviewers
MERL: Jay Thornton, Kojima Keisuke, Joseph Katz, John Barnwell, Brandon Taylor, Clifton Forlines and Yuichi Taguchi
Mitsubishi Electric, Japan: Haruhisa Okuda & Kazuhiko Sumi

Google: 'Reinterpretable Imager'
Captured 2D Photo, reinterpreted post-capture:
– In-focus static scene parts → High-Resolution 2D Image
– Out-of-focus static scene parts → 4D Light Field
– In-focus dynamic scene parts → Video
– Out-of-focus dynamic scene parts → 1D Parallax + Motion
[Camera diagram: dynamic aperture mask + static near-sensor mask + sensor = Reinterpretable Imager]