Lensless Imaging with a Controllable Aperture Assaf Zomet and Shree K. Nayar Columbia University IEEE CVPR Conference June 2006, New York, USA.


Cameras Today: lens, aperture, plane in focus, image detector.

Our Lensless Camera: a volumetric attenuating aperture in front of the image detector.

Multilayer Aperture: a layered attenuating aperture in front of the image detector. Attenuating apertures: Farid & Simoncelli '97, Nayar & Branzoi '03.

Implementation: camera body; aperture: LCD; LCD control.

Controllable Pinhole: a pinhole in an attenuating layer placed in front of the image detector.

Controllable Pinhole: Video

Limitations of Lensless Imaging: optimal pinhole size (Rayleigh); brightness is governed by the f-number; sharpness and brightness vary with field of view (best at the image center). Making f and the detector larger by a factor k changes resolution/brightness by …
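The pinhole trade-off above can be sketched numerically. A common form of the diffraction-limited criterion balances the geometric blur (~d) against the Airy-disk diameter (~2.44 λ f / d), giving d = √(2.44 λ f); the function name and constants below are illustrative, not from the slides.

```python
import math

def optimal_pinhole(f, wavelength=550e-9):
    """Diffraction-limited pinhole: geometric blur (~d) equals the
    Airy-disk diameter (~2.44*lambda*f/d), so d = sqrt(2.44*lambda*f).
    Returns the diameter and the resulting f-number f/d."""
    d = math.sqrt(2.44 * wavelength * f)
    return d, f / d

# Scaling f (and the detector) by k only scales the optimal
# diameter -- and hence resolution -- by sqrt(k).
d1, n1 = optimal_pinhole(0.01)   # f = 10 mm
d4, n4 = optimal_pinhole(0.04)   # f = 40 mm (k = 4)
print(f"d = {d1 * 1e3:.3f} mm, f-number = {n1:.0f}")
print(f"scaling f by 4 scales d by {d4 / d1:.2f}")
```

Note how slow the f-number is to improve: quadrupling the camera size only doubles the optimal pinhole diameter.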

Collecting More Light: Coded Apertures (Zand, 1992).

Conventional view vs. desired view.

Split Field of View: Implementation. Attenuating layers (2nd layer physical), pinhole, image detector; detector regions map to FOV 1, FOV 2, FOV 3.

Split Field of View: Results. Lens camera vs. our camera.

Split Field of View: Video

Finding Faces: wasted pixels.

Finding Faces

Split Image: Computational Camera. Pinhole, attenuating layers (2nd layer physical), correlation pattern, image detector.

Computations in the Optics: our camera vs. lens camera. Correlation pattern; normalized correlation of image and pattern. Noncoherent optical processing (Rogers 1977).
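The quantity the optics computes here can be checked against its digital counterpart. A minimal sketch of normalized correlation between an image window and a pattern, assuming NumPy arrays of equal shape (the function name is ours, not the paper's):

```python
import numpy as np

def normalized_correlation(image, pattern):
    """Zero-mean, unit-norm correlation between an image window and a
    pattern; invariant to affine intensity changes of either input."""
    i = image - image.mean()
    p = pattern - pattern.mean()
    denom = np.linalg.norm(i) * np.linalg.norm(p)
    return float(np.dot(i.ravel(), p.ravel()) / denom) if denom else 0.0

rng = np.random.default_rng(0)
pattern = rng.random((8, 8))
# The pattern matches itself perfectly, even under a gain/offset change.
assert normalized_correlation(pattern, pattern) > 0.999
assert abs(normalized_correlation(2 * pattern + 1, pattern) - 1.0) < 1e-6
```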

A Camera as a Mapping: camera optics.

Imaging Formulation: One Layer (layer at distance f from the detector).

Imaging Formulation: Multilayer (layers at distances f_j).
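The multilayer formulation can be simulated in one dimension: a detector pixel at x integrates scene radiance over ray directions, with each ray attenuated by the product of the layer transmittances sampled where it crosses each layer at distance f_j. All names and parameter values below are illustrative, not the paper's notation.

```python
import numpy as np

def render_lensless_1d(scene, layers, distances, n_pix, pix_size=1.0):
    """1-D sketch of multilayer lensless imaging. `scene` maps a ray
    angle to radiance; each layer is a transmittance function of
    lateral position; `distances` gives each layer's distance f_j
    from the detector."""
    thetas = np.linspace(-0.5, 0.5, 64)            # ray angles (radians)
    image = np.zeros(n_pix)
    for i in range(n_pix):
        x = (i - n_pix / 2) * pix_size
        for th in thetas:
            t = 1.0
            for a, f in zip(layers, distances):
                t *= a(x + f * np.tan(th))         # crossing point on layer
            image[i] += scene(th) * t
    return image / len(thetas)

# A single pinhole layer: transmittance 1 inside a small hole, else 0.
pinhole = lambda u: 1.0 if abs(u) < 0.2 else 0.0
img = render_lensless_1d(lambda th: 1.0 + np.sin(6 * th),
                         [pinhole], [10.0], n_pix=16, pix_size=0.5)
```

With a single pinhole layer this reduces to the controllable-pinhole case; stacking more layers multiplies their transmittances along each ray.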

Varying Zoom: conventional view vs. desired view (regions labeled H and L).

Varying Zoom: mapping from the scene to the desired view. The mappings M_X are impossible to implement with 3 layers.

Varying Zoom: Imaging (simulation). Attenuating layers, pinhole, image detector; FOV 1, FOV 2, FOV 3; captured vs. desired.

Varying Zoom: Reconstruction. I_computed vs. I_captured.
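When the layers cannot realize the desired mapping optically, the remaining step is computational: model the camera as a linear map A from the desired image to the captured one and invert it. A minimal sketch with a synthetic, well-conditioned A (a stand-in, not the paper's actual operator):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
# Assumed camera mapping: identity plus a small perturbation, so it is
# strictly diagonally dominant and therefore invertible.
A = np.eye(n) + 0.01 * rng.random((n, n))
desired = rng.random(n)                    # unknown "desired view"
captured = A @ desired                     # what the detector records
# Recover the desired view by solving the linear system.
computed, *_ = np.linalg.lstsq(A, captured, rcond=None)
assert np.allclose(computed, desired, atol=1e-8)
```

In practice A would come from calibrating the layered aperture, and the conditioning of A governs how much noise the reconstruction amplifies.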

Summary: Controllable Pinhole; Split Field of View; Computational Camera; Spatially Varying Zoom. Programmable cameras: Nayar, Branzoi & Boult 2004.