Projection Defocus Analysis for Scene Capture and Image Display
Li Zhang and Shree Nayar, Columbia University
Supported by NSF IIS-00-85864
SIGGRAPH Conference, July 2006, Boston, USA

Modeling Complex Scenes is Hard
(Image courtesy of Maia, flickr.com)

The Limits of 3D Photography

  Method                       Limitation
  Structured Light & Stereo    Missing data near occlusions
  Depth from Camera Defocus    Inaccurate at discontinuities
  Photometric Stereo           Only for a single surface

Goal: a depth map that is image-complete and accurate for complex scenes.

Projection Defocus Analysis
- Projector Defocus Model
- Robust Depth Recovery: Image Refocusing, Video Composition
- Enhanced Image Display: Increasing Depth of Field, Depixelation

The Camera Defocus Cue
(Image courtesy of flickr.com)

Limitation of Depth from Camera Defocus
[Diagram: camera, focal plane, and scene surface, with the camera defocus equation]

Depth from Projector Defocus
[Diagram, built up over three slides: projector with lamp, focal plane, and scene surface, with the projector defocus equation]

Camera Defocus vs. Projector Defocus
[Side-by-side equations for the two defocus cues]
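
Both cues follow the standard thin-lens blur-circle relation; the notation below is a generic sketch (aperture diameter A, focal length f, focus distance d_f, point depth d), not the slide's own symbols:

\[
b(d) \;=\; \frac{A\,f}{d_f - f} \cdot \frac{\lvert d - d_f \rvert}{d}
\]

The geometry is the same for a camera and a projector; the practical difference is that projector defocus is created by known, controllable illumination, so it can be measured even on textureless surfaces, while camera defocus must be inferred from whatever texture the scene happens to provide.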

Temporal Defocus Method
[Diagram: coaxial projector and camera; the projected pattern is shifted across the scene surface; projector defocus model]
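
A hedged paraphrase of the defocus model the slide names (the paper's exact notation may differ): as the projected pattern shifts over time t, each camera pixel x records

\[
I_t(x) \;\approx\; \alpha(x)\,\bigl(P_t * h_{d(x)}\bigr)(x) \;+\; \beta(x),
\]

where P_t is the shifted pattern, h_{d(x)} is the projector's defocus kernel at the surface depth d(x), α(x) absorbs albedo and shading, and β(x) absorbs ambient light. The temporal intensity profile at each pixel thus encodes the kernel's width, which maps to depth independently of scene texture; with a coaxial camera, every camera pixel is also illuminated, avoiding missing data at occlusions.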

Experimental Setup
[Photos: projector and camera made coaxial with a half-mirror]

Books and Gate: Sharp Discontinuities
[Results: scene, depth map, 3D model]

Books and Gate: Sharp Discontinuities
[Results: scene and depth map, refocused on the gate and on the back]

Application: Refocus Synthesis
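
The refocusing shown here uses the recovered depth map. As a loose illustration (not the paper's renderer), a layered, depth-dependent blur can synthesize new focus settings; all names and parameters below are illustrative, assuming a registered grayscale image and depth map:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(image, depth, focus_depth, blur_per_unit=2.0, n_layers=16):
    """Synthetic refocus of a grayscale `image` given its `depth` map.

    Pixels near `focus_depth` stay sharp; blur grows with distance from
    the chosen focal plane. Layered approximation: quantize depth into
    n_layers levels, blur the whole image once per level, and blend.
    """
    image = image.astype(np.float64)
    levels = np.linspace(depth.min(), depth.max(), n_layers)
    width = max(levels[1] - levels[0], 1e-8)  # soft layer width
    out = np.zeros_like(image)
    weight = np.zeros_like(image)
    for d in levels:
        sigma = blur_per_unit * abs(d - focus_depth)
        blurred = gaussian_filter(image, sigma) if sigma > 0 else image
        mask = np.exp(-(((depth - d) / width) ** 2))  # layer membership
        out += blurred * mask
        weight += mask
    return out / np.maximum(weight, 1e-12)
```

Choosing blur_per_unit to track the thin-lens relation sketched earlier makes the synthetic blur mimic a physical aperture.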

Wrestlers: Curved and Specular Surfaces
[Results: scene, depth map, 3D model]

Fence and Leaves: Complex Occlusions
[Results: scene, depth map, 3D model]

Application: Refocus Synthesis

Playing Cards: Object Insertion
[Results: scene, depth map, green-screen capture, depth-based composition]
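
The depth-based composition on this slide can be sketched as a per-pixel depth test; a minimal version, assuming registered color and depth images for the scene and the inserted object (names illustrative):

```python
import numpy as np

def depth_composite(scene_rgb, scene_depth, obj_rgb, obj_depth, obj_mask):
    """Insert an object into a scene by comparing per-pixel depths.

    Where the object is present (obj_mask) and closer than the scene
    surface, its color wins; elsewhere the scene shows through, so the
    object is correctly occluded by nearer scene geometry.
    """
    in_front = obj_mask & (obj_depth < scene_depth)
    out = scene_rgb.copy()
    out[in_front] = obj_rgb[in_front]
    return out
```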

Projection Defocus Analysis
- Projector Defocus Model
- Robust Depth Recovery: Image Refocusing, Video Composition
- Enhanced Image Display: Increasing Depth of Field, Depixelation

Focused Projection at Multiple Depths

A Multi-Projector Solution [Bimber & Emmerling 2006]

Focused Projection at Multiple Depths
Our single-projector solution: defocus compensation
[Demo: displays at 30 inches and 3 meters]

Defocus Compensation
[Diagrams: the unknown compensation image, blurred by the defocus kernel, should reproduce the original image. Simple sharpening does NOT work; the compensation image must instead be solved for from the original image and the defocus kernel.]
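
A minimal frequency-domain sketch of the compensation step, assuming a known, spatially uniform defocus kernel and a grayscale image in [0, 1]. The actual system must handle depth-varying kernels and the projector's limited dynamic range; the Wiener-style inverse below is only illustrative, with all names hypothetical:

```python
import numpy as np

def _pad_kernel(kernel, shape):
    """Zero-pad a small centered kernel to `shape`, rolling its center
    to index (0, 0) so the FFT introduces no spatial shift."""
    padded = np.zeros(shape)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    return np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))

def compensation_image(original, kernel, snr=100.0):
    """Solve for an image that, once blurred by the projector's defocus
    kernel, approximates `original`.

    Naive inversion divides by ~0 at frequencies the kernel suppresses,
    which is why simple sharpening fails; the Wiener term bounds the
    gain, trading residual blur for physically realizable pixel values.
    """
    H = np.fft.fft2(_pad_kernel(kernel, original.shape))
    Hinv = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    comp = np.real(np.fft.ifft2(np.fft.fft2(original) * Hinv))
    # A projector cannot emit negative light or exceed full brightness.
    return np.clip(comp, 0.0, 1.0)
```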

Compensation Results
[Comparison at front, middle, and back planes: original image, normal projection, compensated projection]

Defocus-Based Depixelation
The screen-door effect: the gaps between a digital projector's pixels are visible on screen. Slightly defocusing the projector blurs away the pixel grid, and defocus compensation restores the image's apparent sharpness.
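
In terms of the compensation sketch above, depixelation can reuse the same machinery once the kernel of the deliberately defocused projector has been measured (hypothetical usage):

```python
# `slight_defocus_kernel` is the measured kernel of the intentionally
# defocused projector; the pixel-grid gaps blur away while the
# compensation image keeps the projected content looking sharp.
sharp_input = compensation_image(frame, slight_defocus_kernel)
```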

Depixelation Results
[Comparison: original image, normal projection, depixelated projection]

Contributions
- Estimating depth from projector defocus
- Increasing depth of field without changing optics
- Reducing pixelation for digital projectors

Limitations and Future Work
- Real-time depth recovery and compensation
- Translucency acquisition
- Matte estimation

The End