CS 691B Computational Photography Instructor: Gianfranco Doretto Modeling Light

What is light? Electromagnetic radiation (EMR) moving along rays in space. R(λ) is EMR, measured in units of power (watts) – λ is wavelength. Light: travels far, travels fast, travels in straight lines, interacts with stuff, bounces off things, is produced in nature, has lots of energy. -- Steve Shafer

What do we see? 3D world → 2D image. Figures © Stephen E. Palmer, 2002

What do we see? 3D world → 2D image – painted backdrop

On Simulating the Visual Experience Just feed the eyes the right data – No one will know the difference! Philosophy: – Ancient question: “Does the world really exist?” Science fiction: – Many, many, many books on the subject, e.g. slowglass from “Light of Other Days” – Latest take: The Matrix. Physics: – Slowglass might be possible? Computer Science: – Virtual Reality. To simulate we need to know: What does a person see?

The Plenoptic Function Q: What is the set of all things that we can ever see? A: The Plenoptic Function (Adelson & Bergen) Let’s start with a stationary person and try to parameterize everything that he can see… Figure by Leonard McMillan

Grayscale snapshot P(θ,φ) is the intensity of light – seen from a single viewpoint – at a single time – averaged over the wavelengths of the visible spectrum (can also do P(x,y), but spherical coordinates are nicer).

Color snapshot P(θ,φ,λ) is the intensity of light – seen from a single viewpoint – at a single time – as a function of wavelength.

A movie P(θ,φ,λ,t) is the intensity of light – seen from a single viewpoint – over time – as a function of wavelength.

A holographic movie P(θ,φ,λ,t,V_X,V_Y,V_Z) is the intensity of light – seen from ANY viewpoint – over time – as a function of wavelength.

The Plenoptic Function P(θ,φ,λ,t,V_X,V_Y,V_Z) – can reconstruct every possible view, at every moment, from every position, at every wavelength – contains every photograph, every movie, everything that anyone has ever seen! It completely captures our visual reality! Not bad for a function…
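
To make the dimensionality concrete, here is a minimal Python sketch (all names hypothetical; the placeholder return value stands in for real measurements) of the 7D plenoptic function and the lower-dimensional slices from the slides above:

```python
import numpy as np

def plenoptic(theta, phi, lam, t, vx, vy, vz):
    """P(θ, φ, λ, t, Vx, Vy, Vz): radiance arriving at viewpoint
    (Vx, Vy, Vz) from direction (θ, φ) at wavelength λ and time t.
    Real systems only ever *sample* this; 0.0 is a placeholder."""
    return 0.0

# The slides' lower-dimensional objects are partial evaluations:
V0, T0 = (0.0, 0.0, 0.0), 0.0              # fixed viewpoint and instant
VISIBLE = np.linspace(400e-9, 700e-9, 32)  # visible wavelengths (meters)

def grayscale_snapshot(theta, phi):        # P(θ, φ)
    # one viewpoint, one instant, averaged over the visible spectrum
    return np.mean([plenoptic(theta, phi, lam, T0, *V0) for lam in VISIBLE])

def color_snapshot(theta, phi, lam):       # P(θ, φ, λ)
    return plenoptic(theta, phi, lam, T0, *V0)

def movie(theta, phi, lam, t):             # P(θ, φ, λ, t)
    return plenoptic(theta, phi, lam, t, *V0)
# The holographic movie is the full 7D function itself.
```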

Sampling the Plenoptic Function (top view) Just look it up -- QuickTime VR

Ray Let’s not worry about time and color: P(θ,φ,V_X,V_Y,V_Z) is 5D – 3D position – 2D direction

No Change in Radiance along a ray [figure: Lighting, Surface, Camera] How can we use this?

Ray Reuse Infinite line – assume light is constant (vacuum, non-dispersive medium) 4D – 2D direction – 2D position

Only need the plenoptic function on a surface [figure axes: u, v]

Synthesizing novel views

Lumigraph / Lightfield Outside the convex space containing the scene, the plenoptic function is 4D [figure labels: Stuff, Empty]

Lumigraph - Organization 2D position, 2D direction

Lumigraph - Organization 2D position, two-plane parameterization [figure axes: u, s]

Lumigraph - Organization 2D position, two-plane parameterization: each ray is indexed by its intersections (u,v) and (s,t) with two parallel planes.
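
A minimal sketch of this parameterization, assuming (by convention, not from the slides) the uv-plane sits at z = 0 and the st-plane at z = 1:

```python
import numpy as np

def ray_to_uvst(origin, direction, z_uv=0.0, z_st=1.0):
    """Two-plane parameterization: intersect a ray with the uv-plane
    (z = z_uv) and the st-plane (z = z_st) and return (u, v, s, t).
    Assumes the ray is not parallel to the planes (direction[2] != 0)."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    tu = (z_uv - o[2]) / d[2]   # ray parameter at the uv-plane
    ts = (z_st - o[2]) / d[2]   # ray parameter at the st-plane
    u, v = (o + tu * d)[:2]
    s, t = (o + ts * d)[:2]
    return u, v, s, t

# Example: a ray from behind the uv-plane toward (0.2, 0.1, 1.0)
print(ray_to_uvst(origin=(0, 0, -1), direction=(0.2, 0.1, 1.0)))
```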

Lumigraph - Organization Hold (u,v) constant, let (s,t) vary: an image.

Lumigraph / Lightfield

Lumigraph - Capture Idea 1: move the camera carefully over the (u,v) plane – gantry (see the Lightfield paper)

Lumigraph - Capture Idea 2: move the camera anywhere – rebinning (see the Lumigraph paper)

Lumigraph - Rendering For each output pixel, determine (u,v,s,t); then either use the closest discrete RGB sample, or interpolate nearby values.

Lumigraph - Rendering Nearest: closest u, closest s, draw it. Blend: quadrilinear interpolation over the 16 nearest samples.
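
Both lookup strategies are easy to sketch, assuming the light field has already been resampled into a discrete 4D array L[u, v, s, t] (a simplification of what a real renderer does with calibrated plane coordinates):

```python
import numpy as np
from itertools import product

def lookup_nearest(L, u, v, s, t):
    """Nearest-sample lookup in a discrete light field L[u, v, s, t]."""
    idx = tuple(int(np.clip(round(x), 0, n - 1))
                for x, n in zip((u, v, s, t), L.shape))
    return L[idx]

def lookup_quadrilinear(L, u, v, s, t):
    """Blend the 16 nearest samples (quadrilinear interpolation).
    Assumes (u, v, s, t) lie inside the sampled range."""
    q = np.array([u, v, s, t])
    lo = np.clip(np.floor(q).astype(int), 0, np.array(L.shape) - 2)
    w1 = q - lo          # fractional distance toward the upper neighbor
    w0 = 1.0 - w1
    out = 0.0
    for corner in product((0, 1), repeat=4):   # the 2^4 = 16 corners
        w = np.prod([(w1 if c else w0)[k] for k, c in enumerate(corner)])
        out += w * L[tuple(lo + np.array(corner))]
    return out
```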

Stanford multi-camera array 640 × 480 pixels × 30 fps × 128 cameras – synchronized timing – continuous streaming – flexible arrangement

Light field photography using a handheld plenoptic camera Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz and Pat Hanrahan

Conventional versus light field camera

[figure: uv-plane and st-plane]

Prototype camera 4000 × 4000 pixels ÷ 292 × 292 lenses = 14 × 14 pixels per lens. Contax medium format camera; Kodak 16-megapixel sensor; Adaptive Optics microlens array; 125 μm square-sided microlenses.

Digitally stopping down Stopping down = summing only the central portion of each microlens.
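
A minimal sketch of the idea, assuming the raw sensor image is cropped to align exactly with the microlens grid (the decode step and array layout are illustrative, not taken from the paper):

```python
import numpy as np

def decode_microlens_image(raw, n_lenses, pix_per_lens):
    """Reshape a raw plenoptic sensor image into lf[y, x, v, u]: one
    pix_per_lens × pix_per_lens block of directional samples per
    microlens. Assumes the crop aligns exactly with the lens grid."""
    p = pix_per_lens
    return raw.reshape(n_lenses, p, n_lenses, p).transpose(0, 2, 1, 3)

def stop_down(lf, aperture=4):
    """Digital stopping down: keep only the central aperture × aperture
    directional samples under each microlens and sum them."""
    p = lf.shape[-1]
    c0 = (p - aperture) // 2
    return lf[:, :, c0:c0 + aperture, c0:c0 + aperture].sum(axis=(2, 3))

# e.g. a synthetic 292-lens, 14-pixel-per-lens sensor as in the prototype
lf = decode_microlens_image(np.zeros((292 * 14, 292 * 14)), 292, 14)
photo = stop_down(lf, aperture=4)   # one small-aperture photograph
```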

Digital refocusing Refocusing = summing windows extracted from several microlenses.
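
A shift-and-add sketch of the same idea (a simplification of the method in Ng et al.; the integer-pixel shifts and the alpha convention here are illustrative):

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing of lf[y, x, v, u]: translate each
    sub-aperture image in proportion to its (v, u) offset from the
    lens center, then average. alpha selects the virtual focal plane
    (alpha = 0 reproduces the original focus). np.roll wraps at the
    borders, which a real implementation would handle properly."""
    ny, nx, p, _ = lf.shape
    center = (p - 1) / 2.0
    out = np.zeros((ny, nx))
    for v in range(p):
        for u in range(p):
            dy = int(round(alpha * (v - center)))
            dx = int(round(alpha * (u - center)))
            out += np.roll(np.roll(lf[:, :, v, u], dy, axis=0), dx, axis=1)
    return out / (p * p)
```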

Example of digital refocusing

Digitally moving the observer Moving the observer = moving the window we extract from the microlenses.
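
In the same hedged sketch, moving the observer reduces to reading one fixed off-center sample from every microlens:

```python
def subaperture_view(lf, v, u):
    """Moving the observer: read the same directional sample (v, u)
    from under every microlens. Off-center (v, u) shifts the virtual
    viewpoint; lf[y, x, v, u] is laid out as in the sketches above."""
    return lf[:, :, v, u]
```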

Example of moving the observer

Moving backward and forward

On sale now: lytro.com

3D Lumigraph One row of the (s,t) plane – i.e., hold t constant

3D Lumigraph One row of the (s,t) plane – i.e., hold t constant – thus (s,u,v) – a “row of images”

P(x,t) [image by David Dewey]

2D: Image What is an image? All rays through a point –Panorama?

Image Image plane: 2D – position

Spherical Panorama All light rays through a point form a panorama, totally captured in a 2D array – P(θ,φ). Where is the geometry??? See also: 2003 New Years Eve

Other ways to sample the Plenoptic Function Moving in time: – spatio-temporal volume: P(θ,φ,t) – useful to study temporal changes – long an interest of artists: Claude Monet, Haystacks studies

Space-time images Other ways to slice the plenoptic function… [figure axes: x, y, t]
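
For instance, with a video volume stored as a NumPy array (hypothetical layout), each slice is a single indexing operation:

```python
import numpy as np

video = np.zeros((120, 480, 640))  # hypothetical volume: video[t, y, x]

frame    = video[0]          # fix t: the familiar (y, x) image
xt_slice = video[:, 240, :]  # fix a scanline y: a (t, x) space-time image
```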

The “Theatre Workshop” Metaphor (Adelson & Pentland, 1996) Desired image [figure: Painter, Lighting Designer, Sheet-metal worker]

Painter (images)

Lighting Designer (environment maps) Show Naimark SF MOMA video

Sheet-metal Worker (geometry) Let surface normals do all the work!

… working together Want to minimize cost; each one does what’s easiest for him – geometry: big things – images: detail – lighting: illumination effects [image: clever Italians]

Slide Credits This set of slides also contains contributions kindly made available by the following authors – Alexei Efros – Richard Szeliski – Michael Cohen – Stephen E. Palmer – Ren Ng