Project 3 Results

Stereo Recap
Epipolar geometry:
– Epipoles are the intersections of the baseline with the image planes
– The matching point in the second image lies on a line through its epipole
– The fundamental matrix maps a point in one image to a line (its epipolar line) in the other
– Can solve for F given corresponding points (e.g., interest points)
Stereo depth estimation:
– Estimate disparity by finding corresponding points along scanlines
– Depth is inversely proportional to disparity
– Projected, structured lighting can replace one of the cameras
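The "depth is inverse to disparity" relation can be sketched numerically; the focal length and baseline values below are made up for illustration:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulation for a rectified stereo pair: Z = f * B / d.
    Larger disparity means the point is closer to the cameras."""
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.inf)          # zero disparity -> infinitely far
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# Doubling the disparity halves the depth: 7.0 m and 3.5 m here.
depths = depth_from_disparity([10.0, 20.0], focal_px=700, baseline_m=0.1)
```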

Modeling Light cs129: Computational Photography James Hays, Brown, Fall 2012 Slides from Alexei A. Efros and others

What is light? Electromagnetic radiation (EMR) moving along rays in space. R(λ) is EMR measured in units of power (watts), where λ is wavelength. Useful facts: light travels in straight lines, and in a vacuum the radiance emitted equals the radiance arriving, i.e., there is no transmission loss.

What do we see? 3D world → 2D image. Figures © Stephen E. Palmer, 2002

What do we see? 3D world → 2D image. Painted backdrop

On Simulating the Visual Experience: just feed the eyes the right data and no one will know the difference! Philosophy: the ancient question, “Does the world really exist?” Science fiction: many, many books on the subject, e.g., slow glass from “Light of Other Days”. Latest take: The Matrix. Physics: slow glass might be possible? Computer science: virtual reality. To simulate, we need to know: what does a person see?

The Plenoptic Function Q: What is the set of all things that we can ever see? A: The Plenoptic Function (Adelson & Bergen) Let’s start with a stationary person and try to parameterize everything that he can see… Figure by Leonard McMillan

P(θ,φ): a grayscale snapshot is the intensity of light seen from a single viewpoint, at a single time, averaged over the wavelengths of the visible spectrum. (We could also use P(x,y), but spherical coordinates are nicer.)

P(θ,φ,λ): a color snapshot is the intensity of light seen from a single viewpoint, at a single time, as a function of wavelength.

P(θ,φ,λ,t): a movie is the intensity of light seen from a single viewpoint, over time, as a function of wavelength.

P(θ,φ,λ,t,V_X,V_Y,V_Z): a holographic movie is the intensity of light seen from ANY viewpoint, over time, as a function of wavelength.

The Plenoptic Function P(θ,φ,λ,t,V_X,V_Y,V_Z) can reconstruct every possible view, at every moment, from every position, at every wavelength. It contains every photograph, every movie, everything that anyone has ever seen – it completely captures our visual reality! Not bad for a function…

Sampling the Plenoptic Function (top view): just look it up – Google Street View

Model geometry or just capture images?

Ray: let's not worry about time and color. 5D = 3D position + 2D direction: P(θ,φ,V_X,V_Y,V_Z). Slide by Rick Szeliski and Michael Cohen

No change in radiance along the ray between lighting, surface, and camera. How can we use this?

Ray Reuse: an infinite line. Assuming light is constant along the ray (vacuum, non-dispersive medium): 4D = 2D direction + 2D position. Slide by Rick Szeliski and Michael Cohen

Only need plenoptic surface

Synthesizing novel views Slide by Rick Szeliski and Michael Cohen

Lumigraph / Lightfield: outside the convex space containing the stuff, space is empty, so 4D suffices. Slide by Rick Szeliski and Michael Cohen

Lumigraph - Organization: 2D position, 2D direction (s, θ, φ). Slide by Rick Szeliski and Michael Cohen

Lumigraph - Organization: 2D position, 2-plane parameterization (s, u). Slide by Rick Szeliski and Michael Cohen

Lumigraph - Organization: 2D position, 2-plane parameterization – each ray is indexed by its intersections (s,t) and (u,v) with two parallel planes. Slide by Rick Szeliski and Michael Cohen
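The two-plane parameterization can be sketched in code; the plane positions z = 0 and z = 1 and the axis conventions are assumptions for illustration:

```python
import numpy as np

def ray_to_stuv(origin, direction, z_st=0.0, z_uv=1.0):
    """Index a ray by its intersections with two parallel planes:
    (s, t) on the plane z = z_st and (u, v) on the plane z = z_uv."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    s, t = o[:2] + d[:2] * (z_st - o[2]) / d[2]
    u, v = o[:2] + d[:2] * (z_uv - o[2]) / d[2]
    return s, t, u, v

# A ray through the origin heading along +z crosses both planes at (0, 0).
assert ray_to_stuv((0, 0, 0), (0, 0, 1)) == (0.0, 0.0, 0.0, 0.0)
```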

Lumigraph - Organization: hold s,t constant, let u,v vary – an image. Slide by Rick Szeliski and Michael Cohen
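Holding (s,t) fixed to get an image can be sketched with a small synthetic array; the sizes and the layout L[s,t,u,v] are assumptions for illustration:

```python
import numpy as np

# Tiny synthetic light field L[s, t, u, v]: (s,t) indexes camera position
# on one plane, (u,v) indexes pixel position on the other.
rng = np.random.default_rng(0)
S, T, U, V = 4, 4, 16, 16
L = rng.random((S, T, U, V))

# Holding (s, t) constant and letting (u, v) vary gives one image:
img = L[2, 1]                # the view from camera position (s=2, t=1)
assert img.shape == (U, V)

# Holding (u, v) constant instead gives an image over the (s, t) plane.
aperture_img = L[:, :, 8, 8]
assert aperture_img.shape == (S, T)
```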

Lumigraph / Lightfield

Lumigraph - Capture, Idea 1: move the camera carefully over the s,t plane with a gantry (see the Lightfield paper). Slide by Rick Szeliski and Michael Cohen

Lumigraph - Capture, Idea 2: move the camera anywhere, then rebin (see the Lumigraph paper). Slide by Rick Szeliski and Michael Cohen

Lumigraph - Rendering: for each output pixel, determine (s,t,u,v), then either use the closest discrete RGB sample or interpolate nearby values. Slide by Rick Szeliski and Michael Cohen

Lumigraph - Rendering: Nearest – take the closest s and closest u and draw it. Blend – quadrilinear interpolation of the 16 nearest samples. Slide by Rick Szeliski and Michael Cohen
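The "blend the 16 nearest" option can be sketched as quadrilinear (4D linear) interpolation; the array layout lf[s,t,u,v] is an assumption:

```python
import numpy as np

def quadrilinear_sample(lf, s, t, u, v):
    """Blend the 16 nearest samples of a 4D light field lf[s, t, u, v]
    with quadrilinear interpolation weights."""
    base = []   # (integer corner, fractional offset) per dimension
    for x, n in zip((s, t, u, v), lf.shape):
        x0 = int(np.clip(np.floor(x), 0, n - 2))
        base.append((x0, x - x0))
    out = 0.0
    for corner in range(16):            # the 2^4 corners of the 4D cell
        w, idx = 1.0, []
        for dim in range(4):
            x0, f = base[dim]
            d = (corner >> dim) & 1
            w *= f if d else 1.0 - f
            idx.append(x0 + d)
        out += w * lf[tuple(idx)]
    return out

# A constant light field interpolates to the same constant everywhere.
lf = np.ones((3, 3, 3, 3))
assert abs(quadrilinear_sample(lf, 1.3, 0.5, 1.1, 0.9) - 1.0) < 1e-9
```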

Stanford multi-camera array: 640 × 480 pixels × 30 fps × 128 cameras; synchronized timing, continuous streaming, flexible arrangement

Light field photography using a handheld plenoptic camera (commercialized as Lytro). Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz and Pat Hanrahan

© Marc Levoy Conventional versus light field camera

© Marc Levoy Conventional versus light field camera: uv-plane, st-plane

Prototype camera: 4000 × 4000 pixels ÷ 292 × 292 lenses = 14 × 14 pixels per lens. Contax medium format camera, Kodak 16-megapixel sensor, Adaptive Optics microlens array, 125μ square-sided microlenses

© Marc Levoy Digitally stopping down: stopping down = summing only the central portion of each microlens
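A sketch of digital stopping down, assuming an idealized sensor layout in which each microlens covers an axis-aligned K × K block of pixels (no rotation or vignetting, both simplifications):

```python
import numpy as np

# Synthetic plenoptic sensor: NL x NL microlenses, K x K pixels each.
NL, K = 8, 14
raw = np.random.default_rng(1).random((NL * K, NL * K))

def stop_down(raw, K, aperture):
    """Sum only the central aperture x aperture pixels under each
    microlens, digitally reducing the aperture (more depth of field)."""
    NL = raw.shape[0] // K
    # [y_lens, v, x_lens, u] -> [y_lens, x_lens, v, u]
    lenslets = raw.reshape(NL, K, NL, K).transpose(0, 2, 1, 3)
    lo = (K - aperture) // 2
    hi = lo + aperture
    return lenslets[:, :, lo:hi, lo:hi].sum(axis=(2, 3))

img = stop_down(raw, K, aperture=4)
assert img.shape == (NL, NL)
```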

© Marc Levoy Digital refocusing: refocusing = summing windows extracted from several microlenses
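Digital refocusing can be sketched as shift-and-add over sub-aperture images; the integer pixel shifts here are a simplification (real implementations interpolate sub-pixel shifts):

```python
import numpy as np

def refocus(subapertures, shift_per_view):
    """Shift-and-add refocusing. `subapertures` is [v, u, y, x]: one image
    per angular sample. Each image is translated proportionally to its
    offset from the central view, then all are averaged; the chosen
    shift_per_view selects the plane brought into focus."""
    V, U, H, W = subapertures.shape
    cv, cu = (V - 1) / 2, (U - 1) / 2
    out = np.zeros((H, W))
    for v in range(V):
        for u in range(U):
            dy = int(round((v - cv) * shift_per_view))
            dx = int(round((u - cu) * shift_per_view))
            out += np.roll(subapertures[v, u], (dy, dx), axis=(0, 1))
    return out / (V * U)
```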

© Marc Levoy Example of digital refocusing

© Marc Levoy Digitally moving the observer: moving the observer = moving the window we extract from the microlenses
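Moving the observer can be sketched by extracting the same window (here a single pixel) from under every microlens, again assuming an idealized layout where each microlens covers a K × K pixel block:

```python
import numpy as np

NL, K = 8, 14
raw = np.random.default_rng(2).random((NL * K, NL * K))

def subaperture_view(raw, K, u, v):
    """Pick the same (v, u) pixel under every microlens: a pinhole view
    from one position on the main-lens aperture. Changing (u, v) moves
    the virtual observer across the aperture."""
    NL = raw.shape[0] // K
    lenslets = raw.reshape(NL, K, NL, K)   # [y_lens, v, x_lens, u]
    return lenslets[:, v, :, u]

view = subaperture_view(raw, K, u=7, v=7)
assert view.shape == (NL, NL)
```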

© Marc Levoy Example of moving the observer

3D Lumigraph: one row of the s,t plane, i.e., hold t constant.

3D Lumigraph: one row of the s,t plane, i.e., hold t constant – thus s,u,v, a “row of images”.

P(x,t) – image by David Dewey

2D: Image What is an image? All rays through a point Panorama? Slide by Rick Szeliski and Michael Cohen

Image: image plane, 2D position

Spherical Panorama: all light rays through a point form a panorama. Totally captured in a 2D array – P(θ,φ). Where is the geometry??? See also: 2003 New Year's Eve

Other ways to sample the Plenoptic Function: moving in time gives a spatio-temporal volume P(θ,φ,t), useful for studying temporal changes. Long an interest of artists: Claude Monet's Haystacks studies.

Space-time images: other ways to slice the plenoptic function… (x, y, t)
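Slicing a video volume this way can be sketched as follows; the array layout V[t, y, x] is an assumption:

```python
import numpy as np

# Hypothetical grayscale video volume V[t, y, x].
T, H, W = 10, 6, 8
video = np.random.default_rng(3).random((T, H, W))

# An ordinary frame holds t constant: a (y, x) image.
frame = video[4]

# A space-time slice holds y constant instead: a (t, x) image showing
# how one scanline evolves over time.
xt_slice = video[:, 3, :]

assert frame.shape == (H, W)
assert xt_slice.shape == (T, W)
```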

The “Theatre Workshop” Metaphor (Adelson & Pentland, 1996): a desired image produced by a Painter, a Lighting Designer, and a Sheet-metal Worker

Painter (images)

Lighting Designer (environment maps) Show Naimark SF MOMA video

Sheet-metal Worker (geometry) Let surface normals do all the work!

… working together: we want to minimize cost, so each one does what's easiest for him. Geometry – big things; images – detail; lighting – illumination effects. (Clever Italians.)