CSCE641: Computer Graphics Image Formation Jinxiang Chai

Are They Images?

Outline Color representation Image representation Pin-hole Camera Projection matrix Plenoptic function

Outline Color representation Image representation Pin-hole Camera Projection matrix Plenoptic function

Color Representation Why do we use RGB to encode pixel color? Can we use RGB to represent all colors? What are other color representations?

Human Vision Model of human vision

Human Vision Model of human vision Vision components: Incoming light Human eye

Electromagnetic Spectrum Visible light frequencies range between: – Red: 4.3×10^14 Hz (700 nm) – Violet: 7.5×10^14 Hz (400 nm)

Visible Light The human eye can see "visible" light in the wavelength range between 400 nm and 700 nm

Visible Light The human eye can see "visible" light in the wavelength range between 400 nm and 700 nm

Visible Light The human eye can see "visible" light in the wavelength range between 400 nm and 700 nm - Not a strict boundary - Some colors are absent from the spectrum (e.g., brown, pink)

Spectral Energy Distribution Three different types of light sources

Spectral Energy Distribution Three different types of light sources Can we use the spectral energy distribution to represent color?

Spectral Energy Distribution Three different types of light sources Can we use the spectral energy distribution to represent color? - Not really: different distributions can produce the same perceived color (metamers)!

Spectral Energy Distribution The six spectra shown on the slide appear as the same purple to viewers with normal color vision

Color Representation? Why is not every range of the light spectrum perceived? So how should we represent color? A representation should be: - unique - compact - able to cover as many visible lights as possible

Human Vision Photoreceptor cells in the retina: - Rods - Cones

Light Detection: Rods and Cones Rods: - 120 million rods in the retina - 1000× more light-sensitive than cones - Discriminate black/white brightness in low illumination - Sensitive to short wavelengths Cones: - 6-7 million cones in the retina - Responsible for high-resolution vision - Discriminate colors - Three types of color sensors (64% red, 32% green, 2% blue) - Sensitive to any combination of the three colors

Tristimulus Color Theory Spectral-response functions of each of the three types of cones

Tristimulus Color Theory Spectral-response functions of each of the three types of cones Can we use them to match any spectral color?

Tristimulus Color Theory Spectral-response functions of each of the three types of cones Color matching functions based on RGB - any spectral color can be matched by a linear combination of these primary colors

Tristimulus Color Theory Spectral-response functions of each of the three types of cones Color matching functions based on RGB - any spectral color can be matched by a linear combination of these primary colors

Tristimulus Color Theory Spectral-response functions of each of the three types of cones Color matching functions based on RGB - any spectral color can be matched by a linear combination of these primary colors

Tristimulus Color Theory So, color is psychological - Representing color as a linear combination of red, green, and blue relates to the cones, not to physics - Most people have the same cones, but some people don't; the sky might not look blue to them (although they will still call it "blue") - Many people (mostly men) are colorblind, missing 1, 2, or 3 cone types (and can buy cheaper TVs)

Additive and Subtractive Color RGB color model (additive): White = [1 1 1]^T, Green = [0 1 0]^T CMY color model (subtractive): White = [0 0 0]^T, Green = [1 0 1]^T The two are complementary color models: R = 1-C; G = 1-M; B = 1-Y
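To make the complementary relationship above concrete, here is a minimal Python sketch (not from the slides; channel values are assumed to be normalized to [0, 1], and the function names are purely illustrative):

```python
def rgb_to_cmy(r, g, b):
    """Convert an additive RGB color (channels in [0, 1]) to subtractive CMY."""
    return (1.0 - r, 1.0 - g, 1.0 - b)


def cmy_to_rgb(c, m, y):
    """Inverse mapping: CMY back to RGB."""
    return (1.0 - c, 1.0 - m, 1.0 - y)


# White in RGB is [1 1 1]; in CMY it is [0 0 0] (no ink deposited).
print(rgb_to_cmy(1.0, 1.0, 1.0))   # -> (0.0, 0.0, 0.0)
# Green in RGB is [0 1 0]; in CMY it is [1 0 1] (cyan + yellow ink).
print(rgb_to_cmy(0.0, 1.0, 0.0))   # -> (1.0, 0.0, 1.0)
```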

RGB Color Space RGB cube (axes: red, green, blue) – Easy for devices – But can it represent all colors? – Not perceptually uniform – Where are brightness, hue, and saturation?
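One common answer to "where are brightness, hue, and saturation?" is to convert RGB into a hue/saturation/value representation. A small sketch using Python's standard colorsys module (an illustration added here, not part of the slides; channels assumed in [0, 1]):

```python
import colorsys

# A saturated red and a dim, desaturated red have different RGB triples;
# HSV separates hue from saturation and from brightness (value).
for rgb in [(1.0, 0.0, 0.0), (0.4, 0.2, 0.2)]:
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    print(f"RGB {rgb} -> hue {h:.2f}, saturation {s:.2f}, value {v:.2f}")
```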

Outline Color representation Image representation Pin-hole Camera Projection matrix Plenoptic function

Image Representation An image is a 2D rectilinear array of pixels - A width × height array where each entry of the array stores a single pixel

Image Representation An image is a 2D rectilinear array of pixels - A width × height array where each entry of the array stores a single pixel (illustrated on the slide with a 5×5 picture)

Image Representation A pixel stores color information Luminance pixels - gray-scale (intensity) images - typically 8 or 16 bits per pixel Red, green, blue pixels (RGB) - color images - each channel typically 8 or 16 bits per pixel

Image Representation An image is a 2D rectilinear array of pixels - A width × height array where each entry of the array stores a single pixel - Each pixel stores color information, e.g., (255, 255, 255) for a white pixel
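The "width × height array of pixels" description maps directly onto an array in code. A minimal sketch (assuming NumPy is available; the 5×5 size and the chosen colors are placeholders, not taken from the slides):

```python
import numpy as np

width, height = 5, 5
# An RGB image: a height x width array where each entry is a 3-channel pixel,
# stored with 8 bits per channel.
image = np.zeros((height, width, 3), dtype=np.uint8)

image[:, :] = (0, 0, 0)          # start with an all-black picture
image[2, 2] = (255, 255, 255)    # set the center pixel to white

print(image.shape)   # (5, 5, 3)
print(image[2, 2])   # [255 255 255]
```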

Outline Color representation Image representation Pin-hole Camera Projection matrix Plenoptic Function

How Do We See the World? Let's design a camera: Idea 1: put a piece of film in front of an object Do we get a reasonable picture?

Pin-hole Camera Add a barrier to block off most of the rays – This reduces blurring – The opening is known as the aperture – How does this transform the image?

Camera Obscura The first camera –Known to Aristotle –Depth of the room is the focal length –Pencil of rays – all rays through a point

Camera Obscura How does the aperture size affect the image?

Shrinking the Aperture Why not make the aperture as small as possible? –Less light gets through –Diffraction effects…

Shrink the Aperture: Diffraction A diffuse circular disc appears!

Shrink the Aperture

The Reason for Lenses

Adding a Lens A lens focuses light onto the film – There is a specific distance at which objects are "in focus"; other points project to a "circle of confusion" in the image – Changing the shape of the lens changes this distance
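The "specific distance at which objects are in focus" follows from the standard thin-lens relation, which is not spelled out in the transcript; as a hedged reconstruction, for focal length f, object distance z_o, and lens-to-film distance z_i:

```latex
\frac{1}{z_o} + \frac{1}{z_i} = \frac{1}{f}
```

Points at other depths would need a different z_i to satisfy the relation, so they come to focus in front of or behind the film and appear as a circle of confusion.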

Changing Lenses Example focal lengths: 28 mm, 50 mm, 70 mm, 210 mm

Outline Color representation Image representation Pin-hole Camera Projection matrix Plenoptic Function

Projection Matrix What’s the geometric relationship between 3D objects and 2D images?

Modeling Projection: 3D->2D The coordinate system –We will use the pin-hole model as an approximation –Put the optical center (Center Of Projection) at the origin –Put the image plane (Projection Plane) in front of the COP –The camera looks down the negative z axis

Modeling Projection: 3D->2D Projection equations –Compute intersection with PP of ray from (x,y,z) to COP –Derived using similar triangles (on board)

Modeling Projection: 3D->2D Projection equations –Compute intersection with PP of ray from (x,y,z) to COP –Derived using similar triangles (on board) –We get the projection by throwing out the last coordinate:
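The projection equations themselves were slide figures and did not survive into the transcript. The standard similar-triangles result, with the COP at the origin, the camera looking down the negative z axis, and the image plane a distance d in front of the COP, is:

```latex
(x, y, z) \;\longrightarrow\; \left(-d\,\frac{x}{z},\; -d\,\frac{y}{z},\; -d\right)
\qquad\Longrightarrow\qquad
(x', y') = \left(-d\,\frac{x}{z},\; -d\,\frac{y}{z}\right)
```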

Homogeneous Coordinates Is this a linear transformation? – No: division by z is nonlinear

Homogeneous Coordinates Is this a linear transformation? – No: division by z is nonlinear Trick: add one more coordinate: homogeneous image coordinates and homogeneous scene coordinates

Homogeneous Coordinates Is this a linear transformation? – No: division by z is nonlinear Trick: add one more coordinate: homogeneous image coordinates and homogeneous scene coordinates Converting from homogeneous coordinates: divide by the added coordinate
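The coordinate definitions were also slide figures; in the usual notation, the added-coordinate trick (for image and scene points) and the conversion back (divide by the added coordinate) are:

```latex
(x, y) \Rightarrow \begin{bmatrix} x \\ y \\ 1 \end{bmatrix},
\qquad
(x, y, z) \Rightarrow \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix},
\qquad
\begin{bmatrix} x \\ y \\ w \end{bmatrix} \Rightarrow \left(\frac{x}{w},\; \frac{y}{w}\right)
```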

Perspective Projection Projection is a matrix multiply using homogeneous coordinates, followed by a divide by the third coordinate

Perspective Projection Projection is a matrix multiply using homogeneous coordinates, followed by a divide by the third coordinate This is known as perspective projection – The matrix is the projection matrix – It can also be formulated as a 4×4 matrix, with a divide by the fourth coordinate
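The matrices were shown only as slide images; written out so that they reproduce the projection equations above (a reconstruction of the standard form, with d the COP-to-image-plane distance):

```latex
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1/d & 0 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix} x \\ y \\ -z/d \end{bmatrix}
\;\Rightarrow\;
\left(-d\,\frac{x}{z},\; -d\,\frac{y}{z}\right),
\qquad
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & -1/d & 0 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix} x \\ y \\ z \\ -z/d \end{bmatrix}
\;\Rightarrow\;
\left(-d\,\frac{x}{z},\; -d\,\frac{y}{z},\; -d\right)
```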

Perspective Effects Distant objects appear smaller The distortion of items when viewed at an angle (spatial foreshortening)

Perspective Effects Distant objects appear smaller The distortion of items when viewed at an angle (spatial foreshortening)

Perspective Effects Distant objects appear smaller The distortion of items when viewed at an angle (spatial foreshortening)

Orthographic Projection Special case of perspective projection – Distance from the COP to the PP is infinite – Also called "parallel projection" – What's the projection matrix?
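As a hedged answer to the slide's question (the matrix itself is not in the transcript), the usual orthographic projection simply drops the z coordinate:

```latex
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\;\Rightarrow\;
(x, y)
```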

Weak-perspective Projection Scaled orthographic projection - object size is small compared with the average distance z_0 from the camera (e.g., σ_z < z_0/20) - so d/z ≈ d/z_0 (constant)

Weak-perspective Projection Scaled orthographic projection - object size is small compared with the average distance z_0 from the camera (e.g., σ_z < z_0/20) - so d/z ≈ d/z_0 (constant) Projection matrix, with λ = d/z_0: [[λ, 0, 0, 0], [0, λ, 0, 0], [0, 0, 0, 1]]

View Transformation From world coordinates to camera coordinates

View Transformation From world coordinates to camera coordinates
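The transformation appeared only as a figure; in the usual notation, a world point (x_w, y_w, z_w) is mapped into camera coordinates by a rotation R and a translation t:

```latex
\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix}
=
\begin{bmatrix} R_{3\times 3} & t_{3\times 1} \\ 0_{1\times 3} & 1 \end{bmatrix}
\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}
```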

Viewport Transformation From projection-plane coordinates (x, y) to image coordinates (u, v), with image origin offset (u_0, v_0)

Viewport Transformation From projection-plane coordinates to image coordinates: [u, v, 1]^T = [[s_x, 0, u_0], [0, s_y, v_0], [0, 0, 1]] [x, y, 1]^T

Putting It Together From world coordinates to image coordinates: compose the viewport transformation, the perspective projection, and the view transformation

Putting It Together From world coordinates to image coordinates: viewport transformation (image resolution, aspect ratio) × perspective projection (focal length) × view transformation (the relative position and orientation between the camera and the objects)
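A minimal numerical sketch of the composed pipeline (an illustration, not course code: it assumes NumPy, uses placeholder values for focal length, pixel scales, image center, and camera pose, and follows the camera-looking-down-negative-z convention used above):

```python
import numpy as np

# Intrinsics (placeholder values).
d = 1.0                     # distance from the COP to the projection plane
sx, sy = 500.0, 500.0       # pixels per unit on the image plane
u0, v0 = 320.0, 240.0       # image center in pixel coordinates

# Viewport transformation: projection-plane coordinates -> pixel coordinates.
viewport = np.array([[sx, 0.0, u0],
                     [0.0, sy, v0],
                     [0.0, 0.0, 1.0]])

# Perspective projection: camera coordinates -> homogeneous image-plane coordinates.
persp = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, -1.0 / d, 0.0]])

# View transformation: world coordinates -> camera coordinates (placeholder pose).
R = np.eye(3)                    # camera axes aligned with the world axes
t = np.array([0.0, 0.0, -5.0])   # world origin sits 5 units in front of the camera
view = np.vstack([np.hstack([R, t[:, None]]), [0.0, 0.0, 0.0, 1.0]])

# Full world -> pixel mapping, composed once.
P = viewport @ persp @ view

Xw = np.array([0.5, 0.2, 0.0, 1.0])   # a world point in homogeneous coordinates
u, v, w = P @ Xw
print(u / w, v / w)                    # pixel coordinates after the homogeneous divide
```

Composing the three matrices once and reusing the product is the usual design: the intrinsic part changes only when the camera or image settings change, while the view transformation changes whenever the camera or the objects move.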

Camera Parameters 11 parameters in total

Camera Parameters 11 parameters in total: intrinsic camera parameters (typically 5: pixel scale factors s_x, s_y, image center u_0, v_0, and skew) and extrinsic camera parameters (6: 3 for rotation, 3 for translation)

How about this image?

Outline Color representation Image representation Pin-hole Camera Projection matrix Plenoptic function

Plenoptic Function What is the set of all things that we can ever see? - The Plenoptic Function (Adelson & Bergen)

Plenoptic Function What is the set of all things that we can ever see? - The Plenoptic Function (Adelson & Bergen) Let’s start with a stationary person and try to parameterize everything that he can see…

Plenoptic Function Any ray seen from a single view point can be parameterized by (θ,φ).

A Color Image is the intensity of light – Seen from a single viewpoint – At a single time t – As a function of viewing direction (θ, φ) and wavelength λ: P(θ, φ, λ)

A Dynamic Scene (a movie) is the intensity of light – Seen from a single viewpoint – Over time t – As a function of viewing direction (θ, φ) and wavelength λ: P(θ, φ, λ, t)

Moving around a Static Scene: the intensity of light – Seen from an arbitrary viewpoint at an arbitrary location (x, y, z) – At a single time t – As a function of viewing direction (θ, φ) and wavelength λ: P(x, y, z, θ, φ, λ)

Moving around a Dynamic Scene: the intensity of light – Seen from an arbitrary viewpoint at an arbitrary location (x, y, z) – Over time t – As a function of viewing direction (θ, φ) and wavelength λ: P(x, y, z, θ, φ, λ, t)

Plenoptic Function It can reconstruct every possible view, at every moment, from every position, at every wavelength It contains every photograph, every movie, everything that anyone has ever seen: it completely captures our visual reality! An image is a 2D sample of the plenoptic function P(x, y, z, θ, φ, λ, t)
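A toy sketch of the last point, that an image is a 2D sample of the plenoptic function: fix position, time, and wavelength, and sample a grid of viewing directions. (Assumes NumPy; the synthetic stand-in for P, the field of view, and the resolution are placeholders, since the real plenoptic function is not something we can write down.)

```python
import numpy as np

def plenoptic(x, y, z, theta, phi, wavelength, t):
    """Stand-in for the plenoptic function P(x, y, z, theta, phi, lambda, t);
    a synthetic pattern so the sampling below has something to return."""
    return 0.5 + 0.5 * np.sin(3 * theta) * np.cos(2 * phi)

def pinhole_image(x, y, z, t, wavelength, width=64, height=64, fov=np.pi / 3):
    """An image as a 2D sample of P: position, time, and wavelength are fixed,
    and a grid of viewing directions (theta, phi) is sampled within the field of view."""
    thetas = np.linspace(-fov / 2, fov / 2, width)
    phis = np.linspace(-fov / 2, fov / 2, height)
    return np.array([[plenoptic(x, y, z, th, ph, wavelength, t) for th in thetas]
                     for ph in phis])

img = pinhole_image(x=0.0, y=0.0, z=0.0, t=0.0, wavelength=550e-9)
print(img.shape)   # (64, 64)
```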

How to “Capture” Orthographic Images Rebinning rays forms orthographic images

How to “Capture” Orthographic Images Rebinning rays forms orthographic images

How to “Capture” Orthographic Images Rebinning rays forms orthographic images

How to “Capture” Orthographic Images Rebinning rays forms orthographic images

How to “Capture” Orthographic Images Rebinning rays forms orthographic images

Multi-perspective Images Rebinning rays forms multi-perspective images

Multi-perspective Images Rebinning rays forms multi-perspective images

Multi-perspective Images

Outline Color representation Image representation Pin-hole Camera Projection matrix Plenoptic Function

They Are All Images

Next Lecture Image sampling theory and Fourier analysis