CS 445 / 645 Introduction to Computer Graphics, Lecture 12: Camera Models

Paul Debevec, Top Gun Speaker: Wednesday, October 9th at 3:30 in OLS. MIT Technology Review's "100 Young Innovators."

Rendering with Natural Light

Fiat Lux

Light Stage

Moving the Camera or the World?
Two equivalent operations:
- Initial OpenGL camera position is at the origin, looking along -Z
- Now create a unit square parallel to the camera at z = -10
- If we put a z-translation matrix of 3 on the stack, what happens?
  – Camera moves to z = -3 (note OpenGL models viewing in left-hand coordinates)
  – Camera stays put, but the square moves to z = -7
- The image at the camera is the same either way (see the sketch below)
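A minimal sketch of the square example in legacy fixed-function OpenGL, assuming a context and viewport are already set up elsewhere:

```c
#include <GL/gl.h>

void drawScene(void)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();               /* camera at the origin, looking along -Z */
    glTranslatef(0.0f, 0.0f, 3.0f); /* the z-translation of 3 on the stack */

    glBegin(GL_QUADS);              /* unit square specified at z = -10 ... */
    glVertex3f(-0.5f, -0.5f, -10.0f);
    glVertex3f( 0.5f, -0.5f, -10.0f);
    glVertex3f( 0.5f,  0.5f, -10.0f);
    glVertex3f(-0.5f,  0.5f, -10.0f);
    glEnd();                        /* ... ends up at z = -7 in eye space */
}
```

Whether you read glTranslatef as moving the square to z = -7 or the camera to z = -3, the rendered image is identical.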

A 3D Scene
Notice the presence of the camera, the projection plane, and the world coordinate axes. Viewing transformations define how to acquire the image on the projection plane.

Viewing Transformations
Goal: to create a camera-centered view
- Camera is at the origin
- Camera is looking along the negative z-axis
- Camera's 'up' is aligned with the y-axis (what does this mean?)

2 Basic Steps
Step 1: Align the world's coordinate frame with the camera's by rotation

2 Basic Steps
Step 2: Translate to align the world and camera origins

Creating Camera Coordinate Space
- Specify a point where the camera is located in world space, the eye point (View Reference Point = VRP)
- Specify a point in world space that we wish to become the center of view, the lookat point
- Specify a vector in world space that we wish to point up in the camera image, the up vector (VUP)
This gives intuitive camera movement.

Constructing Viewing Transformation, V
- Create a vector from the eye point to the lookat point
- Normalize the vector
- The desired rotation matrix should map this vector to [0, 0, -1]^T. Why?

Constructing Viewing Transformation, V
- Construct another important vector from the cross product of the lookat vector and the vup vector
- This vector, when normalized, should align with [1, 0, 0]^T. Why?

Constructing Viewing Transformation, V
- One more vector to define…
- This vector, when normalized, should align with [0, 1, 0]^T
- Now let's compose the results

Composing Matrices to Form V
- We know the three world axis vectors (x, y, z)
- We know the three camera axis vectors (u, v, n)
- The viewing transformation, V, must convert from the world to the camera coordinate system

Composing Matrices to Form V
Remember:
- Each camera axis vector is unit length
- Each camera axis vector is perpendicular to the others
The camera matrix is orthogonal and normalized: orthonormal. Therefore, M^-1 = M^T (a small check of this appears below).
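A quick illustration of why M^-1 = M^T for such a matrix: the sketch below (using an arbitrary orthonormal triple, not the camera axes from the slides) multiplies R by its transpose and prints the identity.

```c
#include <stdio.h>

int main(void)
{
    double R[3][3] = {              /* rows: unit, mutually perpendicular axes */
        {  0.6, 0.8, 0.0 },
        { -0.8, 0.6, 0.0 },
        {  0.0, 0.0, 1.0 }
    };
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            double s = 0.0;
            for (int k = 0; k < 3; k++)
                s += R[i][k] * R[j][k];  /* (R * R^T)[i][j] */
            printf("%6.2f ", s);
        }
        printf("\n");                    /* prints the 3x3 identity */
    }
    return 0;
}
```

Each diagonal entry is an axis dotted with itself (unit length gives 1); each off-diagonal entry is a dot product of perpendicular axes (0).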

Composing Matrices to Form V
Therefore, the rotation component of the viewing transformation is just the transpose of the computed camera axis vectors.

Composing Matrices to Form V
Translation component too. Multiply it through.

Final Viewing Transformation, V
To transform vertices, use this matrix, and you get the camera-centered view. A sketch of the full construction follows.
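A C sketch of the whole construction, following the slides' derivation. The Vec3 type and buildViewMatrix name are illustrative, not from any particular library; the result matches the gluLookAt convention (n = normalize(eye - lookat), so the camera looks down -n).

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}
static Vec3 normalize(Vec3 a) {
    double len = sqrt(dot(a, a));
    Vec3 r = { a.x/len, a.y/len, a.z/len };
    return r;
}

/* Fill V (row-major 4x4) with the world-to-camera viewing transformation. */
void buildViewMatrix(Vec3 eye, Vec3 lookat, Vec3 vup, double V[4][4])
{
    Vec3 n = normalize(sub(eye, lookat)); /* maps to +z; view direction maps to -z */
    Vec3 u = normalize(cross(vup, n));    /* maps to +x */
    Vec3 v = cross(n, u);                 /* maps to +y; already unit length */

    /* Rotation rows are the camera axes (the transpose of the column matrix);
       the translation is -R * eye, i.e., the "multiply it through" step. */
    double m[4][4] = {
        { u.x, u.y, u.z, -dot(u, eye) },
        { v.x, v.y, v.z, -dot(v, eye) },
        { n.x, n.y, n.z, -dot(n, eye) },
        { 0.0, 0.0, 0.0,  1.0         }
    };
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            V[i][j] = m[i][j];
}
```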

Canonical View Volume
A standardized viewing volume representation:
- Parallel (orthogonal): a box bounded by x or y = +/- 1, between the front and back planes along -z
- Perspective: a frustum bounded by x or y = +/- z, between the front and back planes along -z

Why do we care?
A canonical view volume permits standardization:
- Clipping
  – Easier to determine if an arbitrary point is enclosed in the volume
  – Consider clipping to six arbitrary planes of a viewing volume versus the canonical view volume
- Rendering
  – Projection and rasterization algorithms can be reused

Projection Normalization
One additional step of standardization:
- Convert the perspective view volume to an orthogonal view volume to further standardize the camera representation
  – Convert all projections into orthogonal projections by distorting points in three-space (actually four-space, because we include the homogeneous coordinate w)
  – Distort objects using a transformation matrix

Projection Normalization
Building a transformation matrix: how do we build a matrix that
- Warps any view volume to the canonical orthographic view volume
- Permits rendering with an orthographic camera?
All scenes are then rendered with an orthographic camera.

Projection Normalization - Ortho
Normalizing orthographic cameras:
- Not all orthographic cameras define viewing volumes of the right size and location (the canonical view volume)
- The transformation must map the volume's extents [x_min, x_max], [y_min, y_max], [z_min, z_max] to [-1, 1] in each dimension

Projection Normalization - Ortho
Two steps:
- Translate the center to (0, 0, 0)
  – Move x by -(x_max + x_min) / 2, and similarly for y and z
- Scale the volume to a cube with sides = 2
  – Scale x by 2 / (x_max - x_min), and similarly for y and z
- Compose these transformation matrices
  – The resulting matrix maps the orthogonal volume to the canonical one (see the sketch below)
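A C sketch of the composed scale * translate result. This is the same mapping glOrtho performs, except that glOrtho also negates z so the camera looks down -z; the function name is illustrative.

```c
/* Fill N (row-major 4x4) with the orthographic normalization matrix. */
void orthoNormalization(double xmin, double xmax,
                        double ymin, double ymax,
                        double zmin, double zmax,
                        double N[4][4])
{
    /* Each row composes: scale by 2/(max-min) after translating by -(max+min)/2,
       giving a translation entry of -(max+min)/(max-min). */
    double m[4][4] = {
        { 2.0/(xmax-xmin), 0.0, 0.0, -(xmax+xmin)/(xmax-xmin) },
        { 0.0, 2.0/(ymax-ymin), 0.0, -(ymax+ymin)/(ymax-ymin) },
        { 0.0, 0.0, 2.0/(zmax-zmin), -(zmax+zmin)/(zmax-zmin) },
        { 0.0, 0.0, 0.0, 1.0 }
    };
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            N[i][j] = m[i][j];
}
```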

Projection Normalization - Persp
Perspective normalization is trickier.

Perspective Normalization
Consider

    N = | 1  0  0  0 |
        | 0  1  0  0 |
        | 0  0  α  β |
        | 0  0 -1  0 |

After multiplying: p' = Np = [x, y, αz + β, -z]^T

Perspective Normalization
After dividing by w' = -z, p' -> p'' = [-x/z, -y/z, -(α + β/z), 1]^T

Perspective Normalization
Quick check:
- If x = z, then x'' = -1
- If x = -z, then x'' = 1

Perspective Normalization
What about z?
- If z = z_max, then z'' = -(α + β/z_max)
- If z = z_min, then z'' = -(α + β/z_min)
- Solve for α and β such that z_min -> -1 and z_max -> 1
- The resulting z'' is nonlinear, but preserves the ordering of points
  – If z_1 < z_2, then z''_1 < z''_2 (see the sketch below)
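A small C sketch that solves the two constraints above for α and β and prints the resulting nonlinear z mapping. The z_min/z_max values are arbitrary examples, and the closed forms follow from z'' = -(α + β/z) with z''(z_min) = -1 and z''(z_max) = +1.

```c
#include <stdio.h>

int main(void)
{
    double zmin = -10.0, zmax = -1.0;   /* example frustum depth range */

    /* Solving the pair of linear equations in alpha and beta: */
    double alpha = -(zmax + zmin) / (zmax - zmin);
    double beta  = 2.0 * zmin * zmax / (zmax - zmin);

    /* The map hits -1 and +1 at the ends and preserves ordering between. */
    for (double z = zmin; z <= zmax; z += 1.0)
        printf("z = %6.2f  ->  z'' = %8.4f\n", z, -(alpha + beta / z));
    return 0;
}
```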

Perspective Normalization
We did it. Using matrix N:
- The perspective viewing frustum is transformed to a cube
- Orthographic rendering of the cube produces the same image as perspective rendering of the original frustum

Color
Next topic: Color. To understand how to make realistic images, we need a basic understanding of the physics and physiology of vision. Here we step away from the code and math for a bit to talk about basic principles.

Basics of Color
Elements of color:

Basics of Color
Physics:
- Illumination
  – Electromagnetic spectra
- Reflection
  – Material properties
  – Surface geometry and microgeometry (e.g., polished versus matte versus brushed)
Perception:
- Physiology and neurophysiology
- Perceptual psychology

Physiology of Vision
The eye. The retina:
- Rods
- Cones: color!

Physiology of Vision
The center of the retina is a densely packed region called the fovea.
- Cones are much denser here than in the periphery

Physiology of Vision: Cones
Three types of cones:
- L or R, most sensitive to red light (610 nm)
- M or G, most sensitive to green light (560 nm)
- S or B, most sensitive to blue light (430 nm)
Color blindness results from missing cone type(s).

Physiology of Vision: The Retina
Strangely, rods and cones are at the back of the retina, behind a mostly transparent neural structure that collects their response.

Perception: Metamers
A given perceptual sensation of color derives from the stimulus of all three cone types. Identical perceptions of color can thus be caused by very different spectra.

Perception: Other Gotchas
Color perception is also difficult because:
- It varies from person to person
- It is affected by adaptation (stare at a light bulb… don't)
- It is affected by surrounding color

Perception: Relative Intensity
We are not good at judging absolute intensity. Let's illuminate pixels with white light on a scale of 0 to 1.0. Intensity differences of neighboring colored rectangles with intensities:
- 0.10 -> 0.11 (10% change)
- 0.50 -> 0.55 (10% change)
will look the same. We perceive relative intensities, not absolute.

Representing Intensities
Remaining in the world of black and white:
- Use a photometer to obtain the min and max brightness of the monitor
- This is the dynamic range
- Intensity ranges from the min, I_0, to the max, 1.0
How do we represent 256 shades of gray?

Representing Intensities
Equal distribution between min and max fails:
- The relative change near the max is much smaller than near I_0
- Ex: 1/4, 1/2, 3/4, 1
Preserve % change instead:
- Ex: 1/8, 1/4, 1/2, 1
- I_n = r^n I_0, for n > 0
That is: I_0 = I_0, I_1 = r I_0, I_2 = r I_1 = r^2 I_0, …, I_255 = r I_254 = r^255 I_0 (see the sketch below).
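A C sketch that picks r so 256 logarithmic steps exactly span [I_0, 1.0], i.e., I_0 * r^255 = 1 gives r = (1/I_0)^(1/255). The I_0 value here is an assumed monitor minimum, not a measurement.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double I0 = 0.01;                        /* assumed monitor minimum intensity */
    double r  = pow(1.0 / I0, 1.0 / 255.0);  /* constant ratio between steps */

    for (int n = 0; n <= 255; n += 51)       /* print a few of the 256 levels */
        printf("I_%-3d = %.4f\n", n, I0 * pow(r, n));
    return 0;
}
```

Every step changes intensity by the same percentage, matching how we perceive relative differences.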

Dynamic Ranges

    Display          Dynamic Range        Max # of Perceived
                     (max / min illum)    Intensities (r = 1.01)
    CRT              50-200               400-530
    Photo (print)    100                  465
    Photo (slide)    1000                 700
    B/W printout     100                  465
    Color printout   50                   400
    Newspaper        10                   234

Gamma Correction
But most display devices are inherently nonlinear: Intensity = k(voltage)^γ
- i.e., brightness * voltage != (2*brightness) * (voltage/2)
- γ is between 2.2 and 2.5 on most monitors
Common solution: gamma correction
- Apply a post-transformation on intensities to map them to a linear range on the display device (a sketch follows)
- Can have a separate γ for R, G, B
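A minimal C sketch of software gamma correction under these assumptions; γ = 2.2 is a typical value for the monitors described above, not a measured one.

```c
#include <math.h>

/* Map linear intensities in [0, 1] to display values so the monitor's
   Intensity = k * (voltage)^gamma response reproduces the intended ramp. */
void gammaCorrect(double *pixels, int count, double gamma)
{
    for (int i = 0; i < count; i++)
        pixels[i] = pow(pixels[i], 1.0 / gamma);  /* I -> I^(1/gamma) */
}
```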

Gamma Correction
Some monitors perform the gamma correction in hardware (SGIs). Others do not (most PCs). This makes it tough to generate images that look good on both platforms (e.g., images on web pages).