1 CSCE641: Computer Graphics Image Formation Jinxiang Chai

2 Are They Images?

3 Outline Color representation Image representation Pin-hole Camera Projection matrix Plenoptic function

4 Outline Color representation Image representation Pin-hole Camera Projection matrix Plenoptic function

5 Color Representation Why do we use RGB to encode pixel color? Can we use RGB to represent all colors? What are other color representations?

6 Human Vision Model of human vision

7 Human Vision Model of human vision Vision components: Incoming light Human eye

8 Electromagnetic Spectrum Visible light frequencies range between: – Red: 4.3×10^14 Hz (700 nm) – Violet: 7.5×10^14 Hz (400 nm)

9 Visible Light The human eye can see “visible” light with wavelengths between 400 nm and 700 nm

10 Visible Light The human eye can see “visible” light with wavelengths between 400 nm and 700 nm

11 Visible Light The human eye can see “visible” light with wavelengths between 400 nm and 700 nm - Not a strict boundary - Some colors are absent (brown, pink)

12 Spectral Energy Distribution Three different types of lights

13 Spectral Energy Distribution Three different types of lights Can we use spectral energy distribution to represent color?

14 Spectral Energy Distribution Three different types of lights Can we use spectral energy distribution to represent color? - Not really: different distributions may result in the same perceived color (metamers)!

15 Spectral Energy Distribution The six spectra below all look like the same purple to people with normal color vision

16 Color Representation? Why aren't all ranges of the light spectrum perceived? So how should we represent color? A good representation should be: - unique - compact - able to work for as many visible lights as possible

17 Human Vision Photoreceptor cells in the retina: - Rods - Cones

18 Light Detection: Rods and Cones Rods: - 120 million rods in the retina - 1000× more light-sensitive than cones - Discriminate black/white brightness in low illumination - Short-wavelength sensitive Cones: - 6-7 million cones in the retina - Responsible for high-resolution vision - Discriminate colors - Three types of color sensors (64% red, 32% green, 2% blue) - Sensitive to any combination of the three colors

19 Tristimulus of Color Theory Spectral-response functions of each of the three types of cones

20 Tristimulus of Color Theory Spectral-response functions of each of the three types of cones Can we use them to match any spectral color?

21 Tristimulus of Color Theory Spectral-response functions of each of the three types of cones Color matching function based on RGB - any spectral color can be represented as a linear combination of these primary colors

22 Tristimulus of Color Theory Spectral-response functions of each of the three types of cones Color matching function based on RGB - any spectral color can be represented as a linear combination of these primary colors

23 Tristimulus of Color Theory Spectral-response functions of each of the three types of cones Color matching function based on RGB - any spectral color can be represented as a linear combination of these primary colors
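
As a concrete illustration of the matching idea, the sketch below numerically projects a spectral energy distribution onto three color-matching functions to get a single (R, G, B) triple. The bell-shaped curves and the wavelength grid are assumptions made purely for illustration, not the actual cone responses or CIE data.

```python
import numpy as np

# Wavelength samples across the visible range (nm); the grid is an assumption.
wavelengths = np.linspace(400.0, 700.0, 301)
dlam = wavelengths[1] - wavelengths[0]

def bell(lam, center, width):
    """Toy bell-shaped response curve (NOT the real cone or CIE data)."""
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

# Hypothetical color-matching functions, one per primary.
r_bar = bell(wavelengths, 600.0, 40.0)
g_bar = bell(wavelengths, 550.0, 40.0)
b_bar = bell(wavelengths, 450.0, 30.0)

def tristimulus(spectrum):
    """Project a spectral energy distribution onto the three primaries.

    Two different spectra that produce the same (R, G, B) triple are
    metamers: they look identical despite different distributions.
    """
    R = np.sum(spectrum * r_bar) * dlam
    G = np.sum(spectrum * g_bar) * dlam
    B = np.sum(spectrum * b_bar) * dlam
    return R, G, B

flat_white = np.ones_like(wavelengths)   # a flat (white-ish) spectrum
print(tristimulus(flat_white))
```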

24 Tristimulus Color Theory So, color is psychological - Representing color as a linear combination of red, green, and blue is related to the cones, not to physics - Most people have the same cones, but some people don't – the sky might not look blue to them (although they will still call it “blue”) - Many people (mostly men) are colorblind, missing 1, 2, or 3 cone types (and can buy cheaper TVs)

25 Additive and Subtractive Color RGB color model CMY color model Complementary color models: R = 1-C; G = 1-M; B = 1-Y RGB: White = [1 1 1]^T, Green = [0 1 0]^T CMY: White = [0 0 0]^T, Green = [1 0 1]^T
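
A minimal sketch of the complementary relation on the slide, using channel values in [0, 1]; the function names are just for illustration.

```python
def rgb_to_cmy(r, g, b):
    # Complementary (subtractive) model: C = 1 - R, M = 1 - G, Y = 1 - B.
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_rgb(c, m, y):
    # Inverse mapping: R = 1 - C, G = 1 - M, B = 1 - Y.
    return 1.0 - c, 1.0 - m, 1.0 - y

print(rgb_to_cmy(0.0, 1.0, 0.0))  # green in RGB -> (1.0, 0.0, 1.0) in CMY
print(rgb_to_cmy(1.0, 1.0, 1.0))  # white in RGB -> (0.0, 0.0, 0.0) in CMY
```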

26 RGB Color Space RGB cube (axes: red, green, blue) – Easy for devices – Can it represent all the colors? – But not perceptual – Where are brightness, hue, and saturation?

27 Outline Color representation Image representation Pin-hole Camera Projection matrix Plenoptic function

28 Image Representation An image is a 2D rectilinear array of pixels - A width × height array where each entry of the array stores a single pixel

29 Image Representation An image is a 2D rectilinear array of pixels - A width × height array where each entry of the array stores a single pixel A 5×5 picture

30 Image Representation A pixel stores color information Luminance pixels - gray-scale images (intensity images) - 0-1.0 or 0-255 - 8 bits per pixel Red, green, blue pixels (RGB) - Color images - Each channel: 0-1.0 or 0-255 - 24 bits per pixel
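
The array layout described above could be sketched with NumPy as follows; the shapes and dtypes are the usual conventions, not something prescribed by the slides.

```python
import numpy as np

width, height = 5, 5

# Gray-scale (luminance) image: height x width, 8 bits per pixel.
gray = np.zeros((height, width), dtype=np.uint8)
gray[2, 3] = 255                      # one bright pixel at row 2, column 3

# RGB color image: height x width x 3 channels, 24 bits per pixel.
rgb = np.zeros((height, width, 3), dtype=np.uint8)
rgb[2, 3] = (255, 255, 255)           # the same pixel set to white
print(gray.shape, rgb.shape)          # (5, 5) (5, 5, 3)
```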

31 Image Representation An image is a 2D rectilinear array of pixels - A width × height array where each entry of the array stores a single pixel - Each pixel stores color information, e.g. white = (255, 255, 255)

32 Outline Color representation Image representation Pin-hole Camera Projection matrix Plenoptic Function

33 How Do We See the World? Let's design a camera: Idea 1: put a piece of film in front of an object Do we get a reasonable picture?

34 Pin-hole Camera Add a barrier to block off most of the rays – This reduces blurring – The opening is known as the aperture – How does this transform the image?

35 Camera Obscura The first camera –Known to Aristotle –Depth of the room is the focal length –Pencil of rays – all rays through a point

36 Camera Obscura How does the aperture size affect the image?

37 Shrinking the Aperture Why not make the aperture as small as possible? –Less light gets through –Diffraction effects…

38 Shrink the Aperture: Diffraction A diffuse circular disc appears!

39 Shrink the Aperture

40 The Reason for Lenses

41 Adding A Lens A lens focuses light onto the film – There is a specific distance at which objects are “in focus”; other points project to a “circle of confusion” in the image – Changing the shape of the lens changes this distance
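
For reference, the relation behind that “specific distance” is the usual thin-lens equation (not written out on the slide):

\[ \frac{1}{z_o} + \frac{1}{z_i} = \frac{1}{f} \]

where z_o is the object distance, z_i the distance from the lens to the film, and f the focal length; points at other depths focus in front of or behind the film and show up as a circle of confusion.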

42 Changing Lenses 28 mm, 50 mm, 70 mm, 210 mm

43 Outline Color representation Image representation Pin-hole Camera Projection matrix Plenoptic Function

44 Projection Matrix What’s the geometric relationship between 3D objects and 2D images?

45 Modeling Projection: 3D->2D The coordinate system –We will use the pin-hole model as an approximation –Put the optical center (Center Of Projection) at the origin –Put the image plane (Projection Plane) in front of the COP –The camera looks down the negative z axis

46 Modeling Projection: 3D->2D Projection equations –Compute intersection with PP of ray from (x,y,z) to COP –Derived using similar triangles (on board)

47 Modeling Projection: 3D->2D Projection equations –Compute intersection with PP of ray from (x,y,z) to COP –Derived using similar triangles (on board) –We get the projection by throwing out the last coordinate:
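
The slides derive these equations on the board; with the convention above (COP at the origin, image plane at z = -d, camera looking down the negative z axis), the standard similar-triangles result is:

\[ (x, y, z) \;\rightarrow\; \left(-d\,\frac{x}{z},\; -d\,\frac{y}{z},\; -d\right) \]

Throwing out the last (constant) coordinate leaves the 2D image point (-d·x/z, -d·y/z). The sign of d here reflects the assumed convention of looking down the negative z axis, not necessarily the slide's exact notation.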

48 Homogeneous Coordinates Is this a linear transformation? – No: division by z is nonlinear

49 Homogeneous Coordinates Is this a linear transformation? – No: division by z is nonlinear Trick: add one more coordinate (homogeneous image coordinates, homogeneous scene coordinates)

50 Homogeneous Coordinates Is this a linear transformation? – No: division by z is nonlinear Trick: add one more coordinate (homogeneous image coordinates, homogeneous scene coordinates) Converting from homogeneous coordinates
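
For reference (the formulas on this slide are figures in the original deck), the standard homogeneous-coordinate trick is:

\[ (x, y) \Rightarrow \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \ \text{(homogeneous image coordinates)}, \qquad (x, y, z) \Rightarrow \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \ \text{(homogeneous scene coordinates)} \]

and converting back from homogeneous coordinates divides by the last coordinate:

\[ \begin{bmatrix} x \\ y \\ w \end{bmatrix} \Rightarrow \left(\frac{x}{w},\; \frac{y}{w}\right) \]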

51 Perspective Projection Projection is a matrix multiply using homogeneous coordinates, followed by a divide by the third coordinate

52 Perspective Projection Projection is a matrix multiply using homogeneous coordinates, followed by a divide by the third coordinate This is known as perspective projection – The matrix is the projection matrix – Can also be formulated as a 4x4, with a divide by the fourth coordinate
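
Written out under the convention used earlier (image plane at z = -d), the projection matrix the slide refers to is usually given as a 3x4:

\[ \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1/d & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} x \\ y \\ -z/d \end{bmatrix} \;\Rightarrow\; \left(-d\,\frac{x}{z},\; -d\,\frac{y}{z}\right) \]

The 4x4 version keeps a third row for z and divides by the fourth coordinate instead; the exact entries depend on the sign convention, so treat this as a sketch rather than the slide's exact matrix.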

53 Perspective Effects Distant objects become smaller The distortion of items when viewed at an angle (spatial foreshortening)

54 Perspective Effects Distant objects become smaller The distortion of items when viewed at an angle (spatial foreshortening)

55 Perspective Effects Distant objects become smaller The distortion of items when viewed at an angle (spatial foreshortening)

56 Orthographic Projection Special case of perspective projection – Distance from the COP to the PP is infinite – Also called “parallel projection” – What's the projection matrix?
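
Under the same homogeneous-coordinate convention, one standard form of the orthographic (parallel) projection matrix is:

\[ \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \;\Rightarrow\; (x,\; y) \]

i.e. the z coordinate is simply dropped, so parallel lines in the scene stay parallel in the image.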

57 Weak-perspective Projection Scaled orthographic projection - object size is small compared to the average distance z_0 from the camera (e.g., σ_z < z_0/20) - d/z ≈ d/z_0 (constant)

58 Weak-perspective Projection Scaled orthographic projection - object size is small compared to the average distance z_0 from the camera (e.g., σ_z < z_0/20) - d/z ≈ d/z_0 (constant) Projection matrix, with λ = d/z_0: \[ \begin{bmatrix} \lambda & 0 & 0 & 0 \\ 0 & \lambda & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]

59 View Transformation From world coordinates to camera coordinates

60 View Transformation From world coordinates to camera coordinates
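
In homogeneous coordinates this change of frame is commonly written as a rigid-body transform; R (a 3x3 rotation) and t (a translation) are the usual symbols, assumed here since the slide's formula is a figure:

\[ \mathbf{p}_{\text{camera}} = \begin{bmatrix} R & \mathbf{t} \\ \mathbf{0}^{T} & 1 \end{bmatrix} \mathbf{p}_{\text{world}} \]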

61 Viewport Transformation From projection coordinates to image coordinates, with the image origin at (u_0, v_0)

62 Viewport Transformation From projection coordinates to image coordinates: \[ \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} s_x & 0 & u_0 \\ 0 & -s_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \]

63 Putting It Together From world coordinates to image coordinates: apply the view transformation, then the perspective projection, then the viewport projection (the matrix above)

64 Putting It Together From world coordinates to image coordinates: apply the view transformation, then the perspective projection, then the viewport projection - Viewport projection: image resolution, aspect ratio - Perspective projection: focal length - View transformation: the relative position & orientation between camera and objects
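
A minimal numerical sketch of this composition, assuming made-up values for the focal length, pixel scales, image center, and camera pose (none of these numbers come from the slides):

```python
import numpy as np

d = 1.0                      # assumed focal length (distance COP -> image plane)
s_x, s_y = 500.0, 500.0      # assumed pixel scales
u0, v0 = 320.0, 240.0        # assumed image center

# Viewport projection: projection-plane coordinates -> pixel coordinates.
viewport = np.array([[s_x,  0.0, u0],
                     [0.0, -s_y, v0],
                     [0.0,  0.0, 1.0]])

# Perspective projection (3x4), image plane at z = -d.
projection = np.array([[1.0, 0.0,  0.0,   0.0],
                       [0.0, 1.0,  0.0,   0.0],
                       [0.0, 0.0, -1.0/d, 0.0]])

# View transformation: world -> camera (here an identity rotation and a
# translation that pushes the scene 5 units down the negative z axis).
view = np.eye(4)
view[:3, :3] = np.eye(3)
view[:3, 3] = np.array([0.0, 0.0, -5.0])

def world_to_pixel(p_world):
    """Map a 3D world point to pixel coordinates (u, v)."""
    p_h = np.append(p_world, 1.0)             # homogeneous world point
    uvw = viewport @ projection @ view @ p_h  # (u*w, v*w, w)
    return uvw[:2] / uvw[2]                   # divide by the last coordinate

print(world_to_pixel(np.array([1.0, 0.5, 0.0])))
```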

65 Camera Parameters The full mapping from world coordinates to image coordinates has 11 parameters in total

66 Camera Parameters 11 parameters in total: intrinsic camera parameters (5, from the perspective and viewport projections) and extrinsic camera parameters (6, from the view transformation)

67 How about this image?

68 Outline Color representation Image representation Pin-hole Camera Projection matrix Plenoptic function

69 Plenoptic Function What is the set of all things that we can ever see? - The Plenoptic Function (Adelson & Bergen)

70 Plenoptic Function What is the set of all things that we can ever see? - The Plenoptic Function (Adelson & Bergen) Let’s start with a stationary person and try to parameterize everything that he can see…

71 Plenoptic Function Any ray seen from a single view point can be parameterized by (θ,φ).

72 Color Image P(θ,φ,λ) is the intensity of light – Seen from a single view point (θ,φ) – At a single time t – As a function of wavelength λ

73 Dynamic Scene P(θ,φ,λ,t) is the intensity of light – Seen from a single view point (θ,φ) – Over time t – As a function of wavelength λ

74 Moving around A Static Scene P(x,y,z,θ,φ,λ) is the intensity of light – Seen from an arbitrary view point (θ,φ) – At an arbitrary location (x,y,z) – At a single time t – As a function of wavelength λ

75 Moving around A Dynamic Scene P(x,y,z,θ,φ,λ,t) is the intensity of light – Seen from an arbitrary view point (θ,φ) – At an arbitrary location (x,y,z) – Over time t – As a function of wavelength λ

76 Plenoptic Function P(x,y,z,θ,φ,λ,t) can reconstruct every possible view, at every moment, from every position, at every wavelength. It contains every photograph, every movie, everything that anyone has ever seen: it completely captures our visual reality! An image is a 2D sample of the plenoptic function!
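
A toy sketch of the idea that an image is a 2D slice of the plenoptic function: fix the viewpoint, time, and wavelength, and sample over the two ray angles. The scene function here is an arbitrary placeholder, purely for illustration.

```python
import numpy as np

def plenoptic(x, y, z, theta, phi, lam, t):
    """Placeholder P(x, y, z, theta, phi, lambda, t): intensity of the ray
    seen from (x, y, z) in direction (theta, phi) at wavelength lam, time t.
    A real scene would be measured or rendered; this is a toy function."""
    return 0.5 + 0.5 * np.sin(3 * theta) * np.cos(2 * phi + t) * np.exp(-lam / 700.0)

# An image is a 2D sample of P: fix position, wavelength, and time,
# and vary only the viewing direction (theta, phi).
thetas = np.linspace(-0.5, 0.5, 64)   # horizontal field of view (radians)
phis = np.linspace(-0.4, 0.4, 48)     # vertical field of view (radians)
image = np.array([[plenoptic(0.0, 0.0, 0.0, th, ph, 550.0, 0.0)
                   for th in thetas] for ph in phis])
print(image.shape)  # (48, 64): one snapshot of the plenoptic function
```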

77 How to “Capture” Orthographic Images Rebinning rays forms orthographic images

78 How to “Capture” Orthographic Images Rebinning rays forms orthographic images

79 How to “Capture” Orthographic Images Rebinning rays forms orthographic images

80 How to “Capture” Orthographic Images Rebinning rays forms orthographic images

81 How to “Capture” Orthographic Images Rebinning rays forms orthographic images

82 Multi-perspective Images Rebinning rays forms multi-perspective images

83 Multi-perspective Images Rebinning rays forms multi-perspective images

84 Multi-perspective Images

85 Outline Color representation Image representation Pin-hole Camera Projection matrix Plenoptic Function

86 They Are All Images

87 Next lecture Image sampling theory Fourier Analysis

