Instructor: Mircea Nicolescu Lecture 3 CS 485 / 685 Computer Vision

2 Image Formation and Representation: Cameras, The pinhole model, Lenses, The human eye, Image digitization

3 Cameras First photograph due to Niépce (1816). First on record, shown below (1822).

4 Evolution Animal eye: a (really) long time ago. Pinhole perspective projection: Brunelleschi, 15th century. Camera obscura: 16th century. Photographic camera: Niépce, 1816.

5 A Simple Model of Image Formation The scene is illuminated by a single source; the scene reflects radiation towards the camera; the camera senses it via chemicals on film or in digital sensors

6 Image Formation Two parts to the image formation process: (1) The geometry, which determines where in the image plane the projection of a scene point will be located. (2) The physics of light (radiometry), which determines the brightness of a point in the image plane. Simple model: f(x,y) = i(x,y) · r(x,y), where i is the illumination and r is the reflectance.
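Not part of the original slides: a minimal Python sketch of this simple model, with made-up illumination and reflectance arrays, showing the observed image as the pointwise product f(x,y) = i(x,y) · r(x,y).

```python
import numpy as np

# Hypothetical 4x4 reflectance map r(x,y) in [0, 1] (surface albedo)
r = np.array([[0.2, 0.2, 0.8, 0.8],
              [0.2, 0.2, 0.8, 0.8],
              [0.5, 0.5, 0.5, 0.5],
              [0.9, 0.9, 0.1, 0.1]])

# Hypothetical illumination i(x,y): brighter on the left, dimmer on the right
i = np.tile(np.linspace(1.0, 0.4, 4), (4, 1))

# Simple image formation model: f(x,y) = i(x,y) * r(x,y)
f = i * r
print(f)
```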

Let’s Design a Camera Put a piece of film in front of an object – do we get a reasonable image? −Blurring – need to be more selective! 7

Let’s Design a Camera Add a barrier with a small opening (aperture) to block off most of the rays −Reduces blurring 8

9 “Pinhole” Camera Model The simplest device to form an image of a 3D scene on a 2D surface. Rays of light pass through a "pinhole" (the center of projection) and form an inverted image of the object on the image plane. Perspective projection: a scene point (X, Y, Z) maps to the image point (x, y) = (f·X/Z, f·Y/Z), where f is the focal length (ignoring the sign flip due to image inversion).
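As an illustration of the projection equations above (not from the slides: the helper name project_pinhole, the focal length, and the points are assumptions), the sketch below maps camera-frame 3D points to the image plane, ignoring the sign flip caused by image inversion.

```python
import numpy as np

def project_pinhole(points_3d, f):
    """Project camera-frame points (X, Y, Z) to image coordinates
    (x, y) = (f*X/Z, f*Y/Z). Assumes Z > 0 (points in front of the camera)."""
    points_3d = np.asarray(points_3d, dtype=float)
    X, Y, Z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    return np.stack([f * X / Z, f * Y / Z], axis=1)

f = 0.05                       # assumed 50 mm focal length, in meters
pts = [[1.0, 0.5, 4.0],        # a point 4 m in front of the camera
       [1.0, 0.5, 8.0]]        # same X, Y but twice as far -> half the image size
print(project_pinhole(pts, f))
```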

What Is the Effect of Aperture Size? Large aperture: light from the source spreads across the image (not properly focused), making it blurry! Small aperture: reduces blurring but: (i) it limits the amount of light entering the camera and (ii) causes light diffraction. 10

Example: Varying Aperture Size 11

Example: Varying Aperture Size What happens if we keep decreasing the aperture size? When light passes through a small hole, it does not travel in a straight line and is scattered in many directions (diffraction). SOLUTION: refraction (i.e., a lens) 12

Refraction Bending of a wave when entering a medium where its speed is different. 13

14 The Reason for Lenses Lens cameras: gather more light from each scene point Pinhole cameras: typically dark, because a very small set of rays from a particular scene point reaches the image plane

Lenses Lenses duplicate pinhole geometry without resorting to undesirably small apertures −Gather all the light radiating from an object point towards the lens’s finite aperture −Bring the light into focus at a single distinct image point (via refraction) 15

Lenses Lenses improve image quality, leading to sharper images. 16

17 Properties of the “Thin” Lens Light rays passing through the lens center are not deviated. Light rays passing through a point farther from the center are deviated more.

18 Properties of the “Thin” Lens All parallel rays converge to a single point. For rays perpendicular to the lens (i.e., parallel to the optical axis), that point is called the focal point.

19 Properties of the “Thin” Lens The plane parallel to the lens at the focal point is called the focal plane. The distance between the lens and the focal plane is called the focal length (f).

20 Thin Lens Equation Assume an object at distance u from the lens plane; its image forms at distance v on the other side of the lens (f is the focal length).

21 Thin Lens Equation Using similar triangles (object height y, image height y'): y'/y = v/u

22 Thin Lens Equation Using similar triangles again: y'/y = (v − f)/f

23 Thin Lens Equation Combining the two equations gives the thin lens equation: 1/u + 1/v = 1/f
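A small worked example (the numbers and the helper name image_distance are made up for illustration): solving 1/u + 1/v = 1/f for the image distance v at several object distances.

```python
def image_distance(u, f):
    """Solve the thin lens equation 1/u + 1/v = 1/f for v.
    u: object distance, f: focal length (same units); requires u > f."""
    return 1.0 / (1.0 / f - 1.0 / u)

f = 0.05                                 # assumed 50 mm lens
for u in (0.5, 1.0, 10.0, 1000.0):       # object distances in meters
    v = image_distance(u, f)
    print(f"u = {u:7.1f} m  ->  v = {v * 1000:.2f} mm")
# As u grows, v approaches f: distant objects focus on the focal plane.
```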

24 Thin Lens Equation For a fixed image-plane distance v, the thin lens equation 1/u + 1/v = 1/f implies that only points at distance u from the lens are “in focus”. Other points project to a “blur circle” or “circle of confusion” in the image (blurring occurs).

25 Thin Lens Equation When objects move far away from the camera, u → ∞ and 1/u → 0, so v → f: they come into focus on the focal plane.

Depth of Field The range of depths over which the world is approximately sharp (in focus) 26

How Can We Control Depth of Field? The size of the blur circle is proportional to the aperture size 27
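To make the proportionality concrete, here is a rough sketch based on similar triangles (the formula, the numbers, and the helper name blur_circle_diameter are illustrative assumptions, not taken from the slides): a point whose sharp image lies at distance v_point behind the lens spreads over a circle of diameter about A·|v_sensor − v_point| / v_point on a sensor placed at distance v_sensor, where A is the aperture diameter.

```python
def blur_circle_diameter(A, u_point, u_focus, f):
    """Approximate blur-circle diameter for a point at distance u_point
    when the camera is focused at distance u_focus.
    A: aperture diameter, f: focal length (all in the same units)."""
    v_sensor = 1.0 / (1.0 / f - 1.0 / u_focus)   # where the sensor sits
    v_point = 1.0 / (1.0 / f - 1.0 / u_point)    # where the point focuses
    return A * abs(v_sensor - v_point) / v_point

f = 0.05                                  # assumed 50 mm lens
for A in (f / 2, f / 8):                  # aperture diameters at f/2 and f/8
    c = blur_circle_diameter(A, u_point=3.0, u_focus=2.0, f=f)
    print(f"aperture {A * 1000:.1f} mm -> blur circle {c * 1e6:.0f} um")
# Shrinking the aperture by 4x shrinks the blur circle by the same factor.
```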

28 How Can We Control Depth of Field? Changing the aperture size (controlled by the diaphragm) affects the depth of field −Larger aperture: decreases the depth of field, but the exposure time must be decreased −Smaller aperture: increases the range in which objects are approximately in focus, but the exposure time must be increased

Varying Aperture Size Large aperture = small DOF; small aperture = large DOF 29

Another Example Large aperture = small DOF 30

31 Field of View (Zoom) The cone of viewing directions of the camera; inversely proportional to the focal length f
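One common way to quantify this (the formula and the numbers below are an illustration, not spelled out on the slide): for a sensor of width d, the horizontal field of view is roughly 2·arctan(d / 2f), which shrinks as the focal length f grows.

```python
import math

def field_of_view_deg(sensor_width, f):
    """Horizontal field of view (degrees) for a sensor width and focal length
    given in the same units."""
    return math.degrees(2.0 * math.atan(sensor_width / (2.0 * f)))

d = 36.0                                  # assumed full-frame sensor width, mm
for f in (24.0, 50.0, 200.0):             # wide-angle, standard, telephoto
    print(f"f = {f:5.0f} mm  ->  FOV = {field_of_view_deg(d, f):5.1f} deg")
```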

Field of View (Zoom) 32

Reducing Perspective Distortion (by varying distance and focal length) Small f (large FOV), camera close to the car, versus large f (small FOV), camera far from the car: less perspective distortion! 33

Same Effect for Faces (standard, wide-angle, telephoto) Practical significance: perspective projection can be approximated with a simpler model when using a telephoto lens to view a distant object that has a relatively small range of depths. Less perspective distortion! 34

Approximating an “Affine” Camera Center of projection is at infinity! 35
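A small numeric sketch of why this approximation works for distant objects with a small depth range (the scene values and the weak-perspective form used here are illustrative assumptions): when the depth variation is small compared with the average depth Z0, dividing by Z is nearly the same for every point, so projection reduces to a uniform scaling by f/Z0.

```python
import numpy as np

f = 0.05                      # assumed focal length, meters
Z0 = 20.0                     # average depth of a distant, shallow object

# Made-up object points with a small depth range around Z0
pts = np.array([[1.0, 0.2, 19.8],
                [1.2, 0.1, 20.1],
                [0.9, 0.3, 20.3]])

persp = f * pts[:, :2] / pts[:, 2:3]      # true perspective projection
affine = (f / Z0) * pts[:, :2]            # weak-perspective (affine) approximation
print(np.abs(persp - affine).max())       # small residual at this depth
```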

Real Lenses All but the simplest cameras contain lenses which are actually comprised of several "lens elements" Each element aims to direct the path of light rays such that they recreate the image as accurately as possible on the digital sensor 36

Lens Flaws: Chromatic Aberration The lens has a different refractive index for different wavelengths Could cause color fringing: −the lens cannot focus all the colors at the same point. 37

Chromatic Aberration – Example 38

Lens Flaws: Radial Distortion Straight lines become distorted as we move farther away from the center of the image Deviations are most noticeable for rays that pass through the edge of the lens 39

40 Lens Flaws: Radial Distortion (examples: no distortion, pincushion, barrel)

Lens Flaws: Tangential Distortion Lens is not exactly parallel to the imaging plane 41

42 Lens Flaws: Vignetting

The Human Eye Functions similarly to a camera: aperture (pupil), lens, mechanism for focusing and surface for registering images (retina) 43

The Human Eye In a camera, focusing at various distances is achieved by varying the distance between the lens and the imaging plane In the human eye, the distance between the lens and the retina is fixed (14mm to 17mm) 44

The Human Eye Focusing is achieved by varying the shape of the lens (flattening or thickening) 45

The Human Eye Retina contains light sensitive cells that convert light energy into electrical impulses that travel through nerves to the brain Brain interprets the electrical signals to form images 46

The Human Eye Two kinds of light-sensitive cells: rods and cones (unevenly distributed) Cones (6-7 million) are responsible for all color vision and are present throughout the retina, but are concentrated toward the center of the field of vision at the back of the retina Fovea – special area: −Mostly cones −Detail, color sensitivity, and resolution are highest 47

The Human Eye Three different types of cones; each type has a special pigment that is sensitive to wavelengths of light in a certain range: −Short (S) corresponds to blue −Medium (M) corresponds to green −Long (L) corresponds to red Ratio of L to M to S cones: −approx. 10:5:1 Almost no S cones in the center of the fovea 48

The Human Eye Rods (120 million) are more sensitive to light than cones but cannot discern color −Primary receptors for night vision and detecting motion −Large amount of light overwhelms them, and they take a long time to “reset” and adapt to the dark again −Once fully adapted to darkness, the rods are 10,000 times more sensitive to light than the cones 49

Digital Cameras A digital camera replaces film with a sensor array −Each cell in the array is a light-sensitive diode that converts photons to electrons −Two common types −Charge Coupled Device (CCD) −Complementary Metal Oxide Semiconductor (CMOS) 50

Digital Cameras 51

CCD Cameras CCDs move photogenerated charge from pixel to pixel and convert it to voltage at an output node An analog-to-digital converter (ADC) then turns each pixel's value into a digital value 52

CMOS Cameras CMOS sensors convert charge to voltage inside each element Use several transistors at each pixel to amplify and move the charge using more traditional wires The CMOS signal is digital, so it needs no ADC 53

Image Digitization Sampling: measuring the value of an image at a finite number of points Quantization: representing the measured value (voltage) at each sampled point by an integer 54
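A minimal sketch of both steps on a synthetic image (the ramp image, the sampling factor, and the bit depth are made-up choices): sampling keeps every k-th measurement in each direction, and quantization maps each measured value to one of 2^b integer levels.

```python
import numpy as np

# Synthetic "continuous" image: a smooth ramp with values in [0, 1)
x = np.linspace(0.0, 1.0, 256, endpoint=False)
img = np.outer(x, x)

# Sampling: keep every k-th sample in each direction
k = 4
sampled = img[::k, ::k]                   # 256x256 -> 64x64

# Quantization: map each value to an integer using b bits per pixel
b = 3
levels = 2 ** b
quantized = np.floor(sampled * levels).astype(np.uint8)   # values 0..7

print(sampled.shape, quantized.min(), quantized.max())
```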

Image Digitization Sampling Quantization 55

What Is an Image? 8 bits/pixel

57 What Is an Image? We can think of a (grayscale) image as a function f from R² to R (or a 2D signal): −f(x,y) gives the intensity at position (x,y) −A digital image is a discrete (sampled, quantized) version of this function

Image Sampling - Example original image sampled by a factor of 2 sampled by a factor of 4 sampled by a factor of 8 Images have been resized for easier comparison 58

Image Quantization - Example 256 gray levels (8 bits/pixel) 32 gray levels (5 bits/pixel) 16 gray levels (4 bits/pixel) 8 gray levels (3 bits/pixel) 4 gray levels (2 bits/pixel) 2 gray levels (1 bit/pixel) 59

Color Images Color images are comprised of three color channels – red, green, and blue – which combine to create most of the colors we can see 60
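A toy example (the channel values are made up): three single-channel arrays stacked into one RGB image.

```python
import numpy as np

h, w = 2, 3
red   = np.full((h, w), 200, dtype=np.uint8)   # hypothetical channel values
green = np.full((h, w), 120, dtype=np.uint8)
blue  = np.full((h, w),  30, dtype=np.uint8)

color = np.stack([red, green, blue], axis=-1)  # shape (h, w, 3), RGB order
print(color.shape, color[0, 0])                # -> (2, 3, 3) [200 120  30]
```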

Color Images 61

Color Sensing in Cameras: Prism Requires three chips, one per channel (CCD(R), CCD(G), CCD(B)), and precise alignment 62

63 Color Sensing in Cameras: Color Filter Array (Bayer grid) In traditional systems, color filters are applied to a single layer of photodetectors in a tiled mosaic pattern. Why more green? The human luminance sensitivity function peaks in the green part of the spectrum, so green is sampled twice as densely.

64 Color Sensing in Cameras: Color Filter Array Demosaicing (interpolation): full red, green, and blue channels are interpolated from the mosaic and combined into the output image
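A deliberately crude sketch of the idea (the RGGB layout is one common Bayer arrangement; the mosaic values and the helper name demosaic_rggb_nearest are made up, and real cameras interpolate far more carefully): each 2x2 block of the mosaic contributes one red, two green, and one blue sample, which are combined and replicated to form an RGB output.

```python
import numpy as np

def demosaic_rggb_nearest(mosaic):
    """Very crude demosaicing of an RGGB Bayer mosaic (even height and width):
    each 2x2 block yields one RGB value, replicated over the block.
    Real demosaicing uses bilinear or edge-aware interpolation instead."""
    H, W = mosaic.shape
    out = np.empty((H, W, 3), dtype=float)
    r = mosaic[0::2, 0::2]                                 # red sites
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0    # two green sites
    b = mosaic[1::2, 1::2]                                 # blue sites
    for c, plane in enumerate((r, g, b)):
        out[:, :, c] = np.kron(plane, np.ones((2, 2)))     # replicate over 2x2
    return out

# Tiny synthetic mosaic (4x4); the values are made up for illustration
mosaic = np.array([[200,  90, 210,  95],
                   [ 85,  30,  80,  35],
                   [190, 100, 205, 110],
                   [ 95,  25,  90,  20]], dtype=float)
print(demosaic_rggb_nearest(mosaic)[0, 0])                 # -> [200.   87.5  30. ]
```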

Color Sensing in Cameras: Foveon X3 CMOS sensor; takes advantage of the fact that red, green, and blue light penetrate silicon to different depths 65