© University of Wisconsin, CS559 Spring 2002


Last Time (1/24/02)
- Course introduction
- Assignment 1 (not graded, but necessary)
  - View is part of Project 1
- Image and film basics

Today
- More on digital images
- Introduction to color
- Homework 1

Digital Images
- Computers work with discrete pieces of information
- How do we digitize a continuous image?
  - Break the continuous space into small areas: pixels ("picture elements")
  - Use a single value for each pixel - the pixel value (no color, yet)
  - The image is then no longer continuous in space or intensity
- This process is fraught with danger, as we shall see
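The two discretization steps above (sample in space, then quantize in intensity) can be sketched in a few lines. The "ideal image" function, grid size, and level count below are made up for illustration.

```python
import numpy as np

# A hypothetical "ideal image": a continuous intensity function over [0,1) x [0,1).
def ideal_image(x, y):
    return 0.5 + 0.5 * np.sin(8 * np.pi * x) * np.cos(8 * np.pi * y)

def digitize(f, resolution, levels):
    """Sample f at pixel centers (discrete in space), then quantize
    each sample to one of `levels` values (discrete in intensity)."""
    centers = (np.arange(resolution) + 0.5) / resolution
    xs, ys = np.meshgrid(centers, centers)
    samples = f(xs, ys)                       # now spatially discrete
    step = 1.0 / (levels - 1)
    return np.round(samples / step) * step    # now intensity-discrete

pixels = digitize(ideal_image, resolution=64, levels=4)
print(pixels.shape)       # (64, 64)
print(np.unique(pixels))  # at most 4 distinct pixel values remain
```

Both steps lose information, which is exactly the "danger" the slide warns about: detail between pixel centers and intensity differences smaller than one quantization step are gone for good.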

Digital Cameras
- A CCD stores a charge each time a photon hits it
  - "Bins" have discrete area, one per pixel - spatially discrete
- The camera "reads" the charges out of the bins at some frequency
  - The charges are converted to discrete values - discrete in intensity
- The values are stored in memory as the image
- We still have issues of motion blur, depth of field, dynamic range, etc.

Alternative Imaging Methods
- We obviously don't have to use a digital camera to generate a digital image
- You can write the pixels directly, use a paint program, describe a 3D scene and have a computer render it into an image, ...
- This course is all about these other methods
- However, it still helps to think of any digital image as a sample of some ideal image

Discretization Issues
- We can only store a finite number of pixels
  - Resolution: pixels per inch, or dpi (dots per inch, from printers)
  - Storage space grows with the square of resolution: 600dpi has 4× more pixels than 300dpi
- We can only store a finite range of intensity values
  - Typically referred to as depth - the number of bits per pixel
  - Directly related to the number of colors available
  - Also concerned with the minimum and maximum intensity - the dynamic range
  - Both film and digital cameras have highly limited dynamic range
- The big question is: what is enough resolution and enough depth?
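The square-law storage cost in the first bullet can be checked directly. The 8.5×11-inch page size here is just an assumed example.

```python
# Storage cost of a scanned page at a given resolution and depth.
def image_bytes(width_in, height_in, dpi, bits_per_pixel):
    pixels = (width_in * dpi) * (height_in * dpi)  # grows with dpi squared
    return pixels * bits_per_pixel // 8

page_300 = image_bytes(8.5, 11, 300, 8)
page_600 = image_bytes(8.5, 11, 600, 8)
print(page_600 / page_300)  # 4.0 - doubling the dpi quadruples storage
```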

Perceptual Issues
- Humans can discriminate about half a minute of arc
  - At the fovea, so only in the center of view (20/20 vision)
  - At 1m, that is about 0.2mm (the "dot pitch" of monitors)
  - Limits the required number of pixels
- Humans can discriminate about 8 bits of intensity
  - Based on "just noticeable difference" experiments
  - Limits the required depth for typical dynamic ranges
  - Actually, it's 9 bits, but 8 is far more convenient
- BUT: while perception can guide resolution requirements for display, manipulating images may require much higher resolution

Intensity Perception
- Humans are actually tuned to the ratio of intensities, not their absolute difference
  - Going from a 50 to a 100 watt light bulb looks like the same step as going from 100 to 200 watts
- So, if we only have 4 intensities between 0 and 1, we should choose 0, 0.25, 0.5, and 1
- Most computer graphics ignores this and uses 0, 0.33, 0.66, and 1, giving poorer perceptible intensity resolution at low light levels and better resolution at high light levels

Dynamic Range
- Image depth refers to the number of bits available, but not how those bits map onto intensities
- We can use those bits to represent a large range at low resolution, or a small range at high resolution
- Common display devices can only show a limited dynamic range, so typically we fix the range at that of the display device and choose high resolution
(Figure: within the set of all possible intensities, a low-range/high-resolution mapping vs. a high-range/low-resolution mapping)

More Dynamic Range
- Real scenes have very high and very low intensities
- Humans can see contrast at very low and very high light levels
  - We can't see all levels all the time - adaptation adjusts our sensitivity
  - Still, the range is high even at one adaptation level
- Film has low dynamic range, roughly 100:1; monitors are even worse
- There are many ways to deal with the problem, but no great solution - way beyond the scope of this course

Display on a Monitor
- When images are created, a linear mapping between pixel values and intensity is assumed
  - For example, doubling the pixel value should double the displayed intensity
- Monitors, however, do not work that way
  - On analog monitors, the pixel value is converted to a voltage, which controls the intensity of the monitor pixels - but the voltage-to-intensity mapping is not linear
  - Digital monitors have the same problem; they just do the pixel-to-intensity conversion differently
- The outcome: a linear intensity scale in memory does not look linear on a monitor!

Gamma Control
- The mapping from voltage to displayed intensity is approximately a power function: I = V^γ
- To correct the problem, we pass the pixel values through an inverse gamma function before sending them to the monitor
  - This process is called gamma correction
- The parameter γ is controlled by the user, and should be matched to a particular monitor
  - Typical values are between 2.2 and 2.5
- The mapping can be done in hardware or software
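A minimal sketch of the correction, assuming the display follows I = V^γ with γ = 2.2: pre-distorting pixel values with the inverse power makes the round trip linear again.

```python
GAMMA = 2.2  # assumed monitor gamma; typical values are 2.2-2.5

def gamma_correct(pixel):
    """Pre-distort a linear pixel value (in [0, 1]) so the display's
    power-law response cancels out."""
    return pixel ** (1.0 / GAMMA)

def display_response(voltage):
    """What the monitor actually shows for a given drive level."""
    return voltage ** GAMMA

p = 0.5
shown = display_response(gamma_correct(p))
print(round(shown, 6))  # 0.5 - the round trip restores the linear value
```

Without the correction, display_response(0.5) is about 0.22: mid-grey pixels come out much darker than intended, which is the nonlinearity described on the previous slide.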

Some Facts About Color
- So far we have only discussed intensities - so-called achromatic light (black and white)
- Accurate color reproduction is commercially valuable - e.g. Kodak yellow, painting a house
- On the order of 10 color names are widely recognized by English speakers - other languages have fewer or more, but not many more
- Color reproduction problems have increased with the prevalence of digital imaging - e.g. digital libraries of art
- Color consistency is also important in user interfaces - e.g. what you see on the monitor should match the printed version

Light and Color
- The frequency of light determines its "color"
  - Frequency, wavelength, and energy are all related
- We describe incoming light by its spectrum: the intensity of light at each frequency, like a graph of intensity vs. frequency
- We care about wavelengths in the visible spectrum: between the infra-red (700nm) and the ultra-violet (400nm)

White
(Figure: flat spectra, # photons vs. wavelength from 400 to 700nm - white, and a less intense white that appears grey)
- Note that color and intensity are technically two different things
- However, in common usage we use "color" to refer to both - for example, dark red vs. light red
- You will have to use context to extract the meaning

Helium-Neon Laser
(Figure: spectrum with a single narrow spike, # photons vs. wavelength, 400-700nm)
- Lasers emit light at a single wavelength, hence they appear colored in a very "pure" way

Normal Daylight
(Figure: broad daylight spectrum, # photons vs. wavelength, 400-700nm)
- The sky is blue, so what should this look like?

Normal Daylight (continued)
- Note the hump at short wavelengths - the sky is blue
- The other bumps come from solar emission spectra and atmospheric absorption

Tungsten Lightbulb
(Figure: spectrum rising toward long wavelengths, # photons vs. wavelength, 400-700nm)
- Most light sources are not anywhere near white
- It is a major research effort to develop light sources with particular properties

Red Paint
(Figure: reflected spectrum concentrated at long wavelengths, 400-700nm)
- Red paint absorbs green and blue wavelengths and reflects red wavelengths, so the paint appears red

Frequency Response
- Any sensor is defined by its response to a frequency distribution
- Expressed as a graph of sensitivity vs. wavelength, σ(λ)
  - For each unit of energy at the given wavelength, how much voltage (or how many impulses) the sensor produces
- To compute the response, take the integral: R = ∫ σ(λ) E(λ) dλ
  - E(λ) is the incoming energy at the particular wavelength
  - The integral multiplies the amount of energy at each wavelength by the sensitivity at that wavelength, and sums over all wavelengths
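The response integral can be evaluated numerically on a wavelength grid. The Gaussian sensitivity and light spectra below are made up for illustration; they anticipate the "red sensor" example on the next slides.

```python
import numpy as np

# Evaluate R = integral of sigma(lambda) * E(lambda) d(lambda) on a 400-700nm grid.
wavelengths = np.linspace(400, 700, 301)  # nm

# Hypothetical "red" sensor: sensitivity is a bump centered at 600nm.
sensitivity = np.exp(-((wavelengths - 600) / 40.0) ** 2)

# Two hypothetical lights: a red-heavy and a blue-heavy spectrum.
red_light  = np.exp(-((wavelengths - 620) / 30.0) ** 2)
blue_light = np.exp(-((wavelengths - 450) / 30.0) ** 2)

def response(sigma, energy, lam):
    """Multiply energy by sensitivity at each wavelength, then sum."""
    dlam = lam[1] - lam[0]
    return np.sum(sigma * energy) * dlam

print(response(sensitivity, red_light, wavelengths) >
      response(sensitivity, blue_light, wavelengths))  # True
```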

A "Red" Sensor
(Figure: sensitivity vs. wavelength, 400-700nm, peaked at long wavelengths)
- This sensor will respond strongly to red light, a little to green light, and not at all to blue light

The "Red" Sensor Response
(Figure: the sensor's sensitivity σ(λ) multiplied against two spectra, # photons E(λ) over 400-700nm - a red light gives a high response, a blue light a low response)

Changing Response
- How can you take a "white" sensor and change it into a "red" sensor? Hint: think filters
- Can you change a "red" sensor into a "white" sensor?
- Assume for the moment that your eye is a "white" sensor. How is it that you can see a "black light" (UV) shining on a surface?
  - Such surfaces are fluorescent
  - Your eye isn't really a white sensor - it just approximates one

Seeing in Color
- The eye contains rods and cones
- Rods work at low light levels and do not see color
  - That is, their response depends only on how many photons arrive, not on their wavelength
- Cones come in three types (experimentally and genetically proven), each of which responds in a different way to frequency distributions

Color Receptors
- Each cone type has a different sensitivity curve, experimentally determined in a variety of ways
  - For instance, the L-cone responds most strongly to red light
- "Response" in your eye means nerve cell firings - and how you interpret those firings is not so simple...

Color Perception
- How your brain interprets nerve impulses from your cones is an open area of study, and deeply mysterious
- Colors may be perceived differently depending on context:
  - Affected by other nearby colors
  - Affected by adaptation to previous views
  - Affected by "state of mind"
- Experiment: the subject views a colored surface through a hole in a sheet, so that the color looks like a film in space
  - The investigator controls for nearby colors and state of mind

The Same Color? (two image slides)

Color Deficiency
- Some people are missing one type of receptor
  - Most common is red-green color blindness in men
  - Red and green receptor genes are carried on the X chromosome - most red-green color blind men have two red genes or two green genes
- Other color deficiencies: anomalous trichromacy, achromatopsia, macular degeneration
- Deficiency can be caused by the central nervous system, by optical problems in the eye, by injury, or by absent receptors

Trichromacy
- Experiment: show a target color beside a user-controlled color
  - The user has knobs that add primary sources to their color
  - Ask the user to match the colors
- By experience, it is possible to match almost all colors using only three primary sources - the principle of trichromacy
  - Sometimes, light has to be added to the target instead

The Math of Trichromacy
- Write the primaries as A, B and C
- Many colors can be represented as a mixture of A, B, C: M = aA + bB + cC (additive matching)
- This gives a color description system - two people who agree on A, B, C need only supply (a, b, c) to describe a color
- Some colors can't be matched like this; instead, write: M + aA = bB + cC (subtractive matching)
  - Interpret this as (-a, b, c)
  - This is a problem for reproducing colors
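If colors are reduced to three receptor responses, matching a target with three primaries is just solving a 3×3 linear system. The primary response vectors below are made up for illustration, not measured data.

```python
import numpy as np

# Hypothetical primaries, each given as a 3-vector of cone responses (L, M, S).
A = np.array([1.0, 0.2, 0.0])   # "red"-ish primary
B = np.array([0.3, 1.0, 0.1])   # "green"-ish primary
C = np.array([0.0, 0.2, 1.0])   # "blue"-ish primary
P = np.column_stack([A, B, C])  # primaries as columns of a 3x3 matrix

# Matching target M means solving P @ (a, b, c) = M for the knob settings.
target = np.array([0.5, 0.8, 0.6])
a, b, c = np.linalg.solve(P, target)
print(np.allclose(a * A + b * B + c * C, target))  # True

# A negative weight means additive matching fails: that primary must be
# added to the *target* side instead (subtractive matching, M + aA = bB + cC).
```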

Color Matching Functions
- Choose primaries, say A, B, C
- Given an energy function E(λ), what amounts of the primaries will match it?
- For each wavelength, determine how much of A, of B, and of C is needed to match light of that wavelength alone
  - This gives the color matching functions a(λ), b(λ) and c(λ)
- The match is then: M = A ∫ a(λ)E(λ)dλ + B ∫ b(λ)E(λ)dλ + C ∫ c(λ)E(λ)dλ
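The match integrals above can be computed as discrete sums, the same way as the sensor response earlier. The Gaussian matching functions and light spectrum below are made up for illustration; real systems use measured curves such as the CIE matching functions.

```python
import numpy as np

# Weight on each primary: integral of (matching function) * E(lambda) d(lambda).
lam = np.linspace(400, 700, 301)  # nm
dlam = lam[1] - lam[0]

def bump(center, width=40.0):
    """A made-up Gaussian curve standing in for a measured function."""
    return np.exp(-((lam - center) / width) ** 2)

a_bar, b_bar, c_bar = bump(600), bump(550), bump(450)  # matching functions
E = bump(570)                                           # a greenish light

weights = [np.sum(f * E) * dlam for f in (a_bar, b_bar, c_bar)]
print([round(w, 1) for w in weights])  # the middle (b) weight dominates
```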