Colour: an algorithmic approach. Thomas Bangert, PhD Research Topic.

Understanding how natural visual systems process information. The visual system: about 30% of cortex; the most studied and best understood part of the brain.

Image sensors:
– Binary sensor array: monochromatic 'external retina'
– Luminance sensor array: dichromatic colour
– Multi-spectral sensor array: tetrachromatic colour
What do these direct links to the brain do?

Let's hypothesise … When an astronomer looks at a star, how does he code the information his sensors produce? It was noticed that parts of the spectrum were missing.

Looking at our own star, the Sun.

Each atomic element absorbs at specific frequencies …

We can code for these elements … We can imagine how coding spectral element lines could be used for visual perception … by a creature very different to us … a creature which hunts by 'tasting' the light we reflect, seeing the stuff we are made of. Colour in this case means atomic structure and chemistry …

Where do we start with humans? Any visual system starts with the sensor. What kind of information do these sensors produce? How do we use that information to code what is relevant to us? Let’s first look at sensors we ourselves have designed!

Sensors we build: a 2-d (X, Y) sensor array.

The Pixel: the fundamental unit of information! The sensor element may be:
– Binary
– Luminance
– RGB

The Bitmap: 2-d space represented by an integer array.

What information is produced? A 2-d array of pixels:
– Black & white pixel: a single luminance value, usually 8-bit
– Colour pixel: 3 colour values, usually 8-bit RGB
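The bitmap and pixel formats above can be sketched directly. A minimal example, assuming plain nested lists and the 8-bit value range the slide mentions:

```python
# A bitmap: 2-d space represented by an integer array.
WIDTH, HEIGHT = 4, 3

# Black & white: one 8-bit luminance value per pixel.
grey = [[0 for x in range(WIDTH)] for y in range(HEIGHT)]
grey[1][2] = 255                        # set one pixel to full luminance

# Colour: three 8-bit values (R, G, B) per pixel.
colour = [[(0, 0, 0) for x in range(WIDTH)] for y in range(HEIGHT)]
colour[0][0] = (255, 0, 0)              # one red pixel
```

The dimensions and pixel values here are arbitrary illustrations, not taken from the slides.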

What does RGB mean? It is an instruction for producing light stimuli, for a human standard observer. Light stimuli produce perception; RGB codes the re-production of measured perceptual stimuli. It is assumed that humans are trichromatic. It tells us nothing about what colour means!

The Standard Observer: the CIE 1931 xy chromaticity diagram, with primaries at 435.8 nm, 546.1 nm and 700 nm. From the XYZ sensor response we extract the colour information. The math: x = X/(X+Y+Z), y = Y/(X+Y+Z). This is 2-d, as z = 1 − x − y is redundant.
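The chromaticity projection is the standard CIE 1931 calculation; a minimal sketch (the example XYZ response is an arbitrary assumption):

```python
def xyz_to_xy(X, Y, Z):
    """Project an XYZ sensor response onto the 2-d chromaticity plane.
    z = 1 - x - y, so the third coordinate is redundant."""
    s = X + Y + Z
    return X / s, Y / s

# An arbitrary example response:
x, y = xyz_to_xy(0.25, 0.40, 0.10)
```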

Understanding CIE chromaticity:
– White in the centre
– Saturated / monochromatic colours on the periphery
– Everything in between is a mix of white and the colour
Best understood as a failed colour circle.

Does it match? The problem of 'negative primaries' for monochromatic colours. But does it blend?

What the Human Visual System (HVS) does is very different! ?

Human Visual System (HVS), Part 1: Coding Colour

The Sensor. 2 systems: day-sensor & night-sensor. To simplify, we ignore the night-sensor system. Cone sensors are very similar to the RGB sensors we design for cameras.

BUT: the sensor array is not ordered; the arrangement is random. Note: very few blue sensors, and none in the centre.

sensor pre-processing circuitry

First question: what information is sent from the sensor array to the visual system? There is a very clear division between sensor & pre-processing (front of brain) and the visual system (back of brain), connected by a very limited communication link.

Receptive Fields. All sensors in the retina are organized into receptive fields. There are two types of receptive field. Why?

What does a receptive field look like? In the central fovea it is simply a pair of sensors. There are always 2 types: plus-centre and minus-centre.

What do retinal receptive fields do? They produce an opponent value: simply the difference between 2 sensors. This means it is a relative measure, not an absolute measure, and no difference = no information sent to the brain.
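The opponent computation above can be sketched as a plain difference, assuming the two-sensor field of the fovea slide (the sensor values are illustrative):

```python
def plus_centre(centre, surround):
    """Plus-centre receptive field: a relative measure, not absolute."""
    return centre - surround

def minus_centre(centre, surround):
    """Minus-centre receptive field: the same difference, opposite sign."""
    return surround - centre

# Equal inputs -> no difference -> nothing is sent to the brain:
quiet = plus_centre(120, 120)     # 0
signal = plus_centre(200, 120)    # +80: centre brighter than surround
```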

Sensor input luminance levels: it is usual to code 256 levels of luminance. Linear: Y. Logarithmic: Y′.

Receptive field function: the output is the difference between the average of the centre and the max/min of the surround. (Figure: min zone, max zone, max−min function, tip of triangle.)

Dual response to gradients. Why? Often described as a second derivative / zero crossing.

Abstracted: neurons only produce positive values; a dual +/− pair produces positive & negative values. Together they are called a channel, meaning signed values. This produces directional information: location, angle, luminance, equiluminance and colour. The information is sent to the higher visual processing areas as a sparse representation, from which the percept is created. This is a type of data compression: only essential information is sent! How do we convert from this format to a bitmap?
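The channel idea, that neurons carry only positive values so a signed quantity needs a +/− pair, can be sketched as:

```python
def encode(signed):
    """Split a signed value across two positive-only 'neurons'.
    Exactly one of the pair is non-zero for any non-zero input."""
    return (max(signed, 0.0), max(-signed, 0.0))

def decode(pos, neg):
    """Recombine the +/- pair into a single signed channel value."""
    return pos - neg
```

A zero input yields (0.0, 0.0): neither neuron fires, matching the "no difference = no information" principle of the receptive-field slides.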

Starting with the sensor: human sensor response to non-chromatic light stimuli.

HVS luminance sensor, idealized: a linear response in relation to wavelength. Under ideal conditions it can be used to measure wavelength.

Spatially opponent: in the HVS, luminance is always measured by taking the difference between two sensor values. This produces a contrast value, and it is done twice, to get a signed contrast value.

Moving from luminance to colour. Primitive visual systems were in b&w; night vision remains b&w. Evolutionary path:
– Monochromacy
– Dichromacy (most mammals, e.g. the dog)
– Trichromacy (birds, apes, some monkeys)
Vital for evolution: backwards compatibility.

The electromagnetic spectrum and the visible spectrum: the visual system must represent light stimuli within this zone.

Colour Vision: the Young–Helmholtz theory. Argument: the sensors are RGB, therefore the brain is RGB. A 3-colour model.

Hering's colour-opponency model. Fact: we never see reddish green or yellowish blue. Therefore colours must be arranged in opponent pairs: Red vs. Green, Blue vs. Yellow. A 4-colour model.

Colour sensor response to monochromatic light: human vs. bird. The bird has 4 sensors, equidistant on the spectrum.

How to calculate spectral frequency with 2 poor-quality luminance sensors. Roughly speaking: a shift of Δ from a known reference point. (Figure: sensor value vs. wavelength, with the R and G sensor responses at λ−Δ and λ+Δ.)
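The two-sensor wavelength estimate can be sketched under strong simplifying assumptions: two idealized, perfectly linear, overlapping responses anchored at known wavelengths. The linear model and the anchor values below are illustrative assumptions, not data from the slides:

```python
LAM_LO, LAM_HI = 500.0, 580.0   # assumed anchor wavelengths (nm)

def sensor_pair(wavelength):
    """Idealized linear responses of the two overlapping sensors."""
    s1 = LAM_HI - wavelength     # falls as wavelength rises (e.g. 'G')
    s2 = wavelength - LAM_LO     # rises as wavelength rises (e.g. 'R')
    return s1, s2

def estimate_wavelength(s1, s2):
    """Recover the wavelength from the two readings: the ratio of the
    readings gives the shift from the known reference point."""
    return LAM_LO + (LAM_HI - LAM_LO) * s2 / (s1 + s2)
```

Dividing by s1 + s2 is what makes the estimate relative rather than absolute: scaling both readings by the same intensity leaves the recovered wavelength unchanged.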

The ideal light stimulus: monochromatic light. It allows frequency to be measured in relation to a reference.

Problem: natural light is not ideal. The light stimulus might not activate the reference sensor fully, and it might not be fully monochromatic, i.e. there might be white mixed in.

Solution: a 3rd sensor is used to measure equiluminance, which is subtracted. Then the reference sensor can be normalized.

Equiluminance & normalization, also called saturation and lightness. These must be removed first, before the opponent values are calculated; then the opponent value = spectral frequency. The values must be preserved, otherwise information is lost.

A 4-sensor design: 2 opponent pairs; only 1 of each pair can be active; the min sensor is the equiluminance.

What is Colour? Colour is calculated exactly the same way as luminance contrast; the only difference is that the spectral range of the sensors is modified. The colour channels are R−G and B−Y. Uncorrected colour values are contrast values, but with white subtracted and normalized: colour is wavelength!

How many sensors? 4 primary colours require 4 sensors!

The human retina only has 3 sensors! What to do? We add an emulation layer: the hardware has 3 physical sensors but emulates 4 sensors. No maths … just a diagram!
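The slide gives only a diagram, so the sketch below is one plausible reading of the emulation layer, not the author's stated method: synthesize a fourth, yellow, reading from the red and green responses. Taking yellow as the mean of R and G is my assumption:

```python
def emulate_four(r, g, b):
    """Emulate a 4-sensor (R, G, B, Y) array on 3-sensor hardware.
    Assumption: the yellow reading is the mean of red and green."""
    y = (r + g) / 2.0
    return r, g, b, y
```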

Testing the colour-opponent model: what we should see vs. what we do see. Unfortunately it does not match: there is red in our blue.

Pigment absorption data of human cone sensors: red > green.

Solution: the HVS colour representation must be circular! This is not a new idea, but it is not currently in fashion. (Figure: colour circle with 480 nm, 540 nm and 620 nm marked.)

Dual opponency with circularity: an ideal model using 2 sensor pairs.

… which requires 2 independent channels, giving 4 primary colours: yellow is added as a primary! This allows a simple transform to a circular representation.

Opponent values to hue: a simple transform from 2 opponent values to a single hue value. How might the HVS do this? We keep 2 colour channels but link them.

Travelling the colour wheel (hue). One chroma channel is always at max or min; the other chroma channel is incremented or decremented. Rules:
if (C_B == Max) C_R--
if (C_R == Max) C_B++
if (C_R == Min) C_B--
if (C_B == Min) C_R++

Colour wheel: a simple rule-based system that cycles through the colour wheel and allows arithmetic operations on colour.
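The traversal rules above can be implemented directly. A minimal sketch, with the channel range and step size as assumptions; the guard conditions (`> MIN`, `< MAX` and so on) are added so the corner cases, where both channels sit at an extreme, pick the rule that keeps the walk on the wheel:

```python
MIN, MAX, STEP = -1.0, 1.0, 0.5   # assumed channel range and step size

def step_hue(cr, cb):
    """One step around the colour wheel: the pinned chroma channel stays
    at its extreme while the other is incremented or decremented."""
    if cb == MAX and cr > MIN:
        cr -= STEP               # C_B == Max: C_R--
    elif cr == MIN and cb > MIN:
        cb -= STEP               # C_R == Min: C_B--
    elif cb == MIN and cr < MAX:
        cr += STEP               # C_B == Min: C_R++
    elif cr == MAX and cb < MAX:
        cb += STEP               # C_R == Max: C_B++
    return cr, cb
```

With this range and step, the state walks the perimeter of the channel square and returns to its starting point after 16 steps: a full cycle of the hue wheel.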

Part 2: Accurate Colour Reproduction. First problem: the real world is not monochromatic. (Figure: spectrum of a common yellow flower.)

Accurate colour reproduction, second problem: human colour vision is inaccurate, prone to 'making stuff up', and varies from person to person. The closer the sensors, the less accurate the colour information. All humans are to an extent colour blind … compared to animals like birds.

Examples of real world colour? Colours are often computed, not measured!

… an extreme example What is the colour?

Accurate colour reproduction for dual-channel opponency. Problem #1 is very easy to solve: we simply assume monochromacy. When stimuli are not monochromatic, the opponent channels simply subtract to 0. For the yellow flower, the green, yellow and red sensors are active: r − g = 0 and b = 0, leaving only yellow, a stimulus equivalent to monochromatic.
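The yellow-flower argument can be checked numerically. A sketch using a 4-sensor (R, G, B, Y) response with white subtraction and normalization as described on the earlier slides; the sensor numbers are illustrative assumptions:

```python
def opponent_code(r, g, b, y):
    """Subtract equiluminance (the min sensor, i.e. 'white'), normalize,
    then form the two opponent channels (C_RG, C_BY)."""
    white = min(r, g, b, y)
    r, g, b, y = r - white, g - white, b - white, y - white
    peak = max(r, g, b, y)
    r, g, b, y = r / peak, g / peak, b / peak, y / peak
    return r - g, b - y

# Broadband 'flower' yellow: red, green and yellow sensors all active.
broadband = opponent_code(0.8, 0.8, 0.2, 1.0)
# Pure monochromatic yellow: only the yellow sensor active.
mono = opponent_code(0.0, 0.0, 0.0, 0.8)
```

Both stimuli collapse to the same opponent code, which is the slide's point: after the white component is subtracted and the result normalized, the broadband stimulus is equivalent to the monochromatic one.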

Accurate colour reproduction with primaries: only the primaries are true colours; all other colours are intermediary … and can be generated by proportions of the primaries!

Accurate colour reproduction for humans. Any colour may be displayed by a combination of 2 primaries, but the location of the primaries can vary between individuals, and intermediary locations can be distorted. This is problem #2.

Solution to accurate colour reproduction for the individual human:
1. The primaries must be mapped for the individual.
2. The mid-points must be mapped.
This provides an individual colour profile: a map of the primaries and intermediary points. It can be repeated recursively for greater precision.
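The recursive refinement can be sketched as building a table of matched points between two primaries, halving the interval on each pass. The callback name, the list representation and the recursion depth are my assumptions; `match(a, b)` stands in for asking the subject which wavelength they perceive as the midpoint between stimuli a and b:

```python
def build_profile(match, lo, hi, depth):
    """Recursively map an individual colour profile between two primaries.
    `match(a, b)` is a hypothetical subject-response callback returning
    the wavelength perceived as midway between a and b."""
    if depth == 0:
        return [lo, hi]
    mid = match(lo, hi)
    left = build_profile(match, lo, mid, depth - 1)
    right = build_profile(match, mid, hi, depth - 1)
    return left[:-1] + right     # merge, dropping the duplicated midpoint
```

For an idealized observer whose perceived midpoint is the arithmetic mean, two levels of recursion between primaries at 480 nm and 540 nm yield five equally spaced calibration points; a real subject's answers would reveal the distortions the slide describes.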

Does it work? Colour opponency requires the primaries to be precisely located; for humans these would be virtual primaries. Is there evidence that this is so? Do the colours match? This will be tested empirically …

Apparatus: a monochromator with a Xenon light source (equal light across the visible spectrum).

Procedure:
– Generate subject-selectable monochromatic stimuli
– The subject selects colours
– The virtual primaries are calculated

Preliminary results. (Table: average and standard deviation of the selected Blue, Green, Yellow and Red virtual primaries.)

Discussion: a small sample with inaccurate tools, but the primaries appear to be very closely grouped and equidistantly spaced, exactly as predicted.

In future, visits to the optometrist will include a colour test, and colour displays may be set by a colour 'prescription'.

References
Poynton, C. A. (1995). "Poynton's Color FAQ", electronic preprint.
Bangert, Thomas (2008). "TriangleVision: A Toy Visual System", ICANN.
Goldsmith, Timothy H. (July 2006). "What Birds See". Scientific American: 69–75.
Neitz, Jay; Neitz, Maureen (August 2008). "Colour Vision: The Wonder of Hue". Current Biology 18(16): R700–R702.
Questions?