Colour: an algorithmic approach. Thomas Bangert. MSc in Computer Science by Research, Project Viva.


Colour: an algorithmic approach. Thomas Bangert, MSc in Computer Science by Research. Project Viva

Understanding how the visual system processes information. Visual system: about 30% of cortex; the most studied and best understood part of the brain.

Image sensors  Binary sensor array  Luminance sensor array  Multi-Spectral sensor array

Where do we start? We first need a model of what light information means. Any visual system starts with a sensor: What kind of information do these sensors produce? Let’s first look at sensors we have designed!

Sensors we build: a 2-d array with X and Y axes.

The Pixel: the fundamental unit of information! Sensor elements may be: Binary, Luminance, RGB.

The Bitmap 2-d space represented by integer array

What information is produced? 2-d array of pixels:  Black & White Pixel: –single luminance value, usually 8 bit  Colour Pixel –3 colour values, usually 8-bit

Where we need to start: the fundamentals of the sensor?

Human Visual System (HVS) The fundamentals!

The Sensor. 2 systems: day-sensor & night-sensor. To simplify, we ignore the night-sensor system. Cone sensors are very similar to the RGB sensors we design for cameras.

BUT: the sensor array is not ordered; the arrangement is random. Note: very few blue sensors, and none in the centre.

sensor pre-processing circuitry

First Question: What information is sent from the sensor array to the visual system? There is a very clear division between sensor & pre-processing (front of brain) and the visual system (back of brain), connected by a very limited communication link.

Receptive Fields All sensors in the retina are organized into receptive fields Two types of receptive field. Why?

What does a receptive field look like? In the central fovea it is simply a pair of sensors. Always 2 types: plus-centre minus-centre

What do retinal receptive fields do? Produce an opponent value: simply the difference between 2 sensors This means: it is a relative measure, not an absolute measure and no difference = no information to brain
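The difference computation described above can be sketched in a few lines. This is a minimal illustration, not the retina's actual circuitry; the names `plus_centre` and `minus_centre` follow the slide's terminology, and the clamping at zero reflects the point (made two slides later) that neurons only produce positive values.

```python
def receptive_field(centre, surround):
    """Opponent value of a retinal receptive-field pair.

    The output is relative (a difference), not absolute: equal inputs
    produce no output, i.e. no information is sent to the brain.
    """
    plus_centre = max(centre - surround, 0)   # fires when centre is brighter
    minus_centre = max(surround - centre, 0)  # fires when surround is brighter
    return plus_centre, minus_centre

print(receptive_field(100, 100))  # (0, 0): no difference, no signal
print(receptive_field(120, 80))   # (40, 0)
print(receptive_field(80, 120))   # (0, 40)
```

Taken together, the plus/minus pair behaves like one signed channel, which is exactly the abstraction introduced a few slides below.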

Sensor Input: Luminance Levels. It is usual to code 256 levels of luminance. Linear: Y. Logarithmic: Y′.
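The two codings can be compared with a small sketch. As an assumption (not from the slides), a power law with gamma ≈ 2.2, as used in video practice, stands in for the logarithmic response:

```python
def encode_linear(y):
    """Linear 8-bit coding Y: equal steps in luminance, y in [0, 1]."""
    return round(255 * y)

def encode_luma(y, gamma=2.2):
    """Perceptual 8-bit coding Y': a gamma ~ 2.2 power law stands in
    for the logarithmic response here (an assumption)."""
    return round(255 * y ** (1 / gamma))

# Mid-grey (18% reflectance) gets far more of the code range
# under log-like coding than under linear coding:
print(encode_linear(0.18), encode_luma(0.18))  # 46 117
```

This is why 8 bits suffice perceptually: the non-linear coding spends its 256 levels where the eye can discriminate them.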

Receptive Field Function: the output is the difference between the average of the centre and the max/min of the surround. (Diagram: min zone, max zone, tip of triangle.)

Dual Response to gradients Why? Often described as second derivative/zero crossing

Abstracted: neurons only produce positive values. A dual +/− pair produces positive & negative values. Together they are called a channel, which produces signed values.

Human Sensor Response to monochromatic light stimuli

HVS Luminance Sensor Idealized A linear response in relation to wavelength. Under ideal conditions can be used to measure wavelength.

Spatially Opponent HVS: Luminance is always measured by taking the difference between two sensor values. Produces: contrast value Which is done twice, to get a signed contrast value

Moving from Luminance to Colour. Primitive visual systems were in b&w; night-vision remains b&w. Evolutionary path: Monochromacy, Dichromacy (most mammals, e.g. the dog), Trichromacy (birds, apes, some monkeys). Vital for evolution: backwards compatibility.

Electro-Magnetic Spectrum Visible Spectrum Visual system must represent light stimuli within this zone.

Colour Vision: Young-Helmholtz Theory. Argument: sensors are RGB, therefore the brain is RGB → 3 colour model.

Hering colour opponency model. Fact: we never see reddish green or yellowish blue. Therefore: colours must be arranged in opponent pairs: Red ↔ Green, Blue ↔ Yellow → 4 colour model.

HVS Colour Sensors response to monochromatic light

How to calculate spectral frequency with 2 luminance sensors. Roughly speaking:

the ideal light stimulus Monochromatic Light Allows frequency to be measured in relation to reference.
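The measurement-against-a-reference idea can be made concrete with an idealized model. The linear response ramp and the 400–700 nm band are my assumptions for illustration; the slides only state that a reference sensor makes the wavelength measurable:

```python
# Idealized model (an assumption): the reference sensor responds fully
# across the visible band, while the measuring sensor's response ramps
# linearly with wavelength. Their ratio then reads out wavelength.

VIS_LO, VIS_HI = 400, 700  # approximate visible band, in nm

def sensor_ramp(wavelength_nm):
    """Measuring sensor: 0 at 400 nm, rising linearly to 1 at 700 nm."""
    return (wavelength_nm - VIS_LO) / (VIS_HI - VIS_LO)

def estimate_wavelength(measured, reference):
    """Recover wavelength from the ratio of the two sensor outputs."""
    return VIS_LO + (VIS_HI - VIS_LO) * (measured / reference)

w = 550  # monochromatic green stimulus
print(estimate_wavelength(sensor_ramp(w), 1.0))  # 550.0
```

The two problems raised on the next slide are visible here: if the reference is not fully activated, or white light is mixed in, the ratio no longer equals wavelength, which is what the third sensor is for.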

Problem: natural light is not ideal. The light stimulus might not activate the reference sensor fully, and it might not be fully monochromatic, i.e. there might be white mixed in.

Solution: a 3rd sensor is used to measure equiluminance, which is subtracted. Then the reference sensor can be normalized.

Equiluminance & Normalization. Also called Saturation and Lightness. These must be removed first, before opponent values are calculated; then opponent value = spectral frequency. The values must be preserved, otherwise information is lost.
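A minimal sketch of this separation, assuming (as the slides suggest) that the smallest sensor value measures the equiluminant "white" component; the function name and exact normalization are illustrative, not from the source:

```python
def separate(sensors):
    """Split raw sensor values into (chromatic part, lightness, white).

    The smallest value is the equiluminant 'white' component (relates
    to saturation); the largest relates to lightness. Subtracting white
    and rescaling leaves pure chromatic input for the opponent channels.
    Both removed values are returned, so no information is lost.
    """
    white = min(sensors)
    peak = max(sensors)
    if peak == white:                 # achromatic stimulus: no colour at all
        return [0.0] * len(sensors), peak, white
    chroma = [(v - white) / (peak - white) for v in sensors]
    return chroma, peak, white

chroma, lightness, white = separate([8, 5, 2])
print(chroma)  # [1.0, 0.5, 0.0]
```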

A 4-sensor design: 2 opponent pairs; only 1 of each pair can be active; the min sensor is equiluminance.

What does a colour opponent channel look like? Like a luminance-contrast opponent channel. Each colour opponent channel codes for 2 primary colours: a total of 4 primary colours.

What is Colour? Colour is calculated exactly the same way as luminance contrast; the only difference is that the spectral range of the sensors is modified. Colour channels are: R ↔ G, B ↔ Y. Uncorrected colour values are contrast values. But with white subtracted and normalized: Colour is Wavelength!

How many sensors? 4 primary colours require 4 sensors!

The human retina only has 3 sensors! What to do? Because of opponency, when R=G the R ↔ G colour channel is 0. Why not pair R and G and reuse them as a Yellow sensor? Yellow can be R=G.

How do we abstract information from the sensor array? Luma (Y′), Red-Green (CR), Blue-Yellow (CB).

Luminance + 2 colour values + 2 sensor correction values: Chroma Blue + Chroma Red + Lightness + Saturation.

Tri-Phosphor Lighting optimised for perception of ‘white’

Primary Colours matched to spectrum

Testing the Colour Opponent model: what we should see vs. what we do see. Unfortunately they do not match: there is Red in our Blue.

The strange case of Ultra-Violet. Light with a wavelength of 400 nm is ultra-blue. The red sensor is at the opposite end of the spectrum & not stimulated. Yet we see ultra-violet, which is Blue + Red, and the further we go into UV the more red.

Colour Matching Data (CIE 1931) (indirect sensor response) a very odd fact – a virtual sensor response

Pigment Absorption Data of human cone sensors Red > Green

Therefore: HVS colour representation must be circular! Not a new idea, but not currently in fashion. (Diagram: 480 nm, 540 nm, 620 nm.)

Dual Opponency with Circularity an ideal model using 2 sensor pairs

Colour Wheel Goethe & Munsell Colours are represented by a single value: Hue

RYB Colour Circle no longer used

HSL (Hue + S & L). Circular colour coding: any colour is represented by 1 number, which allows colour arithmetic. (Wheel points: R=255,G=0,B=0; R=255,G=255,B=0; R=0,G=255,B=0; R=0,G=255,B=255; R=0,G=0,B=255; R=255,G=0,B=255.)
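The hue component of HSL/HSV is a standard, well-documented transform of RGB, so it can be shown exactly (this is the conventional formula, not the author's later circular scheme):

```python
def rgb_to_hue(r, g, b):
    """Hue angle in degrees [0, 360) from 8-bit RGB, as in HSL/HSV."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        return None                       # achromatic: hue is undefined
    d = mx - mn
    if mx == r:
        h = 60 * ((g - b) / d) % 360      # between yellow and magenta
    elif mx == g:
        h = 60 * ((b - r) / d) + 120      # between cyan and yellow
    else:
        h = 60 * ((r - g) / d) + 240      # between magenta and cyan
    return h

# The six wheel points from the slide, 60 degrees apart:
for rgb in [(255, 0, 0), (255, 255, 0), (0, 255, 0),
            (0, 255, 255), (0, 0, 255), (255, 0, 255)]:
    print(rgb, rgb_to_hue(*rgb))  # 0, 60, 120, 180, 240, 300
```

Note the piecewise structure: hue is only a simple rearrangement of the RGB values, which is exactly the flaw the next slide points out.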

HSL & HSV Simple & Elegant But it is flawed: –simple transformation of RGB –colours do not match perception Why? Because there are 4 primary colours, not 3!

Gives us a 2-d colour space (axes CB and CR). Colour information: 2 independent values.

Co-ordinate systems. 2-d space: Cartesian coordinates or polar coordinates.

… requires 2 independent channels, which give 4 primary colours: Yellow added as a primary! This allows a simple transform to circular representation.

Opponent Values → Hue. A simple transform from 2 opponent values to a single hue value. How might the HVS do this? We keep 2 colour channels but link them.

Travelling the Colour Wheel (Hue). One chroma channel is always at max or min; the other chroma channel is incremented or decremented. Rules: if (CB==Max) CR--; if (CR==Max) CB++; if (CR==Min) CB--; if (CB==Min) CR++
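The four rules above can be implemented directly. A sketch, with two assumptions of mine: the ±255 channel range (with (0,0) reserved for "no colour", per a later slide), and guards so that exactly one channel moves at each corner of the wheel:

```python
LO, HI = -255, 255  # assumed channel range; (0, 0) is reserved for "no colour"

def step(cb, cr):
    """Advance one step around the colour wheel, per the slide's rules."""
    if cb == HI and cr > LO:      # if (CB == Max) CR--
        cr -= 1
    elif cr == LO and cb > LO:    # if (CR == Min) CB--
        cb -= 1
    elif cb == LO and cr < HI:    # if (CB == Min) CR++
        cr += 1
    elif cr == HI and cb < HI:    # if (CR == Max) CB++
        cb += 1
    return cb, cr

# One full trip visits 4 * (HI - LO) = 2040 hues and returns home,
# tracing the perimeter of the 2-d opponent space:
cb, cr = HI, HI
for _ in range(4 * (HI - LO)):
    cb, cr = step(cb, cr)
print((cb, cr))  # (255, 255)
```

Because every step changes one channel by one unit, hue arithmetic (e.g. "rotate all colours by n steps") becomes simple iteration, which is the point of the next slide.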

Colour Wheel Simple rule based system that cycles through the colour wheel Allows arithmetic operations on colour

What is Hue? A circular representation of the spectrum. Its purpose is to provide a spectrum value. Primary colours are the extreme ends of the 2 linked colour channels.

Hue: 2 values or 1? 2 linked values allow us to turn colour off: (0,0) is not an allowed hue and is used for "no colour". Simple standard input pixel: luminance value or colour value.

Why do we need arithmetic on colour? Colours are computed, not measured!

Colour is very useful for transparency What is the colour?

Why do we need transparency? otherwise we might have trouble with windows

… and difficulties with these kinds of tasks

Colour is very helpful in deciphering the layers Aim: to reconstruct scenes with transparency

It all must start with the right kind of sensor. Format of the "pixel" as it enters the visual area of the brain for processing: luminance information + optional colour information (where on the spectrum, how colourful).

Visual systems with 4 sensors: birds, reptiles, dinosaurs, therapsids (our dinosaur-like ancestors). About 60 nm between sensors, evenly spaced, frequencies narrowed.

The Ideal Sensor. Equally spaced on the spectrum; overlap with linear transition. Colour channel 1: R ↔ G. Colour channel 2: Yellow ↔ B. No overlap of opponent pairs.

Actual Sensor Response, calculated from CIE perceptual data. CRT RGB phosphors: the spectrum is shifted toward more even spacing. HVS sensors + yellow: almost equal distribution.

A yellow sensor + a few tweaks makes human vision equivalent to bird vision: even spacing, 60 nm between primary colours, response narrowed, intermediary colours at half-way points. It requires more processing and is less accurate, but is equivalent.

How do we get a yellow sensor? We re-use the red & green sensors, but only when they are equal (R==G). This implies dividing by a measure of equality.

Existing Circular Colour Systems: the Munsell colour wheel, with 5 primary colours. 100 years old; quite close.

Existing Circular Colour Systems: CIE L*a*b* & CIE L*C*h. L*a*b* is a colour opponent space; L*C*h is the transform to circular. 4 primary colours: Red = 0°, Yellow = 90°, Green = 180°, Blue = 270°.
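The L*a*b* → L*C*h transform is the standard Cartesian-to-polar conversion of the two opponent axes, which makes it a clean worked example of the opponent-to-circular idea:

```python
import math

def lab_to_lch(L, a, b):
    """CIE L*a*b* -> L*C*h: Cartesian opponent axes to circular hue."""
    C = math.hypot(a, b)                      # chroma: distance from the grey axis
    h = math.degrees(math.atan2(b, a)) % 360  # hue angle in degrees
    return L, C, h

# The four primaries sit a quarter-turn apart on the a*/b* plane:
print(lab_to_lch(50, 60, 0))   # red direction,    h = 0
print(lab_to_lch(50, 0, 60))   # yellow direction, h = 90
print(lab_to_lch(50, -60, 0))  # green direction,  h = 180
print(lab_to_lch(50, 0, -60))  # blue direction,   h = 270
```

Lightness L* passes through unchanged; only the two opponent values are folded into chroma and hue.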

Summary. Colour is based on contrast. The HVS has a circular model of the spectrum; colour is a code for where on the spectrum. 2 colour channels, bi-polar → 4 primary colours. 2 channels → 2-d colour space. Simple transform to circular representation: a single variable represents all colours. The purpose is to allow systematic colour transforms → colour computation.

References
Poynton, C. A. (1995). "Poynton's Color FAQ", electronic preprint.
Bangert, Thomas (2008). "TriangleVision: A Toy Visual System", ICANN.
Goldsmith, Timothy H. (July 2006). "What Birds See". Scientific American: 69–75.
Neitz, Jay; Neitz, Maureen (August 2008). "Colour Vision: The Wonder of Hue". Current Biology 18(16): R700–R702.
Questions?

The Problem with Yellow. Colour: an algorithmic approach. Thomas Bangert, MSc in Computer Science by Research. Project Viva

Colour channels are pure. Opponency means colour pairs are pure with respect to themselves. It follows that a pure colour is achieved only when the other opponent channel is 0: the reddest red only when B−RG is 0, the bluest blue only when R−G is 0, and inversely.

RGB is pure: Red is reddest when G & B = 0, etc. XYZ and LMS are not pure. The sensors of the visual system have a broad spectral response; they do not have a pure colour response. Retinal processing produces pure colour channels from noisy and ambiguous data. RGB: Red: R=255, G=0, B=0; Green: R=0, G=255, B=0; Blue: R=0, G=0, B=255.

YUV & YCbCr Transforms. JPEG 2000 allows a reversible simplification. The transform is usually expressed in matrix form. JPEG has nothing odd like "headroom". Note: no negative numbers for JPEG, so C += 128.

Let's try some JPEG numbers: non-trivial "leakage". (Worked example in the slide: Cyan; a value that should be 127 is not.)
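The leakage can be reproduced with the standard full-range JPEG (JFIF) transform, which uses the BT.601 luma weights; the example colour below is pure green rather than the slide's cyan, chosen because its round-trip error is unambiguous:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range JPEG (JFIF) forward transform, BT.601 luma weights."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128  # C += 128: no negatives
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return tuple(int(v + 0.5) for v in (y, cb, cr))          # round to integers

def ycbcr_to_rgb(y, cb, cr):
    """JFIF inverse transform, clamped back to the 8-bit range."""
    r = y + 1.402    * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772    * (cb - 128)
    return tuple(max(0, min(255, int(v + 0.5))) for v in (r, g, b))

# Pure green does not survive a round trip through integer YCbCr:
ycc = rgb_to_ycbcr(0, 255, 0)
print(ycc)                  # (150, 44, 21)
print(ycbcr_to_rgb(*ycc))   # (0, 255, 1): blue has "leaked" into pure green
```

The culprit is rounding the chroma values to integers between the forward and inverse transforms; the float transform itself is exactly invertible.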

The Problem: colour channels are not pure. They should be! (Diagram: R, G, B, Magenta, Cyan, RG, Cyan.)

YUV/YCrCb simplified. A large number of transforms exist, most of them variations of YUV. Minor tweaks of the transform from XYZ can lead to quite large differences, all of which work fine perceptually (meaning neurons are not that precise). Why not simplify?

Chroma Blue: if only there were a yellow sensor... We use R=G instead, which is (R+G)/2, but we want a value only when R=G.

Yellow: the Chroma Blue correction factor. The less equal R and G are, the less yellow there should be. So: simply divide R by G to determine how close they are. The more equal they are, the more active the "yellow" sensor is.
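The slides give only the idea, so the sketch below is one possible reading, not the author's exact formula: the candidate yellow value (R+G)/2 is scaled by the ratio min/max, which is 1 when R equals G and falls toward 0 as they diverge. The name `yellow_estimate` is illustrative.

```python
def yellow_estimate(r, g):
    """Sketch of the 'virtual yellow sensor' (one possible reading).

    (R+G)/2 is the candidate yellow value; scaling it by min/max
    makes it fade to zero as R and G become unequal.
    """
    if r == 0 and g == 0:
        return 0.0
    equality = min(r, g) / max(r, g)   # 1.0 when R == G, toward 0 as they diverge
    return (r + g) / 2 * equality

print(yellow_estimate(200, 200))  # 200.0 : R == G, fully yellow
print(yellow_estimate(200, 0))    # 0.0   : pure red, no yellow at all
print(yellow_estimate(200, 100))  # 75.0  : partially yellow
```

The Chroma Blue channel would then be built against this estimate instead of a real yellow sensor value.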

Transform back to RGB Fully Reversible Calculate R and G first, then Blue correction factor.

Samples of simple colour transforms

Blue-Yellow set to 0

Red-Green inverted

Blue-Yellow inverted

playing with colour

is easy

these are simple transforms

not touched by hand

YUV Summary. Two simple tweaks allow us to correct the conversion between RGB and YUV/YCrCb, and also allow the conversion to be simplified.