Why is this hard to read?
3
Unrelated vs. Related Color
Unrelated color: color perceived to belong to an area in isolation (CIE 17.4).
Related color: color perceived to belong to an area seen in relation to other colors (CIE 17.4).
4
Illusory contour Shape, as well as color, depends on surround
Most neural processing is about differences
5
Illusory contour
6
CS 768 Color Science: perceiving color, describing color, modeling color, measuring color, reproducing color.
8
Spectral measurement: measurement p(λ) of the power (or energy, which is power × time) of a light source as a function of wavelength λ. Usually given relative to p(560 nm). Visible light spans roughly 400-700 nm.
11
Retinal line spread function
(Plot: relative intensity vs. retinal position.)
12
Linearity
additivity of response (superposition): r(m1 + m2) = r(m1) + r(m2)
scaling (homogeneity): r(a·m) = a·r(m)
For images m(x,y) (monitor intensity) with responses r(m)(x,y) (retinal intensity):
r(m1(x,y) + m2(x,y)) = r(m1)(x,y) + r(m2)(x,y) = (r(m1) + r(m2))(x,y)
r(a·m(x,y)) = a·r(m)(x,y)
13
Non-linearity
16
Retinal cross section (in the order light traverses it): optic nerve fibres and ganglion cells, amacrine, bipolar and horizontal cells, cones and rods, pigment epithelium.
17
Visual pathways: three major stages: retina, LGN, visual cortex. The visual cortex is further subdivided.
18
Optic nerve: 130 million photoreceptors feed 1 million ganglion cells, whose output is the optic nerve. The optic nerve feeds the Lateral Geniculate Nucleus (LGN) approximately one-to-one. The LGN feeds area V1 of visual cortex in complex ways.
19
Photoreceptors: cones respond in high (photopic) light; they have differing wavelength responses (3 types); single cones feed retinal ganglion cells, so they give high spatial resolution but low sensitivity; highest sampling rate is at the fovea.
20
Photoreceptors: rods respond in low (scotopic) light; there are none in the fovea (try to foveate a dim star: it will disappear); one type of spectral response; several hundred feed each ganglion cell, so they give high sensitivity but low spatial resolution.
21
Luminance Light intensity per unit area at the eye
Measured in candelas/m^2 (cd/m^2). Typical ambient luminance levels (in cd/m^2):
starlight 10^-3
moonlight 10^-1
indoor lighting 10^2
sunlight 10^5
max intensity of common CRT monitors ~10^2
(From Wandell, Useful Numbers in Vision Science)
22
Rods and cones. Rods saturate at 100 cd/m^2, so only cones work at high (photopic) light levels. All rods have the same spectral sensitivity; the low-light condition is called scotopic. The three cone types differ in spectral sensitivity and somewhat in spatial distribution.
23
Cones: L (long wave), M (medium), S (short); the names describe the sensitivity curves. "Red", "Green", "Blue" is a misnomer. See the spectral sensitivities.
24
Spectrum should be displayed compressed from about 400 to 680 nm
25
Receptive fields. Each neuron in the visual pathway sees a specific part of visual space, called its receptive field. Retinal and LGN receptive fields are circular, with opponency; cortical receptive fields are oriented and sometimes shape specific. (Diagrams: on-center receptive field, red-green opponent LGN receptive field, oriented cortical receptive field.)
27
Channels: Visual Pathways subdivided
Magno: color-blind, fast time response, high contrast sensitivity, low spatial resolution.
Parvo: color selective, slow time response, low contrast sensitivity, high spatial resolution.
Video coding implications:
Magno: separate color from b&w; need fast contrast changes (60 Hz); keep fine shading in big areas (definition).
Parvo: slow color changes OK (40 Hz); omit fine shading in small areas; (not obvious yet) pattern detail can be all in the b&w channel.
29
Trichromacy Helmholtz thought three separate images went forward, R, G, B. Wrong because retinal processing combines them in opponent channels. Hering proposed opponent models, close to right.
30
Opponent Models Three channels leave the retina:
Red-Green (L-M+S = L-(M-S)) Yellow-Blue(L+M-S) Achromatic (L+M+S) Note that chromatic channels can have negative response (inhibition). This is difficult to model with light.
31
34
(Plot: contrast sensitivity vs. log spatial frequency in cycles per degree (cpd) for the luminance, red-green, and blue-yellow channels.)
35
Color matching. Grassmann's laws of linearity: (r1 + r2)(λ) = r1(λ) + r2(λ), and (k·r)(λ) = k·r(λ). Hence for any stimulus s(λ) and response r(λ), the total response is the integral of s(λ)·r(λ) over all λ, or approximately Σ_λ s(λ)·r(λ).
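As a concrete illustration of the discrete approximation above, here is a minimal Python sketch; the 10 nm sampling grid and the toy stimulus and response functions are my own choices, not anything specified on the slide.

    import math

    wavelengths = range(400, 701, 10)     # nm sampling grid (10 nm steps, my choice)
    delta = 10.0                          # nm per sample

    def s(lam):                           # toy stimulus spectrum: equal-energy light
        return 1.0

    def r(lam):                           # toy response curve peaking near 555 nm
        return math.exp(-((lam - 555.0) / 50.0) ** 2)

    # total response ~ integral of s(lambda) * r(lambda) d(lambda)
    total_response = sum(s(lam) * r(lam) * delta for lam in wavelengths)
    print(total_response)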
36
(Diagram of the matching setup: a bipartite white screen inside a surround field; the subject views the test light on one half and the superposed primary lights on the other.)
37
Color Matching. Spectra of primary lights: s1(λ), s2(λ), s3(λ).
Subject's task: find c1, c2, c3 such that c1·s1(λ) + c2·s2(λ) + c3·s3(λ) matches the test light.
Problems (depending on the si(λ)): [c1, c2, c3] is not unique ("metamer"); the match may require some ci < 0 ("negative power").
38
Color Matching. Suppose three monochromatic primaries r, g, b at …, …, … nm and a 10° field (Stiles and Burch 1959). For any monochromatic light t(λ) at wavelength λ, find scalars R = R(λ), G = G(λ), B = B(λ) such that t(λ) matches R(λ)·r + G(λ)·g + B(λ)·b. R(λ), G(λ), B(λ) are the color matching functions based on r, g, b.
40
Color matching. Recall Grassmann's laws of linearity from above: the total response is approximately Σ_λ s(λ)·r(λ).
41
Color matching. What about three monochromatic lights?
M(λ) = R·R(λ) + G·G(λ) + B·B(λ). Metamers are possible.
Good: the RGB functions are like cone responses.
Bad: we can't match all visible lights with any triple of monochromatic lights; some of the primaries must be added to the light being matched.
42
43
Color matching Solution: CIE XYZ basis functions
45
Color matching. Note that Y is V(λ), the luminous efficiency function. None of these are lights. Euclidean distance in RGB and in XYZ is not perceptually useful, and says nothing about color appearance.
46
XYZ problems No correlation to perceptual chromatic differences
X-Z not related to color names or daylight spectral colors One solution: chromaticity
47
Chromaticity Diagrams
x = X/(X+Y+Z), y = Y/(X+Y+Z), z = Z/(X+Y+Z). A perspective projection onto the x-y plane; z = 1 - (x + y), so it is really 2-d. We can recover X, Y, Z given x, y and one of X, Y, Z, usually Y since it is luminance (see the sketch below).
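A minimal Python sketch of these projections (the function names are mine; no external libraries):

    def xyz_to_xyY(X, Y, Z):
        """Project XYZ to chromaticity (x, y), keeping Y as the luminance."""
        s = X + Y + Z
        x, y = X / s, Y / s              # z = 1 - (x + y) is implied
        return x, y, Y

    def xyY_to_XYZ(x, y, Y):
        """Recover XYZ from chromaticity (x, y) plus luminance Y (y must be non-zero)."""
        X = x * Y / y
        Z = (1.0 - x - y) * Y / y
        return X, Y, Z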
48
Chromaticity Diagrams
No color appearance info since no luminance info. No accounting for chromatic adaptation. Widely misused, including for color gamuts.
49
Some gamuts SWOP ENCAD GA ink
52
MacAdam Ellipses JND of chromaticity
Bipartite equiluminant color matching to a given stimulus. Depends on chromaticity both in magnitude and direction.
54
MacAdam Ellipses. For each observer, high correlation to the variance of repeated color matches in direction, shape and size; 2-d normal distributions have elliptical level curves. Neural noise? See Wyszecki and Stiles, Fig 1(5.4.1), p. 307.
55
MacAdam Ellipses JND of chromaticity
Weak inter-observer correlation in size, shape, orientation. No explanation in Wyszecki and Stiles 1982. Are there more modern models that can normalize to the observer?
56
MacAdam Ellipses JND of chromaticity
Extension to varying luminance: ellipsoids in XYZ space which project appropriately for fixed luminance.
57
MacAdam Ellipses JND of chromaticity Technology applications:
Bit stealing: points inside the chromatic JND ellipsoid are not distinguishable chromatically but may be above the luminance JND. Using those points in RGB space can thus increase the luminance resolution, which in turn gives the appearance of increased spatial resolution ("anti-aliasing"): Microsoft ClearType.
58
CIELab
L* = 116·f(Y/Yn) - 16
a* = 500·[f(X/Xn) - f(Y/Yn)]
b* = 200·[f(Y/Yn) - f(Z/Zn)]
where Xn, Yn, Zn are the CIE XYZ coordinates of the reference white point, and
f(z) = z^(1/3) if z > 0.008856; f(z) = 7.787·z + 16/116 otherwise.
L* is relative achromatic value, i.e. lightness; a* is relative greenness-redness; b* is relative blueness-yellowness.
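A direct transcription of these formulas into Python; the D65 white point used as the default is my choice of example, and the 0.008856 threshold is the standard CIE value:

    def xyz_to_lab(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
        """CIE XYZ -> CIELAB relative to the white point (Xn, Yn, Zn); D65 used as an example."""
        def f(t):
            return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
        fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
        return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

    print(xyz_to_lab(95.047, 100.0, 108.883))   # the white point itself gives (100.0, 0.0, 0.0)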
60
CIELab (continued; formulas as above).
C*ab = sqrt(a*^2 + b*^2) corresponds to the perception of chroma (colorfulness).
The hue angle h_ab = tan^-1(b*/a*) corresponds to hue perception.
L* corresponds to lightness perception.
Euclidean distance in Lab space is fairly well correlated with color matching and color-difference judgements under many conditions. Good correspondence to Munsell distances.
61
(Diagram of the CIELab axes: L* = lightness, C* = chroma, h = hue angle; a* > 0 redder, a* < 0 greener, b* > 0 yellower, b* < 0 bluer.)
62
Complementary Colors. c1 and c2 are complementary hues if their mixture matches the white point. Not all spectral (i.e. monochromatic) colors have complements; see the chromaticity diagram. See the Photoshop Lab interface.
63
CIELab defects. Perceptual lines of constant hue are curved in the a*-b* plane, especially for red and blue hues (Fairchild Fig 10.5). It doesn't predict chromatic adaptation well without modification. The axes are not exactly the perceptually unique r, y, g, b hues: under D65 these lie at approximately 24°, 90°, 162°, 246° rather than 0°, 90°, 180°, 270° (Fairchild).
64
CIELab color difference model
ΔE* = sqrt(ΔL*^2 + Δa*^2 + Δb*^2)
The two colors may be in the same L*a*b* space or relative to different white points (but with both white points normalized to the same max Y, usually Y = 100). A typical observer reports a match for ΔE* in the range 2.5-20, but for simple patches 2.5 is a perceptible difference (Fairchild).
65
Viewing Conditions. The illuminant matters: Fairchild Table 7-1 shows ΔE* under two different illuminants. Consider a source under an illuminant with SPD T(λ). If the color at a pixel p has spectral distribution p(λ) and the reflectance factor of the screen is r(λ), then the SPD at the retina is r(λ)·T(λ) + p(λ). Typically r(λ) is constant, near 1, and diffuse.
66
Color ordering systems
We want a system in which a finite set of colors vary along several (usually three) axes in a perceptually uniform way. Several candidates exist, with varying success:
Munsell (spectra available at a Finnish web site)
NCS
OSA Uniform Color Scales System
…
67
Color ordering systems
CIE L*a*b* is still not a faithful model; e.g., contours of constant Munsell chroma are not perfect circles in L*a*b* space. See Fairchild Fig 10-4, Berns p. 69.
68
Effect of viewing conditions
Measurement geometry affects Lab values, so we need illumination and viewing angle standards: reflection descriptions for opaque materials, transmission descriptions for translucent ones.
69
Reflection geometry specular diffuse
70
Reflection geometry Semi-glossy glossy
71
Reflection geometry Semi-glossy glossy
72
Some standard measurement geometries
d/8:i  diffuse illumination, 8° viewing, specular component included
d/8:e  as above, specular component excluded
d/d:i  diffuse illumination and viewing, specular component included
45/0   45° illumination, 0° viewing
73
Viewing comparison: Lab measurements of a semi-gloss tile under different geometries (Berns, p. 86); ΔE is vs. d/8:i.
Geometry   L*     C*     h     ΔE
d/8:i      51.1   41.5   269   -
45/0       44.8   46.9   268   8.3
d/8:e      47.5   44.6   …     4.7
Note that the hue angles are very close for all geometries, that non-diffuse illumination gives the highest chroma, and that chroma is higher with the specular component excluded.
74
L*u*v*. CIE u'v' chromaticity coordinates:
u' = 4X/(X + 15Y + 3Z) = 4x/(-2x + 12y + 3)
v' = 9Y/(X + 15Y + 3Z) = 9y/(-2x + 12y + 3)
This gives straighter lines of constant Munsell chroma (see the figures on p. 64 of Berns).
L* = 116·(Y/Yn)^(1/3) - 16
u* = 13·L*·(u' - u'n)
v* = 13·L*·(v' - v'n)
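The same formulas as a short Python sketch; the white point default (D65) and the helper names are my assumptions, and the low-luminance linear branch of L* is omitted just as on the slide:

    def xyz_to_uv_prime(X, Y, Z):
        d = X + 15.0 * Y + 3.0 * Z
        return 4.0 * X / d, 9.0 * Y / d                  # (u', v')

    def xyz_to_luv(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
        """CIE XYZ -> L*u*v* using the formulas above (sketch)."""
        u, v = xyz_to_uv_prime(X, Y, Z)
        un, vn = xyz_to_uv_prime(Xn, Yn, Zn)
        L = 116.0 * (Y / Yn) ** (1.0 / 3.0) - 16.0
        return L, 13.0 * L * (u - un), 13.0 * L * (v - vn)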
75
L*u*v* (continued; formulas as above). u'n, v'n are the u', v' values of the white point.
76
Models for color differences
The Euclidean metric in CIELab (or CIELuv) space is not very predictive; some weighting is needed:
ΔV = (1/kE)·[ (ΔL*/(kL·SL))^2 + (ΔC*a/(kC·SC))^2 + (ΔH*a/(kH·SH))^2 ]^(1/2)
where a = uv or ab according to whether L*u*v* or L*a*b* is used. The k's are parameters fit to the data; the S's are functions of the underlying variable, estimated from data.
77
Models for color differences
kL = kC = kH = 1; SL = 1; SC = C*ab; SH = C*ab. Fitting with one more parameter for scaling gives good predictions (Berns p. 125; see the sketch below).
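A sketch of the weighted formula in Python using the weights listed on this slide; which sample's C*ab enters SC and SH (a single reference chroma argument here) and the overall scale kE are my assumptions:

    import math

    def weighted_color_difference(dL, dC, dH, C_ref,
                                  kE=1.0, kL=1.0, kC=1.0, kH=1.0):
        """dV = (1/kE) * sqrt((dL/(kL*SL))^2 + (dC/(kC*SC))^2 + (dH/(kH*SH))^2).

        dL, dC, dH: lightness, chroma and hue differences between the two colors.
        C_ref: the C*ab value used in the weights SC = SH = C*ab (SL = 1), per the slide."""
        SL, SC, SH = 1.0, C_ref, C_ref
        return (1.0 / kE) * math.sqrt((dL / (kL * SL)) ** 2 +
                                      (dC / (kC * SC)) ** 2 +
                                      (dH / (kH * SH)) ** 2)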
78
Color constancy Color difference models such as previous have been used to predict color inconstancy under change of illumination. Berns p. 214.
79
Other color appearance phenomena
Models still under investigation to account for: Colorfulness (perceptual attribute of chroma) increases with luminance ("Hunt effect") Brightness contrast (perceptual attribute of lightness difference) increases with luminance Chromatic adaptation
80
Color Gamuts Gamut: the range of colors that are viewable under stated conditions Usually given on chromaticity diagram This is bad because it normalizes for lightness, but the gamut may depend on lightness. Should really be given in a 3d color space L*a*b* is usual, but has some defects to be discussed later
81
Color Gamut Limitations
CIE XYZ underlies everything; it permits unrealizable colors, but usually "gamut" means restricted to within the visible spectrum locus in the chromaticity diagram. A gamut can depend on luminance, usually on illuminant-relative luminance, i.e. Y/Yn.
82
Color Gamut Limitations
Surface colors reflectance varies with gloss. Generally high gloss increases lightness and generally lightness reduces gamut (see figures in Berns, p. 145 ff) Stricter performance requirements often reduce gamut e.g. require long term fade resistance
83
Color Gamut Limitations
Physical limitations of colorants and illuminants Specific set of colorants and illuminants are available. For surface coloring we can not realize arbitrary XYZ values even within the chromaticity spectral locus Economic factors Color may be available but expense not justified
84
Color mixing. Suppose a system of colorants (lights, inks, …). Given two colors with spectra c1(λ) and c2(λ) (these may be reflectance, transmittance, or emission spectra, …), let d be a mix of c1 and c2. The system is additive if d(λ) = c1(λ) + c2(λ) no matter what c1 and c2 are.
85
Scalability. Suppose the system has some way of scaling the intensity of a color by a scalar k. Examples: CRT: increase intensity by k; halftone printing: make dots k times bigger; colored translucent materials: make the layer k times as thick. If c is a color, denote the scaled color by d. If the spectrum d(λ) is k·c(λ) for each λ, the system is scalable.
86
Scalability. Consider a color production system and colors c1, c2 with c2 = k·c1. Let mi = max_λ ci(λ) and di = (1/mi)·ci. High-school algebra shows that the system is scalable if and only if d1(λ) = d2(λ) for all λ, no matter what c1 and k are.
87
Control in color mixing systems
Normally we control some variable to control intensity. CRT: voltage on the electron gun (an integer setting). Translucent materials (liquids, plastics, ...): thickness. Halftone printing: dot size.
88
Linearity. A color production system is linear if it is additive and scalable. Linearity is good: it means that model computations involving only linear algebra make good predictions. Interesting systems are typically additive over some range, but rarely scalable. A simple compensation can often restore linearity by considering a related mixing system.
89
Scalability in subtractive systems
(Diagram: light of intensity L0 passes through successive layers, each of thickness d and transmittance k with 0 <= k <= 1; after one layer the intensity is k·L0, after two k^2·L0, after n layers k^n·L0.)
90
Scalability in subtractive systems
With 0 <= k <= 1, light L0 entering n layers exits as k^n·L0.
T_λ = t_λ^b, where T_λ is the total transmittance at wavelength λ, t_λ the transmittance of unit thickness, and b the thickness.
L(n·d) = k^n·L0 for integer n; L(b·d) = k^b·L0 for arbitrary b; L(b) = k^b·L0 when d = 1; so L(b)/L0 = k^b.
91
Linearity in subtractive systems
Absorbance A_λ = -log(T_λ) (by definition) = -log(t_λ^b) = -b·log(t_λ) = b·a_λ, where a_λ = -log(t_λ) is the absorbance of unit thickness, so absorbance is scalable when thickness b is the control variable. By the same argument as for scalability, the transmittance of the "sum" of colors T_λ and S_λ is their product, and so the absorbance of the sum is the sum of the absorbances. Thus absorbance as a function of thickness is a linear mixture system (see the numeric check below).
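A small numeric check of this in Python: transmittances scale with thickness and multiply across layers, so absorbances add. The unit-thickness transmittances are made-up numbers for illustration.

    import math

    t1, t2 = 0.80, 0.50        # unit-thickness transmittances of two colorant layers (made up)
    b = 3.0                    # thickness, in multiples of the unit thickness

    T1 = t1 ** b                                   # scalability: T = t**b
    A1 = -math.log(T1)                             # absorbance of the first colorant at thickness b
    assert abs(A1 - b * (-math.log(t1))) < 1e-9    # absorbance scales linearly with thickness

    T_mix = (t1 * t2) ** b                         # "summing" colors: transmittances multiply
    A_mix = -math.log(T_mix)
    assert abs(A_mix - (b * -math.log(t1) + b * -math.log(t2))) < 1e-9   # absorbances add
    print(T_mix, A_mix)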
92
Tristimulus Linearity
[X Y Z]_mix = [X1 Y1 Z1] + [X2 Y2 Z2]
c·[X Y Z] = [cX cY cZ]
This is true because r(λ), g(λ), b(λ) are a basis of a 3-d linear space (of functions of wavelength) describing lights. Grassmann's laws are precisely the linearity of light when described in that space; [X Y Z] is a linear transformation from this space to R^3.
93
Monitor (non)Linearity
(Block diagram: inputs A, B, C pass through a linear stage producing L1(A,B,C), L2(A,B,C), L3(A,B,C), then a non-linear stage producing f1(L1,L2,L3), f2(L1,L2,L3), f3(L1,L2,L3).)
94
Monitor (non)Linearity
In = [A, B, C] --> L = [L1, L2, L3] --> Out = [O1, O2, O3] = [f1(L1,L2,L3), f2(L1,L2,L3), f3(L1,L2,L3)].
Interesting monitor cases to consider:
In = [dr, dg, db], where dr, dg, db are integers 0…255 (or numbers 0…1) in the programming API for the red, green, blue channels.
Out = [X Y Z] tristimulus coordinates, or the monitor intensities in each channel.
Typically: each fi depends only on Li; the fi are all the same; and fi(u) = u^γ for some γ characteristic of the monitor.
95
Monitor (non)Linearity
Warning: LCD non-linearity is logistic, not the power-law ("gamma") curve of a CRT, but flat panel displays are usually built to mimic CRTs because much software is gamma-corrected (with typical γ = …). Somewhat related: most LCD displays are built with analog instead of digital inputs, in order to function as SVGA monitors. This is changing.
96
Monitor (non)Linearity
(CRT colorimetry example of Berns, p. …) The non-linearity is f(u) = u^γ, γ = 2.7, the same for all output channels. The linear stage is diagonal:
[R, G, B] = a·[dr, dg, db] + [b, b, b], i.e. R = a·dr + b, G = a·dg + b, B = a·db + b,
where a = 1.02/255 and b = -0.02.
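A sketch of this two-stage model in Python using the numbers from the Berns example; the clamping to [0, 1] is my addition, and converting the resulting channel intensities to XYZ would further require the monitor's 3x3 primary matrix, which is not given on the slide.

    def crt_channel(d, a=1.02 / 255.0, b=-0.02, gamma=2.7):
        """Digital count d (0..255) -> normalized channel intensity: linear stage, then gamma."""
        u = a * d + b                      # diagonal linear stage from the Berns example
        u = min(max(u, 0.0), 1.0)          # clamp to [0, 1]; my addition, not on the slide
        return u ** gamma                  # non-linear stage f(u) = u**gamma, same for every channel

    def crt_rgb(dr, dg, db):
        return crt_channel(dr), crt_channel(dg), crt_channel(db)

    print(crt_rgb(255, 128, 0))            # full drive maps to 1.0, zero drive to 0.0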
97
R+G+B vs. gray, LCD projector
(Plot: measured R+G+B output compared with gray for an LCD projector.)
98
More depth on gamma: Poynton, "Gamma and its disguises: The nonlinear mappings of intensity in perception, CRTs, film and video," SMPTE Journal, 1993.
99
Halftoning The problem with ink: it’s opaque
Screening: luminance range is accomplished by printing with dots of varying size. Collections of big dots appear dark, small dots appear light. % of area covered gives darkness.
101
Halftoning references
A commercial but good set of tutorials.
Digital Halftoning, by Robert Ulichney, MIT Press, 1987.
Stochastic halftoning.
102
Color halftoning needs screens at different angles to avoid moiré, and differential color weighting due to the nonlinear visual color response and its spatial frequency dependencies.
103
Halftone ink May not always be opaque
Three inks can give 2^3 = 8 distinct colors. The visual system gives more, since dot size and spacing yield intensity, giving a somewhat additive system. Highly nonlinear; see Berns et al., The Spectral Modeling of Large Format Ink Jet Printers.
104
From http://www.matrixcolor.com/
108
(Figure: halftone screens at angles 108°, 162°, 90°, and 45°.)
109
Quantization. If there are too few levels of gray (e.g. after decreasing the halftone spot size to increase spatial resolution), then boundaries between adjacent gray levels become apparent. This can happen in color halftoning also.
110
Saturation: distance from the white point. Adding white desaturates but does not change hue or perceptual brightness. The HSB model is an approximate representation of this; see Photoshop.
111
Device Independence Calibration to standard space
typically CIE XYZ Coordinate transforms through standard space Gamut mapping
112
Device independence. Stone et al., "Color Gamut Mapping and the Printing of Digital Color Images", ACM Transactions on Graphics, 7(4), October 1988. The following slides refer to their techniques.
113
Device to XYZ. Sample the gamut in device space on an 8x8x8 mesh (7x7x7 = 343 cubes). Measure (or model) the device on the mesh. Interpolate with trilinear interpolation; for a small mesh and a reasonable function XYZ = f(device1, device2, device3) this approximates interpolating to the tangent (see the sketch below).
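A minimal trilinear-interpolation sketch over such a measured 8x8x8 table (NumPy); the table here is random placeholder data standing in for actual measurements, and the function name is mine.

    import numpy as np

    N = 8
    lut = np.random.rand(N, N, N, 3)    # placeholder for the measured XYZ values at the mesh points

    def device_to_xyz(d, lut=lut):
        """Trilinear interpolation of an N x N x N table; d is a device coordinate in [0, 1]^3."""
        g = np.clip(np.asarray(d, float) * (N - 1), 0, N - 1 - 1e-9)
        i0 = g.astype(int)              # lower corner of the containing cube
        f = g - i0                      # fractional position inside that cube
        out = np.zeros(3)
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((f[0] if dx else 1 - f[0]) *
                         (f[1] if dy else 1 - f[1]) *
                         (f[2] if dz else 1 - f[2]))
                    out += w * lut[i0[0] + dx, i0[1] + dy, i0[2] + dz]
        return out

    print(device_to_xyz([0.3, 0.5, 0.9]))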
114
XYZ to Device. Invert the function XYZ = f(device1, device2, device3).
This is hard to do in general if f is ill behaved. At least make f monotonic by throwing out distinct points with the same XYZ, e.g. for a CMY device (continued on the next slides).
115
XYZ to CMY. Invert the function XYZ = f(c, m, y).
Given XYZ = [x, y, z], we want to find CMY = [c, m, y] such that f(CMY) = XYZ. Consider the components X(c,m,y), Y(c,m,y), Z(c,m,y). For the trilinear interpolant used here, the max and min over each mesh cube occur at the cube vertices. Also, if a continuous function has opposite signs at two points, it is zero somewhere in between.
116
XYZ to CMY. Given X0, find [c, m, y] such that X(c, m, y) = X0.
If [ci, mi, yi] and [cj, mj, yj] are vertices of a given cube and U = X(c,m,y) - X0 has opposite signs at them, then U is zero somewhere in the cube; similarly for Y and Z. If we find such vertex pairs for all of X0, Y0, Z0, then the found cube contains the desired point (use interpolation within it). Doing this recursively will find the desired point if there is one (see the sketch below).
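A sketch of that recursive sign-test search in Python; `f` is any forward model such as the interpolator above, and the subdivision depth, default bounds, and function name are my choices. It illustrates the idea rather than providing a complete, robust root finder.

    import itertools
    import numpy as np

    def find_cmy(f, target, lo=np.zeros(3), hi=np.ones(3), depth=12):
        """Search for cmy in [lo, hi] with f(cmy) close to target (a 3-vector, e.g. XYZ).

        Uses the corner sign test from the slide on each of the 8 sub-cubes: every
        component of f - target must change sign (or vanish) among a sub-cube's
        corners for that sub-cube to be kept; recursion then refines the estimate."""
        target = np.asarray(target, float)
        if depth == 0:
            return (lo + hi) / 2.0
        mid = (lo + hi) / 2.0
        for corner in itertools.product((0, 1), repeat=3):
            c_lo = np.where(corner, mid, lo)
            c_hi = np.where(corner, hi, mid)
            vals = np.array([f(np.where(v, c_hi, c_lo)) - target
                             for v in itertools.product((0, 1), repeat=3)])
            if np.all(vals.min(axis=0) <= 0) and np.all(vals.max(axis=0) >= 0):
                return find_cmy(f, target, c_lo, c_hi, depth - 1)
        return None   # no sub-cube passes the test: target is likely out of gamut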
117
Gamut Mapping. Criteria:
preserve the gray axis of the original image
maximum luminance contrast
few colors map outside the destination gamut
hue and saturation shifts minimized
increase, rather than decrease, saturation
do not violate color knowledge, e.g. sky is blue, fruit colors, skin colors
118
Gamut Mapping. Special colors and problems:
Highlights: this is a luminance issue, so it is about the gray axis.
Colors near black: the locus of these colors in the image gamut must map to a region of reasonably similar shape, or contrast and saturation will be wrong.
119
Gamut Mapping. Special colors and problems:
Highly saturated colors (far from the white point): printers are often incapable of reproducing them.
Colors on the image gamut boundary that occupy large parts of the image should map inside the target gamut, else they all have to be projected onto the target boundary.
120
Gamuts CRT Printer
121
Gamut Mapping First try: map black points and fill destination gamut.
122
(Diagram: the device gamut and the image gamut.)
123
(Diagram: translate the image black point Bi to the device black point Bd; bs = black shift.)
124
(Diagram: after translating Bi to Bd, scale by the contrast scaling factor csf.)
125
(Diagram: after translating Bi to Bd and scaling by csf, rotate.)
126
Gamut Mapping
Xd = Bd + csf·R·(Xi - Bi)
where Bi = image black, Bd = destination black, R = rotation matrix, csf = contrast scaling factor, Xi = image color, Xd = destination color.
Problems: image colors near black that fall outside the destination gamut are especially bad: loss of detail, hue shifts due to quantization error, ...
127
Xd = Bd + csf·R·(Xi - Bi) + bs·(Wd - Bd): shift and scale along the destination gray axis (Wd = destination white; see the sketch below).
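The combined transform as a small NumPy sketch; the identity default for R is purely a placeholder for whatever rotation the method chooses, and the function name is mine.

    import numpy as np

    def map_color(Xi, Bi, Bd, Wd, csf, bs, R=np.eye(3)):
        """Xd = Bd + csf * R @ (Xi - Bi) + bs * (Wd - Bd); all colors are 3-vectors (e.g. XYZ)."""
        Xi, Bi, Bd, Wd = (np.asarray(v, float) for v in (Xi, Bi, Bd, Wd))
        return Bd + csf * (R @ (Xi - Bi)) + bs * (Wd - Bd)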
128
Fig 14a: bs > 0, csf small; the image gamut maps entirely into the printer gamut, but contrast is low.
Fig 14b: bs = 0, csf large; more contrast and more colors inside the printer gamut, but also more outside.
129
Saturation control: the "umbrella transformation".
[Rs Gs Bs] = monitor white point; [Rn Gn Bn] = new RGB coordinates such that Rs + Gs + Bs = Rn + Gn + Bn and [Rn Gn Bn] maps inside the destination gamut. First map R·Rs + G·Gs + B·Bs to R·Rn + G·Gn + B·Bn, then map into printer coordinates. This makes minor hue changes, but "relative" colors are preserved, and achromatic colors remain achromatic.
130
Projective Clipping. After all this, some colors remain outside the printer gamut. Project these onto the gamut surface: try a perpendicular projection to the nearest triangular face of the printer gamut surface; if none, find a perpendicular projection to the nearest edge on the surface; if none, use the closest vertex.
131
Projective Clipping. This is the closest point on the surface to the given color. The result is a continuous projection if the gamut is convex, but not otherwise. Bad: we want nearby image colors to remain nearby in the destination gamut.
132
Projective Clipping Problems
Printer gamuts have their worst concavities near the black point, giving quantization errors. Nearest-point projection uses Euclidean distance in XYZ space, but that is not perceptually uniform; try CIELAB? S-CIELAB? Keep out-of-gamut distances small at the cost of using less than the full printer gamut.
133
Color Management Systems
Problems: solve gamut matching issues; attempt uniform appearance.
Solutions: image-dependent manipulations (e.g. Stone); device-independent image editors (e.g. Photoshop) with embedded CMS; ICC profiles.
134
ICC Color Profiles (International Color Consortium). An ICC profile contains: a device description; text; characterization data; calibration data; invertible transforms to a fixed virtual color space, the Profile Connection Space (PCS).
135
Profile Connection Space
Presently there are only two PCSs: CIELAB and CIEXYZ, both specified with a D50 white point. Device <-> PCS transforms must account for viewing conditions, gamut mapping and tone (e.g. gamma) mapping.
136
Viewing-condition independent space
(Workflow diagram: input image and device (e.g. RGB) -> input device colorimetric characterization -> device-independent color space (e.g. XYZ) -> chromatic adaptation and color appearance models -> viewing-condition independent space, where gamut mapping, tone control, etc. are applied (the PCS); then the symmetric path back through the appearance models and the output device colorimetric characterization to the output image and device (e.g. CMY).)
137
ICC Profiles
Device profiles
Colorspace profiles: data conversion
Device Link profile: concatenated D1 -> PCS -> D2
Abstract profile: generic, for private purposes, e.g. special effects
138
ICC Profiles: Named color profile. Allows data described in the Pantone system (and others?) to map to other devices, e.g. for viewing. Supported in Photoshop. (Photoshop demo: nearest Pantone, etc., colors to a selected color.)
139
ICC Profile Data Tags. Profile header tags (administrative and descriptive):
Start of header
Byte count of profile
Profile version number
Profile or device class (input, display, output, link, colorspace, abstract, named color profile)
PCS target (CIEXYZ or CIELab)
140
ICC Profile Data Tags. Profile header tags (continued):
ICC-registered device manufacturer and model
Media attributes: 64 attribute bits, 32 reserved (reflective/transparent; glossy/matte; ...)
XYZ of the illuminant
Rendering intent (perceptual, relative colorimetry, saturation, absolute colorimetry)
141
ICC Profile Rendering Intents
perceptual: "full gamut of the image is compressed or expanded to fill the gamut of the destination device. Gray balance is preserved but colorimetric accuracy might not be preserved." (ICC Spec Clause 4.9)
saturation: "specifies the saturation of the pixels in the image is preserved perhaps at the expense of accuracy in hue and lightness." (ICC Spec Clause 4.12)
absolute colorimetry: relative to the illuminant only
relative colorimetry: relative to the illuminant and the media white point
142
ICC Profile Data Tags. Tone Reproduction Curve (TRC) tags:
grayTRC, redTRC, greenTRC, blueTRC
a single number (gamma) if the TRC is a pure gamma curve, or an array of samples of the TRC suitable for interpolation
143
ICC Profile Data Tags. Mapping tags ("AtoB0Tag", "BtoA0Tag", etc.):
map between device and PCS; they include a 3x3 matrix if the mapping is a linear map of CIEXYZ spaces, or a lookup table on sample points if not.
144
ICC Profile Special Goodies
Intimate with PostScript: support for PostScript Color Rendering Dictionaries reduces processing in the printer; support for argument lists to PostScript Level 2 color handling.
Halftone screen geometry and frequency.
Undercolor removal.
Embedding profiles in pict, gif, tiff, jpeg, eps.
145
Digital Cameras CCD (Monochrome) RGB Color Filter Array
146
V(λ): XYZ = [0, 1, 0]; L*a*b* = [9, -39, 15]; RGB = [0, 38, 0].
Thus a suitable green filter can be an approximation to the luminance channel.
147
Color Filter Arrays RGB Color Filter Array
Green is a perceptually reasonable achromatic channel, hence we need more spatial resolution in green, so there are twice as many green samples as red or blue. But each output pixel needs implied R, G, B values, so we must calculate what was not sampled.
148
Color Filter Arrays RGB Color Filter Array
Green is perceptually reasonable achromatic channel Demosaic by averaging at intersections or by interpolation at centers or by other methods
149
CFA Demosaic Techniques: Luminance channel
First find the appropriate luminance (i.e. green for an RGB CFA) at pixels not sampled by the green filter.
Linear filtering: a simple average of all adjacent green values, or a Gaussian or other weighted average (see Photoshop). All of these blur edges.
Instead, use edge-detection algorithms and average along edges rather than across them; this requires more computation.
150
CFA Demosaic Techniques: Chrominance channels
We need two chrominance channels at each pixel (or at intersections): CR = R - G, CB = B - G.
At blue and red pixels a green value was already computed in the luminance step, so CR and CB are easy there. (Diagram: the lattice of CR and CB samples interleaved with G.) For green pixels, average the adjacent horizontal chrominances to get CR and the adjacent vertical ones to get CB. A rough sketch of the green interpolation step follows.
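A rough NumPy sketch of the luminance (green) step; the particular Bayer layout assumed here (green at (even,even) and (odd,odd), red at (even,odd), blue at (odd,even)) and the plain 4-neighbour averaging are my assumptions, standing in for the simple "linear filtering" option rather than an edge-directed method.

    import numpy as np

    def interpolate_green(raw):
        """Estimate the green (luminance) channel at every pixel of a Bayer mosaic.

        Non-green sites get the plain average of their four green neighbours.
        Chrominance (CR = R - G, CB = B - G) would then be computed at the red/blue
        sites and spread to the remaining pixels by neighbour averaging."""
        H, W = raw.shape
        G = raw.astype(float).copy()
        pad = np.pad(G, 1, mode='edge')
        avg4 = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
        non_green = np.zeros((H, W), dtype=bool)
        non_green[0::2, 1::2] = True    # red sites (assumed layout)
        non_green[1::2, 0::2] = True    # blue sites (assumed layout)
        G[non_green] = avg4[non_green]
        return G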
151
Digital Cameras: Other issues
Aliasing due to undersampling White balance to correct for illuminant Characterization (XYZ of primaries) Calibration (tables of correction to known color patches, suitable for correction of all colors with linear methods) Poor demosaic algorithms. See Wandell and Silverstein p. 14.
152
JPEG DCT Quantization. FDCT of 8x8 blocks.
Order the coefficients in increasing spatial frequency (zigzag). Low frequencies carry more shape information and get finer quantization; high frequencies are often very small, so they go to zero after quantizing. If the source has 8-bit entries (s in [-2^7, 2^7 - 1]), one can show that the quantized DCT needs at most 11 bits (c in [-2^10, 2^10 - 1]). See the Wallace paper, p. 12. Note that the high-frequency contributions are small.
153
JPEG DCT Quantization. Q(u,v) is an 8x8 table of integers in [1..255].
FQ(u,v) = Round(F(u,v)/Q(u,v)).
There can be one quantizer table per image component; a non-monochrome image typically has the usual one luminance and two chromatic channels. Quantization tables can be stored in the file or referenced to a standard; the standard quantizer is based on JNDs. See Wallace p. 12 (and the sketch below).
154
JPEG DCT Intermediate Entropy Coding
Variable length code (Huffman): high-occurrence symbols are coded with fewer bits.
Intermediate code: symbol pairs. Symbol-1 is chosen from a table of symbols s_{i,j}, where i is the run length of zeros preceding a quantized DCT amplitude and j is the length of the Huffman coding of that amplitude; i = 0…15, j = 1…10, with s_{0,0} = 'EOB' and s_{15,0} = 'ZRL'. Symbol-2 is the Huffman encoding of the DCT amplitude. Finally, these 162 symbols are Huffman encoded.
155
JPEG components
Y = 0.299R + 0.587G + 0.114B
Cb = -0.1687R - 0.3313G + 0.5B
Cr = 0.5R - 0.4187G - 0.0813B
Optionally subsample Cb and Cr: replace each pixel pair with its average. Not much loss of fidelity; this reduces the data by 1/2·1/3 + 1/2·1/3 = 1/3. More shape information is in the achromatic component than in the chromatic ones (color vision is poor at localization).
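These component equations as a NumPy sketch, together with the 2:1 horizontal chroma averaging mentioned above; note that the middle coefficients here follow the standard JFIF definition, since the slide only preserves some of them, so treat the exact values as an assumption.

    import numpy as np

    def rgb_to_ycbcr(rgb):
        """rgb: array of shape (..., 3). JFIF-style luma/chroma conversion."""
        R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        Y  = 0.299 * R + 0.587 * G + 0.114 * B
        Cb = -0.1687 * R - 0.3313 * G + 0.5 * B
        Cr = 0.5 * R - 0.4187 * G - 0.0813 * B
        return np.stack([Y, Cb, Cr], axis=-1)

    def subsample_pairs(C):
        """Replace each horizontal pixel pair of a chroma channel with its average (even width assumed)."""
        return (C[..., 0::2] + C[..., 1::2]) / 2.0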
156
JPEG goodies Progressive mode - multiple scans, e.g. increasing spatial frequency so decoding gives shapes then detail Hierarchical encoding - multiple resolutions Lossless coding mode JFIF: User embedded data more than 3 components possible?
157
Huffman Encoding. Traverse from the root to a leaf to read off each symbol's code, then repeat. (Figure: an example Huffman tree; frequent symbols such as s1, s2, s3 receive short codes like 00, 01, 11, while rarer symbols such as s5 and s6 receive longer codes like 1010 and 1011. A sketch of the construction follows.)
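For reference, a compact Huffman-code construction in Python using the standard textbook heap-merging algorithm; the symbol frequencies in the example are made up.

    import heapq

    def huffman_codes(freqs):
        """freqs: dict symbol -> frequency. Returns dict symbol -> bit string."""
        heap = [[f, i, {s: ''}] for i, (s, f) in enumerate(freqs.items())]
        heapq.heapify(heap)
        count = len(heap)                        # unique tie-breaker for merged nodes
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
            f2, _, c2 = heapq.heappop(heap)
            for s in c1: c1[s] = '0' + c1[s]     # prepend a bit as the subtrees merge
            for s in c2: c2[s] = '1' + c2[s]
            c1.update(c2)
            heapq.heappush(heap, [f1 + f2, count, c1])
            count += 1
        return heap[0][2]

    print(huffman_codes({'s1': 45, 's2': 13, 's3': 12, 's4': 5}))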
158
MPEG. MPEG is to temporal compression as JPEG is to static compression: it uses known temporal psychophysics, both visual and audio, and exploits temporal redundancy for inter-frame coding (most of a picture doesn't change very fast).
159
MPEG Data Organization
Inter-frame differences within small blocks: code the difference (good if there is not much motion) or code a motion vector (good if the block translates).
Three kinds of frames:
I (Intra): "still" or reference frame, e.g. JPEG
P (Predictive): coded relative to the previous I or P
B (Bidirectional): coded relative to both the previous and the next I or P
160
MPEG Data Organization
Goals of inter-frame coding: high bit rate random access Costs memory but memory is now cheap, hence HDTV arriving
161
Color TV. Multiple standards: US, two in Europe, HDTV standards, digital HDTV, Japanese analog. US: 525 lines. (US HDTV is digital, and the data stream defines the resolution; it is typically MPEG encoded to provide 1088 lines, of which 1080 are displayed.)
162
NTSC Analog Color TV 525 lines/frame Interlaced to reduce bandwidth
small interframe changes help Primary chromaticities:
163
NTSC Analog Color TV. These yield RGB2XYZ = … and
Y = 0.299R + 0.587G + 0.114B (the same as the luminance channel for JPEG!); for R = G = B = 1 this gives the Y value of the white point.
Cr = R - Y and Cb = B - Y, with chromaticities Cr: x = 1.070, y = 0; Cb: x = 0.131, y = 0. Since y(C) = 0, Y(C) = 0, so the difference signals are achromatic.
164
NTSC Analog Color TV Signals are gamma corrected under assumption of dim surround viewing conditions (high saturation). Y, Cr, Cb signals (EY, Er, Eb) are sent per scan line; NTSC, SECAM, PAL do this in differing clever ways EY typically with twice the bandwidth of Er, Eb
165
NTSC Analog Color TV Y, Cr, Cb signals (EY, Er, Eb) are sent per scan line; NTSC, SECAM, PAL do this in differing clever ways. EY with 4-10 x bandwidth of Er, Eb “Blue saving”
166
Digital HDTV
1987: FCC seeks proposals for advanced TV. The broadcast industry wants analog, with 2x the lines of NTSC, for compatibility; the computer industry wants digital.
February 1993: DHDTV demonstrated in four incompatible systems.
May 1993: Grand Alliance formed.
167
Digital HDTV. 1996 (Dec 26): FCC accepts the Grand Alliance proposal of the Advanced Television Systems Committee (ATSC). 1999: first DHDTV broadcasts.
168
Digital HDTV: MPEG video compression, Dolby AC-3 audio compression. Format table:
lines   h-pixels   aspect ratio   scan          frame rate
…       …          …/9            progressive   24, 30 or 60
…       …          …/9            interlaced    …
…       …          …/9            progressive   …, 30
169
Some gamuts SWOP ENCAD GA ink
170
Color naming A Computational model of Color Perception and Color Naming, Johann Lammens, Buffalo CS Ph.D. dissertation Cross language study of Berlin and Kay, 1969 “Basic colors”
171
Color naming “Basic colors”
Meaning not predicted from parts (e.g. blue, yellow, but not bluish).
Not subsumed in another color category (e.g. red, but not crimson or scarlet).
Can apply to any object (e.g. brown, but not blond).
Highly meaningful across informants (red, but not chartreuse).
Ask the audience to give the basic colors they can identify in the Berlin-Kay colors (adapted by Lammens).
172
Color naming “Basic colors” Vary with language
173
Color naming Berlin and Kay experiment:
Elicit all basic color terms from 329 Munsell chips (40 equally spaced hues x 8 values, plus 9 neutral chips). Find the best representative of each term. Find the boundaries of that term.
174
Color naming Berlin and Kay experiment:
The representative (the "focus") is constant across languages; the boundaries vary, even across subjects and trials. Lammens fits a linear-plus-sigmoid model to each of the R-G, B-Y and brightness responses in the macaque monkey LGN data of DeValois et al. (1966) to get a color model. As usual this has two chromatic channels and one achromatic.
175
Color naming To account for boundaries Lammens used standard statistical pattern recognition with the feature set determined by the coordinates in his color space defined by macaque LGN opponent responses. Has some theoretical but no(?) experimental justification for the model.