Color constancy at a pixel [Finlayson et al., CIC8, 2000]. Idea: plot log(R/G) vs. log(B/G). [Plot: 24 patches under 14 daylights.]
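A minimal sketch (assuming NumPy and linear, strictly positive camera RGB values; not code from the slides) of how the log-chromaticity coordinates behind such a plot could be computed:

import numpy as np

def log_chromaticity(rgb):
    # rgb: (N, 3) array of linear sensor responses, assumed > 0 so the logs exist.
    rgb = np.asarray(rgb, dtype=float)
    log_rg = np.log(rgb[:, 0] / rgb[:, 1])   # log(R/G)
    log_bg = np.log(rgb[:, 2] / rgb[:, 1])   # log(B/G)
    return log_rg, log_bg

# Plotting these pairs for 24 patches measured under 14 daylights reproduces
# the kind of scatter shown on the slide.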
[Plot: log(B/G) vs. log(R/G).] For every patch, the direction of the shift caused by a change in light color is about the same!
Why are the plots linear, and all in the same direction? Start from the image formation equation:

ρ_k = σ ∫ E(λ) S(λ) Q_k(λ) dλ,  k = 1..3

(ρ_k = color, σ = shading/intensity, E(λ) = light SPD, S(λ) = reflectance, Q_k(λ) = sensor). Now let's make some assumptions.
Assumption 1: Light is ~ Planckian (or satisfies some other 1D assumption). Wien's approximation of a Planckian source: E(λ, T) ≈ I c₁ λ⁻⁵ e^(−c₂/(λT)). Note the single 1D parameter: T == temperature == light color.
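As an illustrative sketch (not from the slides), Wien's approximation can be evaluated directly; the radiation constants below are the standard tabulated values, and the overall intensity I is a free scale factor:

import numpy as np

C1 = 3.74183e-16   # first radiation constant, W·m^2
C2 = 1.4388e-2     # second radiation constant, m·K

def wien_spd(wavelength_m, T, intensity=1.0):
    # E(lambda, T) ≈ I * c1 * lambda^-5 * exp(-c2 / (lambda * T));
    # the single parameter T (temperature) plays the role of "light color".
    lam = np.asarray(wavelength_m, dtype=float)
    return intensity * C1 * lam ** -5 * np.exp(-C2 / (lam * T))

# e.g. a 31-sample 400-700 nm SPD for a 6500 K Planckian light:
E = wien_spd(np.linspace(400e-9, 700e-9, 31), T=6500.0)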
Assumption 2: Narrow-band sensors. The Sony DXC-930 camera has fairly narrow-band sensitivities. Using spectral sharpening, we can make almost all sensor sets have this property [Finlayson, Drew, Funt].
Modified image formation: substituting the narrow-band and Planckian assumptions, the k-th response becomes ρ_k = σ I c₁ λ_k⁻⁵ e^(−c₂/(λ_k T)) S(λ_k) q_k. Taking logs: log ρ_k = log(σI) + log(c₁ λ_k⁻⁵ S(λ_k) q_k) − c₂/(λ_k T), i.e. response = light intensity + surface + light color.
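The same derivation in display form (a reconstruction, writing the narrow-band sensors as Q_k(λ) = q_k δ(λ − λ_k)):

\begin{align*}
\rho_k &= \sigma \int E(\lambda,T)\, S(\lambda)\, Q_k(\lambda)\, d\lambda
        = \sigma\, I\, c_1 \lambda_k^{-5} e^{-c_2/(\lambda_k T)}\, S(\lambda_k)\, q_k,\\
\log \rho_k &= \underbrace{\log(\sigma I)}_{\text{light intensity}}
  + \underbrace{\log\!\left(c_1 \lambda_k^{-5} S(\lambda_k)\, q_k\right)}_{\text{surface}}
  \;\underbrace{-\,\frac{c_2}{\lambda_k T}}_{\text{light color}}.
\end{align*}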
Implications: we have k equations of the form log ρ_k = log(σI) + s_k + e_k/T, with s_k = log(c₁ λ_k⁻⁵ S(λ_k) q_k) and e_k = −c₂/λ_k. The term log(σI) is common to all equations and can be removed by simple differencing at this pixel. This results in k−1 independent equations of the form log(ρ_k/ρ_p) = (s_k − s_p) + (e_k − e_p)/T: a reflectance term plus a light-color term.
Implications: (1) If there are 3 sensors, we have two independent equations of this form. (2) For a single surface viewed under different colored lights, the log chromaticities must fall on a line. (3) Different surfaces induce lines with the *same* orientation. [Plot: the log chromaticities of 7 surfaces viewed under 10 lights.]
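A small simulation of implications (2) and (3) under the model above; the wavelengths, reflectances, and temperatures are arbitrary illustrative values, not the slide's data:

import numpy as np

C1, C2 = 3.74183e-16, 1.4388e-2
lam = np.array([610e-9, 540e-9, 450e-9])        # assumed R, G, B center wavelengths
temps = np.linspace(2500.0, 10000.0, 10)        # 10 lights of different color temperature
rng = np.random.default_rng(0)
surfaces = rng.uniform(0.1, 0.9, size=(7, 3))   # 7 surfaces, reflectance sampled at lam

for S in surfaces:
    # narrow-band + Wien responses, with shading, intensity and q_k folded into one scale
    rho = C1 * lam ** -5 * np.exp(-C2 / (lam * temps[:, None])) * S
    chi = np.stack([np.log(rho[:, 0] / rho[:, 1]),
                    np.log(rho[:, 2] / rho[:, 1])], axis=1)
    d = np.linalg.svd(chi - chi.mean(axis=0))[2][0]
    d *= np.sign(d[0])                          # fix the SVD sign ambiguity
    print(np.round(d, 4))                       # the same direction for every surface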
One degree of freedom is invariant to light change. [Plot: log-chromaticity space with the luminance direction and the 1D invariant (gray) direction.]
More formally: form the ratios r_j = ρ_j/ρ_p (j ≠ p, with p a reference channel) and define the 2-vectors χ = log r, s from the differences (s_j − s_p), and e from (e_j − e_p). Then χ = s + (1/T) e: as the light color T varies, χ traces a line in 2D.
What is this good for? With certain restrictions, from a 3-band color image we can derive a 1-D grayscale image which is:
- illuminant invariant
- and so, shadow free
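A minimal sketch of that derivation, assuming the light-change direction e_dir in (log(R/G), log(B/G)) space is already known for the camera (finding it is discussed a few slides on):

import numpy as np

def invariant_gray(rgb, e_dir):
    # rgb: (H, W, 3) linear, strictly positive responses.
    # e_dir: unit 2-vector, the light-color change direction in log-chromaticity space.
    rgb = np.asarray(rgb, dtype=float)
    chi = np.stack([np.log(rgb[..., 0] / rgb[..., 1]),
                    np.log(rgb[..., 2] / rgb[..., 1])], axis=-1)
    orth = np.array([-e_dir[1], e_dir[0]])   # direction orthogonal to the light change
    gray = chi @ orth                        # the 1-D illuminant-invariant coordinate
    return np.exp(gray)                      # back to a positive "grayscale" range

# Because the projection removes the (1/T)e component, a surface seen in and out of
# shadow maps to approximately the same gray value.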
Then use edge information to integrate back, without shadows [Finlayson, Hordley, and Drew, ECCV 2002]. The edge maps are approximately the same, except that the invariant edge map has no shadow edges.
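A rough sketch of the edge-comparison step only (the thresholds and names are invented; the re-integration itself, a Poisson-type solve on the modified gradients, is omitted):

import numpy as np

def shadow_edge_mask(log_channel, invariant, t_orig=0.1, t_inv=0.02):
    # Mark pixels whose gradient is strong in the original log channel but weak in
    # the invariant image: these are candidate shadow edges.  Zeroing the original
    # gradients there and re-integrating each channel yields a shadow-free image.
    gy, gx = np.gradient(log_channel)
    iy, ix = np.gradient(invariant)
    return (np.hypot(gx, gy) > t_orig) & (np.hypot(ix, iy) < t_inv)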
Other tasks: tracking, etc. [Figure: tracking result for a moving hand under lamp light; Jiang and Drew, 2003.]
But there is a problem: this doesn't always remove all shadows; it depends on the camera sensors.
How do we find the light-color change direction? [Plot: mean-subtracted log-chromaticities for the Sony DXC-930 camera.] Use a robust line-finder.
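The slides call for a robust line-finder; as a simpler, non-robust stand-in, the pooled mean-subtracted log-chromaticities can be fed to an SVD to make the geometry concrete (the calibration-data layout is assumed):

import numpy as np

def light_direction(chi_per_patch):
    # chi_per_patch: list of (N_lights, 2) arrays, the log-chromaticities of each
    # calibration patch under many lights.  Mean-subtracting each patch removes its
    # surface-dependent offset; the pooled residuals share one dominant direction.
    pooled = np.concatenate([c - c.mean(axis=0) for c in chi_per_patch], axis=0)
    _, _, vt = np.linalg.svd(pooled, full_matrices=False)
    return vt[0]    # unit 2-vector: the estimated light-color change direction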
Problem: the invariant image isn't invariant across illuminants.
It gets worse: the Kodak DCS420 camera's sensors are much less sharp.
How to proceed? Try spectral sharpening, since we wish to make the sensors more narrow-band. Or just optimize directly, making the invariant image more invariant, e.g. by optimizing the color-matching functions:
Invariant image for patches: apply the optimized sensors to any image. [Figures: before vs. after optimization of the sensors.]
How to optimize? Firstly, let's use a linear matrixing transform, taking the 31 × 3 sensor matrix Q to a new sensor set Q′ = QM, with M a 3 × 3 matrix.
Should we sharpen to get M? There's a problem: if we made sensors that were all the same, the definition makes the invariant go to zero… the more the sensors are alike, the "better". Sharpening and flattening both work…
So we need a term to steer away from a rank-reduced M: optimize on the squared correlation coefficient R² of the line fit, and also encourage a high effective rank of M, measured from the singular values of M. Initialize with the data-driven spectral-sharpening matrix.
So optimize M. E.g., for the color-matching functions, R² goes from 0.43 to 0.94.
HP912 camera: R² goes from 0.86 to 0.93; entropy goes from 5.856 to 5.590 bits/pixel.
Real image: entropy goes from 5.295 to 4.939 bits/pixel with an optimized M.
Thanks!