
1 Spatiochromatic Vision Models for Imaging with Applications to the Development of Image Rendering Algorithms and Assessment of Image Quality. Jan P. Allebach, School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana. allebach@purdue.edu. CIC-19, San Jose, CA, 8 November 2011

2 2/120 What is a model? From dictionary.com: a schematic description of a system, theory, or phenomenon that accounts for its known or inferred properties and may be used for further study of its characteristics. A model is not a complete description of the phenomenon being modeled: it should capture only what is important to the application at hand, and nothing more, and its structure must be responsive to resource constraints.

3 CIC-19, San Jose, CA, 8 November 2011 3/120 Visual system components

4 CIC-19, San Jose, CA, 8 November 2011 4/120 Why do we need spatiochromatic models? Imaging systems succeed by providing a facsimile of the real world A few primaries instead of an exact spectral match Spatially discretized and amplitude-quantized representation of images that are continuous in both space and amplitude These methods succeed only because of the limitations of the human visual system (HVS) To design the lowest-cost systems that achieve the desired objective, it is necessary to take the human visual system into account in both design and evaluation

5 CIC-19, San Jose, CA, 8 November 2011 5/120 Modeling context The modeling process is highly dependent on the intended application -Motivation for developing the models in the first place -Governs choice of features to be captured and computational structure of the model -Provides the final test of the success of the model Tight interplay between models for imaging system components and the human visual system Model usage may be either embedded or external

6 CIC-19, San Jose, CA, 8 November 2011 6/120 Pedagogical approach Spatiochromatic modeling, in principle, builds on all of the following areas: -Color science -Imaging science -Psychophysics -Image systems engineering As stated in the course description, we assume only a rudimentary knowledge of these subjects Start from basic principles, but move quickly to a more advanced level Focus on what is needed to follow the modeling discussion See references at end

7 CIC-19, San Jose, CA, 8 November 2011 7/120 Synopsis of tutorial General framework for spatiochromatic models for the HVS Introduction to digital halftoning Application of spatiochromatic models to design of color halftones Overview of use of HVS models in image quality assessment Color Image Fidelity Assessor

8 CIC-19, San Jose, CA, 8 November 2011 8/120 For further information There is an extensive list of references at the end of these notes. The powerpoint presentation may be downloaded from the web: https://engineering.purdue.edu/~ece638/

9 CIC-19, San Jose, CA, 8 November 2011 9/120 The retinal image is what counts Every spatiochromatic model has an implied viewing distance What happens when this condition is not met? -Too far – image looks better than specification -Too close – may see artifacts

10 CIC-19, San Jose, CA, 8 November 2011 10/120 Basic spatiochromatic model structure

11 CIC-19, San Jose, CA, 8 November 2011 11/120 Trichromatic stage First proposed by Young in 1801 Ignored for over 50 years Helmholtz revived the concept in the Handbook of Physiological Optics (1856-1866) Physiological basis is the existence of three different cone types in the retina Accurately predicts the results of color matching experiments over a wide range of conditions (Figure: simulated retinal mosaic)

12 CIC-19, San Jose, CA, 8 November 2011 12/120 Trichromatic sensor model The response of the sensor to a spectral stimulus f(λ) is t_k = ∫ f(λ) s_k(λ) dλ, k = 1, 2, 3, where the s_k(λ) are spectral response functions that characterize the sensor For the human visual system, the spectral response functions can be measured indirectly through color matching experiments

13 CIC-19, San Jose, CA, 8 November 2011 13/120 Transformation between tristimulus representations The trichromatic sensor model is applicable to a wide range of image capture devices, such as cameras and scanners, as well as the human visual system If the spectral response functions are a linear transformation of those corresponding to the human visual system, then we call the 3-tuple response the tristimulus coordinate of that spectral power distribution For any two sets of spectral response functions that are both linear transformations of those for the human visual system, we can use a 3x3 matrix to transform between the corresponding tristimulus coordinates for any spectral stimulus The color matching functions for visually independent primaries are also equivalent to the cone responses of the human visual system
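As a concrete sketch of the 3x3 transformation between tristimulus representations: the matrix below is an arbitrary invertible example, not one of the standard colorimetric matrices.

```python
import numpy as np

# Hypothetical 3x3 matrix M mapping tristimulus coordinates t1 (in one
# primary system) to t2 (in another); both systems are assumed to be
# linear transformations of the HVS cone responses, so M is invertible.
M = np.array([[0.41, 0.36, 0.18],
              [0.21, 0.72, 0.07],
              [0.02, 0.12, 0.95]])

t1 = np.array([0.5, 0.4, 0.3])   # tristimulus coordinates in system 1
t2 = M @ t1                      # same stimulus, system 2 coordinates

# The mapping is invertible, so system-1 coordinates can be recovered.
t1_back = np.linalg.inv(M) @ t2
```

Because the transform is linear and invertible, a match in one set of tristimulus coordinates implies a match in any other.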

14 CIC-19, San Jose, CA, 8 November 2011 14/120 Color matching experiment – setup

15 CIC-19, San Jose, CA, 8 November 2011 15/120 Color matching experiment - procedure Test stimulus is fixed Observer individually adjusts strengths of the three match stimuli to achieve a visual match between the two sides of the split field Mixture is assumed to be additive, i.e. radiant power of mixture in any wavelength interval is sum of radiant powers of the three match stimuli in the same interval To achieve a match with some test stimuli, it may be necessary to move one or two of the match stimuli over to the side where the test stimulus is located For visually independent primaries, match amounts are equivalent to tristimulus coordinates

16 CIC-19, San Jose, CA, 8 November 2011 16/120 Color matching functions A color matching experiment yields the amount of each of three primaries required to match a particular stimulus A special case is a monochromatic stimulus with wavelength λ If we repeat this experiment for all wavelengths, we obtain the color matching functions r̄(λ), ḡ(λ), b̄(λ) Since any stimulus f(λ) can be expressed as a weighted sum of monochromatic stimuli, the primary match amounts can be expressed as R = ∫ f(λ) r̄(λ) dλ, and similarly for G and B
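The weighted-sum computation of match amounts can be sketched numerically; the color matching functions below are synthetic Gaussian bumps for illustration, not real CIE data.

```python
import numpy as np

# Discretized version of t_j = integral of f(lambda) * cbar_j(lambda) dlambda,
# with made-up Gaussian "color matching functions" (NOT real CIE CMFs).
lam = np.arange(400.0, 701.0, 5.0)                  # wavelengths, nm

def gauss(center, width):
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

cmf = np.stack([gauss(600, 40),                     # "red" primary CMF
                gauss(550, 40),                     # "green" primary CMF
                gauss(450, 30)])                    # "blue" primary CMF

f = gauss(520, 30)          # spectral power distribution of a test stimulus

dlam = lam[1] - lam[0]
t = cmf @ f * dlam          # one match amount per primary, shape (3,)
```

For this greenish test stimulus, the middle (green) match amount dominates, as expected.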

17 CIC-19, San Jose, CA, 8 November 2011 17/120 CIE 1931 standard RGB observer Observer consists of color matching functions corresponding to monochromatic primaries Primaries -R – 700 nm -G – 546.1 nm -B – 435.8 nm The ratio of the radiances of the primaries is chosen to place the chromaticity of the equal-energy stimulus E at the center of the (r,g) chromaticity diagram, i.e. at (0.333, 0.333), so that the areas under the color matching functions are identical Based on observations in a 2 degree field of view using the color matching method discussed earlier

18 CIC-19, San Jose, CA, 8 November 2011 18/120 Color matching functions for 1931 CIE standard RGB observer

19 CIC-19, San Jose, CA, 8 November 2011 19/120 Relative luminous efficiency - a special case of color matching An achromatic sensor with response function V(λ), the relative luminous efficiency function, is called the standard photometric observer.

20 CIC-19, San Jose, CA, 8 November 2011 20/120 CIE 1931 standard XYZ observer The CIE also defined a second standard observer based on a linear transformation from the 1931 RGB color matching functions. The XYZ observer has the following properties: -The color matching functions are non-negative at all wavelengths. -The chromaticity coordinates of all realizable stimuli are non-negative. -The ȳ(λ) color matching function is equal to the relative luminous efficiency function. To achieve these properties, it was necessary to use primaries that are not realizable. The chromaticities of the primaries lie outside the spectral locus.

21 CIC-19, San Jose, CA, 8 November 2011 21/120 Color matching functions for 1931 CIE standard XYZ observer

22 CIC-19, San Jose, CA, 8 November 2011 22/120 Cone responses for human visual system* *Vos and Walraven, Vision Res., 1971

23 CIC-19, San Jose, CA, 8 November 2011 23/120 Chromaticity coordinates Chromaticity coordinates provide an important method for visualizing tristimulus coordinates, i.e. sensor responses or primary match amounts Let (T1, T2, T3) denote either the sensor response or the primary match amounts for a particular stimulus The corresponding chromaticity coordinates are defined as t_k = T_k / (T1 + T2 + T3), k = 1, 2, 3 We can see by inspection that each coordinate lies between 0 and 1 and that all three coordinates sum to 1
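The normalization above is a one-liner; for nonnegative tristimulus values the stated properties (each coordinate in [0, 1], sum equal to 1) follow directly.

```python
import numpy as np

def chromaticity(t):
    """Chromaticity coordinates of a tristimulus 3-vector t = (T1, T2, T3):
    t_k = T_k / (T1 + T2 + T3)."""
    t = np.asarray(t, dtype=float)
    return t / t.sum()

# Example with illustrative (made-up) nonnegative tristimulus values.
c = chromaticity([95.0, 100.0, 108.9])
```

Dividing by the sum discards overall intensity, which is why chromaticity diagrams show only two of the three coordinates.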

24 CIC-19, San Jose, CA, 8 November 2011 24/120 Chromaticity diagram for 1931 CIE standard XYZ observer

25 CIC-19, San Jose, CA, 8 November 2011 25/120 How do we use the trichromatic model? Assuming the image is in a standard color space, such as sRGB, we transform to CIE XYZ as follows: Remove gamma correction, and transform to linear RGB Perform a 3x3 matrix transform from linear RGB to CIE XYZ The CIE XYZ representation of the image will form the basis for further stages of the model

26 CIC-19, San Jose, CA, 8 November 2011 26/120 Basic spatiochromatic model structure

27 CIC-19, San Jose, CA, 8 November 2011 27/120 Opponent stage Trichromatic theory provides the basis for understanding whether or not two spectral power distributions will appear the same to an observer when viewed under the same conditions. However, trichromatic theory tells us nothing about the appearance of a stimulus. In the early 1900's, Ewald Hering observed some properties of color appearance -Red and green never occur together – there is no such thing as a reddish green, or a greenish red -If I add a small amount of blue to green, it looks bluish-green. If I add more blue to green, it becomes cyan. -In contrast, if I add red to green, the green becomes less saturated. If I add enough red to green, the color appears gray, blue, or yellow -If I add still more red to green, the color appears red, but never reddish green

28 CIC-19, San Jose, CA, 8 November 2011 28/120 Blue-yellow color opponency

29 CIC-19, San Jose, CA, 8 November 2011 29/120 Opponent stage (cont.) Hering postulated that there existed two kinds of neural pathways in the visual system -Red-Green pathway fires fast if there is a lot of red, fires slowly if there is a lot of green -Blue-Yellow pathway fires fast if there is a lot of blue, fires slowly if there is a lot of yellow Hering provided no experimental evidence for his theory, and it was ignored for over 50 years

30 CIC-19, San Jose, CA, 8 November 2011 30/120 Experimental evidence for opponency Hurvich and Jameson hue cancellation experiment (1955) Svaetichin electrophysiological evidence from the retinal neurons of a fish (1956) Boynton's color naming experiment (1965) Wandell's color decorrelation experiment Left and right plots show data for two different observers. Open triangles show cancellation of red-green appearance. Closed circles show cancellation of blue-yellow appearance.

31 CIC-19, San Jose, CA, 8 November 2011 31/120 Color spaces that incorporate opponency YUV (NTSC video standard space) YCrCb (Kodak PhotoCD space) L*a*b* (CIE uniform color space) YCxCz (linearized CIE L*a*b* space) O1O2O3 (Wandell's optimally decorrelated space)

32 CIC-19, San Jose, CA, 8 November 2011 32/120 CIE L*a*b* and its linearized version YCxCz in terms of CIE XYZ CIE L*a*b*: L* = 116 f(Y/Yn) - 16, a* = 500 [ f(X/Xn) - f(Y/Yn) ], b* = 200 [ f(Y/Yn) - f(Z/Zn) ], where f(x) = x^(1/3) for 0.008856 < x <= 1 and f(x) = 7.787x + 16/116 for 0 <= x <= 0.008856, and (Xn, Yn, Zn) is the white point. Linearized opponent color space YyCxCz: Yy = 116 (Y/Yn), Cx = 500 [ (X/Xn) - (Y/Yn) ], Cz = 200 [ (Y/Yn) - (Z/Zn) ]. Yy is a correlate of luminance, Cx is the R-G opponent chrominance channel, and Cz is the Y-B opponent chrominance channel.
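A minimal sketch of the linearized transform, assuming a D65 white point (the white-point values here are an assumption, not from the slide):

```python
# Linearized CIELAB (YyCxCz) from CIE XYZ. White point values are for
# D65 and are an assumption for this sketch.
XN, YN, ZN = 0.9505, 1.0, 1.089

def xyz_to_ycxcz(x, y, z):
    yy = 116.0 * (y / YN)            # correlate of luminance
    cx = 500.0 * (x / XN - y / YN)   # R-G opponent chrominance channel
    cz = 200.0 * (y / YN - z / ZN)   # Y-B opponent chrominance channel
    return yy, cx, cz

# A stimulus at the white point is achromatic: both chrominance
# channels are exactly zero.
yy, cx, cz = xyz_to_ycxcz(XN, YN, ZN)
```

Dropping the cube-root nonlinearity keeps the opponent decomposition while making the transform linear, which is what allows it to commute with the spatial filtering stage that follows.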

33 CIC-19, San Jose, CA, 8 November 2011 33/120 Wandell’s space in terms of CIE XYZ* *Wen Wu, “Two Problems in Digital Color Imaging: Colorimetry and Image Fidelity Assessor,” Ph.D. Dissertation, Purdue University, Dec. 2000

34 CIC-19, San Jose, CA, 8 November 2011 34/120 Visualization of opponent color representation: (13.3, o2, 0.17), (13.3, 0.24, o3), (Y, 0.24, 0.17), (Y, o2, o3)

35 CIC-19, San Jose, CA, 8 November 2011 35/120 Basic spatiochromatic model structure

36 CIC-19, San Jose, CA, 8 November 2011 36/120 Impact of viewing geometry on spatial frequencies Both arrows A and B generate the same retinal image For a small ratio h/D of object size h to viewing distance D, the angle subtended at the retina in radians is θ ≈ h/D

37 CIC-19, San Jose, CA, 8 November 2011 37/120 Spatial frequency conversion To convert between f_i (cycles/inch) viewed at distance D (inches) and f_a (cycles/degree) subtended at the retina, we thus have f_a = (π D / 180) f_i For a viewing distance of 12 inches, this becomes f_a ≈ 0.209 f_i
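The conversion is a one-line small-angle calculation:

```python
import math

# Spatial frequency at the retina (cycles/degree) from spatial frequency
# on the page (cycles/inch) at viewing distance D (inches). One degree
# subtends approximately D * pi/180 inches (small-angle approximation).
def cycles_per_degree(cycles_per_inch, distance_in):
    return cycles_per_inch * distance_in * math.pi / 180.0

# 300 dots/inch viewed at 12 inches -> sample frequency at the retina.
f = cycles_per_degree(300, 12)   # 20*pi, about 62.8 cycles/degree
```

This is why the same print looks smoother from farther away: the halftone texture moves to higher retinal frequencies, where contrast sensitivity is low.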

38 CIC-19, San Jose, CA, 8 November 2011 38/120 Spatial frequency filtering stage Based on psychophysical measurements of the contrast sensitivity function Use sinusoidal stimuli with modulation along achromatic, red-green, or blue-yellow axes For any fixed spatial frequency, the threshold of visibility depends only on the contrast ΔL/L, not on the luminance L itself. This is Weber's law.

39 CIC-19, San Jose, CA, 8 November 2011 39/120 Campbell's contrast sensitivity function on log-log axes

40 CIC-19, San Jose, CA, 8 November 2011 40/120 Dependence of sine wave visibility on contrast and spatial frequency

41 CIC-19, San Jose, CA, 8 November 2011 41/120 Models for achromatic spatial contrast sensitivity* Table lists the contrast sensitivity function and constants for each author: Campbell 1969, Mannos 1974, Nasanen 1984, Daly 1987 *Kim and Allebach, IEEE T-IP, March 2002

42 CIC-19, San Jose, CA, 8 November 2011 42/120 Achromatic spatial contrast sensitivity curves
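The table's formulas did not survive extraction, but one of the four models is well known and serves as a concrete example: the Mannos-Sakrison (1974) contrast sensitivity function, with its commonly quoted constants.

```python
import math

# Mannos-Sakrison (1974) achromatic contrast sensitivity model, one of
# the four models in the table. Constants are the commonly quoted ones.
def csf_mannos(f):
    """f: spatial frequency in cycles/degree; returns relative sensitivity."""
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-((0.114 * f) ** 1.1))

# The function is bandpass: sensitivity peaks at mid frequencies and
# falls off at both low and high spatial frequencies.
peak = max(range(1, 61), key=csf_mannos)   # integer frequency of the peak
```

The bandpass shape (peak near 8 cycles/degree) is what the normalized-curve comparison on the following slides is probing.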

43 CIC-19, San Jose, CA, 8 November 2011 43/120 Chrominance spatial frequency response Based on Mullen’s data* *K.T. Mullen, J. Physiol., 1985

44 CIC-19, San Jose, CA, 8 November 2011 44/120 Spatial Frequency Response of Opponent Channels Chrominance [Kolpatzik and Bouman*]; Luminance [Nasanen] *B. Kolpatzik and C. A. Bouman, J. Electr. Imaging, July 1992

45 CIC-19, San Jose, CA, 8 November 2011 45/120 Illustration of difference in spatial frequency response of luminance and chrominance channels Original image; O1-filtered

46 CIC-19, San Jose, CA, 8 November 2011 46/120 Illustration of difference in spatial frequency response of luminance and chrominance channels Original image; O2-filtered

47 CIC-19, San Jose, CA, 8 November 2011 47/120 Illustration of difference in spatial frequency response of luminance and chrominance channels Original image; O3-filtered

48 CIC-19, San Jose, CA, 8 November 2011 48/120 Application areas for spatiochromatic models Color image display on low-cost devices -PDA -Cellphone Color image printing -Inkjet -Laser electrophotographic Digital video display -LCD -DMD -Plasma panel Lossy color image compression -JPEG -MPEG

49 CIC-19, San Jose, CA, 8 November 2011 49/120 Synopsis of tutorial General framework for spatiochromatic models for the HVS Introduction to digital halftoning Application of spatiochromatic models to design of color halftones Overview of use of HVS models in image quality assessment Color Image Fidelity Assessor

50 CIC-19, San Jose, CA, 8 November 2011 50/120 What is digital halftoning? Digital halftoning is the process of rendering a continuous-tone image with a device that is capable of generating only two or a few levels of gray at each point on the device output surface. The perception of additional levels of gray depends on a local average of the binary or multilevel texture.

51 CIC-19, San Jose, CA, 8 November 2011 51/120 What is digital halftoning (cont.) Detail is rendered by local modulation of the texture

52 CIC-19, San Jose, CA, 8 November 2011 52/120 The Two Fundamental Goals of Digital Halftoning Representation of tone -smooth, homogeneous texture -free from visible structure or contouring (Examples: diamond dot screen, Bayer screen, error diffusion, DBS)

53 CIC-19, San Jose, CA, 8 November 2011 53/120 The Two Fundamental Goals of Digital Halftoning (cont.) Representation of detail -sharp, distinct, and good contrast in rendering of fine structure in the image -good rendering of lines, edges, and type characters -freedom from moiré due to interference between the halftone algorithm and image content (Examples: diamond dot screen, DBS screen, error diffusion, DBS)

54 CIC-19, San Jose, CA, 8 November 2011 54/120 Types of Halftone Texture Textures are classified along two axes: periodic vs. aperiodic, and clustered dot vs. dispersed dot

55 CIC-19, San Jose, CA, 8 November 2011 55/120 Basic structure of screening algorithm The threshold matrix is periodically tiled over the entire continuous-tone image.
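The screening algorithm is simple enough to sketch in a few lines: the threshold matrix is tiled periodically, and each pixel is turned on where the image exceeds the local threshold. The 2x2 matrix below is an illustrative Bayer-style example.

```python
import numpy as np

# Screening: tile the threshold matrix t periodically over the
# continuous-tone image f and threshold pixel by pixel.
def screen(f, t):
    """f: grayscale image in [0, 1]; t: threshold matrix in [0, 1)."""
    m, n = f.shape
    tm, tn = t.shape
    rows = np.arange(m)[:, None] % tm      # periodic tiling of the matrix
    cols = np.arange(n)[None, :] % tn
    return (f > t[rows, cols]).astype(np.uint8)

# A 2x2 Bayer-style matrix on a constant 40%-gray patch: the gray level
# falls between two thresholds, so exactly half the dots turn on.
bayer2 = np.array([[0.0, 0.5], [0.75, 0.25]])
g = screen(np.full((4, 4), 0.4), bayer2)
```

Because screening is a pointwise comparison, it is cheap and parallel, which is why it dominates high-speed printing pipelines.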

56 CIC-19, San Jose, CA, 8 November 2011 56/120 Error diffusion Block diagram: the input f[m,n] is combined with diffused past errors to form the modified input u[m,n]; the quantizer Q(·) produces the output g[m,n]; the quantization error d[m,n] (the difference between g[m,n] and u[m,n]) is weighted by w_{k,l} and fed back to unprocessed neighbors
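The diagram uses generic weights w[k,l]; as a concrete sketch, here is the loop with the Floyd-Steinberg weights (an assumption, since the slide does not fix the weights):

```python
import numpy as np

# Error diffusion with Floyd-Steinberg weights (7,3,5,1)/16, scanning
# left-to-right, top-to-bottom. u is the modified input, g the binary
# output, d the quantization error pushed ahead to unprocessed pixels.
def error_diffuse(f):
    f = f.astype(float).copy()
    m, n = f.shape
    g = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            u = f[i, j]                          # modified input u[m,n]
            g[i, j] = 1.0 if u >= 0.5 else 0.0   # Q(): one-bit quantizer
            d = u - g[i, j]                      # quantization error
            if j + 1 < n:
                f[i, j + 1] += d * 7 / 16
            if i + 1 < m:
                if j > 0:
                    f[i + 1, j - 1] += d * 3 / 16
                f[i + 1, j] += d * 5 / 16
                if j + 1 < n:
                    f[i + 1, j + 1] += d * 1 / 16
    return g

g = error_diffuse(np.full((32, 32), 0.25))
# Local averages of the binary texture track the input gray level.
```

Feeding the error forward is what makes the local average of the binary output converge to the continuous-tone value.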

57 CIC-19, San Jose, CA, 8 November 2011 57/120 Direct binary search* *D. Lieberman and J. P. Allebach, IEEE T-IP, Nov. 2002 Printer model

58 CIC-19, San Jose, CA, 8 November 2011 58/120 The Search Heuristic

59 CIC-19, San Jose, CA, 8 November 2011 59/120 DBS Convergence: 0, 1, 2, 4, 6, and 8 Iterations

60 CIC-19, San Jose, CA, 8 November 2011 60/120 Swaps vs. Toggles Toggle only; swap and toggle
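A toy sketch of the DBS search idea (toggles only, brute-force error recomputation; the real algorithm also considers swaps and updates the error efficiently rather than recomputing it). The 3x3 averaging kernel stands in for the HVS point spread function p[m]:

```python
import numpy as np

def hvs_filter(x, h):
    """Direct 'same' 2D convolution with zero padding (small images only)."""
    k = h.shape[0] // 2
    xp = np.pad(x, k)
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + 2 * k + 1, j:j + 2 * k + 1] * h)
    return out

def perceived_error(f, g, h):
    """Squared error between HVS-filtered halftone and continuous tone."""
    return np.sum(hvs_filter(g - f, h) ** 2)

def dbs_toggle(f, g, h, sweeps=2):
    g = g.copy()
    e = perceived_error(f, g, h)
    for _ in range(sweeps):
        for i in range(g.shape[0]):
            for j in range(g.shape[1]):
                g[i, j] = 1.0 - g[i, j]              # trial toggle
                e_trial = perceived_error(f, g, h)
                if e_trial < e:
                    e = e_trial                       # accept the change
                else:
                    g[i, j] = 1.0 - g[i, j]           # reject: undo
    return g, e

rng = np.random.default_rng(0)
f = np.full((8, 8), 0.25)                             # continuous-tone patch
g0 = (rng.random((8, 8)) < 0.5).astype(float)         # poor initial halftone
h = np.ones((3, 3)) / 9.0                             # stand-in HVS lowpass
g_out, e_final = dbs_toggle(f, g0, h)
e_initial = perceived_error(f, g0, h)
```

By construction the search is monotone: a trial change is kept only if it lowers the perceived error, so the error never increases across iterations, which is the convergence behavior the iteration sequence above illustrates.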

61 CIC-19, San Jose, CA, 8 November 2011 61/120 Dual interpretation for DBS Minimize mean-squared error at distance D Minimize maximum error at distance 2D

62 CIC-19, San Jose, CA, 8 November 2011 62/120 Illustration of Dual Interpretation f[m], f[m] * p[m], f[m] * c_pp[m]; g[m], g[m] * p[m], g[m] * c_pp[m] (tildes in the figure denote perceived quantities)

63 CIC-19, San Jose, CA, 8 November 2011 63/120 Impact of scale parameter S on halftone texture S1 = 0.5 S2, S2 = 300 x 9.5 (printer resolution x viewing distance), S3 = 2.0 S2

64 CIC-19, San Jose, CA, 8 November 2011 64/120 Does it make a difference which model we use? Reason for normalization -Bandwidths of models differ significantly. -Causes a significant difference in texture between the models. -For any fixed model, can achieve a similar range of textures by varying scale parameter. -Would like to compare the models at the same texture scale. Normalization method -Match the 50% point from the maximum for Nasanen’s model.

65 CIC-19, San Jose, CA, 8 November 2011 65/120 Normalized contrast sensitivity functions

66 CIC-19, San Jose, CA, 8 November 2011 66/120 Comparison between models Nasanen, Daly, Campbell, Mannos

67 CIC-19, San Jose, CA, 8 November 2011 67/120 Comparison between models (cont.) In 2003, Monga, Geisler, and Evans published a comparison of the effectiveness of four different color HVS models in the context of error diffusion halftoning* They concluded that the Flohr et al.** model resulted in the best overall image quality *V. Monga, W. Geisler, B. L. Evans, IEEE SP Letters, April 2003 **U. Agar and J. P. Allebach, IEEE T-IP, Dec. 2005

68 CIC-19, San Jose, CA, 8 November 2011 68/120 Synopsis of tutorial General framework for spatiochromatic models for the HVS Introduction to digital halftoning Application of spatiochromatic models to design of color halftones -Embedding of spatiochromatic model within DBS for color halftoning -Use of spatiochromatic model with hybrid screen to improve highlight texture -Application of spatiochromatic model to design of tile sets for periodic clustered-dot screens Overview of use of HVS models in image quality assessment Color Image Fidelity Assessor

69 CIC-19, San Jose, CA, 8 November 2011 69/120 Embedding of spatiochromatic model within DBS for color halftoning* Block diagram: the input RGB continuous-tone image and the initial CMY halftone under test are each converted to YyCxCz and passed through the luminance and chrominance spatial frequency response HVS model; the perceived error ẽ(x) between the two is integrated, E = ∫ ẽ(x)^T ẽ(x) dx, to accept or reject the trial halftone change *U. Agar and J. P. Allebach, IEEE T-IP, Dec. 2005

70 CIC-19, San Jose, CA, 8 November 2011 70/120 Results from embedding model in DBS CDBS in RGB; CDBS in YyCxCz

71 CIC-19, San Jose, CA, 8 November 2011 71/120 Use of spatiochromatic models with hybrid screen to improve highlight texture* The hybrid screen is a screening algorithm which generates stochastic dispersed-dot textures in highlights and shadows, and periodic clustered-dot textures in midtones. It is based on two main concepts: supercell and core. Texture taxonomy: periodic dispersed-dot (recursive ordering pattern), periodic clustered-dot (regularly nucleated clusters), stochastic dispersed-dot (blue noise), stochastic clustered-dot (green noise), with a smooth transition between them. *Lin and Allebach, IEEE T-IP, Dec. 2006; Lee and Allebach, IEEE T-IP, Feb. 2010.

72 CIC-19, San Jose, CA, 8 November 2011 72/120 Simple clustered-dot screen Halftone using a simple clustered-dot screen shows contouring (continuous-tone input with a = 0, 16/255, 127/255)

73 CIC-19, San Jose, CA, 8 November 2011 73/120 Supercell approach A supercell is a set of microcells combined together as a single period of the screen A supercell is used to increase the number of gray levels and to create more accurately angled screens (e.g. 14.93° vs. 15.9°) Note that the microcells are not identical (Figures show a microcell with its macrocell growing sequence)

74 CIC-19, San Jose, CA, 8 November 2011 74/120 Limitation on supercell Clustered-dot microcell with Bayer macrocell growing sequence: abrupt texture change (Bayer structure), periodic dot withdrawal pattern Clustered-dot microscreen with stochastic-dispersed macrocell growing sequence: homogeneous dot distribution, but maze-like artifact and stochastic dot withdrawal pattern

75 CIC-19, San Jose, CA, 8 November 2011 75/120 Role of highlight and shadow cores The core is a small region in each microcell where the original microcell growing sequence is ignored and the sequence can be randomized: the first dot can move around within the core, which creates blue-noise-like texture There are separate core regions for highlights and shadows (Figures contrast a conventional supercell with a hybrid screen having a core region; with the core, the microcell growing sequence varies from cell to cell)

76 CIC-19, San Jose, CA, 8 November 2011 76/120 Improvement of texture quality with hybrid screen The hybrid screen (clustered-dot microcell with 2x2 core and DBS macrocell growing sequence): more homogeneous dot distribution, no noticeable dot withdrawal pattern, no maze-like artifact Clustered-dot microscreen with stochastic-dispersed macrocell growing sequence: homogeneous dot distribution, but maze-like artifact and stochastic dot withdrawal pattern

77 CIC-19, San Jose, CA, 8 November 2011 77/120 Joint color screen design framework (Block diagram: the halftone error in each channel is passed through luminance and chrominance filters, squared, and summed to form the overall cost)

78 CIC-19, San Jose, CA, 8 November 2011 78/120 Joint screen design results Cyan halftone – plane independent screen design Cyan halftone – joint screen design with magenta screen

79 CIC-19, San Jose, CA, 8 November 2011 79/120 Joint screen design results Magenta halftone – plane independent screen design Magenta halftone – joint screen design with cyan screen

80 CIC-19, San Jose, CA, 8 November 2011 80/120 Joint screen design results Cyan and magenta halftone – plane-independent screen design; cyan and magenta halftone – joint screen design Dot-on-dot printing is decreased and the dot distribution is more uniform

81 CIC-19, San Jose, CA, 8 November 2011 81/120 Synopsis of tutorial General framework for spatiochromatic models for the HVS Introduction to digital halftoning Application of spatiochromatic models to design of color halftones -Embedding of spatiochromatic model within DBS for color halftoning -Use of spatiochromatic model with hybrid screen to improve highlight texture -Application of spatiochromatic model to design of tile sets for periodic clustered-dot screens Overview of use of HVS models in image quality assessment Color Image Fidelity Assessor

82 CIC-19, San Jose, CA, 8 November 2011 82/120 Application of spatiochromatic model to design of tile sets for periodic clustered-dot screens* Continuous Parameter Halftone Cell (CPHC) *F. Baqai and J. P. Allebach, Proc. IEEE, Jan. 2002

83 CIC-19, San Jose, CA, 8 November 2011 83/120 Finding the Discrete Parameter Halftone Cell (DPHC) Compute the number of pixels in the unit cell = |det(N)| Assign pixels to the unit cell in order of decreasing area of overlap with the CPHC Skip over pixels that are congruent to a pixel that has already been assigned to the DPHC

84 CIC-19, San Jose, CA, 8 November 2011 84/120 Threshold Assignment by Growing Dots and Holes Simultaneously (Examples i[m], s[m] at absorptance 0.26, 0.53, and 0.74)

85 CIC-19, San Jose, CA, 8 November 2011 85/120 Color Device Model Neugebauer primaries with spectral reflectances R_i(λ), D65 illuminant, and the CIE XYZ color matching functions

86 CIC-19, San Jose, CA, 8 November 2011 86/120 Opponent Color Channels Use the linearized version of the L*a*b* color space to represent the opponent color channels of the human visual system Flohr et al [1993]

87 CIC-19, San Jose, CA, 8 November 2011 87/120 Spatial Frequency Response of Opponent Channels Chrominance [Kolpatzik and Bouman]; Luminance [Nasanen] (frequency axis in cycles/sample)

88 CIC-19, San Jose, CA, 8 November 2011 88/120 Overall Framework for Perceptual Model Part I

89 CIC-19, San Jose, CA, 8 November 2011 89/120 Overall Framework for Perceptual Model Part II

90 CIC-19, San Jose, CA, 8 November 2011 90/120 Magnified Scanned Textures for Various Screens Absorptance = 0.25 Best; worst; optimized for registration errors; conventional (the latter three have MSE = 9x, 4x, and 5x the best, respectively)

91 CIC-19, San Jose, CA, 8 November 2011 91/120 Weighted Spectra of Error in YyCxCz Best; worst

92 CIC-19, San Jose, CA, 8 November 2011 92/120 Weighted Spectra of Error in YyCxCz Optimized for registration errors; conventional

93 CIC-19, San Jose, CA, 8 November 2011 93/120 Synopsis of tutorial General framework for spatiochromatic models for the HVS Introduction to digital halftoning Application of spatiochromatic models to design of color halftones Overview of use of HVS models in image quality assessment Color Image Fidelity Assessor

94 CIC-19, San Jose, CA, 8 November 2011 94/120 An image quality example Noise in low-light parts of scene Moiré on Prof. Bouman's shirt JPEG artifacts Poor color rendering – too red Poor contrast and tone – too dark Glare from flash "It's a little frightening to think that this picture is associated with the Transactions on Image Processing" – C. A. Bouman

95 CIC-19, San Jose, CA, 8 November 2011 95/120 Imaging Pipeline Image capture (camera, scanner) -> image processing (enhance, compose, compress) -> image output (display, printer)

96 CIC-19, San Jose, CA, 8 November 2011 96/120 Image quality perspectives – image vs. system System-based -Resolution (modulation transfer function) -Dynamic range -Noise characteristics Image-based -Sharpness -Contrast -Graininess/mottle

97 CIC-19, San Jose, CA, 8 November 2011 97/120 Image quality vs. print quality Image quality -Broader viewpoint -Often focuses on issues that arise during the image processing phase of the pipeline, especially compression -May also consider image capture and display Print quality -Specifically considers issues that arise during printing

98 CIC-19, San Jose, CA, 8 November 2011 98/120 Typical image quality issues See discussion of photograph of Charles A. Bouman et al

99 CIC-19, San Jose, CA, 8 November 2011 99/120 Typical print quality issues Bands – orthogonal to process direction Streaks – parallel to process direction Spots -Repetitive -Random Color plane registration errors Ghosting Toner scatter Swath misalignment http://www.hp.com/cpso-support-new/pq/4700/home.html

100 CIC-19, San Jose, CA, 8 November 2011 100/120 Image quality assessment functionalities Metrics vs. maps -Local or global strength of a particular defect – a single number -Map showing defect strength throughout the image – an image Single defect vs. summative measures -Assess strength of a single defect, e.g. noise -Assess overall image quality – must account for all significant defects and their interactions Reference vs. no-reference methods

101 CIC-19, San Jose, CA, 8 November 2011 101/120 Image quality assessment factors Masking – image content may mask visibility of a defect -Texture -Edges Tent-pole effect – the worst defect dominates the percept of image quality defects and the overall assessment of image quality

102 CIC-19, San Jose, CA, 8 November 2011 102/120 Pyramid-Based Image Quality Metrics Daly, 1993, Visual Difference Predictor (VDP); Lubin, 1995, Sarnoff Visual Discrimination Model (VDM); Taylor & Allebach, 1998, Image Fidelity Assessor (IFA); Mantiuk & Daly, 2005, High Dynamic Range VDP; Wang & Wandell, 2002, SSIM (not HVS based); Wencheng Wu, 2000, Color Image Fidelity Assessor (CIFA); Teo & Heeger, 1994, Perceptual Distortion Metric (PDM); Avadhana & Algazi, 1999, Picture Distortion Metric; Doll et al., 1998, Georgia Tech Vision (GTV) Model; Watson & Solomon, 1997, Model of Visual Contrast Gain Control; Watson & Ahumada, 2005, Model for Foveal Detection of Spatial Contrast; Zhang & Wandell, 1998, Color Image Distortion Maps; Jin, Feng & Newell, 1998, Color Visual Difference Model (CVDM); Lian, 2001, Color Visual Difference Predictor (CVDP) (The slide groups these as monochromatic, chromatic, and not HVS based)

103 CIC-19, San Jose, CA, 8 November 2011 103/120 Structural Similarity (SSIM) Index* The SSIM Index expresses the similarity of image X and image Y at a point (i, j) as SSIM(i, j) = l(i, j) c(i, j) s(i, j), where l(i, j) is a measure of local luminance similarity, c(i, j) is a measure of local contrast similarity, and s(i, j) is a measure of local structure similarity *Wang & Bovik, IEEE Signal Processing Letters, March 02; Wang, Bovik, Sheikh & Simoncelli, Trans. on IP, March 04

104 CIC-19, San Jose, CA, 8 November 2011 104/120 Luminance similarity l(i, j) = (2 μ_X μ_Y + C1) / (μ_X² + μ_Y² + C1), where μ_X and μ_Y are the local average luminances of X and Y, computed with a window function; a typical window is an 11x11 circular-symmetric Gaussian weighting function

105 CIC-19, San Jose, CA, 8 November 2011 105/120 Contrast similarity c(i, j) = (2 σ_X σ_Y + C2) / (σ_X² + σ_Y² + C2), where σ_X and σ_Y are the local standard deviations of X and Y, computed with the same 11x11 circular-symmetric Gaussian window

106 CIC-19, San Jose, CA, 8 November 2011 106/120 Structural similarity s(i, j) = (σ_XY + C3) / (σ_X σ_Y + C3), where σ_XY is the local covariance (correlation) between X and Y Structure comparison is conducted after luminance subtraction and variance normalization. Specifically, Prof. Bovik associates (X - μ_X)/σ_X and (Y - μ_Y)/σ_Y with the structure of the two images.
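The three terms combine into the SSIM index for one window. This sketch uses a plain rectangular window rather than the 11x11 Gaussian, and the common constant choices K1 = 0.01, K2 = 0.03, C3 = C2/2 (assumptions; the slides do not fix the constants).

```python
import numpy as np

# SSIM for a single window: product of the luminance, contrast, and
# structure similarity terms. C1, C2, C3 are small stabilizing constants;
# here K1 = 0.01, K2 = 0.03, dynamic range L = 1, and C3 = C2 / 2.
def ssim_window(x, y, K1=0.01, K2=0.03, L=1.0):
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    C3 = C2 / 2.0
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()
    l = (2 * mx * my + C1) / (mx**2 + my**2 + C1)    # luminance similarity
    c = (2 * sx * sy + C2) / (sx**2 + sy**2 + C2)    # contrast similarity
    s = (sxy + C3) / (sx * sy + C3)                  # structure similarity
    return l * c * s

x = np.random.default_rng(1).random((11, 11))
# An image compared with itself scores 1; any distortion lowers the score.
```

Note that a pure gain change (y = 0.5x) leaves the structure term at 1 but lowers the luminance and contrast terms, which is exactly the decomposition the three slides above describe.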

107 CIC-19, San Jose, CA, 8 November 2011 107/120 Synopsis of tutorial General framework for spatiochromatic models for the HVS Introduction to digital halftoning Application of spatiochromatic models to design of color halftones Overview of use of HVS models in image quality assessment Color Image Fidelity Assessor

108 CIC-19, San Jose, CA, 8 November 2011 108/120 *C. Taylor, Z. Pizlo, and J. P. Allebach, IS&T PICS, May 1998

109 CIC-19, San Jose, CA, 8 November 2011 109/120 Model for assessment of color image fidelity Color extension of Taylor’s achromatic IFA The model predicts perceived image fidelity -Assesses visible differences in the opponent channels -Explains the nature of visible difference (luminance change vs. color shift) Color Image Fidelity Assessor (CIFA) Ideal Rendered Viewing parameters Image maps of predicted visible differences *W. Wu, Z. Pizlo, and J. P. Allebach, IS&T PICS, Apr. 2001

110 CIC-19, San Jose, CA, 8 November 2011 110/120 Chromatic difference (Definition) Objective: evaluate the spatial interaction between colors First transform CIE XYZ to the opponent color space (O1: luminance, O2: red-green, O3: blue-yellow)* Then normalize to obtain opponent chromaticities (o2, o3) Define chromatic difference (analogous to luminance contrast c1) *X. Zhang and B. A. Wandell, "A Spatial Extension of CIELAB for Digital Color Image Reproduction," SID-97

111 CIC-19, San Jose, CA, 8 November 2011 111/120 Opponent color representation: (13.3, o2, 0.17), (13.3, 0.24, o3), (Y, 0.24, 0.17), (Y, o2, o3)

112 CIC-19, San Jose, CA, 8 November 2011 112/120 Chromatic difference (illustration) Chromatic difference is a measure of chromaticity variation Chromatic difference is a spatial feature derived from opponent chromaticity that has little dependence upon luminance Chromatic difference is the amplitude of the sinusoidal grating (examples shown at 0.05, 0.1, and 0.2)

113 CIC-19, San Jose, CA, 8 November 2011 113/120 CIFA The ideal and rendered images are represented in opponent channels (Y, O2, O3) The multi-resolution Y images are processed by an achromatic IFA (previous work of Taylor et al); the O2 and O3 images are processed by red-green and blue-yellow chromatic IFAs The outputs are image maps of predicted visible luminance, red-green, and blue-yellow differences

114 CIC-19, San Jose, CA, 8 November 2011 114/120 Red-green IFA: lowpass pyramid and chromatic difference decomposition, a psychometric LUT (f, o2, c2) for chromatic difference discrimination, channel response predictor, psychometric selector with adaptation level, and limited-memory probability summation Achromatic IFA: contrast decomposition and a psychometric LUT (f, Y, c1) for luminance contrast discrimination Here "contrast" means luminance contrast and chromatic difference

115 CIC-19, San Jose, CA, 8 November 2011 115/120 Estimating parameters of LUT (Psychophysical method) Red-green stimulus: (Y, o2, o3) specifies the background color; c2 is the reference chromatic difference Task: which stimulus has less chromatic difference? Measure the probability of choosing left

116 CIC-19, San Jose, CA, 8 November 2011 116/120 Representative results Results for f = 16, 8, 4, 2, 1 cycles/deg are drawn in red, green, blue, yellow, and black Threshold is not affected strongly by the reference chromatic difference Chromatic channels function like low-pass filters Red-green discrimination at RG1: (Y, o2, o3) = (5, 0.2, -0.3); blue-yellow discrimination at BY1: (Y, o2, o3) = (5, 0.3, 0.2)

117 CIC-19, San Jose, CA, 8 November 2011 117/120 CIFA output for example distortions (Hue change) Luminance; R-G; B-Y

118 CIC-19, San Jose, CA, 8 November 2011 118/120 CIFA output for example distortions (Blurring) Luminance R-G B-Y

119 CIC-19, San Jose, CA, 8 November 2011 119/120 CIFA output for example distortions (Limited gamut) Luminance; R-G; B-Y

120 CIC-19, San Jose, CA, 8 November 2011 120/120 Thank you for your attention!

