Spatiochromatic Vision Models for Imaging with Applications to the Development of Image Rendering Algorithms and Assessment of Image Quality. Jan P. Allebach, School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana. CIC-19, San Jose, CA, 8 November 2011.

What is a model? A model is not a complete description of the phenomenon being modeled. It should capture only what is important to the application at hand, and nothing more. Its structure must be responsive to resource constraints. From dictionary.com: a schematic description of a system, theory, or phenomenon that accounts for its known or inferred properties and may be used for further study of its characteristics.

Visual system components

Why do we need spatiochromatic models? Imaging systems succeed by providing a facsimile of the real world: a few primaries instead of an exact spectral match, and a spatially discretized, amplitude-quantized representation of images that are continuous in both space and amplitude. These methods succeed only because of the limitations of the human visual system (HVS). To design the lowest-cost systems that achieve the desired objective, it is necessary to take the human visual system into account in both design and evaluation.

Modeling context. The modeling process is very dependent on the intended application: it is the motivation for developing the models in the first place, it governs the choice of features to be captured and the computational structure of the model, and it provides the final test of the success of the model. There is a tight interplay between models for imaging system components and models for the human visual system. Model usage may be either embedded or external.

Pedagogical approach. Spatiochromatic modeling, in principle, builds on all of the following areas: color science, imaging science, psychophysics, and image systems engineering. As stated in the course description, we assume only a rudimentary knowledge of these subjects. We start from basic principles, but move quickly to a more advanced level, focusing on what is needed to follow the modeling discussion. See the references at the end.

Synopsis of tutorial: General framework for spatiochromatic models for the HVS; Introduction to digital halftoning; Application of spatiochromatic models to design of color halftones; Overview of use of HVS models in image quality assessment; Color Image Fidelity Assessor.

For further information: There is an extensive list of references at the end of these notes. The PowerPoint presentation may be downloaded from the web:

The retinal image is what counts. Every spatiochromatic model has an implied viewing distance. What happens when this condition is not met? -Too far: the image looks better than the specification. -Too close: you may see artifacts.

Basic spatiochromatic model structure

Trichromatic stage. First proposed by Young in 1801 and ignored for over 50 years; Helmholtz revived the concept in his Handbook of Physiological Optics. Its physiological basis is the existence of three different cone types in the retina. It accurately predicts the results of color matching experiments over a wide range of conditions. Simulated retinal mosaic.

Trichromatic sensor model. The sensor is characterized by three spectral response functions s_i(λ), i = 1, 2, 3; its response to a stimulus with spectral power distribution f(λ) is t_i = ∫ f(λ) s_i(λ) dλ. For the human visual system, the spectral response functions can be measured indirectly through color matching experiments.
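A minimal numerical sketch of this sensor model, assuming sampled spectra on a common wavelength grid (the array names here are hypothetical, not from the slides):

```python
import numpy as np

def tristimulus(spd, sensors, d_lambda=1.0):
    """Approximate t_i = integral of f(lambda) * s_i(lambda) d(lambda).

    spd     : (N,) spectral power distribution of the stimulus, sampled on a
              uniform wavelength grid (e.g., 400-780 nm in 1 nm steps).
    sensors : (3, N) spectral response functions s_i(lambda) of the sensor
              (cone fundamentals, camera channels, or color matching functions).
    """
    spd = np.asarray(spd, dtype=float)
    sensors = np.asarray(sensors, dtype=float)
    # Riemann-sum approximation of the integral for each of the three channels
    return sensors @ spd * d_lambda
```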

Transformation between tristimulus representations. The trichromatic sensor model is applicable to a wide range of image capture devices, such as cameras and scanners, as well as to the human visual system. If the spectral response functions are a linear transformation of those corresponding to the human visual system, then we call the 3-tuple response the tristimulus coordinates of that spectral power distribution. For any two sets of spectral response functions that are both linear transformations of those for the human visual system, we can use a 3x3 matrix to transform between the corresponding tristimulus coordinates for any spectral stimulus. The color matching functions for visually independent primaries are likewise related to the cone responses of the human visual system by a linear transformation.

Color matching experiment – setup

Color matching experiment – procedure. The test stimulus is fixed. The observer individually adjusts the strengths of the three match stimuli to achieve a visual match between the two sides of the split field. The mixture is assumed to be additive, i.e. the radiant power of the mixture in any wavelength interval is the sum of the radiant powers of the three match stimuli in the same interval. To achieve a match with some test stimuli, it may be necessary to move one or two of the match stimuli over to the side where the test stimulus is located. For visually independent primaries, the match amounts are equivalent to tristimulus coordinates.

Color matching functions. A color matching experiment yields the amount of each of three primaries required to match a particular stimulus. A special case is a monochromatic stimulus at wavelength λ. If we repeat this experiment for all wavelengths, we obtain the color matching functions. Since any stimulus can be expressed as a weighted sum of monochromatic stimuli, the primary match amounts can be expressed as integrals of the stimulus spectral power distribution weighted by the color matching functions.

CIE 1931 standard RGB observer. The observer consists of color matching functions corresponding to monochromatic primaries: R at 700 nm, G at 546.1 nm, and B at 435.8 nm. The ratio of radiances is chosen to place the chromaticity of the equal-energy stimulus E at the center of the (r, g) chromaticity diagram, i.e. at (0.333, 0.333), so that the areas under the color matching functions are identical. Based on observations in a 2 degree field of view using the color matching method discussed earlier.

Color matching functions for 1931 CIE standard RGB observer

Relative luminous efficiency – a special case of color matching. An achromatic sensor whose response function is the relative luminous efficiency function V(λ) is called the standard photometric observer.

CIE 1931 standard XYZ observer. The CIE also defined a second standard observer based on a linear transformation from the 1931 RGB color matching functions. The XYZ observer has the following properties: -The color matching functions are non-negative at all wavelengths. -The chromaticity coordinates of all realizable stimuli are non-negative. -The ȳ(λ) color matching function is equal to the relative luminous efficiency function. To achieve these properties, it was necessary to use primaries that are not realizable; the chromaticities of the primaries lie outside the spectral locus.

Color matching functions for 1931 CIE standard XYZ observer

Cone responses for human visual system* (*Vos and Walraven, Vision Res., 1971)

Chromaticity coordinates. Chromaticity coordinates provide an important method for visualizing tristimulus coordinates, i.e. sensor responses or primary match amounts. Let (T1, T2, T3) denote either the sensor response or the primary match amounts for a particular stimulus. The corresponding chromaticity coordinates are defined as t_i = T_i / (T1 + T2 + T3). We can see by inspection that each coordinate lies between 0 and 1 and that all three coordinates sum to 1.
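A small sketch of the chromaticity computation, assuming a plain 3-tuple of tristimulus values:

```python
def chromaticity(t):
    """Map tristimulus values (T1, T2, T3) to chromaticity coordinates.

    Each coordinate t_i = T_i / (T1 + T2 + T3), so the results lie in [0, 1]
    and sum to 1; only two of the three are independent.
    """
    total = sum(t)
    return tuple(ti / total for ti in t)

# Example: equal-energy white in XYZ has chromaticity (1/3, 1/3, 1/3)
print(chromaticity((1.0, 1.0, 1.0)))
```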

Chromaticity diagram for 1931 CIE standard XYZ observer

How do we use the trichromatic model? Assuming the image is in a standard color space, such as sRGB, we transform it to CIE XYZ as follows: first remove the gamma correction to obtain linear RGB, then perform a 3x3 matrix transform from linear RGB to CIE XYZ. The CIE XYZ representation of the image forms the basis for further stages of the model.
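A sketch of this two-step conversion, assuming float pixel values in [0, 1]; the inverse transfer function and the 3x3 matrix below follow the standard sRGB (D65) definition:

```python
import numpy as np

# Standard sRGB (D65) linear-RGB-to-XYZ matrix
M_RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(img):
    """img: float array in [0, 1], shape (H, W, 3), nonlinear sRGB values."""
    # Step 1: remove gamma correction (inverse sRGB transfer function)
    linear = np.where(img <= 0.04045,
                      img / 12.92,
                      ((img + 0.055) / 1.055) ** 2.4)
    # Step 2: 3x3 matrix transform from linear RGB to CIE XYZ
    return linear @ M_RGB2XYZ.T
```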

Basic spatiochromatic model structure

Opponent stage. Trichromatic theory provides the basis for understanding whether or not two spectral power distributions will appear the same to an observer when viewed under the same conditions. However, trichromatic theory tells us nothing about the appearance of a stimulus. In the early 1900's, Ewald Hering observed some properties of color appearance: -Red and green never occur together; there is no such thing as a reddish green or a greenish red. -If I add a small amount of blue to green, it looks bluish-green; if I add more blue to green, it becomes cyan. -In contrast, if I add red to green, the green becomes less saturated; if I add enough red to green, the color appears gray, blue, or yellow. -If I add enough red to green, the color appears red, but never reddish green.

Blue-yellow color opponency

Opponent stage (cont.) Hering postulated that there exist two kinds of neural pathways in the visual system: -A red-green pathway that fires fast if there is a lot of red and fires slowly if there is a lot of green. -A blue-yellow pathway that fires fast if there is a lot of blue and fires slowly if there is a lot of yellow. Hering provided no experimental evidence for his theory, and it was ignored for over 50 years.

Experimental evidence for opponency: Hurvich and Jameson's hue cancellation experiment (1955); Svaetichin's electrophysiological evidence from the retinal neurons of a fish (1956); Boynton's color naming experiment (1965); Wandell's color decorrelation experiment. Left and right plots show data for two different observers. Open triangles show cancellation of red-green appearance. Closed circles show cancellation of blue-yellow appearance.

Color spaces that incorporate opponency: YUV (NTSC video standard space); YCrCb (Kodak PhotoCD space); L*a*b* (CIE uniform color space); YCxCz (linearized CIE L*a*b* space); O1O2O3 (Wandell's optimally decorrelated space).

CIE L*a*b* and its linearized version YCxCz in terms of CIE XYZ.
CIE L*a*b*: L* = 116 f(Y/Yn) − 16; a* = 500 [ f(X/Xn) − f(Y/Yn) ]; b* = 200 [ f(Y/Yn) − f(Z/Zn) ], where f(x) = x^(1/3) for 0.008856 < x ≤ 1, f(x) = 7.787x + 16/116 for 0 ≤ x ≤ 0.008856, and (Xn, Yn, Zn) is the white point.
Linearized opponent color space YyCxCz: Yy = 116 (Y/Yn) − 16 (correlate of luminance); Cx = 500 [ (X/Xn) − (Y/Yn) ] (R-G opponent chrominance channel); Cz = 200 [ (Y/Yn) − (Z/Zn) ] (Y-B opponent chrominance channel).
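A sketch of the linearized YyCxCz transform using the formulas above; the D65 white point values are an assumption, not taken from the slides:

```python
# Assumed D65 white point (Y normalized to 100)
XN, YN, ZN = 95.047, 100.0, 108.883

def xyz_to_yycxcz(X, Y, Z, xn=XN, yn=YN, zn=ZN):
    """Linearized CIE L*a*b* (YyCxCz) opponent representation of XYZ values."""
    yy = 116.0 * (Y / yn) - 16.0        # correlate of luminance
    cx = 500.0 * (X / xn - Y / yn)      # R-G opponent chrominance channel
    cz = 200.0 * (Y / yn - Z / zn)      # Y-B opponent chrominance channel
    return yy, cx, cz
```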

Wandell's space in terms of CIE XYZ* (*Wen Wu, "Two Problems in Digital Color Imaging: Colorimetry and Image Fidelity Assessor," Ph.D. Dissertation, Purdue University, Dec. 2000)

Visualization of opponent color representation: (Y, o2, o3); (Y, 0.24, 0.17); (13.3, o2, 0.17); (13.3, 0.24, o3)

Basic spatiochromatic model structure

Impact of viewing geometry on spatial frequencies. Both arrows A and B generate the same retinal image. For a small ratio of object size h to viewing distance D, the angle subtended at the retina in radians is θ ≈ h/D.

Spatial frequency conversion. To convert between f_x (cycles/inch) viewed at distance D (inches) and f_θ (cycles/degree) subtended at the retina, we thus have f_θ = (πD/180) f_x. For a viewing distance of 12 inches, this becomes f_θ ≈ 0.21 f_x.
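A small sketch of this conversion; the 300 cycles/inch example value is hypothetical:

```python
import math

def cpi_to_cpd(f_cpi, distance_in=12.0):
    """Convert spatial frequency in cycles/inch on the page to cycles/degree
    at the retina, for a given viewing distance in inches.

    One degree of visual angle subtends approximately D * pi / 180 inches,
    so f_cpd = f_cpi * D * pi / 180 (about 0.21 * f_cpi at D = 12 inches).
    """
    return f_cpi * distance_in * math.pi / 180.0

print(cpi_to_cpd(300))  # 300 cycles/inch at 12 inches is about 63 cycles/degree
```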

Spatial frequency filtering stage. Based on psychophysical measurements of the contrast sensitivity function, using sinusoidal stimuli with modulation along the achromatic, red-green, or blue-yellow axes. For any fixed spatial frequency, the threshold of visibility depends only on the contrast ΔL/L, not on the absolute modulation ΔL. This is Weber's Law.

Campbell's contrast sensitivity function on log-log axes

Dependence of sine wave visibility on contrast and spatial frequency

Models for achromatic spatial contrast sensitivity* (table of contrast sensitivity functions and their constants): Campbell 1969; Mannos 1974; Nasanen 1984; Daly 1987. (*Kim and Allebach, IEEE T-IP, March 2002)
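The table's formulas did not survive the transcript. As one concrete, commonly quoted example (not necessarily the exact form or constants used in the reference above), the Mannos model of the achromatic CSF can be sketched as:

```python
import numpy as np

def csf_mannos(f_cpd):
    """Commonly quoted Mannos achromatic contrast sensitivity model.

    f_cpd : spatial frequency in cycles/degree (scalar or array).
    Returns a normalized sensitivity that peaks near 8 cycles/degree and
    rolls off at both low and high frequencies.
    """
    f = np.asarray(f_cpd, dtype=float)
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)
```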

Achromatic spatial contrast sensitivity curves

Chrominance spatial frequency response. Based on Mullen's data* (*K.T. Mullen, J. Physiol., 1985)

Spatial Frequency Response of Opponent Channels. Chrominance: [Kolpatzik and Bouman*]; Luminance: [Nasanen]. (*B. Kolpatzik and C. A. Bouman, J. Electr. Imaging, July 1992)

Illustration of difference in spatial frequency response of luminance and chrominance channels. Original image; O1-filtered.

Illustration of difference in spatial frequency response of luminance and chrominance channels. Original image; O2-filtered.

Illustration of difference in spatial frequency response of luminance and chrominance channels. Original image; O3-filtered.
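A rough sketch of the effect these three slides illustrate: the chrominance channels can be lowpass filtered much more heavily than the luminance channel before the difference becomes visible. The Gaussian filters, cutoff choices, and scipy dependency are assumptions for illustration, not the filters used in the slides:

```python
from scipy.ndimage import gaussian_filter

def filter_opponent(o1, o2, o3, sigma_lum=1.0, sigma_chrom=4.0):
    """Blur an image represented in an opponent space (O1, O2, O3).

    The chrominance channels (O2, O3) get a much wider blur than the
    luminance channel (O1), mimicking the narrower spatial frequency
    response of the chromatic channels of the HVS.
    """
    return (gaussian_filter(o1, sigma_lum),
            gaussian_filter(o2, sigma_chrom),
            gaussian_filter(o3, sigma_chrom))
```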

Application areas for spatiochromatic models: Color image display on low-cost devices (PDA, cellphone); Color image printing (inkjet, laser electrophotographic); Digital video display (LCD, DMD, plasma panel); Lossy color image compression (JPEG, MPEG).

Synopsis of tutorial: General framework for spatiochromatic models for the HVS; Introduction to digital halftoning; Application of spatiochromatic models to design of color halftones; Overview of use of HVS models in image quality assessment; Color Image Fidelity Assessor.

What is digital halftoning? Digital halftoning is the process of rendering a continuous-tone image with a device that is capable of generating only two or a few levels of gray at each point on the device output surface. The perception of additional levels of gray depends on a local average of the binary or multilevel texture.

What is digital halftoning? (cont.) Detail is rendered by local modulation of the texture.

The Two Fundamental Goals of Digital Halftoning. Representation of Tone: -smooth, homogeneous texture. -free from visible structure or contouring. Examples: diamond dot screen, Bayer screen, error diffusion, DBS.

The Two Fundamental Goals of Digital Halftoning (cont.) Representation of Detail: -sharp, distinct, and good contrast in rendering of fine structure in the image. -good rendering of lines, edges, and type characters. -freedom from moiré due to interference between the halftone algorithm and image content. Examples: diamond dot screen, DBS screen, error diffusion, DBS.

Types of Halftone Texture: Periodic vs. Aperiodic; Clustered Dot vs. Dispersed Dot.

Basic structure of screening algorithm. The threshold matrix is periodically tiled over the entire continuous-tone image.
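A minimal sketch of the screening operation, assuming an 8-bit continuous-tone image and an arbitrary threshold matrix:

```python
import numpy as np

def screen(f, t):
    """Halftone a continuous-tone image by periodic screening.

    f : (H, W) continuous-tone image, values in 0..255.
    t : (M, N) threshold matrix, periodically tiled over the image.
    Returns a binary halftone: 1 where the image exceeds the local threshold.
    """
    H, W = f.shape
    rows = np.arange(H) % t.shape[0]
    cols = np.arange(W) % t.shape[1]
    return (f > t[np.ix_(rows, cols)]).astype(np.uint8)
```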

Error diffusion (block diagram): the input f[m,n] plus the diffused past errors gives the modified input u[m,n]; the quantizer Q() produces the output g[m,n]; the quantization error d[m,n] is diffused to unprocessed neighbors with weights w_{k,l}.
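A sketch of the error diffusion loop in the block diagram, using the classic Floyd-Steinberg weights as an illustrative choice (the slides do not specify the weights):

```python
import numpy as np

def error_diffusion(f, threshold=0.5):
    """Binary error diffusion with Floyd-Steinberg weights (raster scan).

    f : (H, W) continuous-tone image with values in [0, 1].
    At each pixel the quantization error is diffused to the unprocessed
    neighbors, so local averages of the halftone track the input.
    """
    u = f.astype(float).copy()
    g = np.zeros_like(u)
    H, W = u.shape
    for m in range(H):
        for n in range(W):
            g[m, n] = 1.0 if u[m, n] >= threshold else 0.0
            d = u[m, n] - g[m, n]            # error at the current pixel
            if n + 1 < W:
                u[m, n + 1] += d * 7 / 16
            if m + 1 < H:
                if n > 0:
                    u[m + 1, n - 1] += d * 3 / 16
                u[m + 1, n] += d * 5 / 16
                if n + 1 < W:
                    u[m + 1, n + 1] += d * 1 / 16
    return g
```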

Direct binary search* (*D. Lieberman and J. P. Allebach, IEEE T-IP, Nov.) Printer model.

The Search Heuristic

DBS Convergence: 0, 1, 2, 4, 6, and 8 Iterations

Swaps vs. Toggles: toggle only; swap and toggle.
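A highly simplified, brute-force sketch of a DBS-style toggle pass. The real algorithm also evaluates swaps with neighboring pixels and uses efficient incremental updates of the filtered error rather than refiltering the whole image; the Gaussian stand-in for the HVS filter is an assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dbs_toggle_pass(f, g, sigma=1.5):
    """One brute-force pass of a DBS-style search (illustration only).

    f : continuous-tone image in [0, 1];  g : current binary halftone.
    Each pixel is tentatively toggled and the change is kept only if it
    lowers the perceptually filtered squared error.
    """
    def cost(h):
        e = gaussian_filter(h - f, sigma)   # stand-in for the HVS filter
        return np.sum(e * e)

    best = cost(g)
    for m in range(g.shape[0]):
        for n in range(g.shape[1]):
            g[m, n] = 1 - g[m, n]           # trial toggle
            c = cost(g)
            if c < best:
                best = c                    # accept the change
            else:
                g[m, n] = 1 - g[m, n]       # reject: undo the toggle
    return g
```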

Dual interpretation for DBS: minimize mean-squared error at distance D; minimize maximum error at distance 2D.

Illustration of Dual Interpretation: f[m], f[m] * p̃[m], f[m] * c̃pp[m]; g[m], g[m] * p̃[m], g[m] * c̃pp[m].

Impact of scale parameter S on halftone texture: S1 = 0.5 S2, S2 = 300, S3 = 2.0 S2 (S depends on printer resolution and viewing distance).

Does it make a difference which model we use? Reason for normalization: -Bandwidths of the models differ significantly. -This causes a significant difference in texture between the models. -For any fixed model, a similar range of textures can be achieved by varying the scale parameter. -We would like to compare the models at the same texture scale. Normalization method: -Match the 50% point from the maximum for Nasanen's model.

Normalized contrast sensitivity functions

Comparison between models: Nasanen, Daly, Campbell, Mannos.

Comparison between models (cont.) In 2003, Monga, Geisler, and Evans published a comparison of the effectiveness of four different color HVS models in the context of error diffusion halftoning.* They concluded that the Flohr et al.** model resulted in the best overall image quality. (*V. Monga, W. Geisler, B. L. Evans, IEEE SP Letters, April 2003; **U. Agar and J. P. Allebach, IEEE T-IP, Dec. 2005)

Synopsis of tutorial: General framework for spatiochromatic models for the HVS; Introduction to digital halftoning; Application of spatiochromatic models to design of color halftones: -Embedding of spatiochromatic model within DBS for color halftoning -Use of spatiochromatic model with hybrid screen to improve highlight texture -Application of spatiochromatic model to design of tile sets for periodic clustered-dot screens; Overview of use of HVS models in image quality assessment; Color Image Fidelity Assessor.

Embedding of spatiochromatic model within DBS for color halftoning* (block diagram): the input RGB continuous-tone image and the CMY halftone under test are both converted to YyCxCz (RGB to YyCxCz, CMY to YyCxCz), passed through the luminance and chrominance spatial frequency responses of the HVS model, and differenced; the error measure E = ∫ ẽ(x)ᵀ ẽ(x) dx is used to accept or reject the trial halftone change. (*U. Agar and J. P. Allebach, IEEE T-IP, Dec. 2005)

Results from embedding model in DBS: CDBS in RGB; CDBS in YyCxCz.

Use of spatiochromatic models with hybrid screen to improve highlight texture.* The hybrid screen is a screening algorithm which generates stochastic dispersed-dot textures in highlights and shadows, and periodic clustered-dot textures in midtones. It is based on two main concepts: the supercell and the core. Dispersed-dot: periodic – recursive ordering pattern; stochastic – blue noise. Clustered-dot: periodic – regularly nucleated clusters; stochastic – green noise. Smooth transition. (*Lin and Allebach, IEEE T-IP, Dec. 2006; Lee and Allebach, IEEE T-IP, Feb.)

Simple clustered-dot screen. Halftone using a simple clustered-dot screen shows contouring. Continuous-tone input: a = 0, a = 127/255, a = 16/255.

Supercell approach. A supercell is a set of microcells combined together as a single period of the screen. The supercell is used to increase the number of gray levels and to create more accurately angled screens (e.g. 14.93° vs. 15.9°). Microcell with macrocell growing sequence. Note that the microcells are not identical.

Limitation of the supercell. A clustered-dot microcell with a Bayer macrocell growing sequence shows abrupt texture change (Bayer structure) and a periodic dot withdrawal pattern. A clustered-dot microscreen with a stochastic-dispersed macrocell growing sequence gives a homogeneous dot distribution, but a maze-like artifact and a stochastic dot withdrawal pattern.

Role of highlight and shadow cores. The core is a small region in each microcell where the original microcell growing sequence is ignored and the sequence can be randomized: the first dot can move around within the core, which creates blue-noise-like texture. There are separate core regions for highlights and shadows (highlight core, shadow core). In a conventional supercell, the first dot placement is fixed by the microcell growing sequence; in the hybrid screen with a core region, the microcell growing sequence varies from cell to cell under the macrocell growing sequence.

Improvement of texture quality with the hybrid screen. The hybrid screen (clustered-dot microcell with a 2x2 core and a DBS macrocell growing sequence) gives a more homogeneous dot distribution, no noticeable dot withdrawal pattern, and no maze-like artifact. The clustered-dot microscreen with a stochastic-dispersed macrocell growing sequence gives a homogeneous dot distribution, but a maze-like artifact and a stochastic dot withdrawal pattern.

Joint color screen design framework (block diagram with luminance and chrominance filters, squaring, and summation of the filtered error terms).

Joint screen design results. Cyan halftone: plane-independent screen design vs. joint screen design with the magenta screen.

Joint screen design results. Magenta halftone: plane-independent screen design vs. joint screen design with the cyan screen.

Joint screen design results. Cyan and magenta halftone: plane-independent screen design vs. joint screen design. Dot-on-dot printing decreased; uniform distribution.

Synopsis of tutorial: General framework for spatiochromatic models for the HVS; Introduction to digital halftoning; Application of spatiochromatic models to design of color halftones: -Embedding of spatiochromatic model within DBS for color halftoning -Use of spatiochromatic model with hybrid screen to improve highlight texture -Application of spatiochromatic model to design of tile sets for periodic clustered-dot screens; Overview of use of HVS models in image quality assessment; Color Image Fidelity Assessor.

Application of spatiochromatic model to design of tile sets for periodic clustered-dot screens.* Continuous Parameter Halftone Cell (CPHC). (*F. Baqai and J. P. Allebach, Proc. IEEE, Jan. 2002)

Finding the Discrete Parameter Halftone Cell (DPHC): Compute the number of pixels in the unit cell = |det(N)|. Assign pixels to the unit cell in order of decreasing area of overlap with the CPHC. Skip over pixels that are congruent to a pixel that has already been assigned to the DPHC.

Threshold Assignment by Growing Dots and Holes Simultaneously. i[m], s[m]; Abs. = 0.26, Abs. = 0.53, Abs. = 0.74.

Color Device Model: Neugebauer primaries R_i(λ); D65 illuminant; CIE XYZ CMFs.

Opponent Color Channels. Use the linearized version of the L*a*b* color space to represent the opponent color channels of the human visual system (Flohr et al., 1993).

Spatial Frequency Response of Opponent Channels. Chrominance: [Kolpatzik and Bouman]; Luminance: [Nasanen]. Frequency axis in cycles/sample.

Overall Framework for Perceptual Model, Part I

Overall Framework for Perceptual Model, Part II

Magnified Scanned Textures for Various Screens (absorptance = 0.25): Best, Worst, Optimized for Registration Errors, Conventional. MSE = 9 x Best, MSE = 4 x Best, MSE = 5 x Best.

Weighted Spectra of Error in YyCxCz: Best vs. Worst.

Weighted Spectra of Error in YyCxCz: Optimized for Registration Errors vs. Conventional.

Synopsis of tutorial: General framework for spatiochromatic models for the HVS; Introduction to digital halftoning; Application of spatiochromatic models to design of color halftones; Overview of use of HVS models in image quality assessment; Color Image Fidelity Assessor.

An image quality example: Noise in low-light parts of the scene; Moire on Prof. Bouman's shirt; JPEG artifacts; Poor color rendering – too red; Poor contrast and tone – too dark; Glare from flash. "It's a little frightening to think that this picture is associated with the Transactions on Image Processing" – C. A. Bouman

Imaging Pipeline: Image capture (camera, scanner) → Image processing (enhance, compose, compress) → Image output (display, printer).

Image quality perspectives – image vs. system. System-based: -Resolution (modulation transfer function) -Dynamic range -Noise characteristics. Image-based: -Sharpness -Contrast -Graininess/mottle.

Image quality vs. print quality. Image quality: -Broader viewpoint. -Often focuses on issues that arise during the image processing phase, especially compression. -May also consider image capture and display. Print quality: -Specifically considers issues that arise during printing. (Pipeline: image capture, image processing, image output.)

Typical image quality issues. See discussion of the photograph of Charles A. Bouman et al.

Typical print quality issues: Bands – orthogonal to the process direction; Streaks – parallel to the process direction; Spots (repetitive or random); Color plane registration errors; Ghosting; Toner scatter; Swath misalignment.

Image quality assessment functionalities. Metrics vs. maps: -Local or global strength of a particular defect – a single number. -Map showing defect strength throughout the image – an image. Single-defect vs. summative measures: -Assess strength of a single defect, e.g. noise. -Assess overall image quality – must account for all significant defects and their interactions. Reference vs. no-reference methods.

Image quality assessment factors. Masking – image content may mask visibility of a defect: -Texture -Edges. Tent-pole effect – the worst defect dominates the percept of image quality defects and the overall assessment of image quality.

Pyramid-Based Image Quality Metrics (grouped as monochromatic, chromatic, and not HVS based): Daly, 1993, Visual Difference Predictor (VDP); Lubin, 1995, Sarnoff Visual Discrimination Model (VDM); Taylor & Allebach, 1998, Image Fidelity Assessor (IFA); Mantiuk & Daly, 2005, High Dynamic Range VDP; Wang & Bovik, 2002, SSIM (not HVS based); Wencheng Wu, 2000, Color Image Fidelity Assessor (CIFA); Teo & Heeger, 1994, Perceptual Distortion Metric (PDM); Avadhana & Algazi, 1999, Picture Distortion Metric; Doll et al., 1998, Georgia Tech Vision (GTV) Model; Watson & Solomon, 1997, Model of Visual Contrast Gain Control; Watson & Ahumada, 2005, Model for Foveal Detection of Spatial Contrast; Zhang & Wandell, 1998, Color Image Distortion Maps; Jin, Feng & Newell, 1998, Color Visual Difference Model (CVDM); Lian, 2001, Color Visual Difference Predictor (CVDP).

Structural Similarity (SSIM) Index.* The SSIM Index expresses the similarity of image X and image Y at a point (i, j) as SSIM(i, j) = l(i, j) · c(i, j) · s(i, j), where l(i, j) is a measure of local luminance similarity, c(i, j) is a measure of local contrast similarity, and s(i, j) is a measure of local structure similarity. (*Wang & Bovik, IEEE Signal Processing Letters, March 02; Wang, Bovik, Sheikh & Simoncelli, Trans. on IP, March 04)

Luminance similarity: l(i, j) = (2 μx μy + C1) / (μx² + μy² + C1), where μx = Σk wk xk is the local average luminance, xk is the luminance of the k-th pixel in the window, and w is the window function. A typical window is an 11x11 circular-symmetric Gaussian weighting function.

Contrast similarity: c(i, j) = (2 σx σy + C2) / (σx² + σy² + C2), where σx is the local standard deviation of the luminance, computed with the same 11x11 circular-symmetric Gaussian window function.

Structural similarity: s(i, j) = (σxy + C3) / (σx σy + C3), the correlation coefficient between X and Y. Structure comparison is conducted after luminance subtraction and variance normalization; specifically, Prof. Bovik associates (X − μx)/σx and (Y − μy)/σy with the structure of the two images.
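A compact sketch of the resulting SSIM map, using a uniform window instead of the 11x11 Gaussian for brevity; the constants follow the commonly published defaults and are assumptions here:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_map(x, y, L=255.0, K1=0.01, K2=0.03, win=11):
    """Per-pixel SSIM map of two grayscale images (uniform-window sketch)."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    x = x.astype(float)
    y = y.astype(float)
    mu_x = uniform_filter(x, win)
    mu_y = uniform_filter(y, win)
    var_x = uniform_filter(x ** 2, win) - mu_x ** 2
    var_y = uniform_filter(y ** 2, win) - mu_y ** 2
    cov_xy = uniform_filter(x * y, win) - mu_x * mu_y
    # luminance * contrast * structure in the usual combined form (C3 = C2/2)
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2) /
            ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)))
```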

Synopsis of tutorial: General framework for spatiochromatic models for the HVS; Introduction to digital halftoning; Application of spatiochromatic models to design of color halftones; Overview of use of HVS models in image quality assessment; Color Image Fidelity Assessor.

*C. Taylor, Z. Pizlo, and J. P. Allebach, IS&T PICS, May.

Model for assessment of color image fidelity.* The Color Image Fidelity Assessor (CIFA) is a color extension of Taylor's achromatic IFA. The model predicts perceived image fidelity: -It assesses visible differences in the opponent channels. -It explains the nature of the visible difference (luminance change vs. color shift). Inputs: ideal image, rendered image, viewing parameters; output: image maps of predicted visible differences. (*W. Wu, Z. Pizlo, and J. P. Allebach, IS&T PICS, Apr. 2001)

Chromatic difference (definition). Objective: evaluate the spatial interaction between colors. First transform CIE XYZ to the opponent color space (O1 luminance, O2 red-green, O3 blue-yellow).* Then normalize to obtain opponent chromaticities (o2, o3). Define the chromatic difference analogously to the luminance contrast c1. (*X. Zhang and B. A. Wandell, "A spatial extension of CIELAB for digital color image reproduction," SID-97)

Opponent color representation: (Y, o2, o3); (Y, 0.24, 0.17); (13.3, o2, 0.17); (13.3, 0.24, o3)

Chromatic difference (illustration). Chromatic difference is a measure of chromaticity variation. It is a spatial feature derived from opponent chromaticity that has little dependence upon luminance. In the illustration, the chromatic difference is the amplitude of the sinusoidal grating.

CIFA structure. The ideal and rendered images are each converted to an opponent representation (Y, O2, O3). The ideal and rendered Y images feed the achromatic IFA* (*previous work of Taylor et al.), which operates on multi-resolution Y images and produces an image map of predicted visible luminance differences. The ideal and rendered O2 images feed the red-green chromatic IFA, and the ideal and rendered O3 images feed the blue-yellow chromatic IFA, producing image maps of predicted visible red-green and blue-yellow differences.

Red-green IFA and achromatic IFA (block diagram components): lowpass pyramid; chromatic difference decomposition (contrast decomposition for the achromatic channel); adaptation level; channel response predictor; psychometric LUT (f, o2, c2) for chromatic difference discrimination; psychometric LUT (f, Y, c1) for luminance contrast discrimination; psychometric selector; limited-memory probability summation. Here "contrast" covers both luminance contrast and chromatic difference.

Estimating parameters of the LUT (psychophysical method). Red-green stimulus: (Y, o2, o3) specifies the background color, and c2 is the reference chromatic difference. The observer is asked which stimulus has less chromatic difference; the data give the probability of choosing the left stimulus.

Representative results. Results for f = 16, 8, 4, 2, 1 cycles/degree are drawn in red, green, blue, yellow, and black. The threshold is not affected strongly by the reference chromatic difference. The chromatic channels function like low-pass filters. Plots: threshold vs. reference c2 for red-green discrimination at RG1: (Y, o2, o3) = (5, 0.2, -0.3); threshold vs. reference c3 for blue-yellow discrimination at BY1: (Y, o2, o3) = (5, 0.3, 0.2).

CIFA output for example distortions (hue change): Luminance, R-G, B-Y.

CIFA output for example distortions (blurring): Luminance, R-G, B-Y.

CIFA output for example distortions (limited gamut): Luminance, R-G, B-Y.

Thank you for your attention!