Dithering (Digital Halftoning)


1 Dithering (Digital Halftoning)
Mach bands can be removed by adding noise along the boundary lines. General perceptual principle: replace structured errors with noisy ones and people complain less. Dithering is an old industry, dating to the late 1800s, with methods for producing grayscale images in newspapers and books.
9/16/04 © University of Wisconsin, CS559 Spring 2004

2 Dithering to Black-and-White
Black-and-white is still the preferred way of displaying images in many areas: black ink is cheaper than color, printing with black ink is simpler and hence cheaper, and paper for black ink is not special. To get from color to black and white, first convert to grayscale: I = 0.299R + 0.587G + 0.114B. This formula reflects the fact that green is more representative of perceived brightness than blue is. Note that it is not the equation implied by the RGB→XYZ color-space conversion matrix.
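The grayscale formula above can be sketched as a small C++ helper (the function name and rounding choice are illustrative, not from the slides):

```cpp
#include <cstdint>

// Convert one 8-bit RGB pixel to grayscale using the luma weights from
// the slides (0.299, 0.587, 0.114), rounding to the nearest integer.
inline std::uint8_t to_gray(std::uint8_t r, std::uint8_t g, std::uint8_t b) {
    double i = 0.299 * r + 0.587 * g + 0.114 * b;
    return static_cast<std::uint8_t>(i + 0.5);  // round to nearest
}
```

Because the weights sum to 1.0, a pure white pixel maps to 255 and pure black to 0.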

4 Dithering to Black-and-White
For all dithering we will assume that the image is grayscale and that intensities are represented as values in [0, 1.0). Define a new array of floating-point numbers: new_image[i] = old_image[i] / 256.0f. To get back: output[i] = (unsigned char)floor(new_image[i] * 256).
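The round trip between 8-bit and [0, 1) intensities from the slide can be sketched like this (function names are illustrative):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Map 8-bit intensities into [0, 1) for dithering, as on the slide.
std::vector<float> to_float(const std::vector<std::uint8_t>& img) {
    std::vector<float> out(img.size());
    for (std::size_t i = 0; i < img.size(); ++i)
        out[i] = img[i] / 256.0f;
    return out;
}

// Invert the mapping: floor(v * 256) recovers the original byte exactly.
std::vector<std::uint8_t> to_bytes(const std::vector<float>& img) {
    std::vector<std::uint8_t> out(img.size());
    for (std::size_t i = 0; i < img.size(); ++i)
        out[i] = static_cast<std::uint8_t>(std::floor(img[i] * 256.0f));
    return out;
}
```

Dividing by 256 (not 255) keeps values strictly below 1.0, which is why the floor on the way back is lossless.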

5 Sample Images

6 Threshold Dithering
For every pixel: if the intensity < 0.5, replace with black, else replace with white; 0.5 is the threshold. This is the naïve version of the algorithm. To keep the overall image brightness the same, you should compute the average intensity over the image and use a threshold that gives that average. For example, if the average intensity is 0.6, use a threshold that is higher than 40% of the pixels and lower than the remaining 60%.

7 Naïve Threshold Algorithm

8 Brightness Preserving Algorithm

9 Random Modulation
Add a random amount to each pixel before thresholding, typically a uniformly random amount from [-a, a]. This is pure addition of noise to the image; for better results, add better-quality noise, for instance Gaussian noise (random values sampled from a normal distribution). Use the same procedure as before for choosing the threshold. This is not good for black and white, but OK for more colors: add a small random color to each pixel before finding the closest color in the table.

10 Random Modulation

11 Ordered Dithering
Break the image into small blocks and define a threshold matrix, with a different threshold for each pixel of the block; compare each pixel to its own threshold. The thresholds can be clustered, which looks like newsprint, or "random" (dispersed), which looks better. (Slide figure: image block, threshold matrix, result.)

12 Clustered Dithering

13 Dot Dispersion

14 Comparison
(Slide figure: clustered vs. dot dispersion.)

15 Pattern Dithering
Compute the intensity of each sub-block and use it to index a pattern. This is NOT the same as before: here each sub-block gets one of a fixed number of patterns, and a pixel is determined only by the average intensity of its sub-block, whereas in ordered dithering each pixel is checked against the dithering matrix before being turned on. Used when display resolution is higher than image resolution, which is not uncommon with printers, e.g. a 3×3 output block for each input pixel.

16 Pattern Dither (1)

17 Pattern Dither (2)

18 Floyd-Steinberg Dithering
Start at one corner and work through the image pixel by pixel, usually scanning top to bottom in a zig-zag. Threshold each pixel, then compute the error at that pixel: the difference between what should be there and what you did put there. If you made the pixel 0, e = original; if you made it 1, e = original − 1. Propagate the error to neighbors by adding some proportion of it to each unprocessed neighbor; a mask tells you how to distribute the error (current pixel marked *):

          *    7/16
  3/16  5/16   1/16

It is easiest to work with a floating-point image: convert all pixels to 0-1 floating point first.
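The error-diffusion loop above can be sketched in C++; this version uses a plain left-to-right scan to keep the example short (the slides suggest a zig-zag scan), and the function name is illustrative:

```cpp
#include <vector>

// Floyd-Steinberg dithering of a grayscale image in [0, 1), distributing
// each pixel's quantization error with the 7/16, 3/16, 5/16, 1/16 mask.
std::vector<float> floyd_steinberg(std::vector<float> img, int w, int h) {
    auto at = [&](int x, int y) -> float& { return img[y * w + x]; };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float old = at(x, y);
            float out = (old < 0.5f) ? 0.0f : 1.0f;  // threshold
            float e = old - out;                      // error at this pixel
            at(x, y) = out;
            // Push the error onto unprocessed neighbors.
            if (x + 1 < w)     at(x + 1, y)     += e * 7.0f / 16.0f;
            if (y + 1 < h) {
                if (x > 0)     at(x - 1, y + 1) += e * 3.0f / 16.0f;
                               at(x,     y + 1) += e * 5.0f / 16.0f;
                if (x + 1 < w) at(x + 1, y + 1) += e * 1.0f / 16.0f;
            }
        }
    return img;
}
```

On a constant mid-gray row, the diffused error makes the output alternate black and white, which is exactly the behavior that preserves average brightness.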

19 Floyd-Steinberg Dithering

20 Color Dithering
All the same techniques can be applied, with some modification. Floyd-Steinberg, for example: use a uniform color table; the error is the difference from the nearest color in the color table; error propagation is the same as for grayscale, with each color channel treated independently.

21 Color Dithering

22 Comparison to Uniform Quantization
Same color table!

23 Image Manipulation
We have now looked at basic image formats and color, including transformations of color. Next: operations involving image resampling (scaling, rotating, morphing, …). But first we need some signal processing, which is also important for anti-aliasing, later in the class.

24 Enlarging an Image
To enlarge an image, you have to add pixels between the old pixels. What values do you choose for those pixels?

25 Rotating an Image
Pixels in the new image come from their rotated positions in the original image, and these rotated locations might not be in "nice" places. (Slide figure: the original image and the pixels needed for the new image.)

26 Images as Samples of Functions
We can view an image as a set of samples from an ideal function. This is particularly true if we are plotting a function, but it is also the case that a digital photograph is a sample of some function. If we knew what the function was, we could enlarge the image by resampling the function; failing that, we can reconstruct the function and then resample it.

27 Why Signal Processing?
Signal processing provides the tools for understanding sampling and reconstruction. Sampling: how do we know how many pixels are required, and where to place them? Reconstruction: what happens when a monitor converts a set of points into a continuous picture? How do we enlarge, reduce, rotate, smooth, or enhance images? Jaggies and aliasing: how do we avoid jagged, pixelated lines?

28 A Striking Aliasing Example
One sample per pixel, different sampling frequencies.

29 We will learn…
…why this is better, and how to explain the remaining artifacts. (100 randomized samples per pixel.)

30 Filtering Images
Work in the discrete spatial domain. Convert the filter into a matrix, the filter mask. Move the matrix over each point in the image, multiply the entries by the pixels below them, then sum; e.g. a 3×3 box filter, whose effect is averaging.

31 Box Filter
Box filters smooth by averaging neighbors. In the frequency domain, the box keeps low frequencies and attenuates (reduces) high frequencies, so it is clearly a low-pass filter. Spatial: box; frequency: sinc.

32 Box Filter

33 Handling Boundaries
At (0,0), for instance, you might need pixel data for (-1,-1), which doesn't exist. Option 1: make the output image smaller; don't evaluate pixels you don't have all the input for. Option 2: replicate the edge pixels; equivalent to posn = x + i; if (posn < 0) posn = 0; and so on for the other indices. Option 3: reflect the image about its edge; equivalent to posn = x + i; if (posn < 0) posn = -posn; and similar for the others.

34 Bartlett Filter
A triangle-shaped filter in the spatial domain. Its frequency response is the product of two box-filter responses (a triangle is a box convolved with a box), so it attenuates high frequencies more than a box. Spatial: triangle (box ⊗ box); frequency: sinc².

35 Constructing Masks: 1D
Sample the filter function at the matrix "pixels", then normalize, e.g. for the 1D Bartlett. You can sample to the edge of the pixel or to the middle of the next one; the results are slightly different. (Slide figure: sampled Bartlett masks, e.g. the normalized 3-wide mask (1/4)[1 2 1].)

36 Constructing Masks: 2D
Multiply two 1D masks together using the outer product: M is the 2D mask, m is the 1D mask. For example, the 1D mask [0.2 0.6 0.2] gives:

0.04 0.12 0.04
0.12 0.36 0.12
0.04 0.12 0.04

37 Bartlett Filter

38 Gaussian Filter
Attenuates high frequencies even further. In 2D it is rotationally symmetric, so there are fewer artifacts.

39 Gaussian Filter

40 Constructing Gaussian Mask
Use the binomial coefficients: the Central Limit Theorem (probability) says that with more samples, the binomial converges to the Gaussian. Successive normalized masks: (1/2)[1 1], (1/4)[1 2 1], (1/16)[1 4 6 4 1], (1/64)[1 6 15 20 15 6 1].

41 High-Pass Filters
A high-pass filter can be obtained from a low-pass filter: if we subtract the smoothed image from the original, we must be subtracting out the low frequencies, and what remains must contain only the high frequencies. High-pass masks come from matrix subtraction (the identity mask minus the low-pass mask), e.g. for the 3×3 Bartlett.

42 High-Pass Filter

43 Edge Enhancement
High-pass filters give high values at edges and low values in constant regions. Adding high frequencies back into the image enhances edges. One approach: Image = Image + [Image − smooth(Image)], where smooth(Image) is the low-pass part and the bracketed term is the high-pass part.

44 Edge-Enhance Filter

45 Edge Enhancement

46 Fixing Negative Values
The negative values in high-pass filters can lead to negative image values, which most image formats don't support. Solutions: truncate (chop off values below min or above max); offset (add a constant to move the min value to 0); re-scale (rescale the image values to fill the range (0, max)).

47 Filtering and Color
To filter a color image, simply filter each of R, G, and B separately. Re-scaling and truncating are more difficult to handle: adjusting each channel separately may change the color significantly. Adjusting intensity while keeping hue and saturation may be best, although some loss of saturation is probably OK.

48 Resampling
Making an image larger is like sampling the original function at higher density: you need more pixels to represent the same thing, so a higher pixel density. Reducing an image in size is like sampling at lower density. Generating new samples of the "same" function is called resampling. In theory it is two steps, reconstruction and then sampling, but not in practice. Many other image manipulation tasks require resampling. (Slide figure: original spacing vs. more samples at the same spacing = bigger image.)

49 General Scenario
You are trying to create a new image of some form, and you need data from a particular place in the existing image. Always: figure out where the new sample comes from in the original image.

50 Resampling at a Point
We want to reconstruct the original "function" at the required point, using information from around that point. We do it with a filter; we will justify this shortly. Which filter? We'll look at the Bartlett (triangular) filter, though other filters also work. You might view this as interpolation, but to understand what's happening, we'll view it as filtering.

51 Resampling at a Point
Place a Bartlett filter at the required point. Multiply the value of each neighbor by the filter value at that point, and add them: convolution with discrete samples. The filter size is a parameter. Say the filter is size 3 and you need the value at x = 5.75. You need the image samples I(x) at x = 5, 6, and 7, and the filter values H(s) at s = -0.75, 0.25, and 1.25. Compute: I(5)·H(-0.75) + I(6)·H(0.25) + I(7)·H(1.25).

55 Functional Form for Filters
Consider the Bartlett in 1D: a triangle of width w, nonzero on [-w/2, w/2]. To apply it at a point x_orig, the contribution from a point x where the image has value I(x) is I(x)·H(x − x_orig), with H(s) = 1 − 2|s|/w for |s| ≤ w/2 and 0 otherwise (one common normalization). This extends naturally to 2D as a product of 1D filters.

56 Common Operations
Image scaling by a factor k (e.g. k = 0.5 is half size): to get x_orig given x_new, divide by k: x_orig = x_new / k. Image rotation by an angle θ: x_orig = x_new cos θ + y_new sin θ, y_orig = −x_new sin θ + y_new cos θ (the inverse rotation applied to the new coordinates). This rotates around the bottom-left (top-left?) corner; it's up to you to figure out how to rotate about the center. Be careful of radians vs. degrees: all C++ standard math functions take radians, but OpenGL functions take degrees.

57 Ideal Image Size Reduction
To do ideal image resampling, we would reconstruct the original function based on the samples; this is a requirement for perfect enlargement or size reduction. It is almost never possible in practice, and we'll see why.

58 A Reconstruction Example
Say you have a sine function of a particular frequency, and you sample it too sparsely. You could draw a different sine curve through the samples.

59 Some Intuition
To reconstruct a function, you need to reconstruct every frequency component that's in it. This is phrased in the frequency domain, but only because it's easy to talk about "components" of the function there. We've just seen that to accurately reconstruct high frequencies, you need more samples. The effect on the previous slide is called aliasing: the correct frequency is aliased by the longer-wavelength curve.

60 Nyquist Frequency
Aliasing cannot happen if you sample at a frequency at least twice the highest frequency present: the Nyquist sampling limit. You cannot accurately reconstruct a signal that was sampled below its Nyquist frequency; you do not have the information. There is no point sampling at a higher frequency; you do not gain extra information. Signals that are not bandlimited cannot be accurately sampled and reconstructed: they would require an infinite sampling frequency. Can you reconstruct something with a sharp edge in it? How do you know where the edge should be?

61 Sampling in Spatial Domain
Sampling in the spatial domain is like multiplying by a spike (impulse train) function: you take some ideal function and get data for a regular grid of points.

62 Sampling in Frequency Domain
Sampling in the frequency domain is like convolving with a spike function. This follows from the convolution theorem: multiplication in the spatial domain equals convolution in the frequency domain, and the transform of the spatial spike function is also a spike function.

63 Reconstruction (Frequency Domain)
To reconstruct, we must restore the original spectrum, which can be done by multiplying by a square pulse.

64 Reconstruction (Spatial Domain)
Multiplying by a square pulse in the frequency domain is the same as convolving with a sinc function in the spatial domain.

65 Aliasing Due to Under-sampling
If the sampling rate is too low, high frequencies get reconstructed as lower frequencies: high frequencies from one copy of the replicated spectrum get added to low frequencies from another.

66 More Aliasing
Poor reconstruction also results in aliasing. Consider a signal reconstructed with a box filter in the spatial domain (square box pixels), which means multiplying by a sinc in the frequency domain.

67 Aliasing in Practice
We have two types of aliasing: aliasing due to insufficient sampling frequency, and aliasing due to poor reconstruction. You have some control over reconstruction: if resizing, for instance, use an approximation to the sinc function to reconstruct, instead of the Bartlett we used earlier; a Gaussian is closer to sinc than a Bartlett. Note, though, that the sinc function goes on forever (infinite support), which is inefficient to evaluate. You also have some control over sampling if you are creating images with a computer: remove all sharp edges (high frequencies) from the scene before drawing it; that is, blur character and line edges before drawing.

68 Painterly Filters
Many methods have been proposed to make a photo look like a painting. Today we look at one: Painterly Rendering with Brushes of Multiple Sizes. Basic ideas: build the painting one layer at a time, from biggest to smallest brushes; at each layer, add detail missing from the previous layer.

69 Algorithm 1
function paint(sourceImage, R1 ... Rn)   // take source and several brush sizes
{
    canvas := a new constant-color image
    // paint the canvas with decreasing sized brushes
    for each brush radius Ri, from largest to smallest do
    {
        // apply Gaussian smoothing with a filter of size const * radius;
        // the brush is intended to catch features at this scale
        referenceImage := sourceImage * G(fs Ri)
        // paint a layer
        paintLayer(canvas, referenceImage, Ri)
    }
    return canvas
}

70 Algorithm 2
procedure paintLayer(canvas, referenceImage, R)   // add a layer of strokes
{
    S := a new set of strokes, initially empty
    D := difference(canvas, referenceImage)   // Euclidean distance at every pixel
    for x = 0 to imageWidth step grid do      // step size depends on brush radius
        for y = 0 to imageHeight step grid do
        {
            // sum the error near (x, y)
            M := the region (x-grid/2 .. x+grid/2, y-grid/2 .. y+grid/2)
            areaError := sum(D_i,j for i,j in M) / grid^2
            if (areaError > T) then
            {
                // find the largest error point
                (x1, y1) := max D_i,j in M
                s := makeStroke(R, x1, y1, referenceImage)
                add s to S
            }
        }
    paint all strokes in S on the canvas, in random order
}

71 Results
(Slide figure: original; biggest brush; medium brush added; finest brush added.)

72 Point Style
Uses round brushes. We provide a routine to "paint" round brush strokes into an image for the project.

