1
Image 2013, Fall
2
Image Processing
Quantization: uniform quantization, halftoning & dithering
Pixel operations: add random noise, add luminance, add contrast, add saturation
Filtering: blur, detect edges
Warping: scale, rotate, warp
Combining: composite, morph
Ref: Thomas Funkhouser, Princeton Univ.
3
What is an Image? An image is a 2D rectilinear array of samples A pixel is a sample, not a little square
4
Image Resolution
Intensity resolution: each pixel has only "Depth" bits for colors/intensities
Spatial resolution: image has only "Width" x "Height" pixels
Temporal resolution: monitor refreshes images at only "Rate" Hz

Typical Resolutions
                Width x Height   Depth   Rate
NTSC            640 x 480        8       30
Workstation     1280 x 1024      24      75
Film            3000 x 2000      12      24
Laser Printer   6600 x 5100      1       -
5
Source of Error
Intensity quantization: not enough intensity resolution
Spatial aliasing: not enough spatial resolution
Temporal aliasing: not enough temporal resolution
(Figure: real value vs. sample value)
6
Quantization Artifact due to limited intensity resolution Frame buffers have limited number of bits per pixel Physical devices have limited dynamic range
7
Uniform Quantization: rounding (round each value to the nearest representable level)
8
Uniform Quantization Image with decreasing bits per pixel
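As a rough illustration of the rounding step above, here is a minimal sketch (not from the slides) of uniformly quantizing an 8-bit gray value down to a smaller number of bits; the function name and the "map back to 0..255 for display" convention are assumptions.

#include <math.h>

/* Sketch: uniform quantization of an 8-bit value to "bits" bits (1 <= bits <= 8)
   by rounding to the nearest of 2^bits evenly spaced levels, then mapping the
   chosen level back to 0..255 for display. */
unsigned char quantize_uniform(unsigned char value, int bits)
{
    int levels = 1 << bits;                        /* number of output levels */
    float step = 255.0f / (float)(levels - 1);     /* spacing between levels  */
    int level = (int)floorf(value / step + 0.5f);  /* round to nearest level  */
    return (unsigned char)(level * step + 0.5f);   /* reconstructed intensity */
}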
9
Reducing Effects of Quantization
Our eyes tend to integrate or average fine detail into an overall intensity.
Halftoning: a technique used in newspaper printing; the process of converting a continuous-tone image into an image that can be printed with one color of ink (grayscale) or four colors of ink (color).
Dithering: used to create a wide variety of patterns for backgrounds, fills, and shadings, as well as for creating halftones and for correcting aliasing. Halftones are created through a process called dithering.
Dithering methods vs. halftone results
10
Classical Halftoning Use dots of varying size to represent intensities Area of dots proportional to intensity in image
11
Halftoning
A technique used in newspaper printing. Only two intensities are possible: a blob of ink or no blob of ink. But the size of the blob can be varied; dither patterns of small dots can also be used.
12
Classical Halftoning: Original Image vs. Halftone Image
13
Halftone patterns
Use a cluster of pixels to represent intensity: trade spatial resolution for intensity resolution.
2 by 2 pixel grid patterns, 3 by 3 pixel grid patterns (mask); see the sketch below.
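A minimal sketch of this masking idea (not the slides' exact patterns): ordered dithering with a 2x2 mask, where each pixel is thresholded against a position-dependent entry so a block of black/white dots approximates the gray level. The mask values and function name are assumptions.

/* Sketch: 2x2 ordered dithering. The mask trades spatial resolution for
   intensity resolution: a 2x2 block of on/off dots can show 5 apparent
   gray levels. */
static const int mask2x2[2][2] = {
    { 0, 2 },
    { 3, 1 }
};

/* gray: input intensity 0..255; (x, y): pixel position in the image */
int halftone_pixel(unsigned char gray, int x, int y)
{
    int threshold = (mask2x2[y % 2][x % 2] * 255) / 4;  /* scale mask to 0..255 */
    return gray > threshold ? 255 : 0;                  /* binary output        */
}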
14
Halftone patterns
15
Dithering Distribute errors among pixels Exploit spatial integration in our eye Display greater range of perceptible intensities
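One common way to distribute quantization error among pixels is Floyd-Steinberg error diffusion; the slides do not name a specific method, so the sketch below is only an assumed example (the buffer layout and weights follow the standard Floyd-Steinberg scheme).

/* Sketch: Floyd-Steinberg error diffusion to black/white output.
   img holds grayscale values 0..255 as ints so the diffused error can
   temporarily push values outside that range. */
void error_diffuse(int *img, int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int old = img[y * width + x];
            int q   = (old < 128) ? 0 : 255;   /* quantize to black/white */
            int err = old - q;                 /* quantization error      */
            img[y * width + x] = q;
            /* push the error onto not-yet-visited neighbors (7/16, 3/16, 5/16, 1/16) */
            if (x + 1 < width)                    img[y * width + (x + 1)]       += err * 7 / 16;
            if (x > 0 && y + 1 < height)          img[(y + 1) * width + (x - 1)] += err * 3 / 16;
            if (y + 1 < height)                   img[(y + 1) * width + x]       += err * 5 / 16;
            if (x + 1 < width && y + 1 < height)  img[(y + 1) * width + (x + 1)] += err * 1 / 16;
        }
    }
}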
16
Halftoning – Moire Patterns
Moire [more-ray] patterns: repeated use of the same dot pattern for a particular shade results in a repeated pattern (perceived as a moire pattern); moire patterns result from overlapping repetitive elements.
Instead, randomize the halftone pattern.
17
Halftoning – Moire Patterns
19
Signal Processing (1/2)
Signal: real world: continuous signal; digital world: discrete signal.
An image is a signal in the spatial domain (not the temporal domain): intensity variation over space.
20
Signal Processing (2/2)
Sampling and reconstruction process:
Sampling: the process of selecting a finite set of values from a signal; samples are the selected values.
Reconstruction: recreate the original continuous signal from the samples.
21
Sampling and Reconstruction
22
Aliasing
In general: artifact due to under-sampling or poor reconstruction, caused by sampling below the Nyquist rate.
Specifically, in graphics: spatial aliasing, temporal aliasing.
23
Nyquist rate and aliasing
Nyquist rate: a lower bound on the sampling rate needed to avoid aliasing.
Sampling at the Nyquist rate: sampling rate > 2 * max frequency.
Frequency: number of cycles per second (Hz).
24
Spatial Aliasing Artifact due to limited spatial resolution
25
Spatial Aliasing Artifact due to limited spatial resolution
26
Temporal Aliasing
Artifact due to limited temporal resolution: spinning backward.
The wheel is spinning at 5 revolutions per second; the wheels in the 3rd row appear to be spinning backward.
27
Temporal Aliasing Artifact due to limited temporal resolution Flickering
28
Anti-Aliasing
Sample at a higher rate: not always possible, and doesn't always solve the problem.
Pre-filter to form a bandlimited signal (low-pass filter): trades aliasing for blurring.
29
Pixel Operations: Grayscale
Compute luminance L:
L = 0.30R + 0.59G + 0.11B (old)
L = 0.2125R + 0.7154G + 0.0721B (CRT, HDTV)
Original Image vs. Grayscale Image
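A minimal per-pixel sketch of the grayscale conversion above, using the first set of weights; the Pixel struct and function name are assumptions, not the slides' code.

/* Sketch: luminance of one RGB pixel using L = 0.30R + 0.59G + 0.11B. */
typedef struct { unsigned char r, g, b; } Pixel;

unsigned char luminance(Pixel p)
{
    float L = 0.30f * p.r + 0.59f * p.g + 0.11f * p.b;
    return (unsigned char)(L + 0.5f);   /* round to nearest integer gray */
}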
30
Pixel Operations: Negative Image
Simply invert pixel components: (R, G, B) -> (255-R, 255-G, 255-B).
Ex) RGB(5, 157, 178) -> RGB(250, 98, 77)
Original Image vs. Negative Image
31
Pixel Operations: Adjusting Brightness
Simply scale pixel components: (R, G, B) -> (R, G, B) * scale.
Must clamp to range (e.g., 0 to 255).
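A minimal sketch of brightness scaling with clamping, reusing the Pixel struct from the grayscale sketch above; clamp255 is an assumed helper, not from the slides.

/* Sketch: clamp a float to the displayable range 0..255. */
static unsigned char clamp255(float v)
{
    if (v < 0.0f)   return 0;
    if (v > 255.0f) return 255;
    return (unsigned char)(v + 0.5f);
}

/* Sketch: scale every component by the same factor, then clamp. */
void adjust_brightness(Pixel *p, float scale)
{
    p->r = clamp255(p->r * scale);
    p->g = clamp255(p->g * scale);
    p->b = clamp255(p->b * scale);
}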
32
Pixel Operations: Adjusting Contrast
Compute the mean luminance L over all pixels (luminance = 0.30*r + 0.59*g + 0.11*b).
Scale each pixel component's deviation from L:
R -> L + (R-L)*scale
G -> L + (G-L)*scale
B -> L + (B-L)*scale
Must clamp to range (e.g., 0 to 255).
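A matching sketch of the contrast rule, reusing Pixel and clamp255 from the sketches above; meanL is assumed to be the image-wide mean luminance computed beforehand.

/* Sketch: scale each component's deviation from the mean luminance, then clamp. */
void adjust_contrast(Pixel *p, float meanL, float scale)
{
    p->r = clamp255(meanL + (p->r - meanL) * scale);
    p->g = clamp255(meanL + (p->g - meanL) * scale);
    p->b = clamp255(meanL + (p->b - meanL) * scale);
}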
33
Pixel Operations Changing Brightness Changing Contrast
34
Pixel Operations: Adjusting Saturation
Convert each pixel RGB -> HSV, scale the S (saturation) component, then convert HSV -> RGB.
Must clamp to range (e.g., 0 to 255).
35
Filtering: Convolution (spatial domain)
Output pixel is a weighted sum of pixels in a neighborhood of the input image.
The pattern of weights is the "filter".
36
Filtering Convolution
37
Filtering: Adjust Blurriness
Convolve with a filter whose entries sum to one (filter sum = 1).
Each pixel becomes a weighted average of its neighbors.
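A minimal 3x3 convolution sketch (not the slides' code): with a kernel of nine 1/9 entries it blurs, and the edge-detection and sharpening slides that follow just swap in different kernels. clamp255 is the helper assumed earlier; borders are handled by clamping coordinates to the image.

/* Sketch: 3x3 convolution over a width*height grayscale buffer. */
void convolve3x3(const unsigned char *src, unsigned char *dst,
                 int width, int height, const float kernel[3][3])
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            float sum = 0.0f;
            for (int ky = -1; ky <= 1; ky++) {
                for (int kx = -1; kx <= 1; kx++) {
                    int sx = x + kx, sy = y + ky;
                    if (sx < 0) sx = 0;  if (sx >= width)  sx = width - 1;
                    if (sy < 0) sy = 0;  if (sy >= height) sy = height - 1;
                    sum += kernel[ky + 1][kx + 1] * src[sy * width + sx];
                }
            }
            dst[y * width + x] = clamp255(sum);   /* weighted sum of the neighborhood */
        }
    }
}

For the blur case each kernel entry would be 1/9 so the weights sum to one; edge detection and sharpening reuse the same loop with their own kernels.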
38
Filtering Edge Detection Convolve with a filter that finds differences between neighbor pixels
39
Filtering Sharpening Convolve with a Sharpening filter
40
Filtering: Embossing
Convert the image to grayscale, convolve with an embossing filter, add 0.5 to each pixel, and clamp to range (e.g., 0 to 255).
41
Summary
Image representation: a pixel is a sample, not a little square; images have limited resolution.
Halftoning and dithering: reduce visual artifacts due to quantization; distribute errors among pixels, exploiting spatial integration in our eye.
Sampling and reconstruction: reduce visual artifacts due to aliasing; filter to avoid undersampling; blurring is better than aliasing.
42
Image Processing
Quantization: uniform quantization, halftoning & dithering
Pixel operations: add random noise, add luminance, add contrast, add saturation
Filtering: blur, detect edges
Warping: scale, rotate, warp
Combining: composite, morph
Ref: Thomas Funkhouser, Princeton Univ.
43
Image Warping: move pixels of an image
Mapping: forward, reverse
Resampling: point sampling, triangle filter, Gaussian filter
44
Mapping
Define a transformation: describe the destination (x, y) for every location (u, v) in the source (or vice versa, if invertible).
Source (u, v) -> Destination (x, y)
45
Example Mappings: Scale by factor
x = factor * u
y = factor * v
Source (u, v) -> Destination (x, y)
46
Example Mappings: Rotate by theta degrees
x = u*cos(theta) - v*sin(theta)
y = u*sin(theta) + v*cos(theta)
Source (u, v) -> Destination (x, y)
47
Example Mappings: Shear
Shear in X by factor: x = u + factor * v, y = v
Shear in Y by factor: x = u, y = v + factor * u
48
Other Mappings
Any function of u and v: x = fx(u, v), y = fy(u, v)
49
Image Warping Implementation I: Forward mapping
for (int u = 0; u < umax; u++) {
    for (int v = 0; v < vmax; v++) {
        float x = fx(u, v);
        float y = fy(u, v);
        dst(x, y) = src(u, v);
    }
}
50
Forward Mapping Iterate over source image Many source pixels can map to same destination pixel Some destination pixels may not be covered
51
Image Warping Implementation II: Reverse mapping
for (int x = 0; x < xmax; x++) {
    for (int y = 0; y < ymax; y++) {
        float u = fx_inv(x, y);   // f_x^-1
        float v = fy_inv(x, y);   // f_y^-1
        dst(x, y) = src(u, v);
    }
}
52
Reverse Mapping Iterate over destination image Must resample source May oversample, but much simpler
53
Resampling Evaluate source image at arbitrary (u,v) (u, v) does not usually have integer coordinates
54
Resampling: point sampling, triangle filter, Gaussian filter
55
Point Sampling
Take the value at the closest pixel:
int iu = trunc(u + 0.5);
int iv = trunc(v + 0.5);
dst(x, y) = src(iu, iv);
This method is simple, but it causes aliasing.
56
Filtering Compute weighted sum of pixel neighborhood Weights are normalized values of kernel function Equivalent to convolution at samples
57
Triangle Filtering Kernel is triangle function
58
Triangle Filtering (with width = 1)
Bilinearly interpolate the four closest pixels:
a = linear interpolation of src(u1, v2) and src(u2, v2)
b = linear interpolation of src(u1, v1) and src(u2, v1)
dst = linear interpolation of a and b
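A minimal sketch of that bilinear interpolation for a grayscale buffer; the pix() helper (clamped integer-pixel fetch) and the function name are assumptions.

#include <math.h>

/* Assumed helper: fetch an integer pixel, clamping coordinates to the image. */
static float pix(const unsigned char *img, int width, int height, int x, int y)
{
    if (x < 0) x = 0;  if (x >= width)  x = width - 1;
    if (y < 0) y = 0;  if (y >= height) y = height - 1;
    return (float)img[y * width + x];
}

/* Sketch: bilinear resampling at a non-integer (u, v). */
float bilinear_sample(const unsigned char *img, int width, int height,
                      float u, float v)
{
    int u1 = (int)floorf(u), v1 = (int)floorf(v);
    int u2 = u1 + 1, v2 = v1 + 1;
    float fu = u - u1, fv = v - v1;   /* fractional offsets in [0, 1) */

    /* following the slide: a blends the v2 row, b blends the v1 row */
    float a = (1 - fu) * pix(img, width, height, u1, v2) + fu * pix(img, width, height, u2, v2);
    float b = (1 - fu) * pix(img, width, height, u1, v1) + fu * pix(img, width, height, u2, v1);
    return (1 - fv) * b + fv * a;     /* blend a and b                */
}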
59
Gaussian Filtering Kernel is Gaussian function Filter width =2
60
Filtering Methods Comparison
Trade-offs: aliasing versus blurring, computation speed.
Point vs. Bilinear vs. Gaussian
61
Image Warping Implementation: Reverse mapping with resampling
for (int x = 0; x < xmax; x++) {
    for (int y = 0; y < ymax; y++) {
        float u = fx_inv(x, y);
        float v = fy_inv(x, y);
        dst(x, y) = resample_src(u, v, w);
    }
}
62
Example: Scale
scale(src, dst, sx, sy):
float w = max(1.0/sx, 1.0/sy);
for (int x = 0; x < xmax; x++) {
    for (int y = 0; y < ymax; y++) {
        float u = x / sx;
        float v = y / sy;
        dst(x, y) = resample_src(u, v, w);
    }
}
63
Example: Rotate
rotate(src, dst, theta):
for (int x = 0; x < xmax; x++) {
    for (int y = 0; y < ymax; y++) {
        float u = x*cos(-theta) - y*sin(-theta);
        float v = x*sin(-theta) + y*cos(-theta);
        dst(x, y) = resample_src(u, v, w);
    }
}
64
Example: Fun
swirl(src, dst, theta):
for (int x = 0; x < xmax; x++) {
    for (int y = 0; y < ymax; y++) {
        float u = rot(dist(x, xcenter)*theta);
        float v = rot(dist(y, ycenter)*theta);
        dst(x, y) = resample_src(u, v, w);
    }
}
65
Combining Images
Image compositing: blue-screen mattes, alpha channel
Image morphing: specifying correspondence, warping, blending
66
Image Compositing
Separate an image into "elements", render them independently, then composite them together.
Applications: cel animation, chroma-keying, blue-screen matting.
67
Blue Screen Matting
Composite foreground and background images:
Create the background image
Create the foreground image with a blue background
Insert non-blue foreground pixels into the background
Problem: no partial coverage
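A minimal sketch of the naive insert step, reusing the Pixel struct assumed earlier; the "is it blue?" test is an illustrative threshold, not the slides' rule, and the all-or-nothing copy is exactly what causes the partial-coverage problem noted above.

/* Sketch: naive blue-screen composite, copying foreground pixels into the
   background wherever the foreground is not "blue enough". */
void blue_screen(const Pixel *fg, const Pixel *bg, Pixel *out, int npixels)
{
    for (int i = 0; i < npixels; i++) {
        int is_blue = fg[i].b > 100 &&
                      fg[i].b > fg[i].r + 40 &&
                      fg[i].b > fg[i].g + 40;     /* assumed threshold test       */
        out[i] = is_blue ? bg[i] : fg[i];         /* binary: no partial coverage  */
    }
}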
68
Alpha Channel
Encodes pixel coverage information:
α = 0: no coverage (or transparent)
α = 1: full coverage (or opaque)
0 < α < 1: partial coverage (or semi-transparent)
Example: α = 0.3
69
Composite with Alpha Control the linear interpolation of foreground and background pixels when elements are composited
70
Pixels with Alpha
Alpha channel convention: (r, g, b, α) represents a pixel that is α-covered by the color C = (r/α, g/α, b/α).
Color components are premultiplied by α, so the (r, g, b) value can be displayed directly, and the composition algebra is closed.
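With premultiplied alpha, the "over" operator used on the following slides is just a per-component blend; here is a minimal sketch (the struct name and the 0..1 float convention are assumptions).

/* Sketch: Porter-Duff "over" for premultiplied-alpha pixels in 0..1. */
typedef struct { float r, g, b, a; } PixelA;

PixelA over(PixelA A, PixelA B)    /* result = A over B */
{
    PixelA out;
    float k = 1.0f - A.a;          /* fraction of B that shows through A */
    out.r = A.r + k * B.r;
    out.g = A.g + k * B.g;
    out.b = A.b + k * B.b;
    out.a = A.a + k * B.a;
    return out;
}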
71
Semi-Transparent Objects
Suppose we put A over B over background G:
How much of B is blocked by A? α_A
How much of B shows through A? (1 - α_A)
How much of G shows through both A and B? (1 - α_A)(1 - α_B)
α_A A + (1 - α_A) α_B B + (1 - α_A)(1 - α_B) G = A over [ B over G ]
72
Opaque Objects How do we combine 2 partially covered pixels? 3 possible colors (0, A, B) 4 regions (0, A, B, AB)
73
Composition Algebra 12 reasonable combinations
74
Image Morphing Animate transition between two images
75
Cross-Dissolving
Blend images with the "over" operator: the alpha of the bottom image is 1.0, and the alpha of the top image varies from 0.0 to 1.0.
blend(i, j) = (1-t) src(i, j) + t dst(i, j),  0 <= t <= 1
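A minimal sketch of that blend for two same-sized grayscale buffers; the function name and byte layout are assumptions.

/* Sketch: cross-dissolve; t = 0 gives src, t = 1 gives dst. */
void cross_dissolve(const unsigned char *src, const unsigned char *dst,
                    unsigned char *out, int npixels, float t)
{
    for (int i = 0; i < npixels; i++)
        out[i] = (unsigned char)((1.0f - t) * src[i] + t * dst[i] + 0.5f);
}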
76
Image Morphing: combines warping and cross-dissolving
77
Feature Based Morphing
Beier & Neely use pairs of lines to specify the warp.
Given p in the destination image, where is p' in the source image?
78
Feature Based Morphing: feature matching -> image warping -> morphed image
79
Feature Based Morphing: morphing vs. warping
Example: "Black or White" (Michael Jackson)
80
Image Processing
Quantization: uniform quantization, halftoning & dithering
Pixel operations: add random noise, add luminance, add contrast, add saturation
Filtering: blur, detect edges
Warping: scale, rotate, warp
Combining: composite, morph
Ref: Thomas Funkhouser, Princeton Univ.