Last Time
- Dithering: threshold, brightness-preserving threshold, random modulation, ordered dithering (clustered and dispersed), pattern dithering, Floyd-Steinberg
- Introduction to signal processing: the Fourier transform, some important transforms
  - High frequency: sharp edges; low frequency: smooth variation
- Filtering: box, Bartlett, Gaussian, edge detect (high-pass), edge enhance; how to apply filters
This Week
- Resampling
- Ideal reconstruction
- Aliasing
- Compositing
- Painterly rendering
- Intro to 3D graphics
- Homework 3 due Oct 12 in class
General Scenario
You are trying to create a new image of some form, and you need data from a particular place in the existing image. Always: figure out where the new sample comes from in the original image.
[Figure: a point in the new image mapped back to its source location in the original image]
Resampling at a Point
We want to reconstruct the original "function" at the required point, using information from around that point. We do this with a filter; we will justify that shortly. Which filter? We'll look at the Bartlett (triangular) filter, though other filters also work. You might view this as interpolation, but to understand what's happening, we'll view it as filtering.
[Figure: samples around the query point "?" are used to reconstruct its value]
Resampling at a Point
Place a Bartlett filter at the required point. Multiply each neighboring sample by the filter value at that sample and add the results: convolution with discrete samples. The filter size is a parameter.
Say the filter is size 3 and you need the value at x = 5.75. You need the image samples I(x) at x = 5, 6 and 7, and the filter values H(s) at s = -0.75, 0.25 and 1.25. Compute: I(5)*H(-0.75) + I(6)*H(0.25) + I(7)*H(1.25).
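A minimal C++ sketch of this computation (the names bartlett and resample are illustrative, not from the course code; unlike the slide's raw sum, this version divides by the total weight so the weights need not sum to one):

    #include <cmath>
    #include <vector>

    // Triangle filter of total width w, centered at 0 (unnormalized:
    // 1 at s = 0, falling to 0 at |s| = w/2; any constant scale factor
    // cancels in the normalized sum below).
    double bartlett(double s, double w) {
        double t = 1.0 - std::fabs(s) / (w / 2.0);
        return t > 0.0 ? t : 0.0;
    }

    // Reconstruct the image value at a fractional position x by summing
    // filter-weighted neighbors (convolution with discrete samples).
    double resample(const std::vector<double>& image, double x, double w) {
        int lo = static_cast<int>(std::ceil(x - w / 2.0));
        int hi = static_cast<int>(std::floor(x + w / 2.0));
        double sum = 0.0, totalWeight = 0.0;
        for (int i = lo; i <= hi; ++i) {
            if (i < 0 || i >= static_cast<int>(image.size())) continue;
            double h = bartlett(i - x, w);   // filter value at this sample
            sum += image[i] * h;
            totalWeight += h;
        }
        return totalWeight > 0.0 ? sum / totalWeight : 0.0;
    }

With w = 3 and x = 5.75 the loop visits exactly the samples at 5, 6 and 7, matching the example above.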
Functional Form for Filters
Consider the Bartlett in 1D: a triangle of width w centered at 0, with height 2/w so it has unit area:
    H(s) = (2/w) * (1 - |s|/(w/2))   for |s| <= w/2, and 0 otherwise
To apply it at a point xorig and find the contribution from a point x where the image has value I(x), compute I(x) * H(x - xorig). It extends naturally to 2D as a product of 1D filters:
    H(s, t) = H(s) * H(t)
Common Operations
Image scaling by a factor k (e.g. k = 0.5 gives half size): to get xorig given xnew, divide by k:
    xorig = xnew / k,   yorig = ynew / k
Image rotation by an angle θ: find the source point by rotating back through -θ:
    xorig =  xnew*cos(θ) + ynew*sin(θ)
    yorig = -xnew*sin(θ) + ynew*cos(θ)
This rotates around the bottom-left (top-left?) corner; it's up to you to figure out how to rotate about the center. Be careful of radians vs. degrees: all C++ standard math functions take radians, but OpenGL functions take degrees.
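A hedged C++ sketch of the inverse mapping about an arbitrary center (cx, cy); the function name is made up, and with (cx, cy) = (0, 0) it reduces to the corner rotation above, so picking the center remains the exercise:

    #include <cmath>

    // Map a destination pixel (xnew, ynew) back into the source image for
    // a rotation by `theta` radians about the point (cx, cy). Rotating the
    // image forward by theta means sampling the source rotated by -theta.
    void rotateSourcePos(double xnew, double ynew, double cx, double cy,
                         double theta, double& xorig, double& yorig) {
        double dx = xnew - cx, dy = ynew - cy;
        xorig = cx + dx * std::cos(theta) + dy * std::sin(theta);
        yorig = cy - dx * std::sin(theta) + dy * std::cos(theta);
    }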
Ideal Image Size Reduction
To do ideal image resampling, we would reconstruct the original function based on the samples. This is a requirement for perfect enlargement or size reduction. It is almost never possible in practice, and we'll see why.
A Reconstruction Example
Say you have a sine function of a particular frequency, and you sample it too sparsely. You could draw a different, lower-frequency sine curve through the same samples.
Some Intuition
To reconstruct a function, you need to reconstruct every frequency component that's in it. This is phrased in the frequency domain because it's easy to talk about "components" of the function there. But we've just seen that to accurately reconstruct high frequencies, you need more samples. The effect on the previous slide is called aliasing: the correct frequency is aliased by the longer-wavelength curve.
Nyquist Frequency
Aliasing cannot happen if you sample at more than twice the highest frequency in the signal, the Nyquist sampling limit. You cannot accurately reconstruct a signal that was sampled below its Nyquist frequency: you do not have the information. There is also no point sampling at a higher frequency: you do not gain extra information. Signals that are not bandlimited cannot be accurately sampled and reconstructed, because they would require an infinite sampling frequency. Can you reconstruct something with a sharp edge in it? How do you know where the edge should be?
Sampling in Spatial Domain
Sampling in the spatial domain is like multiplying by a spike function: you take some ideal function and keep its data at a regular grid of points.
Sampling in Frequency Domain
Sampling in the frequency domain is like convolving with a spike function. This follows from the convolution theorem: multiplication in the spatial domain equals convolution in the frequency domain, and the Fourier transform of a spike function is another spike function. The result is that copies of the original spectrum repeat along the frequency axis.
[Figure: original spectrum, and the sampled spectrum with repeated copies]
Reconstruction (Frequency Domain)
To reconstruct, we must restore the original spectrum. That can be done by multiplying by a square pulse that keeps one copy of the spectrum and zeroes out the rest.
[Figure: sampled spectrum multiplied by a square pulse to recover the original]
Reconstruction (Spatial Domain)
Multiplying by a square pulse in the frequency domain is the same as convolving with a sinc function, sinc(x) = sin(πx)/(πx), in the spatial domain.
Aliasing Due to Under-sampling
If the sampling rate is too low, the spectrum copies overlap, and high frequencies get reconstructed as lower frequencies: high frequencies from one copy get added to low frequencies from another.
More Aliasing
Poor reconstruction also results in aliasing. Consider a signal reconstructed with a box filter in the spatial domain (square box pixels), which corresponds to multiplying by a sinc in the frequency domain; the sinc's side lobes let parts of the neighboring spectrum copies through.
Aliasing in Practice
We have two types of aliasing: aliasing due to insufficient sampling frequency, and aliasing due to poor reconstruction.
You have some control over reconstruction. If resizing, for instance, use an approximation to the sinc function to reconstruct (instead of Bartlett, as we used last time). A Gaussian is closer to sinc than Bartlett is; note that the sinc function itself goes on forever (infinite support), which is inefficient to evaluate.
You have some control over sampling if you are creating images with a computer: remove all sharp edges (high frequencies) from the scene before drawing it; that is, blur character and line edges before drawing.
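As an illustration of the filter choice (not course-provided code), a sampled, normalized Gaussian kernel has finite support and can stand in for the infinite sinc when resizing; radius and sigma here are assumptions to tune:

    #include <cmath>
    #include <vector>

    // Build a normalized Gaussian kernel of the given radius.
    // Its weights sum to 1, and its finite support makes it a cheap
    // approximation to the ideal (infinite) sinc reconstruction filter.
    std::vector<double> gaussianKernel(int radius, double sigma) {
        std::vector<double> k(2 * radius + 1);
        double sum = 0.0;
        for (int i = -radius; i <= radius; ++i) {
            k[i + radius] = std::exp(-(i * i) / (2.0 * sigma * sigma));
            sum += k[i + radius];
        }
        for (double& v : k) v /= sum;  // normalize so weights sum to 1
        return k;
    }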
Compositing
Compositing combines components from two or more images to make a new image. Special effects are easier to control when done in isolation, and even many all-live-action sequences are more safely shot in separate layers.
Historically...
- The basis for film special effects (even before computers): create digital imagery and composite it into live action
- It was necessary for films (like Star Wars) where models were used; it was done with film and masks, and was time consuming and expensive
- An important part of animation, even hand animation: backgrounds change more slowly than foregrounds, so composite foreground elements onto a constant background
- A major advance in animation was the multiplane camera, famously used in Snow White (1937)
Perfect Storm
[Figure: composited shots from the film The Perfect Storm]
Animated Example
[Figure: an animated foreground element "over" a background = the composite]
Mattes
A matte is an image that shows which parts of another image are foreground objects. The term dates from film editing and cartoon production. How would I use a matte to insert an object into a background? How are mattes usually generated for television?
Working with Mattes
To insert an object into a background: call the image of the object the source, and put the background into the destination. For all the source pixels, if the matte is white, copy the pixel; otherwise leave the destination unchanged.
To generate mattes:
- Use smart selection tools in Photoshop or similar: they outline the object and convert the outline to a matte
- Blue screen: photograph/film the object in front of a blue background, then consider all the blue pixels in the image to be the background
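A minimal sketch of the matte copy loop, assuming images stored as flat arrays of the same size and a binary (black/white) matte; the Pixel struct and function name are illustrative:

    #include <cstdint>
    #include <vector>

    struct Pixel { std::uint8_t r, g, b; };

    // Copy source pixels into the destination wherever the matte is white;
    // elsewhere the destination (the background) is left unchanged.
    void insertWithMatte(const std::vector<Pixel>& source,
                         const std::vector<std::uint8_t>& matte,
                         std::vector<Pixel>& destination) {
        for (std::size_t i = 0; i < source.size(); ++i) {
            if (matte[i] == 255) destination[i] = source[i];
        }
    }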
Compositing
Compositing is the term for combining images, one over the other. It is used to put special effects into live action, or live action into special effects.
Alpha
Basic idea: encode opacity information in the image by adding an extra channel, the alpha channel, to each image. For each pixel, store R, G, B and alpha:
- alpha = 1 implies full opacity at a pixel
- alpha = 0 implies a completely transparent pixel
There are many interpretations of alpha: whether there is anything in the image at that point (web graphics), or transparency (real-time OpenGL). Images are now in RGBA format, typically 32 bits per pixel (8 bits for alpha). All images in the project are in this format.
Pre-Multiplied Alpha
Instead of storing (R, G, B, α), store (αR, αG, αB, α). The compositing operations in the next several slides are easier with pre-multiplied alpha. To display the image or do color conversions, you must extract RGB by dividing out α. A pixel with α = 0 is always black. There is some loss of precision as α gets small, but it is generally not a big problem.
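A sketch of the conversion in both directions, using float channels in [0, 1]; the RGBA struct and function names are assumptions for illustration:

    struct RGBA { float r, g, b, a; };

    // (R,G,B,a) -> (aR,aG,aB,a): store the color pre-scaled by opacity.
    RGBA premultiply(RGBA p) {
        return { p.r * p.a, p.g * p.a, p.b * p.a, p.a };
    }

    // To display or do color conversions, divide alpha back out.
    // a = 0 carries no color (it is always black), so return black;
    // precision suffers as a gets small, but that is rarely a problem.
    RGBA unpremultiply(RGBA p) {
        if (p.a <= 0.0f) return { 0.0f, 0.0f, 0.0f, 0.0f };
        return { p.r / p.a, p.g / p.a, p.b / p.a, p.a };
    }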
Compositing Assumptions
We will combine two images, f and g, to get a third composite image.
- It is not necessary that one be foreground and one background; the background can remain unspecified
- Both images are the same size and use the same color representation
- Multiple images can be combined in stages, operating on two at a time
Basic Compositing Operation
At each pixel, combine the pixel data from f and the pixel data from g with the equation:
    co = F*cf + G*cg
F and G describe how much of each input image survives, cf and cg are pre-multiplied pixels, and all four channels are calculated. To define a compositing operation, define F and G.
Basic Compositing Operation
F and G are simple functions of the alpha values, chosen independently; different choices give different operations. To code it, you can write one compositor and give it 6 numbers (3 for F, 3 for G) to say which function: a constant of 0 or 1, αf multiplied by -1, 0 or 1, and αg multiplied by -1, 0 or 1.
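A hedged sketch of that single compositor in C++ (struct and parameter names are made up): each of F and G is a constant plus signed multiples of the two alphas, and the result is computed on all four channels of pre-multiplied pixels:

    struct RGBA { float r, g, b, a; };  // pre-multiplied pixel

    // F (or G) = k0 + kf*alpha_f + kg*alpha_g, where k0 is 0 or 1 and
    // kf, kg are each -1, 0 or 1: the "3 numbers" per function.
    struct Coef { float k0, kf, kg; };

    RGBA composite(RGBA f, RGBA g, Coef F, Coef G) {
        float Fv = F.k0 + F.kf * f.a + F.kg * g.a;
        float Gv = G.k0 + G.kf * f.a + G.kg * g.a;
        // co = F*cf + G*cg, applied to all four channels
        return { Fv * f.r + Gv * g.r, Fv * f.g + Gv * g.g,
                 Fv * f.b + Gv * g.b, Fv * f.a + Gv * g.a };
    }

    // Example: "over" has F = 1 and G = 1 - alpha_f:
    //   RGBA out = composite(f, g, {1, 0, 0}, {1, -1, 0});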
Sample Images
[Figure: the two sample images f and g, and their alpha channels]
"Over" Operator
If there's some f, get f; otherwise get g.
[Figure: f over g = composite]
"Over" Operator
Computes the composite with the rule that f covers g:
    co = cf + (1 - αf)*cg    (F = 1, G = 1 - αf)
"Inside" Operator
Get f to the extent that g is there; otherwise nothing.
[Figure: f inside g = composite]
"Inside" Operator
Computes the composite with the rule that only parts of f that are inside g contribute:
    co = αg*cf    (F = αg, G = 0)
"Outside" Operator
Get f to the extent that g is not there; otherwise nothing.
[Figure: f outside g = composite]
"Outside" Operator
Computes the composite with the rule that only parts of f that are outside g contribute:
    co = (1 - αg)*cf    (F = 1 - αg, G = 0)
"Atop" Operator
Get f to the extent that g is there; otherwise g.
[Figure: f atop g = composite]
"Atop" Operator
Computes the composite with the over rule, but restricted to places where there is some g:
    co = αg*cf + (1 - αf)*cg    (F = αg, G = 1 - αf)
"Xor" Operator
Get f to the extent that g is not there, and g to the extent that f is not there.
[Figure: f xor g = composite]
"Xor" Operator
Computes the composite with the rule that f contributes where there is no g, and g contributes where there is no f:
    co = (1 - αg)*cf + (1 - αf)*cg    (F = 1 - αg, G = 1 - αf)
"Clear" Operator
Computes a clear composite (F = 0, G = 0). Note that (0, 0, 0, α>0) is a partially opaque black pixel, whereas (0, 0, 0, 0) is fully transparent and hence has no color.
"Set" Operator
Computes the composite by setting it to equal f: copies f into the composite (F = 1, G = 0).
Compositing Operations
F and G describe how much of each input image survives, cf and cg are pre-multiplied pixels, and all four channels are calculated: co = F*cf + G*cg.

    Operation | F      | G
    ----------+--------+--------
    Over      | 1      | 1 - αf
    Inside    | αg     | 0
    Outside   | 1 - αg | 0
    Atop      | αg     | 1 - αf
    Xor       | 1 - αg | 1 - αf
    Clear     | 0      | 0
    Set       | 1      | 0
Unary Operators
- Darken: makes an image darker (or lighter) without affecting its opacity: darken(f, φ) = (φ·rf, φ·gf, φ·bf, αf)
- Dissolve: makes an image more transparent without affecting its color: dissolve(f, δ) = (δ·rf, δ·gf, δ·bf, δ·αf)
"PLUS" Operator
Computes the composite by simply adding f and g, with no overlap rules: co = cf + cg. Useful for defining cross-dissolve in terms of compositing:
    cross(f, g, t) = dissolve(f, 1 - t) plus dissolve(g, t)
Obtaining Alpha Values
- Hand generate (paint a grayscale image)
- Automatically create by segmenting an image into foreground and background:
  - Blue-screening is the analog method; remarkably complex to get right
  - "Lasso" is the Photoshop operation
- With synthetic imagery, use a special background color that does not occur in the foreground (brightest blue or green is common)
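A deliberately naive sketch of pulling a matte from a blue background; as noted above, doing this well is remarkably complex, and the threshold values here are invented:

    #include <cstdint>
    #include <vector>

    struct Pixel { std::uint8_t r, g, b; };

    // Classify a pixel as background if it is close to bright blue.
    // Returns a matte: 255 where the foreground object is, 0 elsewhere.
    std::vector<std::uint8_t> blueScreenMatte(const std::vector<Pixel>& img) {
        std::vector<std::uint8_t> matte(img.size());
        for (std::size_t i = 0; i < img.size(); ++i) {
            const Pixel& p = img[i];
            bool isBlue = p.b > 200 && p.r < 80 && p.g < 80;  // crude test
            matte[i] = isBlue ? 0 : 255;
        }
        return matte;
    }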
Compositing With Depth
You can store pixel "depth" instead of alpha; then compositing can truly take foreground and background into account. This is generally only possible with synthetic imagery. Image-based rendering is an area of graphics that, in part, tries to composite photographs taking depth into account.
Painterly Filters
Many methods have been proposed to make a photo look like a painting. Today we look at one: Painterly Rendering with Brushes of Multiple Sizes. Basic ideas:
- Build the painting one layer at a time, from biggest to smallest brushes
- At each layer, add detail missing from the previous layer
Algorithm 1
function paint(sourceImage, R1 ... Rn)   // take source and several brush sizes
{
    canvas := a new constant color image
    // paint the canvas with decreasing sized brushes
    for each brush radius Ri, from largest to smallest do
    {
        // apply Gaussian smoothing with a filter of size const * radius;
        // the brush is intended to catch features at this scale
        referenceImage := sourceImage * G(fs Ri)
        // paint a layer
        paintLayer(canvas, referenceImage, Ri)
    }
    return canvas
}
Algorithm 2
procedure paintLayer(canvas, referenceImage, R)   // add a layer of strokes
{
    S := a new set of strokes, initially empty
    D := difference(canvas, referenceImage)   // euclidean distance at every pixel
    for x = 0 to imageWidth step grid do      // step size grid depends on brush radius
        for y = 0 to imageHeight step grid do
        {
            // sum the error near (x, y)
            M := the region (x - grid/2 .. x + grid/2, y - grid/2 .. y + grid/2)
            areaError := sum(Dij for i, j in M) / grid^2
            if (areaError > T) then
            {
                // find the largest error point
                (x1, y1) := max Dij in M
                s := makeStroke(R, x1, y1, referenceImage)
                add s to S
            }
        }
    paint all strokes in S on the canvas, in random order
}
Results
[Figure: original photo; biggest brush only; medium brush added; finest brush added]
Point Style
This style uses round brushes. We provide a routine to "paint" round brush strokes into an image for the project.
Where to Now...
We are now done with images. We will spend several weeks on the mechanics of 3D graphics:
- Coordinate systems and viewing
- Clipping
- Drawing lines and polygons
- Lighting and shading
We will finish the semester with modeling and some additional topics.
Graphics Toolkits
Graphics toolkits typically take care of the details of producing images from geometry.
- Input (via API functions): where the objects are located and what they look like; where the camera is and how it behaves; parameters for controlling the rendering
- Functions (via API): perform well-defined operations based on the input environment
- Output: pixel data in a framebuffer, an image in a special part of memory; the data can be put on the screen or read back for processing (also part of the toolkit)
OpenGL
OpenGL is an open-standard graphics toolkit, derived from SGI's GL toolkit. It provides a range of functions for modeling, rendering and manipulating the framebuffer. What makes a good toolkit? Alternatives: Direct3D and Java3D (more complex and less well supported).
A Good Toolkit...
Everything is a trade-off. Desirable properties (not an exhaustive list):
- Functionality
- Compact: a minimalist set of commands
- Orthogonal: commands do different things and can be combined in a consistent way
- Speed
- Ease of use and documentation
- Portability
- Extensibility
- Standards and ownership