Last Time: Dithering, Introduction to Signal Processing, Filtering


Last Time Dithering: threshold, brightness-preserving threshold, random modulation, ordered dithering (clustered and dispersed), pattern dithering, Floyd-Steinberg. Introduction to signal processing: the Fourier transform and some important transforms; high frequencies correspond to sharp edges, low frequencies to smooth variation. Filtering: box, Bartlett, Gaussian, edge detect (high-pass), edge enhance; how to apply filters. 10/7/2009 © NTUST

This Week Resampling, ideal reconstruction, aliasing, compositing, painterly rendering, and an intro to 3D graphics. Homework 3 is due Oct 12 in class.

General Scenario You are trying to create a new image of some form, and you need data from a particular place in an existing image. Always: figure out where the new sample comes from in the original image.

Resampling at a Point We want to reconstruct the original “function” at the required point, using information from around that point. We do it with a filter, which we will justify shortly. Which filter? We’ll look at the Bartlett (triangular) filter; other filters also work. You might view this as interpolation, but to understand what’s happening we’ll view it as filtering.

Resampling at a Point Place a Bartlett filter at the required point, multiply each neighbor’s value by the filter value at that point, and add them up: convolution with discrete samples. The filter size is a parameter. Say the filter is size 3 and you need the value at x = 5.75. You need the image samples I(x) at x = 5, 6 and 7, and the filter values H(s) at s = -0.75, 0.25 and 1.25. Compute: I(5)*H(-0.75) + I(6)*H(0.25) + I(7)*H(1.25).

Functional Form for Filters Consider the Bartlett in 1D: a triangle of width w centered at 0, H(s) = (2/w)(1 - 2|s|/w) for |s| &lt;= w/2, and 0 elsewhere. To apply it at a point x_orig and find the contribution from point x, where the image has value I(x), evaluate I(x) * H(x - x_orig). This extends naturally to 2D.

Common Operations Image scaling by a factor k (e.g. k = 0.5 gives half size): to get x_orig given x_new, divide by k. Image rotation by an angle θ: rotate the new sample position by -θ to find where it comes from in the original. This rotates around the bottom-left (or top-left) corner; it’s up to you to figure out how to rotate about the center. Be careful of radians vs. degrees: all C++ standard math functions take radians, but OpenGL functions take degrees.

Ideal Image Size Reduction To do ideal image resampling, we would reconstruct the original function based on the samples; this is a requirement for perfect enlargement or size reduction. It is almost never possible in practice, and we’ll see why.

A Reconstruction Example Say you have a sine function of a particular frequency, and you sample it too sparsely. You could draw a different sine curve through the same samples.

Some Intuition To reconstruct a function, you need to reconstruct every frequency component in it. This is stated in the frequency domain, but only because it’s easy to talk about “components” of the function there. We’ve just seen that to accurately reconstruct high frequencies, you need more samples. The effect on the previous slide is called aliasing: the correct frequency is aliased by the longer-wavelength curve.

Nyquist Frequency Aliasing cannot happen if you sample at a frequency at least twice the highest frequency in the signal: the Nyquist sampling limit. You cannot accurately reconstruct a signal that was sampled below its Nyquist frequency; you do not have the information. There is also no point sampling at a higher frequency; you gain no extra information. Signals that are not band-limited cannot be accurately sampled and reconstructed: they would require an infinite sampling frequency. Can you reconstruct something with a sharp edge in it? How do you know where the edge should be?

Sampling in Spatial Domain Sampling in the spatial domain is like multiplying by a spike (comb) function: you take some ideal function and get data at a regular grid of points.

Sampling in Frequency Domain In the frequency domain, sampling is like convolving with a spike function. This follows from the convolution theorem: multiplication in the spatial domain equals convolution in the frequency domain, and the transform of a spatial spike function is itself a spike function. The sampled spectrum is the original spectrum repeated at regular intervals.

Reconstruction (Frequency Domain) To reconstruct, we must restore the original spectrum. That can be done by multiplying the sampled spectrum by a square pulse, keeping the central copy and discarding the repeats.

Reconstruction (Spatial Domain) Multiplying by a square pulse in the frequency domain is the same as convolving with a sinc function in the spatial domain.

Aliasing Due to Under-sampling If the sampling rate is too low, high frequencies get reconstructed as lower frequencies: high frequencies from one copy of the spectrum get added to low frequencies from another.

More Aliasing Poor reconstruction also results in aliasing. Consider a signal reconstructed with a box filter in the spatial domain (square box pixels), which means multiplying by a sinc in the frequency domain: the sinc lets parts of the repeated copies of the spectrum through.

Aliasing in Practice We have two types of aliasing: aliasing due to an insufficient sampling frequency, and aliasing due to poor reconstruction. You have some control over reconstruction: if resizing, for instance, use an approximation to the sinc function to reconstruct (instead of the Bartlett, as we used last time); a Gaussian is closer to sinc than a Bartlett. Note, though, that the sinc function goes on forever (infinite support), which makes it inefficient to evaluate. You have some control over sampling if you are creating images with a computer: remove sharp edges (high frequencies) from the scene before drawing it, that is, blur character and line edges before drawing.

Compositing Compositing combines components from two or more images to make a new image. Special effects are easier to control when done in isolation, and even all-live-action sequences are often more safely shot in separate layers.

Historically … Compositing is the basis for film special effects (even before computers): create imagery separately and composite it into live action. It was necessary for films like Star Wars, where models were used; it was done with film and masks, and was time-consuming and expensive. It is also an important part of animation, even hand animation: backgrounds change more slowly than foregrounds, so foreground elements are composited onto a constant background. A major advance in animation was the multiplane camera, first used in Snow White (1937).

Perfect Storm

Animated Example (figure): foreground over background gives the composite.

Mattes A matte is an image that shows which parts of another image are foreground objects. The term dates from film editing and cartoon production. How would you use a matte to insert an object into a background? How are mattes usually generated for television?

Working with Mattes To insert an object into a background: call the image of the object the source, and put the background into the destination. For all source pixels, if the matte is white, copy the pixel; otherwise leave the destination unchanged. To generate mattes: use smart selection tools in Photoshop or similar, which outline the object and convert the outline to a matte; or blue-screen: photograph/film the object in front of a blue background, then consider all blue pixels in the image to be background.

Compositing Compositing is the term for combining images, one over the other. It is used to put special effects into live action, or live action into special effects.

Alpha Basic idea: encode opacity information in the image by adding an extra channel, the alpha channel. For each pixel, store R, G, B and alpha: alpha = 1 means fully opaque, alpha = 0 means completely transparent. There are many interpretations of alpha: whether there is anything in the image at that point (web graphics), or transparency (real-time OpenGL). Images are now in RGBA format, typically 32 bits per pixel (8 bits for alpha). All images in the project are in this format.

Pre-Multiplied Alpha Instead of storing (R, G, B, α), store (αR, αG, αB, α). The compositing operations in the next several slides are easier with pre-multiplied alpha. To display and do color conversions, you must extract RGB by dividing out α. α = 0 is always black. There is some loss of precision as α gets small, but generally not a big problem.

Compositing Assumptions We will combine two images, f and g, to get a third, composite image. It is not necessary that one be foreground and one background; the background can remain unspecified. Both images are the same size and use the same color representation. Multiple images can be combined in stages, operating on two at a time.

Basic Compositing Operation At each pixel, combine the pixel data from f and the pixel data from g with the equation: c = F*cf + G*cg. F and G describe how much of each input image survives, cf and cg are pre-multiplied pixels, and all four channels are calculated this way. To define a compositing operation, define F and G.

Basic Compositing Operation F and G are simple functions of the alpha values, chosen independently; different choices give different operations. To code it, you can write one compositor and pass it numbers saying which functions to use: each of F and G is a constant of 0 or 1, plus an alpha value multiplied by -1, 0 or 1.

Sample Images (figure: the example images and their alpha mattes).

“Over” Operator If there’s some f, get f; otherwise get g.

“Over” Operator Computes the composite with the rule that f covers g.

“Inside” Operator Get f to the extent that g is there; otherwise nothing.

“Inside” Operator Computes the composite with the rule that only the parts of f that are inside g contribute.

“Outside” Operator Get f to the extent that g is not there; otherwise nothing.

“Outside” Operator Computes the composite with the rule that only the parts of f that are outside g contribute.

“Atop” Operator Get f to the extent that g is there; otherwise get g.

“Atop” Operator Computes the composite with the over rule, but restricted to the places where there is some g.

“Xor” Operator Get f to the extent that g is not there, and g to the extent that f is not there.

“Xor” Operator Computes the composite with the rule that f contributes where there is no g, and g contributes where there is no f.

“Clear” Operator Computes a clear composite. Note that (0, 0, 0, α &gt; 0) is a partially opaque black pixel, whereas (0, 0, 0, 0) is fully transparent and hence has no color.

“Set” Operator Computes the composite by setting it equal to f: copies f into the composite.

Compositing Operations F and G describe how much of each input image survives; cf and cg are pre-multiplied pixels, and all four channels are calculated.

Operation   F         G
Over        1         1 - αf
Inside      αg        0
Outside     1 - αg    0
Atop        αg        1 - αf
Xor         1 - αg    1 - αf
Clear       0         0
Set         1         0

Unary Operators Darken: makes an image darker (or lighter) without affecting its opacity: darken(f, φ) = (φR, φG, φB, α). Dissolve: makes an image more transparent without affecting its color: dissolve(f, δ) = (δR, δG, δB, δα) in pre-multiplied form.

“PLUS” Operator Computes the composite by simply adding f and g, with no overlap rules. Useful for defining a cross-dissolve in terms of compositing: cross-dissolve(f, g, t) = dissolve(f, 1 - t) plus dissolve(g, t).

Obtaining α Values Hand-generate (paint a grayscale image), or create automatically by segmenting an image into foreground and background. Blue-screening is the analog method, and it is remarkably complex to get right; “Lasso” is the Photoshop operation. With synthetic imagery, use a special background color that does not occur in the foreground: brightest blue or green is common.

Compositing With Depth You can store pixel “depth” instead of alpha; then compositing can truly take foreground and background into account. This is generally only possible with synthetic imagery. Image-based rendering is an area of graphics that, in part, tries to composite photographs while taking depth into account.

Painterly Filters Many methods have been proposed to make a photo look like a painting. Today we look at one: painterly rendering with brushes of multiple sizes (Hertzmann, SIGGRAPH 98). Basic ideas: build the painting one layer at a time, from the biggest brush to the smallest; at each layer, add the detail missing from the previous layer.

Algorithm 1

function paint(sourceImage, R1 ... Rn)   // take source and several brush sizes
{
    canvas := a new constant-color image
    // paint the canvas with brushes of decreasing size
    for each brush radius Ri, from largest to smallest, do
    {
        // apply Gaussian smoothing with a filter of size const * radius;
        // the brush is intended to catch features at this scale
        referenceImage := sourceImage * G(fs Ri)
        // paint a layer
        paintLayer(canvas, referenceImage, Ri)
    }
    return canvas
}

Algorithm 2

procedure paintLayer(canvas, referenceImage, R)   // add a layer of strokes
{
    S := a new set of strokes, initially empty
    D := difference(canvas, referenceImage)   // Euclidean distance at every pixel
    for x = 0 to imageWidth step grid do      // the step size grid depends on the brush radius
        for y = 0 to imageHeight step grid do
        {
            // sum the error near (x, y)
            M := the region (x - grid/2 .. x + grid/2, y - grid/2 .. y + grid/2)
            areaError := sum(D[i,j] for i,j in M) / grid^2
            if (areaError > T) then
            {
                // find the largest-error point
                (x1, y1) := arg max D[i,j] in M
                s := makeStroke(R, x1, y1, referenceImage)
                add s to S
            }
        }
    paint all strokes in S on the canvas, in random order
}

Results (figures): original; biggest brush only; medium brush added; finest brush added.

Point Style Uses round brushes. We provide a routine to “paint” round brush strokes into an image for the project.

Where to now… We are now done with images. We will spend several weeks on the mechanics of 3D graphics: coordinate systems and viewing, clipping, drawing lines and polygons, and lighting and shading. We will finish the semester with modeling and some additional topics.

Graphics Toolkits Graphics toolkits typically take care of the details of producing images from geometry. Input (via API functions): where the objects are located and what they look like; where the camera is and how it behaves; parameters for controlling the rendering. Functions (via API): perform well-defined operations based on the input environment. Output: pixel data in a framebuffer, an image in a special part of memory. The data can be put on the screen, or read back for processing (also part of the toolkit).

OpenGL OpenGL is an open-standard graphics toolkit, derived from SGI’s GL toolkit. It provides a range of functions for modeling, rendering and manipulating the framebuffer. What makes a good toolkit? Alternatives: Direct3D, Java3D (more complex and less well supported).

A Good Toolkit… Everything is a trade-off: functionality; compactness (a minimalist set of commands); orthogonality (commands do different things and can be combined in a consistent way); speed; ease of use and documentation; portability; extensibility; standards and ownership. Not an exhaustive list…