Announcements Big mistake on hint in problem 1 (I’m very sorry).

Announcements On 2e, use I = [zeros(1,50), 9*ones(1,10), zeros(1,10), 3*ones(1,40), zeros(1,50)]. Result will be more interesting. (If you used I in 2c already, that’s ok).

Announcements Best not to use the built-in functions conv or fspecial. They don't give you the easy control needed for the assignment.
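Since the built-ins are discouraged, the discrete convolution can be written out directly. A minimal sketch (the slides use Matlab; this is an equivalent NumPy version, with an illustrative pair of signals):

```python
import numpy as np

def conv1d(f, g):
    """Full 1-D discrete convolution written out by hand
    (no built-in conv): h[n] = sum_k f[k] * g[n - k]."""
    f, g = np.asarray(f, float), np.asarray(g, float)
    n_out = len(f) + len(g) - 1
    h = np.zeros(n_out)
    for n in range(n_out):
        for k in range(len(f)):
            if 0 <= n - k < len(g):   # only overlap contributes
                h[n] += f[k] * g[n - k]
    return h

# Example signals (made up); agrees with numpy.convolve in 'full' mode.
h = conv1d([1, 2, 3], [0, 1, 0.5])
```

Writing the double loop yourself makes it easy to change boundary handling or normalization, which is the control the built-ins hide.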

Problem Set 2: Convolution [figure: signals f and g and their convolution h, plotted against x]

PS 2: Discrete Filter

From Tuesday

Markov Model Captures local dependencies. –Each pixel depends on its neighborhood. Example: a 1D first-order model. By the chain rule, P(p1, p2, …, pn) = P(p1)*P(p2|p1)*P(p3|p1,p2)*…; the first-order assumption reduces this to P(p1)*P(p2|p1)*P(p3|p2)*P(p4|p3)*…

Example: 1st-Order Markov Model Each pixel is like its neighbor to the left + noise, with some probability. (Matlab demo.) These models capture a much wider range of phenomena.
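A first-order model like this is easy to sample. A NumPy sketch of the demo idea (the noise level and seed are illustrative choices, not values from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_markov_row(n, sigma=0.1):
    """Sample a 1-D first-order Markov signal:
    each pixel is its left neighbor plus Gaussian noise."""
    x = np.zeros(n)
    x[0] = rng.normal()                              # draw p1 from P(p1)
    for i in range(1, n):
        x[i] = x[i - 1] + rng.normal(scale=sigma)    # P(p_i | p_{i-1})
    return x

row = synthesize_markov_row(100)
```

Because each sample depends only on its left neighbor, adjacent values are strongly correlated while distant ones can drift far apart, exactly the local dependency the model is meant to capture.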

There are dependencies in Filter Outputs At an edge: –If a filter responds at one scale, it often responds at other scales too. –If a filter responds at one orientation, it often doesn't at the orthogonal orientation. Synthesis using wavelets and a Markov model for the dependencies: –De Bonet and Viola –Portilla and Simoncelli

We can do this without filters Each pixel depends on neighbors. 1.As you synthesize, look at neighbors. 2.Look for similar neighborhood in sample texture. 3.Copy pixel from that neighborhood. 4.Continue.
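The four steps above can be sketched in 1-D. This is a simplified, causal variant of that non-parametric synthesis idea (window size and sample texture are illustrative assumptions):

```python
import numpy as np

def synth_from_sample(sample, n_out, w=3):
    """Non-parametric texture synthesis, 1-D sketch:
    for each new value, find where the last w synthesized values
    best match a window in the sample texture, then copy the
    sample value that follows that window."""
    sample = np.asarray(sample, float)
    out = list(sample[:w])                      # seed with part of the sample
    while len(out) < n_out:
        context = np.array(out[-w:])            # 1. look at neighbors
        best_i, best_d = 0, np.inf
        for i in range(len(sample) - w):        # 2. find similar neighborhood
            d = np.sum((sample[i:i + w] - context) ** 2)
            if d < best_d:
                best_i, best_d = i, d
        out.append(sample[best_i + w])          # 3. copy pixel from it
    return np.array(out[:n_out])                # 4. continue until done

pattern = np.array([0, 0, 1, 1] * 5)            # periodic sample texture
result = synth_from_sample(pattern, 12)
```

On a periodic sample the method reproduces the period exactly; on a real texture image the same matching step (over 2-D neighborhoods) produces structure that resembles the sample without literally repeating it.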

This is like copying, but not just repetition [figures: photo; pattern repeated]

With Blocks

Conclusions Model texture as generated from random process. Discriminate by seeing whether statistics of two processes seem the same. Synthesize by generating image with same statistics.

To Think About 3D effects –Shape: Tiger’s appearance depends on its shape. –Lighting: Bark looks different with light angle Given pictures of many chairs, can we generate a new chair?

Lightness Digression from boundary detection. Vision is about recovery of properties of scenes; lightness is about recovering material properties. –Simplest is how light or dark the material is (i.e., its reflectance). –We'll see how boundaries are critical in solving other vision problems.

Basic problem of lightness Luminance (amount of light striking the eye) depends on illuminance (amount of light striking the surface) as well as reflectance.

Basic problem of lightness [figure: patches A and B] Is B darker than A because it reflects a smaller proportion of light, or because it's further from the light?

Planar, Lambertian material. [figure: surface with normal n and light direction] L = r*cos(θ)*e, where r is reflectance (aka albedo), θ is the angle between the light and the surface normal n, and e is illuminance (strength of the light). If we combine θ and e at a point into E(x,y), then: L(x,y) = R(x,y)*E(x,y)
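The product model is easy to evaluate numerically. A small NumPy sketch (the reflectance and illuminance maps are made-up examples):

```python
import numpy as np

# Reflectance R(x,y): two materials, albedo 0.2 on the left, 0.8 on the right
R = np.where(np.arange(100) < 50, 0.2, 0.8)[None, :] * np.ones((100, 1))

# Illuminance E(x,y): smooth horizontal falloff
# (this one factor absorbs both cos(theta) and e)
E = np.linspace(1.0, 0.5, 100)[None, :] * np.ones((100, 1))

# Observed luminance: pointwise product
L = R * E

# We measure L only; many different (R, E) pairs produce the same L,
# which is why recovering R needs extra assumptions.
```

The sharp vertical boundary in R survives in L, while E only tilts the brightness slowly; that contrast is what the upcoming assumptions exploit.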

Can think of E as appearance of white paper with given illuminance. R is appearance of planar object under constant lighting. L is what we see. Problem: We measure L, we want to recover R. How is this possible? Answer: We must make additional assumptions.

Simultaneous contrast effect

Illusions Seems like visual system is making a mistake. But, perhaps visual system is making assumptions to solve underconstrained problem; illusions are artificial stimuli that reveal these assumptions.

Assumptions Light is slowly varying –This is reasonable for planar world: nearby image points come from nearby scene points with same surface normal. Within an object reflectance is constant or slowly varying. Between objects, reflectance varies suddenly.

This is sometimes called the Mondrian world.

L(x,y) = R(x,y)*E(x,y) Formally, we assume that illuminance, E, is low frequency.

L(x,y) = R(x,y)*E(x,y) [figure: reflectance image * illuminance image = observed image] Smooth variations in the image are due to lighting; sharp ones are due to reflectance.

So, we remove slow variations from the image. There are many approaches; one is: log(L(x,y)) = log(R(x,y)) + log(E(x,y)). Hi-pass filter this (say, with a derivative). Why is the derivative a hi-pass filter? d sin(nx)/dx = n*cos(nx), so frequency n is amplified by a factor of n. Threshold to remove the small, low-frequency derivatives. Then invert the process: integrate and exponentiate.
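In 1-D the whole pipeline (log, differentiate, threshold, integrate, exponentiate) is a few lines. A sketch with a made-up step-edge reflectance under smooth lighting (the threshold value and the specific R and E are illustrative choices):

```python
import numpy as np

x = np.arange(200)
R = np.where(x < 100, 0.3, 0.9)          # piecewise-constant reflectance
E = 1.0 - 0.003 * x                      # slowly varying illuminance
L = R * E                                # observed luminance

logL = np.log(L)                         # product becomes a sum
d = np.diff(logL)                        # derivative = hi-pass in log domain
d[np.abs(d) < 0.01] = 0                  # threshold kills the slow lighting term
logR_rec = np.concatenate([[0.0], np.cumsum(d)])   # integrate back
R_rec = np.exp(logR_rec)                 # recovered reflectance, up to scale

# Overall scale is lost (derivative, then integral), but ratios survive:
ratio = R_rec[150] / R_rec[50]           # should be close to 0.9 / 0.3 = 3
```

Note the recovered signal is flat except at the reflectance step, and the ratio across the step is preserved even though the absolute values are not.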

[figures: reflectances; reflectances * lighting; restored reflectances] (Note that the overall scale of the reflectances is lost, because we take a derivative and then integrate.)

These operations are easy in 1D, tricky in 2D. For example, in which direction do you integrate? Many techniques exist.

These approaches fail on 3D objects, where illuminance can change quickly as well.

Our perceptions are influenced by 3D cues.

To solve this, we need to compute reflectance within the right region. This means that lightness depends on surface perception, i.e., a different kind of boundary detection.