Laurent Itti: CS599 – Computational Architectures in Biological Vision, USC, 2004. Lecture 6: Low-Level Processing and Feature Detection.


Slide 1: Computational Architectures in Biological Vision (CS599, USC). Lecture 6: Low-Level Processing and Feature Detection. Reading assignments: Chapters 7 & 8.

Slide 2: Low-Level Processing. Remember: vision as a change in representation. At the low level, such a change can be accomplished by fairly streamlined mathematical transforms: the Fourier transform and the wavelet transform. These transforms yield a simpler but more organized image of the input. Additional organization is obtained through multiscale representations.

Slide 3: Biological Low-Level Processing. Edge detection and wavelet transforms in V1: hypercolumns and jets. But:
- processing appears highly non-linear, so convolution of the input with wavelets only approximates real responses;
- neuronal responses are influenced by context, i.e., neuronal activity at one location depends on activity at possibly distant locations;
- responses at one level of processing (e.g., V1) also depend on feedback from higher levels, and on other modulatory effects such as attention, training, etc.

Slide 4: Fourier Transform.

Slide 5: Problem. The Fourier transform does not intuitively encode non-stationary (i.e., time-varying) signals. One solution is to use the short-term Fourier transform, repeated over successive time slices. Another is to use a wavelet transform.
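The short-term Fourier transform mentioned on the slide can be sketched in a few lines: slice the signal into overlapping windows and take the FFT of each slice. The window length and hop size below are illustrative choices, not values from the lecture.

```python
import numpy as np

def stft(x, win=64, hop=32):
    """Return a (num_frames, num_bins) array of windowed FFT magnitudes."""
    w = np.hanning(win)                       # taper to reduce spectral leakage
    frames = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win] * w
        frames.append(np.abs(np.fft.rfft(seg)))
    return np.array(frames)

# A non-stationary signal: low frequency first, then high frequency.
t = np.arange(512)
x = np.concatenate([np.sin(2 * np.pi * 0.02 * t[:256]),
                    np.sin(2 * np.pi * 0.2 * t[256:])])
S = stft(x)
# Early frames peak in a low FFT bin, late frames in a high bin,
# which a single global Fourier transform would not reveal.
```

A global FFT of `x` would show both frequencies but not *when* each occurs; the frame index of `S` restores that time information, at the cost of fixed time-frequency resolution, which is what motivates the wavelet transform on the next slide.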

Slide 6: Wavelet Transform. The mother wavelet ψ defines the shape and size of the window; it is convolved with the signal x(t) after translation (τ) and scaling (s); the results are stored in an array indexed by translation and scaling.
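A minimal sketch of the scheme just described, using a Mexican-hat mother wavelet (the second derivative of a Gaussian) as an illustrative choice; the result is an array indexed by scale and translation, as the slide states.

```python
import numpy as np

def mexican_hat(n, s):
    """Mexican-hat mother wavelet sampled at n points, scale s."""
    t = np.arange(n) - n // 2
    u = t / s
    return (1 - u**2) * np.exp(-u**2 / 2) / np.sqrt(s)

def cwt(x, scales, n=65):
    """Wavelet transform: rows indexed by scale, columns by translation."""
    out = np.zeros((len(scales), len(x)))
    for i, s in enumerate(scales):
        w = mexican_hat(n, s)
        out[i] = np.convolve(x, w, mode='same')   # slide over all translations
    return out

x = np.zeros(256)
x[128] = 1.0                     # an impulse, to visualize the wavelets
W = cwt(x, scales=[2, 4, 8])
# Each row of W is the wavelet at that scale, centered on the impulse.
```

Convolving with an impulse just reproduces each wavelet, which makes the translation/scale indexing easy to check before applying the transform to real signals.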

Slide 7: Example: a small-scale wavelet is applied…

Slide 8: …then a larger scale…

Slide 9: …and an even larger scale…

Slide 10: The result is indexed by translation & scale.

Slide 11: Wavelet Transform & Basis Decomposition. We define the inner product between two functions f and g as ⟨f, g⟩ = ∫ f(t) g*(t) dt. The continuous wavelet transform, W(τ, s) = (1/√|s|) ∫ x(t) ψ*((t − τ)/s) dt, can then be thought of as taking the inner product between the signal and all of the different wavelets (parameterized by translation τ and scale s).

Slide 12: Orthonormality. Two functions are orthogonal iff their inner product is zero, and a set of functions {ψ_i} is orthonormal iff ⟨ψ_i, ψ_j⟩ = δ_ij, with δ_ij = 1 when i = j and δ_ij = 0 otherwise.

Slide 13: Basis. If the collection of wavelets forms an orthonormal basis, then we can compute the coefficients c_i = ⟨x, ψ_i⟩ and fully reconstruct the signal from those coefficients (and knowledge of the wavelet functions) alone: x = Σ_i c_i ψ_i. Thus the transformation is reversible.
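The orthonormality and perfect-reconstruction claims can be verified concretely with the smallest non-trivial example, a 4-sample Haar wavelet basis (a standard orthonormal wavelet basis, used here purely for illustration):

```python
import numpy as np

s2 = np.sqrt(2)
# Rows are the four Haar basis functions, each normalized to unit length.
basis = np.array([
    [ 1,  1,  1,  1],      # scaling (DC) function
    [ 1,  1, -1, -1],      # coarsest wavelet
    [s2, -s2,  0,  0],     # fine wavelet, left half
    [ 0,  0, s2, -s2],     # fine wavelet, right half
]) / 2.0

x = np.array([4.0, 2.0, 5.0, 5.0])
coeffs = basis @ x          # coefficients c_i = <x, psi_i>
x_rec = basis.T @ coeffs    # reconstruction x = sum_i c_i * psi_i
```

Because `basis @ basis.T` is the identity (orthonormality), the reconstruction recovers `x` exactly, which is the reversibility stated on the slide.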

Slide 14: Edge Detection. Very important to both biological and computer vision: edges are easy and cheap (computationally) to compute, and they provide strong visual cues to help recognition. Problem: edge detection is sensitive to image noise. Why? Because edge detection is a high-pass filtering process, and noise typically has strong high-frequency components (e.g., speckle noise).
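The noise-sensitivity argument can be made quantitative in one small experiment: differencing (a high-pass operation) barely responds to a smooth, low-frequency signal but responds strongly to sample-to-sample noise, so the signal-to-noise ratio collapses. The signal and noise levels below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * t)              # smooth, low-frequency content
noise = 0.1 * rng.standard_normal(1000)     # small but high-frequency noise

snr_before = signal.std() / noise.std()
# Differencing: per-sample change of the smooth signal is tiny,
# while the per-sample change of white noise is not.
snr_after = np.diff(signal).std() / np.diff(noise).std()
```

Here `snr_after` is orders of magnitude below `snr_before`, which is exactly why the next slide blurs before differentiating.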

Slide 15: Laplacian Edge Detection. Edges are defined as zero-crossings of the second derivative (the Laplacian, in more than one dimension) of the signal. This is very sensitive to image noise, so typically we first blur the image to reduce noise, then use a Laplacian-of-Gaussian filter to extract edges. (Figure: a smoothed signal and its first derivative, the gradient.)
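A 1-D sketch of the blur-then-second-derivative pipeline just described: smooth a step edge with a Gaussian, take the discrete second derivative, and locate its zero-crossing. The kernel width and signal are illustrative choices.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    t = np.arange(-radius, radius + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    return g / g.sum()                       # normalize to unit sum

x = np.zeros(200)
x[100:] = 1.0                                # a step edge at index 100
smoothed = np.convolve(x, gaussian_kernel(3.0, 9), mode='same')
d2 = np.diff(smoothed, 2)                    # discrete second derivative

# The zero-crossing of d2 (positive-to-negative sign change) marks the edge;
# we scan an interior window to avoid convolution boundary effects.
signs = np.sign(d2)
interior = signs[20:180]
crossings = 20 + np.where(interior[:-1] * interior[1:] < 0)[0]
```

The single sign change lands at the inflection point of the smoothed step, i.e., right at the edge, which is the zero-crossing definition from the slide.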

Slide 16: Derivatives in 2D. Gradient: ∇I = (∂I/∂x, ∂I/∂y). For discrete images, the partial derivatives are approximated by finite differences. Magnitude: |∇I| = √(I_x² + I_y²); direction: θ = atan2(I_y, I_x).
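The discrete approximation from the slide, sketched with central differences on a synthetic ramp image (note that `np.roll` wraps at the borders, so only interior pixels are meaningful here):

```python
import numpy as np

img = np.tile(np.arange(8.0), (8, 1))    # intensity ramp increasing along x

# Central differences approximate dI/dx and dI/dy.
gx = (np.roll(img, -1, axis=1) - np.roll(img, 1, axis=1)) / 2.0
gy = (np.roll(img, -1, axis=0) - np.roll(img, 1, axis=0)) / 2.0

magnitude = np.hypot(gx, gy)             # |grad I| = sqrt(Ix^2 + Iy^2)
direction = np.arctan2(gy, gx)           # radians; 0 means pointing along +x
```

For this ramp the interior gradient is exactly (1, 0): unit magnitude, direction 0, i.e., the gradient points along the direction of increasing intensity.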

Slide 17: Laplacian-of-Gaussian. Laplacian: ∇²I = ∂²I/∂x² + ∂²I/∂y². Blurring with a Gaussian and then applying the Laplacian is equivalent to filtering once with a single Laplacian-of-Gaussian kernel.
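A sketch of building the Laplacian-of-Gaussian kernel directly from the analytic form ∇²G ∝ (r² − 2σ²) exp(−r² / 2σ²) (normalization omitted; σ and radius are illustrative). A key sanity check: the kernel sums to zero, so it gives no response on constant image regions and responds only to intensity changes.

```python
import numpy as np

def log_kernel(sigma, radius):
    """Discrete Laplacian-of-Gaussian kernel, up to a scale factor."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r2 = x**2 + y**2
    k = (r2 - 2 * sigma**2) * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()          # enforce exact zero sum despite truncation

k = log_kernel(sigma=1.4, radius=4)
# Center is negative, ringed by positive values: the classic
# "Mexican hat" profile, matching center-surround receptive fields.
```

The zero-sum property is what makes zero-crossings of the filtered image well-defined edge locations, per the previous slide.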

Slide 18: (figure only; no transcript text.)

Slide 19: Another Edge Detection Scheme. Maxima of the modulus of the gradient in the gradient direction (Canny-Deriche):
- use an optimal 1st-derivative filter to estimate edges;
- estimate the noise level from the RMS of the filter's 2nd-derivative responses;
- determine two thresholds, T_high and T_low, from the noise estimate;
- edge points are those which are local maxima in the gradient direction;
- a hysteresis process is employed to complete edges: an edge starts where the filter response > T_high, but may continue as long as the response > T_low.
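The hysteresis step in the list above can be sketched in 1-D: a chain of edge points is kept only if it touches at least one response above T_high, and extends through neighbors above T_low. Two sweeps (forward and backward) handle chains whose strong point lies at either end; the thresholds here are illustrative, not Canny's noise-derived ones.

```python
def hysteresis(response, t_low, t_high):
    """Boolean edge mask for a 1-D response profile via double thresholding."""
    n = len(response)
    keep = [False] * n
    for sweep in (range(n), range(n - 1, -1, -1)):   # forward, then backward
        active = False
        for i in sweep:
            if response[i] > t_high:
                active = True        # chain anchored by a strong response
            elif response[i] <= t_low:
                active = False       # chain dies below the low threshold
            if active and response[i] > t_low:
                keep[i] = True
    return keep

# The 0.5 at index 1 connects to the strong 0.9, so it is kept;
# the isolated 0.5 at index 6 has no strong neighbor, so it is dropped.
mask = hysteresis([0.1, 0.5, 0.9, 0.6, 0.4, 0.1, 0.5, 0.2],
                  t_low=0.3, t_high=0.8)
```

This shows why hysteresis beats a single threshold: it keeps weak responses that continue a genuine edge while rejecting equally weak but isolated noise.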

Slides 20–32: (figures only; no transcript text.)

Slide 33: Biological Feature Extraction. Center-surround receptive fields vs. the Laplacian of Gaussian.

Slide 34: Using Pyramids to Compute Biological Features.
- Build a Gaussian pyramid.
- Take the difference between pixels at the same image locations but different scales.
- Result: difference-of-Gaussians receptive fields.
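The pyramid recipe above can be sketched in 1-D: repeatedly blur and subsample to build the pyramid, upsample a coarse level back to full resolution, and subtract. The binomial blur and crude nearest-neighbor upsampling are illustrative simplifications.

```python
import numpy as np

def blur(x):
    """Simple [1 2 1]/4 binomial blur with edge padding."""
    p = np.pad(x, 1, mode='edge')
    return (p[:-2] + 2 * p[1:-1] + p[2:]) / 4.0

def pyramid(x, levels):
    """Gaussian pyramid: each level is a blurred, half-resolution copy."""
    out = [x]
    for _ in range(levels - 1):
        out.append(blur(out[-1])[::2])      # blur, then subsample by 2
    return out

x = np.zeros(64)
x[28:36] = 1.0                              # a bright bar on a dark background
levels = pyramid(x, 3)

# Difference at the same locations across scales: fine minus coarse.
coarse_up = np.repeat(levels[2], 4)[:64]    # crude upsample to full size
center_surround = levels[0] - coarse_up     # difference-of-Gaussians-like
```

The result is positive inside the bar and negative just outside it: an excitatory center with an inhibitory surround, i.e., the difference-of-Gaussians receptive field the slide names.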

Slide 35: Illusory Contours. Some mechanism is responsible for our illusory perception of contours where there are none…

Slide 36: Gabor "jets". Similar to a biological hypercolumn: a collection of Gabor filters with various orientations and scales, but all centered at one visual location.
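A sketch of such a jet: a bank of Gabor filters (Gaussian-windowed sinusoids) at several orientations and scales, all applied at one location. Odd-phase (sine-carrier) filters are used so the bank responds to step edges; all parameters are illustrative, not values from the lecture.

```python
import numpy as np

def gabor(size, sigma, wavelength, theta):
    """Odd-phase Gabor filter: Gaussian envelope times an oriented sine."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate the carrier axis
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.sin(2 * np.pi * xr / wavelength)

def jet(patch, thetas, scales):
    """Responses of every filter in the bank, all centered on the patch."""
    return [np.sum(patch * gabor(patch.shape[0], s, 2 * s, th))
            for th in thetas for s in scales]

# A vertical edge: intensity changes along x, so the theta = 0 filters
# (carrier along x) dominate, and the theta = pi/2 filters respond ~0.
patch = np.zeros((17, 17))
patch[:, 9:] = 1.0
responses = jet(patch, thetas=[0, np.pi / 4, np.pi / 2], scales=[2.0, 4.0])
```

Reading off which filter in the jet responds most is a crude orientation estimate at that location, loosely mirroring how a hypercolumn tiles orientation and scale at one retinotopic position.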

Slide 37: Non-Classical Surround. Sillito et al., Nature, 1995: the response of neurons is modulated by stimuli outside the neuron's receptive field (RF). Method:
- map the RF location and size;
- check that the neuron does not respond to stimuli outside the mapped RF;
- present a stimulus in the RF;
- compare this baseline response to the response obtained when stimuli are also present outside the RF.
Result:
- stimuli outside the RF that are similar to the one inside the RF inhibit the neuron;
- stimuli outside the RF that are very dissimilar to the one inside the RF do not affect the neuron (or enhance it very slightly).

Slide 38: Non-classical surround inhibition.

Slide 39: Example.

Slide 40: Non-Classical Surround & Edge Detection. Holt & Mel, 2000.

Slide 41: Long-range Excitation.

Slide 42: Long-range. Gilbert et al.: a stimulus outside the RF enhances the neuron's response if it is placed and oriented such as to form a contour.

Slide 43: Modeling long-range connections.

Slide 44: Contour completion.

Slide 45: Grouping and Object Segmentation. We can do much more than simply extract and follow contours…