Biologically Inspired Algorithms for Computer Vision: Motion Estimation from Steerable Wavelet Construction. D. Conte, J. Ng, E. Grisan, A. Ruggeri.


D. Conte (a), J. Ng (b), E. Grisan (a), A. Ruggeri (a)
(a) Department of Information Engineering, University of Padova, Padova, Italy
(b) Department of Bioengineering, Imperial College London, London, UK

Introduction

Since structural information is mostly carried by phase, reconstructing an image from its phase spectrum gives better results than reconstructing it from its magnitude spectrum [4]. Unlike traditional Fourier analysis, wavelets provide image features that are localized both in space and in frequency, decomposing the image into local amplitude and local phase information within a narrow frequency band. Bharath and Ng [1] designed a complex orientation-steerable framework and applied it to image denoising. Here, we use this framework for motion estimation: given the steered filter outputs, we compute the optical flow from a measure of local phase difference, and from it an estimate of the displacement field between two images.

Correspondence

Davide Conte, University of Padova and University of Verona, Italy. Jeffrey Ng, Imperial College London, UK. The authors would like to thank Dr. Anil Bharath, head of the Bioengineering Vision Research Group at Imperial College London.

References

[1] A. A. Bharath and J. Ng. A steerable complex wavelet construction and its application to image denoising. IEEE Transactions on Image Processing, 14(7): 948–959, 2005.
[2] P. Dayan and L. F. Abbott. Theoretical Neuroscience. MIT Press, 2001.
[3] M. Hubener et al. Spatial relationships among three columnar systems in cat area 17. J. of Neuroscience, 17(23), 1997.
[4] A. V. Oppenheim and J. S. Lim. The importance of phase in signals. Proceedings of the IEEE, 69(5): 529–541, 1981.
[5] M. Felsberg. Optical flow estimation from monogenic phase. 1st International Workshop on Complex Motion, LNCS 3417: 1–13.
[6] E. P. Simoncelli, W. T. Freeman, E. H. Adelson, and D. J. Heeger. Shiftable multiscale transforms. IEEE Transactions on Information Theory, 38(2): 587–607, 1992.
[7] D. Conte. Biologically Inspired Algorithms for Computer Vision: Phase-Based Motion Estimation. M.Sc. Thesis, University of Padova, April.
[8] S. S. Beauchemin and J. L. Barron. The computation of optical flow. ACM Computing Surveys, 27(3): 433–467, 1995.
[9] H. Knutsson and M. Andersson. Morphons: Paint on priors and elastic canvas for segmentation and registration. SCIA 2005, LNCS 3540: 292–301.

Materials and Methods

Spatial filter kernels were built along 4 different orientations for each frequency band. Each filter provides a complex response, and the steerability property [6] allows us to compute a single filter response along the estimated dominant orientation θ_x. If the signal is locally 1-D around the point x, we can define a local image model as

    I(x') ≈ I_0 + A(x) cos(φ(x')),

where the local amplitude A(x) is assumed to be constant around x and I_0 represents the mean intensity value of the image. The local phase at point x is then extracted as the argument of the complex steered filter response:

    φ(x) = arg v_θx(x), with φ(x) ∈ (−π, π].

Orientation and phase information can be combined into the phase vector

    r(x) = φ(x) n_x,

where n_x = (cos θ_x, sin θ_x)^T is the unit vector along the dominant orientation. Considering two frames of a video, we can assume that the new frame has been obtained from the first by a local displacement d(x), and that the same relation holds for the phase vectors:

    r_2(x + d(x)) = r_1(x).

The restriction to a narrow band of frequency components allows the phase vector to be approximated by a Taylor series around x [5],[7]. If the displacement is sufficiently small with respect to the considered level of resolution of the image, remembering that the local frequency ω_x is defined as the derivative of the local phase along the dominant orientation,

    ω_x = ∂φ(x)/∂n_x,

and that n_x n_x^T projects the displacement vector d along the dominant orientation, we obtain

    r_2(x) − r_1(x) ≈ −ω_x n_x n_x^T d(x).

The concept of displacement from phase is not new, and the method has been applied successfully in biomedical image processing [9].
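As a rough, self-contained illustration of the phase-difference principle, the sketch below estimates a 1-D displacement from the wrapped phase difference of a complex (in-quadrature) Gabor response. This is not the authors' 2-D steerable implementation; the signal and filter parameters are illustrative assumptions.

```python
import numpy as np

# A complex band-pass filter gives a local phase; the wrapped phase
# difference between two shifted signals, divided by the filter's
# centre frequency, estimates the displacement.

def complex_gabor(n, w0, sigma):
    """Complex Gabor kernel: Gaussian envelope times exp(i*w0*t)."""
    t = np.arange(n) - n // 2
    return np.exp(-t**2 / (2 * sigma**2)) * np.exp(1j * w0 * t)

def local_phase(signal, kernel):
    """Local phase as the argument of the complex filter response."""
    return np.angle(np.convolve(signal, kernel, mode='same'))

x = np.arange(256)
d_true = 2.0
f1 = np.cos(2 * np.pi * x / 16)             # narrow-band test signal
f2 = np.cos(2 * np.pi * (x - d_true) / 16)  # copy shifted by d_true

w0 = 2 * np.pi / 16                         # filter tuned to the signal band
k = complex_gabor(33, w0, 6.0)
phi1, phi2 = local_phase(f1, k), local_phase(f2, k)

# wrap the phase difference into (-pi, pi], then convert to displacement
dphi = np.angle(np.exp(1j * (phi1 - phi2)))
d_est = np.median(dphi[64:192]) / w0        # median over the interior
```

For this shift of 2 samples, `d_est` recovers the displacement with sub-pixel accuracy. The estimate is only unambiguous while |ω_x · d| stays below π, which is one motivation for integrating estimates across scales.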
The relation above can be seen as the phase analogue of the Optical Flow Constraint Equation [8], and indeed it suffers from the same ill-posedness: only the projection of d along n_x can be determined from the difference of the phase vectors (the local 1-dimensionality of the signal leads to the so-called aperture problem [8]). To overcome this limitation and estimate the true displacement vector, we followed a weighted least squares (WLS) approach, integrating information over a small region Ω around x.

Results and Discussion

Motion estimation from steerable filters differs from traditional phase-based methods in that the phase difference is computed along the dominant orientation only, instead of interpolating phase-difference values from different directions. The method has been successfully applied to a synthetic pattern and to a registration problem on IR images of retinal blood vessels, but a thorough comparison with other motion estimation methods is still needed and, before that, the precision of the estimates must be improved.

Orientation-steerable wavelet-based filters allow a 2-D generalization of the concept of analytic signal, defined in a 1-D context, by means of oriented in-quadrature filters. Moreover, the filters emulate the properties of receptive fields in biological vision: the computational model derived from the properties of neurons in the primary visual cortex [2] inspired the design of a system that decomposes image information at different scales (i.e. different spatial frequencies) and orientations, with different phase-symmetry behaviour.

Figure: Symmetric and antisymmetric oriented in-quadrature filters.
Figure: V1 neuron selectivity to orientation (a) and spatial frequency (b) of visual stimuli [3]. In (b), white areas mark neurons sensitive to high spatial frequencies, gray areas neurons sensitive to low spatial frequencies.
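One way to realize the WLS combination of per-orientation phase constraints is sketched below. The function name `wls_displacement`, the four orientations, and the equal weights are made-up illustrative values, not the authors' filter bank: each orientation contributes a constraint ω_i · (n_i · d) = Δφ_i, and the 2×2 normal equations recover the full displacement that no single orientation determines.

```python
import numpy as np

def wls_displacement(thetas, omegas, dphis, weights=None):
    """Combine constraints omega_i * (n_i . d) = dphi_i from several
    orientations into one 2-D displacement via weighted least squares."""
    thetas = np.asarray(thetas, float)
    omegas = np.asarray(omegas, float)
    dphis = np.asarray(dphis, float)
    w = np.ones_like(omegas) if weights is None else np.asarray(weights, float)
    N = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)   # unit normals n_i
    # normal equations: A d = b with A = sum w_i w^2_i n_i n_i^T
    A = ((w * omegas**2)[:, None, None] * (N[:, :, None] * N[:, None, :])).sum(axis=0)
    b = ((w * omegas * dphis)[:, None] * N).sum(axis=0)
    return np.linalg.solve(A, b)

# synthetic check: exact constraints from 4 orientations for a known d
d_true = np.array([1.5, -0.5])
thetas = np.deg2rad([0, 45, 90, 135])
omegas = np.array([0.4, 0.4, 0.4, 0.4])
N = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
dphis = omegas * (N @ d_true)
d_est = wls_displacement(thetas, omegas, dphis)
```

With noise-free constraints the system is solved exactly; in practice the weights would down-weight low-amplitude (unreliable) filter responses.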
Finally, a multi-resolution approach can be used to integrate displacement estimates from different scales.

Figure: Optical flow estimation on a synthetic pattern, with a plot of the absolute error for constant motion. No smoothing has been applied after the WLS, which shows that the approach is able to handle the aperture problem.
Figure: (a) Original first frame, an IR image of retinal blood vessels; (b) normalized subtraction of the two frames (gray means zero); (c) normalized subtraction after using the estimated displacement field to warp the first frame.
Figure: Local phase measured along the dominant orientation, with the signal represented as a sinusoidal function given by the local image model.
Figure: Illustration of the aperture problem.
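A possible 1-D stand-in for this coarse-to-fine integration (illustrative band choices, not the authors' pyramid): a coarse frequency band yields an unambiguous but rough estimate, which is then refined by unwrapping the finer band's phase difference around it.

```python
import numpy as np

def gabor(n, w0, sigma):
    """Complex Gabor kernel: Gaussian envelope times exp(i*w0*t)."""
    t = np.arange(n) - n // 2
    return np.exp(-t**2 / (2 * sigma**2)) * np.exp(1j * w0 * t)

def phase_diff(f1, f2, w0):
    """Median wrapped phase difference in the band around w0."""
    sigma = 2 * np.pi / w0                  # kernel width tied to wavelength
    n = int(8 * sigma) | 1                  # odd kernel length
    k = gabor(n, w0, sigma)
    r1 = np.convolve(f1, k, mode='same')
    r2 = np.convolve(f2, k, mode='same')
    dphi = np.angle(r1 * np.conj(r2))       # wrapped to (-pi, pi]
    m = len(f1) // 4                        # discard border-affected samples
    return np.median(dphi[m:-m])

x = np.arange(1024, dtype=float)
d_true = 10.0                               # too large for the fine band alone
f1 = np.cos(2 * np.pi * x / 16) + 0.5 * np.cos(2 * np.pi * x / 64)
f2 = np.cos(2 * np.pi * (x - d_true) / 16) + 0.5 * np.cos(2 * np.pi * (x - d_true) / 64)

w_coarse, w_fine = 2 * np.pi / 64, 2 * np.pi / 16
d = phase_diff(f1, f2, w_coarse) / w_coarse        # coarse: unambiguous
resid = phase_diff(f1, f2, w_fine) - w_fine * d    # fine-scale residual
d += np.angle(np.exp(1j * resid)) / w_fine         # unwrap around coarse guess
```

Here the fine band alone would wrap (ω_fine · d > π), but anchoring it to the coarse estimate recovers the 10-sample shift; a 2-D implementation would do the same across pyramid levels, warping the first frame between scales.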