Motion Detection Using MATLAB
Laura DeMar, Jin Han
Advisor: Professor Rudko

Project Goals
- Model an animal's visual perception of movement in natural scenes
- Use an elementary motion detector of the correlation type
- Study the animal's response to an idealized input stimulus
- Provide a model that gives insight into an insect's reaction to prey and food, and its non-reaction to background motion

Introduction
- Bernhard Hassenstein and Werner Reichardt carried out a series of behavioral experiments on motion vision in insects (wasps, bees, and flies)
- Their work led to the development of the correlation-type motion detector, also known as the Reichardt detector

Correlation Model
- Two mirror-symmetric subunits; each subunit contains a delay and a multiplication
- Image frames: the object moves by n0 pixels from frame to frame
- Imprint: a copy of the object delayed by D
- Output: the multiplication of the object with its imprint; the response is maximal when D = n0 (a minimal sketch of the detector follows below)
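Below is a minimal MATLAB sketch of such a correlator, assuming a one-dimensional luminance signal sampled along a single image row, with D interpreted as the spatial separation (in pixels) between the two inputs and the delay realized as a first-order lowpass with time constant tau. The function name and all parameter conventions are illustrative, not taken from the original project code.

```matlab
% Sketch of a correlation-type (Reichardt) detector along one image row.
% The signal layout, D (input separation in pixels), tau, and dt are
% assumptions for illustration only.
function r = reichardt_response(s, D, tau, dt)
% s   : nFrames-by-nPix matrix of luminance values (one row per frame)
% D   : correlation distance between the two inputs, in pixels
% tau : time constant of the first-order lowpass ("delay") stage
% dt  : time between frames (same units as tau)
    [nFrames, nPix] = size(s);
    a = zeros(1, nPix);                 % lowpass-filtered (delayed) signal
    alpha = dt / (tau + dt);            % discrete first-order lowpass coefficient
    r = zeros(nFrames, 1);
    for k = 1:nFrames
        a = a + alpha * (s(k, :) - a);            % delayed copy of the input
        sub1 = a(1:nPix-D) .* s(k, 1+D:nPix);     % delayed left x undelayed right
        sub2 = a(1+D:nPix) .* s(k, 1:nPix-D);     % delayed right x undelayed left
        r(k) = sum(sub1 - sub2);        % opponent output of the mirror subunits
    end
end
```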

Spatial Filter
- Symmetric Gaussian filter (sketched below)
- Edge enhancement
- Figures: the spatial filter mask; the spatially filtered object of size L = 20 (n0 = 1, D = 6, tau = 1.5)
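As a sketch of the spatial prefiltering stage, the snippet below builds a small symmetric Gaussian mask and convolves it with a toy frame; the mask width, sigma, and frame contents are assumed values, and the edge-enhancement step mentioned above is not reproduced here.

```matlab
% Sketch of the symmetric Gaussian spatial filter applied to one frame.
% sigma and the 9-tap mask size are assumed values.
sigma = 1.5;
x = -4:4;
g = exp(-x.^2 / (2*sigma^2));
g = g / sum(g);                        % normalize to unit DC gain
mask = g' * g;                         % separable 2-D Gaussian mask

frame = zeros(60, 200);                % toy frame: dark background...
frame(25:34, 40:59) = 1;               % ...with a white rectangular object
frameFiltered = conv2(frame, mask, 'same');   % spatially filtered frame
% imagesc(frameFiltered); colormap gray;      % inspect the result
```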

Temporal Filter
- First-order lowpass filter (sketched below)
- Filtered response decays exponentially
- Filter gain is inversely proportional to the time constant
- Figure: spatially and temporally filtered object of length L = 6 (tau = 1.5, D = 3)
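A minimal sketch of such a first-order temporal lowpass applied to one pixel's intensity over time is shown below; the discretization and the toy step input are assumptions. The response rises and decays exponentially with time constant tau, so a larger tau gives the filter a longer memory.

```matlab
% Sketch of a first-order temporal lowpass filter (exponential decay with
% time constant tau). The step input and dt = 1 frame are assumed values.
tau = 1.5;  dt = 1;
alpha = dt / (tau + dt);                     % discrete lowpass coefficient
x = [zeros(1,5), ones(1,20), zeros(1,25)];   % toy pixel intensity over time
y = zeros(size(x));
for k = 2:numel(x)
    y(k) = y(k-1) + alpha * (x(k) - y(k-1)); % discretized dy/dt = (x - y)/tau
end
plot(0:numel(x)-1, x, 0:numel(x)-1, y);      % step response decays with tau
```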

Object Simulation
- Idealized object: a matrix of ones representing a white rectangle (generation sketched below)
- Size: length L, width K (pixels)
- Moves across the screen frame by frame
- Variable size and direction; constant velocity
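One way such a stimulus might be generated in MATLAB is sketched below: a K-by-L block of ones is shifted by n0 pixels per frame across a dark background. The screen size and the values of L, K, and n0 are illustrative, and L is taken here as the extent along the direction of motion.

```matlab
% Sketch of the idealized stimulus: a K-by-L white rectangle (matrix of
% ones) moving n0 pixels per frame. All sizes here are assumed values.
L = 20; K = 10; n0 = 1;                % object length, width, pixels per frame
H = 60; W = 200; nFrames = 150;        % screen height, width, number of frames
frames = zeros(H, W, nFrames);
row = round((H - K)/2);                % keep the object vertically centered
for k = 1:nFrames
    col = 1 + (k-1)*n0;                % constant velocity: n0 pixels per frame
    if col + L - 1 <= W
        frames(row:row+K-1, col:col+L-1, k) = 1;   % place the white rectangle
    end
end
% imagesc(frames(:,:,20)); colormap gray;          % view frame 20
```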

Filtered Moving Object
- The temporally filtered signals mimic those that would appear in a lizard's retinal image
- They decay with time constant tau; the larger tau, the longer the memory of the filter
- Figure: temporally filtered object for tau = 20 and n0 = 2

Results  For small objects, the gain of the temporal filter is inversely proportional to tau.  The maximum response peaks when the velocity is equal to the correlator distance  Calculated the size of the maximum response for various object sizes, speeds, and memory time constants

The Effect of the Time Constant for Different Object Sizes
- For large objects, the peak of the maximum response shifts as tau changes
- The peak of the maximum response no longer occurs when the velocity equals the correlation distance (D = 6)
- Large objects are not tuned for different values of tau

The Effect of Tau on Correlation Distance and Tuning (Large Object)
- For a small correlation distance D and small tau, the response is broader
- The tuning of the motion detector depends on tau

Future Work
- Acceleration
- Swaying motion
- Relation to physiological results

References
- J. M. Zanker, M. V. Srinivasan, and M. Egelhaaf, "Speed Tuning in Elementary Motion Detectors of the Correlation Type," Biological Cybernetics, 1999.
- A. Borst and M. Egelhaaf, "Principles of Visual Motion Detection," Trends in Neurosciences (TINS), Vol. 12, No. 8, 1989.
- A. Borst, "Models of Motion Detection," Nature Neuroscience Supplement, Vol. 3, November 2000.
- M. P. Eckert and J. Zeil, "Towards an Ecology of Motion Vision," in Motion Vision: Computational, Neural, and Ecological Constraints (J. M. Zanker and J. Zeil, eds.), Springer Verlag, Berlin Heidelberg New York, 2001.
- J. M. Zanker, "An Early-Vision Computational Model to Analyze Motion Signal Distributions in Artificial and Natural Stimuli," Department of Psychology, Royal Holloway University of London, England, 2001.

Questions?