Computational Architectures in Biological Vision, USC, Spring 2001


Computational Architectures in Biological Vision, USC, Spring 2001 Lecture 9. Motion Perception Reading Assignments: Chapter 10.

Announcements
No lecture next week (3/13).
White papers due the following week (3/20). In one page (no more, no less), describe:
- an introduction to why your topic is important
- a brief summary of the state of the art in the field
- your plans for the project

Moving objects should not be confused with:
- spatio-temporal noise
- changes in lighting and shadows
- appearance or disappearance of image elements (e.g., a flickering light)

What defines motion, then? Changes in the image over time which do not fall under the categories of the previous slide. And keep in mind that:
- "objects" may not be contiguous, but may consist of several disjoint parts (e.g., a flock of birds)
- motion may be non-rigid and involve deformations (e.g., a person walking)
- moving objects may be partially occluded (e.g., behind a picket fence)
- observer and eye motion cause the entire scene to appear to move

Motion Field vs. Optical Flow
Motion field: assigns a motion vector to every point in the image; this is the "true" motion of objects.
Optical flow: the apparent motion of the brightness pattern of the image; this is what we can measure.
Are they identical? Ideally, yes, but…
- rotating sphere: motion field but no optical flow
- moving sun: optical flow but no motion field

The Computational Challenge
Infer the motion field from the apparent optical flow. This is particularly difficult and requires high-level knowledge in many ambiguous situations (see previous slide). Just computing the optical flow is already a very complicated problem!
- correspondence problem: given a series of frames, how do we pair features in one frame with the corresponding features in the next frame, so as to infer the motion of each feature?
- aperture problem: if we don't see the whole object, recovering its motion is ambiguous.

The Correspondence Problem
Many possible pairings between features extracted from successive frames may exist… This is identical to the correspondence problem in stereo vision, except that here we can use more than two frames to infer the pairing.
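As a minimal sketch of how one such pairing can be found in practice, here is sum-of-squared-differences block matching between two frames; the function name, parameter values, and synthetic data are illustrative, not from the lecture:

```python
import numpy as np

def block_match(frame1, frame2, y, x, block=3, search=4):
    """Find the displacement (dy, dx) of the block centered at (y, x)
    in frame1 that best matches frame2, minimizing the sum of
    squared differences (SSD). Border candidates are skipped naively."""
    h = block // 2
    ref = frame1[y - h:y + h + 1, x - h:x + h + 1]
    best, best_dv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            cand = frame2[yy - h:yy + h + 1, xx - h:xx + h + 1]
            if cand.shape != ref.shape:
                continue  # candidate window fell outside the image
            ssd = float(np.sum((cand - ref) ** 2))
            if ssd < best:
                best, best_dv = ssd, (dy, dx)
    return best_dv

# Synthetic example: a bright square shifted by (2, 1) between frames.
f1 = np.zeros((20, 20)); f1[8:11, 8:11] = 1.0
f2 = np.zeros((20, 20)); f2[10:13, 9:12] = 1.0
print(block_match(f1, f2, 9, 9))  # (2, 1)
```

With more than two frames, the same matching score can be accumulated along candidate trajectories, which is what makes the multi-frame version of the problem better constrained than the two-frame stereo case.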

Note Sometimes the “true” solution may not be the most obvious one!

The Aperture Problem
In the upper two images: even though the edge is moving down and to the right, we perceive its motion as rightward only…
For an edge viewed through an aperture, we can only recover the component of the motion field that is orthogonal to the edge.

Basic Approach
Standard assumption, the brightness constancy constraint: the brightness of an image patch does not change as the observer moves around the object. So,
(Ix, Iy) · (u, v) = -It
Hence the values (u, v) satisfying the constraint form a straight line, and all a local measurement can do is identify this line (aperture problem). We can compute the component of the optical flow in the direction of the brightness gradient (Ix, Iy)^T, but not in the orthogonal direction.
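A hedged sketch of this in code (names are illustrative): given the spatial and temporal derivatives at a point, return the one component the constraint determines, the normal flow along the gradient.

```python
def normal_flow(Ix, Iy, It):
    """Component of optical flow along the brightness gradient, from
    Ix*u + Iy*v = -It. Because of the aperture problem, only this
    component is recoverable from a single local measurement."""
    g2 = Ix ** 2 + Iy ** 2
    if g2 == 0:
        return (0.0, 0.0)  # no gradient: the constraint vanishes entirely
    s = -It / g2
    return (s * Ix, s * Iy)

# A vertical edge translating rightwards at speed 1:
# Ix = 1, Iy = 0, It = -1, so the normal flow is (1, 0).
print(normal_flow(1.0, 0.0, -1.0))  # (1.0, 0.0)
```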

Typical Algorithms
Most computer vision algorithms add a regularization constraint to the basic equation. E.g., regularizing constraint: the optical flow field should vary smoothly in most parts of the image, with associated error measure
Es = ∫∫ (ux^2 + uy^2 + vx^2 + vy^2) dx dy
Error associated with the brightness constraint:
Ec = ∫∫ (Ix u + Iy v + It)^2 dx dy
Algorithm: find the (u, v) that minimize Ec + λ Es (this is the classic Horn & Schunck formulation).
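As a sketch of such a regularized algorithm, here is a minimal Horn-Schunck-style iteration (Jacobi updates with a 4-neighbour mean; the array shapes and parameter values are illustrative):

```python
import numpy as np

def horn_schunck(Ix, Iy, It, alpha=1.0, iters=100):
    """Trade off the brightness constraint against smoothness by
    repeatedly pulling (u, v) toward the local mean flow, then
    correcting along the gradient to satisfy the constraint."""
    u = np.zeros_like(Ix, dtype=float)
    v = np.zeros_like(Ix, dtype=float)

    def local_mean(f):  # 4-neighbour average (wrap-around borders)
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

    for _ in range(iters):
        ub, vb = local_mean(u), local_mean(v)
        t = (Ix * ub + Iy * vb + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = ub - Ix * t
        v = vb - Iy * t
    return u, v

# Uniform derivatives Ix = 1, Iy = 0, It = -1 everywhere: the smooth
# solution consistent with the constraint is a constant flow (1, 0).
u, v = horn_schunck(np.ones((8, 8)), np.zeros((8, 8)), -np.ones((8, 8)))
print(round(float(u.mean()), 6))  # 1.0
```

Note how the regularizer resolves the aperture problem: where the data constrain only the normal component, the smoothness term fills in the rest from the neighbourhood.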

Reichardt Motion Detector
Basic Reichardt detector: compares the light intensity at two locations, with a time delay; hence, it is tuned to a given velocity and direction of motion (Reichardt, 1950s).
Analog VLSI implementation (Harrison et al., 2000).

Full Reichardt Detector
Combines a detector tuned to one direction with another tuned to the opposite direction; the difference of their outputs gives a signed, direction-selective response.
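A full Reichardt detector can be sketched in a few lines: delay-and-multiply in each direction, then take the opponent difference (the delay, signal shapes, and names below are illustrative):

```python
import numpy as np

def reichardt(left, right, delay=1):
    """Opponent Reichardt correlator on two photoreceptor time series.
    Each half multiplies one signal by a delayed copy of the other;
    subtracting the two halves gives a signed response that is
    positive for left-to-right motion."""
    l_d = np.roll(left, delay)   # delayed copies (circular, for the sketch)
    r_d = np.roll(right, delay)
    return float(np.sum(l_d * right - r_d * left))

# A blip hits the left receptor one step before the right receptor,
# i.e. the stimulus moves left-to-right: the response is positive.
t = np.arange(20)
left = np.exp(-(t - 5.0) ** 2)
right = np.exp(-(t - 6.0) ** 2)
print(reichardt(left, right) > 0)  # True
```

Because the multiplication compares a delayed signal against an undelayed one, the detector responds best when the stimulus takes about one delay period to travel between the two receptors, which is what makes it velocity tuned.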

Spatio-Temporal Energy Model
Adelson & Bergen, 1984. Remember the complex cells in V1: the preferred stimulus is a bar with a given orientation, moving in a given direction (because of the aperture problem, the preferred direction of motion is orthogonal to the preferred bar orientation). The spatio-temporal energy model is based on such cells, with 3D (x, y, t) bandpass receptive fields.

Spatio-Temporal Energy Model
An (x, y) bandpass receptive field tunes the 3D filter to oriented bars; an (x, t) bandpass receptive field tunes it to a given velocity in the x direction (and likewise for (y, t)).

Quadrature Pairs
Take the sum of squared responses from a zero-phase filter and a 90°-phase filter: the result is an "energy" that does not depend on the phase of the stimulus.
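A toy numerical check of this phase invariance (the "filters" here are global sine and cosine projections rather than localized spatio-temporal Gabors, purely for illustration):

```python
import numpy as np

def quadrature_energy(signal, freq, t):
    """Sum of squared responses of a zero-phase (cosine) filter and a
    90-degree-phase (sine) filter at the same frequency."""
    even = float(np.sum(signal * np.cos(2 * np.pi * freq * t)))
    odd = float(np.sum(signal * np.sin(2 * np.pi * freq * t)))
    return even ** 2 + odd ** 2

# Shifting the phase of the stimulus leaves the energy unchanged.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
e1 = quadrature_energy(np.sin(2 * np.pi * 5 * t), 5, t)
e2 = quadrature_energy(np.sin(2 * np.pi * 5 * t + 1.0), 5, t)
print(abs(e1 - e2) / e1 < 1e-9)  # True
```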

Locomotion
Several properties of the motion field can be exploited for locomotion:
- pure translation in the direction of gaze yields an expanding flow field
- one point has zero motion: the focus of expansion (FOE); we will reach it if we continue moving in the same direction forever
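The FOE can be recovered from flow measurements; here is a minimal 1-D sketch, assuming a purely expanding flow of the form u = k·(x - x0) (the data values and names are illustrative):

```python
import numpy as np

def focus_of_expansion(xs, us):
    """For an expanding 1-D flow u = k*(x - x0), estimate the focus of
    expansion x0 as the zero crossing of a least-squares line fit."""
    k, b = np.polyfit(xs, us, 1)  # u = k*x + b, so x0 = -b/k
    return -b / k

# Flow expanding away from x0 = 3 with rate k = 0.5:
xs = np.array([0.0, 1.0, 2.0, 4.0, 5.0, 6.0])
us = 0.5 * (xs - 3.0)
print(round(focus_of_expansion(xs, us), 6))  # 3.0
```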

Complex Motion We can recognize objects from their motion with amazing facility (e.g., Johansson, 1973). In addition to basic feature detection and regularization constraints, this must be thanks to additional “high-level” constraints on the shape of objects and their usual appearance when they move.

What Motion Perception Tells Us
Depth and form:
- motion parallax
- structure from motion
Biological motion:
- classification and identification: living vs. non-living, actions, gender, individuals (self and friends)
- does not involve complex thought processes; quick to compute

What Motion Perception Tells Us
Visual guidance:
- direction identification
- judging time of arrival: the retinal image size expands rapidly as we approach an object, which supports judging time to contact
- a wide range of creatures respond to looming (expansion)
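Judging the time of arrival reduces to a remarkably simple computation, Lee's tau: the ratio of the retinal angle to its rate of expansion. The object size, distance, and speed below are illustrative:

```python
def time_to_contact(theta, theta_dot):
    """Lee's tau: time to contact from purely retinal quantities,
    tau = theta / theta_dot. For an object of size S at distance D
    approaching at speed V, theta ~ S/D and theta_dot ~ S*V/D**2
    (small-angle approximation), so tau ~ D/V."""
    return theta / theta_dot

# Object of size 1 m, 8 m away, approaching at 2 m/s:
S, D, V = 1.0, 8.0, 2.0
theta = S / D                # ~ retinal angle (radians)
theta_dot = S * V / D ** 2   # ~ its rate of expansion
print(time_to_contact(theta, theta_dot))  # 4.0 (= D / V, in seconds)
```

Note that neither the object's size nor its distance needs to be known: both cancel in the ratio, which is why tau is so attractive as a biological strategy.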

Case Study Biological perception of looming motion and time-to-contact.

Looming Motion
Time: t; distance: D(t); angle subtended by the object on the retina: θ(t).
Absolute rate of image expansion: θ'(t). Relative rate of expansion: θ'(t)/θ(t).
Response of a looming-sensitive neuron in the locust: η(t) = θ'(t) e^(-α θ(t)).
Sun & Frost, Nature Neuroscience, 1998

Looming Neurons in the pigeon’s nucleus rotundus. Sun & Frost, Nature Neuroscience, 1998

Looming Detector in the Locust
Gabbiani et al., Science, 1996.
First stage (DCMD neurons): extract features. Second stage (LGMD neuron): compute looming.

The neural response to looming is proportional to the time derivative of the visual angle times a negative exponential of that angle: η(t) ∝ θ'(t) e^(-α θ(t)).
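A numerical sketch of this response profile, assuming an object approaching at constant speed (the half-size, speed, and alpha values are illustrative): the multiplicative exponential shuts the response down as the angle grows, so the response peaks a short time before collision.

```python
import numpy as np

def eta(theta, dt, alpha):
    """Looming response: time derivative of the visual angle times a
    negative exponential of that angle, eta = theta' * exp(-alpha*theta)."""
    theta_dot = np.gradient(theta, dt)
    return theta_dot * np.exp(-alpha * theta)

# Object of half-size 0.25 m starting 10 m away, approaching at 2 m/s
# (collision at t = 5 s); theta(t) = 2*arctan(half_size / distance).
dt = 0.001
t = np.arange(0.0, 4.5, dt)
theta = 2.0 * np.arctan(0.25 / (10.0 - 2.0 * t))
r = eta(theta, dt, alpha=5.0)
# For this eta, the peak occurs when distance = alpha * half_size,
# i.e. at t = (10 - 5*0.25) / 2 = 4.375 s, before the collision.
print(round(float(t[int(np.argmax(r))]), 3))  # ~ 4.375
```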

Detecting Looming Motion Analog-VLSI implementation (Indiveri & Koch)