1 Computational Vision CSCI 363, Fall 2012 Lecture 20 Stereo, Motion.


1 Computational Vision CSCI 363, Fall 2012 Lecture 20 Stereo, Motion

2 Do Humans Use the Same Constraints as Marr-Poggio? 1. Similarity: We probably use this one. Humans cannot fuse a white dot with a black dot. 2. Epipolar: The brain probably uses some version of this. If one image is shifted upward (or downward), people cannot fuse the two images. 3. Uniqueness: We probably don't rely on this. There are examples of images where people can fuse two features in one image with a single feature in the other.

3 Violations of the Uniqueness Constraint Panum's limiting case: matching one line with two. Braddick's demonstration: matching one point with two, repeatedly. Stereo algorithms can handle Braddick's demonstration with slight modifications.

4 The Continuity Constraint The brain probably uses some form of continuity constraint. Evidence: There is a limit to how quickly disparity can change from one location to the next, and still produce stereo fusion. For a plane that is steeply slanted in depth, people lose the ability to see the slant and just see the step edge.

5 Does the Brain use Zero Crossings? Many machine vision algorithms extract edges first (e.g. with zero crossings) and then compute the disparities of matched edges. They use edges for matching because edges correspond to important physical features (e.g. object boundaries). We also know that people can localize the positions of edges very accurately, and this accurate localization is required for stereo vision. However, it is not clear what primitives the human brain matches when computing stereo disparity. The information from the two eyes is combined in V1, which is after the center-surround operators (like the Laplacian operators), so zero crossing information would be available.
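The zero-crossing primitive mentioned above can be illustrated with a short sketch. This is not any specific algorithm from the lecture, just a minimal 1-D analogue of the Laplacian-of-Gaussian approach: convolve a signal with a second-derivative-of-Gaussian kernel and report where the response changes sign (the candidate edge locations that a stereo matcher would pair up).

```python
import numpy as np

def zero_crossings_1d(signal, sigma=2.0):
    """Find zero crossings of a 1-D signal convolved with the second
    derivative of a Gaussian (a 1-D analogue of the LoG operator)."""
    # Build a second-derivative-of-Gaussian kernel.
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    kernel = (x**2 / sigma**4 - 1 / sigma**2) * g
    response = np.convolve(signal, kernel, mode="same")
    # A zero crossing sits between adjacent samples of opposite sign.
    signs = np.sign(response)
    return np.where(signs[:-1] * signs[1:] < 0)[0]

# A step edge at index 50 should produce a zero crossing near it.
step = np.concatenate([np.zeros(50), np.ones(50)])
print(zero_crossings_1d(step))
```

The printed indices cluster at the step, which is why zero crossings localize edges so precisely: the sign change pins the edge down to within a pixel even though the operator itself is broad.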

6 Zero crossings are not enough [Figure: a stereo pair shown with its luminance profiles and the convolved (left and right) images, illustrating that matching zero crossings alone does not capture perceived depth.]

7 Perception vs. Computation [Figure: perceived depth compared with depth computed from zero crossings alone, and with depth computed using the positions of peaks and troughs of the convolved images.]

8 Some V1 cells are tuned to disparity Some cells are narrowly tuned for disparity. Most prefer a disparity near zero. [Figure: tuned excitatory and tuned inhibitory disparity tuning curves.]

9 Near and Far cells Some cells are broadly tuned for disparity, preferring either near objects or far objects.

10 Causes of Image Motion Image motion can result from numerous causes: A moving object in the scene Eye movements Motion of the observer

11 Uses of Image Motion Image motion on the retina can be used to compute a variety of scene properties. Among them are: Image segmentation (dividing up the scene into individual objects or surfaces) 3D structure of an object (structure from motion) Depth (motion parallax) Time to collision Heading direction Moving object direction Speed of eye movements (for smooth pursuit)

12 Two stages of Motion processing Visual motion processing is thought to occur in two stages: 1) Extract the 2D image velocity field. 2) Use the 2D velocity field to compute properties of the scene (as listed in the previous slide).

13 Models of Motion Detection Problem: A single photoreceptor (or retinal ganglion cell) cannot detect motion unambiguously. A spot of light moving across its receptive field will cause a temporary increase in light followed by a decrease. The photoreceptor cannot distinguish between motion and changes in ambient lighting. Types of models that solve this problem: Correlation models Gradient models Energy models

14 Correlation Models Correlation models compare the response at one location with a delayed response at a neighboring position (delay and compare, using positive correlation). The Barlow and Levick model instead uses inhibition. [Figure: schematic of a delay-and-compare detector that prefers rightward motion.]

15 The Reichardt Detector The full Reichardt detector has excitation by motion in one direction and inhibition by motion in the opposite direction.
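The full Reichardt detector described above can be sketched in a few lines. This is a simplified discrete-time version (the delay is an idealized shift rather than a low-pass filter): each subunit multiplies one receptor's delayed signal with the neighboring receptor's undelayed signal, and subtracting the two subunits yields excitation for one direction and inhibition for the opposite one.

```python
import numpy as np

def reichardt(left_input, right_input, delay):
    """Full Reichardt correlator over two receptor time series.
    Positive output signals rightward (left-to-right) motion,
    negative output signals leftward motion."""
    a = np.asarray(left_input, dtype=float)
    b = np.asarray(right_input, dtype=float)
    a_delayed = np.concatenate([np.zeros(delay), a[:-delay]])
    b_delayed = np.concatenate([np.zeros(delay), b[:-delay]])
    rightward = a_delayed * b   # stimulus reached the left receptor first
    leftward = b_delayed * a    # stimulus reached the right receptor first
    return np.sum(rightward - leftward)

# A pulse that hits the left receptor, then the right receptor 3 time
# steps later, i.e. rightward motion matched to the detector's delay.
t = np.zeros(20); t[5] = 1.0
moving_right_a, moving_right_b = t, np.roll(t, 3)
print(reichardt(moving_right_a, moving_right_b, delay=3))  # positive
```

Swapping the two inputs (leftward motion) flips the sign of the output, which is exactly the excitation/inhibition asymmetry the slide describes.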

16 Gradient Models The gradient models use the "Constant Brightness Assumption". In 1 spatial dimension, this states that the intensity pattern translates without changing: I(x, t0) = I(x + Δx, t0 + Δt), where Δx = u Δt for image velocity u.

17 The Gradient Constraint Equation (1D) Starting from constant brightness, I(x + uΔt, t + Δt) = I(x, t), and using a Taylor series expansion: I(x, t) + I_x uΔt + I_t Δt ≈ I(x, t). Rearranging: I_x u + I_t = 0, i.e. u = -I_t / I_x. This is the Gradient Constraint Equation.
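The 1D constraint can be applied directly to a pair of frames. This is a minimal sketch, not a robust estimator: I_x is approximated by a central difference, I_t by a frame difference, and the pixel is assumed to have nonzero spatial gradient (the formula fails where I_x = 0).

```python
import numpy as np

def velocity_1d(frame0, frame1, x, dt=1.0):
    """Estimate 1-D image velocity at pixel x from the gradient
    constraint I_x * u + I_t = 0, so u = -I_t / I_x."""
    ix = (frame0[x + 1] - frame0[x - 1]) / 2.0  # central difference
    it = (frame1[x] - frame0[x]) / dt           # temporal difference
    return -it / ix

# A smooth pattern translating right by 2 pixels per frame.
xs = np.arange(100, dtype=float)
frame0 = np.sin(0.1 * xs)
frame1 = np.sin(0.1 * (xs - 2.0))  # pattern shifted right by 2
print(velocity_1d(frame0, frame1, x=30))  # close to the true velocity of 2
```

The estimate is only approximate because the Taylor expansion assumes a small displacement; for the 2-pixel shift here the discrete derivatives introduce a small bias.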

18 The Gradient Constraint Equation (2D) Starting from I(x + uΔt, y + vΔt, t + Δt) = I(x, y, t) and using a Taylor series expansion, then rearranging: I_x u + I_y v + I_t = 0. This is the Gradient Constraint Equation in 2D.

19 The Aperture Problem The gradient constraint equation for a 2D image is 1 equation with 2 unknowns (u and v). To solve for u and v, we must measure I_x, I_y, and I_t at 2 locations where they are not all identical. If our view is limited to an edge seen through an aperture, we cannot solve for both u and v independently; we can only find the component of motion perpendicular to the edge. [Figure: an edge seen through an aperture, with its perpendicular velocity component indicated.]
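The "two locations with different gradients" idea above can be shown concretely. This sketch (an assumption of mine, not an algorithm from the lecture, though it is the idea behind least-squares flow methods) stacks the constraint I_x u + I_y v + I_t = 0 from several measurements and solves for (u, v); with only one edge orientation the system is singular, which is precisely the aperture problem.

```python
import numpy as np

def velocity_2d(gradients):
    """Recover (u, v) from gradient measurements (Ix, Iy, It) taken
    at two or more locations, by solving Ix*u + Iy*v = -It in a
    least-squares sense."""
    g = np.asarray(gradients, dtype=float)
    A, b = g[:, :2], -g[:, 2]
    u, v = np.linalg.lstsq(A, b, rcond=None)[0]
    return u, v

# Two edges with different orientations, both consistent with the
# true motion (u, v) = (1, 2), since It = -(Ix*u + Iy*v):
meas = [(1.0, 0.0, -1.0),  # vertical edge: constrains only u
        (0.0, 1.0, -2.0)]  # horizontal edge: constrains only v
print(velocity_2d(meas))
```

If both measurements came from the same edge orientation, the two rows of A would be parallel and (u, v) would be underdetermined; only the normal component could be recovered.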

20 The Aperture Problem is Fundamental The aperture problem is a fundamental problem when one is trying to measure image velocity using local detectors. This is true in biological vision (neurons have local receptive fields). This is also true in machine vision (intensity is detected locally by photodetectors).