Dynamic Vision
Presented by: Cindy Yan, EE6358 Computer Vision


Outline:
- Change detection: Difference Picture (DP), Accumulative Difference Picture (ADP)
- Segmentation using motion
- Match correspondence: point matching, line matching
- Optical flow in motion analysis
- Tracking

Dynamic Scene Analysis System. Input: a sequence of image frames; each frame represents an image of the scene at a particular instant of time. Changes in a scene arise from camera motion and object motion.

Camera and world setup. Four cases are distinguished by whether the camera and the objects move: SCSO (stationary camera, stationary objects), SCMO (stationary camera, moving objects), MCSO (moving camera, stationary objects), and MCMO (moving camera, moving objects).

1. Change Detection. Detection of changes between two successive frames of a sequence. Changes can be detected at different levels: pixel, edge, or region.

Difference Pictures: Compare the corresponding pixels of the two frames. A binary difference picture d(i,j) between frames f1(i,j) and f2(i,j) is obtained by:

d(i,j) = 1 if |f1(i,j) - f2(i,j)| > T, and d(i,j) = 0 otherwise,

where T is a gray-level threshold.
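As a sketch, the thresholded difference picture can be computed with NumPy; the threshold value and the toy frames below are illustrative, not from the slides:

```python
import numpy as np

def difference_picture(f1, f2, threshold):
    """Binary difference picture: 1 where the gray-level change exceeds the threshold."""
    return (np.abs(f1.astype(int) - f2.astype(int)) > threshold).astype(np.uint8)

# Two 4x4 frames: a bright 2x2 patch moves one pixel to the right.
f1 = np.zeros((4, 4), dtype=np.uint8)
f2 = np.zeros((4, 4), dtype=np.uint8)
f1[1:3, 0:2] = 200
f2[1:3, 1:3] = 200
dp = difference_picture(f1, f2, threshold=50)
# dp marks the column the patch left and the column it entered
```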


Size Filter. The difference picture contains many noise pixels. Only pixels that belong to a connected component larger than a minimum size are retained for further analysis. The size filter reduces noise, but it also removes some desired signals, such as slowly moving or small objects.
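A minimal size filter can be sketched with a breadth-first connected-component search; the 4-connectivity and the minimum size used here are illustrative choices, not prescribed by the slides:

```python
import numpy as np
from collections import deque

def size_filter(dp, min_size):
    """Keep only 4-connected components of a binary picture with at least min_size pixels."""
    h, w = dp.shape
    seen = np.zeros((h, w), dtype=bool)
    out = np.zeros_like(dp)
    for i in range(h):
        for j in range(w):
            if dp[i, j] and not seen[i, j]:
                comp, q = [], deque([(i, j)])      # flood-fill one component
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and dp[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_size:          # retain only large components
                    for y, x in comp:
                        out[y, x] = 1
    return out

dp = np.array([[1, 0, 0, 0],
               [0, 0, 1, 1],
               [0, 0, 1, 1]], dtype=np.uint8)
filtered = size_filter(dp, min_size=3)   # the isolated noise pixel is discarded
```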

Accumulative Difference Pictures (ADP). Analyze the changes over a sequence of frames by comparing every frame of the sequence to a reference frame: whenever the difference at a pixel exceeds a threshold, the corresponding entry of the accumulative difference picture is increased by 1:

ADP_0(x,y) = 0
ADP_k(x,y) = ADP_{k-1}(x,y) + DP_{1k}(x,y)

where DP_{1k} is the binary difference picture between the reference frame and the k-th frame.
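The ADP recursion above can be sketched directly; the reference frame, threshold, and toy sequence are illustrative:

```python
import numpy as np

def accumulative_difference_picture(frames, reference, threshold):
    """ADP_k = ADP_{k-1} + DP_{1k}: accumulate the binary differences
    between each frame and the reference frame."""
    adp = np.zeros(reference.shape, dtype=int)
    for f in frames:
        adp += (np.abs(f.astype(int) - reference.astype(int)) > threshold).astype(int)
    return adp

ref = np.zeros((3, 3), dtype=np.uint8)
frames = [ref.copy() for _ in range(3)]
for k, f in enumerate(frames):
    f[1, k] = 255              # a bright point moving left to right
adp = accumulative_difference_picture(frames, ref, threshold=50)
# each visited pixel accumulates one count
```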


2. Segmentation Using Motion. For a stationary camera, segmentation involves separating the moving components of the scene from the stationary components. Segmentation may be performed using region-based or edge-based approaches.

Time-Varying Edge Detection. A moving edge is an edge that changes its image position over time. Moving edges can be detected by combining the temporal and spatial gradients using a logical AND operation.

Performance of the edge detector: slow-moving edges are detected if they have good contrast, and poor-contrast edges are detected if they move quickly, because the detector thresholds the product of the temporal difference magnitude |D| and the spatial gradient magnitude |E| against a threshold T.
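One way to sketch this AND-like combination is to threshold the product |E|*|D|; here np.gradient stands in for a proper spatial edge operator, and the frames and threshold are illustrative:

```python
import numpy as np

def moving_edges(f1, f2, threshold):
    """Moving-edge detector sketch: threshold the product of the spatial
    gradient magnitude |E| and the temporal difference |D|."""
    f1 = f1.astype(float)
    gy, gx = np.gradient(f1)               # spatial gradients of the first frame
    e = np.hypot(gx, gy)                   # |E|: edge strength
    d = np.abs(f2.astype(float) - f1)      # |D|: temporal difference
    return (e * d > threshold).astype(np.uint8)

# A vertical step edge that moves one pixel to the left between frames.
f1 = np.zeros((4, 6))
f1[:, 3:] = 200.0
f2 = np.zeros((4, 6))
f2[:, 2:] = 200.0
me = moving_edges(f1, f2, threshold=1000.0)
# only the column where the edge both exists and moved responds
```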

3. Match Correspondence. Find corresponding image features (points or lines) in two image frames, i.e., features that correspond to the same feature in the 3D scene. Relaxation labeling is a constraint-propagation approach to solving the correspondence problem: proper labels must be assigned to the objects in the image.

Key questions in the matching problem:
- How are points selected for matching?
- What features are matched?
- How are the correct matches chosen?
- What constraints are placed on the displacement vectors?

Relaxation Labeling Process. Proper labels must be assigned to the objects in the image. Define R, C, L, and P for each node:
- R contains all possible relations among the nodes
- C represents the compatibility among these relations
- L contains all labels that can be assigned to nodes
- P represents the set of possible labels that can be assigned to a node at any instant in the computation
Decide which of the possible interpretations is correct on the basis of local evidence.

Disparity Computations. Matching problem: pair a point p_i = (x_i, y_i) in the first image with a point p_j = (x_j, y_j) in the second image. The disparity between them is the displacement vector between the two points:

D_ij = (x_i - x_j, y_i - y_j)

How are points selected for matching?
- Discreteness: a measure of the distinctiveness of individual points
- Similarity: a measure of how closely two points resemble one another
- Consistency: a measure of how well a match conforms to nearby matches

Discrete features. Discreteness means that features should be isolated points. The discrete feature points can be selected using any corner detector, or a feature detector such as the Moravec interest operator.

Moravec interest operator. Detects points at which intensity values vary quickly in at least one direction. Compute the sum of the squares of pixel differences in four directions (horizontal, vertical, and both diagonals) over a 5 by 5 window.

Moravec interest operator (cont.). Take the minimum of these four directional variances as the interest value, suppress all values that are not local maxima, and apply a threshold to remove weak feature points.
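The operator can be sketched as follows; the window size, threshold, and toy image are illustrative, and a production detector would use a larger non-maximum-suppression radius:

```python
import numpy as np

def moravec(image, window=5, threshold=100.0):
    """Moravec interest operator sketch: at each pixel, sum squared differences
    between the window and its shifted copy in four directions, keep the
    minimum, then retain thresholded local maxima."""
    img = image.astype(float)
    h, w = img.shape
    r = window // 2
    interest = np.zeros((h, w))
    for i in range(r + 1, h - r - 1):
        for j in range(r + 1, w - r - 1):
            win = img[i - r:i + r + 1, j - r:j + r + 1]
            ssds = []
            for di, dj in ((0, 1), (1, 0), (1, 1), (1, -1)):   # four directions
                shifted = img[i - r + di:i + r + 1 + di, j - r + dj:j + r + 1 + dj]
                ssds.append(np.sum((win - shifted) ** 2))
            interest[i, j] = min(ssds)        # minimum directional variance
    corners = []                              # non-maximum suppression + threshold
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            v = interest[i, j]
            if v > threshold and v == interest[i - 1:i + 2, j - 1:j + 2].max():
                corners.append((i, j))
    return corners

img = np.zeros((12, 12))
img[3:9, 3:9] = 200.0                         # bright square: corners respond
corners = moravec(img, window=3, threshold=1000.0)
```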

Moravec interest operator (cont.). After finding all the feature points, pair each feature point in the first image with all points in the second image within some maximum distance. This eliminates many connections from the complete bipartite graph.

Point Matching. Define the object set O = {o_1, o_2, ..., o_m} from the image points of frame 1; each element is a node. Define the label set L = {l_1, l_2, ..., l_n} from the points of frame 2. Establish a relationship set among the object nodes, such as neighboring points. Establish an initial match set:

M(0) = {(<o_1,l_1>, <o_1,l_2>, ..., <o_1,l_n>), ..., (<o_i,l_1>, <o_i,l_2>, ..., <o_i,l_n>), ..., (<o_m,l_1>, <o_m,l_2>, ..., <o_m,l_n>)}

Point matching (cont.). The set of potential matches forms a bipartite graph. The goal of the correspondence problem is to remove all connections except one for each node.

Consistency measurement. Based on: the geometric relations among nodes in the image, and the gray level or gradient of the original image at each node. Compute the similarity (or disparity) of each node with respect to the matched pairs; e.g., the probability of a match between o_i and l_i is computed from these similarities.

Consistency measurement (cont.). Update the match set M(k) iteratively: if the similarity of <o_i, l_i> is high, encourage the matches of its consistent nodes; otherwise, discourage the matches of its consistent nodes. Remove match pairs of small similarity (small match probability p_i^(k)(l)) from the match set M(k). Repeat the above steps until each node has no more than one label in M(k).
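A toy sketch of the iterative idea, not the exact update rule from the slides: candidate matches whose disparity agrees with a neighbour's candidates are reinforced, while inconsistent ones decay. The point coordinates and the 0.2 support floor are arbitrary illustrative choices:

```python
# Two object points move by the same displacement (2, 0); relaxation should
# keep the consistent matches and suppress the spurious candidate l_2.
objects = {0: (0, 0), 1: (5, 0)}                 # points in frame 1
labels = {0: (2, 0), 1: (7, 0), 2: (5, 3)}       # points in frame 2

P = {0: {0: 0.5, 2: 0.5},                        # o_0 could match l_0 or l_2
     1: {1: 0.5, 2: 0.5}}                        # o_1 could match l_1 or l_2

def disparity(o, l):
    return (labels[l][0] - objects[o][0], labels[l][1] - objects[o][1])

for _ in range(10):
    newP = {}
    for o, cand in P.items():
        other = 1 - o                            # the only neighbour in this toy
        support = {}
        for l, p in cand.items():
            d = disparity(o, l)
            # support: the neighbour has a candidate with the same disparity
            same = [pn for ln, pn in P[other].items() if disparity(other, ln) == d]
            s = max(same) if same else 0.0
            support[l] = p * (0.2 + s)           # floor keeps probabilities nonzero
        z = sum(support.values())
        newP[o] = {l: v / z for l, v in support.items()}
    P = newP

best = {o: max(c, key=c.get) for o, c in P.items()}
# the consistent pair (o_0 -> l_0, o_1 -> l_1) wins
```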

Line Matching. Given: two sets of lines in images A and B, respectively. Find: unique correspondences of lines between images A and B.

Matching function. Position disparity: the relative position of a line in an image, expressed in terms of its location and its edge direction θ. The position disparity is computed between two subsets of image lines from images A and B.

Line Matching (cont.). Orientation disparity: the difference between the orientations of corresponding lines in the two images.

Line Matching (cont.). Other disparities:
- length of the line
- intensity of the original image
- contrast
- steepness
- straightness (residual of a least-squares fit)

Kernel Match: Match a small subset of the image lines of frames A and B. For robustness of the kernel: the number of lines should be no less than 3; the lines should be long (stable); the lines should not be parallel; and the lines should be separated as much as possible. Minimize the match function over the selected subsets between the two image frames:

M = Σ_x α_x D_x

where x indexes the attributes (such as position and orientation), D_x is the disparity in attribute x, and α_x is its weight.

Match Expansion: Once kernel matching is completed, the line correspondences obtained serve as a reference for matching the remaining lines. Choose the longest line among the unmatched lines of image A and add it to the matched kernel subset of image A; calculate the match function for every unmatched line in image B. The line of image B with the minimum match function is considered the matched line. Add this matched pair of lines to the matching kernel and repeat the process until no further lines need to be matched.
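The expansion loop can be sketched with a hypothetical line representation (midpoint, orientation, length) and a weighted-sum match cost; the attribute weights and the line data below are illustrative, not from the slides:

```python
import math

def match_cost(a, b, weights=(1.0, 5.0, 0.5)):
    """Weighted sum of attribute disparities between two lines
    (mid_x, mid_y, orientation_rad, length)."""
    (ax, ay, ao, al), (bx, by, bo, bl) = a, b
    wp, wo, wl = weights
    return (wp * math.hypot(ax - bx, ay - by)   # position disparity
            + wo * abs(ao - bo)                 # orientation disparity
            + wl * abs(al - bl))                # length disparity

# image B is image A shifted by (2, 1)
lines_a = [(0, 0, 0.0, 10), (4, 4, 1.0, 8), (9, 1, 0.5, 6), (2, 7, 1.2, 5)]
lines_b = [(2, 1, 0.0, 10), (6, 5, 1.0, 8), (11, 2, 0.5, 6), (4, 8, 1.2, 5)]

matched = {0: 0, 1: 1}          # kernel: the two longest lines, assumed matched
# expansion: take remaining lines of A longest-first, pick the cheapest free line of B
for i in sorted((i for i in range(len(lines_a)) if i not in matched),
                key=lambda i: -lines_a[i][3]):
    free_b = [j for j in range(len(lines_b)) if j not in matched.values()]
    j = min(free_b, key=lambda j: match_cost(lines_a[i], lines_b[j]))
    matched[i] = j
```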

4. Optical Flow in Motion Analysis. Optical flow is the distribution of apparent velocities of brightness patterns in an image; it reflects the image changes due to motion during a time interval dt.

Basic elements of motion:
a) Translation at constant distance from the observer
b) Translation in depth relative to the observer
c) Rotation at constant distance about the view axis
d) Rotation of a planar object perpendicular to the view axis

FOE: focus of expansion. Under translational motion toward the observer, all optical flow vectors appear to radiate from a single image point, the FOE. If several independently moving objects are present in the image, each motion has its own FOE.

Mutual velocity of the observer. Let c_x = u, c_y = v, c_z = w be the mutual velocities in the x, y, z directions, respectively; let (x', y') be the image coordinates, and let (x_0, y_0, z_0) be the position of some point at time t_0. The position of the same point at time t is

(x, y, z) = (x_0 + ut, y_0 + vt, z_0 + wt),

so its image (assuming unit focal length) is x' = (x_0 + ut)/(z_0 + wt), y' = (y_0 + vt)/(z_0 + wt).

FOE determination: Assume the motion is directed toward the observer. As t → −∞, the motion can be traced back to its originating point, at infinite distance from the observer. The motion toward the observer continues along straight lines, and the originating point in the image plane (the FOE) is

(x', y') = (u/w, v/w).
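Given measured flow vectors, the FOE can also be estimated as the least-squares intersection of the flow lines; the radiating synthetic flow below is illustrative:

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares intersection of the lines through each point along its
    flow vector: for each (p, v), the line is n . x = n . p with n = (-vy, vx)."""
    A, b = [], []
    for (px, py), (vx, vy) in zip(points, flows):
        n = (-vy, vx)                          # normal of the flow line
        A.append(n)
        b.append(n[0] * px + n[1] * py)
    foe, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return foe

# synthetic flow radiating from a true FOE at (3, 2)
true_foe = np.array([3.0, 2.0])
points = np.array([[5.0, 2.0], [3.0, 6.0], [7.0, 6.0], [0.0, -1.0]])
flows = points - true_foe                      # flow points away from the FOE
foe = estimate_foe(points, flows)              # foe ≈ (3, 2)
```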

Depth Determination. Assume points on the same rigid object undergoing translational motion, with at least one actual distance value known, and the object moving toward the observer. The ratio of a point's image distance D from the FOE to its image velocity V equals the ratio of its depth z to the velocity w toward the observer, D(t)/V(t) = z(t)/w, so all depths follow from one known depth value.
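Assuming the standard FOE depth relation D/V = z/w (image distance from the FOE over image speed equals depth over approach velocity), one known depth z1 gives any other depth as z2 = z1 * (V1 * D2) / (D1 * V2). The FOE, flow values, and known depth below are synthetic:

```python
import math

def depth_from_foe(foe, p1, v1, z1, p2, v2):
    """Depth of point p2 from the known depth z1 of p1, using D/V = z/w."""
    d1 = math.dist(p1, foe)          # image distances from the FOE
    d2 = math.dist(p2, foe)
    s1 = math.hypot(*v1)             # image speeds
    s2 = math.hypot(*v2)
    return z1 * (s1 * d2) / (d1 * s2)

# consistent synthetic data: FOE at the origin, w = 1, image speed V = D*w/z
z2 = depth_from_foe((0.0, 0.0),
                    (2.0, 0.0), (0.2, 0.0), 10.0,   # p1, v1, known z1
                    (3.0, 0.0), (0.6, 0.0))         # p2, v2 -> z2 = 5
```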

Finding the real-world coordinates. Assume the motion is along the camera's optical axis.

5. Object Tracking. Refers to tracking of object motion in a sequence of frames. Given: m objects moving in a scene, and a sequence of n image frames taken of the scene. Find: the trajectory of each object in the image sequence.

Path coherence assumption. Assume: the change of object location is small; the change of scalar velocity is small; the change of moving direction is small. The path coherence function Φ represents a measure of agreement between the derived object trajectory and these motion constraints. Its guiding principles:
- the function value is always positive
- it reflects local absolute angular deviations of the trajectory
- it responds equally to positive and negative velocity changes
- it is normalized

Path coherence function Φ. The trajectory T_i of object i is the sequence of its image points, in vector form T_i = {X_i(1), X_i(2), ..., X_i(n)}. The deviation at frame k is d_i(k) = Φ(X_i(k−1), X_i(k), X_i(k+1)), and the deviation D_i of the entire trajectory of object i is

D_i = Σ_{k=2}^{n−1} d_i(k).

Path coherence function (cont.). For the m trajectories of the m moving objects in the image sequence, the overall trajectory deviation is

D = Σ_{i=1}^{m} D_i,

and the trajectories are chosen so as to minimize D.
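A Sethi-Jain style coherence function can be sketched as a weighted sum of a direction-change term and a speed-change term; the weights w1 = w2 = 0.5 and the toy trajectories are illustrative assumptions, not values from the slides:

```python
import math

def path_coherence(p0, p1, p2, w1=0.5, w2=0.5):
    """Phi(X(k-1), X(k), X(k+1)) = w1*(1 - cos theta) + w2*(1 - 2*sqrt(s1*s2)/(s1+s2)):
    zero for straight, constant-speed motion; positive for turns or speed changes."""
    ax, ay = p1[0] - p0[0], p1[1] - p0[1]
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    s1, s2 = math.hypot(ax, ay), math.hypot(bx, by)   # segment speeds
    cos_t = (ax * bx + ay * by) / (s1 * s2)           # direction change
    return w1 * (1 - cos_t) + w2 * (1 - 2 * math.sqrt(s1 * s2) / (s1 + s2))

def trajectory_deviation(traj):
    """D_i: sum of the coherence deviations over the interior trajectory points."""
    return sum(path_coherence(traj[k - 1], traj[k], traj[k + 1])
               for k in range(1, len(traj) - 1))

smooth = [(0, 0), (1, 0), (2, 0), (3, 0)]     # straight, constant speed: D ~ 0
jerky = [(0, 0), (1, 0), (1, 3), (4, 3)]      # sharp turns, speed changes: D > 0
```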

References:
- Motion Analysis: http://www.icaen.uiowa.edu/~dip/LECTURE/Motion.html
- Lecture: http://css.engineering.uiowa.edu/~dip/LECTURE/lecture.html
- Motion Analysis: http://cmp.felk.cvut.cz/~hlavac/Public/Pu/33PVRleto2003/ch15d.pdf
- Dynamic Motion: http://www.visionlab.sjtu.edu.cn/CV/14.ppt