Effective and Efficient Detection of Moving Targets From a UAV’s Camera


Effective and Efficient Detection of Moving Targets From a UAV's Camera
Sara Minaeian, Jian Liu, and Young-Jun Son
IEEE Transactions on Intelligent Transportation Systems, Vol. 19, No. 2, Feb. 2018
Presented by: Yang Yu {yuyang@islab.ulsan.ac.kr}, March 10, 2018

Overview
- Accurately detect and segment multiple independently moving foreground targets in video taken by a monocular moving camera.
- Camera motion is estimated by tracking background keypoints with pyramidal Lucas–Kanade.
- Foreground segmentation integrates a local motion history function with spatio-temporal differencing over a sliding window.
- Perspective homography is used for image registration, for effectiveness.
- The detection interval is adjusted dynamically.

Framework of Moving Target Detection (1/2)
- Framework for detecting moving targets via a monocular moving camera:
- Compensate the camera motion.
- Subtract the moving background.

Motion Compensation
- Predict a frame in the video by accounting for the motion of the camera.
- (Figure: original frames, their raw difference, and the motion-compensated difference; the second frame is shifted right by 2 pixels.)
- Shifting the frame compensates for the panning of the camera, so there is greater overlap between the two frames.
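A minimal numpy sketch of the idea on this slide (illustrative only, not the paper's implementation): when the camera pans by a known amount, warping one frame by that motion before differencing leaves almost no residual, unlike a naive frame difference.

```python
import numpy as np

def shift_right(frame, px):
    """Simulate camera panning by shifting columns right by px pixels."""
    out = np.zeros_like(frame)
    out[:, px:] = frame[:, :-px]
    return out

# Toy single-channel frames: frame_t1 is frame_t0 panned right by 2 pixels.
frame_t0 = np.arange(36, dtype=float).reshape(6, 6)
frame_t1 = shift_right(frame_t0, 2)

# Naive difference vs. motion-compensated difference
naive_diff = np.abs(frame_t1 - frame_t0)
compensated = shift_right(frame_t0, 2)        # warp t0 by the estimated motion
comp_diff = np.abs(frame_t1 - compensated)

print(naive_diff.sum() > comp_diff.sum())     # → True (compensation shrinks the residual)
print(comp_diff.max())                        # → 0.0 for a pure translation
```

For a real pan the residual is not exactly zero (occluded borders, parallax), which is why the paper thresholds the difference image later.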

Corner Detection max, min  Eigenvalues of M Average grayscale change along direction [u,v]. Direction of rapid change max, min  Eigenvalues of M (max)-1/2 Direction of slow change (min)-1/2

Harris Corner
- Corner response function: R = det(M) − k (trace M)². A corner gives a large positive R (both eigenvalues of M are large).
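A small sketch of the Harris response for a single window (the structure tensor M built from image gradients; k = 0.04 is the conventional constant, and the patches here are toy examples, not from the paper):

```python
import numpy as np

def harris_response(patch, k=0.04):
    """Harris corner response for one window of a grayscale image:
    M is the 2x2 structure tensor of the gradients, R = det(M) - k*trace(M)^2."""
    Iy, Ix = np.gradient(patch.astype(float))
    M = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    return np.linalg.det(M) - k * np.trace(M) ** 2

flat = np.ones((7, 7))                            # uniform patch: no gradients
corner = np.zeros((7, 7)); corner[3:, 3:] = 1.0   # bright quadrant: a corner
print(harris_response(flat))                      # → 0.0 (no change in any direction)
print(harris_response(corner) > 0)                # → True (large positive response)
```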

Good Features To Track (KLT)
- Directly computes the eigenvalues of M; under certain assumptions, these corners are more stable for tracking.
- A keypoint is kept if its smaller eigenvalue exceeds a threshold: min(λ1, λ2) > λ.
- The threshold depends on image resolution and illumination and compensates for part of the noise: a lower bound on λ comes from regions of rather uniform brightness, an upper bound from highly textured regions.
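The Shi–Tomasi criterion on this slide can be sketched in a few lines (the threshold value 0.5 below is a hypothetical choice for these toy patches, not one from the paper): an edge has one near-zero eigenvalue and is rejected, while a corner has two large eigenvalues and is kept.

```python
import numpy as np

def min_eigenvalue(patch):
    """Shi-Tomasi 'good feature' score: the smaller eigenvalue of the
    2x2 structure tensor M; keep the point if min(l1, l2) > threshold."""
    Iy, Ix = np.gradient(patch.astype(float))
    M = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    return np.linalg.eigvalsh(M)[0]   # eigvalsh returns eigenvalues ascending

edge = np.zeros((7, 7)); edge[:, 3:] = 1.0        # gradient in one direction only
corner = np.zeros((7, 7)); corner[3:, 3:] = 1.0   # gradients in both directions
lam = 0.5                                         # hypothetical threshold
print(min_eigenvalue(edge) > lam)     # → False (edge rejected)
print(min_eigenvalue(corner) > lam)   # → True (corner kept)
```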

Sliding Window
- A temporal sliding window of 3 frames, with a gap, compensates the background motion at different scales.
- The extracted keypoints of frame t (a non-constant reference) are tracked over the two successive frames of the window.
- The gap (number of frames in the interval) depends on: the frames-per-second (fps) rate of the video stream R(c), the UAV's altitude A(v), the algorithm's computational complexity O(c), and the UAV's speed S(v).

Optical Flow
- Basic optical flow constraint equation: Ix·u + Iy·v + It = 0, relating the spatial gradients (Ix, Iy), the temporal derivative It, and the displacement (u, v).

Keypoint Matching
- Optical flow via pyramidal Lucas–Kanade (PLK): consider a keypoint's neighborhood and solve an over-constrained system of equations for its displacement.
- Weighted least squares, a local and fast method, yields the keypoint displacement vector (vertical and horizontal displacements) from the spatial gradients and the time-based derivative.
- Keypoints are tracked over larger spatial scales of the pyramid; the initial motion velocity assumptions are refined through its lower levels down to the raw image pixels.
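A single-level Lucas–Kanade step can be sketched as follows (uniform weights for simplicity, where the paper's method uses a weighted version and a pyramid; the synthetic window below is constructed so the true displacement is known):

```python
import numpy as np

def lucas_kanade_step(Ix, Iy, It):
    """One Lucas-Kanade step: stack the brightness-constancy constraints
    Ix*u + Iy*v = -It for every pixel in the window and solve the
    over-constrained system by least squares for the displacement (u, v)."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # N x 2 system matrix
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic 5x5 window: the temporal derivative is generated by a known
# translation (u, v) = (1.0, 0.5), so It = -(Ix*u + Iy*v).
rng = np.random.default_rng(0)
Ix = rng.standard_normal((5, 5))
Iy = rng.standard_normal((5, 5))
It = -(Ix * 1.0 + Iy * 0.5)
u, v = lucas_kanade_step(Ix, Iy, It)
print(round(u, 3), round(v, 3))   # → 1.0 0.5
```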

Image Registration (1/2)
- Transform each frame onto the reference frame based on the camera motion estimate.
- Homography (perspective transformation) estimation: a 3×3 homography matrix relates keypoints through a projective mapping from one plane (frame) to another.
- Filter out moving-foreground keypoints and noise as outliers: random sample consensus (RANSAC) is an iterative method to estimate the parameters of a mathematical model from observed data containing outliers; it defines the global motion by finding the solution with the largest inlier support.

Image Registration (2/2)
- The homography is estimated from K, the homogeneous coordinates of the refined keypoints (inliers): vertical and horizontal position plus an arbitrary scalar (1, the homogeneous coordinate).
- H, the unknown homography matrix, is estimated by solving Ah = 0 using homogeneous linear least squares.
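The homogeneous linear least-squares step (Ah = 0) is the standard direct linear transform: the solution is the right-singular vector of A with the smallest singular value. A minimal sketch, verified against a known ground-truth homography (the matrix H_true below is made up for the test):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography H from >= 4 point pairs by solving A h = 0:
    the null-space of A is the last right-singular vector of its SVD."""
    rows = []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        rows.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]               # fix the scale ambiguity

# Verify on points related by a known homography.
H_true = np.array([[1.2, 0.1, 5.0], [-0.1, 1.2, -3.0], [0.001, 0.002, 1.0]])
src = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]
dst = []
for x, y in src:
    p = H_true @ np.array([x, y, 1.0])
    dst.append((p[0] / p[2], p[1] / p[2]))
H = homography_dlt(src, dst)
print(np.allclose(H, H_true, atol=1e-6))   # → True
```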

Background Elimination
- Compute the new pixel values of the transformed frame after it is warped into frame t.
- Eliminate the background by taking the absolute difference of the two perspective-transformed images.
- Registration error is minimized by using an independent reference image for transforming the successive frames.
- Thresholding removes shadowed regions and creates the silhouette mask of the potential moving foreground; boundary bars are set as background.
- (Figure: registration and warping.)
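The difference-and-threshold step can be sketched as below (the threshold of 25 grey levels is a hypothetical choice, and the boundary handling is the simplest reading of "set boundary bars as background"):

```python
import numpy as np

def foreground_mask(warped_ref, frame, thresh=25):
    """Background elimination sketch: absolute difference between the
    registered (warped) reference frame and the current frame, then
    thresholding to suppress noise/shadow and produce a silhouette mask."""
    diff = np.abs(warped_ref.astype(int) - frame.astype(int))
    mask = (diff > thresh).astype(np.uint8)
    mask[0, :] = mask[-1, :] = 0     # boundary rows set to background
    mask[:, 0] = mask[:, -1] = 0     # boundary columns set to background
    return mask

bg = np.full((8, 8), 100, dtype=np.uint8)
frame = bg.copy()
frame[3:5, 3:5] = 200                # a bright moving target appears
mask = foreground_mask(bg, frame)
print(int(mask.sum()))               # → 4 foreground pixels
```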

Moving Target Segmentation (1/2)
- Differentiate and segment multiple independently moving targets based on the local motion of their detected blobs.
- Gaussian mask: separates the regions of independently moving targets. The kernel size must be large enough to cover most segmented blobs, but not so large that multiple blobs overlap at a time.
- Connected-components analysis: handles camera-motion estimation error by clustering closely moving foreground regions that are potentially parts of a unified target.

Moving Target Segmentation (2/2)
- Morphological operations (erosion and dilation): remove isolated noise and fill holes in segmented blobs belonging to the same unified moving target; edge regions segmented due to slight movement are removed.
- "Large enough" blobs are treated as multiple unified moving foreground targets.
- Local motion history: tracks segmented blobs over time, forming a representation of the overall motion by taking the gradient of the silhouette image over time.
- The motion gradient and orientation parameters of each region estimate the general movement direction of each moving target; a motion segmentation routine separates independently moving targets based on their local motions.
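The morphological clean-up on this slide can be illustrated with a numpy-only sketch (a 3x3 square structuring element and toy masks; production code would use a library such as OpenCV or scipy.ndimage): opening (erode then dilate) removes isolated noise, closing (dilate then erode) fills small holes.

```python
import numpy as np

def erode(mask):
    """Binary erosion with a 3x3 square structuring element."""
    p = np.pad(mask, 1)
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + mask.shape[0], 1 + dx:1 + dx + mask.shape[1]]
    return out

def dilate(mask):
    """Binary dilation with a 3x3 square structuring element."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + mask.shape[0], 1 + dx:1 + dx + mask.shape[1]]
    return out

mask = np.zeros((10, 10), dtype=np.uint8)
mask[1, 1] = 1                       # isolated noise pixel
mask[3:9, 3:9] = 1; mask[6, 6] = 0   # blob with a one-pixel hole
opened = dilate(erode(mask))         # opening removes the noise speck
closed = erode(dilate(mask))         # closing fills the hole
print(opened[1, 1], closed[6, 6])    # → 0 1
```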

Experimental results (1/6) Results of different scenarios captured by UAV: (a) A crowd of 4 people moving together; (b) A crowd of 4 people splitting into two groups and a bicyclist passing by (at faster speed); (c) A group of 5 people scattering.

Experimental results (2/6) Comparison of affine and perspective transformation: (a) Original video frames; (b) Results of affine transformation; (c) Results of the proposed (perspective) method.

Experimental results (3/6) Results of Dataset1, at three frames: t=185; t=329; t=521: (a) Original frames; (b) Optical flow vectors; (c) Detected foreground; (d) Ground-truth blobs

Experimental results (4/6) Comparison with existing methods on Dataset2: (a) Original frame; (b) Multi-layer homography (MLH); (c) Background motion subtraction (BMS), which reconstructs camera motion by interpolation; (d) Particle video (PV); (e) Ground-truth data; (f) Moving-camera background subtraction (MCBS); (g) Segmentation with effective cue (SEC); (h) Results of the proposed method.

Experimental results (5/6)
Quantitative performance analyses compared with other methods that have reported results on the datasets:
- p / n: number of positive (foreground) / negative (background) pixels reported by the detection algorithm
- Tp / Tn: true positives / true negatives in the test frame compared to the ground truth
- Fp / Fn: false positives / false negatives in the test frame
- N: total number of pixels in the test frame, depending on image resolution
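From these confusion counts the usual pixel-wise metrics follow directly (precision/recall/accuracy shown here as a generic sketch; the counts below are hypothetical, not from the paper's tables):

```python
def detection_metrics(tp, tn, fp, fn):
    """Pixel-wise metrics from the confusion counts defined above."""
    n_total = tp + tn + fp + fn          # N: total pixels in the test frame
    precision = tp / (tp + fp)           # of reported foreground, how much is real
    recall = tp / (tp + fn)              # of real foreground, how much is found
    accuracy = (tp + tn) / n_total
    return precision, recall, accuracy

# Hypothetical counts for one 100-pixel test frame
p, r, a = detection_metrics(tp=18, tn=70, fp=2, fn=10)
print(round(p, 2), round(r, 3), a)   # → 0.9 0.643 0.88
```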

Experimental results (6/6) Performance comparison with BMS on parts of Dataset2: (a) Mean and standard deviation of the metrics; (b) Results of applying the two methods to a series of frames, showing the better Recall of the proposed method.

Conclusion
- Detects multiple independently moving targets from a monocular moving camera.
- Keypoints are extracted and tracked into the next two frames in a sliding-window framework.
- Frames are registered onto frame t through a perspective transformation.
- Local motion history is used to separate the independently moving targets.

Thank you for your attention!

Image Matching Using RANSAC
- Given keypoint matches, choose a random subset and calculate the geometric transformation H between the images from it.
- Count the outliers; save the transformation H (and the selected subset of correspondences) if it yields fewer outliers.
- Repeat until a stopping criterion is satisfied; this removes bad correspondences.
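The loop on this slide can be sketched with a deliberately simple motion model (a pure translation standing in for the homography H, so the minimal sample is a single match; the matches and outliers below are synthetic): fit a random minimal sample, count inliers, keep the hypothesis with the largest inlier support, then refit on all inliers.

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=1.0, seed=0):
    """RANSAC sketch: repeatedly fit the model to a random minimal sample,
    count inliers, and keep the hypothesis with the largest inlier support."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best = np.zeros(len(src), bool)
    for _ in range(iters):
        i = rng.integers(len(src))                # minimal sample: 1 match
        t = dst[i] - src[i]                       # hypothesised translation
        inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    # Refit on all inliers to refine the final model
    return dst[best].mean(0) - src[best].mean(0), best

# 8 background matches displaced by (5, -2), plus 2 moving-target outliers
src = [(i, i) for i in range(10)]
dst = [(i + 5, i - 2) for i in range(10)]
dst[3] = (50.0, 50.0); dst[7] = (-9.0, 4.0)       # outliers
t, inliers = ransac_translation(src, dst)
print(round(t[0], 2), round(t[1], 2), int(inliers.sum()))   # → 5.0 -2.0 8
```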

Plane + Perspective Projection
- Plane equation: aX + bY + cZ = 1, i.e. [a b c][X Y Z]^T = 1.
- Rigid motion: [X Y Z]^T = R[Xw Yw Zw]^T + T, with the plane in world coordinates [a b c][Xw Yw Zw]^T = 1.
- Since [a b c][Xw Yw Zw]^T = 1, we may write T = T[a b c][Xw Yw Zw]^T, so [X Y Z]^T = (R + T[a b c])[Xw Yw Zw]^T = A[Xw Yw Zw]^T.
- Elementwise: X = a1 Xw + a2 Yw + a3 Zw, Y = a4 Xw + a5 Yw + a6 Zw, Z = a7 Xw + a8 Yw + a9 Zw.
- Normalizing by Zw and projecting: x = X/Z = (a1 xw + a2 yw + a3)/(a7 xw + a8 yw + a9), y = Y/Z = (a4 xw + a5 yw + a6)/(a7 xw + a8 yw + a9).
- Scale ambiguity: set a9 = 1.

The Homography Transformation
- x' = (ax + by + c)/(gx + hy + 1), y' = (dx + ey + f)/(gx + hy + 1).
- Solution: solve the following linear system in the 8 unknowns (a, b, c, d, e, f, g, h) using 4 point pairs (two equations per pair):
- [xi yi 1 0 0 0 −xi xi' −yi xi'] · [a b c d e f g h]^T = xi' and [0 0 0 xi yi 1 −xi yi' −yi yi'] · [a b c d e f g h]^T = yi', for i = 1…4.
- Example image: http://www.corrmap.com/features/homography_transformation.php
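The 8x8 system above can be solved directly when exactly 4 point pairs are given (the unit-square-to-quadrilateral correspondences below are made up for illustration):

```python
import numpy as np

def homography_4pt(src, dst):
    """Solve the 8x8 linear system from the slide: with the scale fixed
    (bottom-right element = 1), the 8 unknowns a..h follow from 4 point
    pairs, two equations per pair."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Four corners of a unit square mapped to a skewed quadrilateral
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (2, 0.2), (2.2, 1.8), (0.1, 2.0)]
H = homography_4pt(src, dst)
p = H @ np.array([1.0, 0.0, 1.0])           # map the source corner (1, 0)
print(round(p[0] / p[2], 3), round(p[1] / p[2], 3))   # → 2.0 0.2
```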