1
Automated Motion Imagery Data Exploitation for Airborne Video Surveillance
CGI Video Research Team
Dr. He, Zhihai (Henry) - Electrical Engineering
Dr. Palaniappan, Kannappan - Computer Science
Dr. DeSouza, Guilherme N. - Electrical Engineering
Dr. Duan, Ye - Computer Science
2
Outline
Background and Project Overview
Simulation environment setup and test video sets
Moving object detection and geo-location
Video registration and moving object tracking
3-D urban scene modeling from UAV videos
Conclusion and discussion
3
Background and Motivation - Airborne Surveillance Videos
Massive volumes of data (hundreds of hours of video)
A cognitive disaster for human analysts
Need to develop algorithms to aggregate, filter, fuse, and summarize video data and extract important information.
Goal: hundreds of hours of video -> automated video processing -> reviewed by a human analyst within an hour.
(Video courtesy of AFRL)
4
Hierarchical Automated Motion Imagery Data Exploitation
Long-Term Plan (from signal to decision):
Decision - Data Visualization
Knowledge - Activity Mining and Abnormality Detection
Activity - Spatiotemporal Characterization of Objects
Data - Moving Object Extraction and Geo-location; Object Registration, Fusion, and Super-resolution
Signal - Motion Imagery Data
5
Video Summarization and Activity Visualization (Long-Term Plan)
Motion Imagery Data -> Fusion and Super-resolution -> Objects Information Database -> Object Browsing, Search for Objects of Interest, and Visualization
Example search query: "A red vehicle moving southeast at a speed over 80 miles per hour at about 3:00 pm on March 26, 2006."
6
This Project / First Year 2006-2007
3-D scene modeling
Trajectory extraction
Multi-object tracking
Geo-location
Real-time video registration and mosaicking
7
Research and Development Plan
Preparing test video datasets
Algorithm development and refinement
Testing in the simulation environment
Evaluation with flight test videos
Code optimization, speedup, documentation, and transfer
Pipeline: input videos -> Motion Imagery Desktop Toolset -> metadata in a common format
8
Preparing the Test Video Datasets
Simulation test bed
In-house video data collection
Third-party test video datasets
9
UAV Simulation Setup
Provides ground truth for:
Moving object detection and tracking
Geo-location, speed, and trajectory estimation of moving objects
3-D scene reconstruction
10
UAV Simulation Setup – A Close Look
Needed refinements: adjust the relative size of objects; add more structure and background texture.
11
In-House Aerial Video Collection
Platforms: balloon, UMC UAV
Easy access
High-quality video scenes with multiple moving objects
No ground truth / metadata
12
Third-Party Test Video Sets
DARPA test videos (no metadata)
Crystal view test videos (with metadata)
AFRL flight test videos (no metadata, few moving objects)
13
Two Scenarios
High-Altitude Videos:
Dominant global camera motion
Local moving objects occupy a small portion of the scene (< 20%)
Ground object structure is negligible
Low-Altitude Videos:
Scene content change due to the 3-D structure of ground objects (parallax) is significant
Local moving objects become a significant part of the scene
14
Moving Object Detection and Geo-location
15
Reconnaissance and Surveillance in Urban Environments
Limited visibility and resolution
Complex and cluttered scenes
High background activity
16
Motion Analysis and Detection of Regions of Interest (ROIs)
Figure: Optical flow before (a) and after (b) removal of background motion (but before filtering of spurious flows in the image).
17
Motion Analysis and Detection of ROIs
Remove background motion and determine the dominant component in the foreground.
Figure: Histogram analysis of the optical flow - (a) magnitudes and (b) angles.
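As an illustration of this step, here is a minimal sketch that computes dense optical flow, removes a given background (camera) flow field, and histograms the residual magnitudes and angles to pick out the dominant foreground motion. The Farneback flow, the 1-pixel residual threshold, and the bin counts are assumptions for illustration, not the team's actual parameters.

    import cv2
    import numpy as np

    def dominant_foreground_motion(prev_gray, curr_gray, bg_flow):
        """Subtract the estimated background (camera) flow, then histogram the
        residual flow magnitudes and angles to find the dominant moving component."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        residual = flow - bg_flow                      # remove background motion
        mag, ang = cv2.cartToPolar(residual[..., 0], residual[..., 1])
        moving = mag > 1.0                             # ignore near-zero residual flow
        # Histogram analysis: peak magnitude and peak direction of the foreground flow
        mag_hist, mag_edges = np.histogram(mag[moving], bins=32,
                                           range=(0.0, float(mag.max()) + 1e-6))
        ang_hist, ang_edges = np.histogram(ang[moving], bins=36, range=(0, 2 * np.pi))
        peak_mag = 0.5 * (mag_edges[np.argmax(mag_hist)] + mag_edges[np.argmax(mag_hist) + 1])
        peak_ang = 0.5 * (ang_edges[np.argmax(ang_hist)] + ang_edges[np.argmax(ang_hist) + 1])
        return peak_mag, peak_ang, moving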
18
Correlation-based Tracking
Figure: ROI (in red) highlighting the tracked object.
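A minimal sketch of correlation-based tracking via normalized template matching over a local search window; the search radius, matching score, and template handling here are illustrative assumptions rather than the team's implementation.

    import cv2
    import numpy as np

    def track_roi(frame_gray, template, prev_xy, search_radius=40):
        """Search a window around the previous ROI position for the best
        normalized cross-correlation match with the object template."""
        x, y = prev_xy
        th, tw = template.shape
        x0 = max(0, x - search_radius); y0 = max(0, y - search_radius)
        x1 = min(frame_gray.shape[1], x + tw + search_radius)
        y1 = min(frame_gray.shape[0], y + th + search_radius)
        window = frame_gray[y0:y1, x0:x1]
        score = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(score)
        new_xy = (x0 + max_loc[0], y0 + max_loc[1])   # top-left corner of the new ROI
        return new_xy, max_val                        # correlation score as confidence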
19
Preliminary Results
20
Preliminary Results
21
DARPA Sequence: Hollywood
22
Multi-Object Tracking with Optical Flow Analysis
Note: needs further improvement, especially through fusion with the registration-based tracking technique presented next.
23
Video Registration and Multi-Object Tracking
24
Real-Time Registration of Videos
Estimate the global camera motion parameters.
Warp video frames into the same coordinate system to build a mosaic.
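A minimal sketch of the warping step, assuming a homography H_frame_to_mosaic that maps each frame into the reference (mosaic) coordinate system; the simple paste-where-empty blending and the 3-channel image assumption are illustrative choices.

    import cv2
    import numpy as np

    def add_to_mosaic(mosaic, frame, H_frame_to_mosaic):
        """Warp a frame into the mosaic (reference) coordinate system and
        paste it where the mosaic is still empty (assumes 3-channel images)."""
        warped = cv2.warpPerspective(frame, H_frame_to_mosaic,
                                     (mosaic.shape[1], mosaic.shape[0]))
        mask = (warped.sum(axis=2) > 0) & (mosaic.sum(axis=2) == 0)
        mosaic[mask] = warped[mask]
        return mosaic

In practice the frame-to-mosaic transform is obtained by chaining the frame-to-frame global motion estimates described on the next slide.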
25
Real-Time Registration of Videos
Vehicle-camera motion: translation, rotation, zoom, and perspective change.
Global motion equation (8-parameter perspective model) mapping a point [x, y] in frame n to [X, Y] in frame n+1 after camera motion:
X = (a1·x + a2·y + a3) / (a7·x + a8·y + 1)
Y = (a4·x + a5·y + a6) / (a7·x + a8·y + 1)
Need to estimate 8 parameters; theoretically, only 8 equations are required, i.e., 4 point-to-point correspondences (each correspondence gives two equations).
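A hedged sketch of one common way to estimate these 8 parameters from point correspondences: track sparse features between frames and fit a homography with RANSAC, so that correspondences on moving objects are rejected as outliers. The feature detector, tracker, and RANSAC threshold are illustrative choices, not necessarily those used by the project.

    import cv2
    import numpy as np

    def estimate_global_motion(prev_gray, curr_gray):
        """Estimate the 8-parameter perspective (homography) transform between
        consecutive frames from sparse point correspondences."""
        pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                           qualityLevel=0.01, minDistance=7)
        pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                       pts_prev, None)
        good_prev = pts_prev[status.ravel() == 1]
        good_curr = pts_curr[status.ravel() == 1]
        # RANSAC rejects correspondences on moving objects (outliers to global motion)
        H, inliers = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)
        return H, inliers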
26
Real-Time Registration of Videos
Design Goals
Generic: makes no assumption about image content, so the registration algorithm works in a wide range of environments.
Robust to noise and errors.
Low complexity for real-time computation.
27
Real-Time Registration of Videos
Scene decomposition for camera motion estimation:
Static objects -> dominant (global) motion
Moving objects -> local motion
Structural blocks, texture blocks, ... contribute to camera motion estimation with different reliability levels.
28
Moving Objects Detection
The motion of moving objects does not satisfy the global motion equation, so moving objects can be detected with a hypothesis test against this equation.
Assumption: the moving objects are not a significant portion (< 30%) of the video scene.
Processing loop: global motion estimation -> moving object detection and removal -> refined global motion estimation.
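A simplified sketch of this idea: compensate the global motion with the estimated homography, then flag pixels whose residual cannot be explained by the global motion model. A fixed intensity threshold stands in for the hypothesis test described above, and the morphological cleanup kernel is an assumed choice.

    import cv2
    import numpy as np

    def detect_moving_objects(prev_frame, curr_frame, H, residual_thresh=25):
        """Compensate the global camera motion, then flag pixels whose residual
        is too large to be explained by the global motion model."""
        h, w = curr_frame.shape[:2]
        prev_warped = cv2.warpPerspective(prev_frame, H, (w, h))
        residual = cv2.absdiff(curr_frame, prev_warped)
        if residual.ndim == 3:
            residual = residual.max(axis=2)
        mask = (residual > residual_thresh).astype(np.uint8) * 255
        # Morphological cleanup of the foreground mask
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        return mask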
29
Registration Result
30
Multi-Object Tracking
1. Detect moving objects in stabilized frames.
2. Predict locations of the current set of objects.
3. Match predictions to actual measurements.
4. Update object trajectories.
5. Update the image-stabilized reference coordinate system.
Multi-object detection and tracking unit: VGoF registration into a common coordinate system -> moving object detection and feature extraction -> data association (correspondence) -> trajectory update, with object states and context feeding the prediction and the coordinate-system update.
31
Dynamic State Estimation for Tracking
System state, measurements, and the state estimate are linked by the dynamic system, the measurement system, and the state estimator, each subject to noise and uncertainty:
System errors (object or background models are often inadequate or inaccurate): agile motion, distraction/clutter, occlusion, changes in lighting, changes in pose, shadow.
Measurement errors: camera noise, framegrabber noise, compression artifacts, perspective projection.
State components: position, appearance (color, shape, texture, etc.), support map.
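As one concrete instance of the state estimator above, here is a minimal constant-velocity Kalman filter for a tracked object's image position; the process and measurement noise levels are placeholder assumptions.

    import numpy as np

    class ConstantVelocityKalman:
        """Minimal constant-velocity Kalman filter for a tracked object's
        image position; the state is [x, y, vx, vy]."""
        def __init__(self, x, y, dt=1.0, q=1e-2, r=1.0):
            self.x = np.array([x, y, 0.0, 0.0], dtype=float)
            self.P = np.eye(4) * 10.0
            self.F = np.array([[1, 0, dt, 0],
                               [0, 1, 0, dt],
                               [0, 0, 1,  0],
                               [0, 0, 0,  1]], dtype=float)
            self.H = np.array([[1, 0, 0, 0],
                               [0, 1, 0, 0]], dtype=float)
            self.Q = np.eye(4) * q          # system (process) noise
            self.R = np.eye(2) * r          # measurement noise

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]               # predicted position

        def update(self, z):
            y = np.asarray(z, dtype=float) - self.H @ self.x      # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)              # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x[:2]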
32
Motion Detection - Structure and Flux Tensor Approach
Typical approach: threshold trace(J), the trace of the 3-D structure tensor J.
Problem: trace(J) fails to capture the nature of gradient changes and results in ambiguities between stationary and moving features.
Alternative approach: analyze the eigenvalues and associated eigenvectors of J.
Problem: eigen-decomposition at every pixel is computationally expensive for real-time performance.
Proposed solution: the flux tensor, built from the temporal derivative of J.
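A simplified sketch of the flux-tensor trace, which sums the locally averaged squared temporal derivatives of the spatio-temporal gradients (Ixt^2 + Iyt^2 + Itt^2) and needs no per-pixel eigen-decomposition; the Sobel/central-difference derivative filters and the averaging window size are assumptions, not the exact filters used in the project.

    import numpy as np
    from scipy.ndimage import sobel, uniform_filter

    def flux_tensor_trace(frames, spatial_win=5):
        """Trace of the flux tensor: locally averaged squared temporal derivatives
        of the spatio-temporal gradients.  Large values indicate moving structure."""
        vol = np.stack(frames).astype(float)          # (t, y, x) short frame buffer
        Ix = sobel(vol, axis=2)                       # spatial gradients
        Iy = sobel(vol, axis=1)
        It = np.gradient(vol, axis=0)                 # temporal gradient
        Ixt = np.gradient(Ix, axis=0)                 # temporal derivatives of gradients
        Iyt = np.gradient(Iy, axis=0)
        Itt = np.gradient(It, axis=0)
        trace = Ixt**2 + Iyt**2 + Itt**2
        # Spatial integration window (the tensor's local averaging)
        trace = uniform_filter(trace, size=(1, spatial_win, spatial_win))
        return trace[len(frames) // 2]                # response at the centre frame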
33
Motion Detection: Flux Tensor vs. Gaussian Mixture
34
Multi-object Tracking Stages
Probabilistic Bayesian framework
Features used in data association: proximity and appearance
Data association strategy: multi-hypothesis testing
Gating strategies: absolute and relative
Discontinuity resolution: prediction (Kalman filter) or appearance models
Filtering: temporal consistency check and spatio-temporal cluster check
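As a simpler stand-in for the multi-hypothesis association described above, the sketch below gates candidate matches by distance and then solves a one-to-one assignment (Hungarian algorithm); this is an illustrative simplification, not the project's delayed-decision scheme.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def associate(predicted_xy, detected_xy, gate=50.0):
        """Gate candidate matches by distance, then solve a one-to-one
        assignment on the remaining cost matrix."""
        if len(predicted_xy) == 0 or len(detected_xy) == 0:
            return []
        pred = np.asarray(predicted_xy, dtype=float)
        det = np.asarray(detected_xy, dtype=float)
        cost = np.linalg.norm(pred[:, None, :] - det[None, :, :], axis=2)
        cost[cost > gate] = 1e6                      # absolute gating
        rows, cols = linear_sum_assignment(cost)
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 1e6]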
35
Association Strategy
Multi-hypothesis testing with delayed decision: many matches are kept, with evidence-based pruning.
Support for multiple interactions: one-to-one, many-to-one, one-to-many, many-to-many, one-to-none, and none-to-one object matches.
Corresponding low-level object tracking events: segmentation errors, group interactions (merge/split), occlusion, fragmentation, entering objects, exiting objects.
These relationships are maintained in an ObjectMatchGraph.
36
Experimental Results: DARPA ET01 Video, Frame #50
Panels: registered frame, motion detection results, foreground mask, tracking results.
37
Registration and Tracking Results
38
Registration and Tracking Results
39
Registration and Tracking Results - Others
40
Registration and Tracking Results - Occlusion
Before trajectory filtering / after trajectory filtering
41
3-D Scene Reconstruction from UAV Videos
42
Multi-view Image based Modeling
43
Multi-View Image-Based Modeling Using Deformable Surfaces
44
3D Urban Scene Modeling Using Multi-view Aerial Imagery
45
3D Building Reconstruction Using Multi-view Aerial Images
Processing Stages
Feature extraction and matching
Camera pose estimation
Multi-view image-based modeling
Texture-mapped 3-D urban scene visualization
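A hedged sketch of the first two stages (feature extraction/matching and relative camera pose estimation) for a calibrated camera with intrinsic matrix K; ORB features, brute-force matching, and essential-matrix pose recovery are illustrative substitutions for whichever feature and calibration methods the project actually used.

    import cv2
    import numpy as np

    def relative_pose(img1_gray, img2_gray, K):
        """Match features between two aerial views and recover the relative
        camera pose (R, t) given the intrinsic matrix K."""
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(img1_gray, None)
        kp2, des2 = orb.detectAndCompute(img2_gray, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        return R, t, pts1, pts2

The recovered poses and matched points can then feed triangulation and the multi-view deformable-surface modeling stages listed above.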
46
User-Guided Feature Selection & Matching
47
Camera Calibration
48
3D Reconstruction
49
Texture mapping
50
Preliminary Results from Airborne Videos
51
Preliminary Results from Airborne Videos
52
R&D Tasks Completed During the Past 3 Months
Improved the accuracy and robustness of the optical flow estimation and moving object segmentation algorithms.
Solved the problem of long-term registration and tracking over a large region using a dynamic window, along with the related data exchange and management problems.
Developed a multi-path motion estimation and registration scheme that significantly improves registration accuracy.
53
R&D Tasks Completed During the Past 3 Months
Partially solved the long-term drifting error problem.
Refined our multi-object tracking algorithm to deal with noise and 3-D structures.
Improved the accuracy of feature matching and camera calibration for 3-D scene reconstruction.
54
Remaining R&D Tasks
Compare the results of the geo-location algorithm against the ground truth and evaluate its performance.
Evaluate the performance of geo-location with test videos.
Establish trajectory continuity (object ID matching) across moving coordinate systems.
Customize trajectory analysis for airborne video tracking with registration errors, large platform motion, zooming, etc.
Solve the identification correspondence problem for tracking between segments.
Add morphological post-processing filters.
55
Remaining R&D Tasks
Handle image noise, shadows, and 3-D structures.
Evaluate the performance of 3-D scene reconstruction using ground truth provided by our simulator.
Design the visualization interface (mosaic, trajectory, metadata).
Optimize and speed up the code in C/C++.
Design input/output interfaces and data formats to convert our modules into desktop tools for transfer to NGA.
56
Conclusion and Discussion
3-D scene modeling
Trajectory extraction
Multi-object tracking
Geo-location
Real-time video registration and mosaicking