[1] Pixel-Level Fusion of Active/Passive Data for Real-Time Composite Feature Extraction and Visualization
Alan Steinberg and Robert Pack
Space Dynamics Laboratory / Utah State University
Presented at NATO IST-036/RWS-005, Halden, Norway, September 2002

[2] Concept of Pixel-Level Fusion
Active Sensor – LADAR
Passive Sensor – Electro-Optic (EO) Radiation
Fused into One Data Set
(Figure: range data from multi-channel LADAR + passive IR image data → fused range image)
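As a rough, illustrative sketch of this concept (not the authors' implementation), the Python snippet below stacks a co-registered LADAR range image and a passive EO/IR intensity image so that every pixel carries both measurements; the array names and sizes are assumptions.

import numpy as np

def fuse_pixels(ladar_range_m, eo_intensity):
    # Pixel-level fusion: stack range and intensity for each co-registered pixel.
    assert ladar_range_m.shape == eo_intensity.shape
    return np.stack([ladar_range_m, eo_intensity], axis=-1)  # (rows, cols, 2)

# Synthetic example with assumed 512 x 512 imagery:
rng = np.random.default_rng(0)
ladar_range_m = rng.uniform(50.0, 300.0, (512, 512))   # range in metres
eo_intensity = rng.uniform(0.0, 1.0, (512, 512))       # normalized IR radiance
fused_range_image = fuse_pixels(ladar_range_m, eo_intensity)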

[3] Enables Observation Through Partial Obscuration

[4] Employment Concept
(Figure: individual pixels record multiple range returns r1–r5 from cloud layers, upper canopy, lower canopy, and ground clutter)
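A hypothetical illustration of the per-pixel record this diagram implies: a multi-channel LADAR pixel can hold several range returns, and the deepest return is the candidate for the partially obscured surface below the canopy or cloud. The values below are invented.

# One pixel's returns r1..r5 (metres), e.g. cloud, upper canopy, lower canopy, ground clutter.
returns_m = [12.0, 54.3, 57.1, 61.8, 63.2]
ground_candidate_m = max(returns_m)                       # deepest return penetrates the obscurants
obscurant_returns_m = [r for r in returns_m if r < ground_candidate_m]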

[5] Fused LADAR/EO Image 1 (Co-Registered and Georeferenced at the Pixel Level)
(Figure: LADAR image, EO image)

[6] Fused LADAR/EO Image 2 (Co-Registered and Georeferenced at the Pixel Level)
(Figure: LADAR image, EO image)

[7] Orthoprojection of the Two 3D Images Combined in Geographic Space
(Figure: first image, second image)

[8] Profile Views in 3D Viewer
(Figure: first LADAR/EO image; first and second LADAR/EO images combined)

[9] Visualization Concept - Data Volume Exploration and Clutter Removal

[10] 3D Images of Cluttered Environment
(Figure: image plane, vegetation canopy, ground plane, and terrain per DTED or detected ground)

[11] Predict Ground
(Figure: image plane, bottom of canopy layer (terrain + 3 m), ground plane, and terrain per DTED or detected ground)

[12] Slice Volume at Bottom of Canopy
(Figure: image plane, bottom of canopy layer (terrain + 3 m), ground plane, and terrain per DTED or detected ground)
Residual shadowed regions: P_D ≈ 0
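A minimal sketch of the geometric filter described on this slide, assuming a fused 3D point set and a terrain-height lookup (DTED or detected ground): returns above terrain + 3 m are discarded so only the below-canopy volume remains. Function and parameter names are assumptions.

import numpy as np

def slice_below_canopy(points_xyz, terrain_height_at, margin_m=3.0):
    # Keep only returns at or below the bottom-of-canopy surface (terrain + 3 m).
    ground_z = np.array([terrain_height_at(x, y) for x, y, _ in points_xyz])
    keep = points_xyz[:, 2] <= ground_z + margin_m
    return points_xyz[keep]

# Example with a flat assumed terrain at z = 0:
points = np.array([[0.0, 0.0, 12.0],   # canopy return, removed
                   [0.0, 0.0, 1.5]])   # near-ground return, kept
ground_only = slice_below_canopy(points, lambda x, y: 0.0)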

[13] Example of Canopy Slicing
(Figure: oblique EO scene; ground orthoprojected using LADAR with trees removed; residual shadowed region: P_D ≈ 0)

[14] Two Shots Combined – Trees Removed
(Figure: corresponding EO image; residual shadowed region: P_D ≈ 0)

[15] Target Hidden Under Tree Hidden Target

[16] Target Exposed After Tree Removal Exposed Target

[17] Conclusions (1): Pixel-Level Fusion of Active/Passive Data for Real-Time Composite Feature Extraction & Visualization
Pixel-level fusion of LADAR + EO data facilitates target detection & recognition in highly occluded environments
Clutter – tree canopies, clouds, etc. – can be removed through geometric filters

[18] Visualization Implication

[19] Some Visualization Implications
Target Detection & Recognition
>Probabilities of Detection
>Probabilities of Target Presence (prior & posterior)
>Exploitation of Negative Data
Temporal Evolution of Estimation and Expectations
>Out-of-Sequence Data
>Situation Projection
>Expected Updates

[20] Prior & Posterior Detection Probabilities; e.g. Presentation of Negative Data Target Detectability Map – – 0.75 p D (x,t+n|t,A) Prior or Posterior Probability Density (e.g. Track Projection Uncertainty) x = Target state (type & location, etc.) t, t+n = Update times A = Action plan (sensor route, sensor controls, processing controls, etc.) p(x,t+n|t) Expectations Map Likely false alarm Likely missed detections p(x,t+n|t)p D (x,t+n|t,A)

[21] Some Visualization Implications
Target Detection & Recognition
>Probabilities of Detection
>Probabilities of Target Presence (prior & posterior)
>Exploitation of Negative Data
Temporal Evolution of Estimation and Expectations
>Out-of-Sequence Data
>Situation Projection
>Expected Updates

[22] Latency Issues: Out-of-Sequence Data
(Figure: initial estimates of track histories over time from MTI reports only vs. revised estimates of track histories once later imagery reports are fused; candidate destinations are an elementary school and a missile assembly facility)
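One simple (and assumed, not from the briefing) way to accommodate such out-of-sequence reports is to buffer them, re-sort by measurement time, and rebuild the track history from the earliest late report; the toy update rule below just pulls the estimate toward each reported position.

def update_track(state, report, gain=0.5):
    # Toy measurement update: move the estimate toward the reported position.
    return state + gain * (report["position"] - state)

def refilter(initial_state, reports):
    # Rebuild the whole track history in measurement-time order, not arrival order.
    state, history = initial_state, []
    for report in sorted(reports, key=lambda r: r["time"]):
        state = update_track(state, report)
        history.append((report["time"], state))
    return history

# Late imagery report (t = 2) arrives after the MTI report at t = 3:
reports = [{"time": 1, "position": 0.0},
           {"time": 3, "position": 4.0},
           {"time": 2, "position": 1.5}]
revised_history = refilter(0.0, reports)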

[23] Adaptive Information Exploitation Process
Acknowledge uncertainties
Estimate length of time for which the decision can be deferred
Estimate probability of resolving ambiguity in time
>Given current collection plan
>By redirecting current resources
>By adding resources
Estimate cost & net utility of candidate action plans (sketched below)
Select & implement revised plan
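A toy sketch of the final two steps above: score each candidate action plan by its expected net utility (probability of resolving the ambiguity in time, times an assumed value of resolution, minus cost) and pick the best. All numbers and plan names are invented.

candidate_plans = [
    {"name": "current collection plan", "p_resolve_in_time": 0.3, "cost": 0.0},
    {"name": "redirect current resources", "p_resolve_in_time": 0.6, "cost": 2.0},
    {"name": "add resources", "p_resolve_in_time": 0.8, "cost": 5.0},
]
VALUE_OF_RESOLUTION = 10.0   # assumed mission value of resolving the ambiguity in time

def net_utility(plan):
    return plan["p_resolve_in_time"] * VALUE_OF_RESOLUTION - plan["cost"]

best_plan = max(candidate_plans, key=net_utility)   # select & implement the revised plan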

[24] Visualizing Time-Varying Situation Estimation & Expectation (1 of 3): Current Estimate of Present Situation
(Figure: situation timeline under Plan A showing estimated history, estimated present situation, and situation projection against report time; received reports, expected coverage, target selection decision, and target engagement decision)

[25] Visualizing Time-Varying Situation Estimation & Expectation (2 of 3): Current Projection of Future Situation
(Figure: same timeline under Plan A, highlighting the projection of the future situation)

[26] Visualizing Time-Varying Situation Estimation & Expectation (3 of 3): Expected Update of Present Situation
(Figure: same timeline under Plan A, highlighting the expected update of the present situation)

[27] Conclusions
Pixel-level fusion of LADAR + EO data facilitates target detection & recognition in highly occluded environments
>Clutter – tree canopies, clouds, etc. – can be removed over time through geometric filters
Enables visualization of time-varying situation estimation & expectation
>Presentation of detection probabilities (prior & posterior), e.g. presenting negative data
>Presentation of temporal evolution of estimates and of expectations
–Out-of-sequence data
–Situation projection
–Expected updates & information evolution

[28] Thank You Mange Takk!