Next Generation 4-D Distributed Modeling and Visualization of Battlefield

Presentation transcript:

Next Generation 4-D Distributed Modeling and Visualization of Battlefield
Avideh Zakhor, UC Berkeley, September 2004

Participants
- Avideh Zakhor (UC Berkeley)
- Bill Ribarsky (Georgia Tech)
- Ulrich Neumann (USC)
- Pramod Varshney (Syracuse)
- Suresh Lodha (UC Santa Cruz)

Battlefield Visualization
- A detailed, timely, and accurate picture of the modern battlefield is vital to the military
- Many sources of information to build the "picture":
  - Archival data, road maps, GIS, and databases: static
  - Sensor information from mobile agents at different times and locations
  - The scene itself is time varying; moving objects
  - Multiple modalities: fusion
- How to make sense of all of this without information overload?

Visualization Pentagon
- Decision making under uncertainty
- Uncertainty processing/visualization
- 4D modeling/update
- Visualization and rendering
- Tracking/registration

Research Agenda
- Modeling
- Visualization and rendering
  - Mobile situational visualization
  - Augmented virtual environments
- Add the temporal dimension (4D):
  - Tracking of moving objects in scenes
  - Modeling of time-varying objects and scenes
  - Dynamic event analysis and recognition
- Path planning under uncertainty

Acquisition setup for dynamic scene modeling
[Diagram components: rotating mirror, IR line laser, digital camcorder with IR filter, halogen lamp with IR filter, VIS-light camera, PC, sync electronics, reference object for the horizontal line, roast with vertical slices]

Captured IR Frames
Horizontal line scans from top to bottom at about 1 Hz

Video intensity and IR captured synchronously
- IR video stream: frame rate 30 Hz (NTSC)
- VIS video stream: frame rate 10 Hz, synchronized with the IR video stream

Processing steps
- Compute depth at the horizontal line
- Track computed depth values along vertical lines
- Intraframe and interframe tracking
- Dense depth estimation
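The first step above (depth at the horizontal laser line) is, at its core, ray/plane triangulation. The sketch below illustrates that idea for a calibrated camera and a known laser plane; the function name, calibration inputs, and geometry conventions are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def depth_on_laser_line(pixels, K, plane_n, plane_d):
    """pixels: (N, 2) detections of the laser line in the IR image.
    K: 3x3 camera intrinsics; laser plane in camera frame: plane_n . X = plane_d.
    Returns (N, 3) points where each back-projected pixel ray meets the plane."""
    K_inv = np.linalg.inv(K)
    pts_h = np.hstack([pixels, np.ones((len(pixels), 1))])   # homogeneous pixels
    rays = (K_inv @ pts_h.T).T                                # ray directions in camera frame
    t = plane_d / (rays @ plane_n)                            # scale where each ray meets the plane
    return rays * t[:, None]                                  # 3D points; Z column is the depth
```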

Results
[Side-by-side: depth video and color video]

Dynamic Event Analysis
- Video analysis
  - Segmenting and tracking moving objects (people, vehicles) in the scene
  - Determines regions of interest/change and allows for dynamic and rapid modeling
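As one concrete (and deliberately generic) way to obtain the moving-object segmentation described above, a learned background model can flag changed pixels in each frame and yield regions of interest for model updates. The file name and thresholds below are placeholders; this is a stand-in for the idea, not the project's tracker.

```python
import cv2

cap = cv2.VideoCapture("scene.avi")                 # assumed input clip
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                                   # per-pixel foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle noise
    # OpenCV 4.x return signature (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # bounding boxes of moving regions = regions of interest/change for model updates
    regions = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]
```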

Video Scene Analysis: Activity Classification with Uncertainty
- Example activities: sitting, bending, and standing
- The blue pointer indicates the level of certainty in the classifier decision (example frames a, b, c, d)
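One simple way to expose a "level of certainty" alongside a decision, as the pointer on this slide does, is to report the classifier's posterior probability for the chosen activity. The sketch below shows that pattern; the features, training data, and choice of classifier are illustrative assumptions, not the system described here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

ACTIVITIES = ["sitting", "bending", "standing"]

# Placeholder training data: per-frame pose features (e.g., silhouette aspect
# ratio, centroid height) with activity labels 0..2. Real features and labels
# would come from annotated video.
X = np.random.rand(300, 2)
y = np.random.randint(0, 3, 300)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def classify_with_certainty(features):
    p = clf.predict_proba([features])[0]    # posterior over the three activities
    k = int(np.argmax(p))
    return ACTIVITIES[k], float(p[k])       # label plus certainty in [0, 1]
```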

Audio-Enhanced Visual Processing with Uncertainty
[Pipeline: video acquisition → video processing and classification; sound acquisition → audio processing and classification; both feed fusion (with uncertainty), which drives visualization and description generation]
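The fusion box in this pipeline could be realized in many ways. As one hedged illustration, a confidence-weighted linear opinion pool combines the video and audio class posteriors and yields a residual uncertainty for visualization; this sketches the idea only and is not the project's actual fusion rule.

```python
import numpy as np

def fuse(p_video, w_video, p_audio, w_audio):
    """p_*: class posteriors from each modality; w_*: confidence weights in [0, 1]."""
    p = w_video * np.asarray(p_video, float) + w_audio * np.asarray(p_audio, float)
    p /= p.sum()                        # renormalized fused posterior
    residual_uncertainty = 1.0 - p.max()
    return p, residual_uncertainty

# Example: video weakly favors class 0, audio strongly favors class 0.
fused, u = fuse([0.5, 0.3, 0.2], w_video=0.4, p_audio=[0.8, 0.1, 0.1], w_audio=0.9)
```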

AVE: Fusion of 2D Video & 3D Model
- VE: captures only a snapshot of the real world and therefore lacks any representation of the dynamic events and activities occurring in the scene
- AVE approach: uses sensor models and 3D models of the scene to integrate dynamic video/image data from different sources
- Visualize all data in a single context to maximize collaboration and comprehension of the big picture
- Address dynamic visualization and change detection
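The core geometric operation behind AVE-style fusion is projecting each pose-registered video frame onto the 3D scene model (projective texture mapping). A minimal pinhole-camera sketch of that mapping follows; variable names and conventions are assumptions, and production systems do this on the GPU with visibility handling.

```python
import numpy as np

def project_to_image(verts, K, R, t):
    """verts: (N, 3) scene-model vertices in world coordinates.
    K, R, t: intrinsics and pose of the (tracked) video sensor.
    Returns (N, 2) image coordinates, usable as texture coordinates for the frame."""
    cam = R @ verts.T + t.reshape(3, 1)       # world -> camera coordinates
    uv = K @ cam
    uv = uv[:2] / uv[2]                       # perspective divide
    return uv.T
```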

Mobile Situational Visualization System
[Interface screenshot: drawing area, buttons, pen tool]
Mobile team collaboration: example collaborators sharing observations of vehicle location, direction, and speed

Optimal route planning for battlefield risk minimization
[Map from source to goal, with risk legend: high risk, moderate risk, low risk, risk free]
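Risk-minimizing route planning of the kind sketched on this map reduces, in its simplest form, to a shortest-path search over a grid whose cell costs encode risk. The toy Dijkstra sketch below illustrates that formulation; it assumes a reachable goal and is not the project's planning algorithm, which treats uncertainty explicitly.

```python
import heapq

def min_risk_path(risk, start, goal):
    """risk: 2D list of per-cell risk costs (risk-free < low < moderate < high).
    start, goal: (row, col). Returns a minimum-total-risk 4-connected path."""
    rows, cols = len(risk), len(risk[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        r, c = u
        for v in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= v[0] < rows and 0 <= v[1] < cols:
                nd = d + risk[v[0]][v[1]]
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path, node = [], goal               # walk predecessors back to the start
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]
```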

LiDAR Data Classification
[Result panels: using height and height variation; using LiDAR data only (no aerial image); using all five features]
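The comparison on this slide (height/height-variation only, LiDAR only, all five features) is a supervised classification over per-cell LiDAR features. A generic sketch of that setup follows; the feature list, label set, and choice of classifier are assumptions, not the published method (which used parametric classification).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder feature matrix: one row per LiDAR grid cell with columns such as
# [height, height variation, ...]; the slide's full five-feature set is not
# enumerated here. Placeholder labels: e.g., 0 = ground, 1 = vegetation, 2 = building.
features = np.random.rand(1000, 5)
labels = np.random.randint(0, 3, 1000)

clf = RandomForestClassifier(n_estimators=100).fit(features, labels)
predicted = clf.predict(features)        # per-cell class map for visualization
```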

Adaptive Stereo/LiDAR-Based Registration for Modeling Outdoor Scenes
[Aerial view comparing stereo-based and LiDAR-based registration]
- The LiDAR-based approach seems better at turns
- The stereo-based approach captures terrain undulations
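For readers unfamiliar with registration in general, the sketch below shows one textbook building block: a single point-to-point ICP iteration with an SVD pose solve. It is offered only as background and is not the adaptive stereo/LiDAR registration method compared on this slide.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One point-to-point ICP iteration: match src to nearest dst points, then
    solve the best-fit rigid transform (Kabsch/SVD). src: (N, 3), dst: (M, 3)."""
    nn = cKDTree(dst).query(src)[1]           # nearest-neighbor correspondences
    matched = dst[nn]
    mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_d)     # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t, R, t                # transformed source, rotation, translation
```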

Punctuated Model Simplification
Our initial implementation considers planar loops; the mesh containing the loops is a topological 2-manifold.
[Example: simple object, detected loops, "inside/outside" binary tree, simplification path]

Interactions on AVE
- Collaboration with Northrop Grumman
  - Installed v.1 AVE system (8/03) for demonstrations
  - Installed v.2 AVE system (9/04) for demonstrations and an evaluation license
- Tech transfer
  - Source code for LiDAR modeling to Army TEC labs
  - Integration into ICT training applications for MOUT after-action review
- Demos/proposals/talks
  - NIMA, NRO, ICT, Northrop Grumman, Lockheed Martin, HRL/DARPA, Olympus, Airborne1, Boeing

Transitions for 3D Modeling
- Carried out a two-day modeling of Potomac Yard Mall in Washington, DC in December 2003 for the Army Night Vision Lab and GSTI
  - Shipped the equipment ahead of time
  - Spent one day driving around acquiring data
  - Spent half a day processing the data
  - Delivered the model to Jeff Turner of GSTI/Army Night Vision Lab
- Carried out another two-day modeling of Ft. McKenna in Georgia in December 2003 in collaboration with Jeff Dehart of ARL
  - Drove the equipment from DC to Georgia in a van
  - Collected data in one day; processed it in a few days
  - Delivered the 3D model to Larry Tokarcik's group
- In discussion with Harris to transition the 3D modeling architecture/software/hardware
- Invited talk at the registration workshop at CVPR

Technology Transfer on SitVis
- We are continuing work centered on the mobile augmented battlefield visualization testbed with both the Georgia Tech and UNC Charlotte homeland security initiatives.
- Dr. Ribarsky is on the panel developing the research agenda for the new National Visual Analytics Center, sponsored by DHS; mobile situational visualization will be part of this agenda.
- The system is being used as part of the Sarnoff Raptor system, which is deployed to the Army and other military entities.
- In addition, our visualization system is being used as part of the Raptor system at Scott Air Force Base.

Publications (1)
- C. Frueh and A. Zakhor, "An Automated Method for Large-Scale, Ground-Based City Model Acquisition," International Journal of Computer Vision, Vol. 60, No. 1, October 2004.
- C. Frueh and A. Zakhor, "Constructing 3D City Models by Merging Ground-Based and Airborne Views," IEEE Computer Graphics and Applications, November/December 2003.
- C. Frueh and A. Zakhor, "Reconstructing 3D City Models by Merging Ground-Based and Airborne Views," Proceedings of VLBV, Madrid, Spain, September 2003.
- C. Frueh, R. Sammon, and A. Zakhor, "Automated Texture Mapping of 3D City Models With Oblique Aerial Imagery," 2nd International Symposium on 3D Data Processing, Visualization, and Transmission.
- U. Neumann, "Approaches to Large-Scale Urban Modeling," IEEE Computer Graphics and Applications.
- U. Neumann, "Visualizing Reality in an Augmented Virtual Environment," accepted in Presence.
- U. Neumann, "Augmented Virtual Environments for Visualization of Dynamic Imagery," accepted in IEEE Computer Graphics and Applications.

Publications (2)
- U. Neumann, "Urban Site Modeling from LiDAR," CGGM'03.
- U. Neumann, "Augmented Virtual Environments (AVE): Dynamic Fusion of Imagery and 3D Models," VR 2003.
- U. Neumann, "3D Video Surveillance with Augmented Virtual Environments," accepted in SIGMM.
- Sanjit Jhala and Suresh K. Lodha, "Stereo and Lidar-Based Pose Estimation with Uncertainty for 3D Reconstruction," to appear in the Proceedings of the Vision, Visualization, and Modeling Conference, Stanford, Palo Alto, CA, November.
- Hemantha Singamsetty and Suresh K. Lodha, "An Integrated Geospatial Data Acquisition System for Reconstructing 3D Environments," to appear in the Proceedings of the IASTED Conference on Advances in Computer Science and Technology (ACST), St. Thomas, Virgin Islands, USA, November 2004.

Publications (3)
- Amin Charaniya, Roberto Manduchi, and Suresh K. Lodha, "Supervised Parametric Classification of Aerial LiDAR Data," Proceedings of the IEEE Workshop on Real-Time 3D Sensors and Their Use, Washington, DC, June 2004.
- Sanjit Jhala and Suresh K. Lodha, "On-line Learning of Motion Patterns using an Expert Learning Framework," Proceedings of the IEEE Workshop on Learning in Computer Vision and Pattern Recognition, Washington, DC, June 2004.
- Srikumar Ramalingam, Suresh K. Lodha, and Peter Sturm, "A Generic Structure-from-Motion Algorithm for Cross-Camera Scenarios," Proceedings of the OmniVis (Omnidirectional Vision, Camera Networks, and Non-Classical Cameras) Conference, Prague, Czech Republic, May 2004.
- Srikumar Ramalingam and Suresh K. Lodha, "Adaptive Enhancement of 3D Scenes using Hierarchical Registration of Texture-Mapped Models," Proceedings of the 3DIM Conference, IEEE Computer Society Press, Banff, Alberta, Canada, October 2003.

Publications (4)
- Suresh K. Lodha, Nikolai M. Faaland, and Jose Renteria, "Hierarchical Topology Preserving Compression of 2D Vector Fields using Bintree and Triangular Quadtrees," IEEE Transactions on Visualization and Computer Graphics, Vol. 9, No. 4, October 2003.
- Suresh K. Lodha, Krishna M. Roskin, and Jose C. Renteria, "Hierarchical Topology Preserving Simplification of Terrains," Visual Computer, Vol. 19, No. 6, September 2003.
- Suresh K. Lodha, Nikolai M. Faaland, Grant Wong, Amin P. Charaniya, Srikumar Ramalingam, and Arthur Keller, "Consistent Visualization and Querying of Spatial Databases by a Location-Aware Mobile Agent," Proceedings of Computer Graphics International (CGI), IEEE Computer Society Press, Tokyo, Japan, July 2003.
- Christopher Campbell, Michael M. Shafae, Suresh K. Lodha, and Dominic W. Massaro, "Discriminating Visible Speech Tokens using Multi-Modality," Proceedings of the International Conference on Auditory Display (ICAD), pp. 13-16, Boston, MA, July 2003.

Publications (5)
- Amin Charaniya and Suresh K. Lodha, "Speech Interface for Geo-Spatial Visualization," Proceedings of the Conference on Computer Science and Technology (CST), Cancun, Mexico, May.
- William Ribarsky, editor (with Holly Rushmeier), 3D Reconstruction and Visualization of Large Scale Environments, special issue of IEEE Computer Graphics & Applications, December 2003.
- Justin Jang, Peter Wonka, William Ribarsky, and C.D. Shaw, "Punctuated Simplification of Man-Made Objects," submitted to The Visual Computer.
- Tazama St. Julien, Joseph Scoccinaro, Jonathan Gdalevich, and William Ribarsky, "Sharing of Precise 4D Annotations in Collaborative Mobile Situational Visualization," to be submitted to the IEEE Symposium on Wearable Computing.
- Ernst Houtgast, Onno Pfeiffer, Zachary Wartell, William Ribarsky, and Frits Post, "Navigation and Interaction in a Multi-Scale Stereoscopic Environment," submitted to IEEE Virtual Reality 2004.

Publications (6)
- G.L. Foresti, C.S. Regazzoni, and P.K. Varshney (Eds.), Multisensor Surveillance Systems: The Fusion Perspective, Kluwer Academic Press.
- R. Niu, P. Varshney, K. Mehrotra, and C. Mohan, "Sensor Staggering in Multi-Sensor Target Tracking Systems," Proceedings of the 2003 IEEE Radar Conference, Huntsville, AL, May 2003.
- L. Snidaro, R. Niu, P. Varshney, and G.L. Foresti, "Automatic Camera Selection and Fusion for Outdoor Surveillance under Changing Weather Conditions," Proceedings of the 2003 IEEE International Conference on Advanced Video and Signal Based Surveillance, Miami, FL, July 2003.
- H. Chen, P.K. Varshney, and M.A. Slamani, "On Registration of Regions of Interest (ROI) in Video Sequences," Proceedings of the IEEE International Conference on Advanced Video and Signal Based Surveillance, CD-ROM, Miami, FL, July 2003.
- R. Niu and P.K. Varshney, "Target Location Estimation in Wireless Sensor Networks Using Binary Data," Proceedings of the 38th Annual Conference on Information Sciences and Systems, Princeton, NJ, March 2004.

Publications (7)
- L. Snidaro, R. Niu, P. Varshney, and G.L. Foresti, "Sensor Fusion for Video Surveillance," Proceedings of the Seventh International Conference on Information Fusion, Stockholm, Sweden, June 2004.
- E. Elbasi, L. Zuo, K. Mehrotra, C. Mohan, and P. Varshney, "Control Charts Approach for Scenario Recognition in Video Sequences," Proc. Turkish Artificial Intelligence and Neural Networks Symposium (TAINN'04), June 2004.
- M. Xu, R. Niu, and P. Varshney, "Detection and Tracking of Moving Objects in Image Sequences with Varying Illumination," to appear in Proceedings of the 2004 IEEE International Conference on Image Processing, Singapore, October 2004.
- R. Rajagopalan, C.K. Mohan, K. Mehrotra, and P.K. Varshney, "Evolutionary Multi-Objective Crowding Algorithm for Path Computations," to appear in Proc. International Conf. on Knowledge Based Computer Systems (KBCS-2004), December 2004.

Future Work
- Important to make sense of the "world," not just model it or visualize it
- Tons of data are being collected by a variety of sensors all over the globe, all the time
- How to process or digest the data in order to:
  - Recognize significant events
  - Make decisions despite uncertainty, and take actions
- The current MURI is mostly concerned with "presenting" the data to military commanders in an uncluttered way → visualization
- Future work: automatically construct the "big picture" of what is happening by combining a variety of data modalities → audio, video, 3D models, sensors, pictures, ...

Battlefield Analysis
[Diagram: distributed sensors (physical layer) → processing → model/update environment → visualize → analysis/reasoning (recognize events, accomplish tasks, make decisions, take actions); all of this changing dynamically with time]

Outline of Talks
- 9:00 - 9:15   Avideh Zakhor, U.C. Berkeley, "Overview"
- 9:15 - 10:00  Chris Frueh and Avideh Zakhor, U.C. Berkeley, "3D modeling and visualization of static and dynamic scenes"
- 10:00 - 10:45 Ulrich Neumann, U.S.C., "Data Fusion in Augmented Virtual Environments"
- 10:45 - 11:30 Bill Ribarsky, Georgia Tech, "Testbed and Results for Mobile Augmented Battlefield Visualization"
- 1:00 - 1:45   Suresh Lodha, U.C. Santa Cruz, "Uncertainty in Data Classification, Pose Estimation and 3D Reconstruction for Cross-Camera and Multiple Sensor Scenarios"
- 1:45 - 2:30   Pramod Varshney, Syracuse University, "Decision Making and Reasoning with Uncertain Image and Sensor Data"