Slide 1 OneSAF Objective System (OOS) Overview
Marlo Verdesca, Eric Root, Jaeson Munro (SAIC)
Marlo.K.Verdesca@saic.com
11/28/2005
Slide 2 Background
OneSAF consists of two separate program efforts:
- OneSAF Testbed Baseline (OTB): an interactive, high-resolution, entity-level simulation that represents combined arms tactical operations up to the battalion level. Currently supports over 200 user sites. Based on the ModSAF baseline (ModSAF was officially retired as of April 2002).
- OneSAF Objective System (OOS): a new development. A composable, next-generation CGF that can represent from the entity to the brigade level, built on a new core architecture. The need is based on cost savings through replacement of legacy simulations: BBS, OTB, Janus, CCTT/AVCATT SAF. Commonly referred to as "OneSAF".
Slide 3 What is One Semi-Automated Forces (OneSAF) Objective System (OOS)?
A composable, next-generation CGF that can represent a full range of operations, systems, and control processes (TTP) from the entity up to the brigade level, with variable levels of fidelity, supporting applications across the Army M&S domains (ACR, RDA, TEMO).
- Software only, platform independent
- Fielded to: RDECs / Battle Labs, National Guard Armories, Reserve Training Centers, all Active Duty Brigades and Battalions
- A constructive simulation capable of stimulating virtual and live simulations to complete the L-V-C triangle
- Automated, composable, extensible, interoperable
Slide 4 Target Hardware Platform
Target PC-based computing platform: the PC-based computing platform is envisioned as one of the standard development and fielding platforms for the OneSAF Objective System. The hardware identified in this list is compatible with current OneSAF Testbed Baseline supported Linux PC hardware, the WARSIM Windows workstation configuration, and the Army's Common Hardware Platform.
- CPU: 2.8 GHz Xeon / 533 MHz FSB
- Memory: 1 GB DDR at 266 MHz
- Monitor: 21 inch (19.8 in viewable image size)
- Video card: NVIDIA Quadro4 700 XGL / 64 MB RAM
- Hard drives: two 80 GB Ultra ATA/100, 7200 RPM
- Floppy disk drive: 3.5 inch
- NIC: 10/100/1000 Ethernet PCI card
- Network connection: minimum of 10BaseT Ethernet
- CD-RW: 40x/10x/40x
Slide 6 OneSAF Program Schedule
[Timeline chart, FY98 through FY06: OTB V1.0 and V2.0; ORD V1.0 and V1.1; STOC Award; Milestone A and Milestone B/C; OOS Builds A and B; Blocks A through D; OOS V1.0; FOC V2.0; P3I]
7
Slide 7 Line of Sight (LOS) Service in OneSAF
Slide 8 LOS Variants
1. Low-Resolution Sampling (Geometric)
2. High-Resolution Ray-Trace (Geometric)
3. Attenuated LOS (atmospheric, foliage)
4. Ultra-High Resolution Buildings
Slide 9 Low Resolution Sampling & High Resolution Ray-Trace
- Sampled LOS as used in WARSIM (picture courtesy of WARSIM)
- Ray-trace LOS includes strict segment-polygon intersection tests
- Legacy reuse from WARSIM gives OOS the sampled LOS model
- OneSAF enhancements add an exact ray-trace LOS model
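The sampled model can be sketched in a few lines of Python. This is an illustration of the idea only, not the WARSIM implementation; the regular height-field lookup, function names, and sample count are all assumptions:

```python
import numpy as np

def sampled_los(height_at, p0, p1, n_samples=32):
    """Sampled LOS: step along the sight line from p0 to p1 and compare
    the line's height with the terrain height directly beneath each sample.
    height_at(x, y) -> terrain elevation; p0 and p1 are (x, y, z) points."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    for t in np.linspace(0.0, 1.0, n_samples):
        x, y, z = p0 + t * (p1 - p0)      # point on the sight line
        if z < height_at(x, y):          # the line dips below the terrain
            return False                 # blocked
    return True                          # no sample blocked: approximate LOS

# Example on a synthetic ridge: a 50 m hill midway between observer and target.
hill = lambda x, y: 50.0 * np.exp(-((x - 50.0) ** 2) / 200.0)
print(sampled_los(hill, (0, 0, 10), (100, 0, 10)))   # sight line hits the hill
print(sampled_los(hill, (0, 0, 60), (100, 0, 60)))   # sight line clears the hill
```

The sampling resolution trades accuracy for speed, which is exactly why the slide pairs this model with the exact ray-trace variant: thin obstacles between samples can be missed.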
Slide 10 LOS Attenuation
- Compute path loss through canopy (individual trees or a forest area), e.g. 35% LOS
- Compute path loss through atmosphere, e.g. 10% LOS
- The sensor model accounts for attenuation path loss, usually by thresholding combined with a random draw of the LOS result
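The thresholding-plus-random-draw idea can be sketched as follows. The multiplicative combination rule, the 5% floor, and the function names are illustrative assumptions, not the OneSAF sensor model:

```python
import random

def attenuated_los(transmittances, floor=0.05, rng=None):
    """Combine the fractional LOS remaining after each attenuating medium
    along the path (e.g. 0.35 through a forest canopy, 0.90 through haze),
    threshold out near-opaque cases, and resolve the rest with a random draw."""
    rng = rng or random.Random(0)
    fraction = 1.0
    for t in transmittances:          # multiplicative path loss
        fraction *= t
    if fraction < floor:              # hard threshold: treat as fully blocked
        return False
    return rng.random() < fraction    # stochastic LOS result at partial visibility

# A target behind canopy (35% LOS) in light haze (90% LOS):
print(attenuated_los([0.35, 0.90]))
```

Seeding the generator makes the draw reproducible for testing; a live sensor model would use a fresh draw per query.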
Slide 11 LOS inside Ultra High Resolution Buildings (UHRBs)
The UNIVERSITY of NORTH CAROLINA at CHAPEL HILL
Slide 12 Line of Sight (LOS) Query
Basic Query: Given two points, is there a line of sight between them?
Slide 14 Line of Sight (LOS) Query
Basic Query: Given two points, is there a line of sight between them? LOS exists.
Slide 16 Line of Sight (LOS) Query
Basic Query: Given two points, is there a line of sight between them? No LOS.
Slide 17 Hybrid GPU/CPU Approach
- GPU is used to conservatively cull LOS queries:
  - Create a depth map (4K x 4K) from above using an orthographic projection
  - Test line segments against the depth buffer with a reversed depth test
  - Line segments with no pixels passing the depth test have LOS
- CPU performs exact tests using ray casting (double-precision arithmetic) for non-culled queries
- Culling eliminates most of the worst-case (non-intersecting) queries
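The culling step can be imitated on the CPU with a top-down height map standing in for the GPU depth buffer. This is a sketch of the concept only; the real pipeline renders terrain under an orthographic projection and uses hardware depth tests, and the sampling here is an illustrative simplification:

```python
import numpy as np

def cull_queries(depth_map, cell, segments, n_samples=16):
    """Conservative cull: a segment whose samples all clear the per-cell
    maximum terrain height is declared to have LOS; anything else is
    forwarded to the exact CPU ray-cast (the "reversed depth test" idea)."""
    has_los, needs_exact = [], []
    for p0, p1 in segments:
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        blocked = False
        for t in np.linspace(0.0, 1.0, n_samples):
            x, y, z = p0 + t * (p1 - p0)
            i, j = int(y // cell), int(x // cell)   # map cell under this sample
            if z <= depth_map[i, j]:                # sample fails the height test
                blocked = True
                break
        (needs_exact if blocked else has_los).append((tuple(p0), tuple(p1)))
    return has_los, needs_exact

# One row of three 10 m cells with a 40 m wall in the middle cell:
terrain = np.array([[0.0, 40.0, 0.0]])
clear, exact = cull_queries(terrain, 10.0,
                            [((0, 0, 50), (29, 0, 50)),    # flies over the wall
                             ((0, 0, 5), (29, 0, 5))])     # runs into the wall
print(len(clear), len(exact))   # one query proven clear, one sent to the exact test
```

Only the `needs_exact` list reaches the expensive ray-cast, which mirrors why culling away the non-intersecting majority pays off.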
Slide 18
[Pipeline diagram: LOS Queries -> GPU Culling -> CPU 1 Exact Test / CPU 2 Exact Test -> Results; Render Terrain (once) feeds GPU Culling]
The terrain is rendered into the depth buffer.
Slide 19 A set of LOS queries is received.
Slide 20 The first batch of queries is culled using the GPU.
Slide 21 The non-culled queries of batch 1 are tested on the CPU while the GPU culls batch 2.
Slide 22 Conservative GPU Culling
- Render terrain only once for static scenes
- Use GPU memory for fast rendering of dynamic scenes
- Conservativeness leads to "false positives" requiring more CPU tests
- Increased resolution decreases false positives
- Resolution is limited to 4K x 4K on current GPUs; we use multiple buffers to increase the effective resolution
Slide 23 Exact Test Details
- Based on ray casting: the ray traverses the spatial grid, and the triangles in each grid cell are tested
- Parallelizable: we use one thread for each CPU
[Figure: terrain with a uniform grid]
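The grid traversal can be sketched with a 2D DDA-style cell walk (Amanatides-Woo flavor). The cell size, names, and printed example are illustrative; the OneSAF exact test would then intersect the ray with the triangles stored in each visited cell in double precision:

```python
def grid_cells_on_ray(x0, y0, x1, y1, cell=1.0):
    """Walk the uniform-grid cells pierced by the segment (x0,y0)->(x1,y1),
    yielding (i, j) cell indices in order along the ray."""
    dx, dy = x1 - x0, y1 - y0
    i, j = int(x0 // cell), int(y0 // cell)
    i1, j1 = int(x1 // cell), int(y1 // cell)
    step_i = 1 if dx > 0 else -1
    step_j = 1 if dy > 0 else -1
    # Parametric distance to the next vertical / horizontal cell boundary.
    t_max_x = ((i + (step_i > 0)) * cell - x0) / dx if dx else float("inf")
    t_max_y = ((j + (step_j > 0)) * cell - y0) / dy if dy else float("inf")
    t_dx = abs(cell / dx) if dx else float("inf")
    t_dy = abs(cell / dy) if dy else float("inf")
    while True:
        yield i, j
        if (i, j) == (i1, j1):
            return
        if t_max_x < t_max_y:   # the ray crosses a vertical boundary first
            i += step_i; t_max_x += t_dx
        else:                   # ...or a horizontal one
            j += step_j; t_max_y += t_dy

# The exact test only needs to inspect triangles stored in the pierced cells:
print(list(grid_cells_on_ray(0.5, 0.5, 2.5, 1.5)))
```

Because each ray touches only O(cells along the ray) buckets instead of the whole triangle set, per-ray cost stays low, and independent rays parallelize trivially across CPU threads as the slide notes.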
Slide 24 LOS Integration into OneSAF
Slide 25 LOS Integration Process
Process steps: OneSAF/GPU Requirements (SAIC/UNC), OneSAF Technical Report (SAIC), GPU Algorithm Creation (UNC), Execute Unit Test (SAIC/UNC), OneSAF Scenario Creation (SAIC), OneSAF Benchmark Results (SAIC), Integration into OOS (SAIC).
Integration into OOS (SAIC):
- Add several OpenGL DLLs to the ERC libraries
- Place C++ header files for OpenGL among the ERC code
- Create a new directory among the ERC code; set up a new makefile/buildfile so the GPU code builds as its own library
- Add calls to ERC initialization to gather all the triangles and all the features in the entire database and pass them into the GPU initialization
- Replace all original LOS calls with their GPU counterparts
Slide 26 LOS Past Performance
November 2004:
- Baseline: OOS Build 23, Block C
- Database: Joint Readiness Training Center (JRTC), Ft. Polk, LA
- Scenario: 200 total entities; 100 blue tanks vs. 100 red tanks performing LOS while traveling and shooting
- Results: 100x speedup in LOS query; 3x simulation speedup
May 2005:
- Baseline: OOS Build 24, Block D
- Database: Joint Readiness Training Center (JRTC), Ft. Polk, LA
- Scenario: 3,000 total entities; 1,500 blue tanks vs. 1,500 red tanks performing LOS, stationary, not shooting; 33 blue RWAs vs. 33 red RWAs performing LOS while traveling
- Results: 100-200x speedup in LOS query; 10x simulation speedup
Slide 27 LOS Current Performance
August 2005:
- Baseline: OOS Build 24, Block D
- Database: Ft. Hood, Texas
- Scenario: 5,000 total entities; ~2,500 blue tanks vs. 2,500 red tanks performing LOS, stationary, not shooting; 2 blue RWAs vs. 2 red RWAs performing LOS while traveling
- Results: 100-200x speedup in LOS query; 15-20x simulation speedup
Slide 28 LOS Demonstration
Slide 29 Results
- Average time for a standard LOS service call: 1-2 milliseconds
- Average time for a GPU LOS service call: 12 microseconds
- 100-200x speedup in LOS query
- Overall: 20x simulation improvement
Slide 30 Route Planning in OneSAF
Slide 31 Types of Routes in OneSAF
- Direct routes: follow the route waypoints exactly as entered by the user
- Networked routes: follow a linear feature, such as a river or road
- Cross-country routes: use a grid of routing cells that form an implicit network for the A* algorithm to search
Slide 32 Route Planning Algorithms in OneSAF
- OneSAF applies the A* algorithm to routing networks
- The cost function assigns a weight (cost) to each route segment; segment data includes the segment's coordinates, slope, terrain type, and an indication of features
- The routing algorithm evaluates possible route segments based on their cost; a lower cost indicates a more favorable route
- While evaluating a route segment, the cost function may call other ERC services, such as line of sight
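The search described above can be sketched as A* with a pluggable segment-cost callback. The grid, lake obstacle, cost function, and Manhattan heuristic below are illustrative stand-ins for the ERC routing network and services:

```python
import heapq

def a_star(start, goal, neighbors, segment_cost, heuristic):
    """A*: expand nodes in order of g(n) + h(n). segment_cost plays the role
    of the OneSAF cost function (terrain, slope, features, LOS, ...)."""
    open_set = [(heuristic(start, goal), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for nxt in neighbors(node):
            ng = g + segment_cost(node, nxt)
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_set, (ng + heuristic(nxt, goal), ng, nxt, path + [nxt]))
    return None

# 4-connected 5x5 grid with an impassable "lake" column at x == 2, y <= 2.
lake = {(2, 0), (2, 1), (2, 2)}
nbrs = lambda p: [(p[0] + dx, p[1] + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= p[0] + dx <= 4 and 0 <= p[1] + dy <= 4]
cost = lambda a, b: float("inf") if b in lake else 1.0
h = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
route = a_star((0, 0), (4, 0), nbrs, cost, h)
print(route)   # the route detours over the top of the lake
```

An infinite segment cost keeps blocked cells out of the frontier entirely, which is how a feature-aware cost function steers the planner around lakes and buildings.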
Slide 33 Cross Country Routing
- Each page in the runtime database is decomposed into routing cells
- These routing cells form an implicit graph that can be searched using A*
- Only segments requested by A* are computed at runtime
- Example: segment a' has a river and a lake associated with it; segment b' has no features associated with it
Slide 34 Cross Country Routing
[Figure: a generated route from start point to end point, detouring around features such as lakes, rivers, trees, and buildings]
Slide 35 Motivation
- In OneSAF, the route-planning bottleneck is feature analysis (intersection tests), which accounts for over 50% of computation time
- Feature analysis is therefore the obvious candidate as the starting point for performance improvement
- The route-planning algorithm in OneSAF can use parallel feature-analysis computations
Slide 36 GPU-based Route Planning
[Pipeline diagram: terrain features are rendered once into a static feature buffer; the GPU culls the segment set against the feature set, the feature set against the reduced segment set, and the reduced feature set against single segments; the CPU performs exact feature/segment tests on the surviving feature/segment pairs to produce the results]
Slide 37 3 Phases of GPU-based Culling
GPU-based culling proceeds in 3 phases:
1. Cull segments against the full feature set
2. Cull features against the reduced set of segments
3. Cull the reduced feature set against individual segments
Slide 38 Route Planning Integration into OneSAF
Slide 39 Route Planning Integration Process
Process steps: OneSAF/GPU Requirements (SAIC/UNC), OneSAF Technical Report (SAIC), GPU Algorithm Creation (UNC), Execute Unit Test (SAIC/UNC), OneSAF Scenario Creation (SAIC), OneSAF Benchmark Results (SAIC), Integration into OOS (SAIC).
Integration into OOS (SAIC):
- Add several OpenGL DLLs to the ERC libraries
- Place C++ header files for OpenGL among the ERC code
- Create a new directory among the ERC code; set up a new makefile/buildfile so the GPU code builds as its own library
- Add calls to ERC initialization to gather all features in the database and convert our feature class into the UNC feature class
- Modify our A* algorithm to cost multiple segments at once (a batching process)
- Replace all original route planning calls with their GPU counterparts
Slide 40 Route Planning Past Performance
May 2005:
- Baseline: OOS Build 24, Block D
- Database: Joint Readiness Training Center (JRTC), Ft. Polk, LA; ~15K features
- Scenario: four tank platoons traveling cross-country; routes of 25-27 km with 40 total segments per route; full physical models, behavior models, and sensing models
- Results: 10x speedup in feature-analysis computations; 2x simulation speedup
Slide 41 Route Planning Current Performance
August 2005:
- Baseline: OOS Build 24, Block D
- Database: Ft. Hood, Texas; ~50K features
- Scenario: one tank platoon and ~50 Individual Combatants (ICs) tactically traveling through a dense urban environment; routes of 5-8 km with 11 total segments per route; full physical models, behavior models, and sensing models
- Results: 30-50x speedup in feature-analysis computation; 10x simulation speedup
Slide 42 Route Planning Demonstration
Slide 43 Route Planning Results
- Average time for standard feature intersection checking: 68,000 milliseconds
- Average time for GPU feature intersection checking: 2,200 milliseconds
- Average overall time for a standard route planning service call: 45 seconds
- Average overall time for a GPU route planning service call: 4.5 seconds
- Improvements: 30-50x speedup in feature intersection checking; 10x overall simulation speedup
- Greater improvement with a complex urban terrain database or an increased entity count
Slide 44 Collision Detection in OneSAF
Slide 45 Basic Architecture
Basic architecture of collision in OneSAF:
- Two types of collision: entity collides with entity, and entity collides with environment feature
- Collision detection is performed for each entity; medium- and high-resolution entities perform collision detection once per tick (15 Hz)
- Requires that the footprint of the entity be checked for intersections against features with linear, circular, and polygonal geometry
- The OneSAF environment provides a service (get_features_in_area) to assist in detecting collisions with environmental objects
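The per-tick footprint check can be sketched with simple 2D intersection tests. The circular footprint and the two feature shapes below are illustrative assumptions; the real service also handles polygonal geometry and pulls candidates from get_features_in_area first:

```python
import math

def seg_point_dist(a, b, p):
    """Distance from point p to segment a-b (all 2-D tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = 0.0 if dx == dy == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy   # closest point on the segment
    return math.hypot(px - cx, py - cy)

def footprint_collides(center, radius, features):
    """Check a circular entity footprint against linear and circular features,
    in the spirit of OneSAF's per-tick entity/feature collision test."""
    for kind, data in features:
        if kind == "segment" and seg_point_dist(data[0], data[1], center) <= radius:
            return True
        if kind == "circle":
            (cx, cy), r = data
            if math.hypot(cx - center[0], cy - center[1]) <= radius + r:
                return True
    return False

# A tank footprint (radius 3 m) near a wall segment and a tree trunk:
features = [("segment", ((0, 0), (10, 0))), ("circle", ((20, 20), 1.0))]
print(footprint_collides((5, 2), 3.0, features))    # 2 m from the wall: collides
print(footprint_collides((5, 10), 3.0, features))   # clear of both features
```

At 15 Hz per entity these tests add up quickly, which is what motivates culling candidate features before running them.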
Slide 46 Current Status of Collision Detection
- OneSAF continues to finalize collision detection capabilities for version 1.0; the OneSAF Build 24, Block D baseline contained minimal collision detection capabilities
- The SAIC/GPU team incorporated a basic collision detection algorithm into our GPU Build 24, Block D baseline; the incorporated algorithm performs only the exact entity/feature tests (similar to UNC), so the timing results collected are estimates of the true functionality
- Once collision detection is finalized and integrated into OneSAF, the collision detection algorithm will be better defined, providing more accurate timing results when comparing GPU-based algorithms to non-GPU-based algorithms
Slide 47 Proximity & Collision Queries
Geometric reasoning about spatial relationships among objects in a dynamic environment:
- Closest points & separation distance
- Penetration depth
- Collision detection
- Contact points & normals
Slide 48 Applications
- Rapid prototyping: tolerance verification
- Dynamic simulation: contact force calculation
- Computer animation: motion control
- Motion planning: distance computation
- Virtual environments: interactive manipulation
- Haptic rendering: restoring force computation
- Simulation-based design: interference detection
- Engineering analysis: testing & validation
- Medical training: contact analysis and handling
- Education: simulating physics & mechanics
Slide 49 Motivation
Collision and proximity queries can take a significant amount of time (up to 90%) in many applications:
- Dynamic simulation: automobile safety testing, cloth folding, engineering prototyping, avatar interaction, etc.
- Path planning: routing, navigation in virtual environments, strategic planning, etc.
Slide 50 Collision Detection Systems
I-COLLIDE (1995), RAPID (1996), V-COLLIDE (1997), H-COLLIDE (1998), PQP (1999), SWIFT (2000), PIVOT (2001), SWIFT++ (2001), DEEP (2002), CULLIDE (2003), QCULLIDE (2004)
Slide 51 Algorithm
- Object-level pruning (GPU-based PCS computation)
- Subobject-level pruning (GPU-based PCS computation)
- Exact tests (using the CPU)
Slide 52 Real-Time Collision Detection for Avatar Motion
Slide 53 Deformable Models
- Self-collision detection: the first interactive self-collision detection algorithm
- 1.5 orders of magnitude improvement over the state of the art
Slide 54 Real-time Surgical Simulation
- Path planning for a deformable catheter
- 1 order of magnitude performance improvement
Slide 55 3 Stage Algorithm
Stage 1: Entities that can be culled conservatively on the GPU are removed (in the example, entities 1-5 are input and only entities 4 and 5 remain).
Slide 56 3 Stage Algorithm
Stage 2: Features are checked against the remaining entities and conservatively culled on the GPU.
Slide 57 3 Stage Algorithm
Stage 3: The remaining features are checked per entity on the CPU; in the example, entity 5 is the only colliding entity.
Slide 58 3 Stages of Collision Detection
- Each stage reduces the complexity of the next phase
- If features are static, they are rasterized only once for all entity-based occlusion queries
- Dynamic terrains and dynamic features are easily supported by the algorithm, requiring one extra rasterization of the feature set
Slide 59 3 Stages of Collision Detection
- Terrain features are first rendered into a depth buffer at startup
- At runtime, entities are checked against the feature depth buffer, features are checked against entities, and the CPU performs the final exact test
[Pipeline diagram: Render Features (once) -> static feature buffer; Cull Entity Set Against Feature Set (GPU) -> reduced entities; Cull Feature Set Against Single Entity (GPU) -> reduced features per entity; Exact Entity/Feature Tests (CPU) -> results]
Slide 60 Occlusion Queries for Overlap Tests
- A standard feature of current GPUs: when two objects are rendered/drawn, the queries check for overlap
- We draw features and entities with occlusion queries to find the features and entities that overlap
- Drawing a feature or an entity takes only a few microseconds
Slide 61 Collision Detection Integration into OneSAF
Slide 62 Collision Detection Integration Process
Process steps: OneSAF/GPU Requirements (SAIC/UNC), OneSAF Technical Report (SAIC), GPU Algorithm Creation (UNC), Execute Unit Test (SAIC/UNC), OneSAF Scenario Creation (SAIC), OneSAF Benchmark Results (SAIC), Integration into OOS (SAIC).
Integration into OOS (SAIC):
- Add several OpenGL DLLs to the ERC libraries
- Place C++ header files for OpenGL among the ERC code
- Create a new directory among the ERC code; set up a new makefile/buildfile so the GPU code builds as its own library
- Add calls to ERC initialization to gather all features in a given area and convert our feature class into the UNC feature class
- Replace all original collision detection calls with their GPU counterparts
Slide 63 Setting the Stage
OneSAF configuration:
- OneSAF Plan View Display (PVD) with a GPU toggle switch (route calculation time)
- OneSAF SimCore
OneSAF terrain database:
- Ft. Hood, Texas
- Future database: South West Asia (SWA)
Scenario overview:
- Multiple tank platoons and ICs performing LOS, route planning, and collision detection through a complex urban environment
- Full physical models, behavior models, and sensing models
Slide 64 Collision Detection Demonstration
Slide 65 Collision Detection Results
- Average time for a standard collision detection query: 20 milliseconds
- Average time for a GPU collision detection query: 2.1 milliseconds
- Improvements: 10x speedup in collision detection query; 5x simulation speedup
- Greater improvement with a complex urban terrain database or an increased entity count
Slide 66 GPU Performance Improvement Schedule
[Timeline chart, 2004-2005:
- Nov. 2004 DARPA Review (LOS, Blocks C/D): 100x LOS query, 3x simulation speedup
- May 2005 DARPA Review (LOS and route planning, Block D): 100-200x LOS query, 10x simulation speedup; 10x feature intersection, 2x simulation speedup
- Aug. 2005 DARPA Tech (today; LOS, route planning, and collision detection, Block D): 100-200x LOS query, 20x simulation speedup; 30-50x feature intersection, 10x simulation speedup; 10x collision query, 5x simulation speedup]