Scott Randall Thompson, MSCS, MBA, PMP

Presentation transcript:

EXTENDED ANALYTICS FOR JTLS-GO®
Scott Randall Thompson, MSCS, MBA, PMP
srthompson@alionscience.com
12 December 2018

Agenda
M&S Outcome Overview
JTLS-GO Enhancements to Support Outcome Generation
Alion Outcome Analysis Overview and Development Roadmap
Objectives: Provide a high-level, minimally technical overview of outcome data analysis initiatives. Additional information is available upon request outside of the briefing.

High-Level Overview
The M&S process, generally speaking:
M&S tools take in a 'model' and produce an 'outcome'.
Outcomes are usually result tables in a database produced by a run.
For live or virtual events, M&S tools also generate outputs that can then be analyzed using pre-built tools or visualizations.
Extracting analytic results is often a manual process.
Alion is working on automating this extraction and providing visualization of analytic results.

Randomness (Abbreviated)
In case you ever wondered why/how the 'same run' gets 'different results': random number generators! (Well, usually pseudo-random numbers...) These are variables that resolve per run to a number drawn from a distribution.
JTLS-GO handling: attributes of the model have a statistical distribution and range for certain elements. The JTLS design team has several Operations Research-trained people who decide which distribution best represents the stochastic process being implemented.
Examples (sketched in code below):
When computing the direction of a weapon's impact point relative to its aimpoint, a uniformly distributed random variate between 0.0 and 360.0 is drawn. A second random variate is drawn using a Normal distribution based on the Circular Error Probable (CEP) database parameter for the type of weapon or firing system; by the definition of CEP, 50% of weapon impacts fall within the CEP and 50% outside it.
The number of weapon hits needed to sink a ship is drawn from a Weibull distribution. JTLS started at the Naval Postgraduate School, and the professors there insisted that the Weibull distribution be used because they were convinced it best represented the stochastic event of sinking a ship. This means the database builder must provide the Weibull shape and Weibull scale parameters for each Ship Unit Prototype.
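A minimal Python sketch of these kinds of draws, for illustration only. This is not JTLS-GO code; the CEP-to-sigma conversion assumes a circular bivariate normal miss pattern, in which CEP ≈ 1.1774σ:

```python
import math
import random

def impact_direction():
    # Direction of the impact point relative to the aimpoint,
    # uniformly distributed between 0.0 and 360.0 degrees.
    return random.uniform(0.0, 360.0)

def miss_distance(cep):
    # Assumption: each axis of the miss is Normal(0, sigma), with sigma
    # chosen so that half of all impacts fall inside the CEP radius:
    # CEP = sigma * sqrt(2 * ln 2).
    sigma = cep / math.sqrt(2.0 * math.log(2.0))
    return math.hypot(random.gauss(0.0, sigma), random.gauss(0.0, sigma))

def hits_to_sink(shape, scale):
    # Hits needed to sink a ship, from a Weibull distribution with the
    # database-supplied shape and scale (weibullvariate takes scale first).
    return random.weibullvariate(scale, shape)
```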

JTLS-GO Usage Overview
JTLS-GO tool usage today (from JTLS-GO materials):
Many organizations perform OPLAN evaluation.
Identical operational scenarios may be repeated many times with a few minor 'model' changes in order to evaluate alternative outcomes.
Analytic use of JTLS demands that the characteristics and behaviors of all modeled capabilities accurately reflect real-world parameters; the results need to be accurate to allow meaningful comparison.
Alion's understanding of JTLS-GO usage:
Most or all JTLS-GO runs are human-in-the-loop.
Data is saved at each checkpoint, and the end state is made available in the AAR tables/tool.
JTLS-GO runs do not occur in parallel (random seeding, saved tables, etc.).

Automated Post-Processing: Alion's Concept
Current state (M&S outcome, manual post-processing by the analyst):
Why did I receive this M&S result from this single run?
Why did my plan succeed in meeting objectives?
Why did the results fail to meet objectives?
Future state (M&S outcomes, automated post-processing, new analyst insights from built-in analysis):
I understand the cause of this M&S result.
I know why my plan succeeded in meeting objectives.
I know why the results failed to meet objectives.
I know what modifications to make to progress toward meeting objectives.

Benefits of These Capabilities
Higher-level benefits:
Scenario validation ("IVV")
Ability to establish a scenario baseline
Ability to pre-vet scenarios before live/virtual events
Specifics:
Automate standard statistical reporting so that analysts can parse data faster
Automate a standard set of analysis use cases of interest to most or all M&S scenarios
Use advanced techniques to determine causal factors
Provide additional visualization (map, histograms, bar charts, selection of additional reports to be generated, analytic dashboard)
Scenario-specific analysis can also be developed, along with other use cases that may be of interest (e.g., outliers)

JTLS-GO Tool Enhancements
JTLS-GO version 5.0 supports a live/virtual run with orders:
Runs have to execute 'serially' due to random number seeding; multiple runs cannot be launched automatically.
This means generating N=30* runs takes a really long time. Under ideal conditions, one run of a given scenario takes ~12-14 hours for 3 game days**, so conducting 30 runs to achieve a statistically significant outcome takes about 15-16 days.
Alion's JTLS-GO analytic enhancements in 5.1***:
The seed is changed on multiple runs at launch so they can execute in parallel, 3 runs at a time.
With this throughput, N=30 can be achieved in an estimated 5 days instead of 15 (see the sketch below).
AAR data will be saved for each run before the process continues to the next run, so the enhanced capability provides data for all 30 runs.
*N=30 rule confirmed by the t distribution
**Korean peninsula 'demo' scenario provided
***Final release number pending
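A hypothetical sketch of the seed handling and throughput arithmetic described above; launch_run, its argument, and the batching mechanism are illustrative assumptions, not the actual JTLS-GO launch interface:

```python
from concurrent.futures import ThreadPoolExecutor

HOURS_PER_RUN = 13  # midpoint of the ~12-14 hour estimate per run
TOTAL_RUNS = 30     # N=30 for a statistically significant outcome
PARALLEL = 3        # concurrent runs enabled by the 5.1 enhancement

def launch_run(seed: int) -> None:
    # Placeholder: launch one JTLS-GO run with its own random seed,
    # then save its AAR data before the slot is reused.
    ...

# Each run gets a distinct seed so parallel runs stay independent.
with ThreadPoolExecutor(max_workers=PARALLEL) as pool:
    list(pool.map(launch_run, range(TOTAL_RUNS)))

# Serial:   30 runs x 13 h = 390 h, about 16 days.
# Parallel: 10 batches x 13 h = 130 h, about 5.4 days.
print(f"serial ~{TOTAL_RUNS * HOURS_PER_RUN / 24:.1f} days, "
      f"parallel ~{TOTAL_RUNS / PARALLEL * HOURS_PER_RUN / 24:.1f} days")
```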

JTLS-GO Analyzing Data Outcomes
Alion approach:
Ingest outcomes from M&S tools, process them, and apply use cases.
Attempt to provide 'generic' use cases and reporting of value; provide scenario-specific use cases as needed.
Generate reports.
N=30 runs provides enough data to analyze some statistical items for several use cases (BLUE objectives achieved, RED objectives achieved); more can be done if needed (outliers, more complicated use cases). A sketch of this kind of reporting follows below.
Analytic capabilities:
Automated standard statistical reporting
Advanced techniques to determine causal factors
Analytic dashboard to assist analysts
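A minimal sketch of the kind of standard statistical reporting this enables, assuming one outcome metric per run has been pulled from the AAR tables. The data here is synthetic and purely illustrative:

```python
import random
import statistics

# Synthetic stand-in for a per-run outcome metric across N=30 runs,
# e.g. the fraction of BLUE objectives achieved in each run.
random.seed(1)
blue = [random.gauss(0.70, 0.05) for _ in range(30)]

mean = statistics.mean(blue)
sd = statistics.stdev(blue)
# 95% confidence interval using the t critical value for 29 degrees of
# freedom (t ≈ 2.045), consistent with the N=30 rule noted earlier.
half_width = 2.045 * sd / len(blue) ** 0.5
print(f"BLUE objectives achieved: {mean:.3f} ± {half_width:.3f} (95% CI)")
```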

Development Roadmap
Dec 2018: Alpha version (automated process, basic statistical reporting)
Apr 2019: Beta version
Jul 2019: Version 1.0, first full demonstration (1.0 GUI available; development GUI available now)
Feb 2020: Version 1.1, fully operational
(Note: only using the Korean Peninsula scenario)

Next Major Steps
Capability development to IOC/FOC
Scenario ingestion (partial/full automation)
Additional use case development
Transition to a classified environment to run classified models

QUESTIONS? srthompson@alionscience.com