Decision Analysis and Its Applications to Systems Engineering
Presented to the Hampton Roads Area International Council on Systems Engineering (HRA INCOSE) chapter and the Hampton Roads Society of Cost Estimating and Analysis (SCEA) chapter
Newport News, Virginia, Monday, November 17, 2009
This document is confidential and is intended solely for the use and information of the client to whom it is addressed.
Analysis of Decision Algorithms – A Specification of Performance
Dr. Swapan K. Sarkar, Booz Allen Hamilton, McLean, Virginia

1 Problem Assumptions
Quantify the Performance of Decision Algorithms used in Combat ID:
–Potentially 400 Air Contacts
–Evaluate each Contact and Determine its Identity (e.g., Friendly or Hostile) based on:
Electronic Support Measures (ESM)
Non-Cooperative Target Recognition (NCTR)
Performance is Defined as:
–Type I error: the Probability of False Alarm (declaring an Air Contact Hostile when in fact it is Friendly)
–Type II error: the Probability of Missed Detection (declaring an Air Contact Friendly when in fact it is Hostile)

2 Testing the Decision Algorithm
The Combat ID System is the Integration of a number of Processes:
–ESM Detection
–NCTR Detection
–Correlation
–Tracking
–Decision Algorithm
–Reporting
Build the Test such that only the Decision Algorithm is Tested
Set up the Test to Quantify Type I and Type II error
[Block diagram: ESM Detection, NCTR Detection, and Other Sensors feed Correlation / Track Initiation, then Tracking / State Propagation, then the Decision Algorithm (drawing on an A-Priori Library), then Reporting / User Interface]

3 Hypothesis Testing
Fundamentally, a Decision is the Test of a Hypothesis:
–Is the contact Hostile or Friendly?
By accepting this Concept:
–A Large and Well Tested Body of Statistics becomes available
–Decision Algorithm Performance can be Compared against Optimality
–Allows Validation of the Decision Algorithm:
Against a Design Specification
For Comparison between Contending Algorithms

4 Optimal Decisions are Bayesian
The Bayes Optimal Decision Algorithm provides:
–An Upper Bound on Performance
–Insight into Establishing the Test: How much Information is needed to conduct the Test
It is Optimal since it:
–Maximizes the Separation between the Distributions (the Distributions of the Friendly and Hostile Aircraft Parameters)
–Minimizes the Expected Error
–Can be designed for Gaussian and Non-Gaussian Distributions

5 Introduction
Consider a Binary System where:
–The Observation is a parameterized measurement z corresponding to either a Friendly Target or a Hostile Target
–The Range of z is the Observation Space
This Reduces the Test to deciding between two hypotheses:
–H0: the null hypothesis
–H1: the alternative hypothesis
The accepted Hypothesis Represents Truth based on the Measurement z

6 Modeling the Observation Space
P(Hi | z): the Probability that Hi was the true Hypothesis given a measured Observation z
The Correct Hypothesis is then the one corresponding to the maximum of the m probabilities
The Decision Rule will be to Choose H0 if:
P(H0 | z) > P(H1 | z), P(H2 | z), …, P(Hm | z)
For the Binary case, the rule becomes: Choose H1 if P(H1 | z) > P(H0 | z), and Choose H0 otherwise
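Below is a minimal Python sketch of this maximum-posterior rule; the posterior values are invented for illustration.

```python
# Minimal sketch of the maximum a posteriori decision rule (illustrative values).
import numpy as np

def map_decide(posteriors):
    """Choose the hypothesis Hi with the largest posterior P(Hi | z)."""
    return int(np.argmax(posteriors))

# Binary example: P(H0 | z) = 0.3, P(H1 | z) = 0.7 -> declare H1
print(map_decide([0.3, 0.7]))  # prints 1
```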

7 Decision Outcomes
Binary Hypothesis Testing has Four Possible Outcomes:
–Say H0 and the null hypothesis is true
–Say H1 and the alternative hypothesis is true
–Say H1 and the null hypothesis is true
–Say H0 and the alternative hypothesis is true
The Third Outcome is a Type I error, referred to as a False Alarm (PF)
The Fourth Outcome is a Type II error, referred to as a Missed Detection (PM)
The Probability of Detection is PD = 1 - PM
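A short sketch of how these four outcomes translate into estimated error rates; the truth and declaration data below are invented (0 = Friendly/H0, 1 = Hostile/H1).

```python
# Estimating P_F (type I), P_M (type II), and P_D from truth/declaration pairs.
import numpy as np

truth    = np.array([0, 0, 0, 1, 1, 1, 1, 0])   # illustrative ground truth
declared = np.array([0, 1, 0, 1, 0, 1, 1, 0])   # illustrative algorithm output

friendly = truth == 0
hostile  = truth == 1

P_F = np.mean(declared[friendly] == 1)   # false alarm: say Hostile, truth Friendly
P_M = np.mean(declared[hostile]  == 0)   # missed detection: say Friendly, truth Hostile
P_D = 1.0 - P_M                          # probability of detection

print(P_F, P_M, P_D)
```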

8 Consequences
Decision Making Involves:
–Consequences
–Associated Costs
The Consequences of one Decision may not be the same as the Consequences of a different Decision
In the context of Contact Identification, the Consequences of Correctly Identifying a Hostile Contact are different from those of Failing to Identify it

9 Objective
Cij: the cost associated with making decision Di when the true hypothesis is Hj
P(Di, Hj): the joint probability that one says Di when in fact Hj is true
The decision criterion which minimizes the probability of error is the maximum a posteriori (MAP) test: choose the hypothesis Hj that maximizes the posterior probability P(Hj | z)
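A hedged sketch of the cost-weighted version of this criterion: with cost matrix Cij, the minimum-expected-cost decision is an argmin over C times the posterior vector, and with 0/1 costs it reduces to the MAP test above. The cost values here are invented.

```python
# Bayes-risk decision with cost matrix C[i, j] = cost of deciding Di when Hj is true.
import numpy as np

C = np.array([[0.0, 10.0],   # say Friendly: no cost if Friendly, high cost if Hostile
              [1.0,  0.0]])  # say Hostile: small cost if Friendly, no cost if Hostile

def bayes_decide(posteriors, C):
    """Choose the decision Di minimizing the expected cost sum_j C[i, j] * P(Hj | z)."""
    return int(np.argmin(C @ posteriors))

# Even with P(Friendly | z) = 0.8, the costly missed detection drives the decision.
print(bayes_decide(np.array([0.8, 0.2]), C))  # prints 1 (say Hostile)
```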

10 Likelihood Ratio Test
The Associated Decision Rule Becomes the Likelihood Ratio Test:
Λ(z) = p(z | H1) / p(z | H0), compared against a threshold η
MAP Test: choose H1 if Λ(z) > P(H0) / P(H1), and H0 otherwise
This is Known as the Optimal Observer since it:
–Minimizes the average error, and as such
–Is an upper bound on the performance of a decision algorithm for a given set of parameters
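A minimal sketch of this test for a scalar Gaussian measurement; the means, variance, and priors are invented stand-ins for the Friendly and Hostile parameter distributions.

```python
# Likelihood ratio test with the MAP threshold eta = P(H0) / P(H1).
from scipy.stats import norm

mu0, mu1, sigma = 0.0, 2.0, 1.0   # assumed Friendly (H0) and Hostile (H1) means
P0, P1 = 0.5, 0.5                 # assumed prior probabilities

def lrt(z):
    """Declare H1 if Lambda(z) = p(z | H1) / p(z | H0) exceeds eta."""
    Lambda = norm.pdf(z, mu1, sigma) / norm.pdf(z, mu0, sigma)
    eta = P0 / P1
    return 1 if Lambda > eta else 0

print(lrt(0.4), lrt(1.6))  # prints 0 1
```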

11 Bayes Classifier for Normal Distributions
For the Gaussian case, after some Algebraic Manipulation, the log likelihood score for aircraft type i reduces (up to a constant) to:
gi(z) = -1/2 (z - Mi)' Σi^-1 (z - Mi) - 1/2 ln|Σi|
–z: the Measured Parameters
–Mi: the Mean Value of the Parameters for Aircraft i
–Σi: the Covariance of the Parameters for Aircraft i
–For n Aircraft Types: Take the Maximum Score as the Most Likely Aircraft Type
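The sketch below scores a measurement against per-type Gaussian models and takes the maximum, as the slide describes; the means and covariances are invented two-dimensional examples.

```python
# Gaussian (Bayes) classifier: score each aircraft type by its log likelihood.
import numpy as np

def log_likelihood(z, M, Sigma):
    """Log of the multivariate normal density N(z; M, Sigma)."""
    d = z - M
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (d @ np.linalg.solve(Sigma, d) + logdet + len(z) * np.log(2 * np.pi))

# Two illustrative aircraft types with distinct parameter means.
types = [(np.array([1.0, 0.0]), np.eye(2)),
         (np.array([4.0, 3.0]), np.eye(2))]

z = np.array([3.5, 2.8])                        # measured parameters
scores = [log_likelihood(z, M, S) for M, S in types]
print(int(np.argmax(scores)))                   # prints 1: most likely aircraft type
```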

12 Test Set Design
The Test Consists of a Set of Friendly and Hostile Aircraft:
–The Sets Overlap in Allegiance and Nationality
–Care must be taken such that there is no Ambiguity in the Target Set
Example:
–Friend 1: { }
–Friend 2: { }
–Hostile 1: { }
Aircraft that have Emitters:
–Emitters: { }
The Test Set of Friendly Aircraft is defined as the Union of Friend 1 and Friend 2, intersected with the Set of Emitters:
–Friendly: { }
The Test Set of Hostile Aircraft is the Intersection of the Hostile 1 and Emitter Sets:
–Hostile: { }

13 Test Set Design (contd.)
Enforce a Condition of No Ambiguity:
–There is no Overlap in the Target Set
–Test Friendly: { }
–Test Hostile: {5 6 16}
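The set construction of the last two slides is a direct exercise in unions and intersections. In the sketch below only the Test Hostile result {5, 6, 16} comes from the slide; every other aircraft ID is invented to make the example run.

```python
# Test set construction with the no-ambiguity check (IDs largely invented).
friend_1  = {1, 2, 3}
friend_2  = {3, 4, 7}
hostile_1 = {5, 6, 9, 16}
emitters  = {2, 3, 4, 5, 6, 16}

test_friendly = (friend_1 | friend_2) & emitters   # union of friends, emitters only
test_hostile  = hostile_1 & emitters               # hostiles that carry emitters

assert not (test_friendly & test_hostile)          # no ambiguity: no overlap
print(test_friendly, test_hostile)                 # {2, 3, 4} {16, 5, 6}
```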

14 Design of Experiment
The Decision Algorithms are presented with the Parametric Measurement data
Each Measurement is Uniquely Identified
Identification allows the Decision Algorithms to:
–Accumulate Information, Correct the Parametric Data, and Present an Inferred Identity of the Track
The Evaluation of Error was conducted by building a Set of Track Reports from the Test Sets:
–Enough Parametric Data was given that the Algorithms could infer the Track Identity
–"Enough" means that, for Near Normal Distributions, the Separability between the Sets is close to the desired Type I error
–This is the (1 minus the Type I error) quantile of the Normal Inverse Cumulative Distribution, where the Expected Value and Variance came from the Bayes Classifier (see the sketch below)
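A sketch of the quantile calculation referenced above: for a desired type I error, take the (1 - alpha) point of the normal inverse CDF. The mean and standard deviation are illustrative stand-ins for the values the Bayes Classifier would supply.

```python
# Separability threshold at the (1 - alpha) quantile of a near-normal score.
from scipy.stats import norm

alpha = 0.05               # desired type I error (5%, as in the Discussion slide)
mu, sigma = 0.0, 1.0       # assumed mean / std from the Bayes Classifier

threshold = norm.ppf(1 - alpha, loc=mu, scale=sigma)
print(threshold)           # ~1.645 for the standard normal
```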

15 The Test
Two Tests of 1000 Monte Carlo Generated Tracks:
–Each Track was Randomly Generated from the appropriate Test Set
–Each Track, whether Test Hostile or Test Friendly, had 7 types of ESM events and 3 types of NCTR events
–A Track Report was composed of 3 ESM Reports and 1 NCTR Report
–A set of ESM Event Parameters was generated using a Hypergeometric Distribution
–The Distributions of the ESM and NCTR Events were generated via Monte Carlo from the A-Priori Library
–Target Truth was compared with the Algorithm Declaration
–Scores were determined for Target Type, Allegiance, and Nationality
–Scores were assigned to Target Type
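The sketch below is one loose reading of this generation scheme, not the authors' harness: each Monte Carlo track gets 3 ESM events and 1 NCTR event, with a hypergeometric draw standing in for the ESM parameter selection. All population sizes and type counts are assumptions.

```python
# Illustrative Monte Carlo track-report generation (all parameters invented).
import numpy as np

rng = np.random.default_rng(17)
N_TRACKS, ESM_TYPES, NCTR_TYPES = 1000, 7, 3

def make_track_report(is_hostile):
    # Hypergeometric stand-in for ESM parameter selection: 7 matching records
    # in a population of 20, sampling 3 events without replacement.
    n_matching = rng.hypergeometric(ngood=7, nbad=13, nsample=3)
    return {"truth": "Hostile" if is_hostile else "Friendly",
            "esm": rng.integers(0, ESM_TYPES, size=3),   # 3 ESM events per report
            "nctr": rng.integers(0, NCTR_TYPES),         # 1 NCTR event per report
            "matching_esm": n_matching}

tracks = [make_track_report(rng.random() < 0.5) for _ in range(N_TRACKS)]
print(tracks[0])
```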

16 Track Scoring
The Track Output was Scored as a Simple Binomial:
–If the Target Type was Correct, the Algorithm received a Score of 1
–Note: if the Algorithm failed to Declare a Target Type, that was considered an Error
The Type I error is estimated as the Fraction of Errors over the Samples: (Number of Errors) / N, where N is the Number of Samples
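A sketch of this binomial scoring with invented per-track scores; the binomial standard error is added here as a natural companion statistic, not something the slide states.

```python
# Binomial track scoring: error fraction over N samples, plus its standard error.
import numpy as np

scores = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1])    # illustrative 0/1 track scores
N = len(scores)

alpha_hat = 1.0 - scores.mean()                      # estimated type I error
std_err = np.sqrt(alpha_hat * (1 - alpha_hat) / N)   # binomial standard error
print(alpha_hat, std_err)
```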

17 Results
The Probabilities of Type I and Type II error (smaller is better):

Algorithm           Type I error    Type II error
Bayes Classifier    0.05            0.02
A
B                   0.79
C
D

Algorithm C is the Best Algorithm:
–While it is not statistically better than Algorithm D, its Performance is significantly better than that of Algorithms A and B
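One way the "not statistically better" claim could be checked is a two-proportion z-test on the error counts over the N = 1000 tracks; the error counts below are invented, since the table's values did not survive the transcript.

```python
# Two-proportion z-test comparing two algorithms' error rates (counts invented).
import math

def two_proportion_z(errors_a, errors_b, n):
    p_a, p_b = errors_a / n, errors_b / n
    p = (errors_a + errors_b) / (2 * n)       # pooled error proportion
    se = math.sqrt(p * (1 - p) * (2 / n))
    return (p_a - p_b) / se

z = two_proportion_z(errors_a=52, errors_b=47, n=1000)
print(abs(z) < 1.96)   # True -> no significant difference at the 5% level
```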

18 Algorithmic OC
The Operating Characteristic (OC) is the Trade-Off between the Probability of False Alarm and the Probability of Detection
The OC Curve gives a complete Characterization of the Algorithmic Performance
It is Based on the Target Type I and Type II errors
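For the scalar Gaussian test sketched earlier, the OC curve can be traced in closed form by sweeping the decision threshold; the parameters remain the same invented values.

```python
# OC (ROC) curve for a scalar Gaussian likelihood ratio test: sweep the threshold.
import numpy as np
from scipy.stats import norm

mu0, mu1, sigma = 0.0, 2.0, 1.0
thresholds = np.linspace(-4, 6, 201)

P_F = 1 - norm.cdf(thresholds, mu0, sigma)   # false alarm: exceed threshold under H0
P_D = 1 - norm.cdf(thresholds, mu1, sigma)   # detection: exceed threshold under H1

for pf, pd in list(zip(P_F, P_D))[::50]:     # print a few operating points
    print(f"P_F = {pf:.3f}  P_D = {pd:.3f}")
```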

19 Discussion
Algorithms A and B were penalized for NOT disclosing a Target Type
Algorithms C and D DID disclose correctly a significantly higher percentage of the time
The Number of Parameters was selected such that the Bound on the Type I error was 5%:
–There was enough Information available to make a Good Disclosure 95% of the time
Nationality and Allegiance do not measure Type I or Type II error:
–There was Ambiguity between the Target Type and Nationality or Allegiance
Improvements could be made in all Algorithms for the Declaration of Target Type
–It is Apparent that the Bayes Classifier had a Type I error of 5% and a Type II error of 2%