
1 Decision Analysis and Its Applications to Systems Engineering
Analysis of Decision Algorithms – A Specification of Performance
Dr. Swapan K. Sarkar, Booz Allen Hamilton, McLean, Virginia
Presented to the Hampton Roads Area International Council on Systems Engineering (HRA INCOSE) chapter and the Hampton Roads Society of Cost Estimating and Analysis (SCEA) chapter
Newport News, Virginia, Monday, November 17, 2009
This document is confidential and is intended solely for the use and information of the client to whom it is addressed.

2 Problem Assumptions
Quantify the Performance of the Decision Algorithms used in Combat ID:
– Potentially 400 Air Contacts
– Evaluate each Contact and Determine its Identity (e.g., Friendly or Hostile) based on:
  Electronic Support Measure (ESM)
  Non-Cooperative Target Recognition (NCTR)
Performance is Defined as:
– type I error: the Probability of False Alarm (say the Air Contact is Hostile when in fact it is Friendly)
– type II error: the Probability of Missed Detection (say the Air Contact is Friendly when in fact it is Hostile)

3 Testing the Decision Algorithm
The Combat ID System is the Integration of a number of Processes:
– ESM Detection
– NCTR Detection
– Correlation
– Tracking
– Decision Algorithm
– Reporting
Build the Test such that only the Decision Algorithm is Tested
Set up the Test to Quantify the type I and type II error
[Figure: block diagram of the Combat ID processing chain with the elements ESM Detection, NCTR Detection, Other Sensors, Correlation / Track Initiation, Tracking / State Propagation, Decision Algorithm, A-Priori Library, and Reporting / User Interface]

4 Hypothesis Testing
Fundamentally, a Decision is the Test of a Hypothesis:
– Is the contact Hostile or Friendly?
By accepting this Concept:
– A Large and Well-Tested Body of Statistics becomes available
– Decision Algorithm Performance can be Compared against Optimality
– The Decision Algorithm can be Validated against a Design Specification and Compared against Contending Algorithms

5 Optimal Decisions are Bayesian
The Bayes Optimal Decision Algorithm provides:
– An Upper Bound on Performance
– Insight into Establishing the Test: how much Information is needed to conduct the Test
It is Optimal since it:
– Maximizes the Separation between the Distributions (the Distributions are the Friendly and Hostile Aircraft Parameters)
– Minimizes the Expected Error
– Can be designed for Gaussian and Non-Gaussian Distributions

6 Introduction
Consider a Binary System where:
– The Observation is a parameterized measurement z corresponding to a Friendly Target or a Hostile Target
– The Range of z is the Observation Space
This reduces the Test to two hypotheses:
– H0: the null hypothesis
– H1: the alternative hypothesis
The chosen hypothesis represents Truth based on the Measurement z
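A minimal formal statement of this binary test; the class-conditional density notation p(z | ·) is added here for clarity and is not spelled out in the transcript:

```latex
% Binary hypothesis test on the parameterized measurement z
\begin{aligned}
H_0 &: z \sim p(z \mid \mathrm{Friendly}) && \text{(null hypothesis)} \\
H_1 &: z \sim p(z \mid \mathrm{Hostile})  && \text{(alternative hypothesis)}
\end{aligned}
```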

7 Modeling the Observation Space
P(Hi | z): the Probability that Hi was the true Hypothesis given a measured Observation z
The Correct Hypothesis is then the one corresponding to the maximum of the m posterior probabilities
The Decision Rule is to Choose Hi if P(Hi | z) > P(Hj | z) for all j ≠ i
For the Binary case, the rule becomes the test sketched below
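The slide's equation image is not in the transcript; the standard binary MAP rule consistent with the definitions above is:

```latex
% Choose the hypothesis with the larger posterior probability
P(H_1 \mid z) \;\underset{H_0}{\overset{H_1}{\gtrless}}\; P(H_0 \mid z)
```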

8 Decision Outcomes
Binary Hypothesis Testing has Four Possible Outcomes:
– Say H0 and the null hypothesis is true
– Say H1 and the alternative hypothesis is true
– Say H1 and the null hypothesis is true
– Say H0 and the alternative hypothesis is true
The Third Outcome is a type I error, referred to as a False Alarm (PF)
The Fourth Outcome is a type II error, referred to as a Missed Detection (PM)
The Probability of Detection (PD) is 1 - PM
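A short sketch showing how PF, PM, and PD relate to counts of the four outcomes; the counts below are illustrative only and do not come from the presentation:

```python
# Counts of the four binary-decision outcomes (illustrative numbers only)
n_say_h0_given_h0 = 940   # correct rejection: say Friendly, truth Friendly
n_say_h1_given_h0 = 60    # type I error / false alarm
n_say_h0_given_h1 = 20    # type II error / missed detection
n_say_h1_given_h1 = 980   # correct detection

n_h0 = n_say_h0_given_h0 + n_say_h1_given_h0   # tracks that were truly Friendly
n_h1 = n_say_h0_given_h1 + n_say_h1_given_h1   # tracks that were truly Hostile

p_f = n_say_h1_given_h0 / n_h0   # probability of false alarm
p_m = n_say_h0_given_h1 / n_h1   # probability of missed detection
p_d = 1.0 - p_m                  # probability of detection

print(f"P_F = {p_f:.3f}, P_M = {p_m:.3f}, P_D = {p_d:.3f}")
```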

9 Consequences
Decision Making Involves:
– Consequences
– Associated Costs
The Consequences of one Decision may not be the same as the Consequences of a different Decision
In the context of Contact Identification, the Consequences of correctly Identifying a Hostile Contact are different from those of failing to Identify it

10 Objective
Cij: the cost associated with making decision Di when the true hypothesis is Hj
P(Di, Hj): the joint probability that one says Di when in fact Hj is true
The decision criterion that minimizes the probability of error is the maximum a posteriori (MAP) test (reconstructed below)
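The MAP test equation itself did not survive extraction; a standard statement consistent with the Cij and P(Di, Hj) definitions above is:

```latex
% Average cost (Bayes risk) over the decision/hypothesis outcomes
R \;=\; \sum_{i}\sum_{j} C_{ij}\, P(D_i, H_j)

% With equal error costs, the rule that minimizes the probability of error
% is the maximum a posteriori (MAP) test
\text{choose } H_1 \ \text{ if } \ P(H_1 \mid z) \ge P(H_0 \mid z), \ \text{ otherwise choose } H_0
```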

11 Likelihood Ratio Test
The Associated Decision Rule Becomes the Likelihood Ratio Test (a standard form is sketched below)
The MAP Test is Known as the Optimal Observer since it:
– Minimizes the average error, and as such
– Is an upper bound on the performance of a decision algorithm for a given set of parameters
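The likelihood ratio form referred to above, written out as a standard reconstruction; the threshold shown is the MAP threshold, and any cost-weighted Bayes threshold on the original slide is not in the transcript:

```latex
% Likelihood ratio test: declare H_1 when the ratio exceeds the threshold eta
\Lambda(z) \;=\; \frac{p(z \mid H_1)}{p(z \mid H_0)}
\;\underset{H_0}{\overset{H_1}{\gtrless}}\; \eta,
\qquad
\eta_{\mathrm{MAP}} \;=\; \frac{P(H_0)}{P(H_1)}
```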

12 Bayes Classifier for the Normal Distribution
For the Gaussian case, after some Algebraic Manipulation the log likelihood ratio test reduces to a score computed for each aircraft type, where:
– Z: the Measured Parameters
– Mi: the Mean value of the Parameters for Aircraft type i
– Σi: the Covariance of the Parameters for Aircraft type i
For n Aircraft Types, take the Maximum Score as the most likely Aircraft Type
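A minimal Python sketch of a Gaussian (quadratic) log-likelihood classifier of the kind the slide describes. The variable names follow the slide (Z, Mi, Σi), uniform priors are assumed, and this is a generic reconstruction rather than the presenter's actual equation or code:

```python
import numpy as np

def log_likelihood_score(z, mean, cov, log_prior=0.0):
    """Gaussian log-likelihood score for one aircraft type.

    z    : measured parameter vector (the slide's Z)
    mean : mean parameter vector M_i for aircraft type i
    cov  : parameter covariance Sigma_i for aircraft type i
    """
    diff = z - mean
    _, logdet = np.linalg.slogdet(cov)
    maha = diff @ np.linalg.solve(cov, diff)        # Mahalanobis distance term
    return -0.5 * (maha + logdet) + log_prior       # constant terms dropped

def classify(z, means, covs):
    """Return the index of the aircraft type with the maximum score."""
    scores = [log_likelihood_score(z, m, c) for m, c in zip(means, covs)]
    return int(np.argmax(scores))

# Tiny illustrative example with two 2-parameter aircraft types
means = [np.array([1.0, 0.0]), np.array([3.0, 2.0])]
covs = [np.eye(2), np.eye(2) * 2.0]
print(classify(np.array([2.8, 1.9]), means, covs))   # expected: 1
```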

13 Test Set Design
The Test Consists of a Set of Friendly and Hostile Aircraft:
– The Sets Overlap in Allegiance and Nationality
– Care must be taken so that there is no Ambiguity in the Target Set
Example:
– Friend 1: {1 2 3 4 12 13 17 19 21 22 23 24 25}
– Friend 2: {1 3 15 21 22}
– Hostile 1: {1 5 6 7 8 9 12 15 16 19 21}
Aircraft that have Emitters:
– Emitters: {1 3 4 5 6 11 12 13 14 15 16 17 18 19 20 21 23 24 25}
The Test Set of Friendly aircraft is the Union of Friend 1 and Friend 2, intersected with the Set of Emitters:
– Friendly: {1 3 4 12 13 15 17 19 21 23 24 25}
The Test Set of Hostile aircraft is the intersection of Hostile 1 with the Emitter Set:
– Hostile: {1 5 6 12 15 16 19 21}

14 Test Set Design (contd.)
Enforce a condition of No Ambiguity: there is no overlap between the Friendly and Hostile Test Sets
– Test Friendly: {3 4 13 17 23 24 25}
– Test Hostile: {5 6 16}
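The set construction on the two Test Set Design slides can be reproduced directly with Python set operations (membership values copied from the slides):

```python
friend_1 = {1, 2, 3, 4, 12, 13, 17, 19, 21, 22, 23, 24, 25}
friend_2 = {1, 3, 15, 21, 22}
hostile_1 = {1, 5, 6, 7, 8, 9, 12, 15, 16, 19, 21}
emitters = {1, 3, 4, 5, 6, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 23, 24, 25}

# Restrict each side to aircraft that actually have emitters
friendly = (friend_1 | friend_2) & emitters   # {1,3,4,12,13,15,17,19,21,23,24,25}
hostile = hostile_1 & emitters                # {1,5,6,12,15,16,19,21}

# Enforce no ambiguity: drop aircraft that appear on both sides
test_friendly = friendly - hostile            # {3,4,13,17,23,24,25}
test_hostile = hostile - friendly             # {5,6,16}
```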

15 Design of Experiment
The Decision Algorithms are presented with Parametric Measurement data
Each Measurement is Uniquely Identified
Identification allows the Decision Algorithms to:
– Accumulate Information, Correct the Parametric Data, and present an Inferred Identity for the Track
The Evaluation of Error was conducted by building a Set of Track Reports from the Test Sets:
– Enough Parametric Data was given so that the Algorithms could infer the Track Identity
– "Enough" means that, for Near-Normal Distributions, the Separability between the Sets is close to the desired type I error
– This is 1 minus the type I error under the Normal Inverse Cumulative Distribution, where the Expected Value and Variance came from the Bayes Classifier
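One way to read the "enough parametric data" criterion: under a near-normal assumption, the declaration threshold that yields a desired type I error follows from the normal inverse CDF. A small sketch, with illustrative mean and variance values standing in for the ones the presentation took from the Bayes Classifier:

```python
from scipy.stats import norm

alpha = 0.05                       # desired type I (false-alarm) error
z_alpha = norm.ppf(1.0 - alpha)    # normal inverse CDF at 1 - alpha, about 1.645

# Illustrative values for the decision-score distribution under the
# Friendly (null) hypothesis; the presentation derived these from the
# Bayes Classifier's expected value and variance.
mu_0, sigma_0 = 0.0, 1.0
threshold = mu_0 + z_alpha * sigma_0    # declare Hostile only above this score
print(f"z(1-alpha) = {z_alpha:.3f}, threshold = {threshold:.3f}")
```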

16 The Test
Two Tests of 1000 Monte Carlo Generated Tracks:
– Each Track was Randomly Generated from the appropriate Test Set
– Each Track, either Test Hostile or Test Friendly, had 7 types of ESM events and 3 NCTR events
– A Track Report was composed of 3 ESM Reports and 1 NCTR Report
– A set of ESM Event Parameters was generated using a Hypergeometric Distribution
– The Distributions of the ESM and NCTR Events were generated via Monte Carlo from the a priori Library
– Target Truth was compared with the Algorithm Declaration
– Scores were determined for Target Type, Allegiance, and Nationality
– Scores were assigned to Target Type
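A rough sketch of the track-generation loop described above. The a priori library, the hypergeometric parameters, and the mapping from events to measured parameters are not given in the transcript, so the values below are assumptions intended only to show the sampling skeleton:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

TEST_FRIENDLY = [3, 4, 13, 17, 23, 24, 25]
TEST_HOSTILE = [5, 6, 16]
N_ESM_EVENT_TYPES, N_NCTR_EVENT_TYPES = 7, 3    # per the slide
N_TRACKS = 1000

def generate_track():
    """One Monte Carlo track: random truth plus a report of 3 ESM and 1 NCTR events."""
    hostile = bool(rng.random() < 0.5)                 # assumed even Friendly/Hostile mix
    truth = int(rng.choice(TEST_HOSTILE if hostile else TEST_FRIENDLY))
    esm_events = rng.choice(N_ESM_EVENT_TYPES, size=3, replace=False).tolist()
    nctr_event = int(rng.integers(N_NCTR_EVENT_TYPES))
    # The slide says ESM event parameters were drawn with a hypergeometric
    # distribution; the population and draw sizes here are placeholders.
    n_valid_esm_params = int(rng.hypergeometric(ngood=5, nbad=2, nsample=3))
    return {"truth": truth, "hostile": hostile, "esm": esm_events,
            "nctr": nctr_event, "n_valid_esm_params": n_valid_esm_params}

tracks = [generate_track() for _ in range(N_TRACKS)]
```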

17 Track Scoring
The Track Output was scored as a Simple Binomial:
– If the Target Type was Correct, the Algorithm received a score of 1
– Note: if the Algorithm failed to Declare a Target Type, that was considered an Error
The type I error is estimated from the scores, with N the Number of Samples (see the reconstruction below)
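The error equation on this slide did not survive extraction; one reading consistent with the binomial scoring, where x_k is 1 when track k was declared correctly and 0 otherwise, is:

```latex
% Estimated error rate over the N scored tracks of a test set
\widehat{P}_{\mathrm{error}} \;=\; 1 \;-\; \frac{1}{N}\sum_{k=1}^{N} x_k,
\qquad
\widehat{P}_F \;=\; \widehat{P}_{\mathrm{error}} \ \text{ evaluated over the Friendly test tracks}
```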

18 Results
The Probability of type I error and the Probability of type II error (smaller is better):

Algorithm           type I error    type II error
Bayes Classifier    0.05            0.02
A                   0.99            0.86
B                   0.79
C                   0.47            0.33
D                   0.50            0.35

Algorithm C is the Best Algorithm
– While it is not statistically better than Algorithm D, its performance is significantly better than that of Algorithms A and B

19 Algorithmic OC
The Operating Characteristic (OC) is the Trade-Off between the Probability of False Alarm and the Probability of Detection
The OC Curve gives a complete characterization of the Algorithm's performance, based on the Target type I and type II errors
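A minimal sketch of tracing an OC curve for a near-normal decision score by sweeping the declaration threshold; the means and variances below are illustrative only, not values from the presentation:

```python
import numpy as np
from scipy.stats import norm

# Illustrative decision-score distributions: Friendly (H0) and Hostile (H1)
mu_0, sigma_0 = 0.0, 1.0
mu_1, sigma_1 = 2.0, 1.0

thresholds = np.linspace(-4.0, 6.0, 200)
p_false_alarm = 1.0 - norm.cdf(thresholds, loc=mu_0, scale=sigma_0)   # P_F at each threshold
p_detection = 1.0 - norm.cdf(thresholds, loc=mu_1, scale=sigma_1)     # P_D at each threshold

# Each (P_F, P_D) pair is one operating point; plotting P_D against P_F traces the OC curve.
for pf, pd in list(zip(p_false_alarm, p_detection))[::50]:
    print(f"P_F = {pf:.3f}, P_D = {pd:.3f}")
```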

20 Discussion
Algorithms A and B were penalized for NOT disclosing a Target Type
Algorithms C and D DID disclose the Target Type correctly a significantly higher percentage of the time
The number of Parameters was selected such that the bound on the type I error was 5%
– There was enough information available to make a good disclosure 95% of the time
Nationality and Allegiance do not measure type I or type II error
– There was ambiguity between the Target Type and the Nationality or Allegiance
Improvements could be made in all Algorithms for the Declaration of Target Type
– It is apparent that the Bayes Classifier had a type I error of 5% and a type II error of 2%

