
Topic 11: Level 2
David L. Hall

Topic Objectives
– Introduce the concept of Level 2 processing
– Survey and introduce methods for approximate reasoning
– Introduce concepts in probabilistic reasoning and fusion (e.g., Bayes, Dempster-Shafer, etc.)
– Describe challenges and issues in automated reasoning
Note: this topic will focus on report-level fusion; Topic 12 will introduce reasoning methods such as rule-based systems & intelligent agents

Level 2 Processing (Situation Refinement)

Level Two Processing: Situation Assessment
[Diagram] Level two processing (situation assessment) combines:
– Object aggregation: time relationships, geometrical proximity, communications, functional dependence
– Event/activity aggregation
– Contextual interpretation/fusion: environment, weather, doctrine, socio-political factors
– Multi-perspective assessment: Red/Blue/White

Shortfalls in L2/L3 Fusion Research
– From Nichols, M., "A Survey of the Current State of Data Fusion Systems," OSD Decision Support Ctr. presentation at SPAWAR, San Diego, CA, May: only 17 of 100 surveyed systems included any L2/L3 processing at all, mostly basic/simple techniques
– From Valet, L., et al., "A Statistical Overview of Recent Literature in Information Fusion," FUSION 2000, Paris, France, July 2000: ~85% of the publications reviewed addressed L1 only

Hierarchy of Inference Techniques
Types of inference, from low to high inference level:
– Existence and measurable features of an entity (low)
– Identity, attributes and location of an entity
– Behavior/relationships of entities
– Situation assessment
– Threat analysis (high)
Applicable techniques, roughly matched from low to high:
– Signal processing techniques
– Estimation techniques: maximum a posteriori probability (e.g., Kalman filters, Bayesian), evidential reasoning
– Decision-level techniques: neural nets, cluster algorithms, fuzzy logic, Bayesian nets
– Knowledge-based techniques: expert systems; scripts, frames, templating; case-based reasoning; genetic algorithms

Examples of Data Fusion Inferences

Comments on L-2 and L-3 Techniques
– Reasoning for level-2 and level-3 processing involves context-based reasoning and high-level inferences
– Techniques are generally probabilistic and entail representation of uncertainty in data and inferential relationships
– Methods represent knowledge at the semantic level: rule-based methods; graphical representations; logical templates, cases, plan hierarchies, agents & others

Elements of Artificial Intelligence
– Symbolic processing techniques: pattern matching, inference, search, knowledge representation
– Application areas: knowledge acquisition, automatic programming, natural language processing, learning, expert systems, intelligent assistance, speech understanding, text understanding, machine translation, computer vision, robotics, planning

Challenges in Symbolic Reasoning
Human inferencing capabilities:
– Continual access to multi-sensory information
– Complex pattern recognition (visual, aural)
– Semantic-level reasoning
– Knowledge of "real-world" facts, relationships, interactions
– Use of heuristics for rapid assessment & decision-making
– Context-based processing
Computer inferencing challenges:
– Lack of real-world knowledge
– Inability to deal with the perversity of English or other languages
– Requirement for explicit knowledge representation & reasoning methods
Computer advantages:
– Processing speed & power (use of physics-based models)
– Unaffected by fatigue, emotion, bias
– Machine learning from large data sets

Categories of Representational/Decomposition Techniques

Major Reasoning Approaches
Knowledge representation:
– Rules
– Frames
– Scripts
– Semantic nets
– Parametric templates
– Analogical methods
Uncertainty representation:
– Confidence factors
– Probability
– Dempster-Shafer evidential intervals
– Fuzzy membership functions
– Etc.
Reasoning methods & architectures:
– Implicit methods: neural nets, cluster algorithms
– Pattern templates: templating methods, case-based reasoning
– Process reasoning: script interpreters, plan-based reasoning
– Deductive methods: decision trees, Bayesian belief nets, D-S belief nets
– Hybrid architectures: agent-based methods, blackboard systems, hybrid symbolic/numerical systems

Decision-Level Identity Fusion
[Diagram] An entity, target, or activity is observed by sensors A, B, …, N, each of which produces its own declaration of identity. Decision-level identity fusion (voting methods, Bayes' method, Dempster-Shafer's method) combines these into a fused declaration of identity.
In the last lecture we addressed the magic of pattern recognition! This represents a transition from Level-1 identity fusion to Level-2 fusion related to complex entities, activities, and events; the reasoning is performed at the semantic (report) level.

Classical Statistical Inference
– Based on empirical probabilities
– Definitions:
– Statistical hypothesis: a statement about a population which, based on information from a sample, one seeks to support or refute
– Statistical test: a set of rules whereby a decision on H is reached
– Measure of test accuracy: a probability statement regarding the decision when various conditions in the population are true

Classical Statistical Inference
Test logic:
– Assume the null hypothesis (H0) is true
– Examine the consequences of H0 being true in the sampling distribution for the statistic
– If the observations have a high probability of occurring, the data do not contradict H0
– Otherwise, the data tend to contradict H0
Level of significance:
– Define a probability level α that is considered too low to warrant support of H0
– If P(observed data | H0 true) < α, reject H0
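This test logic is easy to make concrete. The sketch below is a minimal example, assuming (hypothetically) that a Type 1 radar's PRI is normally distributed with known mean and standard deviation; all numbers are invented for illustration.

```python
from scipy import stats

# Hypothetical H0: the emitter is a Type 1 radar whose PRI is
# normally distributed, N(mu0 = 1000 us, sigma0 = 20 us).
mu0, sigma0 = 1000.0, 20.0
alpha = 0.05           # level of significance
pri_observed = 1052.0  # a single observed PRI (microseconds)

# Two-sided p-value: P(data at least this far from mu0 | H0 true)
z = abs(pri_observed - mu0) / sigma0
p_value = 2 * stats.norm.sf(z)

print(f"z = {z:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: the data contradict the Type 1 hypothesis")
else:
    print("Do not reject H0: the data do not contradict H0")
```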

Emitter Identification: Example
– Type 1 and Type 2 radars exist on a battlefield; these radars are known to have different PRI characteristics
– Problem: given an observed PRI, have we seen a Type 1 or a Type 2 radar?
[Diagram] An ELINT collector observes emitters E1 and E2 across the Forward Edge of the Battle Area (FEBA)
Note: during this presentation we will use an example of emitter identification, e.g., for situation assessment related to a DoD problem. However, this can be translated directly into other applications such as medical tests, environmental monitoring, or monitoring complex machines.

Classical Inference for Identity Declaration: Example
[Figure] Probability density functions of pulse repetition interval (PRI) for E1 (radar class 1) and E2 (radar class 2). The area under the class 2 density between PRI_N and PRI_N+1 is a measure of the probability that radar class 2 will use a PRI in the interval PRI_N ≤ PRI ≤ PRI_N+1. The crossover point PRI_C of the two densities serves as a decision threshold.
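A minimal sketch of these two computations, assuming (hypothetically) Gaussian class-conditional PRI densities; all parameters are invented for illustration.

```python
from scipy import stats

# Hypothetical class-conditional PRI densities (all parameters invented)
e1 = stats.norm(loc=950.0, scale=25.0)   # E1: radar class 1
e2 = stats.norm(loc=1050.0, scale=30.0)  # E2: radar class 2

# Probability that radar class 2 uses a PRI in [PRI_N, PRI_N+1]
pri_n, pri_n1 = 1000.0, 1020.0
p_interval = e2.cdf(pri_n1) - e2.cdf(pri_n)
print(f"P(PRI_N <= PRI <= PRI_N+1 | class 2) = {p_interval:.3f}")

# Maximum-likelihood declaration: compare the two densities at the observed
# PRI; the crossover point of the curves is the decision threshold PRI_C
pri_obs = 1005.0
declared = "E1" if e1.pdf(pri_obs) > e2.pdf(pri_obs) else "E2"
print(f"Observed PRI = {pri_obs}: declare {declared}")
```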

Issues with Classical Probability
– Requires knowledge of a priori probability density functions
– (Usually) applied to a hypothesis and its alternative
– Does not account for a priori information about the "likelihood in nature" of a hypothesis being true
– (Strictly speaking) classical probability can only be used with repeatable experiments

Bayesian Inference
– Can be used on subjective probabilities; does not necessarily require sampling, etc.
– Statement: if H1, H2, …, Hi represent mutually exclusive and exhaustive hypotheses which can explain an event E that has just occurred, then
P(Hi | E) = P(E | Hi) P(Hi) / Σi P(E | Hi) P(Hi)
and Σi P(Hi) = 1 (exhaustivity)
– Nomenclature:
– P(Hi | E) = a posteriori probability of Hi true given E
– P(Hi) = a priori probability
– P(E | Hi) = probability of E given Hi true
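Bayes' rule is a one-liner to implement. A minimal sketch with invented priors and likelihoods (here the observed evidence is three times as likely under H2 as under H1):

```python
def bayes_update(priors, likelihoods):
    """Posterior P(Hi|E) from priors P(Hi) and likelihoods P(E|Hi),
    for mutually exclusive and exhaustive hypotheses Hi."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)              # P(E) = sum_i P(E|Hi) P(Hi)
    return [j / total for j in joint]

# Invented numbers: the prior favors H1, but the evidence favors H2
posterior = bayes_update([0.7, 0.3], [0.2, 0.6])
print(posterior)  # [0.4375, 0.5625]
```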

Bayes' Form: Impact on the Emitter Identification Example
P(EMITTER X | PRI0) = P(PRI0 | EMITTER X) P(EMITTER X) / Σi P(PRI0 | EMITTER i) P(EMITTER i)
– Does not require sampling distributions
– The analyst can estimate P(PRI | E)
– Analysts can include any knowledge they have pertaining to the relative numbers of emitters, i.e., the priors P(EMITTER X), P(EMITTER Y), P(EMITTER Z)
In the case of multiple measurements (e.g., PRI0 and frequency F0):
P(EMITTER X | PRI0 ∩ F0) ∝ P(EMITTER X) P(PRI0 | EMITTER X) P(F0 | EMITTER X ∩ PRI0)
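When the measurements are conditionally independent given the emitter type, the multiple-measurement form reduces to applying Bayes' rule once per measurement, with the posterior after PRI0 serving as the prior for F0. A minimal sketch with invented numbers:

```python
def bayes_update(priors, likelihoods):
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Hypothetical emitters X, Y, Z, equally likely a priori (numbers invented)
prior = [1/3, 1/3, 1/3]
p_pri = [0.50, 0.30, 0.05]  # P(PRI0 | emitter)
p_f   = [0.40, 0.40, 0.10]  # P(F0 | emitter), assuming independence given type

after_pri  = bayes_update(prior, p_pri)    # posterior after observing PRI0
after_both = bayes_update(after_pri, p_f)  # posterior after PRI0 and F0
print([round(p, 3) for p in after_both])
```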

Concept of Identity Declaration by a Sensor
– Major development issue: the ability to model/establish P(D|T) as a function of range, SNR, etc.
– Note: columns of the declaration matrix are mutually exclusive, exhaustive hypotheses that explain an observation

Identification, Friend, Foe, Neutral (IFFN) Bayesian Example
– Based on empirical probabilities derived from tests, we have for each sensor (Sensor 1, Sensor 2, Sensor 3) the declaration probabilities [P(Di | Oj)], i = 1, …, n, with j fixed
– Then Σi P(Di | Oj) = 1
– Construct a declaration (probability) matrix [P(Di | Oj)] for each sensor, with rows D1, D2, …, Dn and one column per object hypothesis
Notes:
1. Column sum = 1; row sum ≠ 1
2. n is not necessarily equal to m (the number of declarations need not equal the number of objects)

IFFN Bayesian Example (continued)
– Individual sensors provide (via a declaration matrix) P(Di | Oj)
– Bayes' rule allows conversion of P(Di | Oj) to P(Oj | Di)
– Multiple sensors provide:
{P(O1 | D1), P(O2 | D1), … P(Oj | D1) …} from sensor 1
{P(O1 | D2), P(O2 | D2), … P(Oj | D2) …} from sensor 2
– Given multiple evidence (observations), Bayes' rule allows fusion of declarations for each object (i.e., hypothesis):
P(Oj | D1 ∩ D2 ∩ …) = P(Oj) [P(D1 | Oj) P(D2 | Oj) … P(Dk | Oj) …] / Σi P(Oi) [P(D1 | Oi) P(D2 | Oi) … P(Dn | Oi)]
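A minimal sketch of this fusion formula for three object classes and two sensors; the declaration matrices and prior are invented, with each column P(Di | Oj) summing to 1 as the previous slide requires.

```python
import numpy as np

# Hypothetical declaration matrices P(Di | Oj): rows = declarations Di,
# columns = objects Oj (e.g., friend, foe, neutral). Columns sum to 1.
sensor1 = np.array([[0.8, 0.1, 0.2],
                    [0.1, 0.8, 0.2],
                    [0.1, 0.1, 0.6]])
sensor2 = np.array([[0.7, 0.2, 0.3],
                    [0.2, 0.7, 0.2],
                    [0.1, 0.1, 0.5]])

prior = np.array([0.5, 0.3, 0.2])  # P(Oj), invented

# Suppose sensor 1 declares D1 (row 0) and sensor 2 declares D2 (row 1)
likelihood = sensor1[0] * sensor2[1]  # product over sensors of P(Dk | Oj)
posterior = prior * likelihood
posterior /= posterior.sum()          # the denominator of the formula above

print("P(Oj | D1, D2) =", posterior.round(3))
print("MAP identity: O%d" % (posterior.argmax() + 1))
```

The final argmax is the MAP decision logic summarized on the next slide.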

Summary of Bayesian Fusion for Identity
[Diagram] Each of sensors 1 through n transforms its observables into a declaration (observables → classifier → declaration Di), with the uncertainty in the declaration expressed in a declaration matrix P(Di | Oj). The Bayesian combination formula then yields the fused probability of object j given D1, D2, …, Dn: P(Oj | D1 ∩ D2 ∩ … ∩ Dn), j = 1, …, M. Decision logic (MAP, thresholded MAP, etc.) selects the highest value of P(Oj) to produce the fused identity declaration.

Bayesian Inference
The good news:
– Allows incorporation of a priori information about P(Hi)
– Allows utilization of subjective probability
– Allows iterative update
– Intuitive formulation
The bad news:
– Requires specification of "priors"
– Requires identification of an exhaustive set of alternative hypotheses, H
– Can become complex for dependent evidence
– May produce an "idiot Bayes" result

Dempster-Shafer Theory
– Arthur Dempster (1968): generalization of the Bayesian approach
– Glenn Shafer (1976): mathematical theory of evidence
– Basic issue: the manner in which belief, derived from evidence, is distributed over propositions (hypotheses)
EVIDENCE → BELIEF → DISTRIBUTION OF BELIEF OVER PROPOSITIONS ABOUT THE EXHAUSTIVE POSSIBILITIES IN A DOMAIN

Distribution of Belief
Operational mode: humans seemingly distribute belief (based on evidence) in a fragmentary way. Thus, in general, for evidence E and propositions A, B, C, we will have:
– M(A) = measure of belief that E supports A exactly
– M(A ∪ B) = measure of belief assigned to the disjunction, which includes A
– etc.

Probability and Belief
– Probabilities for propositions are induced by the mass distribution:
PR(A) = Σ M(B), B ∈ 2^Θ, B ⊆ A
– Bayesian mass distributions assign mass only to the set of single, mutually exclusive, and exhaustive propositions (in Θ only), so that PR(A) + PR(~A) = 1
– With the D-S approach, belief can be assigned to a set of propositions that need not be mutually exclusive. This leads to the notion of the evidential interval [SPT(A), PLS(A)]

Example of Probability Mass Assignment
– A single die can show one of six observable faces; the mutually exclusive and exhaustive hypotheses are of the form "the number showing on the die is 1," …, "the number showing on the die is 6"
– Propositions can include:
– The number showing on the die is even
– The number showing on the die is 1 or 3
– …
– The number showing on the die is 1 or 2 or 3 or 4 or 5 or 6 (the "I don't know" proposition)
– The set of hypotheses is Θ = {1, 2, 3, 4, 5, 6}
– The set of propositions is 2^Θ = {1, 2, 3, 4, 5, 6, 1 or 2, 1 or 3, 2 or 3, 3 or 4, …, 1 or 2 or 3 or 4 or 5 or 6}

Probability and Belief Formulae
SPT(A) = Σ M(B), B ∈ 2^Θ, B ⊆ A
PLS(A) = 1 - SPT(~A) = 1 - Σ M(B), B ∈ 2^Θ, B ∩ A = ∅
and SPT(A) + SPT(~A) ≤ 1
– Uncertainty(A) = PLS(A) - SPT(A)
– If U(A) = 0 for all A → Bayesian
Adapted from Greer, Thomas H., "Artificial Intelligence: A New Dimension in EW," Defense Electronics, October 1985, pp.
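These formulae are straightforward to implement over subsets of the frame of discernment. A minimal sketch using the die example above, with an invented mass distribution:

```python
# Frame of discernment for the die example; propositions are frozensets of faces
THETA = frozenset({1, 2, 3, 4, 5, 6})

# Invented mass distribution: some evidence says "even", some says "1 or 3",
# and the remainder is uncommitted (the "I don't know" proposition, Theta)
m = {
    frozenset({2, 4, 6}): 0.5,
    frozenset({1, 3}): 0.3,
    THETA: 0.2,
}

def spt(a):
    """SPT(A): sum of the masses of propositions B with B a subset of A."""
    return sum(mass for b, mass in m.items() if b <= a)

def pls(a):
    """PLS(A) = 1 - SPT(~A)."""
    return 1.0 - spt(THETA - a)

even = frozenset({2, 4, 6})
print(spt(even), pls(even))           # 0.5 0.7 -> evidential interval [0.5, 0.7]
print(spt(even) + spt(THETA - even))  # 0.8 <= 1, as the formulae require
# Uncertainty(even) = PLS - SPT = 0.2, exactly the uncommitted mass
```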

Support and Plausibility
– Support: the degree to which the evidence supports the proposition; the sum of the probability masses for a proposition and its subsets
– Plausibility: the extent to which the evidence fails to refute a proposition; P(A) = 1 - S(~A)
Examples:
– A(0, 0) → S(A) = 0 (no supporting evidence), P(A) = 0, S(~A) = 1 (the evidence refutes A)
– A(.25, .85) → S(A) = .25, S(~A) = .15
[Diagram] The evidential interval lies between the supporting evidence (below SPT) and the refuting evidence (above PLS)
Adapted from Greer, Thomas H., "Artificial Intelligence: A New Dimension in EW," Defense Electronics, October 1985, pp.

Dempster-Shafer Example
A D-S threat warning sensor (TWS) observes RF and PRF and reports the belief distribution:
– SAM-X: 0.3
– SAM-X, TTR: 0.4
– SAM-X, ACQ: 0.2
– Unknown: 0.1
Then the evidential intervals are:
SPT(SAM-X, TTR) = 0.4
PLS(SAM-X, TTR) = 1 - SPT(~(SAM-X, TTR)) = 1 - SPT(SAM-X, ACQ) = 1 - 0.2 = 0.8
(SAM-X, TTR) = [0.4, 0.8]
Similarly,
(SAM-X) = [0.9, 1.0]
(SAM-X, ACQ) = [0.2, 0.6]
Adapted from Greer, Thomas H., "Artificial Intelligence: A New Dimension in EW," Defense Electronics, October 1985, pp.
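The slide's intervals can be checked mechanically. In the sketch below, the frame is assumed (hypothetically) to contain a catch-all OTHER element so that the "unknown" mass is assigned to the full frame; with that assumption the computed intervals match the slide.

```python
THETA = frozenset({"SAM-X TTR", "SAM-X ACQ", "OTHER"})  # "OTHER" is an assumption

# TWS belief distribution from the slide
m = {
    frozenset({"SAM-X TTR", "SAM-X ACQ"}): 0.3,  # "SAM-X" (either mode)
    frozenset({"SAM-X TTR"}): 0.4,
    frozenset({"SAM-X ACQ"}): 0.2,
    THETA: 0.1,                                  # "unknown"
}

def spt(a):
    return sum(mass for b, mass in m.items() if b <= a)

def pls(a):
    return 1.0 - spt(THETA - a)

for name, prop in [("SAM-X, TTR", frozenset({"SAM-X TTR"})),
                   ("SAM-X", frozenset({"SAM-X TTR", "SAM-X ACQ"})),
                   ("SAM-X, ACQ", frozenset({"SAM-X ACQ"}))]:
    print(f"{name}: [{spt(prop):.1f}, {pls(prop):.1f}]")
# Prints [0.4, 0.8], [0.9, 1.0], [0.2, 0.6], matching the slide
```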

Composite Uncertainty: Two Source Example

Dempster's Rules of Combination
1. The product of mass assignments to two propositions that are consistent leads to another proposition contained within the original (e.g., m1(a1) m2(a1) = m(a1)).
2. Multiplying the mass assignment to uncertainty by the mass assignment to any other proposition leads to a contribution to that proposition (e.g., m1(Θ) m2(a2) = m(a2)).
3. Multiplying uncertainty by uncertainty leads to a new assignment to uncertainty (e.g., m1(Θ) m2(Θ) = m(Θ)).
4. When inconsistency occurs between knowledge sources, assign a measure of inconsistency, denoted k, to their products (e.g., m1(a1) m2(a2) = k when a1 and a2 are inconsistent).
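All four rules fall out of one loop over pairs of focal elements: consistent products go to the intersection (rules 1-3, with "uncertainty" simply being mass on the full frame Θ), inconsistent products accumulate into k, and the result is renormalized by 1 - k. A minimal sketch with two invented sources:

```python
from collections import defaultdict

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass distributions whose
    propositions are frozensets over the same frame of discernment."""
    combined = defaultdict(float)
    k = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] += ma * mb  # rules 1-3: consistent products
            else:
                k += ma * mb                # rule 4: inconsistent mass
    return {p: mass / (1.0 - k) for p, mass in combined.items()}, k

# Two hypothetical sources over the frame {TTR, ACQ, OTHER} (invented numbers)
THETA = frozenset({"TTR", "ACQ", "OTHER"})
source1 = {frozenset({"TTR"}): 0.4, frozenset({"TTR", "ACQ"}): 0.3,
           frozenset({"ACQ"}): 0.2, THETA: 0.1}
source2 = {frozenset({"TTR"}): 0.6, frozenset({"ACQ"}): 0.2, THETA: 0.2}

fused, k = dempster_combine(source1, source2)
print(f"conflict k = {k:.2f}")           # 0.20 for these numbers
for prop, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(sorted(prop), round(mass, 3))  # fused masses sum to 1
```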

Composite Uncertainty: Computing Belief Distributions for Pooled Evidence
1. Compute credibility intervals (compute all credibility intervals for Source 1 and for Source 2)
2. Map the mass of the belief distribution
3. Compute composite beliefs
K = measure of all mass associated with conflicting reports:
K = (.2 × .2) + (.4 × .2) + (.3 × .2) = 0.18, so 1 - K = 0.82
For each proposition, sum all of the masses that support the proposition (labeled A, B, C, D in the source figure) and divide by 1 - K:
SAM-Y TT = (A + B + C + D) / (1 - K) = 0.49
Then compute the credibility intervals for the pooled evidence and apply the decision rule.

Pooled Evidence

Summary of Dempster-Shafer Fusion for Identity
[Diagram] Each of sensors 1 through n transforms its observables into a declaration (observables → classifier → declaration), then computes or enumerates a mass distribution mi(Oj) for the given declaration. The mass distributions, together with a general level of uncertainty, are combined/fused via Dempster's rules of combination, M(Oj) = F(mi(Oj)), leading to a fused probability mass for each object hypothesis Oj and combined evidential intervals. Decision logic selects the best combined evidential interval to produce the fused identity declaration.

Dempster-Shafer Inference
The good news:
– Allows incorporation of a priori information about hypotheses and propositions
– Allows utilization of subjective evidence
– Allows assignment of a general level of uncertainty
– Allows iterative update
The bad news:
– Requires specification of "priors"
– Not "intuitive"
– May result in "weak" decisions
– More computationally demanding than Bayes
– Can become complex for dependent evidence
– May produce an "idiot D-S" result

Summary of Voting for Identity Estimation
[Diagram] Each of sensors 1 through n transforms its observables into a declaration (observables → classifier → declaration Di), with the uncertainty in the declaration expressed as a weight or confidence for each sensor (S1, S2, …, SN). The voting combination formula produces a fused decision via weighted voting:
V(Oj) = Σi wi δi(Oj)
Decision logic selects the highest vote (majority, plurality, etc.) to produce the fused identity declaration.
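The voting formula is the simplest of the three combiners: δi(Oj) is 1 when sensor i declares Oj and 0 otherwise, so each sensor casts its weight for its declared identity. A minimal sketch with invented declarations and weights:

```python
from collections import defaultdict

def weighted_vote(declarations, weights):
    """V(Oj) = sum_i w_i * delta_i(Oj): each sensor casts its weight w_i
    for the identity it declares; the highest total vote wins."""
    votes = defaultdict(float)
    for identity, w in zip(declarations, weights):
        votes[identity] += w
    return max(votes, key=votes.get), dict(votes)

# Hypothetical: three sensors declare identities with confidence weights
decision, votes = weighted_vote(["SAM-X", "SAM-Y", "SAM-X"], [0.9, 0.8, 0.6])
print(votes)     # {'SAM-X': 1.5, 'SAM-Y': 0.8}
print(decision)  # SAM-X
```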

Some Decision Fusion Techniques

Topic 11 Assignments
– Preview the on-line Topic 11 materials
– Read Chapter 7 of Hall and McMullen (2004)
– Writing assignment 8: a one-page description of how the Level 2 process applies to your selected application
– Discussion 5: discuss the challenges of designing and implementing a data fusion system, both in general and for your selected application

Data Fusion Tip of the Week
Young researchers in automated reasoning and artificial intelligence wax enthusiastic about the power of computers and their potential to automate human-like reasoning processes (saying, in effect, "aren't computers wonderful!"). Later in their careers, these same researchers admit that it is a very difficult problem, but believe they could make significant progress with increased computer speed and memory. Still later, these researchers realize the complexities of the problem and praise human reasoning (saying, in effect, "aren't humans wonderful!").