Explanation Facility (Dr. Mohsen Kahani)


Explanation Facility  Most expert systems have explanation facilities that allow the user to ask  why it asked some question,  and how it reached some conclusion.  These questions are answered by referring to the system goals, the rules being used, and any existing problem data.

Explanation Facility: Example
System: Is there gas in the fuel tank?
User: Yes.
System: Is it true that the engine turns over?
User: No.
System: Is it true that the lights come on?
User: Why?
System: I am trying to prove: problem with battery. It has been established that it is not the case that the engine turns over. Therefore, if it is not the case that the lights come on, then there is a problem with the battery. Is it true that the lights come on?
User: No.
System: I conclude that there is a problem with the battery.
User: How?
System: This follows from Rule 2: IF NOT engine_turns_over AND ... NOT engine_turns_over was given by the user. NOT lights_come_on was given by the user.
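The dialogue above can be sketched as a minimal backward-chaining program that records which rule fired, so that WHY is answered from the goal being pursued and HOW from the recorded trace. All rule, fact, and goal names below are illustrative (mirroring the battery example), not taken from any particular expert-system shell.

```python
# Minimal backward-chaining sketch with WHY/HOW explanations.
# A condition "not X" holds when the user answered "no" to fact X.

RULES = {
    # conclusion: list of conditions that must hold
    "problem_with_battery": ["not engine_turns_over", "not lights_come_on"],
}

def holds(cond, facts):
    if cond.startswith("not "):
        return facts.get(cond[4:]) is False
    return facts.get(cond) is True

def prove(goal, facts, trace):
    """Try to establish `goal`; record (goal, conditions) on success."""
    conds = RULES.get(goal, [])
    if conds and all(holds(c, facts) for c in conds):
        trace.append((goal, conds))
        return True
    return False

def explain_why(question, goal):
    """WHY: relate the current question back to the goal being pursued."""
    conds = RULES[goal]
    return (f"I am trying to prove: {goal}. "
            f"If {' and '.join(conds)}, then {goal}. "
            f"That is why I ask: {question}")

def explain_how(goal, trace):
    """HOW: replay the recorded rule application behind a conclusion."""
    for concl, conds in trace:
        if concl == goal:
            return f"{goal} follows from: {' and '.join(conds)} (given by the user)."
    return f"{goal} was not established."

facts = {"gas_in_tank": True, "engine_turns_over": False, "lights_come_on": False}
trace = []
proved = prove("problem_with_battery", facts, trace)
print(explain_why("Is it true that the lights come on?", "problem_with_battery"))
print(explain_how("problem_with_battery", trace))
```

Note that both explanations are composed purely from the rule text and the trace, which is exactly why such answers tend to stay at the "surface" level discussed below.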

Explanation Facility  Providing such an explanation facility involves, at least,  recording what rules are used in drawing conclusions,  using these records to compose explanations like the ones before.  Giving simple explanations is not very difficult (the answers always have a simple set form), and is sometimes quite useful.  However, explanation facilities in expert systems are often not widely used, and where used not viewed as acceptable by their users.  Reasons:  the explanations just reference the ``surface'' knowledge encoded in the rules, rather than the ``deep'' knowledge about the domain  if the user fails to understand or accept the explanation, the system can't re-explain in another way.

Explanation Facility  Justifier  Makes system more understandable  Exposes shortcomings  Explains situations that the user did not anticipate  Satisfies user’s psychological and social needs  Clarifies underlying assumptions  Conducts sensitivity analysis  Types  Why  How  Journalism based  Who, what, where, when, why, how  Why not

Generating Explanations
 Static explanation:
 pre-insertion of text.
 Dynamic explanation:
 reconstruction by rule evaluation;
 tracing records or the line of reasoning;
 justification based on empirical associations;
 strategic use of metaknowledge.

Causes and Explanations
 We have already seen that causes depend on explanations:
 what we consider to be a cause depends on what we are trying to explain.
 So, how good are people at giving explanations?

Shallow explanation
 Do you know how:
 a flush toilet works?
 a derailleur system works?
 a car engine works?
 a computer works?

Shallow explanation
 Rozenblit & Keil:
 asked people whether they could generate a good explanation;
 had them generate an explanation;
 showed them a good explanation;
 had them evaluate their own explanation against the good one.
 People were not good at knowing when they could provide a good explanation.

Why?  People are usually good at predicting what they know.  Metacognition research in memory  Good at predicting whether they know the plots of movies  Factors that affect explanations  People have mental simulations of complex objects working  Causal explanations are recursive

Recursion in explanation
 At the top level, we know the parts of an object and the spatial relations between them.
 Then, we need to know the functional components and how they are connected.
 Then, we need to know how those functional components operate.

Becoming better at explaining
 Practice really does help.
 Teaching helps learning. Why? Teachers have to explain things:
 teaching lets people practice giving explanations.
 Study hint:
 many students study for exams by relying on recognition;
 study instead by producing explanations.

Distribution of expertise
 Causal knowledge is distributed across people.
 We are very good at knowing who to go to for particular kinds of explanations.
 Children are also sensitive to domains of expertise:
 in studies by Keil with kids, someone who knows how refrigerators work was judged more likely to know how stereos work than to know what makes people smart.

Explanation System
 Can display the rule being invoked at any point in a consultation.
 Records rule invocations and associates them with the questions asked and the rules invoked.
 Uses a rule index to retrieve particular rules in answer to questions.
 Why and how questions are answered using the goal tree.
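The goal-tree idea can be sketched very simply: a node's parent answers WHY (what higher goal needs it), and a node's children answer HOW (what sub-goals established it). The goal names below are hypothetical, invented only for illustration.

```python
# Goal-tree sketch: WHY looks up toward the parent goal,
# HOW looks down at the sub-goals that established a node.

TREE = {
    # goal: sub-goals that support it (names are illustrative)
    "recommend_investment": ["know_risk_profile", "know_budget"],
    "know_risk_profile": ["asked_age"],
    "know_budget": ["asked_income"],
}

def parent_of(node):
    """Find the goal that this node helps establish, if any."""
    for goal, subs in TREE.items():
        if node in subs:
            return goal
    return None

def why(node):
    """WHY: point at the higher goal the node serves."""
    p = parent_of(node)
    return f"{node} is needed to establish {p}" if p else f"{node} is the top goal"

def how(goal):
    """HOW: list the sub-goals that were satisfied to reach the goal."""
    subs = TREE.get(goal, [])
    if subs:
        return f"{goal} was established from: {', '.join(subs)}"
    return f"{goal} was given directly"
```

A real system would build this tree from the rule trace during the consultation rather than declare it statically, but the two lookup directions are the same.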

Designing for Explanation
An expert system should explain the decisions it makes: background information, causes, associations, WHY and HOW. This ability is important to enhance understanding, evaluation, and the system's acceptance.
An explanation should:
make sense;
use terminology and structures that the user is familiar with;
sometimes include graphical information;
be accurate and efficient.

Designing for Explanation
Including explanations in a rule-based system is fairly simple, even though most systems simply state the rules that were processed, leaving the user to work out the true explanation.
It is much more difficult to develop explanations in non-rule-based systems: complex models and "black box" approaches.

Purposes of an explanation facility
 To make the system more intelligible and understandable to the user.
 To uncover the shortcomings of the rules and the knowledge base.
 To describe situations that the user did not anticipate.
 To satisfy psychological, social, and safety requirements: a user should feel assured about the ES's actions.

Designing for Explanation
An explanation should address at least one of the following questions: why was a decision made? how?
Why explanations
Typically require the ES to look up through the rules to determine what higher goals the system is attempting to achieve.
Example:
ES: What is your annual income?
User: Why? (Why do you need to know?)
ES: In checking R2, I need to know whether your income is above 20,000. If this is true, I will conclude that, because you have a college degree, you should invest in…

Designing for Explanation
How explanations
Require the system to look down through the rules to determine what sub-goals were satisfied to achieve the goal.
Example:
ES: Invest in IBM stocks.
User: How? (How was the conclusion reached?)
ES: (Ideal solution) Given that you have $10,000 to invest and you are younger than 30, then according to R4 you have a college degree. If this is the case, then according to R1 you should invest in securities. For a young investor such as you, according to R3, you should.... Finally, according to R5, if you need to invest in growth stocks, then IBM is your best bet….
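A HOW explanation of this kind can be produced by replaying the recorded chain of fired rules. The rule ids (R1, R3, R4, R5) and conditions below loosely follow the hypothetical investment example above; they are illustrative, not a real rule base.

```python
# HOW explanation built from a recorded forward-chaining trace.
# Rule ids and facts are made up, loosely echoing the slide's example.

RULES = {
    "R4": (["has_10k_to_invest", "younger_than_30"], "college_degree"),
    "R1": (["college_degree"], "invest_in_securities"),
    "R3": (["invest_in_securities", "younger_than_30"], "growth_stocks"),
    "R5": (["growth_stocks"], "invest_in_IBM"),
}

def forward_chain(initial_facts):
    """Fire rules to quiescence, recording (rule_id, conditions, conclusion)."""
    facts = set(initial_facts)
    fired = []
    changed = True
    while changed:
        changed = False
        for rid, (conds, concl) in RULES.items():
            if concl not in facts and all(c in facts for c in conds):
                facts.add(concl)
                fired.append((rid, conds, concl))
                changed = True
    return facts, fired

def explain_how(goal, fired):
    """Walk down the recorded chain of satisfied sub-goals up to `goal`."""
    lines = []
    for rid, conds, concl in fired:
        lines.append(f"According to {rid}: {' AND '.join(conds)} -> {concl}")
        if concl == goal:
            break
    return "\n".join(lines)

facts, fired = forward_chain(["has_10k_to_invest", "younger_than_30"])
print(explain_how("invest_in_IBM", fired))
```

The output is essentially the raw rule chain, which illustrates the limitation discussed next: the explanation mirrors the rules as programmed rather than natural language.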

Why and How Explanations: Limitations
They typically show the rules as they were programmed, not in natural language.
This is usually not easy to understand and follow, especially if the chain of reasoning is long.
Why and how do not cover all possible questions.

Explanation Generation
Static explanation:
The ES contains answer text for every question the user may ask; the text is provided by the expert.
All questions and answers must be anticipated.
The system does not really know what it is saying.
If the program is changed and the text is not, the explanation may be incorrect.

Explanation Generation
Dynamic explanation:
The ES creates an explanation based on the execution pattern of the rules used.
This is better than static explanation in that any changes to the system will be reflected in the explanation.
Limitation: it may not contain enough understandable information for the user.
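The contrast between the two generation styles can be sketched in a few lines: static explanation returns pre-inserted text keyed by an anticipated question, while dynamic explanation is reconstructed from the actual rule trace. All keys, texts, and rule names here are made up for illustration.

```python
# Static vs. dynamic explanation generation, side by side.

STATIC_TEXT = {
    # anticipated question -> canned text written by the expert
    "why_income": "Your income determines which investment rules apply.",
}

def static_explain(question_key):
    # Canned text: covers only anticipated questions, and can silently
    # go stale if the rule base changes but the text does not.
    return STATIC_TEXT.get(question_key, "No explanation available.")

def dynamic_explain(trace):
    # Rebuilt from execution, so rule-base changes show up automatically,
    # though the raw trace may still be hard for users to read.
    return "; ".join(f"{rule_id} established {concl}" for rule_id, concl in trace)

print(static_explain("why_income"))
print(dynamic_explain([("R1", "invest_in_securities"), ("R5", "invest_in_IBM")]))
```

The two functions make the trade-off concrete: the static text is readable but brittle, while the dynamic text is always current but only as understandable as the trace itself.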

Multimedia Explanation
Explanation is driven by human factors and user-interface design.
Text-only explanations are not always useful: not all users can read text and grasp its meaning; some need diagrams, pictures, or audio to assist them.
An ES may provide the user with a diagram showing the rule implementation: relationship diagrams between rules, or graphs of data over time.
It may provide images, video, or animations that support the explanation and the task to be completed (e.g., simulations for how explanations).