Artificial Intelligence and Lisp Lecture 5 LiU Course TDDC65 Autumn Semester, 2010

Presentation transcript:

Artificial Intelligence and Lisp Lecture 5 LiU Course TDDC65 Autumn Semester, 2010

Today's lecture
- Decision Trees
- Message-passing between agents: Searle speech-act theory
- Discuss lab 2c

Uses of Decision Trees
- Making a choice of action (final or tentative)
- Classifying a given situation
- Identifying the likely effects of a given situation or action
- Identifying possible causes of a given situation (using the inverse operation)

A simple example
(Figure: a decision tree that branches on the terms a, b and c, with the outcomes red, green, blue and white at the leaves.)

A simple example
(Figure: the same decision tree, with the leaves labelled as outcomes and the branching variables a, b, c labelled as terms, or features.)

Evaluation of the decision tree
(Figure: the decision tree evaluated for a = true, b = false, c = true; following the corresponding branches leads to the outcome blue.)
There will be five variations on this simple theme.

Notation for the decision tree
(Figure: the same decision tree as above.)
[a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? green blue]]]
{[: a true][: b false][: c true]}

Notation for the decision tree
[a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? green blue]]]
{[: a true][: b false][: c true]}
Range of a is <true, false>; similarly for b and c. The range ordering may be specified explicitly, as here, or implicitly, e.g. if the range is a finite set of integers, but it must be specified somehow. Different terms may have different ranges. It is not necessary that decision elements on the same level use the same term. A continuous range is also possible (but little covered here).
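
A minimal Common Lisp sketch of plain evaluation (an illustration, not part of the course material): [a? x y] is rendered as the s-expression (a? x y), the assignment as an alist, and the names *ranges*, term-range and eval-dtree are ad hoc choices.

(defparameter *ranges* '((a? true false) (b? true false) (c? true false))
  "Range ordering for each term; branch i of a decision element corresponds to range element i.")

(defun term-range (term)
  (cdr (assoc term *ranges*)))

(defun eval-dtree (tree assignment)
  "Evaluate TREE under ASSIGNMENT, an alist such as ((a? . true) ...)."
  (if (atom tree)
      tree                                  ; terminal element: an outcome
      (let* ((term  (first tree))
             (value (cdr (assoc term assignment)))
             (index (position value (term-range term))))
        (eval-dtree (nth (1+ index) tree) assignment))))

;; The example above: a = true, b = false, c = true evaluates to BLUE.
(eval-dtree '(a? (b? (c? red green) (c? blue white))
                 (b? (c? white red) (c? green blue)))
            '((a? . true) (b? . false) (c? . true)))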

1. Probabilities in term assignments
[a? [b? [c? red green] [c? green white]] [b? [c? white red] [c? green blue]]]
{[: a <0.65, 0.35>] [: b <0.90, 0.10>] [: c true]}
Range of a is <true, false>; similarly for b and c -- same as before.
Evaluation: 0.65 * 0.90 red, 0.65 * 0.10 green, 0.35 * 0.90 white, 0.35 * 0.10 green,
giving 0.585 red, 0.315 white, 0.100 green.

2. Expected outcome
[a? [b? [c? red green] [c? green white]] [b? [c? white red] [c? green blue]]]
{[: a <0.65, 0.35>] [: b <0.90, 0.10>] [: c true]} -- same as before
Evaluation: 0.65 * 0.90 red, 0.65 * 0.10 green, 0.35 * 0.90 white, 0.35 * 0.10 green,
giving 0.585 red, 0.315 white, 0.100 green.
Assign values to outcomes: red 10.000, white 4.000, green 25.000 (or put these values directly into the tree instead of the colors).
Expected outcome: 0.585 * 10.000 + 0.315 * 4.000 + 0.100 * 25.000 = 9.610
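
To make variations 1 and 2 concrete, here is a sketch in the same style (illustration only; it uses the probabilities and outcome values reconstructed above, and the function names are ad hoc): evaluation now returns a distribution over outcomes, from which an expected value can be computed.

(defun eval-dtree-p (tree assignment)
  "Outcome distribution, as an alist (outcome . probability), for TREE under a
probabilistic ASSIGNMENT such as ((a? (0.65 0.35)) ...), one probability per range element."
  (if (atom tree)
      (list (cons tree 1.0))
      (let ((dist (second (assoc (first tree) assignment)))
            (result '()))
        (loop for p in dist
              for branch in (rest tree)
              do (loop for (outcome . q) in (eval-dtree-p branch assignment)
                       do (let ((entry (assoc outcome result)))
                            (if entry
                                (incf (cdr entry) (* p q))
                                (push (cons outcome (* p q)) result)))))
        result)))

(defun expected-outcome (distribution values)
  "Expected value of an outcome DISTRIBUTION given VALUES, an alist outcome -> number."
  (loop for (outcome . p) in distribution
        sum (* p (cdr (assoc outcome values)))))

;; With a = <0.65, 0.35>, b = <0.90, 0.10>, c = true this gives
;; red 0.585, white 0.315, green 0.100 and an expected outcome of 9.61.
(let ((dist (eval-dtree-p '(a? (b? (c? red green) (c? green white))
                               (b? (c? white red) (c? green blue)))
                          '((a? (0.65 0.35)) (b? (0.90 0.10)) (c? (1.0 0.0))))))
  (expected-outcome dist '((red . 10.0) (white . 4.0) (green . 25.0) (blue . 0.0))))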

3. Probabilities in terminal elements
[a? [b? [c? red green] [c? green white]] [b? [c? ... red] [c? green blue]]]
{[: a <0.65, 0.35>] [: b <0.90, 0.10>] [: c true]} -- same as before
Range of a is <true, false>; similarly for b and c.
Here a terminal element may itself be a probability distribution over several outcomes rather than a single outcome. Evaluation multiplies the term probabilities with the probabilities in the terminal element: 0.65 * 0.90 red, 0.65 * 0.10 green, 0.35 * 0.90 times the terminal distribution, 0.35 * 0.10 green, giving probabilities for red, white, green and blue. The ordering of the value domain must then also be specified.
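
A sketch of this variation in the same style: a terminal element may itself be a distribution over outcomes. The leaf distribution used below (white 0.7, blue 0.3) is an invented example, not the one from the slide.

(defun eval-dtree-p2 (tree assignment)
  "Like EVAL-DTREE-P above, but a terminal element may be either a single outcome
symbol or itself a distribution ((outcome . probability) ...)."
  (cond ((symbolp tree) (list (cons tree 1.0)))
        ((consp (first tree)) (copy-alist tree))      ; terminal distribution
        (t (let ((dist (second (assoc (first tree) assignment)))
                 (result '()))
             (loop for p in dist
                   for branch in (rest tree)
                   do (loop for (outcome . q) in (eval-dtree-p2 branch assignment)
                            do (let ((entry (assoc outcome result)))
                                 (if entry
                                     (incf (cdr entry) (* p q))
                                     (push (cons outcome (* p q)) result)))))
             result))))

;; Invented example: the leaf that was WHITE now distributes over WHITE and BLUE.
(eval-dtree-p2 '(a? (b? (c? red green) (c? green white))
                    (b? (c? ((white . 0.7) (blue . 0.3)) red) (c? green blue)))
               '((a? (0.65 0.35)) (b? (0.90 0.10)) (c? (1.0 0.0))))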

4. Hierarchical decision trees
(Figure: a decision tree over a, b and c in which one leaf position is replaced by a grey subtree, itself a decision tree over further terms d and e.)

Notation for hierarchical dec. trees
[?[a? [b? [c? red blue] [c? blue white]] [b? [c? white red] [c? red blue] ]]
   [d? red-rose poppy pelargonia]
   [d? bluebell forget-me-not violet]
   [d? waterlily lily-of-the-valley white-rose]
   :range ... ]
The :range keyword specifies the range ordering for the subtree.

Expansion of hierarchical dec. trees
[?[a? [b? [c? red blue] [c? blue white]] [b? [c? white red] [c? red blue] ]]
   [d? red-rose poppy pelargonia]
   [d? bluebell forget-me-not violet]
   [d? waterlily lily-of-the-valley white-rose]
   :range ... ]
expands (substituting the subtree for the outcome red) to
[?[a? [b? [c? [d? red-rose poppy pelargonia] blue] [c? blue white]]
      [b? [c? white [d? red-rose poppy pelargonia]] [c? [d? red-rose poppy pelargonia] blue] ]]
   [d? red-rose poppy pelargonia]
   [d? bluebell forget-me-not violet]
   [d? waterlily lily-of-the-valley white-rose]
   :range ... ]
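
The expansion step itself is a plain substitution. A small sketch (illustration only; representing the outcome-to-subtree association as an alist is an ad hoc choice):

(defun expand-dtree (tree subtrees)
  "Replace every outcome that has an entry in SUBTREES (an alist outcome -> subtree)
by the corresponding subtree."
  (cond ((atom tree)
         (let ((sub (assoc tree subtrees)))
           (if sub (cdr sub) tree)))
        (t (cons (first tree)
                 (mapcar (lambda (branch) (expand-dtree branch subtrees))
                         (rest tree))))))

;; Expanding only the RED positions, as in the expression above:
(expand-dtree '(a? (b? (c? red blue) (c? blue white))
                   (b? (c? white red) (c? red blue)))
              '((red . (d? red-rose poppy pelargonia))))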

5. Partial evaluation of decision tree
[a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? green blue]]]
{[: a true][: c true]}
Range of a is <true, false>; similarly for b and c.
The value of b is not available -- partial evaluation is a way out: [b? red blue]
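
Partial evaluation follows the same recursion as plain evaluation but keeps any decision element whose term has no value. A sketch (illustration only, reusing term-range from the first sketch):

(defun partial-eval-dtree (tree assignment)
  "Resolve decision elements whose term has a value in ASSIGNMENT; keep the others."
  (if (atom tree)
      tree
      (let ((entry (assoc (first tree) assignment)))
        (if entry
            (partial-eval-dtree
             (nth (1+ (position (cdr entry) (term-range (first tree)))) tree)
             assignment)
            (cons (first tree)
                  (mapcar (lambda (branch) (partial-eval-dtree branch assignment))
                          (rest tree)))))))

;; With a = true and c = true but b unknown this returns (B? RED BLUE), as above.
(partial-eval-dtree '(a? (b? (c? red green) (c? blue white))
                         (b? (c? white red) (c? green blue)))
                    '((a? . true) (c? . true)))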

Combo: Partial evaluation of decision tree with term probabilities and expected outcome
[a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? green blue]]]
{[: a <0.65, 0.35>][: c true]}
Range of a is <true, false>; similarly for b and c.
The value of b is not available -- partial evaluation is a way out, giving a residual decision element [b? ... :order ...] whose branches are outcome distributions.
Assign values to the outcomes as before (red 10.000, white 4.000, green 25.000, plus a value for blue), obtaining [b? v1 v2] where v1 and v2 are the expected values of the two branches.

Summary of variations
- Basic decision tree with definite (no probabilities) assignments of values
- Probabilistic assignments to terms (features)
- Continuous-valued outcome, expected outcome
- Probabilistic assignments to terminal nodes
- Hierarchical decision trees
- Incomplete assignments to terms, suggesting partial evaluation
Combinations of these are also possible!

Operations on decision trees
- Plain evaluation
- Partial evaluation
- Inverse evaluation
- Reorganization (for more efficient interpretation)
- Acquisition: obtaining the discrete structure from reliable sources
- Learning: using a training set of expected outcomes to adjust probabilities in the tree
- Combining with other techniques, e.g. logic-based ones

Decision trees in real life
- In user manuals: error identification in cars, household machines, etc.
- 'User help' in software systems
- Telephone exchanges
- Botanic schemata
- Commercial decision making: insurance, finance

Decision trees in A.I. and robotics
- Current situation described as features/values
- From current situation to suggested action(s) (for immediate execution, or to be checked out)
- From current situation to an extension of it (i.e., additional features/values)
- From current situation to predicted future situation (causal reasoning)
- From current situation to inferred earlier situation (reverse causal reasoning; direct or inverse evaluation)
- From inferred future or past situation, to action(s)
- Learning is important for artificial intelligence

Decision trees and logic
(Figure: the same decision tree over a, b and c with outcomes red, green, blue and white.)
Each root-to-leaf path corresponds to an implication, e.g. (a ∧ b ∧ c → red), (a ∧ b ∧ ¬c → green), ...
In clause form: (and (or -a -b -c red) (or -a -b c green) (or -a b -c blue) ...)
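
The clause form can be generated mechanically, one clause per root-to-leaf path. A sketch (illustration only; it assumes two-valued terms and writes them without the question mark so that the output matches the clauses above):

(defun neg (term)
  "Build the symbol -TERM used in the clause notation above."
  (intern (concatenate 'string "-" (symbol-name term))))

(defun dtree-clauses (tree &optional path)
  "One clause per root-to-leaf path: taking the true branch adds the negated
term to the clause, taking the false branch adds the term itself."
  (if (atom tree)
      (list (append '(or) (reverse path) (list tree)))
      (append (dtree-clauses (second tree) (cons (neg (first tree)) path))
              (dtree-clauses (third tree)  (cons (first tree) path)))))

;; (dtree-clauses '(a (b (c red green) (c blue white))
;;                    (b (c white red) (c green blue))))
;; => ((OR -A -B -C RED) (OR -A -B C GREEN) (OR -A B -C BLUE) ... (OR A B C BLUE))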

Causal Nets
A causal net consists of:
- A set of independent terms
- A partially ordered set of dependent terms
- An assignment of a dependency expression to each dependent term (these may be decision trees)
The dependency expression for a term may use independent terms, and also dependent terms that are lower than the term at hand. This means the dependency graph is not cyclic.

An example (due to Eugene Charniak)
- When my wife leaves home, she often (not always) turns on the outside light
- She may also turn it on when she expects a guest
- When nobody is home, the dog is often outside
- If the dog has stomach troubles, it is also often left outside
- If the dog is outside, I will probably hear it barking when I approach home
- However, possibly it does not bark, and possibly I hear another dog and think it's mine
Problem: given the information I obtain when I approach the house, what is the likelihood of my wife being at home?

Decision trees for dependent terms
lights-are-on [noone-home? <70 30> <20 80>]
dog-outside [noone-home? [dog-sick? <80 20> <70 30>] [dog-sick? <70 30> <30 70>] ]
I-hear-dog [dog-outside? <80 20> <10 90>]
Independent terms: noone-home, dog-sick
Dependent terms: lights-are-on, dog-outside < I-hear-dog
Notation: integers represent percentages, 70 ~ 0.70
Interpretation: if no-one is home, then there is a 70% chance that the outside lights are on and a 30% chance that they are not. If someone is home, the chances are 20% and 80%, respectively.

Decision trees, concise notation
Full notation:
lights-are-on [noone-home? <70 30> <20 80>]
dog-outside [noone-home? [dog-sick? <80 20> <70 30>] [dog-sick? <70 30> <30 70>] ]
I-hear-dog [dog-outside? <80 20> <10 90>]
Concise notation (writing only the probability that the dependent term is true):
lights-are-on [noone-home? 70% 20%]
dog-outside [noone-home? [dog-sick? 80% 70%] [dog-sick? 70% 30%] ]
I-hear-dog [dog-outside? 80% 10%]

Causal net using decision trees
lights-are-on [noone-home? 70% 20%]
dog-outside [noone-home? [dog-sick? 80% 70%] [dog-sick? 70% 30%] ]
I-hear-dog [dog-outside? 80% 10%]
This is simply a hierarchical decision tree with probabilities in the terminal nodes! If the value assignments for noone-home and dog-sick are given, we can calculate the probabilities for the dependent variables. However, it is the inverse operation that we want.
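
The forward calculation mentioned here is easy to sketch: process the dependent terms in their partial order and feed each computed probability into the later trees. The representation below, with one probability-of-true per leaf, is an ad hoc rendering of the concise notation; the propagation is exact in this example because each tree only mentions terms that are independent of one another.

(defparameter *causal-net*
  '((lights-are-on (noone-home 0.70 0.20))
    (dog-outside   (noone-home (dog-sick 0.80 0.70) (dog-sick 0.70 0.30)))
    (i-hear-dog    (dog-outside 0.80 0.10)))
  "Each dependent term with its decision tree; every leaf is P(term is true).")

(defun prob-true (tree env)
  "Probability of true for a dependency tree, where ENV maps every term the
tree mentions to its own probability of being true."
  (if (numberp tree)
      tree
      (let ((p (cdr (assoc (first tree) env))))
        (+ (* p       (prob-true (second tree) env))
           (* (- 1 p) (prob-true (third tree)  env))))))

;; With noone-home = true and dog-sick = false:
;; lights-are-on -> 0.70, dog-outside -> 0.70, i-hear-dog -> 0.59.
(let ((env '((noone-home . 1.0) (dog-sick . 0.0))))
  (dolist (entry *causal-net* env)
    (push (cons (first entry) (prob-true (second entry) env)) env)))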

Inverse operation
Consider this simple case first: lights-are-on [noone-home? 70% 20%]
If it is known that lights-are-on is true, what is the probability for noone-home?
Possible combinations (conditional probabilities):
                       noone-home true    noone-home false
  lights-are-on true        70%                20%
  lights-are-on false       30%                80%
Suppose noone-home is true in 20% of overall cases; multiplying in this prior gives the joint probabilities:
                       noone-home true    noone-home false
  lights-are-on true        14%                16%
  lights-are-on false        6%                64%
Given lights-are-on, noone-home has 14/(14+16) = 14/30 = 46.7% probability.
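
The same 14/30 figure follows directly from Bayes' rule; a one-expression sketch:

;; P(noone-home | lights-are-on) = P(lights | noone) * P(noone) / P(lights)
(let* ((p-noone 0.20)                    ; prior: noone-home in 20% of all cases
       (p-lights-if-noone   0.70)        ; from lights-are-on [noone-home? 70% 20%]
       (p-lights-if-someone 0.20)
       (p-lights (+ (* p-noone p-lights-if-noone)
                    (* (- 1 p-noone) p-lights-if-someone))))
  (/ (* p-noone p-lights-if-noone) p-lights))   ; => 0.46666667, i.e. 14/30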

Inverse operation
This will be continued at the next lecture. Read these slides (from the course webpage) and the associated lecture note before that lecture (especially if you are not so familiar with probability theory).