CS 4100 Artificial Intelligence Prof. C. Hafner Class Notes Jan 17, 2012.


Outline Finish python examples Finish reflex agent discussion Knowledge-based agent – Requirements KB agent as a computer program Example of KB agent: the wumpus world Entailment and Inference Propositional logic (review) Propositional KB for the wumpus world

Vacuum-cleaner world (review) Percepts: location and contents, e.g., [A, Dirty] Actions: Left, Right, Suck, NoOp Production rules: [A, Dirty] → Suck [B, Dirty] → Suck [A, Clean] → Right [B, Clean] → Left
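The production rules above can be sketched directly as a reflex agent in Python; the function name is hypothetical, but the rules are exactly the four listed on the slide.

```python
def reflex_vacuum_agent(percept):
    """Map a (location, status) percept to an action via production rules."""
    location, status = percept
    if status == "Dirty":     # [A, Dirty] -> Suck and [B, Dirty] -> Suck
        return "Suck"
    if location == "A":       # [A, Clean] -> Right
        return "Right"
    return "Left"             # [B, Clean] -> Left
```

Note that the agent is a pure function of the current percept: no state, no memory, which is exactly the limitation discussed next.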

Reflex Agent’s Abilities Consider the “reflex” vacuum agent – Lacks explicit knowledge of the environment – Lacks memory of its own past behavior – Lacks knowledge of the effects of its actions The environment may behave reasonably, but the agent has no machinery for understanding that concept and making predictions Note: what do we mean by an environment that behaves reasonably??? – According to OUR COMMON SENSE!

“Reasonable” behavior of the environment? 1. If dirty and action ≠ Suck, that square will be dirty the next time it is perceived 2. If action is Suck or NoOp then the next percept is the same location 3. If action is Left or Right then the next percept is the expected location (note: what if we are at B and go Right??) What kind of agent could include this kind of common sense knowledge and take advantage of it?

Consider extensions of the vacuum agent Consider an environment with three squares: A, B, C, and the same percepts and actions – specify the production rules What about a grid-shaped environment (say, 2 by 2)? – Would need additional actions: (Up, Down, Left, Right) Define a declarative representation of: – The world – The agent’s history Consider how that knowledge would be used
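One possible answer to the three-square exercise is sketched below (the sweep policy and function name are my assumptions, not from the notes). Notice that a clean middle square already forces the agent to remember which direction it was moving, which previews why pure reflex rules stop being enough.

```python
def three_square_agent(percept, last_move="Right"):
    """Reflex-with-state agent for squares A, B, C: sweep to one end, bounce back."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":       # left end: must go right
        return "Right"
    if location == "C":       # right end: must go left
        return "Left"
    return last_move          # middle square: keep moving in the same direction
```

The `last_move` parameter is the minimal piece of history needed; a richer declarative representation of the world and the agent's history is the subject of the following slides.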

Evaluating An Agent’s Performance Consider a vacuum agent trying to be rational under a performance measure that assigns a cost to actions and a reward for keeping the room clean – How to represent this knowledge? How would the agent use it? – Would it need knowledge of its own prior actions? Consider a vacuum agent with parameters: Ci = cost of each action i, P = penalty for each dirty square at each time step Strategy: minimize total cost + total penalties NOTE: what else would we want to know?
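The performance measure above can be made concrete with a small sketch. The specific cost and penalty numbers below are illustrative assumptions; only the structure (per-action cost Ci plus per-step penalty P for each dirty square) comes from the slide.

```python
# Hypothetical cost table: C_i for each action i, and penalty P per dirty
# square per time step.
ACTION_COST = {"Suck": 2, "Left": 1, "Right": 1, "NoOp": 0}
PENALTY = 5

def total_cost(actions, dirty_counts):
    """Total cost = sum of action costs + penalty for each dirty square
    at each time step. dirty_counts[t] = number of dirty squares at step t."""
    cost = sum(ACTION_COST[a] for a in actions)
    penalty = sum(PENALTY * n for n in dirty_counts)
    return cost + penalty
```

A rational agent under this measure would choose the action sequence minimizing `total_cost`, which may mean tolerating a brief penalty rather than paying for a long trip.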

Elements of a Knowledge-based agent (the knowledge maintenance part) (diagram: World Knowledge Base + Percept → Updated World Knowledge Base) We need 3 things: A formal language for expressing knowledge declaratively (“knowledge representation”) A knowledge base design to express what is known (“knowledge engineering” or “ontology”) Algorithms to use and update the KB (“automated inference”)

Ontology Design (“Knowledge Engineering”) The KB has two parts: general world knowledge (e.g., how someone becomes POTUS) and facts that happen to be true in the current world state (dynamic), e.g., Obama is POTUS. New percepts are added to the dynamic part

Example World Knowledge Base General knowledge includes family relations such as: a female with the same parents is a sister a male with the same parents is a brother parents of your parents are your grandparents a male child of your brother or sister is your nephew Facts include: Sam and Mary have a male child named Max New Percept: Sam and Mary have a female child named Sarah Updated Knowledge Sam and Mary have a female child named Sarah Max has a sister named Sarah Sarah has a brother named Max Percept: Mary has a father named Tom -- ?? Percept: Max has a son named Simon -- ??
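The update step described above (a new percept arrives, the sibling rules fire, and derived relations are added) can be sketched as follows. The tuple representation `(parents, name, sex)` and the function name are assumptions for illustration; the rules are the sister/brother rules from the slide.

```python
def update_kb(kb, percept):
    """Add a new child fact to the KB and apply the sibling rules:
    a female child with the same parents is a sister; a male one, a brother."""
    parents, name, sex = percept
    inferred = []
    for other_parents, other, other_sex in kb:
        if other_parents == parents and other != name:
            rel = "sister" if sex == "female" else "brother"
            inferred.append((other, "has_" + rel, name))
            other_rel = "sister" if other_sex == "female" else "brother"
            inferred.append((name, "has_" + other_rel, other))
    kb.append(percept)
    return inferred

# The slide's example: Max is known; the percept adds Sarah.
kb = [(("Sam", "Mary"), "Max", "male")]
derived = update_kb(kb, (("Sam", "Mary"), "Sarah", "female"))
# derived now records that Max has a sister Sarah and Sarah has a brother Max
```

The two “??” percepts on the slide (Tom, Simon) would need grandparent and nephew rules beyond this sketch, which is presumably the point of the question.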

KB agent as a computer program The declarative approach: beliefs are represented as a set of sentences in a formal logic A set of sentences is stored in a database (called a knowledge base). The sentences are believed to be true by the agent. Other beliefs of the agent can be inferred from these. Trade-off in choosing a representation language: – Expressiveness of language v. Tractability of inference – Horn clauses ⊂ logic sentences

Formal logic basis of automated reasoning If beliefs include both p and p → q, then the agent can infer q. (This logical inference rule is called modus ponens.) A generalized version of this is the Resolution Rule. Example: – T1. raining → ground is wet – T2. ground is wet → ground is slippery – P1. raining Infer: ground is wet Infer: ground is slippery This is called “chaining,” and by chaining we can produce complex reasoning sequences
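The chaining just described is easy to sketch in code: repeatedly apply modus ponens until no new facts appear. Representing a rule p → q as a `(premise, conclusion)` pair is an assumption for illustration.

```python
def forward_chain(facts, rules):
    """Derive all facts reachable by modus ponens.
    facts: set of propositions; rules: list of (premise, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)  # modus ponens: premise, premise -> concl
                changed = True
    return facts

# The slide's example: T1, T2, and percept P1.
rules = [("raining", "ground is wet"),
         ("ground is wet", "ground is slippery")]
derived = forward_chain({"raining"}, rules)
```

From `raining`, the first rule yields `ground is wet`, and chaining through the second rule yields `ground is slippery`.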

Wumpus World PEAS description Performance measure – gold +1000, death -1000, -1 per step, -10 for using the arrow Environment – Squares adjacent to wumpus are smelly – Squares adjacent to pit are breezy – Glitter iff gold is in the same square – Shooting kills wumpus if you are facing it – Shooting uses up the only arrow – Grabbing picks up gold if in same square – Releasing drops the gold in same square Actions: Left turn, Right turn, Forward, Grab, Release, Shoot Sensors: Stench, Breeze, Glitter, Bump, Scream

Exploring a wumpus world (a sequence of figures showing the agent’s step-by-step exploration and the knowledge it accumulates at each step)

What did we just do? We analyzed and described graphically what knowledge the agent needs to represent and how it evolves. This is the first step in building a formal framework for this domain.

Logic in general Logics are formal languages for representing information such that conclusions can be drawn Syntax defines the acceptable sentences in the language (called wffs, or well-formed formulas) Semantics defines the "meaning" of sentences – i.e., how to decide the truth of a sentence in a world model E.g., the language of arithmetic – x+2 > y is a sentence; x2+y > is not a sentence (syntax) – x+2 > y is true iff the number x+2 is greater than the number y – x+2 > y is true in a world where x = 7, y = 1 – x+2 > y is false in a world where x = 1, y = 7

Entailment and Inference Entailment means that one thing follows from another: KB ╞ α Knowledge base KB entails sentence α if and only if α is true in all world models where KB is true – E.g., the KB containing “the Giants won” and “the Reds won” entails “The Giants won and the Reds won” – E.g., x+y = 4 entails 4 = x+y – Entailment is a relationship between sentences that is based on their meaning (semantics) Remember: KB is a database of what the agent knows
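The definition above suggests a direct (if exponential) test for entailment: enumerate every model and check that α holds wherever the KB holds. Representing sentences as Python functions of a model is an assumption for illustration.

```python
from itertools import product

def entails(symbols, kb, alpha):
    """KB |= alpha iff alpha is true in all models where KB is true."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False   # found a model of KB where alpha fails
    return True

# The slide's example: the KB says the Giants won and the Reds won.
kb = lambda m: m["GiantsWon"] and m["RedsWon"]
alpha = lambda m: m["GiantsWon"]
```

This is the model-checking view of entailment: purely semantic, no inference rules involved.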

Inference KB ├ i α = sentence α can be derived from KB by an algorithmic procedure i. It is a relationship between sentences based on their syntax. Soundness: i is sound if whenever KB ├ i α, it is also true that KB ╞ α Completeness: i is complete if whenever KB ╞ α, it is also true that KB ├ i α Preview: later we will define a logic (first-order logic) which is expressive enough to say many things of interest, and for which there exists a sound and complete inference procedure. That is, the procedure will answer any question whose answer follows from what is known by the KB.

Proof Theory and Model Theory for Logic (diagram: Sentences, as symbolic expressions, denote Facts in the World (Model); Inference relates sentences syntactically, while Entailment relates them through the facts they denote)

Propositional logic: Syntax Propositional logic is the simplest logic – illustrates automated reasoning algorithms – The proposition symbols P1, P2, etc. are (atomic) sentences – Compound sentences: – If S is a sentence, ¬S is a sentence (negation) – If S1 and S2 are sentences, S1 ∧ S2 is a sentence (conjunction) – If S1 and S2 are sentences, S1 ∨ S2 is a sentence (disjunction) – If S1 and S2 are sentences, S1 ⇒ S2 is a sentence (implication) – If S1 and S2 are sentences, S1 ⇔ S2 is a sentence (biconditional)

Examples Propositions: Red, Yellow, Round, Oblong, Apple, Banana Red ∧ Round Yellow ∧ ¬Round Oblong ∨ Apple

Propositional logic: Semantics A model assigns true/false to each proposition symbol E.g. P1,2 = false, P2,2 = true, P3,1 = false With 3 symbols, 8 possible models can be enumerated Rules for evaluating truth of compound sentences in model m: ¬S is true iff S is false S1 ∧ S2 is true iff S1 is true and S2 is true S1 ∨ S2 is true iff S1 is true or S2 is true S1 ⇒ S2 is true iff S1 is false or S2 is true, i.e., is false iff S1 is true and S2 is false S1 ⇔ S2 is true iff S1 ⇒ S2 is true and S2 ⇒ S1 is true A simple recursive process evaluates an arbitrary sentence, e.g., ¬P1,2 ∧ (P2,2 ∨ P3,1) = true ∧ (true ∨ false) = true ∧ true = true
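The simple recursive evaluation process mentioned above can be sketched directly. Encoding sentences as nested tuples like `("and", s1, s2)` is an assumption for illustration; the truth rules are the ones on the slide.

```python
def pl_true(sentence, model):
    """Recursively evaluate a propositional sentence in a model (dict
    mapping proposition symbols to True/False)."""
    if isinstance(sentence, str):          # atomic: look up the symbol
        return model[sentence]
    op, *args = sentence
    if op == "not":
        return not pl_true(args[0], model)
    if op == "and":
        return pl_true(args[0], model) and pl_true(args[1], model)
    if op == "or":
        return pl_true(args[0], model) or pl_true(args[1], model)
    if op == "implies":                    # false only when True => False
        return (not pl_true(args[0], model)) or pl_true(args[1], model)
    if op == "iff":
        return pl_true(args[0], model) == pl_true(args[1], model)
    raise ValueError("unknown connective: " + op)

# The slide's example, with P1,2 = false, P2,2 = true, P3,1 = false:
model = {"P12": False, "P22": True, "P31": False}
s = ("and", ("not", "P12"), ("or", "P22", "P31"))
```

Evaluating `s` in `model` follows exactly the worked computation on the slide.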

Semantic Reasoning: Truth Tables Discuss “material implication” [the ⇒ connective] For a given set of proposition symbols P1 ... PN, the maximum number of models is 2^N The number of models of a given set of sentences is at most 2^(number of proposition symbols). (However, it may be less – why???)

Logical equivalence (can be proved using truth tables) Two sentences are logically equivalent iff they are true in the same models: α ≡ β iff α ╞ β and β ╞ α
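The truth-table proof of equivalence can be sketched as follows: α ≡ β iff they agree in every model. As in the earlier entailment sketch, representing sentences as Python functions of a model is an assumption for illustration.

```python
from itertools import product

def equivalent(symbols, alpha, beta):
    """alpha ≡ beta iff they have the same truth value in every model."""
    for values in product([True, False], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if alpha(m) != beta(m):
            return False   # a model where they disagree: not equivalent
    return True

# De Morgan's law as an example: ¬(A ∧ B) ≡ ¬A ∨ ¬B
lhs = lambda m: not (m["A"] and m["B"])
rhs = lambda m: (not m["A"]) or (not m["B"])
```

Checking all 2^N rows of the truth table is exactly what this loop does, which is why the method is complete but exponential in the number of symbols.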