CSC 423 ARTIFICIAL INTELLIGENCE LOGICAL AGENTS

2 Introduction This chapter introduces knowledge-based agents. The concepts that we discuss—the representation of knowledge and the reasoning processes that bring knowledge to life—are central to the entire field of artificial intelligence. Humans, it seems, know things and do reasoning. Knowledge and reasoning are also important for artificial agents because they enable successful behaviors that would be very hard to achieve otherwise. We have seen that knowledge of action outcomes enables problem-solving agents to perform well in complex environments.

3 Introduction (Cont) Knowledge and reasoning also play a crucial role in dealing with partially observable environments. A knowledge-based agent can combine general knowledge with current percepts to infer hidden aspects of the current state prior to selecting actions. For example, a physician diagnoses a patient—that is, infers a disease state that is not directly observable—prior to choosing a treatment. Some of the knowledge that the physician uses is in the form of rules learned from textbooks and teachers, and some is in the form of patterns of association that the physician may not be able to consciously describe. If it is inside the physician's head, it counts as knowledge.

4 KNOWLEDGE-BASED AGENTS The central component of a knowledge-based agent is its knowledge base, or KB. Informally, a knowledge base is a set of sentences. (Here “sentence” is used as a technical term. It is related but is not identical to the sentences of English and other natural languages.) Each sentence is expressed in a language called a knowledge representation language and represents some assertion about the world. There must be a way to add new sentences to the knowledge base and a way to query what is known. The standard names for these tasks are TELL and ASK, respectively. Both tasks may involve inference—that is, deriving new sentences from old. In logical agents, which are the main subject of study in this chapter, inference must obey the fundamental requirement that when one ASKs a question of the knowledge base, the answer should follow from what has been told (or rather, TELLed) to the knowledge base previously.
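As a rough sketch of this TELL/ASK interface (the class and method names below are illustrative, not from the slides), a knowledge base can be viewed as a store of sentences plus an inference routine behind ASK:

```python
class KnowledgeBase:
    """A toy knowledge base: a set of sentences plus a placeholder inference routine."""

    def __init__(self, background=()):
        self.sentences = list(background)   # initial background knowledge

    def tell(self, sentence):
        """Add a new sentence (an assertion about the world) to the KB."""
        self.sentences.append(sentence)

    def ask(self, query):
        """Return True if the query follows from what has been TELLed.
        A real agent would run a sound inference procedure here;
        this sketch only checks for literal membership."""
        return query in self.sentences
```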

5 A GENERIC KNOWLEDGE-BASED AGENT The original slide shows the outline of a knowledge-based agent program. Like all our agents, it takes a percept as input and returns an action. The agent maintains a knowledge base, KB, which may initially contain some background knowledge. Each time the agent program is called, it does two things. First, it TELLs the knowledge base what it perceives. Second, it ASKs the knowledge base what action it should perform. In the process of answering this query, extensive reasoning may be done about the current state of the world, about the outcomes of possible action sequences, and so on. Once the action is chosen, the agent records its choice with TELL and executes the action. The second TELL is necessary to let the knowledge base know that the hypothetical action has actually been executed.

6 A GENERIC KNOWLEDGE-BASED AGENT (Cont) The details of the representation language are hidden inside two functions that implement the interface between the sensors and actuators and the core representation and reasoning system. MAKE-PERCEPT-SENTENCE takes a percept and a time and returns a sentence asserting that the agent perceived the percept at the given time. MAKE-ACTION-QUERY takes a time as input and returns a sentence that asks what action should be performed at that time. The details of the inference mechanisms are hidden inside TELL and ASK.
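A minimal sketch of the agent program outlined on the two slides above, assuming a kb object with tell and ask methods, where ask is taken here to return an action rather than a Boolean answer; the helper names mirror the functions mentioned on the slide:

```python
def make_percept_sentence(percept, t):
    # A sentence asserting that the agent perceived `percept` at time t.
    return ("percept", percept, t)

def make_action_query(t):
    # A sentence asking which action should be performed at time t.
    return ("action?", t)

def make_action_sentence(action, t):
    # A sentence recording that `action` was executed at time t.
    return ("did", action, t)

class KBAgent:
    def __init__(self, kb):
        self.kb = kb    # knowledge base, possibly containing background knowledge
        self.t = 0      # time counter

    def __call__(self, percept):
        # First, TELL the knowledge base what the agent perceives.
        self.kb.tell(make_percept_sentence(percept, self.t))
        # Second, ASK the knowledge base what action to perform;
        # any reasoning about the world happens inside this call.
        action = self.kb.ask(make_action_query(self.t))
        # Finally, TELL the knowledge base that the chosen action was executed.
        self.kb.tell(make_action_sentence(action, self.t))
        self.t += 1
        return action
```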

7 The Wumpus World The wumpus world is a cave consisting of rooms connected by passageways. Lurking somewhere in the cave is the wumpus, a beast that eats anyone who enters its room. The wumpus can be shot by an agent, but the agent has only one arrow. Although the wumpus world is rather tame by modern computer game standards, it makes an excellent testbed environment for intelligent agents.

8 The Wumpus World (Cont) The cave also contains a single static wumpus (a beast) and a heap of gold, and some rooms contain bottomless pits. The agent's possible actions are to move forward one square at a time, turn left or right by 90°, pick up the gold by grabbing it, and shoot its single arrow in the direction it is facing.
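A small sketch (illustrative names only, not from the slides) of how the action repertoire and a percept might be represented:

```python
from enum import Enum

class Action(Enum):
    FORWARD = "move forward one square"
    TURN_LEFT = "turn left 90 degrees"
    TURN_RIGHT = "turn right 90 degrees"
    GRAB = "pick up the gold in the current square"
    SHOOT = "fire the single arrow in the facing direction"

# In the textbook version of the world a percept is a vector such as
# (stench, breeze, glitter, bump, scream); here, a stench and nothing else:
percept = (True, False, False, False, False)
```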

9 Logic A logic must also define the semantics of the language. Loosely speaking, semantics has to do with the "meaning" of sentences. In logic, the definition is more precise. The semantics of the language defines the truth of each sentence with respect to each possible world. For example, the usual semantics adopted for arithmetic specifies that the sentence "x + y = 4" is true in a world where x is 2 and y is 2, but false in a world where x is 1 and y is 1. In standard logics, every sentence must be either true or false in each possible world—there is no "in between."
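A tiny sketch of the arithmetic example (a hypothetical helper using Python's expression evaluator): the same sentence is true with respect to one possible world and false with respect to another.

```python
def holds(sentence, world):
    """Evaluate a sentence, written as a Python expression, in a possible world
    given as an assignment of values to its symbols."""
    return eval(sentence, {}, dict(world))

print(holds("x + y == 4", {"x": 2, "y": 2}))   # True:  the sentence holds in this world
print(holds("x + y == 4", {"x": 1, "y": 1}))   # False: it does not hold in this one
```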

10 Semantic Net Draw a semantic net representing the following information. Mr. Bloggs collects cars. Cars are, of course, a type of vehicle and are normally propelled by an engine. All those that Mr. Bloggs owns were made before ___. He has a smart black Austin A4, with polished chromework, and a rather battered blue Mini, with very rusty chromework. He also has a sleek black Jaguar, with leather seats.

11 Semantic Net (Cont) (The semantic net diagram for this exercise appears as a figure on the original slide.)
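Although the diagram is not reproduced here, the same information can be captured as a set of (node, relation, node) triples, which is essentially what a semantic net encodes (a rough sketch with illustrative names):

```python
# Each triple is (subject, relation, object).
semantic_net = [
    ("Mr Bloggs", "collects", "cars"),
    ("car", "is_a", "vehicle"),
    ("car", "propelled_by", "engine"),          # normally
    ("Austin", "is_a", "car"),
    ("Austin", "owned_by", "Mr Bloggs"),
    ("Austin", "colour", "black"),
    ("Austin", "chromework", "polished"),
    ("Mini", "is_a", "car"),
    ("Mini", "owned_by", "Mr Bloggs"),
    ("Mini", "colour", "blue"),
    ("Mini", "chromework", "very rusty"),
    ("Jaguar", "is_a", "car"),
    ("Jaguar", "owned_by", "Mr Bloggs"),
    ("Jaguar", "colour", "black"),
    ("Jaguar", "seats", "leather"),
]

# Simple query: which cars does Mr Bloggs own?
owned = [s for (s, r, o) in semantic_net if r == "owned_by" and o == "Mr Bloggs"]
print(owned)   # ['Austin', 'Mini', 'Jaguar']
```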

12 PROPOSITIONAL LOGIC: A VERY SIMPLE LOGIC Propositional logic is also called Boolean logic, after the logician George Boole. The syntax of propositional logic defines the allowable sentences. The atomic sentences—the indivisible syntactic elements—consist of a single proposition symbol. Each such symbol stands for a proposition that can be true or false.

13 PROPOSITIONAL LOGIC: A VERY SIMPLE LOGIC (Cont) Complex sentences are built from simpler ones with the logical connectives:
NOT: If S1 is a sentence, then ¬S1 is a sentence (negation).
AND: If S1 and S2 are sentences, then S1 ∧ S2 is a sentence (conjunction).
OR: If S1 and S2 are sentences, then S1 ∨ S2 is a sentence (disjunction).
IMPLIES: If S1 and S2 are sentences, then S1 ⇒ S2 is a sentence (implication).
IF AND ONLY IF: If S1 and S2 are sentences, then S1 ⇔ S2 is a sentence (biconditional).
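A small sketch (illustrative, not from the slides) of how such sentences can be represented as nested tuples and evaluated against a model that assigns a truth value to each proposition symbol:

```python
# A sentence is either a proposition symbol (a string) or a tuple (op, *args).
def pl_true(sentence, model):
    """Evaluate a propositional sentence in a model (dict: symbol -> bool)."""
    if isinstance(sentence, str):              # atomic sentence
        return model[sentence]
    op, *args = sentence
    if op == "not":
        return not pl_true(args[0], model)
    if op == "and":
        return pl_true(args[0], model) and pl_true(args[1], model)
    if op == "or":
        return pl_true(args[0], model) or pl_true(args[1], model)
    if op == "implies":
        return (not pl_true(args[0], model)) or pl_true(args[1], model)
    if op == "iff":
        return pl_true(args[0], model) == pl_true(args[1], model)
    raise ValueError(f"unknown operator {op!r}")

# (P AND Q) IMPLIES P, evaluated in the model where P is true and Q is false:
print(pl_true(("implies", ("and", "P", "Q"), "P"), {"P": True, "Q": False}))   # True
```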

14 A BNF (Backus–Naur Form) grammar of sentences in propositional logic

15 Truth Table
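The table itself appears as a figure on the original slide. The following sketch prints a standard truth table for the five connectives, reusing the pl_true evaluator from the sketch above:

```python
from itertools import product

rows = []
for p, q in product([False, True], repeat=2):
    model = {"P": p, "Q": q}
    rows.append((p, q,
                 pl_true(("not", "P"), model),
                 pl_true(("and", "P", "Q"), model),
                 pl_true(("or", "P", "Q"), model),
                 pl_true(("implies", "P", "Q"), model),
                 pl_true(("iff", "P", "Q"), model)))

header = ["P", "Q", "¬P", "P∧Q", "P∨Q", "P⇒Q", "P⇔Q"]
print("  ".join(f"{h:5}" for h in header))
for row in rows:
    print("  ".join(f"{str(v):5}" for v in row))
```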

16 Equivalence The first concept is logical equivalence: two sentences α and β are logically equivalent if they are true in the same set of models. We write this as α ≡ β. For example, we can easily show (using truth tables) that P ∧ Q and Q ∧ P are logically equivalent. Logical equivalences play much the same role in logic as arithmetic identities do in ordinary mathematics. An alternative definition of equivalence is as follows: for any two sentences α and β, α ≡ β if and only if α ⊨ β and β ⊨ α. (Recall that ⊨ means entailment.)
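A brief model-checking sketch of the example (reusing pl_true from above): P ∧ Q and Q ∧ P are equivalent because they are true in exactly the same models, whereas P ∨ Q and P ∧ Q are not:

```python
from itertools import product

def equivalent(a, b, symbols):
    """Two sentences are logically equivalent iff they are true in the same models."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if pl_true(a, model) != pl_true(b, model):
            return False
    return True

print(equivalent(("and", "P", "Q"), ("and", "Q", "P"), ["P", "Q"]))   # True
print(equivalent(("or", "P", "Q"), ("and", "P", "Q"), ["P", "Q"]))    # False
```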

17 Validity

18 Satisfiability Many problems in computer science are really satisfiability problems. For example, all the constraint satisfaction problems are essentially asking whether the constraints are satisfiable by some assignment. With appropriate transformations, search problems can also be solved by checking satisfiability. Validity and satisfiability are of course connected: α is valid if and only if ¬α is unsatisfiable, and conversely α is satisfiable if and only if ¬α is not valid.
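A brute-force sketch of this connection (hypothetical helpers, again reusing pl_true): a sentence is valid exactly when its negation is unsatisfiable.

```python
from itertools import product

def satisfiable(sentence, symbols):
    """True if some assignment of truth values to the symbols makes the sentence true."""
    return any(pl_true(sentence, dict(zip(symbols, values)))
               for values in product([False, True], repeat=len(symbols)))

def valid(sentence, symbols):
    """A sentence is valid iff its negation is unsatisfiable."""
    return not satisfiable(("not", sentence), symbols)

print(valid(("or", "P", ("not", "P")), ["P"]))          # True: P ∨ ¬P is a tautology
print(satisfiable(("and", "P", ("not", "P")), ["P"]))   # False: P ∧ ¬P has no model
```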

19 Satisfiability (Cont)

20 REASONING PATTERNS IN PROPOSITIONAL LOGIC

21 Conjunctive Normal Form (CNF) The resolution rule applies only to disjunctions of literals, so it would seem to be relevant only to knowledge bases and queries consisting of such disjunctions. How, then, can it lead to a complete inference procedure for all of propositional logic? The answer is that every sentence of propositional logic is logically equivalent to a conjunction of disjunctions of literals. A sentence expressed as a conjunction of disjunctions of literals is said to be in conjunctive normal form or CNF.
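As a concrete illustration of this claim (a worked example, not taken from the slides), converting B ⇔ (P ∨ Q) to CNF proceeds by eliminating the biconditional, eliminating implications, pushing negation inward with De Morgan's law, and distributing ∨ over ∧; the final line is a conjunction of three clauses:

```latex
\begin{align*}
B \Leftrightarrow (P \lor Q)
 &\equiv (B \Rightarrow (P \lor Q)) \land ((P \lor Q) \Rightarrow B)
   && \text{eliminate } \Leftrightarrow \\
 &\equiv (\lnot B \lor P \lor Q) \land (\lnot(P \lor Q) \lor B)
   && \text{eliminate } \Rightarrow \\
 &\equiv (\lnot B \lor P \lor Q) \land ((\lnot P \land \lnot Q) \lor B)
   && \text{De Morgan} \\
 &\equiv (\lnot B \lor P \lor Q) \land (\lnot P \lor B) \land (\lnot Q \lor B)
   && \text{distribute } \lor \text{ over } \land
\end{align*}
```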

22 Inference-Based Agent and the Circuit-Based Agent Comparison The inference-based agent and the circuit-based agent represent the declarative and procedural extremes in agent design. They can be compared along several dimensions:
Conciseness: The circuit-based agent, unlike the inference-based agent, need not have separate copies of its "knowledge" for every time step. Instead, it refers only to the current and previous time steps. Both agents need copies of the "physics" (expressed as sentences or circuits) for every square and therefore do not scale well to larger environments. In environments with many objects related in complex ways, the number of propositions will swamp any propositional agent. Such environments require the expressive power of first-order logic. Propositional agents of both kinds are also poorly suited for expressing or solving the problem of finding a path to a nearby safe square.
Computational efficiency: In the worst case, inference can take time exponential in the number of symbols, whereas evaluating a circuit takes time linear in the size of the circuit.

23 Inference-Based Agent and the Circuit-Based Agent Comparison (Cont)
Completeness: We suggested earlier that the circuit-based agent might be incomplete because of the acyclicity restriction. The reasons for incompleteness are actually more fundamental. First, remember that a circuit executes in time linear in the circuit size. This means that, for some environments, a circuit that is complete (i.e., one that computes the truth value of every determinable proposition) must be exponentially larger than the inference-based agent's KB. Otherwise, we would have a way to solve the propositional entailment problem in less than exponential time, which is very unlikely. A second reason is the nature of the internal state of the agent. The inference-based agent remembers every percept and knows, either implicitly or explicitly, every sentence that follows from the percepts and initial KB. The circuit-based agent, on the other hand, forgets all previous percepts and remembers just the individual propositions stored in registers.

24 Inference-Based Agent and the Circuit-Based Agent Comparison (Cont)
Ease of construction: This is a very important issue about which it is hard to be precise. Certainly, this author found it much easier to state the "physics" declaratively, whereas devising small, acyclic, not-too-incomplete circuits for direct detection of pits seemed quite difficult.