Logical Agents

Logical agents for the Wumpus World

We'll use the Wumpus World domain to explore three (non-exclusive) agent architectures:
- Reflex agents: rules classify situations based on percepts and specify how to react to each possible situation
- Model-based agents: construct an internal model of their world
- Goal-based agents: form goals and try to achieve them

AIMA's Wumpus World

- The agent always starts in square [1,1]
- The agent's task is to find the gold, return to square [1,1], and climb out of the cave

Simple reflex agent: if-then rules

Rules to map percepts into observations:
∀b,g,u,c,t Percept([Stench, b, g, u, c], t) ⇒ Stench(t)
∀s,g,u,c,t Percept([s, Breeze, g, u, c], t) ⇒ Breeze(t)
∀s,b,u,c,t Percept([s, b, Glitter, u, c], t) ⇒ AtGold(t)

Rules to select an action given observations:
∀t AtGold(t) ⇒ Action(Grab, t)

Difficulties:
- Climb: no percept indicates that the agent should climb out; position and holding the gold are not part of the percept sequence
- Loops: percepts repeat when the agent returns to a square, triggering the same response (unless we maintain some internal model of the world)
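The following is a minimal Python sketch of the reflex rules above, assuming a percept is the usual AIMA 5-tuple (stench, breeze, glitter, bump, scream); the function names and encodings are illustrative, not from the slides.

def observations(percept, t):
    # Map a raw percept at time t into observation atoms, per the rules above
    stench, breeze, glitter, bump, scream = percept
    obs = set()
    if stench:
        obs.add(("Stench", t))
    if breeze:
        obs.add(("Breeze", t))
    if glitter:
        obs.add(("AtGold", t))
    return obs

def select_action(obs, t):
    # AtGold(t) => Action(Grab, t); no other action rule is modeled here
    if ("AtGold", t) in obs:
        return "Grab"
    return None  # a pure reflex agent has no memory, so loops and Climb remain problems

Exactly as the slide notes, nothing here can tell the agent when to climb out, because position and whether it holds the gold are not part of the percept.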

Model-based agents

- Acquire and use a model of their environment
- The model must change over time
- The agent should, in general, remember past environments
- The agent must be able to predict how an action will (or may) change the environment

Representing change in logic

Representing a changing world in logic is tricky.
- One way: just change the KB, adding and deleting sentences to reflect changes. But how do we then remember the past, or reason about changes?
- Situation calculus is another way. A situation is a snapshot of the world at some instant in time; when the agent performs action A in situation S1, the result is a new situation S2.

Situation calculus

A situation is a snapshot of the world over an interval of time during which nothing changes. We need a way to associate assertions with a situation:
- Add a situation argument to every predicate, e.g., at(Agent, L) becomes at(Agent, L, s0): at(Agent, L) is true in situation (i.e., state) s0
- Add a special second-order predicate, holds(f, s), meaning "f is true in situation s", e.g., holds(at(Agent, L), s0)

Situation calculus

Add a new function, result(a, s), mapping situation s to the new situation that results from performing action a; e.g., result(forward, s) is the situation reached by moving forward from s.

Example: the action "agent walks to location L2" could be represented by
∀L1, L2, S (at(Agent, L1, S) ∧ onbox(S)) ⇒ at(Agent, L2, result(walk(L2), S))
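As a rough illustration (not part of the original slides), here is how result and holds might be mimicked in Python; the Situation class and the tuple encoding of fluents are assumptions made for the example.

from dataclasses import dataclass

@dataclass(frozen=True)
class Situation:
    facts: frozenset  # e.g., frozenset({("at", "Agent", (1, 1))})

def holds(fluent, s):
    # holds(f, s): fluent f is true in situation s
    return fluent in s.facts

def result(action, s):
    # result(a, s): the situation produced by performing action a in situation s
    name, *args = action
    if name == "walk":
        (dest,) = args
        facts = {f for f in s.facts if f[0] != "at"}  # the agent leaves its old location
        facts.add(("at", "Agent", dest))              # ...and is now at the destination
        return Situation(frozenset(facts))
    return s  # actions this sketch does not model leave the situation unchanged

s0 = Situation(frozenset({("at", "Agent", (1, 1))}))
s1 = result(("walk", (1, 2)), s0)
assert holds(("at", "Agent", (1, 2)), s1)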

Deducing hidden properties

From the perceptual information we obtain in situations, we can infer properties of locations:

∀L,s at(Agent, L, s) ∧ Breeze(s) ⇒ Breezy(L)
∀L,s at(Agent, L, s) ∧ Stench(s) ⇒ Smelly(L)

Neither Breezy nor Smelly needs a situation argument, because pits and the Wumpus do not move around.

Deducing hidden properties II

We also need rules relating aspects of a single world state (as opposed to rules between states). There are two main kinds:

Causal rules reflect the assumed direction of causality:
∀L1,L2,S at(Wumpus, L1, S) ∧ adjacent(L1, L2) ⇒ Smelly(L2)
∀L1,L2,S at(Pit, L1, S) ∧ adjacent(L1, L2) ⇒ Breezy(L2)
Systems with causal rules are called model-based reasoning systems.

Diagnostic rules infer the presence of hidden properties directly from percept-derived information, e.g.:
∀L,S at(Agent, L, S) ∧ Breeze(S) ⇒ Breezy(L)
∀L,S at(Agent, L, S) ∧ Stench(S) ⇒ Smelly(L)
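For concreteness, here is a small sketch of one forward-chaining pass over the diagnostic rules; the tuple encoding of facts is an assumption made for illustration, not anything from the slides.

def infer_hidden_properties(facts):
    # One pass of the diagnostic rules over a set of ground facts
    derived = set()
    for f in facts:
        if f[0] == "at" and f[1] == "Agent":
            _, _, loc, s = f
            if ("Breeze", s) in facts:   # at(Agent,L,S) & Breeze(S) => Breezy(L)
                derived.add(("Breezy", loc))
            if ("Stench", s) in facts:   # at(Agent,L,S) & Stench(S) => Smelly(L)
                derived.add(("Smelly", loc))
    return derived

facts = {("at", "Agent", (1, 2), "s0"), ("Breeze", "s0")}
print(infer_hidden_properties(facts))    # {('Breezy', (1, 2))}

The causal rules run in the opposite direction (from the Wumpus or a pit to its neighbors) and would also need the grid's adjacency relation; they are omitted here to keep the sketch short.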

Blocks world

The blocks world is a micro-world consisting of a table, a set of blocks, and a robot hand.

Some domain constraints:
- Only one block can be directly on another block
- Any number of blocks can be on the table
- The hand can hold only one block

A typical representation:
ontable(b), ontable(d), on(c, d), holding(a), clear(b), clear(c)

It is meant to be a simple model!
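One simple way to hold such a state in code is as a set of ground atoms. The sketch below uses an invented tuple encoding and checks the three domain constraints.

state = {
    ("ontable", "b"), ("ontable", "d"),
    ("on", "c", "d"),
    ("holding", "a"),
    ("clear", "b"), ("clear", "c"),
}

def violates_constraints(state):
    # The three domain constraints listed on this slide
    holding = [f for f in state if f[0] == "holding"]
    if len(holding) > 1:                     # the hand can hold only one block
        return True
    supports = [f[2] for f in state if f[0] == "on"]
    if len(supports) != len(set(supports)):  # only one block directly on another
        return True
    return False                             # any number of blocks may be on the table

print(violates_constraints(state))  # False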

Representing change: blocks world

Frame axioms encode what is NOT changed by an action. For example, moving a clear block to the table does not change the location of any other block:

On(x, z, s) ∧ Clear(x, s) ⇒ On(x, table, Result(Move(x, table), s)) ∧ ¬On(x, z, Result(Move(x, table), s))
On(y, z, s) ∧ y ≠ x ⇒ On(y, z, Result(Move(x, table), s))

The first frame axiom says that when we move a clear block to the table, it is no longer on what it used to be on and is now on the table. The second says that the on relationship of every other block is unchanged.

The proliferation of frame axioms becomes very cumbersome in complex domains: what about color, size, shape, ownership, etc.?
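The update below is an illustrative Python rendering of these two axioms; the function name and state encoding are assumptions, and (just like the axioms) it deliberately tracks only the on/ontable facts.

def move_to_table(x, state):
    # Apply Move(x, table) to a set of ground atoms, provided x is clear
    if ("clear", x) not in state:
        return state                       # action not applicable
    new_state = set()
    for f in state:
        if f[0] == "on" and f[1] == x:
            continue                       # effect: x is no longer on its old support
        new_state.add(f)                   # frame: every other atom persists unchanged
    new_state.add(("ontable", x))          # effect: x is now on the table
    return new_state

state = {("on", "c", "d"), ("ontable", "d"), ("clear", "c")}
print(sorted(move_to_table("c", state)))
# ('on', 'c', 'd') is gone, ('ontable', 'c') is added, everything else carries over

The point of the slide survives in code form: the explicit "carry everything else over" loop is exactly the frame axiom, and it would have to be repeated for every relation we care about (color, size, ownership, ...).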

Qualification problem

How can you characterize every condition an action depends on, or every exception that might occur? If I put my bread into the toaster and push the button, it will become toasted after two minutes, unless...
- the toaster is broken, or...
- the power is out, or...
- I blow a fuse, or...
- a neutron bomb explodes nearby and fries all electrical components, or...
- a meteor strikes the Earth and the world as we know it ceases to exist, or...

Ramification problem

It is nearly impossible to characterize every side effect of every action, at every level of detail. When I put my bread into the toaster and push the button, the bread will become toasted after two minutes, and...
- the crumbs that fall off the bread onto the tray at the bottom of the toaster will also become toasted, and...
- some of those crumbs will become burnt, and...
- the outside molecules of the bread will become "toasted," and...
- the inside molecules of the bread will remain more "bread-like," and...
- the toasting process will release a small amount of humidity into the air because of evaporation, and...
- the heating elements will become a tiny fraction more likely to burn out the next time I use the toaster, and...
- the electricity meter in the house will move up slightly, and...

Knowledge engineering!

Modeling the right conditions and the right effects at the right level of abstraction is difficult. Knowledge engineering (creating and maintaining KBs for intelligent reasoning) is a field unto itself. We hope automated knowledge acquisition and machine learning tools can fill the gap:
- Intelligent systems should learn about conditions and effects, just as we do
- Intelligent systems should learn when to pay attention to, or reason about, certain aspects of processes, depending on context (metacognition?)

Goal-based agents

Goal-based agents model their goals, and how their actions move them closer to or farther from those goals.

Preferences among actions

Problem: how do we decide which of several actions is best? For example, in choosing between Forward and Grab, axioms describing when it is OK to move to a square would have to mention glitter. Not modular!

We can solve this problem by separating facts about actions from facts about goals. This way our agent can be reprogrammed just by asking it to achieve different goals.

Preferences among actions

First step: describe the desirability of actions independently of each other. We can use a simple scale: actions can be Great, Good, Medium, Risky, or Deadly. The agent should always do the best action it can find:

∀a,s Great(a,s) ⇒ Action(a,s)
∀a,s Good(a,s) ∧ ¬∃b Great(b,s) ⇒ Action(a,s)
∀a,s Medium(a,s) ∧ ¬∃b (Great(b,s) ∨ Good(b,s)) ⇒ Action(a,s)
...

Preferences among actions

Until it finds the gold, the basic agent strategy can be (see the sketch below):
- Great: picking up the gold when found; climbing out of the cave with the gold
- Good: moving to a square that is OK and has not been visited yet
- Medium: moving to a square that is OK and has already been visited
- Risky: moving to a square that is not known to be deadly or OK
- Deadly: moving into a square that is known to contain a pit or the Wumpus
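Here is a minimal sketch of "always do the best action you can find" over this scale; rank_action, best_action, and the state fields are assumptions invented for the example.

RANKS = ["Great", "Good", "Medium", "Risky", "Deadly"]

def rank_action(action, state):
    # Classify an action using the Wumpus World strategy on this slide
    if action == "Grab" and state["glitter"]:
        return "Great"
    if action == "Climb" and state["holding_gold"] and state["at_start"]:
        return "Great"
    if action[0] == "MoveTo":
        square = action[1]
        if square in state["known_deadly"]:
            return "Deadly"
        if square not in state["known_ok"]:
            return "Risky"
        return "Medium" if square in state["visited"] else "Good"
    return "Medium"

def best_action(actions, state):
    # Prefer Great over Good over Medium over Risky over Deadly
    return min(actions, key=lambda a: RANKS.index(rank_action(a, state)))

state = {"glitter": False, "holding_gold": False, "at_start": True,
         "known_deadly": {(2, 2)}, "known_ok": {(1, 2)}, "visited": {(1, 1)}}
print(best_action(["Grab", ("MoveTo", (1, 2)), ("MoveTo", (2, 2))], state))
# ('MoveTo', (1, 2)) -- the Good action wins over the Medium and Deadly ones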

Achieving one goal uncovers another

Once the gold is found, we must change strategies, which requires a new set of action values. We could encode this as a rule:

∀s Holding(Gold, s) ⇒ GoalLocation([1,1], s)

We must also decide how the agent will work out a sequence of actions to accomplish the goal. Three possible approaches:
- Inference: works, but may find wasteful as well as good solutions
- Search: formulate a problem with operators and a set of states
- Planning: to be discussed later
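As a toy illustration of that goal-switch rule (the function and state key below are assumptions, not from the slides):

def current_goal(state):
    # Holding(Gold, s) => GoalLocation([1,1], s); otherwise keep hunting for the gold
    if state["holding_gold"]:
        return ("GoTo", (1, 1))
    return ("Find", "Gold")

print(current_goal({"holding_gold": True}))   # ('GoTo', (1, 1))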

Coming up next

- Logical inference
- Knowledge representation
- Planning