CS 8520: Artificial Intelligence


CS 8520: Artificial Intelligence. Logical Agents and First-Order Logic. Paula Matuszek, Fall 2008. Slides in part from aima.eecs.berkeley.edu/slides-ppt, chs 7-9.

Outline
Knowledge-based agents
Wumpus world
Logic in general: models and entailment
Propositional (Boolean) logic
Equivalence, validity, satisfiability
Inference rules and theorem proving: forward chaining, backward chaining, resolution

Knowledge bases
Knowledge base = set of sentences in a formal language.
Declarative approach to building an agent (or other system): Tell it what it needs to know; then it can Ask itself what to do, and the answers should follow from the KB.
Agents can be viewed at the knowledge level, i.e., what they know, regardless of how implemented, or at the implementation level, i.e., the data structures in the KB and the algorithms that manipulate them.

A simple knowledge-based agent
This agent tells the KB what it sees, asks the KB what to do, and tells the KB what it has done (or is about to do).
The agent must be able to: represent states, actions, etc.; incorporate new percepts; update internal representations of the world; deduce hidden properties of the world; deduce appropriate actions.
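
As a concrete illustration of that tell-ask-tell loop, here is a minimal, hypothetical Python sketch; the KB methods and the helper names (make_percept_sentence, make_action_query, make_action_sentence) follow the generic agent described above and are assumptions, not code from the slides.

```python
class KBAgent:
    """Generic knowledge-based agent: TELL the KB the percept,
    ASK it for an action, then TELL it the action taken."""

    def __init__(self, kb):
        self.kb = kb          # any object supporting tell(sentence) and ask(query)
        self.t = 0            # time counter

    def step(self, percept):
        self.kb.tell(self.make_percept_sentence(percept, self.t))
        action = self.kb.ask(self.make_action_query(self.t))
        self.kb.tell(self.make_action_sentence(action, self.t))
        self.t += 1
        return action

    # The three sentence-building helpers depend on the chosen logic and
    # vocabulary; they are left as stubs in this sketch.
    def make_percept_sentence(self, percept, t): ...
    def make_action_query(self, t): ...
    def make_action_sentence(self, action, t): ...
```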

Wumpus World PEAS description
Performance measure: gold +1000, death -1000, -1 per step, -10 for using the arrow.
Environment: squares adjacent to the wumpus are smelly; squares adjacent to a pit are breezy; glitter iff the gold is in the same square; shooting kills the wumpus if you are facing it; shooting uses up the only arrow; grabbing picks up the gold if in the same square; releasing drops the gold in the same square.
Sensors: Stench, Breeze, Glitter, Bump, Scream.
Actuators: Turn left, Turn right, Forward, Grab, Release, Shoot.

Wumpus world characterization
Fully observable? No: only local perception.
Deterministic? Yes: outcomes are exactly specified.
Episodic? No: sequential at the level of actions.
Static? Yes: the wumpus and pits do not move.
Discrete? Yes.
Single-agent? Yes: the wumpus itself is essentially a natural feature, not another agent.

Exploring a wumpus world
Directly observed: S = stench, B = breeze, G = glitter, A = agent, V = visited.
Inferred (mostly): OK = safe square, P = pit, W = wumpus.
(Figure of the 4x4 grid not included in this transcript.)

Exploring a wumpus world
In 1,1 we don't get B or S, so we know 1,2 and 2,1 are safe. Move to 1,2. In 1,2 we feel a breeze.

Exploring a wumpus world
In 1,2 we feel a breeze. So we know there is a pit in 1,3 or 2,2.

Exploring a wumpus world
So we go back to 1,1, then to 2,1, where we smell a stench.

Exploring a wumpus world
We don't feel a breeze in 2,1, so 2,2 can't be a pit, so 1,3 must be a pit. We don't smell a stench in 1,2, so 2,2 can't be the wumpus, so 3,1 must be the wumpus. 2,2 has neither pit nor wumpus and is therefore OK.

Exploring a wumpus world
We move to 2,2. We don't get any sensory input.

Exploring a wumpus world
So we know that 2,3 and 3,2 are OK.

Exploring a wumpus world
Move to 3,2, where we observe stench, breeze and glitter! At this point we could infer the existence of another pit (where?), but since we have found the gold we don't bother. We have won.

Logic in general
Logics are formal languages for representing information such that conclusions can be drawn.
Syntax defines the sentences in the language.
Semantics defines the "meaning" of sentences, i.e., defines the truth of a sentence in a world.
E.g., in the language of arithmetic: x+2 ≥ y is a sentence; x2+y > {} is not a sentence. x+2 ≥ y is true iff the number x+2 is no less than the number y. x+2 ≥ y is true in a world where x = 7, y = 1, and false in a world where x = 0, y = 6.
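
To make "truth in a world" concrete, the arithmetic example can be checked directly; a quick illustrative sketch (not from the slides):

```python
def sentence_true(x: int, y: int) -> bool:
    """Truth of the sentence x+2 >= y in a world that fixes x and y."""
    return x + 2 >= y

print(sentence_true(7, 1))   # True in the world where x = 7, y = 1
print(sentence_true(0, 6))   # False in the world where x = 0, y = 6
```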

Entailment
Entailment means that one thing follows from another: knowledge base KB entails sentence S if and only if S is true in all worlds where KB is true.
E.g., a KB containing "the Giants won" and "the Reds won" entails "either the Giants won or the Reds won."
E.g., x+y = 4 entails 4 = x+y.
Entailment is a relationship between sentences (i.e., syntax) that is based on semantics.

Entailment in the wumpus world
Situation after detecting nothing in [1,1], moving right, breeze in [2,1].
Consider possible models for the KB assuming only pits: there are 3 Boolean choices, so 2^3 = 8 possible models (ignoring sensory data).

Wumpus models for pits (figure of the eight possible models not included in this transcript).

Wumpus models
KB = wumpus-world rules + observations. Only the three models in red are consistent with KB.

Wumpus models
KB plus hypothesis S1 that the pit is not in [1,2]. KB is the solid red boundary; S1 is the dotted yellow boundary. The models of KB are contained within the models of S1, so KB entails S1: in every model in which KB is true, so is S1. We can conclude that the pit is not in [1,2].
KB plus hypothesis S2 that the pit is not in [2,2]. KB is the solid red boundary; S2 is the dotted brown boundary. The models of KB are not contained within the models of S2, so KB does not entail S2; nor does S2 entail KB. So we can't conclude anything about the truth of S2 given KB.

Inference
KB ⊢i S means sentence S can be derived from KB by procedure i.
Soundness: i is sound if it derives only sentences S that are entailed by KB.
Completeness: i is complete if it derives all sentences S that are entailed by KB.
First-order logic has a sound and complete inference procedure, which will answer any question whose answer follows from what is known by the KB, and it is richly expressive.
We must also be aware of the issue of grounding: the connection between our KB and the real world. This is straightforward for Wumpus or our adventure games, but much more difficult if we are reasoning about real situations. Real problems are seldom perfectly grounded, because we ignore details. Is the connection good enough to get useful answers?

Propositional logic: Syntax
Propositional logic is the simplest logic; it illustrates the basic ideas.
The proposition symbols P1, P2, etc. are sentences.
If S is a sentence, ¬S is a sentence (negation).
If S1 and S2 are sentences, S1 ∧ S2 is a sentence (conjunction).
If S1 and S2 are sentences, S1 ∨ S2 is a sentence (disjunction).
If S1 and S2 are sentences, S1 ⇒ S2 is a sentence (implication).
If S1 and S2 are sentences, S1 ⇔ S2 is a sentence (biconditional).
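
As a hedged illustration only (not from the slides), this grammar maps naturally onto a small set of Python classes, with a recursive function giving the semantics of each connective against a model (an assignment of truth values to proposition symbols); all names below are assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Dict

class Sentence:
    """Base class for propositional sentences."""

@dataclass
class Symbol(Sentence):
    name: str            # e.g. "P11" for P1,1

@dataclass
class Not(Sentence):
    s: Sentence

@dataclass
class And(Sentence):
    s1: Sentence
    s2: Sentence

@dataclass
class Or(Sentence):
    s1: Sentence
    s2: Sentence

@dataclass
class Implies(Sentence):
    s1: Sentence
    s2: Sentence

@dataclass
class Iff(Sentence):
    s1: Sentence
    s2: Sentence

def pl_true(s: Sentence, model: Dict[str, bool]) -> bool:
    """Truth of sentence s in the given model (one clause per connective)."""
    if isinstance(s, Symbol):
        return model[s.name]
    if isinstance(s, Not):
        return not pl_true(s.s, model)
    if isinstance(s, And):
        return pl_true(s.s1, model) and pl_true(s.s2, model)
    if isinstance(s, Or):
        return pl_true(s.s1, model) or pl_true(s.s2, model)
    if isinstance(s, Implies):
        return (not pl_true(s.s1, model)) or pl_true(s.s2, model)
    if isinstance(s, Iff):
        return pl_true(s.s1, model) == pl_true(s.s2, model)
    raise TypeError(f"unknown sentence type: {s!r}")
```

For example, the wumpus rule B1,1 ⇔ (P1,2 ∨ P2,1) from the next slide would be written Iff(Symbol("B11"), Or(Symbol("P12"), Symbol("P21"))).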

Wumpus world sentences
Let Pi,j be true if there is a pit in [i, j]. Let Bi,j be true if there is a breeze in [i, j].
¬P1,1
¬B1,1
B2,1
"Pits cause breezes in adjacent squares":
B1,1 ⇔ (P1,2 ∨ P2,1)
B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)

Inference by enumeration
Depth-first enumeration of all models is sound and complete.
For n symbols, time complexity is O(2^n) and space complexity is O(n).
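
A minimal sketch of this model-checking idea in Python, with a sentence represented simply as a Boolean function of a model (a dict from symbol names to truth values). It illustrates entailment by enumeration using the wumpus sentences from the previous slide; it is not the book's TT-ENTAILS? implementation, and the names are assumptions.

```python
from itertools import product
from typing import Callable, Dict, List

Model = Dict[str, bool]
Sentence = Callable[[Model], bool]     # a sentence is true or false in a model

def tt_entails(symbols: List[str], kb: Sentence, alpha: Sentence) -> bool:
    """Return True iff alpha holds in every model that makes kb true."""
    for values in product([False, True], repeat=len(symbols)):   # all 2^n models
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False   # a model of KB in which alpha is false: no entailment
    return True

# Wumpus KB: no pit in [1,1], the two breeze rules, and the observations
# "no breeze in [1,1]" and "breeze in [2,1]".
symbols = ["P11", "P12", "P21", "P22", "P31", "B11", "B21"]
kb: Sentence = lambda m: (
    not m["P11"]
    and (m["B11"] == (m["P12"] or m["P21"]))
    and (m["B21"] == (m["P11"] or m["P22"] or m["P31"]))
    and not m["B11"]
    and m["B21"]
)
alpha: Sentence = lambda m: not m["P12"]          # "there is no pit in [1,2]"
print(tt_entails(symbols, kb, alpha))             # True
```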

Inference-based agents in the wumpus world
A wumpus-world agent using propositional logic:
¬P1,1
¬W1,1
Bx,y ⇔ (Px,y+1 ∨ Px,y-1 ∨ Px+1,y ∨ Px-1,y)
Sx,y ⇔ (Wx,y+1 ∨ Wx,y-1 ∨ Wx+1,y ∨ Wx-1,y)
W1,1 ∨ W1,2 ∨ … ∨ W4,4
¬W1,1 ∨ ¬W1,2
¬W1,1 ∨ ¬W1,3
…
⇒ 64 distinct proposition symbols, 155 sentences
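
The symbol and sentence counts on this slide can be checked mechanically. The sketch below simply enumerates the vocabulary and the "physics" sentences for a 4x4 world as strings (an illustration for counting only; the encoding and names are assumptions):

```python
from itertools import combinations

N = 4
squares = [(x, y) for x in range(1, N + 1) for y in range(1, N + 1)]

# 4 proposition symbols per square: pit, wumpus, breeze, stench
symbols = [f"{p}{x},{y}" for p in "PWBS" for (x, y) in squares]

sentences = ["~P1,1", "~W1,1"]                 # the start square is safe

def neighbors(x, y):
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for (a, b) in cand if 1 <= a <= N and 1 <= b <= N]

for (x, y) in squares:
    adj = neighbors(x, y)
    sentences.append(f"B{x},{y} <=> ({' | '.join(f'P{a},{b}' for a, b in adj)})")
    sentences.append(f"S{x},{y} <=> ({' | '.join(f'W{a},{b}' for a, b in adj)})")

sentences.append(" | ".join(f"W{x},{y}" for (x, y) in squares))   # at least one wumpus
for (s1, s2) in combinations(squares, 2):                         # at most one wumpus
    sentences.append(f"~W{s1[0]},{s1[1]} | ~W{s2[0]},{s2[1]}")

print(len(symbols), len(sentences))    # 64 155
```

The total works out as 2 initial facts + 16 breeze rules + 16 stench rules + 1 at-least-one-wumpus sentence + C(16,2) = 120 at-most-one-wumpus clauses = 155 sentences, and 4 symbols per square x 16 squares = 64 symbols.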

Expressiveness limitations of propositional logic
Rapid proliferation of clauses: for instance, the wumpus KB contains "physics" sentences for every single square.
Very bushy inference, especially if forward chaining.
Not trivial to express complex relationships; people don't naturally think in logical terms.
Monotonic: if something is true it stays true.
Binary: something is either true or false, never maybe or unknown.

Pros and cons of propositional logic
Propositional logic is declarative.
Propositional logic allows partial/disjunctive/negated information (unlike most data structures and databases).
Propositional logic is compositional: the meaning of B1,1 ∧ P1,2 is derived from the meaning of B1,1 and of P1,2.
Meaning in propositional logic is context-independent (unlike natural language, where meaning depends on context).
Propositional logic has very limited expressive power (unlike natural language). E.g., we cannot say "pits cause breezes in adjacent squares" except by writing one sentence for each square.

Summary
Logical agents apply inference to a knowledge base to derive new information and make decisions.
Basic concepts of logic:
syntax: formal structure of sentences
semantics: truth of sentences wrt models
entailment: necessary truth of one sentence given another
inference: deriving sentences from other sentences
soundness: derivations produce only entailed sentences
completeness: derivations can produce all entailed sentences
The wumpus world requires the ability to represent partial and negated information, reason by cases, etc.
Resolution is complete for propositional logic.
Forward and backward chaining are linear-time and complete for Horn clauses.
Propositional logic lacks expressive power.

First-order logic
Whereas propositional logic assumes the world contains facts, first-order logic (like natural language) assumes the world contains:
Objects: people, houses, numbers, colors, baseball games, wars, …
Relations: red, round, prime, brother of, bigger than, part of, comes between, …
Functions: father of, best friend, one more than, plus, …

Syntax of FOL: Basic elements
Constants: KingJohn, 2, Villanova, ...
Predicates: Brother, >, ...
Functions: Sqrt, LeftLegOf, ...
Variables: x, y, a, b, ...
Connectives: ¬, ∧, ∨, ⇒, ⇔
Equality: =
Quantifiers: ∀, ∃

Terms and atomic sentences
A term is a logical expression that refers to an object. Book(Naomi): Naomi's book. Textbook(8520): the textbook for 8520.
An atomic sentence states a fact. Student(Naomi). Student(Naomi, Paula). Student(Naomi, AI).
Note that the interpretation of these is different; it depends on how we consider them to be grounded.

Complex sentences
Complex sentences are made from atomic sentences using connectives: ¬S, S1 ∧ S2, S1 ∨ S2, S1 ⇒ S2, S1 ⇔ S2.
E.g., Sibling(KingJohn,Richard) ⇒ Sibling(Richard,KingJohn)

Truth in first-order logic
Sentences are true with respect to a model and an interpretation.
A model contains objects (domain elements) and relations among them.
An interpretation specifies referents: constant symbols → objects; predicate symbols → relations; function symbols → functional relations.
An atomic sentence predicate(term1, ..., termn) is true iff the objects referred to by term1, ..., termn are in the relation referred to by predicate.
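
A tiny, hypothetical sketch of that definition in Python: a model with a domain and relations, an interpretation for constant symbols, and a check that an atomic sentence holds exactly when the referents of its terms form a tuple in the named relation. The kinship example and all names are assumptions for illustration.

```python
# Domain elements of the model (the objects).
domain = {"richard", "john"}

# Relations among domain elements, keyed by predicate symbol.
relations = {
    "Brother": {("richard", "john"), ("john", "richard")},
    "King":    {("john",)},
}

# Interpretation: constant symbol -> domain object.
interpretation = {
    "Richard": "richard",
    "John": "john",
}

def holds(predicate: str, *terms: str) -> bool:
    """An atomic sentence predicate(t1, ..., tn) is true iff the referents of
    t1, ..., tn are in the relation referred to by the predicate symbol."""
    referents = tuple(interpretation[t] for t in terms)
    return referents in relations[predicate]

print(holds("Brother", "Richard", "John"))   # True in this model
print(holds("King", "Richard"))              # False in this model
```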

Knowledge base for the wumpus world
Perception: ∀t,s,b Percept([s,b,Glitter],t) ⇒ Glitter(t)
Reflex: ∀t Glitter(t) ⇒ BestAction(Grab,t)

Deducing hidden properties
∀x,y,a,b Adjacent([x,y],[a,b]) ⇔ [a,b] ∈ {[x+1,y], [x-1,y], [x,y+1], [x,y-1]}
Properties of squares: ∀s,t At(Agent,s,t) ∧ Breeze(t) ⇒ Breezy(s)
Squares are breezy near a pit:
Diagnostic rule (infer cause from effect): ∀s Breezy(s) ⇒ ∃r Adjacent(r,s) ∧ Pit(r)
Causal rule (infer effect from cause): ∀r Pit(r) ⇒ [∀s Adjacent(r,s) ⇒ Breezy(s)]

Knowledge engineering in FOL
1. Identify the task.
2. Assemble the relevant knowledge.
3. Decide on a vocabulary of predicates, functions, and constants.
4. Encode general knowledge about the domain.
5. Encode a description of the specific problem instance.
6. Pose queries to the inference procedure and get answers.
7. Debug the knowledge base.

Summary
First-order logic: objects and relations are semantic primitives; syntax: constants, functions, predicates, equality, quantifiers.
Increased expressive power: sufficient to define the wumpus world.

Inference
When we have all this knowledge we want to DO SOMETHING with it. Typically, we want to infer new knowledge: an appropriate action to take, or additional information for the knowledge base.
Some typical forms of inference include forward chaining, backward chaining, and resolution.

Example knowledge base
The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American. Prove that Col. West is a criminal.

Inference
We need to DO SOMETHING with our knowledge: forward chaining, backward chaining, or resolution.

Example knowledge base contd.
... it is a crime for an American to sell weapons to hostile nations:
American(x) ∧ Weapon(y) ∧ Sells(x,y,z) ∧ Hostile(z) ⇒ Criminal(x)
Nono … has some missiles, i.e., ∃x Owns(Nono,x) ∧ Missile(x):
Owns(Nono,M1) and Missile(M1)
… all of its missiles were sold to it by Colonel West:
Missile(x) ∧ Owns(Nono,x) ⇒ Sells(West,x,Nono)
Missiles are weapons:
Missile(x) ⇒ Weapon(x)
An enemy of America counts as "hostile":
Enemy(x,America) ⇒ Hostile(x)
West, who is American …
American(West)
The country Nono, an enemy of America …
Enemy(Nono,America)

Forward chaining algorithm (pseudocode figure not included in this transcript).
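
Since the pseudocode figure is not reproduced here, the following is a naive, hypothetical Python sketch of the same idea: repeatedly match the premises of every rule against the known facts and add each new conclusion until nothing changes. It uses the crime KB from the previous slide, with variables written in lowercase; it is a simplified stand-in, not the AIMA FOL-FC-Ask algorithm, and every name in it is an assumption.

```python
from itertools import product

# Definite clauses: (premises, conclusion), encoded as tuples of symbols.
rules = [
    ([("American", "x"), ("Weapon", "y"), ("Sells", "x", "y", "z"), ("Hostile", "z")],
     ("Criminal", "x")),
    ([("Missile", "x"), ("Owns", "Nono", "x")], ("Sells", "West", "x", "Nono")),
    ([("Missile", "x")], ("Weapon", "x")),
    ([("Enemy", "x", "America")], ("Hostile", "x")),
]
facts = {("Owns", "Nono", "M1"), ("Missile", "M1"),
         ("American", "West"), ("Enemy", "Nono", "America")}

def is_var(t):
    return t[0].islower()            # variables are lowercase symbols

def match(pattern, fact, subst):
    """Match one premise against one ground fact, extending subst or returning None."""
    if len(pattern) != len(fact) or pattern[0] != fact[0]:
        return None
    s = dict(subst)
    for p, f in zip(pattern[1:], fact[1:]):
        if is_var(p):
            if s.get(p, f) != f:
                return None
            s[p] = f
        elif p != f:
            return None
    return s

def substitute(atom, s):
    return (atom[0],) + tuple(s.get(t, t) for t in atom[1:])

def forward_chain(rules, facts):
    """Naive forward chaining: apply every rule to the known facts until fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # try every way of matching the premises against known facts
            for combo in product(facts, repeat=len(premises)):
                s = {}
                for prem, fact in zip(premises, combo):
                    s = match(prem, fact, s)
                    if s is None:
                        break
                if s is not None:
                    new = substitute(conclusion, s)
                    if new not in facts:
                        facts.add(new)
                        changed = True
    return facts

derived = forward_chain(rules, facts)
print(("Criminal", "West") in derived)    # True
```

Running the sketch derives Sells(West,M1,Nono), Weapon(M1), and Hostile(Nono) on the first pass and Criminal(West) on the second, mirroring the proof shown on the following slides.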

Forward chaining proof (three figure slides showing the proof being built up from the facts; figures not included in this transcript).

Properties of forward chaining
Sound and complete for first-order definite clauses.
Datalog = first-order definite clauses + no functions; FC terminates for Datalog in a finite number of iterations.
May not terminate in general if α is not entailed. This is unavoidable: entailment with definite clauses is semidecidable.

Efficiency of forward chaining
Incremental forward chaining: no need to match a rule on iteration k if a premise wasn't added on iteration k-1 ⇒ match each rule whose premise contains a newly added positive literal.
Matching itself can be expensive. Database indexing allows O(1) retrieval of known facts, e.g., the query Missile(x) retrieves Missile(M1).
Forward chaining is widely used in deductive databases.

Backward chaining algorithm (pseudocode figure not included in this transcript).
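
As with forward chaining, the pseudocode figure is missing from the transcript. Purely to illustrate the goal-directed recursion, here is a minimal propositional (Horn-clause) backward chainer over a stripped-down, propositionalized version of the crime KB; this is a simplification of what the slides show (which is first-order, with unification), and all names are assumptions.

```python
from typing import Dict, FrozenSet, List, Set

# Horn rules, keyed by conclusion: each value lists alternative premise sets.
rules: Dict[str, List[List[str]]] = {
    "Criminal": [["American", "Weapon", "Sells", "Hostile"]],
    "Weapon":   [["Missile"]],
    "Sells":    [["Missile", "Owns"]],
    "Hostile":  [["Enemy"]],
}
facts: Set[str] = {"American", "Missile", "Owns", "Enemy"}

def bc_ask(goal: str, stack: FrozenSet[str] = frozenset()) -> bool:
    """A goal is proved if it is a known fact, or if every premise of some
    rule concluding it can be proved in turn (depth-first)."""
    if goal in facts:
        return True
    if goal in stack:                 # goal already on the stack: avoid looping
        return False
    for premises in rules.get(goal, []):
        if all(bc_ask(p, stack | {goal}) for p in premises):
            return True
    return False

print(bc_ask("Criminal"))   # True: backward chaining finds a proof
```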

Backward chaining example (eight figure slides stepping through the proof; figures not included in this transcript).

Properties of backward chaining
Depth-first recursive proof search: space is linear in the size of the proof.
Incomplete due to infinite loops ⇒ fix by checking the current goal against every goal on the stack.
Inefficient due to repeated subgoals (both success and failure) ⇒ fix by caching previous results (extra space).
Widely used for logic programming.

Forward vs. backward chaining
FC is data-driven, automatic, unconscious processing, e.g., object recognition, routine decisions. It may do lots of work that is irrelevant to the goal.
BC is goal-driven, appropriate for problem-solving, e.g., Where are my keys? How do I get into a PhD program? The complexity of BC can be much less than linear in the size of the KB.
The choice may depend on whether you are likely to have many goals or lots of data.