Logic programming


Logic programming
 Combining declarative and procedural representations: Emergencies on the London Underground
 Logic programming for proactive rather than reactive behaviour: The fox and the crow
 Two classes of semantics: if-and-only-if versus minimal model semantics
 Logic programs for representing “strong” versus “weak” knowledge

Logic needs to be put in its place: in the thinking component of an agent cycle. (Slide diagram: an agent, connected to the world by perceptual processing and motor processing, cycles through observe, think, decide, act.)

Logic needs to be put in its place: in the thinking component of the agent cycle. To cycle: observe the world, think, decide what actions to perform, act, cycle again. Logic and logic programming are one way of thinking. Production systems are another way of thinking. Decision theory is one way of deciding what to do.

What to do in an emergency
Press the alarm signal button to alert the driver. The driver will stop if any part of the train is in a station. If not, the train will continue to the next station, where help can more easily be given. There is a 50 pound penalty for improper use.

The London Underground Emergency Notice as a program
Press the alarm signal button to alert the driver.
This has the form of a goal-reduction procedure: Reduce the goal of alerting the driver to the sub-goal of pressing the alarm signal button.

The first sentence of the Emergency Notice as a Logic Program
In general, a goal-reduction procedure of the form: Reduce goal to sub-goals hides a logical implication: Goal if sub-goals. The goal-reduction behaviour can be obtained by backward reasoning: To conclude that the goal can be solved, show that the sub-goals can be solved.
The first sentence of the Emergency Notice has the hidden logical form: You alert the driver, if you press the alarm signal button.
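This backward-reasoning behaviour can be sketched in Python (a minimal illustration; the clause table and goal strings are paraphrases of the Notice, not part of it):

```python
# A minimal backward-reasoning (goal-reduction) sketch.
# Each clause maps a goal to alternative bodies of sub-goals.
clauses = {
    "alert the driver": [["press the alarm signal button"]],
}

# Atomic actions the agent can simply perform (no further reduction).
actions = {"press the alarm signal button"}

def solve(goal):
    """Return a plan (list of actions) achieving goal, or None on failure."""
    if goal in actions:
        return [goal]
    for body in clauses.get(goal, []):
        plan = []
        for subgoal in body:
            sub = solve(subgoal)
            if sub is None:
                break
            plan.extend(sub)
        else:
            return plan
    return None

print(solve("alert the driver"))  # ['press the alarm signal button']
```

The search succeeds when every sub-goal bottoms out in an executable action, exactly mirroring the goal-reduction reading of the clause.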

The second and third sentences of the Emergency Notice in logic programming form
The driver will stop the train in a station if the driver is alerted to an emergency and any part of the train is in the station.
The driver will stop the train at the next station, and help can be given there more easily than between stations, if the driver is alerted to an emergency and not any part of the train is in a station.

The fourth sentence of the Emergency Notice
The last sentence of the Notice has the underlying logic programming form: You get a 50 pound penalty if you press the alarm signal button improperly.
Backward reasoning turns this into a goal-reduction procedure: To get a 50 pound penalty, press the alarm signal button improperly.
Forward reasoning, in abductive logic programming, turns this into an inference that monitors the generation of a candidate action, evaluating its possible consequences.
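The forward-reasoning use can be sketched the same way (Python; the rules and proposition strings are illustrative paraphrases): before committing to a candidate action, derive its consequences by closing the current facts under the clauses:

```python
# Naive forward chaining: close a fact set under the rules, so the
# consequences of a candidate action can be inspected before acting.
rules = [
    (["press the alarm signal button improperly"], "get a 50 pound penalty"),
    (["press the alarm signal button"], "alert the driver"),
]

def consequences(facts):
    """Return the closure of facts under the rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

candidate = {"press the alarm signal button",
             "press the alarm signal button improperly"}
derived = consequences(candidate)
print("get a 50 pound penalty" in derived)  # True
```

Seeing the penalty among the derived consequences, the agent can discard the improper action before performing it.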

Goal: The fox has the cheese.
Beliefs: The crow has the cheese. An animal has an object if the animal is near the object and the animal picks up the object. The fox is near the cheese if the crow sings. The crow sings if the fox praises the crow.

The fox’s beliefs as goal-reduction procedures
To have an object, be near the object and pick up the object.
To be near the cheese, make the crow sing.
To make the crow sing, praise the crow.
To show that the crow has the cheese, do nothing.
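Under the same backward-reasoning sketch as before (Python; the sentence strings paraphrase the slide), the fox's beliefs yield a plan for the goal:

```python
# Goal-reduction for the fox-and-crow story: each goal maps to
# alternative bodies of sub-goals; actions are atomic.
clauses = {
    "I have the cheese": [["I am near the cheese", "I pick up the cheese"]],
    "I am near the cheese": [["the crow sings"]],
    "the crow sings": [["I praise the crow"]],
}
actions = {"I pick up the cheese", "I praise the crow"}

def solve(goal):
    """Return a sequence of actions achieving goal, or None on failure."""
    if goal in actions:
        return [goal]
    for body in clauses.get(goal, []):
        plan = []
        for subgoal in body:
            sub = solve(subgoal)
            if sub is None:
                break
            plan.extend(sub)
        else:
            return plan
    return None

print(solve("I have the cheese"))
# ['I praise the crow', 'I pick up the cheese']
```

The plan that falls out, praise the crow and then pick up the cheese, is exactly the fox's proactive reasoning on the next slides.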

(Slide diagram: an and-or tree. The root goal “I have the cheese” reduces to the sub-goals “I am near the cheese” and “I pick up the cheese”; “I am near the cheese” reduces to “The crow sings”, which reduces to “I praise the crow”.)
In general, the search space for backward reasoning with logic programs can be represented as an and-or tree or graph.

The fox reasons proactively. (Slide diagram: the fox’s successive goals, between the fox and the world.)
I have the cheese.
I am near the cheese and I pick up the cheese.
The crow has the cheese and the crow sings and I pick up the cheese.
The crow sings and I pick up the cheese.
I praise the crow and I pick up the cheese.

The crow behaves reactively. Observing “The fox praises me” and applying the rule “If an animal praises me, then I sing”, the crow sings. (Slide diagram: the crow, connected to the world by perceptual processing and motor processing.)

The moral of the story: Think before you act (pre-actively, lecture 3)
If the crow knew what the fox knows and the crow could reason pre-actively, then the crow would be able to reason as follows: I want to sing. But if I sing, then the fox will be near the cheese. Perhaps the fox will pick up the cheese. Then the fox will have the cheese. Then I will not have the cheese. Since I want to have the cheese, I will not sing.

Logic Programming – two classes of semantics
 At the object level, all the clauses with a given predicate P in the conclusion are interpreted as the definition of P. E.g. the clauses
you get help if you press the alarm signal button.
you get help if you shout loudly.
are interpreted as:
you get help if and only if you press the alarm signal button or you shout loudly.
 At the meta-level, the set of clauses with P in the conclusion is interpreted as the only clauses that conclude P. A set of Horn clauses “defines” a minimal model. E.g. the natural numbers are the smallest set N such that 0 is in N, and X + 1 is in N if X is in N.
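For a finite propositional Horn program, the minimal model is the least fixpoint of the immediate-consequence operator, which a few lines of Python can compute (the atoms borrow the help example; the fact `press` is added purely for illustration):

```python
# Minimal-model (least fixpoint) sketch for a propositional Horn program.
# The minimal model is the smallest set of atoms closed under the clauses.
program = [
    ("press", []),             # illustrative fact: the button is pressed
    ("help", ["press"]),       # you get help if you press the button
    ("help", ["shout"]),       # you get help if you shout loudly
]

def minimal_model(program):
    """Iterate the immediate-consequence operator to its least fixpoint."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in program:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

print(sorted(minimal_model(program)))  # ['help', 'press']
```

Note that `shout` is absent from the minimal model: atoms not forced by the clauses are false, which is the minimal-model counterpart of the if-and-only-if completion.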

Negation as Failure can be understood in two ways
At the object level, not P holds iff the definition of P in if-and-only-if form implies not P.
At the meta-level, not P holds iff it is not possible to conclude P, using the uncompleted program. Here not P means P is not believed (as in auto-epistemic logic).
The two semantics are equivalent in most cases.
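The meta-level reading (not P holds iff P cannot be concluded) is a one-line addition to a backward-reasoning sketch. In this Python illustration the propositional program paraphrases the Notice's second and third sentences, with invented atom names:

```python
# Negation as failure over a propositional program: a negated condition
# holds exactly when the attempt to prove the positive atom fails.
program = {
    "alerted": [[]],                                   # fact
    "in_station": [[]],                                # fact
    "stop_now": [["alerted", "in_station"]],
    "stop_next": [["alerted", ("not", "in_station")]],
}

def holds(atom):
    """Backward reasoning with negation as failure."""
    if isinstance(atom, tuple) and atom[0] == "not":
        return not holds(atom[1])          # not P: fail to prove P
    return any(all(holds(c) for c in body)
               for body in program.get(atom, []))

print(holds("stop_now"), holds("stop_next"))  # True False
```

Deleting the `in_station` fact would flip both conclusions, since `not in_station` would then succeed by failure.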

Negation in logic programming – two classes of proof procedures –At the object level, show not P by using the definition of P in if-and-only-if form. (This is the basis for the IFF proof procedure for abductive logic programming, Fung-Kowalski 1997, based on Console-Torasso 1991 and Denecker-De Schreye 1992.) –At the meta-level, show not P by showing it is not possible to conclude P, using the uncompleted program. The relationship between object level and meta-level here is like the relationship between the first-order axioms of Peano arithmetic (without induction) and the intended model of arithmetic.

Logical problem-solving methods are weak and general-purpose. Oaksford, M. & Chater, N. (2002). Commonsense reasoning, logic and human rationality.

But logic and logic programming can be used to represent strong, domain-specific knowledge. Examples –Planning from second principles, using plan schemata, rather than planning from first principles –Quicksort rather than ordered permutation –Theorems rather than axioms But in some domains only weak knowledge may be available –Database queries –Combinatorial problem-solving –Inductive logic programming
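The quicksort example illustrates strong knowledge: the two clauses of the logic program (the sorted version of the empty list is the empty list; the sorted version of a list with head H is the sorted smaller elements, then H, then the sorted larger elements) transcribe directly, here sketched in Python:

```python
def quicksort(xs):
    """Declarative quicksort: the logic program's clauses read as equations."""
    if not xs:                        # quicksort([]) = []
        return []
    head, tail = xs[0], xs[1:]
    smaller = [x for x in tail if x <= head]
    larger = [x for x in tail if x > head]
    # quicksort([H|T]) = quicksort(smaller) ++ [H] ++ quicksort(larger)
    return quicksort(smaller) + [head] + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]
```

By contrast, the weak "ordered permutation" specification generates and tests permutations, encoding what a sorted list is rather than how to compute one efficiently.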

The logic programming view of the relationship between an agent and the world: backward reasoning reduces the agent’s goals to intermediate-level sub-goals, and sub-goals to actions, which are performed in the world.