1 ‘In which we introduce a logic that is sufficient for building knowledge-based agents!’

2  Introduction  Knowledge-Based Agents  Syntax and Semantics  Entailment  Logical Agents for the Wumpus World  Inference  Propositional Logic  Wumpus World Sentences  Logical Equivalence  Important Equivalences  Validity and Satisfiability  Resolution  Normal Forms  Forward and Backward Chaining  Conclusion

3  The concern of this chapter is the representation of knowledge and the reasoning processes that bring knowledge to life.  Humans, it seems, know things and reason. Knowledge and reasoning are also important for artificial agents because they enable successful behaviors that would otherwise be very hard to achieve.  The knowledge of problem-solving agents is very specific and inflexible.  Logic will be the primary vehicle for representing knowledge. The knowledge of logical agents is always definite: each proposition is either true or false in the world.

4  Humans can know “things” and “reason” › Representation: How are the things stored? › Reasoning: How is the knowledge used?  To solve a problem…  To generate more knowledge…  Knowledge and reasoning are important to artificial agents because they enable successful behaviors difficult to achieve otherwise › Useful in partially observable environments  Can benefit from knowledge in very general forms, combining and recombining information

5  Central component of a Knowledge-Based Agent is a Knowledge-Base › A set of sentences in a formal language  Sentences are expressed using a knowledge representation language  Two generic functions: › TELL - add new sentences (facts) to the KB  “Tell it what it needs to know” › ASK - query what is known from the KB  “Ask what to do next”
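The TELL/ASK interface above can be sketched in a few lines of Python. This is a minimal illustration, not a real inference engine: the class name and the verbatim-lookup ASK are invented for this example, and a real agent would answer ASK by running inference over the KB.

```python
# Minimal sketch of a knowledge base with TELL and ASK.
# The class name and string-based sentences are illustrative only.
class KnowledgeBase:
    def __init__(self):
        self.sentences = []

    def tell(self, sentence):
        """TELL: add a new sentence (fact) to the KB."""
        self.sentences.append(sentence)

    def ask(self, query):
        """ASK: here just a verbatim lookup; a real agent
        would run an inference procedure over the KB."""
        return query in self.sentences

kb = KnowledgeBase()
kb.tell("the shirt is green")
print(kb.ask("the shirt is green"))    # True
print(kb.ask("the shirt is striped"))  # False
```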

6  The agent must be able to: › Represent states and actions › Incorporate new percepts › Update internal representations of the world › Deduce hidden properties of the world › Deduce appropriate actions [Diagram: Inference Engine = domain-independent algorithms, on top of Knowledge Base = domain-specific content]

7  Declarative › You can build a knowledge-based agent simply by “TELLing” it what it needs to know  Procedural › Encode desired behaviors directly as program code  Minimizing the role of explicit representation and reasoning can result in a much more efficient system

8  Logics are formal languages for representing information such that conclusions can be drawn  Syntax defines the sentences in the language  Semantics defines the "meaning" of sentences  A term is a logical expression that refers to an object  An atomic sentence is formed from a predicate symbol followed by a parenthesized list of terms.

9 Example: Syntax:  x+2 ≥ y is a sentence;  x2+y > {} is not a sentence Semantics:  x+2 ≥ y is true iff the number x+2 is no less than the number y  x+2 ≥ y is true in a world where x = 7, y = 1  x+2 ≥ y is false in a world where x = 0, y = 6

10  Definition: a knowledge base (KB) entails a sentence α (alpha) if and only if α is true in all worlds where the KB is true  Notation: KB ╞ α ‘Entailment is a relationship between sentences that is based on semantics’

11  Example: The KB containing ‘the shirt is green’ and ‘the shirt is striped’ entails ‘the shirt is green or the shirt is striped’.  Example: x+y=4 entails 4=x+y  Models: models are formally structured worlds, with respect to which truth can be evaluated.  m is a model of a sentence α if α is true in m  M(α) is the set of all models of α  KB ╞ α if and only if M(KB) ⊆ M(α)  Example: KB = The shirt is green and striped, α = The shirt is green
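The shirt example can be checked mechanically: enumerate every model and verify that α holds wherever the KB holds. This is a sketch; the symbol names G (green) and S (striped) are invented for the illustration.

```python
from itertools import product

# Sketch of KB ╞ α by model enumeration.
# KB = "the shirt is green AND striped", α = "the shirt is green".
def entails(kb, alpha, symbols):
    """KB ╞ α iff α is true in every model in which KB is true."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False
    return True

kb = lambda m: m["G"] and m["S"]       # green and striped
alpha = lambda m: m["G"]               # green
print(entails(kb, alpha, ["G", "S"]))  # True
print(entails(alpha, kb, ["G", "S"]))  # False: green alone entails less
```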

12  Performance Measure › Gold +1000, Death −1000 › Step −1, Use arrow −10  Environment › Squares adjacent to the Wumpus are smelly › Squares adjacent to a pit are breezy › Glitter iff gold is in the same square › Shooting kills the Wumpus if you are facing it › Shooting uses up the only arrow › Grabbing picks up the gold if in the same square › Releasing drops the gold in the same square  Actuators › Left turn, right turn, forward, grab, release, shoot  Sensors › Breeze, glitter, and smell

13  Characterization of Wumpus World › Observable  partial, only local perception › Deterministic  Yes, outcomes are specified › Episodic  No, sequential at the level of actions › Static  Yes, Wumpus and pits do not move › Discrete  Yes › Single Agent  Yes

14  KB = wumpus-world rules + observations

15 KB = wumpus-world rules + observations


17  KB ├i α means sentence α can be derived from KB by procedure i (i is an algorithm that derives α from KB)  Soundness: i is sound if whenever KB ├i α, it is also true that KB ╞ α  Completeness: i is complete if whenever KB ╞ α, it is also true that KB ├i α

18  Propositional symbols: A, B, P1, P2, ShirtIsGreen are atomic sentences. o If S, S1, S2 are sentences, then ¬S, S1 ∧ S2, S1 ∨ S2, S1 ⇒ S2 and S1 ⇔ S2 are also sentences. Propositional models: each model specifies true/false for each proposition symbol.

19 Truth table for the connectives:

P      Q      ¬P     P∧Q    P∨Q    P⇒Q    P⇔Q
False  False  True   False  False  True   True
False  True   True   False  True   True   False
True   False  False  False  True   False  False
True   True   False  True   True   True   True
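The table can be regenerated in code, which also shows how each connective is evaluated on truth values (P ⇒ Q as ¬P ∨ Q, P ⇔ Q as equality of truth values). A minimal sketch:

```python
from itertools import product

# Sketch: compute the truth table for ¬, ∧, ∨, ⇒, ⇔.
def truth_table():
    rows = []
    for P, Q in product([False, True], repeat=2):
        rows.append((P, Q,
                     not P,            # ¬P
                     P and Q,          # P∧Q
                     P or Q,           # P∨Q
                     (not P) or Q,     # P⇒Q
                     P == Q))          # P⇔Q
    return rows

for row in truth_table():
    print(row)
```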

20  Propositional symbols: P i,j means ‘there is a pit in [i,j]’; B i,j means ‘there is a breeze in [i,j]’. Sentences: ‘Pits cause breezes in adjacent squares’: a square is breezy if and only if there is an adjacent pit.
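As a sketch, the breeze sentence for corner square [1,1] (whose only neighbors are [1,2] and [2,1]) becomes the biconditional B1,1 ⇔ (P1,2 ∨ P2,1), which can be checked directly on truth-value assignments; the function name is invented for this illustration.

```python
# Breeze rule for square [1,1]: B_{1,1} ⇔ (P_{1,2} ∨ P_{2,1}).
# Returns True when the assignment is consistent with the rule.
def breeze_rule(b11, p12, p21):
    return b11 == (p12 or p21)

print(breeze_rule(False, False, False))  # True: no breeze, no adjacent pit
print(breeze_rule(True, True, False))    # True: breeze explained by a pit
print(breeze_rule(True, False, False))   # False: breeze but no adjacent pit
```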

21  Two sentences α and β are logically equivalent, denoted α ≡ β, if and only if they are true in the same models.

22 [Table of standard logical equivalences: commutativity, associativity, double-negation elimination, contraposition, implication elimination, biconditional elimination, De Morgan's laws, distributivity]

23  A sentence is valid if it is true in all models  A sentence is satisfiable if it is true in some model  A sentence is unsatisfiable if it is true in no models
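For propositional sentences the three cases can be decided by brute-force model enumeration. A minimal sketch (the function name and the lambda encoding of sentences are invented for this example):

```python
from itertools import product

# Sketch: classify a sentence (a function of its symbols' truth values)
# as valid / satisfiable / unsatisfiable by enumerating all models.
def classify(sentence, n_symbols):
    values = [sentence(*vals)
              for vals in product([False, True], repeat=n_symbols)]
    if all(values):
        return "valid"          # true in all models
    if any(values):
        return "satisfiable"    # true in some model
    return "unsatisfiable"      # true in no model

print(classify(lambda p: p or not p, 1))   # valid
print(classify(lambda p, q: p and q, 2))   # satisfiable
print(classify(lambda p: p and not p, 1))  # unsatisfiable
```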

24 Connecting validity and unsatisfiability: α is valid if and only if ¬α is unsatisfiable Connecting inference and unsatisfiability: KB ╞ α if and only if (KB ∧ ¬α) is unsatisfiable

25  There are two kinds of proof methods: application of inference rules and model checking.  Application of inference rules: legitimate (sound) generation of new sentences from old. A proof is a sequence of inference-rule applications; one can use inference rules as operators in a standard search algorithm. These algorithms typically require transformation of sentences into a normal form. Model checking, for KB ├i α:  truth-table enumeration (always exponential in n)  backtracking and improved backtracking  heuristic search in model space (sound but incomplete)

26  A literal is an atomic sentence (propositional symbol) or the negation of an atomic sentence  A clause is a disjunction of literals  Conjunctive Normal Form (CNF): a conjunction of disjunctions of literals, i.e. a conjunction of clauses
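A hand-derived CNF can be validated by checking equivalence over all models. As a sketch, take the breeze-style biconditional B ⇔ (P ∨ Q); eliminating the biconditional and applying De Morgan and distributivity gives (¬B ∨ P ∨ Q) ∧ (¬P ∨ B) ∧ (¬Q ∨ B):

```python
from itertools import product

# Sketch: verify a hand-derived CNF by truth-table equivalence.
def original(b, p, q):
    return b == (p or q)            # B ⇔ (P ∨ Q)

def cnf(b, p, q):
    # (¬B ∨ P ∨ Q) ∧ (¬P ∨ B) ∧ (¬Q ∨ B)
    return (not b or p or q) and (not p or b) and (not q or b)

same = all(original(*m) == cnf(*m)
           for m in product([False, True], repeat=3))
print(same)  # True: the CNF is equivalent to the original sentence
```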

27  In mathematical logic, resolution is a rule of inference that leads to a refutation theorem-proving technique for sentences in propositional logic.  In other words, iteratively applying the resolution rule in a suitable way allows one to tell whether a propositional formula is satisfiable and to prove that a first-order formula is unsatisfiable.  The method can prove unsatisfiability, but it does not always establish satisfiability, as is the case for all methods for first-order logic.

28  Example:  Wetness is high and the weather is cloudy.  If the weather is cloudy, it will rain.  If the wetness is high, the weather is hot.  The weather is not hot. (The sentences are converted to CNF for resolution.)
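The example's clauses, in CNF, are W, C, ¬C ∨ R, ¬W ∨ H, and ¬H, and resolution derives the empty clause from them (the KB is contradictory: W and ¬W∨H give H, which clashes with ¬H). A sketch of resolution refutation, with literals encoded as strings ("-X" for ¬X; the encoding and function names are invented here):

```python
# Sketch of propositional resolution refutation.
def neg(lit):
    """Negate a string literal: "X" <-> "-X"."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of literals)."""
    out = []
    for lit in c1:
        if neg(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {neg(lit)})))
    return out

def resolution_refutes(clauses):
    """True iff the empty clause is derivable (clause set unsatisfiable)."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a != b:
                    for r in resolve(a, b):
                        if not r:          # empty clause: contradiction
                            return True
                        new.add(r)
        if new <= clauses:                 # nothing new: satisfiable
            return False
        clauses |= new

kb = [frozenset({"W"}), frozenset({"C"}),
      frozenset({"-C", "R"}),             # cloudy ⇒ rain
      frozenset({"-W", "H"}),             # wet ⇒ hot
      frozenset({"-H"})]                  # not hot
print(resolution_refutes(kb))  # True: the KB is contradictory
```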

29  Forward chaining is one of the two main methods of reasoning when using inference rules in artificial intelligence, and it can be described logically. Forward chaining is a popular implementation strategy for expert systems and for business and production rule systems. The opposite of forward chaining is backward chaining.  Forward chaining starts with the available data and uses inference rules to extract more data until a goal is reached. An inference engine using forward chaining searches the inference rules until it finds one whose ‘If’ clause is known to be true. When such a rule is found, the engine can conclude, or infer, the ‘Then’ clause, resulting in the addition of new information to its data.  Inference engines iterate through this process until a goal is reached.

30  Suppose that the goal is to conclude the color of a pet named Fritz,  given that he croaks and eats flies, and that the rule base contains the following four rules:  If X croaks and eats flies - Then X is a frog  If X chirps and sings - Then X is a canary  If X is a frog - Then X is green  If X is a canary - Then X is yellow

31  Let us illustrate forward chaining by following the pattern of a computer as it evaluates the rules. Assume the following facts:  Fritz croaks  Fritz eats flies  Tweety eats flies  Tweety chirps  Tweety is yellow

32 With forward reasoning, the computer can derive that Fritz is green in four steps: 1. Fritz croaks and Fritz eats flies Based on logic, the computer can derive: 2. Fritz croaks and eats flies Based on rule 1, the computer can derive: 3. Fritz is a frog Based on rule 3, the computer can derive: 4. Fritz is green.
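The four steps above can be sketched as a data-driven loop: keep firing rules whose premises are all known until nothing new can be added. Facts are (individual, property) pairs; the representation is invented for this illustration.

```python
# Sketch of forward chaining on the Fritz/Tweety example.
# Rules are (premises, conclusion) pairs, applied per individual.
rules = [
    ({"croaks", "eats flies"}, "is a frog"),
    ({"chirps", "sings"}, "is a canary"),
    ({"is a frog"}, "is green"),
    ({"is a canary"}, "is yellow"),
]
facts = {("Fritz", "croaks"), ("Fritz", "eats flies"),
         ("Tweety", "eats flies"), ("Tweety", "chirps"),
         ("Tweety", "is yellow")}

def forward_chain(rules, facts):
    """Fire rules until no new fact can be derived (data-driven)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        individuals = {x for x, _ in facts}
        for premises, conclusion in rules:
            for x in individuals:
                if (all((x, p) in facts for p in premises)
                        and (x, conclusion) not in facts):
                    facts.add((x, conclusion))
                    changed = True
    return facts

derived = forward_chain(rules, facts)
print(("Fritz", "is green") in derived)  # True
```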

33  The name "forward chaining" comes from the fact that the computer starts with the data and reasons its way to the answer, as opposed to backward chaining, which works the other way around.  In the derivation, the rules are used in the opposite order compared to backward chaining.  Because the data determines which rules are selected and used, this method is called data-driven, in contrast to goal-driven backward-chaining inference.  One advantage of forward chaining over backward chaining is that the arrival of new data can trigger new inferences, which makes the engine better suited to dynamic situations in which conditions are likely to change.

34 Example; suppose that the goal is to conclude whether Tweety or Fritz is a frog, given information about each of them, and that the rule base contains the following four rules:  If X croaks and eats flies – Then X is a frog  If X chirps and sings – Then X is a canary  If X is a frog – Then X is green  If X is a canary – Then X is yellow Let us illustrate backward chaining by following the pattern of a computer as it evaluates the rules. Assume the following facts:  Fritz croaks  Fritz eats flies  Tweety eats flies  Tweety chirps  Tweety is yellow

35  With backward reasoning, the computer can answer the question "Who is a frog?" in four steps. In its reasoning, the computer uses a placeholder: 1. ? is a frog Based on rule 1, the computer can derive: 2. ? croaks and eats flies Based on logic, the computer can derive: 3. ? croaks and ? eats flies Based on the facts, the computer can derive: 4. Fritz croaks and Fritz eats flies  This derivation causes the computer to produce Fritz as the answer to the question "Who is a frog?".  The computer has not used any knowledge about Tweety to compute that Fritz is a frog.
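The goal-driven search above can be sketched as a recursive procedure: a goal is proved either by a matching fact or by a rule whose conclusion matches the goal and whose premises can all be proved in turn. The representation (same invented (individual, property) pairs as before) is illustrative only.

```python
# Sketch of backward chaining for the query "Who is a frog?".
rules = [
    ({"croaks", "eats flies"}, "is a frog"),
    ({"chirps", "sings"}, "is a canary"),
    ({"is a frog"}, "is green"),
    ({"is a canary"}, "is yellow"),
]
facts = {("Fritz", "croaks"), ("Fritz", "eats flies"),
         ("Tweety", "eats flies"), ("Tweety", "chirps"),
         ("Tweety", "is yellow")}

def prove(x, goal):
    """Goal-driven: only rules whose conclusion matches the goal fire."""
    if (x, goal) in facts:
        return True
    return any(conclusion == goal and all(prove(x, p) for p in premises)
               for premises, conclusion in rules)

frogs = [x for x in ["Fritz", "Tweety"] if prove(x, "is a frog")]
print(frogs)  # ['Fritz']
```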

36  FC is data-driven, automatic, unconscious processing  May do lots of work that is irrelevant to the goal  BC is goal-driven, appropriate for problem-solving  Complexity of BC can be much less than linear in size of KB

37  Logical agents apply inference to a knowledge base to derive new information and make decisions  Basic concepts of logic are syntax, semantics, entailment, inference, soundness and completeness.  The Wumpus world requires the ability to represent partial and negated information and to reason by cases.  Resolution is sound and complete for propositional logic.  Propositional logic lacks expressive power.
