
1 DCP 1172 Introduction to Artificial Intelligence
Lecture notes for Chap. 7 [AIMA]: Logical Agents. Chang-Sheng Chen

2 Knowledge and reasoning – second part
Knowledge representation; logic and representation; propositional (Boolean) logic; normal forms; inference in propositional logic; Wumpus world example. DCP 1172, Ch. 7

3 Review We studied search because it facilitates the creation of agents that can reason about hypothetical (future) states of the world. But… we haven’t said much of anything about how those states should be represented, or about how these future (successor) states can be generated from current states. DCP 1172, Ch. 7

4 Typical Example of a Knowledge-based Agent: Anti-spam Mail Filtering
Generic mail filtering function (by analogy with search): f(n) = g(n) + h(n), where g(n) is an exact, known value and h(n) is a heuristic estimate. Diagram: a message from the Client passes through the Mail Transfer Agent's Generic Mail Filtering; mail that passes is accepted into the Mail Spool, while mail that fails goes to Anti-SPAM Filtering and is rejected. DCP 1172, Ch. 7

5 SPAM Mail Filtering Tool - Netscape Communicator
DCP 1172, Ch. 7

6 SPAM Mail c(2) DCP 1172, Ch. 7

7 SPAM Message c(1) DCP 1172, Ch. 7

8 Knowledge-Based Agents
A knowledge-based agent is composed of a knowledge base and an inference mechanism. A knowledge base is simply a repository of domain-specific things you know (sentences about the world), represented in some useful way. A knowledge-based agent operates by storing sentences about the world in its knowledge base, using the inference mechanism to infer new sentences, and using these sentences to decide what action to take. The knowledge base cannot be a simple table, because an agent should be able to conclude facts about the world that are not already represented in the knowledge base. DCP 1172, Ch. 7

9 Knowledge-Based Agent
Agent that uses prior or acquired knowledge to achieve its goals; it can make more efficient and more informed decisions. Knowledge Base (KB): contains a set of representations of facts about the agent's environment; each representation is called a sentence. Use some knowledge representation language (KRL) to TELL it what to know, e.g., (temperature 72F), and ASK the agent (i.e., query it) what to do. The agent can use inference to deduce new facts from TELLed facts. Architecture: an inference engine (domain-independent algorithms) over a knowledge base (domain-specific content), accessed through TELL and ASK. DCP 1172, Ch. 7

10 Knowledge-base Agents
TELL: operator to add new sentences to the KB. ASK: operator to query what is known in the KB. The agent maintains a knowledge base, KB. Each time the agent is called, it does two things. First, it TELLs the KB what it perceives. Second, it ASKs the KB what action it should perform; in the process of answering the query, extensive reasoning may be done. Once the action is chosen, the agent records its choice with TELL and executes the action. DCP 1172, Ch. 7
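A minimal Python sketch of this TELL-ASK cycle (the KB class and the make_* helper functions below are illustrative placeholders, not part of any library or of the course materials):

```python
# A minimal sketch of the TELL-ASK cycle described above. The KB class and the
# make_* helpers are illustrative placeholders, not from any particular library.

class KB:
    def __init__(self):
        self.sentences = []

    def tell(self, sentence):
        # Add a new sentence to the knowledge base.
        self.sentences.append(sentence)

    def ask(self, query):
        # A real agent would run an inference procedure here; this stub just
        # returns a default action so the sketch is runnable.
        return "NoOp"

def make_percept_sentence(percept, t):
    return ("Percept", percept, t)

def make_action_query(t):
    return ("Action?", t)

def make_action_sentence(action, t):
    return ("Action", action, t)

def kb_agent(kb, percept, t):
    kb.tell(make_percept_sentence(percept, t))   # 1. TELL what was perceived
    action = kb.ask(make_action_query(t))        # 2. ASK what to do (reasoning happens here)
    kb.tell(make_action_sentence(action, t))     # 3. record the chosen action, then execute it
    return action

print(kb_agent(KB(), "stench", t=0))             # -> NoOp
```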

11 Generic knowledge-based agent
TELL the KB what was perceived: uses a knowledge representation language (KRL) to insert new sentences (representations of facts) into the KB. ASK the KB what to do: uses logical reasoning to examine actions and select the best one. DCP 1172, Ch. 7

12 Knowledge-Based Agents
A knowledge representation is a formal scheme that dictates how an agent is going to represent its knowledge in the knowledge base. Syntax (語法): rules that determine the possible strings in the language. Semantics (語意): rules that determine a mapping from sentences in the representation to situations in the world. Knowledge Representation = Logic + Ontology + Computation. DCP 1172, Ch. 7

13 Knowledge Representation = Logic + Ontology + Computation
Knowledge representation (KR) is a multi-disciplinary subject that applies theories and techniques from three fields: Logic provides the formal structure and rules of inference. Ontology defines the kinds of things that exist in the application domain. Computation supports the applications that distinguish knowledge representation from pure philosophy. DCP 1172, Ch. 7

14 KR = Logic + Ontology + Computation (cont.)
Knowledge representation is the application of logic and ontology to the task of constructing computable models for some domain. Without logic, a knowledge representation is vague, with no criteria for determining whether statements are redundant or contradictory. Without ontology, the terms and symbols are ill-defined, confused, and confusing. Without computable models, the logic and ontology cannot be implemented in a computer program. DCP 1172, Ch. 7

15 Wumpus world example DCP 1172, Ch. 7

16 Wumpus world characterization
Deterministic? Yes – outcome exactly specified. Fully Observable? No – only local perception. Static? Yes – Wumpus and pits do not move. Discrete? Yes. Episodic? No – sequential: what matters now depends on the agent's earlier actions. DCP 1172, Ch. 7

17 Exploring a Wumpus world
A= Agent B= Breeze S= Smell P= Pit W= Wumpus OK = Safe V = Visited G = Glitter DCP 1172, Ch. 7

18 Exploring a Wumpus world
A= Agent B= Breeze S= Smell P= Pit W= Wumpus OK = Safe V = Visited G = Glitter DCP 1172, Ch. 7

19 Exploring a Wumpus world
A= Agent B= Breeze S= Smell P= Pit W= Wumpus OK = Safe V = Visited G = Glitter DCP 1172, Ch. 7

20 Exploring a Wumpus world
A= Agent B= Breeze S= Smell P= Pit W= Wumpus OK = Safe V = Visited G = Glitter DCP 1172, Ch. 7

21 Exploring a Wumpus world
A= Agent B= Breeze S= Smell P= Pit W= Wumpus OK = Safe V = Visited G = Glitter DCP 1172, Ch. 7

22 Exploring a Wumpus world
A= Agent B= Breeze S= Smell P= Pit W= Wumpus OK = Safe V = Visited G = Glitter DCP 1172, Ch. 7

23 Exploring a Wumpus world
A= Agent B= Breeze S= Smell P= Pit W= Wumpus OK = Safe V = Visited G = Glitter DCP 1172, Ch. 7

24 Exploring a Wumpus world
A= Agent B= Breeze S= Smell P= Pit W= Wumpus OK = Safe V = Visited G = Glitter DCP 1172, Ch. 7

25 Other tight spots DCP 1172, Ch. 7

26 Another example solution
No percept in [1,1] ⇒ [1,2] and [2,1] OK. Move to [2,1]. B in [2,1] ⇒ P in [2,2] or [3,1]? [1,1] V ⇒ no P in [1,1]. Move to [1,2] (the only option). DCP 1172, Ch. 7

27 Example solution S in [1,2] ⇒ W in [1,1], [2,2], or [1,3]; no S in [2,1] ⇒ W not in [1,1] or [2,2] ⇒ W in [1,3]. No B in [1,2] ⇒ [2,2] OK & P in [3,1]. DCP 1172, Ch. 7

28 Representation and Mappings
Two different kinds of entities are usually mentioned in discussions about AI programs: Facts: truths in some relevant world (including each agent's behavior and goals), e.g., "There is a pit in [3,1]" (a proposition; true or false). Representations of facts in some chosen formalism: these are the things that we will actually be able to manipulate, e.g., P3,1 = there is a pit in [3,1] (true or false). DCP 1172, Ch. 7

29 Mapping between Facts and Representations
Diagram: Facts in the world are linked to Internal Representations, on which reasoning programs operate, by representation mappings, and to a Natural Language Representation (e.g., English, Chinese, etc.) by natural language understanding and natural language generation. DCP 1172, Ch. 7

30 Logic in general DCP 1172, Ch. 7

31 Types of logic DCP 1172, Ch. 7

32 Overview of Proposition Logic
Proposition logic is a very simple language that consists of proposition symbols and logical connectives. Proposition symbols: P1, P2, Q, etc. Logical connectives: ¬, ^, V, =>, ⇔, etc. Proposition logic can handle propositions that are known true, known false, or completely unknown. Example proposition: "X is a rose." DCP 1172, Ch. 7

33 First-order logic (FOL)
Ontological commitments: Objects: wheel, door, body, engine, seat, car, passenger, driver Relations: Inside(car, passenger), Beside(driver, passenger) Functions: ColorOf(car) Properties: Color(car), IsOpen(door), IsOn(engine) Functions are relations with single value for each object DCP 1172, Ch. 7

34 Semantics there is a correspondence between
functions, which return values, and predicates, which are true or false. Function: FatherOf(Mary) = Bill. Predicate: FatherOf(Mary, Bill). More formally, this is how we build the interpretation of a whole sentence from the interpretations of its parts. "Predicate" and "relation" are used interchangeably; here "predicate" is the formal symbol and "relation" is the real-world relation. Example: the tuples for functions and relations (e.g., son-of, son). DCP 1172, Ch. 7

35 The Semantic Wall - Truth Depends on Interpretation
Physical Symbol System (LHS) vs. World (RHS): +BLOCKA+, +BLOCKB+, +BLOCKC+; P1: (IS_ON +BLOCKA+ +BLOCKB+); P2: (IS_RED +BLOCKA+). Syntax: says what is allowed on the LHS. Semantics: says how what is on the LHS relates to what is on the RHS. Inference: says how you can manipulate the symbols formally, i.e., with no reference to the RHS. [Remember the Physical Symbol System Hypothesis.] We want to be able to trust the results: whatever the inference procedure does should "respect" what is true, or what follows, in the world. That is where we are headed; keep it in mind through the definitions that follow. There is a method in this madness. Algebra example (to convey the idea before the word "entails" is defined): is (> n m) true or false? We don't know; if n = 3, m = 5, and > has its usual meaning, then (> n m) is false. But (> n m) and (> m p) together entail (> n p). DCP 1172, Ch. 7

36 Filtering with H1(msg) Filtering With H2(msg)
Truth Depends on Interpretation (e.g., anti-spam or anti-virus mail filtering). Diagram: the same message leaving MTA0 is filtered with heuristic H1(msg) at MTA1 (or MUA1) and accepted into the Mail Spool, but filtered with H2(msg) at MTA2 (or MUA1) and discarded. MTA = Mail Transfer Agent; MUA = Mail User Agent. DCP 1172, Ch. 7

37 Logical entailment (伴隨, 隱含) between sentences
Logical entailment between sentences (in mathematical terms we write α ⊨ β) is the idea that one sentence follows logically from another. For example: proposition α = Bill is the father of Mary; proposition β = Mary is a child of Bill. The formal definition of entailment is this: α ⊨ β if and only if, in every model in which α is true, β is also true. DCP 1172, Ch. 7

38 Reasoning - Logic as a representation of the World
The proposition "X is a rose" entails the proposition "X is a flower," because all roses are flowers. DCP 1172, Ch. 7

39 Proposition α = X is the Child of Y.
Entailment (伴隨, 隱含). Question: is α ⊨ β? Proposition α = X is the Child of Y. Proposition β = Y is the Mother of X. DCP 1172, Ch. 7

40 Models DCP 1172, Ch. 7

41 Inference Notice that inference is not directly related to truth;
i.e. we can infer a sentence provided we have rules of inference that produce the sentence from the original sentences. However, if rules of inference are to be useful we wish them to be related to entailment. Ideally we would like: p ⊢ q if and only if p ⊨ q DCP 1172, Ch. 7

42 p ⊢ q (inference) if and only if p ⊨ q (entailment)
Inference (cont.) Ideally we would like: p ⊢ q (inference) if and only if p ⊨ q (entailment). But this equivalence may fail in two ways: (1) p ⊢ q but p ⊭ q: we have inferred q by applying rules of inference to p, but there is some model in which p holds and q does not. In this case the rules of inference have inferred ``too much''. (2) p ⊨ q but p ⊬ q: q is a sentence that holds in all models in which p holds, but we cannot find rules of inference that will infer q from p. In this case the rules of inference are insufficient to infer the things we want to be able to infer. DCP 1172, Ch. 7

43 Inference (cont.) ``A sound inference procedure infers things that are valid consequences.'' ``A complete inference procedure is able to infer anything that is a valid consequence.'' The ``best'' inference procedures are both sound and complete, but achieving completeness is often computationally expensive. Notice that even if an inference procedure is not complete, it is desirable that it be sound. DCP 1172, Ch. 7

44 Entailment vs. Inference
Entailment is different from inference Entailment: KB |= α if and only if α is true in all worlds where KB is true. That is, M(KB) ⊆ M(α). Inference: KB |–i α, sentence α can be derived from KB using procedure i Sound: whenever KB |–i α then KB |= α is true Complete: whenever KB |= α then KB |–i α is also true. DCP 1172, Ch. 7
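A small model-checking sketch of this definition of entailment, i.e. testing M(KB) ⊆ M(α) by enumerating every truth assignment; representing sentences as Python functions of a model is an assumption made here purely for illustration:

```python
from itertools import product

# Entailment by model checking: KB |= alpha iff alpha is true in every model
# (truth assignment) in which KB is true.

def entails(kb, alpha, symbols):
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False          # found a model of KB in which alpha is false
    return True                   # alpha holds in every model of KB

# Example: KB = (P => Q) and P; alpha = Q.  Expect entails -> True.
kb    = lambda m: ((not m["P"]) or m["Q"]) and m["P"]
alpha = lambda m: m["Q"]
print(entails(kb, alpha, ["P", "Q"]))   # True
```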

45 Basic symbols of proposition logic
Propositions (expressions) evaluate only to “true” or “false.” Basic operations: P, “P is true”. Negation: ¬P, “P is false”. Disjunction: P V Q, “either P is true or Q is true, or both”. Conjunction: P ^ Q, “both P and Q are true”. Implication: P => Q, “if P is true, then Q is true”. Equivalence: P ⇔ Q, “P and Q are either both true or both false”. DCP 1172, Ch. 7
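A quick illustrative sketch in Python: negation, disjunction, and conjunction map directly to not/or/and, while implication and equivalence need small helper functions.

```python
# Python has `not`, `or`, `and` for negation, disjunction, and conjunction,
# but no built-in implication or equivalence, so we define them explicitly.

def implies(p, q):
    # P => Q is false only when P is true and Q is false.
    return (not p) or q

def iff(p, q):
    # P <=> Q: both true or both false.
    return p == q

print(implies(True, False))   # False
print(implies(False, True))   # True (vacuously)
print(iff(False, False))      # True
```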

46 Propositional logic: syntax
DCP 1172, Ch. 7

47 Propositional logic: semantics
DCP 1172, Ch. 7

48 Truth value: whether a statement is true or false.
Truth tables
Truth value: whether a statement is true or false. Truth table: complete list of truth values for a statement, given all possible values of the individual atomic expressions. Example:
P Q | P V Q
T T |   T
T F |   T
F T |   T
F F |   F
DCP 1172, Ch. 7
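An illustrative snippet that prints this truth table by enumerating all assignments:

```python
from itertools import product

# Print the truth table for P V Q by enumerating every truth assignment,
# exactly as in the table above.
for p, q in product([True, False], repeat=2):
    print(f"P={p!s:5} Q={q!s:5} P V Q={p or q}")
```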

49 Truth tables for basic connectives
P Q | ¬P ¬Q | P V Q  P ^ Q  P => Q  P ⇔ Q
T T |  F  F |   T      T      T       T
T F |  F  T |   T      F      F       F
F T |  T  F |   T      F      T       F
F F |  T  T |   F      F      T       T
DCP 1172, Ch. 7

50 Propositional logic: basic manipulation rules
¬(¬A) = A                           Double negation
¬(A ^ B) = (¬A) V (¬B)              Negated “and”
¬(A V B) = (¬A) ^ (¬B)              Negated “or”
A ^ (B V C) = (A ^ B) V (A ^ C)     Distributivity of ^ over V
A => B = (¬A) V B                   by definition
¬(A => B) = A ^ (¬B)                using negated “or”
A ⇔ B = (A => B) ^ (B => A)         by definition
¬(A ⇔ B) = (A ^ (¬B)) V (B ^ (¬A))  using negated “and” & “or”
DCP 1172, Ch. 7
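Two of these rules checked exhaustively over all truth assignments (an illustrative sketch, not course code):

```python
from itertools import product

# Exhaustive check (all four truth assignments) of two of the rules above:
# the negated "and" (De Morgan) and the negated implication.
for a, b in product([True, False], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))     # ¬(A ^ B) = (¬A) V (¬B)
    assert (not ((not a) or b)) == (a and (not b))     # ¬(A => B) = A ^ (¬B)
print("both identities hold for every truth assignment")
```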

51 Propositional inference: enumeration method
DCP 1172, Ch. 7

52 Enumeration: Solution
DCP 1172, Ch. 7

53 Propositional inference: normal forms
Conjunctive normal form (CNF): “product of sums of simple variables or negated simple variables.” Disjunctive normal form (DNF): “sum of products of simple variables or negated simple variables.” DCP 1172, Ch. 7

54 Deriving expressions from functions
Given a Boolean function in truth-table form, find a propositional logic expression for it that uses only V, ^ and ¬. Idea: we can easily do it by disjoining the “T” rows of the truth table. Example: XOR function:
P Q | RESULT | minterm
T T |   F    |
T F |   T    | P ^ (¬Q)
F T |   T    | (¬P) ^ Q
F F |   F    |
RESULT = (P ^ (¬Q)) V ((¬P) ^ Q)
DCP 1172, Ch. 7

55 A more formal approach To construct a logical expression in disjunctive normal form from a truth table: Build a “minterm” for each row of the table whose result is T, where: - For each variable whose value is T in that row, include the variable in the minterm - For each variable whose value is F in that row, include the negation of the variable in the minterm - Link the variables in the minterm by conjunctions The expression consists of the disjunction of all minterms. DCP 1172, Ch. 7
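A sketch of this minterm construction as a small Python function (to_dnf and its string notation are my own illustration):

```python
from itertools import product

# Given a Boolean function as a Python callable, build a DNF expression
# (as a string) by disjoining one minterm per truth-table row whose result is True.

def to_dnf(func, var_names):
    minterms = []
    for values in product([True, False], repeat=len(var_names)):
        if func(*values):
            literals = [name if val else f"(~{name})"
                        for name, val in zip(var_names, values)]
            minterms.append("(" + " ^ ".join(literals) + ")")
    return " V ".join(minterms) if minterms else "False"

# XOR example from the previous slide:
print(to_dnf(lambda p, q: p != q, ["P", "Q"]))
# -> (P ^ (~Q)) V ((~P) ^ Q)
```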

56 Example: adder with carry
Takes 3 variables in: x, y and ci (carry-in); yields 2 results: sum (s) and carry-out (co). To get used to other notations, here we assume T = 1, F = 0, V = OR, ^ = AND, ¬ = NOT. co is: (x ^ y ^ ci) V (x ^ y ^ ¬ci) V (x ^ ¬y ^ ci) V (¬x ^ y ^ ci), which simplifies to (x ^ y) V (x ^ ci) V (y ^ ci). s is: (x ^ y ^ ci) V (x ^ ¬y ^ ¬ci) V (¬x ^ y ^ ¬ci) V (¬x ^ ¬y ^ ci). DCP 1172, Ch. 7
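An illustrative sketch that derives these sum-of-products forms by enumerating the adder's truth table (the prime notation x' marks negation):

```python
from itertools import product

# Derive sum-of-products expressions for the 1-bit full adder by enumerating
# its truth table, using the 0/1 notation of this slide.

def minterm(names, bits):
    return "".join(n if b else f"{n}'" for n, b in zip(names, bits))

names = ["x", "y", "ci"]
co_terms, s_terms = [], []
for x, y, ci in product([1, 0], repeat=3):
    s, co = (x + y + ci) % 2, (x + y + ci) // 2
    if co:
        co_terms.append(minterm(names, (x, y, ci)))
    if s:
        s_terms.append(minterm(names, (x, y, ci)))

print("co =", " + ".join(co_terms))   # xyci + xyci' + xy'ci + x'yci
print("s  =", " + ".join(s_terms))    # xyci + xy'ci' + x'yci' + x'y'ci
```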

57 Tautologies Logical expressions that are always true. Can be simplified out. Examples:
T
T V A
A V (¬A)
¬(A ^ (¬A))
A ⇔ A
((P V Q) ⇔ P) V (¬P ^ Q)
(P ⇔ Q) => (P => Q)
DCP 1172, Ch. 7
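A small tautology checker by enumeration, applied to two of the examples above (illustrative sketch):

```python
from itertools import product

# An expression is a tautology if it is true under every truth assignment.

def is_tautology(expr, n_vars):
    return all(expr(*values) for values in product([True, False], repeat=n_vars))

print(is_tautology(lambda a: a or (not a), 1))                          # A V (¬A)          -> True
print(is_tautology(lambda p, q: (not (p == q)) or ((not p) or q), 2))   # (P ⇔ Q) => (P => Q) -> True
```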

58 Validity and satisfiability
Theorem DCP 1172, Ch. 7

59 Proof methods DCP 1172, Ch. 7

60 Inference Rules DCP 1172, Ch. 7

61 Inference Rules DCP 1172, Ch. 7

62 Facts: Percepts inject (TELL) facts into the KB
Wumpus world: example Facts: percepts inject (TELL) facts into the KB: [no stench at [1,1] and [2,1]] ⇒ ¬S1,1 ; ¬S2,1. Rules: if a square has no stench, then neither that square nor any adjacent square contains the wumpus. R1: ¬S1,1 => ¬W1,1 ^ ¬W1,2 ^ ¬W2,1. R2: ¬S2,1 => ¬W1,1 ^ ¬W2,1 ^ ¬W2,2 ^ ¬W3,1. Inference: the KB contains ¬S1,1, so using Modus Ponens we infer ¬W1,1 ^ ¬W1,2 ^ ¬W2,1. Using And-Elimination we get: ¬W1,1, ¬W1,2, ¬W2,1. DCP 1172, Ch. 7
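A toy sketch of these two inference steps; the tuple representation of sentences and the rule functions are my own illustration, not the course's notation:

```python
# Sentences as nested tuples: ("not", X), ("and", a, b, ...), ("implies", p, q).

KB = [
    ("not", "S11"),                                            # percept: no stench at [1,1]
    ("implies", ("not", "S11"),
     ("and", ("not", "W11"), ("not", "W12"), ("not", "W21"))), # rule R1
]

def modus_ponens(kb):
    # From p and (p => q) in the KB, add q.
    new = []
    for s in kb:
        if isinstance(s, tuple) and s[0] == "implies" and s[1] in kb and s[2] not in kb:
            new.append(s[2])
    return kb + new

def and_elimination(kb):
    # From (a ^ b ^ ...) in the KB, add each conjunct.
    new = []
    for s in kb:
        if isinstance(s, tuple) and s[0] == "and":
            new.extend(c for c in s[1:] if c not in kb and c not in new)
    return kb + new

KB = and_elimination(modus_ponens(KB))
print(("not", "W11") in KB, ("not", "W12") in KB, ("not", "W21") in KB)  # True True True
```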

63 Limitations of Propositional Logic
1. It is too weak, i.e., it has very limited expressiveness: each rule has to be represented for each situation, e.g., “don’t go forward if the wumpus is in front of you” takes 64 rules. 2. It cannot keep track of changes: if one needs to track changes, e.g., where the agent has been before, then we need a timed version of each rule. To track 100 steps we would then need 6400 rules for the previous example. It’s hard to write and maintain such a huge rule base, and inference becomes intractable. DCP 1172, Ch. 7

64 Summary DCP 1172, Ch. 7

65 Principles of Knowledge Representation - Randall Davis, Howard Shrobe, Peter Szolovits, 1993
Five basic principles of KR: A knowledge representation is a surrogate. A knowledge representation is a set of ontological commitments. A knowledge representation is a fragmentary theory of intelligent reasoning. A knowledge representation is a medium for efficient computation. A knowledge representation is a medium of human expression. DCP 1172, Ch. 7

66 A knowledge representation is a surrogate
Physical objects, events, and relationships, which cannot be stored directly in a computer, are represented by symbols that serve as surrogates (監護, 代理) for the external things. The symbols and the links between them form a model of the external system. By manipulating the internal surrogates, a computer program can simulate the external system or reason about it. DCP 1172, Ch. 7

67 Representation of Facts
Diagram: the desired real reasoning takes Initial facts to Final facts. The forward representation mapping takes the initial facts to an internal representation of the initial facts; the operation of the program takes that to an internal representation of the final facts; and the backward representation mapping takes it back out to the final facts. DCP 1172, Ch. 7

68 Generic Mail Filtering (cont)
Diagram: (1) the Client's message enters Generic Mail Filtering; (2) a White List match passes the message, while a Black List match fails it and the message is rejected; (3) remaining messages are checked against the Grey List, which either fails them temporarily or lets them be accepted into the Mail Spool; (4) Automatic SPAM Learning updates the lists based on the pass/fail outcomes. DCP 1172, Ch. 7
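An illustrative sketch of this white/black/grey-list decision flow; the list contents and the retry-window greylist policy below are hypothetical examples, not the course's implementation:

```python
import time

# Hypothetical list contents for illustration only.
WHITELIST = {"friend@example.org"}
BLACKLIST = {"spammer@example.net"}
GREYLIST  = {}   # (sender_ip, sender, recipient) -> time the tuple was first seen

def filter_mail(sender_ip, sender, recipient, retry_window=300):
    if sender in WHITELIST:
        return "accept"                      # (2) white list: pass
    if sender in BLACKLIST:
        return "reject"                      # (2) black list: fail
    key = (sender_ip, sender, recipient)
    first_seen = GREYLIST.setdefault(key, time.time())
    if time.time() - first_seen < retry_window:
        return "fail temporarily"            # (3) grey list: ask the sender to retry
    return "accept"                          # legitimate MTAs retry; accept into the spool

print(filter_mail("192.0.2.1", "someone@example.com", "user@example.edu"))
```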

69 A knowledge representation is a set of ontological commitments.
Ontology is the study of existence. For a database or knowledge base, the ontology determines the categories of things that exist or may exist in the application domain. Those categories represent the ontological commitments of the designer or knowledge engineers. DCP 1172, Ch. 7

70 What is an ontology? The word ontology comes from the Greek ontos, for being, and logos, for word. An ontology is: a unifying framework for different viewpoints that serves as the basis for enabling communication (between people, between people and systems, between systems); a logical theory which gives an explicit, partial account of a conceptualization [Guarino and Giaretta, 1995]. DCP 1172, Ch. 7

71 What are the Components of an Ontology ?
Concepts (broad sense): anything about which something is said, e.g., Mail User Agent (e.g., Outlook Express), Mail Transfer Agent (e.g., sendmail), blacklist, sender address, etc. Relations: interactions between concepts of the domain, e.g., SubsetOf, MemberOf, PartOf; MUA (client) – MTA (server), etc. Axioms: model sentences that are always true, e.g., any mail with an invalid recipient address will be bounced back; any sender address within the blacklist of the MTA will be rejected; …more. Instances: used to represent elements (individuals) of the domain. DCP 1172, Ch. 7
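A minimal sketch of these four components as a data structure, using the mail domain for the examples (all names below are illustrative, not a standard ontology format):

```python
from dataclasses import dataclass, field

@dataclass
class Ontology:
    concepts:  set  = field(default_factory=set)
    relations: list = field(default_factory=list)   # (relation, concept1, concept2)
    axioms:    list = field(default_factory=list)   # sentences assumed always true
    instances: list = field(default_factory=list)   # (instance, concept)

mail = Ontology()
mail.concepts |= {"MailUserAgent", "MailTransferAgent", "Blacklist", "SenderAddress"}
mail.relations.append(("MemberOf", "SenderAddress", "Blacklist"))
mail.axioms.append("Any sender address on the MTA's blacklist is rejected.")
mail.instances.append(("outlook-express", "MailUserAgent"))
print(mail)
```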

72 Mail Ontology DCP 1172, Ch. 7

73 Candidate Features for Filtering with Heuristics
Envelope address (EnvFrom, EnvRcpt). Relay (HELO, rDNS, IP address range, etc.). SMTP peak connection ratio (8.13). Header address (HdrFrom, HdrRcpt, etc.). Body content. URL. DCP 1172, Ch. 7

74 A knowledge representation is a fragmentary theory of intelligent reasoning.
To support reasoning about the things in a domain, a knowledge representation must also describe their behavior and interactions. These descriptions constitute a theory of the application domain. The theory may be stated in explicit axioms, or it may be compiled into executable programs. DCP 1172, Ch. 7

75 KR and Heuristics for e-mail filtering
# Ra-1, Malformed HiNet sender address
reject "Rejected, Malformed MAIL FROM (Ra-1)"
envfrom envfrom
# Rb-1, Invalid sender address
reject "Rejected, SPAM from TwIspBL (Rb-1);!"
connect /^.*dynamic\..*(EBTnet|HiNet|ttn)\.net$/ei /.*/i
connect /^.*dynamic\..*(net|com)\.tw$/ei /.*/i
… more
DCP 1172, Ch. 7
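A quick sketch of how the dynamic-host pattern in rule Rb-1 classifies a connecting relay; the hostname below is a made-up example used only for illustration:

```python
import re

# Apply the dynamic-host pattern from the rule above to a relay hostname.
# A match would trigger the reject action of rule Rb-1.
pattern = re.compile(r"^.*dynamic\..*(EBTnet|HiNet|ttn)\.net$", re.IGNORECASE)
hostname = "host-1-2-3-4.dynamic.HiNet.net"     # hypothetical connecting relay
print(bool(pattern.match(hostname)))            # True -> mail is rejected
```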

76 A knowledge representation is a medium for efficient computation
Besides representing knowledge, an AI system must encode knowledge in a form that can be processed efficiently on the available computing equipment. New developments in computer hardware and programming theory have had a major influence on the design and use of knowledge representation languages. DCP 1172, Ch. 7

77 Sample KR - Greylist for E-mail filtering
#
# greylisted tuples
#
# Sender IP   Sender   Recipient   Time accepted
# …           …        …           … :39:01
# …           …        …           … :39:02
# …           …        …           … :39:03
DCP 1172, Ch. 7

78 Sample KR - Greylist for E-mail filtering (cont.)
#
# Auto-whitelisted tuples
#========================================
# Sender IP   Sender   Recipient   Expire
# …           …        …           … :16:23   AUTO
# …           …        …           … :19:14   AUTO
# …           …        …           … :03:17   AUTO
… more
DCP 1172, Ch. 7

79 A knowledge representation is a medium of human expression.
A good knowledge representation should facilitate communication between the knowledge engineers, who understand AI, and the domain experts, who understand the applications. Although the knowledge engineers may write the definitions and the rules, the domain experts should be able to read them and verify whether they represent a realistic theory of the domain. DCP 1172, Ch. 7

80 Ontology, Domain Expert and Knowledge Engineer
DCP 1172, Ch. 7

