Review Midterm 1
Artificial Intelligence: An Introduction
Outline: Definitions of AI; Foundations of AI; History of AI; Advanced Techniques
Definitions of AI
Some accepted definitions of AI:
- “The effort to make computers think…”
- “The study of the design of intelligent agents…”
- “The study of mental faculties through … computational models.”
Dilemma: acting humanly vs. acting rationally.
Acting Humanly
To pass the Turing test, a computer needs to display the following abilities:
- Natural language processing
- Knowledge representation
- Automated reasoning
- Machine learning
- Computer vision
- Robotics
Acting Rationally
The more current view is to build rational agents: agents that are autonomous, perceive their environment, adapt, change goals, and deal with uncertainty. This view is easier to evaluate and more general. The focus of this course is on rational agents.
Intelligent Agents
Outline: Introduction; Rationality; Nature of the Environment; Structure of Agents; Summary
Introduction
What is an agent?
a. Perceives the environment through sensors (cameras, keystrokes, etc.)
   - percept: a single input
   - percept sequence: a sequence of inputs
b. Acts upon the environment through actuators (motors, displays, etc.)
Figure 2.1
Definition
A rational agent selects an action that maximizes its performance measure, given:
a. the percept sequence
b. built-in knowledge
Rational agents should be autonomous (able to learn under incomplete knowledge).
Properties of Environments
How do we define an environment?
- Fully vs. partially observable (do the sensors capture all relevant information?)
- Deterministic vs. stochastic (is the next state completely determined by the current state and action?)
- Episodic vs. sequential (is the experience divided into atomic episodes?)
Properties of Environments
- Static vs. dynamic (can the environment change while the agent is acting?)
- Discrete vs. continuous (is the number of states finite or infinite?)
- Single-agent vs. multiagent (a single-player game or a two-player game?)
Agents
- Reflex-based
- Model-based
- Goal-based
- Utility-based
- Learning agents
Problem Solving by Searching
Outline: Introduction; Solutions and Performance; Uninformed Search Strategies; Avoiding Repeated States; Partial Information; Summary
Solutions
We search through a search tree, expanding new nodes to grow the tree. There are different search strategies. Each node contains the following (see the sketch below):
- state
- parent node
- action
- path cost
- depth
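A minimal Python sketch of such a node. The problem API used here (result, step_cost, and later initial_state, actions, goal_test) is an assumed interface for illustration, not from the slides.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # the state this node represents
    parent: Optional["Node"] = None  # node that generated this one
    action: Any = None               # action applied to the parent
    path_cost: float = 0.0           # g(n): cost of the path from the root
    depth: int = 0                   # number of steps from the root

def child_node(problem, parent, action):
    """Build the child reached from `parent` by `action` (assumed problem API)."""
    state = problem.result(parent.state, action)
    return Node(state, parent, action,
                parent.path_cost + problem.step_cost(parent.state, action),
                parent.depth + 1)
```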
Search Tree
[diagram: initial state and expanded nodes]
Performance
Four elements of performance:
- Completeness (guaranteed to find a solution?)
- Optimality (is the solution optimal?)
- Time complexity
- Space complexity
Performance
Measuring complexity requires three quantities:
- b: branching factor
- d: depth of the shallowest goal node
- m: maximum length of any path in the state space
Techniques
- Breadth-first search
- Uniform-cost search
- Depth-first search
- Depth-limited search
- Iterative deepening
- Bidirectional search
A sketch of breadth-first search appears below.
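This sketch builds on the Node helper above and assumes the same hypothetical problem API (initial_state, actions, goal_test).

```python
from collections import deque

def breadth_first_search(problem):
    """Breadth-first search sketch: expand shallowest nodes first."""
    root = Node(problem.initial_state)
    if problem.goal_test(root.state):
        return root
    frontier = deque([root])          # FIFO queue
    explored = set()
    while frontier:
        node = frontier.popleft()
        explored.add(node.state)
        for action in problem.actions(node.state):
            child = child_node(problem, node, action)
            if child.state not in explored and all(n.state != child.state for n in frontier):
                if problem.goal_test(child.state):   # goal test at generation time
                    return child
                frontier.append(child)
    return None                        # failure
```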
Informed Search and Exploration
Outline: Search Strategies; Heuristic Functions; Local Search Algorithms; Summary
Greedy Best-First Search
Expand the node with the lowest evaluation function f(n), where f(n) estimates the distance to the goal. Simplest case: f(n) = h(n), the heuristic function, which estimates the cost of the cheapest path from node n to the goal.
A* Search
Evaluation function: f(n) = g(n) + h(n)
- g(n): path cost from the root to node n
- h(n): estimated cost of the cheapest path from node n to the goal
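A generic best-first sketch over the same assumed problem API: passing f(n) = h(n) gives greedy best-first search, and f(n) = g(n) + h(n) gives A*.

```python
import heapq, itertools

def best_first_search(problem, f):
    """Expand the frontier node with the lowest f(n)."""
    counter = itertools.count()                 # tie-breaker for the heap
    root = Node(problem.initial_state)
    frontier = [(f(root), next(counter), root)]
    explored = set()
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if problem.goal_test(node.state):       # goal test at expansion (needed for A* optimality)
            return node
        if node.state in explored:
            continue
        explored.add(node.state)
        for action in problem.actions(node.state):
            child = child_node(problem, node, action)
            heapq.heappush(frontier, (f(child), next(counter), child))
    return None

# Greedy: best_first_search(problem, lambda n: h(n))
# A*:     best_first_search(problem, lambda n: n.path_cost + h(n))
```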
Effective Branching Factor
- N: total number of nodes generated by A*
- d: solution depth
- b*: the branching factor a uniform tree of depth d would need in order to contain N + 1 nodes:
N + 1 = 1 + b* + (b*)^2 + … + (b*)^d
Effective Branching Factor
Example: N = 6, d = 2 → b* = 2 (since 7 = 1 + 2 + 4)
Example: N = 2, d = 2 → b* = 1 (since 3 = 1 + 1 + 1)
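The equation has no closed form for b*, so a small numeric sketch (bisection) recovers the examples above; the function name is illustrative.

```python
def effective_branching_factor(N, d, tol=1e-6):
    """Numerically solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b*."""
    def total(b):
        return sum(b ** i for i in range(d + 1))   # 1 + b + ... + b^d
    lo, hi = 0.0, max(2.0, float(N))               # b* lies in this bracket
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < N + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(effective_branching_factor(6, 2))   # ~2.0
print(effective_branching_factor(2, 2))   # ~1.0
```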
Local Search
- Hill climbing
- Simulated annealing
- Local beam search
- Genetic algorithms
A hill-climbing sketch follows.
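A minimal steepest-ascent hill-climbing sketch, assuming a hypothetical problem API with neighbors(state) and value(state).

```python
def hill_climbing(problem):
    """Move to the best neighbor until no neighbor improves on the current state."""
    current = problem.initial_state
    while True:
        neighbors = problem.neighbors(current)
        if not neighbors:
            return current
        best = max(neighbors, key=problem.value)
        if problem.value(best) <= problem.value(current):
            return current           # local maximum (or plateau): stop
        current = best
```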
Figure 4.10
Logical Agents
Outline: Knowledge-Based Agents; Logic; Propositional Logic; Summary
Knowledge-Based Algorithm
function KB-AGENT(percept) returns an action
  TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
  action ← ASK(KB, MAKE-ACTION-QUERY(t))
  TELL(KB, MAKE-ACTION-SENTENCE(action, t))
  t ← t + 1
  return action
Logic
Important terms:
- Syntax: rules for representing well-formed sentences.
- Semantics: defines the truth of each sentence.
- Model: a possible world (m is a model of a).
- Entailment: a |= b means a entails b.
Logic
Sentence a is derived from KB by algorithm i: KB ⊢i a.
- Algorithm i is sound (truth-preserving) if it derives only entailed sentences.
- Algorithm i is complete if it derives all entailed sentences.
Syntax
Sentence → Atom | Complex
Atom → True | False | Symbol
Symbol → P | Q | R
Complex → ¬Sentence | (Sentence ∧ Sentence) | (Sentence ∨ Sentence) | (Sentence ⇒ Sentence) | (Sentence ⇔ Sentence)
Semantics
How do we define the truth value of statements? Each connective is defined by a truth table. The knowledge base of an agent grows by telling it new statements:
TELL(KB, S1), …, TELL(KB, Sn)
KB = S1 ∧ … ∧ Sn
A sketch of evaluating a sentence in a model follows.
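To make the truth-table semantics concrete, here is a minimal evaluation sketch. The nested-tuple encoding of sentences and the name holds are illustrative assumptions, not from the slides.

```python
def holds(sentence, model):
    """Evaluate a propositional sentence in a model (dict: symbol -> bool).
    Sentences are nested tuples, e.g. ('and', 'P', ('not', 'Q'))."""
    if isinstance(sentence, str):                # atom
        return model[sentence]
    op, *args = sentence
    if op == 'not':     return not holds(args[0], model)
    if op == 'and':     return holds(args[0], model) and holds(args[1], model)
    if op == 'or':      return holds(args[0], model) or holds(args[1], model)
    if op == 'implies': return (not holds(args[0], model)) or holds(args[1], model)
    if op == 'iff':     return holds(args[0], model) == holds(args[1], model)
    raise ValueError(op)

# m is a model of (P ∨ Q) ∧ ¬R:
m = {'P': True, 'Q': False, 'R': False}
print(holds(('and', ('or', 'P', 'Q'), ('not', 'R')), m))   # True
```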
Some Concepts
- Equivalence: a ≡ b iff a |= b and b |= a.
- Validity: a sentence is valid if it is true in all models. Deduction theorem: a |= b iff the sentence (a ⇒ b) is valid.
- Satisfiability: a sentence is satisfiable if it is true in some model.
Proofs
- Reasoning patterns
- Resolution
- Horn clauses
- Forward chaining
- Backward chaining
A forward-chaining sketch for Horn clauses follows.
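A minimal sketch in the spirit of forward chaining for propositional Horn clauses. The (premises, conclusion) encoding of the knowledge base is an assumption for this example.

```python
def forward_chaining(kb, query):
    """kb: list of (premises, conclusion) pairs; facts have empty premises.
    Returns True if the KB entails the query symbol."""
    count = {i: len(prem) for i, (prem, _) in enumerate(kb)}  # unsatisfied premises
    inferred = set()
    agenda = [c for prem, c in kb if not prem]                # known facts
    while agenda:
        p = agenda.pop()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(kb):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:                             # all premises satisfied
                    agenda.append(concl)
    return False

# P ⇒ Q, (L ∧ M) ⇒ P, facts L and M: the KB entails Q.
kb = [(('P',), 'Q'), (('L', 'M'), 'P'), ((), 'L'), ((), 'M')]
print(forward_chaining(kb, 'Q'))   # True
```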
First-Order Logic
Outline: Introduction; Syntax and Semantics; Using First-Order Logic; Summary
Syntax
Sentence → AtomicSentence | (Sentence Connective Sentence) | (Quantifier Variable, … Sentence) | ¬Sentence
AtomicSentence → Predicate(Term, …) | Term = Term
Term → Function(Term, …) | Constant | Variable
Universal Quantifiers
How do we express properties of entire collections of objects? Universal quantification: ∀
All stars are burning hydrogen: ∀x Star(x) ⇒ BurningHydrogen(x)
True in all extended interpretations.
Existential Quantifiers
Some star is burning hydrogen: ∃x Star(x) ∧ BurningHydrogen(x)
Normally, the universal quantifier connects with ⇒ and the existential quantifier connects with ∧.
Classical Planning
Outline: The Problem; Syntax and Semantics; Forward and Backward Search; Partial-Order Planning; History and Summary
The Problem
Goal: find a sequence of actions to achieve a goal. Approaches: search and logic.
Problems with the previous search methods:
- Irrelevant actions
- Finding good heuristic functions
- Problem decomposition
Syntax
- States: a conjunction of literals
- Goals: one particular state
- Actions: preconditions and effects
Example: Action(Fly(p,from,to))
- Precondition: At(p,from) ∧ Plane(p) ∧ Airport(from) ∧ Airport(to)
- Effect: ¬At(p,from) ∧ At(p,to)
A sketch of this representation follows the example.
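A minimal sketch of this STRIPS-style representation. The encoding of literals as strings with '~' marking negation is a hypothetical choice; the fly instance is a ground version of the schema above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    precond: frozenset      # literals that must hold before the action
    effect: frozenset       # literals added; '~L' deletes L

def applicable(state, action):
    """An action is applicable if its preconditions hold in the state."""
    return action.precond <= state

def apply_action(state, action):
    """Progress a state through an action: remove deleted literals, add positives."""
    adds = {e for e in action.effect if not e.startswith('~')}
    dels = {e[1:] for e in action.effect if e.startswith('~')}
    return (state - dels) | adds

fly = Action('Fly(P1,JFK,SFO)',
             frozenset({'At(P1,JFK)', 'Plane(P1)', 'Airport(JFK)', 'Airport(SFO)'}),
             frozenset({'~At(P1,JFK)', 'At(P1,SFO)'}))

s0 = frozenset({'At(P1,JFK)', 'Plane(P1)', 'Airport(JFK)', 'Airport(SFO)'})
print(applicable(s0, fly), apply_action(s0, fly))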
Semantics
Airport example: At(P1,JFK) ∧ At(P2,SFO) ∧ Plane(P1) ∧ Plane(P2) ∧ Airport(JFK) ∧ Airport(SFO)
satisfies At(p,from) ∧ Plane(p) ∧ Airport(from) ∧ Airport(to)
with θ = {p/P1, from/JFK, to/SFO}
Types of State-Space Search
Forward search:
- Also called progression planning
- Does not address the irrelevant-action problem
- The branching factor is huge
Types of State-Space Search
Backward search:
- Also called regression planning
- Needs a predecessor function
- Considers only relevant actions
- Actions must be consistent (they must not undo desired literals)
Backward State-Space Search
Strategy: given a goal description G, let A be a relevant and consistent action. The predecessor is computed by:
- deleting the positive effects of A
- adding the preconditions of A
A regression sketch follows.
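This sketch reuses the Action encoding and the fly instance from the earlier planning sketch; the relevance and consistency checks match the strategy above.

```python
def regress(goal, action):
    """Predecessor of `goal` through `action`: delete the action's positive
    effects, add its preconditions. Returns None if the action is irrelevant
    or inconsistent with the goal."""
    adds = {e for e in action.effect if not e.startswith('~')}
    if not (adds & goal):
        return None                     # irrelevant: achieves no goal literal
    if any(('~' + g) in action.effect for g in goal):
        return None                     # inconsistent: would undo a goal literal
    return (goal - adds) | action.precond

g = {'At(P1,SFO)'}
print(regress(g, fly))   # the preconditions of Fly, with At(P1,SFO) removed
```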
Example
Goal: At(P1,JFK). Action: Fly(P1,CHI,JFK).
Predecessor: delete At(P1,JFK); add At(P1,CHI).
[diagram: airports SFO, CHI, JFK]
Partial-Order Planning
Forward and backward search look for totally ordered plans:
Start → action1 → action2 → … → Goal
But ideally we wish to decompose the goal into subgoals (subgoal1, subgoal2, subgoal3), solve them independently, and combine the results.
Consistency
A consistent plan is one in which there are no cycles in the ordering constraints and no conflicts with the causal links. A consistent plan with no open preconditions is a solution.
Pseudocode
1. The initial plan contains Start, Finish, and the ordering constraint Start < Finish; the open preconditions are those of Finish.
2. Pick an open precondition p and find an action A that achieves it.
3. Check whether the plan is a solution (are there open preconditions left?). If there are, return to step 2.
Enforced Consistency
Consistency is enforced as follows:
- The causal link A --p--> B and the ordering constraint A < B are added to the plan.
- If A is new, add Start < A and A < Finish.
- If a conflict exists between the causal link A --p--> B and an action C, then order B < C or C < A.
Example
Start: At(Flat,Axle) ∧ At(Spare,Trunk)
Goal: At(Spare,Axle)
Actions:
- Remove(Spare,Trunk): Precond: At(Spare,Trunk); Effect: ¬At(Spare,Trunk) ∧ At(Spare,Ground)
- Remove(Flat,Axle)
- PutOn(Spare,Axle)
- LeaveOvernight
Example Actions
- Remove(Flat,Axle): Precond: At(Flat,Axle); Effect: ¬At(Flat,Axle) ∧ At(Flat,Ground)
- PutOn(Spare,Axle): Precond: At(Spare,Ground) ∧ ¬At(Flat,Axle); Effect: ¬At(Spare,Ground) ∧ At(Spare,Axle)
- LeaveOvernight: Effect: ¬At(Spare,Ground) ∧ …
Example Plan
[diagram: partial plan with Start, Remove(Spare,Trunk), PutOn(Spare,Axle), and Finish, linked by the preconditions At(Spare,Trunk), At(Spare,Ground), ¬At(Flat,Axle), and the goal At(Spare,Axle)]
Example Plan Conflict
[diagram: LeaveOvernight deletes At(Spare,Ground) and At(Spare,Trunk), threatening the causal links into PutOn(Spare,Axle) and Remove(Spare,Trunk); this branch fails]
Example Final Plan
[diagram: Start → Remove(Spare,Trunk) and Remove(Flat,Axle) → PutOn(Spare,Axle) → Finish, with causal links for At(Spare,Trunk), At(Flat,Axle), At(Spare,Ground), ¬At(Flat,Axle), and At(Spare,Axle)]
Planning in the Real World
Outline: Time and Resources; Hierarchical Task Networks; Conditional Planning; Execution Monitoring and Replanning; Continuous Planning; Multiagent Planning; Summary
Fig. 12.2
Resource Constraints
Resource(k): k units of the resource are needed by the action; the resource is reduced by k for the duration of the action.
Fig. 12.4
Hierarchical Task Network
A high-order goal decomposes into Subgoal 1, Subgoal 2, …, Subgoal n.
Example: Build House → Get Land, Construct, Pay Builder.
Fig. 12.5
Hierarchical Task Network
HTN planning is complicated (undecidable), and recursion is a problem, but it allows subtask sharing and is in general more efficient than naive planning (linear rather than exponential). Success story: O-Plan (Bell and Tate ’85) helps develop production plans for Hitachi (350 products, 35 assembly machines, many operations).
Conditional Planning
What to do with incomplete and incorrect information? Assume “bounded indeterminacy”. Solution: construct a conditional plan with branches that consider all sorts of contingencies (including sensing actions).
Actions
Actions can have disjunctive effects. Example: vacuum cleaner
Action(Left, Precond: AtRight, Effect: AtLeft ∨ AtRight)
Effects
We can also add conditional effects. Example: vacuum cleaner
Action(Suck, Precond: none, Effect: (when AtLeft: CleanL) ∧ (when AtRight: CleanR))
Fig. 12.9
Execution Monitoring
Check whether everything is going according to the plan. Replanning agents repair the old plan if something goes wrong, using action monitoring to repair and continue with the plan.
Continuous Planning
The agent persists in the environment indefinitely and is always part of the way through executing a plan. Example: the blocks-world problem.
Multiagent Planning
Interaction can be:
- Cooperative: a joint plan; the goal is achieved when each agent performs its assigned actions.
- Competitive
Cooperative Example: Play Tennis (doubles)
Plan 1:
A: [Go(A, [Right, Baseline]), Hit(A, Ball)]
B: [NoOp(B), NoOp(B)]
Cooperative
A common solution is to have a convention (a constraint on joint plans). Conventions can arise in evolutionary processes; examples include ant colonies and the flocking behavior of birds. Besides conventions, agents may also use communication.
Maximum Expected Utility
EU(A|E) = Σ_i P(Result_i(A) | Do(A), E) U(Result_i(A))
Principle of Maximum Expected Utility: choose the action A with the highest EU(A|E).
Example: Robot
- Turn Right: hits wall (P = 0.1, U = 0); finds target (P = 0.9, U = 10) → EU = 9
- Turn Left: falls in water (P = 0.3, U = 0); finds target (P = 0.7, U = 10) → EU = 7
Choose action “Turn Right”. A sketch of the computation follows.
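A minimal sketch using the (probability, utility) pairs from the example above.

```python
def expected_utility(outcomes):
    """EU of an action: sum of P(result) * U(result) over its outcomes."""
    return sum(p * u for p, u in outcomes)

actions = {
    'Turn Right': [(0.1, 0), (0.9, 10)],   # hits wall / finds target
    'Turn Left':  [(0.3, 0), (0.7, 10)],   # falls in water / finds target
}
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))   # Turn Right 9.0
```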
Utility Functions
Television game show: assume you have already won $1,000,000. Flip a coin:
- Tails (P = 0.5): $3,000,000
- Heads (P = 0.5): $0
Utility Functions
EU(Accept) = 0.5 U(S_k) + 0.5 U(S_k+3M)
EU(Decline) = U(S_k+1M)
Assume: U(S_k) = 5, U(S_k+1M) = 8, U(S_k+3M) = 10
Then EU(Accept) = 0.5(5) + 0.5(10) = 7.5 < 8 = EU(Decline), so the rational choice is to decline the gamble.
Fig. 16.2
Risk-Averse
In the positive region the slope of the utility curve decreases (concave curve): the utility of a gamble is less than the utility of its expected monetary value. [plot: U vs. $]
Risk-Seeking
In the negative, “desperate” region the curve is risk-seeking (convex). A linear curve corresponds to risk neutrality. [plots: U vs. $]
Connection to AI
Choices are only as good as the preferences they are based on. If a user embeds contradictory preferences in our intelligent agents, the results may be negative; with reasonable preferences, the results may be positive.
Assessing Utilities
Best possible outcome: A_max. Worst possible outcome: A_min.
Use normalized utilities: U(A_max) = 1, U(A_min) = 0.
Decision Networks
A decision network (also called an influence diagram) is a mechanism for making rational decisions. It combines a Bayesian network with additional node types.
Types of Nodes
- Chance nodes: represent random variables (as in a Bayesian network)
- Decision nodes: represent choices of action
- Utility nodes: represent the agent’s utility function
Fig. 16.5
The Value of Information
An important aspect of decision making is deciding what questions to ask. Example: an oil company wishes to buy one of n blocks of ocean drilling rights.
The Value of Information
Exactly one block contains oil worth C dollars, and the price of each block is C/n. A seismologist offers the company the results of a survey of block number 3. How much should the company pay for this information?
The Value of Information
The value of the information is the expected improvement in utility compared with making the decision without it. A worked sketch of the calculation for the oil example follows.
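A sketch of the standard calculation for this example, with C normalized to 1; the function name is illustrative. Without the survey, every block's expected value equals its price, so the expected profit of buying is 0, and the survey turns out to be worth C/n.

```python
from fractions import Fraction

def value_of_survey(n):
    """Expected profit gained by learning whether block 3 has oil."""
    C = Fraction(1)
    price = C / n
    # Survey says "oil in block 3" (prob 1/n): buy it, profit C - C/n.
    # Survey says "no oil" (prob (n-1)/n): buy another block, whose
    # expected value is C/(n-1); profit C/(n-1) - C/n.
    with_info = (Fraction(1, n) * (C - price)
                 + Fraction(n - 1, n) * (C / (n - 1) - price))
    return with_info            # simplifies to C/n

print(value_of_survey(10))      # 1/10: the information is worth C/n
```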