ARTIFICIAL INTELLIGENCE
CS 414 ARTIFICIAL INTELLIGENCE LECTURE 02
Agents
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
Solving problems by searching
We will consider the problem of designing goal-based agents in fully observable, deterministic, discrete, known environments. The agent must find a sequence of actions that reaches the goal. The performance measure is defined by whether the goal is reached and by how "expensive" the path to the goal is.
Search problem components
Initial state: the state in which the agent starts
Actions: the actions available to the agent in each state
Transition model: what state results from performing a given action in a given state?
Goal state: the state (or condition) the agent is trying to reach
Path cost: assume that it is a sum of nonnegative step costs
Search Space Definitions
State: a description of a possible state of the world
Initial state: the state in which the agent starts the search
Goal test: conditions the agent is trying to meet
Goal state: any state which meets the goal condition
Action: a function that maps (transitions) from one state to another
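These definitions can be collected into one small interface; a minimal sketch in Python (the class and method names here are illustrative, not from the lecture):

```python
# Minimal search-problem interface mirroring the definitions above.
# Names (SearchProblem, actions, result, is_goal, step_cost) are illustrative.

class SearchProblem:
    def __init__(self, initial_state):
        self.initial_state = initial_state

    def actions(self, state):
        """Actions applicable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state reached by doing `action` in `state`."""
        raise NotImplementedError

    def is_goal(self, state):
        """Goal test: does `state` meet the goal condition?"""
        raise NotImplementedError

    def step_cost(self, state, action):
        """Nonnegative cost of one step; path cost is the sum of these."""
        return 1
```

A concrete problem then subclasses this and fills in the four methods.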
Example: Romania
On vacation in Romania; currently in Arad, and the flight leaves tomorrow from Bucharest.
Example: Romania
Initial state: Arad
Actions: go from one city to another
Transition model: if you go from city A to city B, you end up in city B
Goal state: Bucharest
Path cost: total distance traveled
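The Romania problem is naturally represented as a weighted graph; a partial sketch (the distances are the standard ones from the AIMA textbook map, and only the Arad-to-Bucharest region is shown):

```python
# Partial road map for the Romania example (distances in km, standard
# AIMA map; only the region between Arad and Bucharest is included).
romania = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

def path_cost(path):
    """Path cost = total distance traveled along a list of cities."""
    return sum(romania[a][b] for a, b in zip(path, path[1:]))
```

For example, Arad-Sibiu-Fagaras-Bucharest costs 140 + 99 + 211 = 450 km, while Arad-Sibiu-Rimnicu Vilcea-Pitesti-Bucharest costs 418 km.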
State space
The initial state, actions, and transition model define the state space of the problem.
State space: the set of all states reachable from the initial state by any sequence of actions.
It can be represented as a directed graph where the nodes are states and the links between nodes are actions.
Building Goal-Based Agents
What are the key questions that need to be addressed?
What goal does the agent need to achieve?
What knowledge does the agent need?
What actions does the agent need to perform?
Search Example: Route Finding
Actions: go straight, turn left, turn right
Goal: shortest? fastest? most scenic?
Search Example: 8-Puzzle
Actions: move tiles (e.g., Move2Down)
Goal: reach a certain configuration
Example: Vacuum World
States: agent location and dirt locations. How many possible states?
Actions: Left, Right, Suck
(Vacuum world state space graph)
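The "how many possible states?" question can be answered by enumeration: with two squares, a state is (agent location, dirt in A?, dirt in B?), giving 2 × 2 × 2 = 8 states. A quick check:

```python
from itertools import product

# A vacuum-world state: (agent location, dirt in square A?, dirt in square B?)
states = list(product(["A", "B"], [True, False], [True, False]))

# 2 agent locations x 2 x 2 dirt configurations = 8 possible states
num_states = len(states)
```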
Example: River Crossing
A farmer wants to get his cabbage, sheep, and wolf across a river. He has a boat that holds only himself and at most one item. Left unsupervised, the wolf bites the sheep and the sheep eats the cabbage. How should a computer solve this?
Example: River Crossing
State space S: all valid configurations
Initial states: {(CSDF, _)} ⊆ S
Goal states: G = {(_, CSDF)} ⊆ S
Cost(s, s') = 1 for all transitions
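A state can be encoded as the set of items on the left bank plus the farmer's side of the river; "valid configuration" then becomes a small predicate. A sketch (the representation and names are illustrative):

```python
# River-crossing safety check. A state is (left_bank, farmer_side), where
# left_bank is a subset of {"cabbage", "sheep", "wolf"} and farmer_side
# is "L" or "R". The representation is illustrative, not from the slides.

ITEMS = {"cabbage", "sheep", "wolf"}

def is_safe(left_bank, farmer_side):
    right_bank = ITEMS - left_bank
    for bank, side in ((left_bank, "L"), (right_bank, "R")):
        if farmer_side != side:          # this bank is unsupervised
            if "wolf" in bank and "sheep" in bank:
                return False             # wolf bites sheep
            if "sheep" in bank and "cabbage" in bank:
                return False             # sheep eats cabbage
    return True
```

The valid state space S is then exactly the set of (left_bank, farmer_side) pairs for which is_safe returns True.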
Example: River Crossing
State space S: (figure of the full state-space graph)
Search Example: Water Jugs Problem
Given 4-liter and 3-liter pitchers, how do you get exactly 2 liters into the 4-liter pitcher?
Water Jugs Problem
State: (x, y) = number of liters in the 4-liter and 3-liter pitchers, respectively
Actions: empty, fill, pour water between pitchers
Initial state: (0, 0)
Goal state: (2, *)
Actions / Successor Functions
1. (x, y | x < 4) → (4, y)  "Fill 4"
2. (x, y | y < 3) → (x, 3)  "Fill 3"
3. (x, y | x > 0) → (0, y)  "Empty 4"
4. (x, y | y > 0) → (x, 0)  "Empty 3"
5. (x, y | x+y ≥ 4 and y > 0) → (4, y - (4 - x))  "Pour from 3 into 4 until 4 is full"
6. (x, y | x+y ≥ 3 and x > 0) → (x - (3 - y), 3)  "Pour from 4 into 3 until 3 is full"
7. (x, y | x+y ≤ 4 and y > 0) → (x+y, 0)  "Pour all water from 3 into 4"
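Rules 1-7 translate directly into a successor function; a sketch:

```python
# Successor function for the water-jugs problem, implementing rules 1-7.
# A state is (x, y): liters in the 4-liter and 3-liter pitchers.

def successors(state):
    x, y = state
    result = set()
    if x < 4:
        result.add((4, y))                  # 1. Fill 4
    if y < 3:
        result.add((x, 3))                  # 2. Fill 3
    if x > 0:
        result.add((0, y))                  # 3. Empty 4
    if y > 0:
        result.add((x, 0))                  # 4. Empty 3
    if x + y >= 4 and y > 0:
        result.add((4, y - (4 - x)))        # 5. Pour from 3 into 4 until full
    if x + y >= 3 and x > 0:
        result.add((x - (3 - y), 3))        # 6. Pour from 4 into 3 until full
    if x + y <= 4 and y > 0:
        result.add((x + y, 0))              # 7. Pour all of 3 into 4
    return result
```

For example, from (0, 0) only rules 1 and 2 apply, giving the successor set {(4, 0), (0, 3)}.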
State space
(figure of the reachable (x, y) states)
Solving problems by searching
Problem solving
We want: to automatically solve a problem
We need: a representation of the problem, and algorithms that use some strategy to solve the problem defined in that representation
Uninformed Search on Trees
Uninformed means we only know:
the goal test
the successors() function
but not which non-goal states are better.
For now, also assume the state space is a tree.
The search process constructs a "search tree":
the root is the start state
leaf nodes are unexpanded nodes (on the Frontier list) or "dead ends" (nodes that aren't goals and have no successors, because no operators were applicable)
the goal node is the last leaf node found
Uninformed Search Strategies
Uninformed search: strategies that order nodes without using any domain-specific information, i.e., without using any information stored in a state.
BFS: breadth-first search. Queue (FIFO) used for the Frontier: remove from the front, add to the back.
DFS: depth-first search. Stack (LIFO) used for the Frontier: remove from the front, add to the front.
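The only difference between BFS and DFS is how the Frontier is ordered; a sketch using Python's deque for both (node names follow the example graph later in the lecture: S's children are A, B, C and A's children are D, E):

```python
from collections import deque

# BFS vs DFS differ only in the Frontier discipline (FIFO vs LIFO).

# BFS: remove from the front, add children to the back.
frontier = deque(["A", "B", "C"])          # Frontier after expanding S
frontier.popleft()                         # expand A ...
frontier.extend(["D", "E"])                # ... its children go to the back
bfs_next = frontier.popleft()              # "B": oldest node expanded next

# DFS: remove from the front, add children to the front.
frontier = deque(["A", "B", "C"])          # Frontier after expanding S
frontier.popleft()                         # expand A ...
frontier.extendleft(reversed(["D", "E"]))  # ... its children go to the front
dfs_next = frontier.popleft()              # "D": newest node expanded next
```

Note that extendleft reverses its argument as it pushes, so passing reversed(["D", "E"]) keeps the children in D, E order at the front.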
Depth First Search ( DFS )
Expand deepest unexpanded node
Depth First Search (DFS)
Put the start state in the agenda
Loop:
  Get a state from the agenda
  If it is a goal, then return
  Expand the state (put its children at the front of the agenda)
Avoiding loops:
  Don't add a node to the agenda if it's already in the agenda
  Don't expand a node (or add it to the agenda) if it has already been expanded
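The loop above, with both loop-avoidance rules, can be sketched as follows (function and variable names are illustrative; the agenda is a Python list used as a LIFO stack, which is equivalent to adding children at the front):

```python
# DFS with loop avoidance, as described above. `successors(state)` returns
# a list of child states; `is_goal(state)` is the goal test.

def dfs(start, successors, is_goal):
    agenda = [start]              # list used as a stack (LIFO)
    expanded = set()
    parent = {start: None}        # also records states already on the agenda
    while agenda:
        state = agenda.pop()      # take the most recently added state
        if is_goal(state):
            path = []             # reconstruct the path via parent links
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        expanded.add(state)
        # push children in reverse so the first-listed child is expanded first
        for child in reversed(successors(state)):
            if child not in expanded and child not in parent:
                parent[child] = state
                agenda.append(child)
    return None                   # no goal state is reachable
```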
Depth-First Search (DFS)
Example graph (from the figure): S is the start node; S's children are A, B, C; A's children are D, E; D's child is H; E's child is G (the goal).
Trace of the search:

expnd. node    Frontier
               {S}
S (not goal)   {A,B,C}
A (not goal)   {D,E,B,C}
D (not goal)   {H,E,B,C}
H (not goal)   {E,B,C}
E (not goal)   {G,B,C}
G (goal)       {B,C}  no expand

# of nodes tested: 6, expanded: 5
path: S, A, E, G   cost: 15
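The trace above can be reproduced in code. The graph structure follows the trace (S's children are A, B, C; A's are D, E; D's is H; E's is G); the edge costs on the solution path (S-A = 5, A-E = 4, E-G = 6) are read off the slide's figure, and the remaining costs are omitted:

```python
# DFS reproducing the traced example: remove from the front of the
# Frontier, add children to the front. Edge costs are only given for the
# edges on the solution path, as read from the figure.

graph = {"S": ["A", "B", "C"], "A": ["D", "E"], "D": ["H"],
         "H": [], "E": ["G"], "G": [], "B": [], "C": []}
cost = {("S", "A"): 5, ("A", "E"): 4, ("E", "G"): 6}

def dfs(start, goal):
    frontier = [start]
    parent = {start: None}        # also records states already seen
    expanded = set()
    while frontier:
        state = frontier.pop(0)   # remove from the front
        if state == goal:
            path = []             # reconstruct the path via parent links
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        expanded.add(state)
        children = [c for c in graph[state]
                    if c not in expanded and c not in parent]
        for c in children:
            parent[c] = state
        frontier = children + frontier   # add children to the front
    return None

path = dfs("S", "G")                                  # ["S", "A", "E", "G"]
total = sum(cost[edge] for edge in zip(path, path[1:]))   # 15
```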