CS 343H: Artificial Intelligence Week 2a: Uninformed Search

Announcements PS0 due this Thurs 1/23 by 11:59 pm All remaining reading response deadlines are firm Printing slides before lecture No laptops, phones, devices during lecture please

Today Agents that Plan Ahead Search Problems Uninformed Search Methods Depth-First Search Breadth-First Search Uniform-Cost Search

Recall: Rational Agents An agent is an entity that perceives and acts. A rational agent selects actions that maximize its utility function. Characteristics of the percepts, environment, and action space dictate techniques for selecting rational actions. [Diagram: the agent's sensors receive percepts from the environment; its actuators return actions]

Reflex Agents Reflex agents: Choose action based on current percept (and maybe memory) May have memory or a model of the world's current state Do not consider the future consequences of their actions Consider how the world IS Can a reflex agent be rational?

Reflex Agents

Reflex Agents

Planning Agents Plan ahead Ask “what if” Decisions based on (hypothesized) consequences of actions Must have a model of how the world evolves in response to actions Consider how the world WOULD BE

Planning Agents: Example 1 [demo: plan fast ]

Planning Agents: Example 2 [demo: plan slow]

Quiz: Reflex or Planning? Select which type of agent is described: Pacman, where Pacman is programmed to move in the direction of the closest food pellet Pacman, where Pacman is programmed to move in the direction of the closest food pellet, unless there is a ghost in that direction that is less than 3 steps away. A navigation system that first considers all possible routes to the destination, then selects the shortest route.

Search Problems A search problem consists of: A state space A successor function (with actions, costs) A start state and a goal test A solution is a sequence of actions (a plan) which transforms the start state to a goal state [Figure: Pacman example, where successors look like ("N", 1.0) and ("E", 1.0)]

Example: Romania State space: cities Successor function: roads, i.e., go to an adjacent city with cost = distance Start state: Arad Goal test: is state == Bucharest? Solution?
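
As a concrete illustration, here is a minimal Python sketch of this formulation. The class and method names are made up for this example (they are not necessarily the course project's API), and only a fragment of the Romania map is included.

```python
class RouteProblem:
    """A search problem over an explicit graph: states are city names,
    successors are (next_state, action, step_cost) triples."""

    def __init__(self, graph, start, goal):
        self.graph = graph          # dict: city -> list of (adjacent city, road distance)
        self.start = start
        self.goal = goal

    def get_start_state(self):
        return self.start

    def is_goal_state(self, state):
        return state == self.goal

    def get_successors(self, state):
        # Go to an adjacent city; the action cost is the road distance.
        return [(city, "go to " + city, dist) for city, dist in self.graph.get(state, [])]


# A small fragment of the Romania road map (distances from the textbook's figure)
romania_fragment = {
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
}
problem = RouteProblem(romania_fragment, start="Arad", goal="Bucharest")
```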

What's in a State Space? The world state specifies every last detail of the environment A search state keeps only the details needed (abstraction) Problem 1: Pathing States: (x,y) location Actions: NSEW Successor: update location only Goal test: is (x,y)=END Problem 2: Eat-All-Dots States: {(x,y), dot booleans} Actions: NSEW Successor: update location and possibly a dot boolean Goal test: dots all false

State Space Sizes? World state: Agent positions: 120 Food count: 30 Ghost positions: 12 Agent facing: NSEW How many world states? 120 x (2^30) x (12^2) x 4 States for pathing? 120 States for eat-all-dots? 120 x (2^30)
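
A quick sanity check of these counts in Python (the numbers are the ones on this slide; a different board would change them):

```python
# World-state count for the slide's example board:
# 120 agent positions, 30 food dots (each present or absent),
# 2 ghosts with 12 possible positions each, 4 facing directions.
agent_positions = 120
food_dots = 30
ghost_positions = 12
num_ghosts = 2
facings = 4

world_states = agent_positions * (2 ** food_dots) * (ghost_positions ** num_ghosts) * facings
pathing_states = agent_positions
eat_all_dots_states = agent_positions * (2 ** food_dots)

print(world_states)         # 74217034874880, roughly 7.4e13
print(pathing_states)       # 120
print(eat_all_dots_states)  # 128849018880, roughly 1.3e11
```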

State Space Graphs State space graph: A mathematical representation of a search problem Nodes: abstracted world configurations Arcs: successors (action results) Goal test is set of goal nodes (maybe only one) In a search graph, each state occurs only once! We can rarely build this graph in memory (so we don't) [Figure: a ridiculously tiny search graph (states S, a, b, c, d, e, f, h, p, q, r, G) for a tiny search problem]

Search Trees A search tree: This is a "what if" tree of plans and outcomes Start state at the root node Children correspond to successors Nodes contain states, correspond to PLANS to those states For most problems, we can never actually build the whole tree [Figure: the root is "this is now / start"; children such as ("N", 1.0) and ("E", 1.0) lead to possible futures]

Quiz Consider this 4-state graph (states S, A, B, G): How big is its search tree (from S)?

Recall: Romania example

Searching with a search tree Expand out possible plans Maintain a fringe of unexpanded plans Try to expand as few tree nodes as possible

General Tree Search Important ideas: Fringe Expansion Exploration strategy Main question: which fringe nodes to explore? Detailed pseudocode is in the book!
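
A rough Python transcription of that pattern (a sketch, not the book's exact pseudocode): the fringe is passed in as a parameter, since the choice of fringe is exactly what distinguishes the strategies that follow. It assumes the problem interface sketched earlier and any fringe object offering push, pop, and is_empty.

```python
def tree_search(problem, fringe):
    """Generic tree search. A fringe entry is (state, plan), where plan is the
    list of actions taken to reach that state from the start."""
    fringe.push((problem.get_start_state(), []))
    while not fringe.is_empty():
        state, plan = fringe.pop()              # the exploration strategy lives here
        if problem.is_goal_state(state):
            return plan                         # a solution: a sequence of actions
        for next_state, action, cost in problem.get_successors(state):
            fringe.push((next_state, plan + [action]))      # expansion
    return None                                 # the fringe emptied out: no solution
```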

Example: Tree Search [Figure: the tiny search graph next to the growing tree, with the fringe (potential plans) highlighted]

State Graphs vs. Search Trees Each NODE in the search tree is an entire PATH in the problem graph. We construct both on demand, and we construct as little as possible. [Figure: the tiny state graph shown beside its search tree]

States vs. Nodes Nodes in state space graphs are problem states Represent an abstracted state of the world Have successors, can be goal / non-goal, have multiple predecessors Nodes in search trees are plans Represent a plan (sequence of actions) which results in the node's state Have a problem state and one parent, a path length, a depth & a cost The same problem state may be achieved by multiple search tree nodes [Figure: a problem state vs. search nodes at depth 5 and depth 6, each with a parent and an action]

Depth First Search Strategy: expand deepest node first Implementation: Fringe is a LIFO stack [Figure: DFS expanding the tiny state graph, shown alongside the corresponding search tree]
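
A self-contained sketch of DFS with the fringe as a LIFO stack. The graph below is a small made-up example in the spirit of the slide's figure; the returned path depends on the order in which successors are pushed.

```python
def depth_first_search(graph, start, goal):
    """Tree-search DFS: the fringe is a LIFO stack of (state, path) pairs.
    There is no cycle checking here, so it can run forever on cyclic graphs."""
    fringe = [(start, [start])]            # a Python list used as a stack
    while fringe:
        state, path = fringe.pop()         # pop() removes the most recent entry: LIFO
        if state == goal:
            return path
        for successor in graph.get(state, []):
            fringe.append((successor, path + [successor]))
    return None

# Illustrative graph (loosely modeled on the slide's tiny example)
tiny_graph = {
    "S": ["d", "e", "p"], "d": ["b", "c", "e"], "e": ["h", "r"],
    "b": ["a"], "c": ["a"], "h": ["p", "q"], "r": ["f"],
    "f": ["c", "G"], "p": ["q"],
}
print(depth_first_search(tiny_graph, "S", "G"))  # ['S', 'e', 'r', 'f', 'G'] with this ordering
```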

Quiz Which solution would depth-first search find if run on the graph below? Assume ties are broken alphabetically. For example, a partial plan S->X->A would be expanded before S->X->B; similarly, S->A->Z would be expanded before S->B->A

Search Algorithm Properties Complete? Guaranteed to find a solution if one exists? Optimal? Guaranteed to find the least cost path? Time complexity? ~How many nodes get expanded? Space complexity? ~How big can the fringe get?

Search Algorithm Properties Complete? Guaranteed to find a solution if one exists? Optimal? Guaranteed to find the least cost path? Time complexity? Space complexity? Variables: n = number of states in the problem (huge) b = the average branching factor (the average number of successors) C* = cost of least cost solution s = depth of the shallowest solution m = max depth of the search tree

DFS Algorithm: Depth First Search. Complete: N, Optimal: N, Time: O(b^LMAX) = infinite, Space: O(LMAX) = infinite [Figure: a tiny graph where action b loops from START back to START and action a reaches GOAL] Infinite paths make DFS incomplete… How can we fix this?

DFS With cycle checking, DFS is complete.* When is DFS optimal? Algorithm: DFS w/ Path Checking. Complete: Y, Optimal: N, Time: O(b^m), Space: O(bm) [Figure: a search tree with m tiers: 1 node, b nodes, b^2 nodes, …, b^m nodes] * Or graph search (next lecture).

Quiz Which are true about DFS? (b is branching factor, m is depth of search tree.) At any given time during the search, the number of nodes on the fringe can be no larger than bm. At any given time in the search, the number of nodes on the fringe can be as large as b^m. The number of nodes expanded throughout the entire search can be no larger than bm. The number of nodes expanded throughout the entire search can be as large as b^m.

Breadth First Search Strategy: expand shallowest node first Implementation: Fringe is a FIFO queue [Figure: BFS expanding the tiny state graph in search tiers, shown alongside the search tree]
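
The same sketch with a FIFO fringe gives BFS; only the fringe data structure changes (same illustrative graph as in the DFS sketch).

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Tree-search BFS: the fringe is a FIFO queue of (state, path) pairs,
    so every plan of length k is expanded before any plan of length k + 1."""
    fringe = deque([(start, [start])])
    while fringe:
        state, path = fringe.popleft()     # popleft() removes the oldest entry: FIFO
        if state == goal:
            return path
        for successor in graph.get(state, []):
            fringe.append((successor, path + [successor]))
    return None

tiny_graph = {
    "S": ["d", "e", "p"], "d": ["b", "c", "e"], "e": ["h", "r"],
    "b": ["a"], "c": ["a"], "h": ["p", "q"], "r": ["f"],
    "f": ["c", "G"], "p": ["q"],
}
print(breadth_first_search(tiny_graph, "S", "G"))  # ['S', 'e', 'r', 'f', 'G']: fewest transitions
```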

Quiz Which solution would BFS find if run on this graph?

BFS When is BFS optimal? Algorithm: DFS w/ Path Checking. Complete: Y, Optimal: N, Time: O(b^m), Space: O(bm) Algorithm: BFS. Complete: Y, Optimal: N*, Time: O(b^s), Space: O(b^s) [Figure: a search tree with s tiers: 1 node, b nodes, b^2 nodes, …, b^s nodes, continuing down to b^m nodes]

BFS complexity: concretely Russell & Norvig

Quiz Which are true about BFS? (b is the branching factor, s is the depth of the shallowest solution) At any given time during the search, the number of nodes on the fringe can be no larger than bs. At any given time during the search, the number of nodes on the fringe can be as large as b^s. The number of nodes considered throughout the entire search can be no larger than bs. The number of nodes considered throughout the entire search can be as large as b^s.

Comparisons When will BFS outperform DFS? When will DFS outperform BFS?

BFS or DFS?

Iterative Deepening Iterative deepening: BFS using DFS as a subroutine: Do a DFS which only searches for paths of length 1 or less. If “1” failed, do a DFS which only searches paths of length 2 or less. If “2” failed, do a DFS which only searches paths of length 3 or less. …and so on. Algorithm: DFS w/ Path Checking. Complete: Y, Optimal: N, Time: O(b^m), Space: O(bm) Algorithm: BFS. Complete: Y, Optimal: N*, Time: O(b^s), Space: O(b^s) Algorithm: ID. Complete: Y, Optimal: N*, Time: O(b^s), Space: O(bs)
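
A hedged sketch of that idea: a depth-limited DFS, wrapped in a loop that raises the limit until a solution appears (the max_limit cutoff is only there to keep this sketch finite).

```python
def depth_limited_dfs(graph, start, goal, limit):
    """DFS that never extends a plan beyond `limit` actions."""
    fringe = [(start, [start])]
    while fringe:
        state, path = fringe.pop()
        if state == goal:
            return path
        if len(path) - 1 < limit:              # number of actions used so far
            for successor in graph.get(state, []):
                fringe.append((successor, path + [successor]))
    return None

def iterative_deepening_search(graph, start, goal, max_limit=100):
    """Run DFS with limits 1, 2, 3, ... so we get BFS-like shallowest-first
    behavior while only ever storing a DFS-sized fringe."""
    for limit in range(1, max_limit + 1):
        result = depth_limited_dfs(graph, start, goal, limit)
        if result is not None:
            return result
    return None
```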

Costs on Actions [Figure: the tiny graph from before, now with costs on its edges (e.g., 1, 2, 3, 4, 8, 9, 15)] Notice that BFS finds the shortest path in terms of number of transitions. It does not find the least-cost path.

Uniform Cost Search Strategy: expand cheapest node first Implementation: Fringe is a priority queue (priority: cumulative cost) [Figure: UCS on the tiny weighted graph, with cumulative path costs on the search tree and cost contours on the graph]
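
A hedged sketch of UCS using Python's heapq module as the priority queue, with cumulative path cost as the priority. The edge costs below are made up for illustration, not copied from the figure.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the cheapest fringe node first. `graph` maps a state to a list
    of (successor, step cost) pairs; the priority is cumulative path cost."""
    fringe = [(0, start, [start])]                 # (cumulative cost, state, path)
    while fringe:
        cost, state, path = heapq.heappop(fringe)  # cheapest entry comes out first
        if state == goal:
            return cost, path
        for successor, step_cost in graph.get(state, []):
            heapq.heappush(fringe, (cost + step_cost, successor, path + [successor]))
    return None

# Made-up edge costs on the same illustrative graph
weighted_graph = {
    "S": [("d", 3), ("e", 9), ("p", 1)],
    "d": [("b", 1), ("c", 8), ("e", 2)],
    "e": [("h", 8), ("r", 2)],
    "p": [("q", 15)],
    "r": [("f", 1)],
    "f": [("c", 1), ("G", 2)],
    "b": [("a", 2)], "c": [("a", 2)], "h": [("p", 4), ("q", 4)],
}
print(uniform_cost_search(weighted_graph, "S", "G"))  # (10, ['S', 'd', 'e', 'r', 'f', 'G'])
```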

Priority Queue Refresher A priority queue is a data structure in which you can insert and retrieve (key, value) pairs with the following operations: pq.push(key, value) inserts (key, value) into the queue. pq.pop() returns the key with the lowest value, and removes it from the queue. You can decrease a key’s priority by pushing it again Unlike a regular queue, insertions aren’t constant time, usually O(log n) We’ll need priority queues for cost-sensitive search methods
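
A minimal heapq-backed sketch of a priority queue with exactly those operations. The "push again to decrease priority" behavior is handled lazily here: stale, higher-value entries are simply skipped when popped. This is one reasonable implementation, not the only one.

```python
import heapq

class PriorityQueue:
    """pq.push(key, value) inserts; pq.pop() returns the key with the lowest
    value. Pushing an existing key with a lower value decreases its priority;
    pushes with a higher value are ignored (their heap entries go stale)."""

    def __init__(self):
        self._heap = []      # entries are (value, key); heapq keeps the minimum on top
        self._best = {}      # lowest value pushed so far for each key

    def push(self, key, value):
        heapq.heappush(self._heap, (value, key))        # O(log n)
        if value < self._best.get(key, float("inf")):
            self._best[key] = value

    def pop(self):
        while self._heap:
            value, key = heapq.heappop(self._heap)      # O(log n)
            if self._best.get(key) == value:            # skip stale entries
                del self._best[key]
                return key
        raise IndexError("pop from an empty priority queue")

    def is_empty(self):
        return not self._best


pq = PriorityQueue()
pq.push("plan A", 5)
pq.push("plan B", 3)
pq.push("plan A", 1)     # decrease the priority of "plan A"
print(pq.pop())          # plan A
print(pq.pop())          # plan B
```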

Quiz Which solution would uniform cost search find if run on the graph below?

Uniform Cost Search Remember: explores increasing cost contours (c ≤ 1, c ≤ 2, c ≤ 3, …)

Uniform Cost Search Algorithm: DFS w/ Path Checking. Complete: Y, Optimal: N, Time: O(b^m), Space: O(bm) Algorithm: BFS. Complete: Y, Optimal: N, Time: O(b^s), Space: O(b^s) Algorithm: UCS. Complete: Y, Optimal: Y, Time: O(b^(C*/ε)), Space: O(b^(C*/ε)) [Figure: a search tree with roughly C*/ε tiers, where C* is the optimal solution cost and ε is the minimum arc cost]

Uniform Cost Issues Remember: explores increasing cost contours (c ≤ 1, c ≤ 2, c ≤ 3, …) The good: UCS is complete and optimal! The bad: Explores options in every “direction” No information about goal location [Figure: cost contours expanding in all directions around Start, ignoring where Goal is]

Which search algorithm?

Search Gone Wrong?

Summary Agents that Plan Ahead Search Problems Uninformed Search Methods Depth-First Search Breadth-First Search Uniform-Cost Search Next time: informed search, A*

If we had arbitrarily large negative costs, we would have to explore the entire state space to get an optimal solution. Any path, no matter how bad it appears, might lead to an arbitrarily large reward (negative cost). Therefore, one would need to exhaust all possible paths to be sure of finding the best one.

Search Heuristics Any estimate of how close a state is to a goal Designed for a particular search problem Examples: Manhattan distance, Euclidean distance [Figure: a Pacman example with heuristic values such as 10, 5, and 11.2]
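
The two examples, written as plain Python functions over (x, y) positions (a sketch; in a real problem the goal position would come from the search problem itself):

```python
import math

def manhattan_distance(position, goal):
    """Grid distance ignoring walls, so on a four-directional grid with unit
    step costs it never overestimates the true path cost."""
    (x1, y1), (x2, y2) = position, goal
    return abs(x1 - x2) + abs(y1 - y2)

def euclidean_distance(position, goal):
    """Straight-line distance, which also ignores walls."""
    (x1, y1), (x2, y2) = position, goal
    return math.hypot(x1 - x2, y1 - y2)

print(manhattan_distance((0, 0), (5, 10)))  # 15
print(euclidean_distance((0, 0), (5, 10)))  # about 11.18
```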

Heuristics

Best First / Greedy Search Expand the node that seems closest… What can go wrong? [demo: greedy]

Best First / Greedy Search A common case: Best-first takes you straight to the (wrong) goal Worst case: like a badly-guided DFS Can explore everything Can get stuck in loops if no cycle checking Like DFS in completeness (finite states w/ cycle checking)
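
A hedged sketch of greedy best-first search: identical in shape to the UCS sketch, except the fringe priority is the heuristic value of a state rather than the cost paid so far. It includes the simple cycle check mentioned above, so it cannot loop forever on a finite graph.

```python
import heapq

def greedy_best_first_search(graph, start, goal, heuristic):
    """Expand the node that *seems* closest to the goal: priority is h(state).
    Fast when the heuristic points the right way, but not optimal in general."""
    fringe = [(heuristic(start), start, [start])]
    expanded = set()                        # simple cycle checking
    while fringe:
        _, state, path = heapq.heappop(fringe)
        if state == goal:
            return path
        if state in expanded:
            continue
        expanded.add(state)
        for successor in graph.get(state, []):
            heapq.heappush(fringe, (heuristic(successor), successor, path + [successor]))
    return None
```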

Extra Work? Failure to detect repeated states can cause exponentially more work (why?)

Graph Search In BFS, for example, we shouldn't bother expanding the circled nodes (why?) [Figure: the BFS search tree on the tiny graph, with repeated states circled]

Graph Search Very simple fix: never expand a state type twice Can this wreck completeness? Why or why not? How about optimality? Why or why not?
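
A hedged sketch of that fix, applied to the BFS sketch from earlier: keep a closed set of states that have already been expanded and never expand them again (the hints slide below recommends a Python set or dict for exactly this).

```python
from collections import deque

def breadth_first_graph_search(graph, start, goal):
    """BFS with a closed set: each state is expanded at most once."""
    fringe = deque([(start, [start])])
    closed = set()                          # states already expanded
    while fringe:
        state, path = fringe.popleft()
        if state == goal:
            return path
        if state in closed:
            continue                        # a repeated state: skip it
        closed.add(state)
        for successor in graph.get(state, []):
            fringe.append((successor, path + [successor]))
    return None
```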

Some Hints Graph search is almost always better than tree search (when not?) Implement your closed list as a dict or set! Nodes are conceptually paths, but better to represent with a state, cost, last action, and reference to the parent node

Best First / Greedy Search Algorithm: Greedy Best-First Search. Complete: Y*, Optimal: N, Time: O(b^m), Space: O(b^m) [Figure: a search tree with m tiers] What do we need to do to make it complete? Can we make it optimal? Next class!

Uniform Cost Search What will UCS do for this graph? What does this mean for completeness? [Figure: a tiny graph where action b loops from START back to START with cost 1 and action a goes from START to GOAL with cost 1]

Best First / Greedy Search Strategy: expand the closest node to the goal [Figure: the tiny weighted graph with a heuristic value h for each state, e.g. h=0 at the goal and larger h values farther away] [demo: greedy]

Example: Tree Search [Figure: the tiny search graph again]

5 Minute Break A Dan Gillick original