Warm-up We’ll often have a warm-up exercise for the 10 minutes before class starts. Here’s the first one… Write the pseudocode for breadth-first search and depth-first search, iterative versions, not recursive. Given: class TreeNode with TreeNode[] children() and boolean isGoal(). Write BFS(TreeNode start)… and DFS(TreeNode start)…
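A minimal Python sketch of one possible answer, assuming a TreeNode class that exposes children() and is_goal() methods (Python renderings of the slide’s Java-style children() and isGoal() stubs); note that the only difference between the two searches is the frontier data structure:

    from collections import deque

    def bfs(start):
        """Breadth-first: the frontier is a FIFO queue, so the shallowest node is expanded first."""
        frontier = deque([start])
        while frontier:
            node = frontier.popleft()
            if node.is_goal():
                return node
            frontier.extend(node.children())
        return None  # no goal reachable

    def dfs(start):
        """Depth-first (iterative): the frontier is a LIFO stack, so the deepest node is expanded first."""
        frontier = [start]
        while frontier:
            node = frontier.pop()
            if node.is_goal():
                return node
            frontier.extend(node.children())
        return None  # no goal reachable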

Announcements Upcoming due dates: Proj 0 due F 5 pm; HW 1 due M 11:59 pm (be careful of the edX timezone, UTC!). Account forms: pick up after lecture (if you like). Step-by-step videos: found in edX Courseware (linked on the edX Syllabus).

CS 188: Artificial Intelligence Search Please retain proper attribution and the reference to ai.berkeley.edu. Thanks! Instructors: Stuart Russell and Pat Virtue

Today Agents that Plan Ahead; Search Problems; Uninformed Search Methods (Depth-First Search, Breadth-First Search, Uniform-Cost Search). General theme in the class: a goal we have in mind, a mathematical abstraction to formalize it, and algorithms that operate on those abstractions.

Reflex Agents Reflex agents: choose an action based on the current percept (and maybe memory); may have memory or a model of the world’s current state; do not consider the future consequences of their actions; consider how the world IS. Examples: blinking your eye (not using your entire thinking capabilities), a vacuum cleaner moving towards the nearest dirt.

Agents that Plan Ahead Planning agents: decisions are based on predicted consequences of actions; must have a transition model (how the world evolves in response to actions); must formulate a goal; consider how the world WOULD BE. Spectrum of deliberativeness: generate a complete, optimal plan offline, then execute; or generate a simple, greedy plan, start executing, and replan when something goes wrong.

Demo Mastermind

Demo Replanning

Search Problems

Search Problems A search problem consists of: a state space; for each state, a set Actions(s) of allowable actions; a transition model Result(s,a); a step cost function c(s,a,s’); a start state and a goal test. A solution is a sequence of actions (a plan) which transforms the start state to a goal state. This often comes back on exams. Note on the goal test: sometimes more than one state satisfies having achieved the goal, for example, “eat all the dots”. The whole formulation is an abstraction of the real world.
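As a concrete reading of these five components, here is a minimal Python sketch of a search problem interface (the method names are illustrative, not the course project’s API):

    class SearchProblem:
        """Abstract search problem: the five components listed on the slide."""
        def start_state(self):                            # the start state
            raise NotImplementedError
        def is_goal(self, state):                         # the goal test
            raise NotImplementedError
        def actions(self, state):                         # Actions(s): allowable actions in s
            raise NotImplementedError
        def result(self, state, action):                  # Result(s, a): transition model
            raise NotImplementedError
        def step_cost(self, state, action, next_state):   # c(s, a, s')
            return 1                                      # default: every step costs 1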

Search Problems Are Models

Example: Travelling in Romania State space: Cities Actions: Go to adjacent city Transition model Result(A, Go(B)) = B Step cost Distance along road link Start state: Arad Goal test: Is state == Bucharest? Solution? Infinitely many solutions! Some better than others
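For illustration, a fragment of the Romania problem could be written against the sketched interface above (the road distances are the usual textbook values; only a few cities are included, so this is a sketch, not the full map):

    ROADS = {  # road-distance fragment of the Romania map
        'Arad':           {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
        'Sibiu':          {'Arad': 140, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
        'Fagaras':        {'Sibiu': 99, 'Bucharest': 211},
        'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97},
        'Pitesti':        {'Rimnicu Vilcea': 97, 'Bucharest': 101},
    }

    class RomaniaProblem(SearchProblem):
        """Routing from Arad to Bucharest, restricted to the map fragment above."""
        def start_state(self):
            return 'Arad'
        def is_goal(self, state):
            return state == 'Bucharest'
        def actions(self, state):
            return list(ROADS.get(state, {}))      # "go to an adjacent city"
        def result(self, state, action):
            return action                          # Result(A, Go(B)) = B
        def step_cost(self, state, action, next_state):
            return ROADS[state][next_state]        # distance along the road link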

What’s in a State Space? The real world state includes every last detail of the environment; a search state abstracts away details not needed to solve the problem. Problem: Pathing. State representation: (x,y) location. Actions: NSEW. Transition model: update location. Goal test: is (x,y) = END? Size: MN states for an M-by-N grid. Problem: Eat-All-Dots. State representation: ((x,y) location, dot booleans). Actions: NSEW. Transition model: update location and possibly a dot boolean. Goal test: all dot booleans false. Size: MN * 2^(MN) states. (A wrong state representation for eat-all-dots would be (x, y, dot count).)
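To get a feel for these sizes, a quick back-of-the-envelope computation (the board dimensions are chosen arbitrarily for illustration):

    M, N = 10, 12                                 # a hypothetical 10 x 12 maze
    pathing_states = M * N                        # one state per (x, y) location
    eat_all_dots_states = M * N * 2 ** (M * N)    # location times one boolean per possible dot
    print(pathing_states)                         # 120
    print(eat_all_dots_states)                    # 120 * 2**120, roughly 1.6e38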

Quiz: Safe Passage Problem: eat all dots while keeping the ghosts perma-scared What does the state representation have to specify? (agent position, dot booleans, power pellet booleans, remaining scared time)

State Space Graphs and Search Trees

State Space Graphs State space graph: A mathematical representation of a search problem Nodes are (abstracted) world configurations Arcs represent transitions resulting from actions The goal test is a set of goal nodes (maybe only one) In a state space graph, each state occurs only once! We can rarely build this full graph in memory (it’s too big), but it’s a useful idea

More Examples Typical finite graph explicitly defined (theoretical CS)

More Examples Typical exponential (or infinite) graph implicitly defined (AI)

Search Trees A search tree is a “what if” tree of plans and their outcomes. The start state is the root node (“this is now”); children correspond to possible action outcomes (“possible futures”). Nodes show states, but correspond to PLANS that achieve those states, so different plans that achieve the same state will be different nodes in the tree. For most problems, we can never actually build the whole tree.

Quiz: State Space Graphs vs. Search Trees Consider this 4-state graph (states S, a, b, G): How big is its search tree (from S)? Answer: infinite, because states can be revisited, so there are arbitrarily long plans. Important: lots of repeated structure in the search tree!

Tree Search

Search Example: Romania

Searching with a Search Tree Expand out potential plans (tree nodes) Maintain a frontier of partial plans under consideration Try to expand as few tree nodes as possible

General Tree Search
function TREE-SEARCH(problem) returns a solution, or failure
    initialize the frontier using the initial state of problem
    loop do
        if the frontier is empty then return failure
        choose a leaf node and remove it from the frontier
        if the node contains a goal state then return the corresponding solution
        expand the chosen node, adding the resulting nodes to the frontier
Important ideas: the frontier, expansion, and the exploration strategy. Main question: which frontier nodes to explore?
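A minimal Python rendering of this loop, parameterized by the frontier’s data structure and written against the illustrative SearchProblem sketch above (a sketch, not the official project code):

    class Stack:                       # LIFO frontier: plugging this in gives depth-first behavior
        def __init__(self): self.items = []
        def push(self, x): self.items.append(x)
        def pop(self): return self.items.pop()
        def is_empty(self): return not self.items

    def tree_search(problem, frontier):
        """Generic tree search: the exploration strategy lives entirely in the frontier."""
        frontier.push([problem.start_state()])       # a plan is a list of states ending at the current one
        while not frontier.is_empty():
            plan = frontier.pop()                    # choose a leaf node and remove it
            state = plan[-1]
            if problem.is_goal(state):
                return plan                          # the corresponding solution
            for action in problem.actions(state):    # expand the chosen node
                child = problem.result(state, action)
                frontier.push(plan + [child])
        return None                                  # the frontier is empty: failure

Calling tree_search(problem, Stack()) behaves as depth-first search; substituting a FIFO queue gives breadth-first search, and a priority queue ordered by cumulative cost gives uniform cost search.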

A Note on Implementation Nodes have state, parent, action, and path-cost. A child of node via action a has: state = result(node.state, a); parent = node; action = a; path-cost = node.path-cost + step-cost(node.state, a, child.state). Extract the solution by tracing back parent pointers, collecting actions.
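A minimal sketch of that node bookkeeping in Python, assuming the illustrative SearchProblem interface above provides result() and step_cost():

    class Node:
        """Search-tree node: a state plus the bookkeeping needed to recover the plan."""
        def __init__(self, state, parent=None, action=None, path_cost=0.0):
            self.state, self.parent, self.action, self.path_cost = state, parent, action, path_cost

        def child(self, problem, action):
            """The child of this node reached by taking `action`."""
            next_state = problem.result(self.state, action)
            return Node(next_state, parent=self, action=action,
                        path_cost=self.path_cost + problem.step_cost(self.state, action, next_state))

        def solution(self):
            """Trace back parent pointers, collecting actions from the start to this node."""
            actions, node = [], self
            while node.parent is not None:
                actions.append(node.action)
                node = node.parent
            return list(reversed(actions))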

Depth-First Search

Depth-First Search Strategy: expand a deepest node first. Implementation: the frontier is a LIFO stack. [Figure: an example state space graph and the corresponding search tree, expanded in depth-first order.]

Search Algorithm Properties

Search Algorithm Properties Complete: guaranteed to find a solution if one exists? Optimal: guaranteed to find the least-cost path? Time complexity? Space complexity? Cartoon of a search tree: b is the branching factor, m is the maximum depth, with solutions at various depths. Number of nodes in the entire tree: 1 + b + b^2 + … + b^m = O(b^m).
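A quick sanity check of that node-count formula, with illustrative values of b and m:

    b, m = 2, 3
    total = sum(b**k for k in range(m + 1))          # 1 + b + b^2 + ... + b^m
    assert total == (b**(m + 1) - 1) // (b - 1)      # closed form of the geometric sum; 15 here
    print(total)                                     # 15, dominated by the b^m = 8 deepest nodes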

Depth-First Search (DFS) Properties What nodes does DFS expand? Some left prefix of the tree; it could process the whole tree! If m is finite, it takes time O(b^m). How much space does the frontier take? Only the siblings on the path to the root, so O(b·m). Is it complete? m could be infinite, so only if we prevent cycles (more later). Is it optimal? No, it finds the “leftmost” solution, regardless of depth or cost.

Breadth-First Search

Breadth-First Search Strategy: expand a shallowest node first. Implementation: the frontier is a FIFO queue. [Figure: an example state space graph and the corresponding search tree, expanded tier by tier (“search tiers”).]

Breadth-First Search (BFS) Properties What nodes does BFS expand? It processes all nodes above the shallowest solution. Let the depth of the shallowest solution be s; then the search takes time O(b^s). How much space does the frontier take? Roughly the last tier, so O(b^s). Is it complete? s must be finite if a solution exists, so yes! Is it optimal? Only if costs are all 1 (more on costs later).

Quiz: DFS vs BFS

Quiz: DFS vs BFS When will BFS outperform DFS? When will DFS outperform BFS? [Demo: dfs/bfs maze water (L2D6)]

Video of Demo Maze Water DFS/BFS (part 1)

Video of Demo Maze Water DFS/BFS (part 2)

Iterative Deepening Idea: get DFS’s space advantage with BFS’s time / shallow-solution advantages. Run a DFS with depth limit 1; if no solution, run a DFS with depth limit 2; if no solution, run a DFS with depth limit 3; and so on. Isn’t that wastefully redundant? Generally most of the work happens in the lowest level searched, so it’s not so bad!
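A minimal sketch of iterative deepening on the same TreeNode interface as the warm-up (children() / is_goal(); the depth cap and return convention are illustrative assumptions):

    def depth_limited_dfs(node, limit):
        """DFS that gives up below `limit`; returns a goal node or None."""
        if node.is_goal():
            return node
        if limit == 0:
            return None
        for child in node.children():
            found = depth_limited_dfs(child, limit - 1)
            if found is not None:
                return found
        return None

    def iterative_deepening(start, max_depth=50):
        """Run depth-limited DFS with limits 1, 2, 3, ...: DFS-like space, BFS-like shallow solutions."""
        for limit in range(1, max_depth + 1):
            found = depth_limited_dfs(start, limit)
            if found is not None:
                return found
        return None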

Finding a Least-Cost Path [Figure: an example state space graph with step costs on the edges.] BFS finds the shortest path in terms of number of actions. It does not find the least-cost path. We will now cover a similar algorithm which does find the least-cost path.

Uniform Cost Search

Uniform Cost Search Strategy: expand a cheapest node first. Implementation: the frontier is a priority queue (priority: cumulative cost). [Figure: an example state space graph with step costs, the corresponding search tree labeled with cumulative costs, and the resulting cost contours.]
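A minimal Python sketch of UCS using heapq as the priority queue, written against the illustrative SearchProblem/Node sketches above (a tie-breaking counter is included only because heapq needs comparable entries):

    import heapq, itertools

    def uniform_cost_search(problem):
        """Expand the cheapest frontier node first; priority = cumulative path cost."""
        counter = itertools.count()                    # tie-breaker so heapq never compares Nodes
        start = Node(problem.start_state())
        frontier = [(start.path_cost, next(counter), start)]
        while frontier:
            cost, _, node = heapq.heappop(frontier)    # cheapest cumulative cost so far
            if problem.is_goal(node.state):
                return node.solution()
            for action in problem.actions(node.state):
                child = node.child(problem, action)
                heapq.heappush(frontier, (child.path_cost, next(counter), child))
        return None

On the Romania fragment sketched earlier, for example, this would return the route through Rimnicu Vilcea and Pitesti (total cost 418) rather than the route through Fagaras (total cost 450).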

Uniform Cost Search (UCS) Properties What nodes does UCS expand? It processes all nodes with cost less than the cheapest solution! If that solution costs C* and arcs cost at least ε, then the “effective depth” is roughly C*/ε, so it takes time O(b^(C*/ε)) (exponential in the effective depth). How much space does the frontier take? Roughly the last tier, so O(b^(C*/ε)). Is it complete? Assuming the best solution has a finite cost and the minimum arc cost is positive, yes! Is it optimal? Yes! (Proof next lecture via A*)

Uniform Cost Issues Remember: UCS explores increasing cost contours. The good: UCS is complete and optimal! The bad: it explores options in every “direction” and has no information about the goal location. We’ll fix that soon! [Figure: cost contours expanding outward from Start in all directions toward Goal.]

Video of Demo Empty UCS

Video of Demo Maze with Deep/Shallow Water --- DFS, BFS, or UCS? (part 1) Let students guess which one of the three it is. (This is BFS.)

Video of Demo Maze with Deep/Shallow Water --- DFS, BFS, or UCS? (part 2) Let students guess which one of the three it is. (This is UCS.)

Video of Demo Maze with Deep/Shallow Water --- DFS, BFS, or UCS? (part 3) Let students guess which one of the three it is. (This is DFS.)

Tree Search vs Graph Search

Tree Search: Extra Work! Failure to detect repeated states can cause exponentially more work. [Figure: a state graph with O(m) states whose corresponding search tree has O(2^m) nodes.]

Tree Search: Extra Work! Failure to detect repeated states can cause exponentially more work. [Figure: a grid-world state graph with O(2m^2) states whose corresponding search tree has O(4^m) nodes. For m = 20: 800 states in the graph vs. 1,099,511,627,776 nodes in the tree.]
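The blowup is easy to reproduce on a small illustrative graph (not the slide’s exact figure): a chain of m “diamonds”, where every junction has two routes to the next junction, has O(m) states but an exponentially large search tree.

    def diamond_succ(m):
        """Successor function for a chain of m diamonds (two parallel routes per link)."""
        def succ(state):
            kind, i = state[0], state[1]
            if kind == 'J' and i < m:            # junction: two ways to the next junction
                return [('M', i, 0), ('M', i, 1)]
            if kind == 'M':                      # middle node: both routes rejoin the chain
                return [('J', i + 1)]
            return []                            # last junction: no successors
        return succ

    def tree_size(state, succ):
        """Number of nodes in the full search tree rooted at `state` (no repeated-state check)."""
        return 1 + sum(tree_size(s, succ) for s in succ(state))

    m = 15
    succ = diamond_succ(m)
    print(3 * m + 1)                             # 46 states in the graph
    print(tree_size(('J', 0), succ))             # 131,069 nodes in the search tree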

General Tree Search
function TREE-SEARCH(problem) returns a solution, or failure
    initialize the frontier using the initial state of problem
    loop do
        if the frontier is empty then return failure
        choose a leaf node and remove it from the frontier
        if the node contains a goal state then return the corresponding solution
        expand the chosen node, adding each child to the frontier

General Graph Search
function GRAPH-SEARCH(problem) returns a solution, or failure
    initialize the frontier using the initial state of problem
    initialize the explored set to be empty
    loop do
        if the frontier is empty then return failure
        choose a leaf node and remove it from the frontier
        if the node contains a goal state then return the corresponding solution
        add the node to the explored set
        expand the chosen node, adding each child to the frontier,
            but only if the child is not already in the frontier or explored set
Theorem: each state appears at most once in the search tree constructed.
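A minimal Python sketch of the same loop with an explored set, in breadth-first order and using the illustrative Node/SearchProblem sketches above:

    from collections import deque

    def graph_search_bfs(problem):
        """GRAPH-SEARCH with a FIFO frontier: each state is expanded at most once."""
        start = Node(problem.start_state())
        frontier = deque([start])
        in_frontier = {start.state}                  # states currently on the frontier
        explored = set()                             # states already expanded
        while frontier:
            node = frontier.popleft()
            in_frontier.discard(node.state)
            if problem.is_goal(node.state):
                return node.solution()
            explored.add(node.state)
            for action in problem.actions(node.state):
                child = node.child(problem, action)
                if child.state not in explored and child.state not in in_frontier:
                    frontier.append(child)
                    in_frontier.add(child.state)
        return None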

Tree Search vs Graph Search Graph search avoids infinite loops (m is finite in a finite state space) and eliminates exponentially many redundant paths, but it requires memory proportional to its runtime!

Search Gone Wrong?