CS 540 - Fall 2016 (Shavlik©), Lecture 9, Week 5


Today's Topics
- Tradeoffs in BFS, DFS, and BEST
- Dealing with Large OPEN and CLOSED
- A Clever Combo: Iterative Deepening
- Beam Search (BEAM)
- Hill Climbing (HC)
- HC with Multiple Restarts
- Simulated Annealing (SA)

DFS Trace (remember: we fill out line n+1 while working on line n)

Step#  OPEN              CLOSED                 X     CHILDREN            Remaining CHILDREN
1      { S }             { }                    S     { S^S, B^S, C^S }   { B^S, C^S }
2      { B^S, C^S }      { S }                  B^S   { D^B }             { D^B }
3      { D^B, C^S }      { S, B^S }             D^B   { C^D, E^D }        { E^D }
4      { E^D, C^S }      { S, B^S, D^B }        E^D   { G^E }             { G^E }
5      { G^E, C^S }      { S, B^S, D^B, E^D }   G^E   DONE

Notice we did not get the shortest path here (BFS did, though).
We might want to also record the PARENT of each node reached, so we can easily extract the path from START to GOAL. This was done in the table above using superscripts.
NOTE: THIS SLIDE WAS ADDED TO LECTURE 8

Tradeoffs (d = tree depth, b = branching factor)

Breadth
  Positives: guaranteed to find a solution if one exists (all possible solutions at each depth are generated before the depth increases); finds the shortest path (in # arcs traversed)
  Negatives: OPEN can become big, O(b^d); can be slow

Depth
  Positives: OPEN grows slower, O(b·d); might find a long solution quickly
  Negatives: might not find the shortest solution path; can get stuck in infinite spaces

Best
  Positives: provides a means of using domain knowledge
  Negatives: requires a good heuristic function

Memory Needs for OPEN (d = tree depth, b = branching factor)

[Figure: two search trees with the nodes currently in OPEN highlighted in yellow]
Breadth: OPEN holds an entire level at a time - 1, b, b^2, b^3, ..., b^d nodes - so O(b^d).
Depth: each level contributes b-1 nodes to OPEN (the deepest level contributes b), so O(b·d).

DFS with Iterative Deepening

Combines strengths of BFS and DFS.

Algorithm (Fig 3.18):
  let k = 0
  loop
    let OPEN = { startNode }         // don't use CLOSED (the depth limit handles infinite loops)
    do DFS, but limit the depth to k // if depth = k, don't generate children
    if a goal node was found, return the solution
    else if the depth bound k was never reached, return FAIL  // searched the finite space fully
    else increment k
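Below is a minimal Python sketch of this loop. The helpers children(node) (returns a node's successors) and goal_test(node) are hypothetical names used for illustration; the depth-limited pass also reports whether the bound was ever hit, so the outer loop can tell when a finite space has been searched fully.

```python
def depth_limited_dfs(start, goal_test, children, limit):
    """DFS from start, never going below depth `limit`.
    Returns (solution path or None, whether the depth bound was ever hit)."""
    hit_bound = False
    stack = [(start, [start], 0)]            # (node, path from start, depth)
    while stack:
        node, path, depth = stack.pop()
        if goal_test(node):
            return path, hit_bound
        if depth == limit:                   # at the bound: don't generate children
            hit_bound = True
            continue
        for child in children(node):
            stack.append((child, path + [child], depth + 1))
    return None, hit_bound

def iterative_deepening_dfs(start, goal_test, children):
    """Repeat depth-limited DFS with k = 0, 1, 2, ...; no CLOSED list is kept."""
    k = 0
    while True:
        solution, hit_bound = depth_limited_dfs(start, goal_test, children, k)
        if solution is not None:
            return solution
        if not hit_bound:                    # the whole (finite) space was searched
            return None                      # FAIL
        k += 1
```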

Iterative Deepening Visualized

See Figure 3.19 of the text (done on the blackboard).
WE SAVE NO INFORMATION BETWEEN ITERATIONS OF THE LOOP!
RECOMPUTE rather than STORE (a space-time tradeoff, common in CS).
At first glance this seems stupid, but ...

Computing the Excess Work

Number of nodes generated in a depth-limited search to depth d with branching factor b:
  totalWork(d, b) = b^0 + b^1 + b^2 + ... + b^(d-2) + b^(d-1) + b^d

Number of nodes generated in an iterative-deepening search:
  iterDeep_totalWork(d, b) = Σ totalWork(i, b), for i from 0 to d

Example: Computing the Excess Work

totalWork(5, 10) = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111

iterDeep_totalWork(5, 10)
  = totalWork(0, 10) + totalWork(1, 10) + totalWork(2, 10) + totalWork(3, 10) + totalWork(4, 10) + totalWork(5, 10)
  = 1 + 11 + 111 + 1,111 + 11,111 + 111,111
  = 123,456

Excess work = (123,456 - 111,111) / 111,111 ≈ 11% - not bad!
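A few lines of Python reproduce these numbers (the function names are the slide's, snake_cased here):

```python
def total_work(d, b):
    # nodes generated by a depth-limited search to depth d: b^0 + b^1 + ... + b^d
    return sum(b ** i for i in range(d + 1))

def iter_deep_total_work(d, b):
    # one depth-limited pass for every bound k = 0 .. d
    return sum(total_work(i, b) for i in range(d + 1))

print(total_work(5, 10))                          # 111111
print(iter_deep_total_work(5, 10))                # 123456
excess = iter_deep_total_work(5, 10) - total_work(5, 10)
print(excess / float(total_work(5, 10)))          # ~0.111, i.e. about 11% extra work
```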

BEAM Search: Another Way to Deal with Large Spaces

Simple idea: never let OPEN get larger than some constant, called the 'beam width'.
Insert children into OPEN, then reduce OPEN to size 'beam width'
  (only one new line is needed in our basic search code:
   open <- discardFromBackEndIfTooLong(open, beamWidth) )
Makes the most sense with BEST-first search, since the most promising nodes are at the front of OPEN.
The above is a variation of what the text calls "local beam search" (pp. 125-126).
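A rough sketch of how that one extra line fits into a best-first loop. The heuristic h(node) (smaller is better) and the children/goal_test helpers are hypothetical; the slide's discardFromBackEndIfTooLong is played here by heapq.nsmallest.

```python
import heapq, itertools

def beam_search(start, goal_test, children, h, beam_width):
    """Best-first search whose OPEN list is cut back to beam_width after each expansion."""
    tie = itertools.count()                       # tie-breaker so nodes themselves are never compared
    open_list = [(h(start), next(tie), start)]
    closed = set()
    while open_list:
        _, _, node = heapq.heappop(open_list)     # most promising node is at the front of OPEN
        if goal_test(node):
            return node
        closed.add(node)
        for child in children(node):
            if child not in closed:
                heapq.heappush(open_list, (h(child), next(tie), child))
        # the one new line: discard from the back end if OPEN got too long
        open_list = heapq.nsmallest(beam_width, open_list)
    return None
```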

What if No Explicit Goal Test?

Sometimes we don't have an explicit description of the GOAL.
If so, we simply aim to maximize (or minimize) the scoring function.
Eg, design a factory with maximal expected profit, or with the cheapest assembly cost.
When there is no explicit goal, assume goal?(X) always returns FALSE.

Hill Climbing (HC)

Only keep ONE child in OPEN, and ONLY IF that child has a better score than the current node (recall the 'greedy' d-tree pruning algo).
Like BEAM with beam width = 1, but don't keep nodes worse than the current one.
Will stop at a LOCAL maximum rather than the GLOBAL one.
Sometimes we do 'valley [gradient] descending' if lower scores are better, but the ideas are identical.
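A minimal sketch of hill climbing as described above, assuming hypothetical score(node) (higher is better) and neighbors(node) helpers:

```python
def hill_climb(start, score, neighbors):
    """Greedy local search: move to the best-scoring neighbor, but only if it
    beats the current node; otherwise stop (we are at a local maximum)."""
    current = start
    while True:
        succ = neighbors(current)
        if not succ:
            return current
        best = max(succ, key=score)
        if score(best) <= score(current):   # no uphill move available
            return current
        current = best
```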

HC with Multiple Restarts (simple but effective, plus runs in parallel)

For some tasks, we can start in various initial states (eg, slide the 8-puzzle pieces randomly for 100 moves).

Repeat N times
  Choose a random initial state
  Do HC; record the score and final state if best so far
Return the best state found

[Figure: score vs. state space, with the local maxima reached from different restarts marked with x's]
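A sketch of the restart loop, reusing the hill_climb function above and assuming a hypothetical random_start() that produces a random initial state:

```python
def hill_climb_with_restarts(random_start, score, neighbors, n_restarts):
    """Run hill climbing from n_restarts random initial states; keep the best result."""
    best_state, best_score = None, float('-inf')
    for _ in range(n_restarts):
        state = hill_climb(random_start(), score, neighbors)
        if score(state) > best_score:
            best_state, best_score = state, score(state)
    return best_state
```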

Simulated Annealing (SA)

HC, but sometimes allow downhill moves. Over time, the probability of allowing downhill moves gets smaller.

  Let Temperature = 100
  X = StartNode                      // call the current node X for short
  LOOP
    If X is a goal node or Temperature = 0, return X
    Randomly choose a neighbor, Y
    If score(Y) > score(X), move to Y                          // accept, since uphill
    Else with prob e^((score(Y) - score(X)) / Temperature), go to Y
    Reduce Temperature                                         // need to choose a 'cooling schedule'
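A sketch of that loop in Python, with the same hypothetical score/neighbors/goal_test helpers as above. The geometric cooling schedule (t *= cooling) and the min_temp stopping point are illustrative choices, not prescribed by the slide.

```python
import math, random

def simulated_annealing(start, score, neighbors, goal_test,
                        temperature=100.0, cooling=0.99, min_temp=1e-3):
    """Hill climbing that sometimes accepts downhill moves, with probability
    exp((score(Y) - score(X)) / Temperature); the chance shrinks as it cools."""
    x, t = start, temperature
    while t > min_temp:
        if goal_test(x):
            return x
        y = random.choice(neighbors(x))    # randomly choose a neighbor Y
        delta = score(y) - score(x)
        if delta > 0 or random.random() < math.exp(delta / t):
            x = y                          # uphill always accepted; downhill sometimes
        t *= cooling                       # one common 'cooling schedule' (illustrative)
    return x
```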

SA Example (let Temp = 10; scores are NEGATED since originally lower was better)

  Start: score = -9    B: score = -11    C: score = -8    D: score = -4

Assume we are at Start and randomly choose B. What is the probability we move to B?
  Prob = e^((-11 - (-9))/10) = e^(-0.2) ≈ 0.82

Assume we are at Start and randomly choose C. What is the probability we move to C?
  Prob = 1.0, since this is an UPHILL (ie, good) move.
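A one-line sanity check of the arithmetic above:

```python
import math

# probability of moving Start (-9) -> B (-11) at Temp = 10
print(math.exp((-11 - (-9)) / 10.0))   # 0.8187..., i.e. about 0.82
# Start (-9) -> C (-8) is uphill, so it is accepted with probability 1.0
```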

Case Analysis of SA

Temp >> |score(Y) - score(X)|:  prob ≈ e^0 = 1, so most moves are accepted when the temperature is high
Temp ≈  |score(Y) - score(X)|:  prob ≈ e^(-1) = 0.37     // since score(Y) < score(X)
Temp << |score(Y) - score(X)|:  prob ≈ e^(-∞) = 0, so few moves are accepted when the temperature is low

Dealing with Large OPEN Lists

Iterative Deepening
  Keeps OPEN small by doing repeated work
  Still finds the shortest solution (in # arcs)
BEAM
  Limits OPEN to a max size (the beam width)
  Might discard the best (or only!) solution
HC (Hill Climbing)
  Only moves to better-scoring nodes
  Good choice when there is no explicit GOAL test
  Stops at a local (rather than global) optimum
HC with Random Restarts
  K times: start in a random state, then go uphill; keep the best
  Might not find the global optimum, but works well in practice
SA (Simulated Annealing)
  Always accepts good moves; with some probability makes a bad move
  In the theoretical limit, finds the global optimum

If OPEN Can Get Too Large, What about CLOSED?

If the branching factor is b, we add b items to OPEN whenever one item is moved from OPEN to CLOSED - so OPEN grows faster.
Items in CLOSED can be hashed, approximated, etc, while items in OPEN need to store more info.
But CLOSED growing too large can still be a problem, so often it isn't used and we live with some repeated work and the risk of infinite loops.
CLOSED is not needed in Iterative Deepening and HC - WHY?

(Partial) Wrap Up

Search spaces can grow rapidly.
Sometimes we give up on optimality and seek satisfactory solutions.
But sometimes we're unaware of more powerful search methods and are too simplistic!
Various 'engineering tradeoffs' exist, and the best design is problem specific.
As technology changes, the choices change.