
1 CS 188: Artificial Intelligence Fall 2009
Lecture 6: Adversarial Search 9/15/2009 Dan Klein – UC Berkeley Many slides over the course adapted from either Stuart Russell or Andrew Moore

2 Announcements Written 1 is up (Search and CSPs)
Project 2 will be up soon (Multi-Agent Pacman) Other announcements: none yet

3 Today Finish up Search and CSPs Start on Adversarial Search

4 Tree-Structured CSPs Theorem: if the constraint graph has no loops, the CSP can be solved in O(n d^2) time Compare to general CSPs, where worst-case time is O(d^n) This property also applies to probabilistic reasoning (later): an important example of the relation between syntactic restrictions and the complexity of reasoning.

5 Tree-Structured CSPs Choose a variable as root, order
variables from root to leaves such that every node’s parent precedes it in the ordering For i = n : 2, apply RemoveInconsistent(Parent(Xi), Xi) For i = 1 : n, assign Xi consistently with Parent(Xi) Runtime: O(n d^2) (why?)
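A minimal Python sketch of this two-pass algorithm; the function name, the root-to-leaves ordering layout, and the consistent constraint-check helper are illustrative assumptions rather than anything from the slides:

    def solve_tree_csp(order, parent, domains, consistent):
        """order: variables root-to-leaves; parent[x]: x's parent (None for root);
        domains: dict var -> list of values; consistent(pv, cv): constraint check."""
        # Backward pass (i = n : 2): RemoveInconsistent(Parent(Xi), Xi)
        for child in reversed(order[1:]):
            p = parent[child]
            domains[p] = [pv for pv in domains[p]
                          if any(consistent(pv, cv) for cv in domains[child])]
            if not domains[p]:
                return None  # a domain emptied: no solution
        # Forward pass (i = 1 : n): assign Xi consistently with Parent(Xi)
        assignment = {order[0]: domains[order[0]][0]}
        for child in order[1:]:
            pv = assignment[parent[child]]
            assignment[child] = next(cv for cv in domains[child]
                                     if consistent(pv, cv))
        return assignment

Each backward step compares at most d parent values against d child values, giving the O(n d^2) bound; the forward pass never backtracks, which is exactly the claim proved on the next slide.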

6 Tree-Structured CSPs Why does this work?
Claim: After each node is processed leftward, all nodes to the right can be assigned in any way consistent with their parent. Proof: Induction on position Why doesn’t this algorithm work with loops? Note: we’ll see this basic idea again with Bayes’ nets

7 Nearly Tree-Structured CSPs
Conditioning: instantiate a variable, prune its neighbors' domains Cutset conditioning: instantiate (in all ways) a set of variables such that the remaining constraint graph is a tree Cutset size c gives runtime O(d^c (n-c) d^2), very fast for small c

8 Tree Decompositions*
Create a tree-structured graph of overlapping subproblems, each is a mega-variable Solve each subproblem to enforce local constraints Solve the CSP over subproblem mega-variables using our efficient tree-structured CSP algorithm [Figure: Australia-map CSP decomposed into overlapping mega-variables (M1 covering {WA, SA, NT}, M2 {NT, SA, Q}, M3 {Q, SA, NSW}, …), linked by "agree on shared vars" constraints; mega-variable domains are tuples like {(WA=r,SA=g,NT=b), (WA=b,SA=r,NT=g), …} and {(NT=r,SA=g,Q=b), (NT=b,SA=g,Q=r), …}, and Agree(M1,M2) contains pairs like ((WA=g,SA=g,NT=g), (NT=g,SA=g,Q=g))]

9 Iterative Algorithms for CSPs
Local search methods: typically work with “complete” states, i.e., all variables assigned To apply to CSPs: Start with some assignment with unsatisfied constraints Operators reassign variable values No fringe! Live on the edge. Variable selection: randomly select any conflicted variable Value selection by min-conflicts heuristic: Choose value that violates the fewest constraints I.e., hill climb with h(n) = total number of violated constraints
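A compact Python sketch of min-conflicts on n-queens (the function name, the queens-by-column encoding, and the arbitrary tie-breaking are illustrative assumptions; randomized tie-breaking is also common):

    import random

    def min_conflicts(n, max_steps=100_000):
        # cols[i] = row of the queen in column i; start from a random complete assignment
        cols = [random.randrange(n) for _ in range(n)]
        def conflicts(col, row):
            # number of other queens attacking a queen placed at (col, row)
            return sum(1 for c in range(n) if c != col and
                       (cols[c] == row or abs(cols[c] - row) == abs(c - col)))
        for _ in range(max_steps):
            conflicted = [c for c in range(n) if conflicts(c, cols[c]) > 0]
            if not conflicted:
                return cols                                   # all constraints satisfied
            col = random.choice(conflicted)                   # random conflicted variable
            cols[col] = min(range(n), key=lambda r: conflicts(col, r))  # min-conflicts value
        return None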

10 Example: 4-Queens States: 4 queens in 4 columns (4^4 = 256 states)
Operators: move queen in column Goal test: no attacks Evaluation: c(n) = number of attacks [DEMO]

11 Performance of Min-Conflicts
Given random initial state, can solve n-queens in almost constant time for arbitrary n with high probability (e.g., n = 10,000,000) The same appears to be true for any randomly-generated CSP except in a narrow range of the ratio R = (number of constraints) / (number of variables)

12 Hill Climbing Simple, general idea:
Start wherever Always choose the best neighbor If no neighbors have better scores than current, quit Why can this be a terrible idea? Complete? Optimal? What’s good about it?

13 Hill Climbing Diagram Random restarts? Random sideways steps?

14 Simulated Annealing Idea: Escape local maxima by allowing downhill moves But make them rarer as time goes on

15 Summary CSPs are a special kind of search problem:
States defined by values of a fixed set of variables Goal test defined by constraints on variable values Backtracking = depth-first search with incremental constraint checks Ordering: variable and value choice heuristics help significantly Filtering: forward checking, arc consistency prevent assignments that guarantee later failure Structure: Disconnected and tree-structured CSPs are efficient Iterative improvement: min-conflicts is usually effective in practice

16 Game Playing State-of-the-Art
Checkers: Chinook ended 40-year-reign of human world champion Marion Tinsley in 1994. Used an endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 443,748,401,247 positions. Checkers is now solved! Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue examined 200 million positions per second, used very sophisticated evaluation and undisclosed methods for extending some lines of search up to 40 ply. Current programs are even better, if less historic. Othello: Human champions refuse to compete against computers, which are too good. Go: Human champions are beginning to be challenged by machines, though the best humans still beat the best machines. In go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves, along with aggressive pruning. Pacman: unknown

17 GamesCrafters

18 Adversarial Search [DEMO: mystery pacman]

19 Game Playing Many different kinds of games! Axes:
Deterministic or stochastic? One, two, or more players? Perfect information (can you see the state)? Want algorithms for calculating a strategy (policy) which recommends a move in each state

20 Deterministic Games Many possible formalizations, one is:
States: S (start at s0) Players: P = {1...N} (usually take turns) Actions: A (may depend on player / state) Transition Function: S × A → S Terminal Test: S → {t, f} Terminal Utilities: S × P → R Solution for a player is a policy: S → A
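Rendered as code, the same formalization might look like this minimal Python protocol (all class and method names here are illustrative assumptions, not an API from the course):

    from typing import Protocol, List, Hashable

    State = Hashable
    Action = Hashable

    class Game(Protocol):
        def start_state(self) -> State: ...                 # s0
        def player(self, s: State) -> int: ...              # whose turn (1..N)
        def actions(self, s: State) -> List[Action]: ...    # legal actions in s
        def result(self, s: State, a: Action) -> State: ... # transition: S x A -> S
        def is_terminal(self, s: State) -> bool: ...        # terminal test: S -> {t, f}
        def utility(self, s: State, p: int) -> float: ...   # terminal utilities: S x P -> R

The later search sketches in this transcript are written against this hypothetical interface.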

21 Deterministic Single-Player?
Deterministic, single player, perfect information: Know the rules Know what actions do Know when you win E.g. Freecell, 8-Puzzle, Rubik’s cube … it’s just search! Slight reinterpretation: Each node stores a value: the best outcome it can reach This is the maximal outcome of its children (the max value) Note that we don’t have path sums as before (utilities at end) After search, can pick move that leads to best node [Figure: search tree with terminal nodes labeled win / lose]

22 Deterministic Two-Player
E.g. tic-tac-toe, chess, checkers Zero-sum games One player maximizes result The other minimizes result Minimax search A state-space search tree Players alternate Each layer, or ply, consists of a round of moves* Choose move to position with highest minimax value = best achievable utility against best play [Figure: two-ply max/min tree with leaf values 8, 2, 5, 6] * Slightly different from the book definition

23 Tic-tac-toe Game Tree

24 Minimax Example

25 Minimax Search
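A standard recursive rendering of minimax in Python, reusing the hypothetical Game protocol sketched earlier (a sketch of the well-known recursion, not the slide's own pseudocode):

    def minimax_value(game, s):
        if game.is_terminal(s):
            return game.utility(s, 1)          # utilities from MAX's (player 1's) view
        values = [minimax_value(game, game.result(s, a)) for a in game.actions(s)]
        return max(values) if game.player(s) == 1 else min(values)

    def minimax_decision(game, s):
        # MAX picks the action leading to the successor with the highest minimax value
        return max(game.actions(s),
                   key=lambda a: minimax_value(game, game.result(s, a)))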

26 Minimax Properties Optimal against a perfect player. Otherwise?
Time complexity? O(b^m) Space complexity? O(bm) For chess, b ≈ 35, m ≈ 100 Exact solution is completely infeasible But, do we need to explore the whole tree? [Figure: max/min tree with leaf values 10, 10, 9, 100] [DEMO: minVsExp]

27 Resource Limits Cannot search to leaves Depth-limited search
Instead, search a limited depth of tree Replace terminal utilities with an eval function for non-terminal positions Guarantee of optimal play is gone More plies makes a BIG difference [DEMO: limitedDepth] Example: Suppose we have 100 seconds and can explore 10K nodes/sec, so we can check 1M nodes per move - reaches about depth 8: a decent chess program
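A depth-limited variant of the minimax sketch above, swapping terminal utilities for an evaluation function once the depth budget runs out (the depth and eval_fn parameters are illustrative):

    def depth_limited_value(game, s, depth, eval_fn):
        if game.is_terminal(s):
            return game.utility(s, 1)
        if depth == 0:
            return eval_fn(s)                  # heuristic score for a non-terminal position
        values = [depth_limited_value(game, game.result(s, a), depth - 1, eval_fn)
                  for a in game.actions(s)]
        return max(values) if game.player(s) == 1 else min(values)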

28 Evaluation Functions Function which scores non-terminals
Ideal function: returns the utility of the position In practice: typically a weighted linear sum of features: Eval(s) = w1 f1(s) + w2 f2(s) + … + wn fn(s), e.g. f1(s) = (num white queens – num black queens), etc.
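As a concrete sketch of a weighted linear evaluation (the string board representation, the material features, and the weights below are invented for illustration, not from the slides):

    def eval_fn(s, weights, features):
        # weighted linear sum: Eval(s) = w1*f1(s) + ... + wn*fn(s)
        return sum(w * f(s) for w, f in zip(weights, features))

    # Example features: material differences from White's point of view,
    # assuming s is a string of piece letters (uppercase = White, lowercase = Black)
    features = [
        lambda s: s.count('Q') - s.count('q'),   # queen difference
        lambda s: s.count('R') - s.count('r'),   # rook difference
        lambda s: s.count('P') - s.count('p'),   # pawn difference
    ]
    weights = [9.0, 5.0, 1.0]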

29 Evaluation for Pacman [DEMO: thrashing, smart ghosts]

30 Why Pacman Starves He knows his score will go up by eating the dot now
He knows his score will go up just as much by eating the dot later on There are no point-scoring opportunities after eating the dot Therefore, waiting seems just as good as eating

31 [Figure-only slide; content not captured in the transcript]

32 Iterative Deepening Iterative deepening uses DFS as a subroutine:
Do a DFS which only searches for paths of length 1 or less. (DFS gives up on any path of length 2) If “1” failed, do a DFS which only searches paths of length 2 or less. If “2” failed, do a DFS which only searches paths of length 3 or less. …and so on. Why do we want to do this for multiplayer games?
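One hedged sketch of the pattern in a game setting, reusing the depth_limited_value function sketched earlier and keeping the best move from the deepest completed iteration (the anytime loop is the usual motivation for games):

    def iterative_deepening_decision(game, s, max_depth, eval_fn):
        best = None
        for depth in range(1, max_depth + 1):      # deepen one ply at a time
            best = max(game.actions(s),
                       key=lambda a: depth_limited_value(game, game.result(s, a),
                                                         depth - 1, eval_fn))
            # A real engine would stop here when the time budget runs out,
            # returning the best move from the deepest completed iteration.
        return best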

33 Pruning in Minimax Search
[3,+] [3,14] [3,5] [3,3] [-,+] 3 2 14 [3,3] [-,3] [-,2] [-,14] [2,2] [-,5] 12 8 5 2

34 α-β Pruning Example

35 α-β Pruning General configuration
α is the best value that MAX can get at any choice point along the current path If n becomes worse than α, MAX will avoid it, so can stop considering n’s other children Define β similarly for MIN [Figure: alternating Player / Opponent layers along the current path, with node n several plies down]

36 α-β Pruning Pseudocode
[Pseudocode figure not captured in the transcript; only the value variable v is legible]
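A standard reconstruction of the α-β recursion in Python, in the style of the earlier sketches (a sketch of the well-known algorithm, not the slide's exact pseudocode):

    def alpha_beta_value(game, s, alpha=float('-inf'), beta=float('inf')):
        if game.is_terminal(s):
            return game.utility(s, 1)
        if game.player(s) == 1:                       # MAX node
            v = float('-inf')
            for a in game.actions(s):
                v = max(v, alpha_beta_value(game, game.result(s, a), alpha, beta))
                if v >= beta:
                    return v                          # MIN above will never allow this branch
                alpha = max(alpha, v)
            return v
        else:                                         # MIN node
            v = float('inf')
            for a in game.actions(s):
                v = min(v, alpha_beta_value(game, game.result(s, a), alpha, beta))
                if v <= alpha:
                    return v                          # MAX above will never allow this branch
                beta = min(beta, v)
            return v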

37 α-β Pruning Properties
Pruning has no effect on final result Good move ordering improves effectiveness of pruning With “perfect ordering”: Time complexity drops to O(b^(m/2)) Doubles solvable depth Full search of, e.g. chess, is still hopeless! A simple example of metareasoning, here reasoning about which computations are relevant

38 Non-Zero-Sum Games Similar to minimax: Utilities are now tuples
Each player maximizes their own entry at each node Propagate (or back up) values from children [Figure: game tree whose leaves carry utility triples such as (1,2,6), (4,3,2), (6,1,2), (7,4,1), (5,1,1), (1,5,2), (7,7,1), (5,4,5)]

39 Stochastic Single-Player
What if we don’t know what the result of an action will be? E.g., in solitaire, shuffle is unknown In minesweeper, mine locations In pacman, ghosts! Can do expectimax search Chance nodes, like actions except the environment controls the action chosen Calculate utility for each node Max nodes as in search Chance nodes take average (expectation) of value of children Later, we’ll learn how to formalize this as a Markov Decision Process [Figure: max node over chance nodes averaging leaf values 10, 4, 5, 7] [DEMO: minVsExp]
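A minimal expectimax sketch in the same style, assuming a hypothetical game.chance_outcomes(s) that yields (probability, successor) pairs at chance nodes:

    def expectimax_value(game, s):
        if game.is_terminal(s):
            return game.utility(s, 1)
        if game.player(s) == 1:                       # max node: the agent picks the best action
            return max(expectimax_value(game, game.result(s, a))
                       for a in game.actions(s))
        # chance node: the environment picks; take the expectation over outcomes
        return sum(p * expectimax_value(game, s2)
                   for p, s2 in game.chance_outcomes(s))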

40 Stochastic Two-Player
E.g. backgammon Expectiminimax (!) Environment is an extra player that moves after each agent Chance nodes take expectations, otherwise like minimax

41 Stochastic Two-Player
Dice rolls increase b: 21 possible rolls with 2 dice Backgammon ≈ 20 legal moves Depth 4 = 20 × (21 × 20)^3 ≈ 1.2 × 10^9 As depth increases, probability of reaching a given node shrinks So value of lookahead is diminished So limiting depth is less damaging But pruning is less possible… TDGammon uses depth-2 search + very good eval function + reinforcement learning: world-champion level play

42 What’s Next? Make sure you know what: Probabilities are
Expectations are Next topics: Dealing with uncertainty How to learn evaluation functions Markov Decision Processes

43 Local Search Methods Tree search keeps unexplored alternatives on the fringe (ensures completeness) Local search: improve what you have until you can’t make it better Generally much faster and more memory efficient (but incomplete)

44 Types of Search Problems
Planning problems: We want a path to a solution (examples?) Usually want an optimal path Incremental formulations Identification problems: We actually just want to know what the goal is (examples?) Usually want an optimal goal Complete-state formulations Iterative improvement algorithms

45 Simulated Annealing Theoretical guarantee:
Stationary distribution: p(x) ∝ e^(E(x)/kT) If T decreased slowly enough, will converge to optimal state! Is this an interesting guarantee? Sounds like magic, but reality is reality: The more downhill steps you need to escape, the less likely you are to ever make them all in a row People think hard about ridge operators which let you jump around the space in better ways
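A minimal simulated-annealing sketch, using the slides' maximization convention of escaping local maxima by occasionally accepting downhill moves (the neighbor function and the geometric cooling schedule are illustrative assumptions):

    import math, random

    def simulated_annealing(x0, energy, neighbor, t0=1.0, cooling=0.999, steps=100_000):
        x, t = x0, t0
        for _ in range(steps):
            x2 = neighbor(x)
            delta = energy(x2) - energy(x)        # > 0 means uphill (better, maximizing E)
            # Always accept uphill moves; accept downhill ones with probability e^(delta/t)
            if delta > 0 or random.random() < math.exp(delta / t):
                x = x2
            t *= cooling                          # slowly lower the temperature
        return x

As T shrinks, e^(delta/t) for a downhill delta goes to zero, which is why a long run of downhill escape steps becomes ever less likely, exactly as the slide warns.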

46 Genetic Algorithms Genetic algorithms use a natural selection metaphor
Like beam search (selection), but also have pairwise crossover operators, with optional mutation Probably the most misunderstood, misapplied (and even maligned) technique around!

47 Example: N-Queens Why does crossover make sense here?
When wouldn’t it make sense? What would mutation be? What would a good fitness function be?

