Quiz 4: CSP
- Tree-structured CSPs require only polynomial time. True
- If a CSP is not tree-structured, you can always transform it into a tree. True
- Through tree-conversion, NP-hard problems can be solved in polynomial time. False
- Arc consistency is identical to 2-consistency. True
- Arc consistency can be checked in O(nd²). False
- Arc consistency guarantees solvability. False
- Forward checking implies 2-consistency. False
- Local search with min-conflicts heuristics is complete. False
- More constraints can make CSPs easier. True
CSE 511a: Artificial Intelligence, Spring 2012
Lecture 6: Adversarial Search (2/4/2013)
Robert Pless
Directly adapted from a course by Kilian Weinberger; most slides originally from Dan Klein, Stuart Russell, or Andrew Moore.
Announcements
Project 1 is due Thursday at midnight!
Game Playing
Game Playing State-of-the-Art
- Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. It used an endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 443,748,401,247 positions. Checkers is now solved!
- Backgammon: The world champion was defeated in 1994 by TD-Gammon (Gerry Tesauro, IBM), the first completely learned approach (neural network).
- Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue examined 200 million positions per second and used very sophisticated evaluation plus undisclosed methods for extending some lines of search up to 40 ply. Current programs are even better, if less historic.
- Othello: Human champions refuse to compete against computers, which are too good.
- Go: Human champions are beginning to be challenged by machines, though the best humans still beat the best machines. In Go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves, along with aggressive pruning.
Adversarial Search [DEMO]
Game Playing
Many different kinds of games! Axes:
- Deterministic or stochastic?
- One, two, or more players?
- Perfect information (can you see the state)?
We want algorithms for calculating a strategy (policy) which recommends a move in each state.
Deterministic Games
Many possible formalizations; one is:
- States: S (start at s0)
- Players: P = {1...N} (usually take turns)
- Actions: A (may depend on player / state)
- Transition function: S × A → S
- Terminal test: S → {t, f}
- Terminal utilities: S × P → ℝ
A solution for a player is a policy: S → A.
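For concreteness, here is a minimal sketch of this formalization as a Python interface. The class and method names (Game, actions, result, utility, and so on) are illustrative assumptions for this writeup, not code from the course; later sketches below reuse them.

```python
# Minimal sketch of the deterministic-game formalization above.
# All names here (Game, actions, result, ...) are illustrative,
# not from the course codebase.

class Game:
    def initial_state(self):
        """Return the start state s0."""
        raise NotImplementedError

    def player(self, state):
        """Return which player in {1..N} moves in this state."""
        raise NotImplementedError

    def actions(self, state):
        """Return the actions available (may depend on player/state)."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition function S x A -> S."""
        raise NotImplementedError

    def is_terminal(self, state):
        """Terminal test S -> {t, f}."""
        raise NotImplementedError

    def utility(self, state, player):
        """Terminal utilities S x P -> R."""
        raise NotImplementedError
```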
Det. Games / Search
[Diagram: relating the problem classes of search problems, CSPs, and deterministic games]
Minimax
[Cartoon: Leslie Gilbert Illingworth, “The Daily Mail”, 1962]
Deterministic Single-Player?
Deterministic, single player, perfect information:
- Know the rules
- Know what actions do
- Know when you win
- E.g. Freecell, 8-Puzzle, Rubik’s cube
… it’s just search!
Slight reinterpretation:
- Each node stores a value: the best outcome it can reach
- This is the maximal outcome of its children (the max value)
- Note that we don’t have path sums as before (utilities at end)
After search, we can pick the move that leads to the best node.
[Diagram: search tree with win/lose outcomes at the leaves]
Deterministic Two-Player
E.g. tic-tac-toe, chess, checkers.
Zero-sum games:
- One player maximizes the result
- The other minimizes the result
Minimax search:
- A state-space search tree
- Players alternate
- Each layer, or ply, consists of a round of moves*
- Choose the move to the position with the highest minimax value = best achievable utility against best play
[Diagram: two-ply tree with max and min layers over leaf values]
* Slightly different from the book definition
Tic-tac-toe Game Tree
Minimax Example
Minimax Search
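The slide's content (presumably a minimax pseudocode figure) does not survive this transcription. Below is a standard minimax sketch in Python against the hypothetical Game interface above, assuming a zero-sum two-player game with MAX as player 1; it is a reconstruction of the idea, not the slide's exact code.

```python
# Minimal minimax sketch for a deterministic, zero-sum, two-player game.
# Assumes the hypothetical Game interface sketched earlier; MAX is player 1.

def minimax_value(game, state):
    """Return the minimax value of `state`: the best utility MAX can
    guarantee against optimal play by MIN."""
    if game.is_terminal(state):
        return game.utility(state, player=1)
    values = [minimax_value(game, game.result(state, a))
              for a in game.actions(state)]
    if game.player(state) == 1:      # MAX's turn: take the best child
        return max(values)
    else:                            # MIN's turn: take the worst child
        return min(values)

def minimax_decision(game, state):
    """Pick the action leading to the child with the highest minimax value."""
    return max(game.actions(state),
               key=lambda a: minimax_value(game, game.result(state, a)))
```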
Minimax Properties
- Optimal against a perfect player. Otherwise?
- Time complexity? O(b^m)
- Space complexity? O(bm)
- For chess, b ≈ 35, m ≈ 100: an exact solution is completely infeasible
- But, do we need to explore the whole tree?
Resource Limits
- Cannot search to the leaves
- Depth-limited search: instead, search a limited depth of the tree
- Replace terminal utilities with an eval function for non-terminal positions
- The guarantee of optimal play is gone
- More plies make a BIG difference
- Example: suppose we have 100 seconds and can explore 10K nodes / sec, so we can check 1M nodes per move; α-β reaches about depth 8 – a decent chess program
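A minimal sketch of depth-limited minimax with an evaluation cutoff, continuing the hypothetical Game interface from above; the evaluate parameter stands in for the eval function the slide describes.

```python
# Depth-limited minimax sketch: at the depth cutoff, an evaluation function
# stands in for the true terminal utility. Assumes the hypothetical Game
# interface from earlier; MAX is player 1.

def depth_limited_value(game, state, depth, evaluate):
    if game.is_terminal(state):
        return game.utility(state, player=1)
    if depth == 0:
        return evaluate(state)   # eval function replaces terminal utility
    values = [depth_limited_value(game, game.result(state, a), depth - 1, evaluate)
              for a in game.actions(state)]
    return max(values) if game.player(state) == 1 else min(values)
```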
Evaluation Functions
- A function which scores non-terminals
- Ideal function: returns the true utility of the position
- In practice: typically a weighted linear sum of features:
  Eval(s) = w1·f1(s) + w2·f2(s) + … + wn·fn(s)
- e.g. f1(s) = (num white queens – num black queens), etc.
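A tiny sketch of such a weighted linear evaluation; the feature functions and weights shown are made-up placeholders, not the course's Pacman or chess features.

```python
# Weighted linear evaluation sketch: Eval(s) = w1*f1(s) + ... + wn*fn(s).
# The example features and weights below are made-up placeholders.

def evaluate(state, features, weights):
    """Score a non-terminal state as a weighted sum of feature values."""
    return sum(w * f(state) for f, w in zip(features, weights))

# Hypothetical usage for chess-like features:
# features = [lambda s: s.num_white_queens - s.num_black_queens,
#             lambda s: s.white_mobility - s.black_mobility]
# weights  = [9.0, 0.1]
```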
Evaluation for Pacman
An interesting example …
[demo first]
- He knows his score will go up by eating the dot now
- He knows his score will go up just as much by eating the dot later on
- There are no point-scoring opportunities after eating the dot
- Therefore, waiting seems just as good as eating
Iterative Deepening
Iterative deepening uses DFS as a subroutine:
1. Do a DFS which only searches for paths of length 1 or less. (DFS gives up on any path of length 2.)
2. If "1" failed, do a DFS which only searches paths of length 2 or less.
3. If "2" failed, do a DFS which only searches paths of length 3 or less.
… and so on.
Why do we want to do this for multiplayer games?
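One way to see why this matters for games: wrapping the depth-limited search above in a deepening loop yields an anytime player that always has a best-move-so-far when time runs out. A sketch, with an assumed time budget and the illustrative names used earlier:

```python
import time

# Iterative deepening sketch for game search: rerun the depth-limited
# search one ply deeper each pass, keeping the move from the last
# completed pass. Uses the depth_limited_value sketch above; all names
# are illustrative assumptions.

def iterative_deepening_decision(game, state, evaluate, time_budget_s=1.0):
    deadline = time.time() + time_budget_s
    best_action, depth = None, 1
    while time.time() < deadline:
        best_action = max(game.actions(state),
                          key=lambda a: depth_limited_value(
                              game, game.result(state, a), depth - 1, evaluate))
        depth += 1
    return best_action
```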
α-β Tree-Pruning
α-β Pruning Example
Pruning in Minimax Search
[Diagram: minimax tree annotated with [α, β] intervals, starting at [-∞, +∞] at the root and narrowing as leaves are examined, e.g. [-∞, 3], [3, 3], [3, +∞], [3, 14], [3, 5], over leaf values 3, 12, 8, 2, 14, 5, 2]
α-β Pruning
General configuration:
- α is the best value that MAX can get at any choice point along the current path
- If n becomes worse than α, MAX will avoid it, so we can stop considering n's other children
- Define β similarly for MIN
[Diagram: alternating Player / Opponent layers, with node n several plies down]
α-β Pruning Pseudocode
[Slide figure: α-β pseudocode]
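Since the pseudocode itself is lost in this transcription, here is a standard α-β sketch under the same assumed Game interface; it mirrors the usual max-value/min-value formulation but is a reconstruction, not the slide's exact code.

```python
import math

# Standard alpha-beta pruning sketch (a reconstruction, not the slide's
# exact pseudocode). Uses the hypothetical Game interface from earlier;
# MAX is player 1.

def alphabeta_value(game, state, alpha=-math.inf, beta=math.inf):
    if game.is_terminal(state):
        return game.utility(state, player=1)
    if game.player(state) == 1:                    # MAX node
        v = -math.inf
        for a in game.actions(state):
            v = max(v, alphabeta_value(game, game.result(state, a), alpha, beta))
            if v >= beta:                          # MIN above will never allow this,
                return v                           # so prune the remaining children
            alpha = max(alpha, v)
        return v
    else:                                          # MIN node
        v = math.inf
        for a in game.actions(state):
            v = min(v, alphabeta_value(game, game.result(state, a), alpha, beta))
            if v <= alpha:                         # MAX above will never allow this
                return v
            beta = min(beta, v)
        return v
```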
α-β Pruning Properties
- Pruning has no effect on the final result
- Good move ordering improves the effectiveness of pruning
- With "perfect ordering", time complexity drops to O(b^(m/2)): this doubles the solvable depth
- Full search of, e.g., chess is still hopeless!
- A simple example of metareasoning: here, reasoning about which computations are relevant
[Demo: Minimax vs. AlphaBeta]
Non-Zero-Sum Games
Similar to minimax:
- Utilities are now tuples
- Each player maximizes their own entry at each node
- Propagate (or back up) nodes from children
[Diagram: game tree with 3-element utility tuples at the leaves, e.g. (1,2,6), (4,3,2), (6,1,2), (7,4,1), (5,1,1), (1,5,2), (7,7,1), (5,4,5)]
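A short sketch of this generalization (often called max^n); the tuple convention and the hypothetical Game interface are carried over from the earlier sketches, not from the lecture's code.

```python
# Max-n sketch for non-zero-sum games: utilities are tuples with one entry
# per player, and the player to move maximizes their own entry.
# Assumes the hypothetical Game interface from earlier; players are 1..N.

def maxn_value(game, state, num_players):
    """Return the backed-up utility tuple for `state`."""
    if game.is_terminal(state):
        return tuple(game.utility(state, p) for p in range(1, num_players + 1))
    mover = game.player(state)
    children = [maxn_value(game, game.result(state, a), num_players)
                for a in game.actions(state)]
    # The moving player picks the child that maximizes their own entry.
    return max(children, key=lambda tup: tup[mover - 1])
```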
What’s Next?
Make sure you know what:
- Probabilities are
- Expectations are
Next topics:
- Dealing with uncertainty
- How to learn evaluation functions
- Markov Decision Processes
Local Search Methods
- Tree search keeps unexplored alternatives on the fringe (ensures completeness)
- Local search: improve what you have until you can’t make it better
- Generally much faster and more memory-efficient (but incomplete)
Types of Search Problems
Planning problems:
- We want a path to a solution (examples?)
- Usually want an optimal path
- Incremental formulations
Identification problems:
- We actually just want to know what the goal is (examples?)
- Usually want an optimal goal
- Complete-state formulations
- Iterative improvement algorithms
Simulated Annealing
Theoretical guarantee: the stationary distribution is p(x) ∝ e^(E(x)/(kT)); if T is decreased slowly enough, the search will converge to the optimal state!
Is this an interesting guarantee?
Sounds like magic, but reality is reality:
- The more downhill steps you need to escape a local optimum, the less likely you are to ever make them all in a row
- People think hard about ridge operators which let you jump around the space in better ways
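A compact sketch of the annealing loop for a maximization problem; the neighbor and energy functions and the geometric cooling schedule are illustrative assumptions, not the lecture's specification.

```python
import math
import random

# Simulated annealing sketch (maximization). The neighbor/energy functions
# and the geometric cooling schedule are illustrative assumptions.

def simulated_annealing(initial, energy, neighbor, t0=1.0, cooling=0.995,
                        t_min=1e-4):
    current = initial
    t = t0
    while t > t_min:
        nxt = neighbor(current)
        delta = energy(nxt) - energy(current)
        # Always accept uphill moves; accept downhill moves with
        # probability e^(delta/T), which shrinks as T cools.
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling
    return current
```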
Genetic Algorithms
- Genetic algorithms use a natural-selection metaphor
- Like beam search (selection), but also with pairwise crossover operators and optional mutation
- Probably the most misunderstood, misapplied (and even maligned) technique around!
Example: N-Queens
- Why does crossover make sense here?
- When wouldn’t it make sense?
- What would mutation be?
- What would a good fitness function be?
(One possible setup is sketched below.)
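One possible set of answers, as a hedged sketch: boards are lists of queen rows per column, fitness counts non-attacking pairs, crossover splices two parents at a random cut, and mutation moves one queen. These are illustrative choices, not the lecture's prescribed setup.

```python
import random

# Genetic-algorithm sketch for N-Queens (an illustrative setup, not the
# lecture's prescribed one). A board is a list: board[col] = row of queen.

N = 8
MAX_PAIRS = N * (N - 1) // 2

def fitness(board):
    """Number of non-attacking queen pairs (MAX_PAIRS means solved)."""
    attacks = sum(1
                  for c1 in range(N) for c2 in range(c1 + 1, N)
                  if board[c1] == board[c2]                  # same row
                  or abs(board[c1] - board[c2]) == c2 - c1)  # same diagonal
    return MAX_PAIRS - attacks

def crossover(a, b):
    """Splice a random prefix of one parent onto a suffix of the other."""
    cut = random.randrange(1, N)
    return a[:cut] + b[cut:]

def mutate(board, rate=0.1):
    """With small probability, move one queen to a random row."""
    if random.random() < rate:
        board = board[:]
        board[random.randrange(N)] = random.randrange(N)
    return board

def genetic_search(pop_size=100, generations=1000):
    population = [[random.randrange(N) for _ in range(N)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == MAX_PAIRS:
            break
        parents = population[:pop_size // 2]     # selection (like beam search)
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)
```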