1
Explorations in Artificial Intelligence
Prof. Carla P. Gomes, gomes@cs.cornell.edu
Module 5: Adversarial Search
(Thanks, Meinolf Sellman!)
2
Games vs. search problems
"Unpredictable" opponent: the solution is a strategy specifying a move for every possible opponent reply
Time limits: unlikely to find the goal, must approximate
3
Game tree (2-player, deterministic, turns)
4
Minimax
Perfect play for deterministic games
Idea: choose the move to the position with the highest minimax value = best achievable payoff against best play
E.g., 2-ply game:
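As a small worked illustration of a 2-ply minimax computation (the leaf values below are made up for the example, not taken from the slides): MAX chooses one of three moves, MIN then replies, and MAX gets the best of the per-group minima.

```python
# Illustrative 2-ply game: MAX chooses among three moves, then MIN replies.
# Leaf values are made-up payoffs to MAX.
leaves = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]

# MIN picks the smallest value in each group, MAX the best of those minima.
minimax_value = max(min(group) for group in leaves)
print(minimax_value)  # -> 3
```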
5
Example
Your good friend and roommate suggests the following game to you:
– There are 8 coins: 7 quarters and one cent.
– You and he take turns taking away either 1, 2, or 3 coins.
– The one who takes the cent loses and has to do the dishes for the next year.
Do you accept if he "offers" to start? What if you may start?
6
Example
[Game-tree diagram for the coin game: nodes show the number of coins remaining (8 down to 1), and moves alternate between the two players (levels labelled F and Y).]
7
Example
[The same game-tree diagram, now with the levels labelled Min and Max and the leaf values filled in.]
8
MiniMax
The two-player game-tree evaluation on the previous slides is called a MiniMax evaluation. [Note that the previous graph denoted a compressed version of a tree!]
How can the MiniMax evaluation of a two-player game be computed? One way is by backtracking: on odd depths we return the maximum, and on even depths the minimum, of all child-node evaluations. At leaf nodes, we simply evaluate the game.
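As a concrete illustration, here is a minimal Python sketch of this backtracking evaluation for the coin game above. The function name and the +1/-1 payoff convention (from Max's point of view) are assumptions for the example, not code from the slides; note that taking the cent is only ever forced, so the game is equivalent to "whoever removes the last coin loses."

```python
def coin_game_value(coins, max_to_move=True):
    """Minimax value of the coin game, from Max's point of view.

    coins       -- number of coins still on the table
    max_to_move -- True if it is Max's turn to remove coins

    Returns +1 if Max wins with best play, -1 if Min wins.
    """
    if coins == 0:
        # The previous player just took the last coin (the cent) and lost,
        # so the player now to move is the winner.
        return +1 if max_to_move else -1

    # Try removing 1, 2, or 3 coins and evaluate the resulting positions.
    children = [coin_game_value(coins - take, not max_to_move)
                for take in (1, 2, 3) if take <= coins]

    # Backtracking rule from the slide: max on Max levels, min on Min levels.
    return max(children) if max_to_move else min(children)


# 8 coins on the table; the player who moves first is treated as Max here.
print(coin_game_value(8))   # -> 1: the first player can force a win
```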
9–28
MiniMax Algorithm
[Slides 9–28 step through the game-tree diagram for the coin game: starting from the leaves, backtracking fills in one node value at a time, alternating Min and Max levels, until every node up to the root has been evaluated.]
29
Minimax – Standard Notation
Max goes first; Min goes second. The leaves of a game tree represent the final positions of the game; we assign to each leaf a value representing the payoff of the first player (MAX).
Idea: choose the move to the position with the highest minimax value = best achievable payoff against best play
E.g., 2-ply game:
30
The value of a vertex in a game is defined recursively:
(1) The value of a leaf is the payoff of the first player when the game terminates in the position represented by the leaf;
(2) The value of an internal vertex is the MAXIMUM of the values of its children when it is the first player's (MAX's) turn, and the MINIMUM of the values of its children when it is the second player's (MIN's) turn.
Theorem: the value of a vertex of a game tells us the payoff to the first player if both players follow the minimax strategy and play starts from the position represented by this vertex.
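Written out in standard notation (a restatement of the definition above, where utility(v) denotes the leaf payoff and children(v) the positions reachable in one move):

```latex
\[
\mathrm{value}(v) =
\begin{cases}
\mathrm{utility}(v) & \text{if } v \text{ is a leaf},\\
\max_{c \in \mathrm{children}(v)} \mathrm{value}(c) & \text{if MAX moves at } v,\\
\min_{c \in \mathrm{children}(v)} \mathrm{value}(c) & \text{if MIN moves at } v.
\end{cases}
\]
```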
31
Minimax algorithm
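The slide's pseudocode figure is not reproduced in this transcript. The following is a minimal Python sketch in the spirit of the standard minimax decision procedure; the game interface (actions, result, is_terminal, utility) is an assumed, illustrative one, not taken from the slides.

```python
def minimax_decision(state, game):
    """Return the action for MAX with the highest minimax value."""
    return max(game.actions(state),
               key=lambda a: min_value(game.result(state, a), game))

def max_value(state, game):
    # MAX picks the child with the largest value.
    if game.is_terminal(state):
        return game.utility(state)
    return max(min_value(game.result(state, a), game)
               for a in game.actions(state))

def min_value(state, game):
    # MIN picks the child with the smallest value.
    if game.is_terminal(state):
        return game.utility(state)
    return min(max_value(game.result(state, a), game)
               for a in game.actions(state))
```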
32
Properties of minimax
Complete? Yes (if the tree is finite)
Optimal? Yes (against an optimal opponent)
Time complexity? O(b^m)
Space complexity? O(bm) (depth-first exploration)
For chess, b ≈ 35 and m ≈ 100 for "reasonable" games: an exact solution is completely infeasible
34
Alpha-Beta Pruning
Was all the work that we just did really necessary to compute the MiniMax evaluation of the root node? When can we save work? Let's do it again!
35
α-β pruning example
40–59
Alpha-Beta Pruning
[Slides 40–59 repeat the step-by-step evaluation of the coin-game tree by plain backtracking, filling in one node value at a time, as a starting point for asking where work could have been saved.]
60
Alpha-Beta Pruning
So far we just used slavish backtracking. Now let us assume that we have a heuristic at hand that lets us investigate more promising moves first. More promising means: for a node on a Min level, the move that leads to the lowest outcome; for a node on a Max level, the move that leads to the highest outcome.
61–66
Alpha-Beta Pruning
[Slides 61–66 re-evaluate the coin-game tree with the more promising moves examined first: large parts of the tree never need to be explored, because their values cannot change the result at the ancestors already evaluated.]
67
Properties of α-β
Pruning does not affect the final result
Good move ordering improves the effectiveness of pruning
With "perfect ordering," time complexity = O(b^(m/2)): this doubles the depth of search
A simple example of the value of reasoning about which computations are relevant (a form of metareasoning)
68
Why is it called α-β?
α is the value of the best (i.e., highest-value) choice found so far at any choice point along the path for max.
If v is worse than α, max will avoid it: prune that branch.
Define β similarly for min.
69
The α-β algorithm
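The slide's algorithm figure is likewise not reproduced here. Below is a minimal Python sketch of alpha-beta search, reusing the same assumed game interface (actions, result, is_terminal, utility) as the minimax sketch above; it is an illustrative rendering, not the slide's own pseudocode.

```python
def alpha_beta_decision(state, game):
    """Return the action for MAX chosen by alpha-beta search."""
    return max(game.actions(state),
               key=lambda a: min_value(game.result(state, a), game,
                                       float("-inf"), float("inf")))

def max_value(state, game, alpha, beta):
    if game.is_terminal(state):
        return game.utility(state)
    v = float("-inf")
    for a in game.actions(state):
        v = max(v, min_value(game.result(state, a), game, alpha, beta))
        if v >= beta:          # MIN above will never allow this branch
            return v           # prune the remaining siblings
        alpha = max(alpha, v)
    return v

def min_value(state, game, alpha, beta):
    if game.is_terminal(state):
        return game.utility(state)
    v = float("inf")
    for a in game.actions(state):
        v = min(v, max_value(game.result(state, a), game, alpha, beta))
        if v <= alpha:         # MAX above will never allow this branch
            return v
        beta = min(beta, v)
    return v
```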
71
Resource limits
Suppose we have 100 secs and explore 10^4 nodes/sec: that gives 10^6 nodes per move
Standard approach:
– cutoff test: e.g., depth limit (perhaps add quiescence search)
– evaluation function = estimated desirability of a position
72
Evaluation functions
For chess, typically a linear weighted sum of features:
Eval(s) = w1·f1(s) + w2·f2(s) + … + wn·fn(s)
e.g., w1 = 9 with f1(s) = (number of white queens) – (number of black queens), etc.
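For instance, a toy material-only evaluation might look like the following Python sketch. The weights are the classical material values; the function name and its input format are illustrative assumptions, not from the slides.

```python
# Classical material weights (queen=9, rook=5, bishop=knight=3, pawn=1).
WEIGHTS = {"Q": 9, "R": 5, "B": 3, "N": 3, "P": 1}

def evaluate(counts):
    """Weighted-sum evaluation of a position from White's point of view.

    counts -- dict mapping piece type to (white_count, black_count),
              e.g. {"Q": (1, 0), "R": (2, 2), ...}
    """
    return sum(w * (counts.get(p, (0, 0))[0] - counts.get(p, (0, 0))[1])
               for p, w in WEIGHTS.items())

# Example: White has an extra queen, everything else equal.
print(evaluate({"Q": (1, 0), "R": (2, 2), "B": (2, 2),
                "N": (2, 2), "P": (8, 8)}))  # -> 9
```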
73
Cutting off search
MinimaxCutoff is identical to MinimaxValue except:
1. Terminal? is replaced by Cutoff?
2. Utility is replaced by Eval
Does it work in practice? With b^m = 10^6 and b = 35, we get m ≈ 4: 4-ply lookahead is a hopeless chess player!
– 4-ply ≈ human novice
– 8-ply ≈ typical PC, human master
– 12-ply ≈ Deep Blue, Kasparov
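As a sketch of those two substitutions, a depth-limited version of the earlier minimax code could look like this; it again uses the assumed game interface, plus a hypothetical cutoff depth and evaluation function passed in by the caller.

```python
def minimax_cutoff(state, game, depth, eval_fn, maximizing=True):
    """Depth-limited minimax value: Terminal? -> Cutoff?, Utility -> Eval."""
    if game.is_terminal(state):
        return game.utility(state)
    if depth == 0:                        # Cutoff? test replaces Terminal?
        return eval_fn(state)             # Eval replaces Utility
    values = [minimax_cutoff(game.result(state, a), game,
                             depth - 1, eval_fn, not maximizing)
              for a in game.actions(state)]
    return max(values) if maximizing else min(values)
```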
74
Deterministic games in practice
Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. It used a precomputed endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 444 billion positions.
Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue searches 200 million positions per second, uses a very sophisticated evaluation function, and uses undisclosed methods for extending some lines of search up to 40 ply.
Othello: human champions refuse to compete against computers, which are too good.
Go: human champions refuse to compete against computers, which are too bad. In Go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves.
75
Summary
Games are fun to work on!
They illustrate several important points about AI:
– perfection is unattainable, so we must approximate
– it is a good idea to think about what to think about