1 DCP 1172 Introduction to Artificial Intelligence Lecture notes for Chap. 6 [AIMA] Chang-Sheng Chen
DCP 1172, Ch. 6 2 This time: Outline Adversarial search (game playing); the minimax algorithm; resource limitations; alpha-beta pruning; elements of chance.
DCP 1172, Ch. 6 3 Game Playing Search Why study games? Why is search a good idea?
DCP 1172, Ch. 6 4 Why Study Games? (1) Game playing was one of the first tasks undertaken in AI. By 1950, chess had been studied by many forerunners of AI (e.g., Claude Shannon, Alan Turing). For AI researchers, the abstract nature of games makes them appealing objects of study. The state of a game is easy to represent, and agents are usually restricted to a small number of actions whose outcomes are defined by precise rules.
DCP 1172, Ch. 6 5 Why Study Games? (2) Games are interesting because they are too hard to solve. Games require the ability to make some decision even when calculating the optimal decision is infeasible. Games also penalize inefficiency severely. Game-playing research has therefore spawned a number of interesting ideas on how to make the best possible use of time.
DCP 1172, Ch. 6 6 Why is search a good idea? Ignoring computational complexity, games are a perfect application for a complete search. Some major assumptions we've been making: only an agent's actions change the world; the world is deterministic and fully observable. These are pretty much true in lots of games. Of course, ignoring complexity is a bad idea, so games are a good place to study resource-bounded search.
DCP 1172, Ch. 6 7 What kind of games? Abstraction: to describe a game we must capture every relevant aspect of it (e.g., chess, tic-tac-toe, …). Fully observable environments: such games are characterized by perfect information. Search: game playing then consists of a search through possible game positions. Unpredictable opponent: introduces uncertainty; thus game playing must deal with contingency problems.
DCP 1172, Ch. 6 8 Searching for the next move Complexity: many games have a huge search space. Chess: b = 35, m = 100, so the game tree has about 35^100 nodes; if each node takes about 1 ns to explore, then each move will take about 10^50 millennia to calculate. Resource (e.g., time, memory) limits: the optimal solution is not feasible, so we must approximate. 1. Pruning: makes the search more efficient by discarding portions of the search tree that cannot improve the result. 2. Evaluation functions: heuristics to estimate the utility of a state without exhaustive search.
DCP 1172, Ch. 6 9 Two-player games A game formulated as a search problem: Initial state: board position and whose turn it is. Successor function: definition of the legal moves. Terminal test: conditions for when the game is over. Utility function: a numeric value that describes the outcome of the game, e.g., -1, 0, 1 for loss, draw, win (a.k.a. payoff function).
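This formulation maps directly onto a small programming interface. Below is a minimal, hypothetical sketch in Python; the class and method names (Game, initial_state, successors, is_terminal, utility) are illustrative, not part of the lecture notes.

# Hypothetical interface for a two-player game posed as a search problem.
class Game:
    def initial_state(self):
        """Board position plus whose turn it is."""
        raise NotImplementedError

    def successors(self, state):
        """Yield (move, next_state) pairs for every legal move."""
        raise NotImplementedError

    def is_terminal(self, state):
        """True when the game is over."""
        raise NotImplementedError

    def utility(self, state):
        """Payoff from MAX's point of view, e.g. -1 (loss), 0 (draw), +1 (win)."""
        raise NotImplementedError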
DCP 1172, Ch. 6 10 Game vs. search problem
DCP 1172, Ch. 6 11 Example: Tic-Tac-Toe
DCP 1172, Ch. 6 12 Types of games
DCP 1172, Ch. 6 13 Types of games
DCP 1172, Ch. 6 14 Generate Game Tree
DCP 1172, Ch. 6 15 Generate Game Tree [figure: partial tic-tac-toe game tree]
DCP 1172, Ch. 6 16 Generate Game Tree [figure: partial tic-tac-toe game tree, one more level expanded]
DCP 1172, Ch. 6 17 Generate Game Tree [figure: partial tic-tac-toe game tree; depth labels: 1 ply, 1 move]
DCP 1172, Ch. 6 18 A subtree [figure: subtree of the tic-tac-toe game tree with terminal positions labeled win, lose, draw]
DCP 1172, Ch. 6 19 What is a good move? [figure: the same subtree with terminal positions labeled win, lose, draw]
DCP 1172, Ch. 6 20 MiniMax Perfect play for deterministic environments with perfect information. From among the moves available to you, take the best one, where the best one is determined by a search using the minimax strategy.
DCP 1172, Ch. 6 21 The minimax algorithm Basic idea: choose the move with the highest minimax value = best achievable payoff against best play. Algorithm: 1. Generate the game tree completely. 2. Determine the utility of each terminal state. 3. Propagate the utility values upward in the tree by applying MIN and MAX operators on the nodes in the current level. 4. At the root node, use the minimax decision to select the move with the max (of the min) utility value. Steps 2 and 3 in the algorithm assume that the opponent will play perfectly.
DCP 1172, Ch. 6 22 Minimax [figure: example game tree with leaf utility values] Minimize the opponent's chance; maximize your chance.
DCP 1172, Ch. 6 23 Minimax [figure: the MIN level backs up the minimum of its children] Minimize the opponent's chance; maximize your chance.
DCP 1172, Ch. 6 24 Minimax [figure: the MAX level backs up the maximum of the MIN values] Minimize the opponent's chance; maximize your chance.
DCP 1172, Ch. 6 25 Minimax [figure: the completed minimax backup] Minimize the opponent's chance; maximize your chance.
DCP 1172, Ch. 6 26 MiniMax = maximum of the minimum I'll choose the best move for me (max); you'll choose the best move for you (min). [figure labels: 1st ply, 2nd ply]
DCP 1172, Ch. 6 27 Minimax: Recursive implementation Complete: yes, for a finite state space. Optimal: yes. Time complexity: O(b^m). Space complexity: O(bm) (like DFS, it does not keep all nodes in memory).
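As a concrete illustration of this recursive, depth-first formulation, here is a minimal minimax sketch built on the hypothetical Game interface from the earlier sketch (illustrative only, not the course's reference code):

# Recursive minimax: returns the best achievable utility for MAX against best play.
def minimax_value(game, state, maximizing):
    if game.is_terminal(state):
        return game.utility(state)
    values = [minimax_value(game, s, not maximizing)
              for _, s in game.successors(state)]
    return max(values) if maximizing else min(values)

def minimax_decision(game, state):
    # At the root, MAX picks the move whose successor has the highest backed-up value.
    return max(game.successors(state),
               key=lambda ms: minimax_value(game, ms[1], maximizing=False))[0]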
DCP 1172, Ch. 6 28 Do We Have To Do All That Work? [figure: the first MIN subtree's leaves are expanded]
DCP 1172, Ch. 6 29 Do We Have To Do All That Work? [figure: the first MIN subtree backs up the value 3]
DCP 1172, Ch. 6 30 Do We Have To Do All That Work? [figure: the second MIN subtree's first leaf is 2] Since 2 is smaller than 3, there is no need for further search under this node.
DCP 1172, Ch. 6 31 Do We Have To Do All That Work? [figure: the pruned branch is crossed out] More on this next time: α-β pruning.
DCP 1172, Ch. 6 32 Ideal Case Search all the way to the leaves (end-game positions). Return the leaf (or leaves) that leads to a win (for me). Anything wrong with that?
DCP 1172, Ch. 6 33 More Realistic Search ahead to a non-leaf (non-goal) state and evaluate it somehow. Chess: 4 ply is a novice, 8 ply is a master, 12 ply can compete at the highest level. In no sense can 12 ply be likened to a search of the whole space.
DCP 1172, Ch. 6 34 1. Move evaluation without complete search Complete search is too complex and impractical. Evaluation function: estimates the value of a state using heuristics and cuts off the search. New MINIMAX: CUTOFF-TEST: a cutoff test to replace the terminal-test condition (e.g., deadline, depth limit). EVAL: an evaluation function to replace the utility function (e.g., number of chess pieces taken).
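Only the base case of the recursion changes: the terminal test becomes a cutoff test and the utility becomes a heuristic evaluation. A hedged sketch, again over the assumed Game interface, using a depth limit as the cutoff and an evaluate function that is assumed to exist:

# Depth-limited minimax: CUTOFF-TEST replaces the terminal test, EVAL replaces utility.
def h_minimax(game, state, depth, maximizing, limit, evaluate):
    if game.is_terminal(state):
        return game.utility(state)          # true utility at genuine end states
    if depth >= limit:                      # CUTOFF-TEST (here: a simple depth limit)
        return evaluate(state)              # EVAL: heuristic estimate of utility
    values = [h_minimax(game, s, depth + 1, not maximizing, limit, evaluate)
              for _, s in game.successors(state)]
    return max(values) if maximizing else min(values)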
DCP 1172, Ch. 6 35 Evaluation Functions Need a numerical function that assigns a value to a non-goal state. Has to capture the notion of a position being good for one player. Has to be fast. Typically a linear combination of simple metrics.
DCP 1172, Ch. 6 36 Evaluation functions Weighted linear evaluation function to combine n heuristics: f = w_1 f_1 + w_2 f_2 + … + w_n f_n. E.g., the w's could be the values of the pieces (1 for a pawn, 3 for a bishop, etc.) and the f's could be the number of each type of piece on the board.
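For instance, a material-count evaluation for chess could be written as the weighted sum above. The sketch below is illustrative only: the weights are the conventional piece values, and counts is an assumed mapping from (player, piece) to the number of such pieces on the board.

# Weighted linear evaluation f = w_1*f_1 + ... + w_n*f_n for chess material.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def material_eval(counts, me="white", opponent="black"):
    # Each feature f_i is the difference in the number of pieces of one type.
    return sum(w * (counts.get((me, p), 0) - counts.get((opponent, p), 0))
               for p, w in PIECE_VALUES.items())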
DCP 1172, Ch. 6 37 Note: exact values do not matter
DCP 1172, Ch. 6 38 Minimax with cutoff: viable algorithm? Assume we have 100 seconds and can evaluate 10^4 nodes/s; then we can evaluate 10^6 nodes per move.
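Under those assumptions the reachable depth is easy to estimate: solving 35^d = 10^6 gives d of about 4 plies for plain minimax, and good pruning (next section) roughly doubles that. A quick back-of-the-envelope check:

# 100 s at 10^4 nodes/s gives a budget of 10^6 node evaluations per move.
import math
budget_nodes = 100 * 10**4
depth = math.log(budget_nodes, 35)    # solve 35**d = budget for d
print(f"about {depth:.1f} plies")     # roughly 3.9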
DCP 1172, Ch. 6 39 2. α-β pruning: search cutoff Pruning: eliminating a branch of the search tree from consideration without exhaustive examination of each node. α-β pruning: the basic idea is to prune portions of the search tree that cannot improve the utility value of the max or min node, by just considering the values of the nodes seen so far. Does it work? Yes; it roughly cuts the effective branching factor from b to √b, allowing about twice the look-ahead of pure minimax.
41
DCP 1172, Ch. 6 40 - pruning: example 6 6 6 MAX 6128 MIN
42
DCP 1172, Ch. 6 41 - pruning: example 6 6 6 MAX 61282 2 2 MIN
43
DCP 1172, Ch. 6 42 - pruning: example 6 6 6 MAX 61282 2 2 5 5 5 MIN
44
DCP 1172, Ch. 6 43 - pruning: example 6 6 6 MAX 61282 2 2 5 5 5 MIN Selected move
DCP 1172, Ch. 6 44 Properties of α-β
DCP 1172, Ch. 6 45 α-β pruning: general principle [figure: along the current path, Player (MAX) has an alternative m above a node n of the Opponent whose value is v] If α > v, then MAX will choose m, so prune the tree under n. Similarly for β for MIN.
DCP 1172, Ch. 6 46 Remember: Minimax: Recursive implementation
DCP 1172, Ch. 6 47 Alpha-beta Pruning Algorithm
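The pruning pseudocode itself is not reproduced in these notes, so here is a minimal α-β sketch in the same style as the minimax function above, over the same assumed Game interface. It returns the same value as minimax but skips branches that cannot affect the final decision.

# Alpha-beta search: alpha = best value found so far for MAX along the path,
# beta = best value found so far for MIN along the path.
def alphabeta(game, state, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if game.is_terminal(state):
        return game.utility(state)
    if maximizing:
        value = float("-inf")
        for _, s in game.successors(state):
            value = max(value, alphabeta(game, s, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:          # MIN already has a better alternative: prune
                break
        return value
    else:
        value = float("inf")
        for _, s in game.successors(state):
            value = min(value, alphabeta(game, s, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:          # MAX already has a better alternative: prune
                break
        return value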
DCP 1172, Ch. 6 48 More on the α-β algorithm Same basic idea as minimax, but prune (cut away) branches of the tree that we know will not contain the solution. Because minimax is depth-first, let's consider nodes along a given path in the tree. Then, as we go along this path, we keep track of: α: the best choice so far for MAX; β: the best choice so far for MIN.
50
DCP 1172, Ch. 6 49 More on the - algorithm: start from Minimax Note: These are both Local variables. At the Start of the algorithm, We initialize them to = - and = +
DCP 1172, Ch. 6 50 More on the α-β algorithm [figure: walkthrough on a MAX/MIN/MAX tree with leaves 5, 10, 6, 2, 8, 7; Min-Value loops over its successors, updating β from +∞ to 5 while α = -∞]
DCP 1172, Ch. 6 51 More on the α-β algorithm [figure, continued: Max-Value loops over its successors with the bounds updated to α = 5, β = +∞]
DCP 1172, Ch. 6 52 More on the α-β algorithm [figure, continued: in Min-Value, a leaf of 2 updates β to 2; since β < α, end the loop and return, pruning the rest of this subtree]
DCP 1172, Ch. 6 53 More on the α-β algorithm [figure, continued: back in Max-Value, the loop continues with α = 5, β = +∞]
DCP 1172, Ch. 6 54 Operation of the α-β pruning algorithm [figure] If β < α, end the loop and return.
DCP 1172, Ch. 6 55 Example
DCP 1172, Ch. 6 56 α-β algorithm:
DCP 1172, Ch. 6 57 Solution (trace of node visits; -I = -∞, +I = +∞)
NODE TYPE ALPHA BETA SCORE
A Max -I +I
B Min -I +I
C Max -I +I
D Min -I +I
E Max 10 10 10
D Min -I 10
F Max 11 11 11
D Min -I 10 10
C Max 10 +I
G Min 10 +I
H Max 9 9 9
G Min 10 9 9
C Max 10 +I 10
B Min -I 10
J Max -I 10
K Min -I 10
L Max 14 14 14
K Min -I 10 10
…
J Max 10 10 10
B Min -I 10 10
A Max 10 +I
Q Min 10 +I
R Max 10 +I
S Min 10 +I
T Max 5 5 5
S Min 10 5 5
R Max 10 +I
V Min 10 +I
W Max 4 4 4
V Min 10 4 4
R Max 10 +I 10
Q Min 10 10 10
A Max 10 10 10
DCP 1172, Ch. 6 58 State-of-the-art for deterministic games
DCP 1172, Ch. 6 59 Stochastic games
DCP 1172, Ch. 6 60 Algorithm for stochastic games
DCP 1172, Ch. 6 61 Remember: Minimax algorithm
DCP 1172, Ch. 6 62 Stochastic games: the element of chance [figure: game tree with CHANCE nodes, probability 0.5 on each branch; chance-node values still unknown] Expectimax and expectimin: expected values over all possible outcomes.
DCP 1172, Ch. 6 63 Stochastic games: the element of chance [figure: the same tree with the chance-node values filled in, e.g., 4 = 0.5*3 + 0.5*5] Expectimax, expectimin.
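The 4 = 0.5*3 + 0.5*5 calculation generalizes to an expectiminimax recursion: MAX and MIN nodes behave as before, while chance nodes return the probability-weighted average of their children. A minimal sketch over a small explicit tree; the node encoding is my own, purely for illustration.

# Expectiminimax over an explicit tree. A node is a number (leaf utility),
# ("max", [children]), ("min", [children]), or ("chance", [(prob, child), ...]).
def expectiminimax(node):
    if isinstance(node, (int, float)):
        return node
    kind, children = node
    if kind == "max":
        return max(expectiminimax(c) for c in children)
    if kind == "min":
        return min(expectiminimax(c) for c in children)
    return sum(p * expectiminimax(c) for p, c in children)   # chance: expected value

# The slide's example: a coin-flip chance node over the values 3 and 5.
print(expectiminimax(("chance", [(0.5, 3), (0.5, 5)])))      # 4.0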
DCP 1172, Ch. 6 64 Evaluation functions: Exact values DO matter Order-preserving transformations do not necessarily behave the same! With chance nodes, a monotone transformation of the leaf values can change which move is best; the evaluation must be a positive linear transformation of the expected utility.
DCP 1172, Ch. 6 65 State-of-the-art for stochastic games
DCP 1172, Ch. 6 66 Summary