Game Playing: Mini-Max search, Alpha-Beta pruning, general concerns on games
Why study board games?
- One of the oldest subfields of AI (Shannon and Turing, 1950)
- An abstract and pure form of competition that seems to require intelligence
- Easy to represent the states and actions
- Very little world knowledge required!
Game playing is a special case of a search problem, with some new requirements.
Types of games:

                         Deterministic                Chance
Perfect information      Chess, checkers,             Backgammon,
                         go, othello                  monopoly
Imperfect information    Sea battle                   Bridge, poker, scrabble,
                                                      nuclear war
Why new techniques for games?
- "Contingency" problem: we don't know the opponent's move!
- The size of the search space:
  - Chess: ~15 moves possible per state, 80 ply: 15^80 nodes in the tree
  - Go: ~200 moves per state, 300 ply: 200^300 nodes in the tree
- Game playing algorithms:
  - Search the tree only up to some depth bound
  - Use an evaluation function at the depth bound
  - Propagate the evaluation upwards in the tree
MINIMAX

Restrictions:
- 2 players: MAX (the computer) and MIN (the opponent)
- deterministic, perfect information

Select a depth-bound (say: 2) and an evaluation function.
- Construct the tree up to the depth-bound
- Compute the evaluation function for the leaves
- Propagate the evaluation function upwards:
  - taking minima at MIN nodes
  - taking maxima at MAX nodes
[Figure: a depth-2 tree with leaf values 2, 5, 3, 1, 4; propagated MIN values 2, 1, 3; MAX selects the move with value 3.]
The MINIMAX algorithm:

Initialise depthbound;

Minimax(board, depth) =
  IF depth = depthbound
    THEN return static_evaluation(board);
  ELSE IF maximizing_level(depth)
    THEN FOR EACH child of board:
           compute Minimax(child, depth+1);
         return the maximum over all children;
  ELSE IF minimizing_level(depth)
    THEN FOR EACH child of board:
           compute Minimax(child, depth+1);
         return the minimum over all children;

Call: Minimax(current_board, 0)
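The pseudocode above can be sketched in Python. The `children` and `evaluate` callables are hypothetical stand-ins for a real game interface; the example tree is a hand-built nested list, not necessarily the exact tree from the slide:

```python
# A minimal sketch of the Minimax algorithm above.
# `children` and `evaluate` are hypothetical stand-ins for a real
# game interface; here the "game" is just a hand-built tree of lists.

DEPTH_BOUND = 2

def minimax(board, depth, children, evaluate):
    """Return the minimax value of `board`, searching to DEPTH_BOUND."""
    if depth == DEPTH_BOUND:
        return evaluate(board)
    values = [minimax(c, depth + 1, children, evaluate)
              for c in children(board)]
    if depth % 2 == 0:       # maximizing level: MAX moves at even depths
        return max(values)
    return min(values)       # minimizing level

# A small example tree: three MIN nodes with three leaves each.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
children = lambda b: b if isinstance(b, list) else []
evaluate = lambda b: b

print(minimax(tree, 0, children, evaluate))  # MIN values 3, 2, 2; MAX picks 3
```

The MIN nodes propagate 3, 2 and 2 upwards, and the MAX root selects the left-most move with value 3.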
Alpha-Beta Cut-off: a generally applied optimization of Minimax.
Instead of first creating the entire tree (up to the depth-bound) and then doing all the propagation, interleave the generation of the tree with the propagation of values.
Point: some of the values obtained in the tree provide information that other (not-yet-generated) parts are redundant and do not need to be generated.
Alpha-Beta idea:

Principles:
- generate the tree depth-first, left-to-right
- propagate final values of nodes as initial estimates for their parent node

[Figure: a MIN node below a MAX node with value 2.]
- The MIN-value (1) is already smaller than the MAX-value of the parent (2)
- The MIN-value can only decrease further
- The MAX-value is only allowed to increase
- No point in computing further below this node
Terminology:
- The (temporary) values at MAX-nodes are ALPHA-values
- The (temporary) values at MIN-nodes are BETA-values
The Alpha-Beta principles (1):
- If an ALPHA-value is larger than or equal to the BETA-value of a descendant node: stop generating the children of that descendant.
The Alpha-Beta principles (2):
- If a BETA-value is smaller than or equal to the ALPHA-value of a descendant node: stop generating the children of that descendant.
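The two cut-off principles can be sketched as a depth-first, left-to-right search that carries alpha and beta down the tree and stops expanding a node as soon as alpha >= beta. As before, `children` and `evaluate` are hypothetical stand-ins for a real game interface:

```python
# A minimal alpha-beta sketch following the two principles above:
# stop generating children as soon as alpha >= beta.

import math

DEPTH_BOUND = 2

def alphabeta(board, depth, alpha, beta, children, evaluate):
    """Minimax value of `board`, pruning with the alpha/beta bounds."""
    if depth == DEPTH_BOUND:
        return evaluate(board)
    if depth % 2 == 0:                        # MAX level: raises alpha
        value = -math.inf
        for child in children(board):
            value = max(value, alphabeta(child, depth + 1,
                                         alpha, beta, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:                 # principle (1): cut off
                break
        return value
    value = math.inf                          # MIN level: lowers beta
    for child in children(board):
        value = min(value, alphabeta(child, depth + 1,
                                     alpha, beta, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:                     # principle (2): cut off
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
children = lambda b: b if isinstance(b, list) else []
evaluate = lambda b: b
print(alphabeta(tree, 0, -math.inf, math.inf, children, evaluate))  # 3
```

On this tree the second MIN node is abandoned after its first leaf (2), because its beta-value 2 is already below the root's alpha-value 3: the same answer as plain Minimax, with fewer static evaluations.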
Minimax with alpha-beta cut-offs at work:
[Figure: a b=3, d=3 example tree searched depth-first, left-to-right; the nodes are numbered in order of generation, and the cut-offs show that 11 static evaluations are saved!!]
"DEEP" cut-offs:
- For game trees with at least 4 MIN/MAX layers: the Alpha-Beta rules also apply at deeper levels.
[Figure: a deep cut-off example.]
The gain:
Best case: if at every layer the best node is the left-most one.
[Figure: a MAX/MIN tree in which only the thick-lined part is explored.]
Example of a perfectly ordered tree:
[Figure: a b=3, d=3 MAX/MIN tree whose leaf values are ordered so that the best child of every node is the left-most one.]
How much gain?

# (static evaluations):
- Alpha-Beta, best case:
    2 * b^(d/2) - 1                       (if d is even)
    b^((d+1)/2) + b^((d-1)/2) - 1         (if d is odd)
- The proof is by induction.
- In the running example: d = 3, b = 3: 3^2 + 3^1 - 1 = 11!
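The best-case counts are easy to check numerically. A minimal sketch (the helper name `best_case_evals` is my own):

```python
# Best-case number of static evaluations for alpha-beta,
# per the formula above:
#   2 * b**(d/2) - 1                       (d even)
#   b**((d+1)/2) + b**((d-1)/2) - 1        (d odd)
# versus b**d static evaluations with plain minimax.

def best_case_evals(b, d):
    """Static evaluations done by alpha-beta on a perfectly ordered tree."""
    if d % 2 == 0:
        return 2 * b ** (d // 2) - 1
    return b ** ((d + 1) // 2) + b ** ((d - 1) // 2) - 1

# The running example: b = 3, d = 3.
print(best_case_evals(3, 3))   # 11, versus 3**3 = 27 without pruning
```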
Best case gain pictured:
[Figure: # static evaluations (10 to 100,000) versus depth (1 to 7) for b = 10: no pruning versus the alpha-beta best case.]
- Note: a logarithmic scale.
- Conclusion: still exponential growth!!
- Worst case? For some trees alpha-beta does nothing; for some trees it is impossible to reorder them to avoid cut-offs.
The horizon effect:
[Figure: a position where a queen is lost beyond the horizon and a pawn is lost before it; horizon = depth-bound of Minimax.]
Because of the depth-bound, we prefer to delay disasters, although we don't prevent them!!
Solution: heuristic continuations.
Time bounds:
How do we play within reasonable time bounds? Even with a fixed depth-bound, times can vary strongly!
Solution: Iterative Deepening!!!
Remember: the overhead of the previous, shallower searches is only about 1/b of the total work. A good investment to be sure to always have a move ready.
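The idea can be sketched as a loop that re-searches with an increasing depth-bound until a time budget runs out, always keeping the last completed answer. `search_to_depth` is a hypothetical stand-in for a depth-bounded Minimax or alpha-beta search; a production version would also abort a search that overruns the deadline mid-way:

```python
# A minimal iterative-deepening sketch: search to depth 1, 2, 3, ...
# until the time budget runs out, keeping the last completed result
# so that a move is always ready.

import time

def iterative_deepening(search_to_depth, time_budget):
    """Repeat depth-bounded searches with increasing depth until time runs out."""
    deadline = time.monotonic() + time_budget
    best = None
    depth = 1
    while time.monotonic() < deadline:
        best = search_to_depth(depth)   # last *completed* search wins
        depth += 1
    return best

# Demo with a hypothetical search that just sleeps briefly and returns its depth:
def fake_search(depth):
    time.sleep(0.001)
    return depth

result = iterative_deepening(fake_search, 0.02)
```

With `fake_search`, `result` is simply the deepest level completed before the deadline.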
Games of chance
Example: Backgammon.
Form of the game tree: [figure: MAX and MIN levels with chance nodes for the dice rolls in between.]
State of the art
Drawn from an article by Matthew Ginsberg, Scientific American, Winter 1998, Special Issue on Exploring Intelligence.
State of the art (2)
State of the art (3)
Win of Deep Blue predicted:
Computer chess ratings were studied around the 1990s.
[Figure: chess rating (1500 to 3500) versus search depth in ply (2 to 14), with Kasparov's rating marked.]
A further increase of depth was likely to win!