Pruned Search Strategies
CS344 : AI - Seminar, 20th January 2011
TL Nishant Totla, RM Pritish Kamath, M1 Garvit Juniwal, M2 Vivek Madan
Guided by Prof. Pushpak Bhattacharya
Outline
● Two player games
● Game trees
● MiniMax algorithm
● α-β pruning
● A demonstration: Chess
● Iterative Deepening A*
A Brief History
● Computer considers possible lines of play (Babbage, 1846)
● Minimax theorem (von Neumann, 1928)
● First chess program (Turing, 1951)
● Machine learning to improve evaluation accuracy (Samuel, 1952–57)
● Pruning to allow deeper search (McCarthy, 1956)
● Deep Blue wins 6-game chess match against Kasparov (Hsu et al., 1997)
● Checkers solved (Schaeffer et al., 2007)
Two player games
● The game is played by two players, who take alternate turns to change the state of the game.
● The game has a starting state S.
● The game ends when a player does not have a legal move.
● Both players end up with a score at an end state.
Classification of 2-player games
● Sequential: players move one-at-a-time
● Zero-sum game: the sum of the scores assigned to the players at any end state equals 0.

                        deterministic                                  chance
Perfect information     chess, checkers, go, Othello                   backgammon, monopoly, roulette
Imperfect information   battleship, kriegspiel, rock-paper-scissors    bridge, poker
Game Tree
● A move changes the state of the game.
● This naturally induces a graph with the set of states as the vertices, and moves represented by the edges.
● A game tree is a graphical representation of a finite, sequential, deterministic, perfect-information game.
Tic-Tac-Toe Game Tree
Strategy for 2-player games?
● How does one go about playing 2-player games? Choose a move. Look at all possible moves that the opponent can play. Choose a response to each of the opponent's possible moves, and so on...
● Consider an instance of a Tic-Tac-Toe game played between Max and Min.
● The following two images describe strategies for each player.
Best Strategy? The MiniMax Algorithm
(pseudocode figure taken from Wikipedia: http://en.wikipedia.org/wiki/Minimax)
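Since the pseudocode figure is not reproduced here, the following is a minimal MiniMax sketch in Python. The nested-list tree representation and the toy values are assumptions for illustration only; the demonstration program shown later in the seminar is written in Scheme.

def minimax(node, maximizing):
    # Leaves carry their heuristic value; internal nodes are lists of child positions.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Toy tree: the root is a MAX node with two MIN children.
tree = [[3, 12, 8], [2, 4, 6]]
print(minimax(tree, True))   # prints 3: MAX picks the left child, whose MIN value is 3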
But what about the heuristics?
● The heuristics used at the leaves of the MiniMax tree depend on the rules of the game and our understanding of them.
● A heuristic is an objective way to quantify the "goodness" of a particular state.
● For example, in chess you can use the weighted sum of the pieces remaining on the board (a small sketch follows below).
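As an illustration, here is a hedged sketch of such a material-count evaluation for chess in Python. The piece encoding, the weights, and the function name are assumptions, not taken from the slides.

# Hypothetical material-count evaluation: the board is given as a list of piece
# letters, upper case for Max's pieces and lower case for Min's.
PIECE_VALUES = {'p': 1, 'n': 3, 'b': 3, 'r': 5, 'q': 9}

def evaluate(pieces):
    score = 0
    for piece in pieces:
        value = PIECE_VALUES.get(piece.lower(), 0)   # kings and unknown symbols score 0
        score += value if piece.isupper() else -value
    return score

print(evaluate(['Q', 'R', 'p', 'n']))   # 9 + 5 - 1 - 3 = 10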
Properties of the minimax algorithm
● Space complexity: O(bh), where b is the average fanout and h is the maximum search depth.
● Time complexity: O(b^h).
● For chess, b ≈ 35 and h ≈ 100 for 'reasonable' games.
● 35^100 ≈ 10^154 nodes. This is about 10^67 times the number of particles in the Universe (about 10^87) ⇨ no way to examine every node!
● But do we really need to examine every node? Let's now see an improved idea.
Improvements?
● α-β Pruning: "Stop exploring unfavourable moves if you have already found a more favourable one."
α-β pruning (execution)
(execution example taken from Wikipedia: http://en.wikipedia.org/wiki/Alpha-beta_pruning)
d-depth α-β pruning (pseudocode)
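The pseudocode figure itself is not reproduced here; below is a minimal sketch of depth-limited α-β pruning in Python, reusing the toy nested-list trees from the MiniMax sketch above. All names and values are illustrative assumptions rather than the slides' own pseudocode.

import math

def alphabeta(node, depth, alpha, beta, maximizing):
    # Depth-limited alpha-beta over toy trees: leaves are numbers, internal nodes are lists.
    if isinstance(node, (int, float)):       # leaf: exact value
        return node
    if depth == 0:
        return 0                             # a real program would call the evaluation heuristic here
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                # beta cut-off: MIN will never allow this branch
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:                # alpha cut-off: MAX will never allow this branch
                break
        return value

tree = [[3, 12, 8], [2, 4, 6]]
print(alphabeta(tree, 10, -math.inf, math.inf, True))   # prints 3; the values 4 and 6 are never examined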
Resource Limits
● Even after pruning, chess has too large a state space, so search depths must be restricted.
● Fact: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue searches 200 million positions per second, uses very sophisticated evaluation, and undisclosed methods for extending some lines of search up to 40 ply.
Demonstration!
● We shall now demonstrate a chess program that uses the MiniMax algorithm with α-β pruning.
● The code is written in Scheme (a functional programming language).
● After this, we move to a different pruned search strategy for general graphs.
Iterative Deepening A*
Search Strategies
● Two types of search algorithms:
  – Brute force (breadth-first, depth-first, etc.)
  – Heuristic (A*, heuristic depth-first, etc.)
Definitions
● Node branching factor (b): maximum fan-out of the nodes of the search tree.
● Depth (d): length of the shortest path from the initial state to a goal state.
● Maximum depth (m): maximum depth of the tree.
Breadth First Search
Depth First Search
Iterative Deepening DFS Source: http://homepages.ius.edu/rwisman/C463/html/Chapter3.htm
Iterative Deepening DFS Source: http://homepages.ius.edu/rwisman/C463/html/Chapter3.html
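A minimal sketch of iterative deepening DFS in Python. The adjacency-list representation, the function names, and the toy graph are illustrative assumptions, not taken from the slides.

def depth_limited_dfs(graph, node, goal, limit):
    # Depth-first search that gives up once the depth limit is exhausted.
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        path = depth_limited_dfs(graph, child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iterative_deepening_dfs(graph, start, goal, max_depth=50):
    # Repeat depth-limited DFS with limits 0, 1, 2, ... until the goal is found.
    for limit in range(max_depth + 1):
        path = depth_limited_dfs(graph, start, goal, limit)
        if path is not None:
            return path
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'E': ['F']}
print(iterative_deepening_dfs(graph, 'A', 'F'))   # prints ['A', 'C', 'E', 'F']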
IDA* Algorithm
● IDA* works like iterative deepening depth-first search, except that its iterations are based on increasing values of total cost (f = g + h) rather than increasing depths.
● At each iteration, perform a depth-first search, cutting off a branch when its total cost exceeds a given threshold.
● The threshold is initially set to h(start).
● The threshold used for the next iteration is the minimum of all f-values that exceeded the current threshold.
● IDA* always finds a cheapest solution if the heuristic is admissible (a sketch follows below).
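A hedged sketch of this iteration in Python. The successors/goal_test/h interface, the cycle check, and the toy example are illustrative assumptions rather than the original formulation.

import math

def ida_star(start, goal_test, successors, h):
    # successors(state) yields (next_state, step_cost) pairs; h is the admissible heuristic.
    bound = h(start)                       # initial threshold is h(start)
    path = [start]

    def search(g, bound):
        state = path[-1]
        f = g + h(state)
        if f > bound:                      # cut off this branch, remember its f-value
            return f
        if goal_test(state):
            return True
        minimum = math.inf
        for nxt, cost in successors(state):
            if nxt in path:                # avoid trivial cycles on the current path
                continue
            path.append(nxt)
            result = search(g + cost, bound)
            if result is True:
                return True
            minimum = min(minimum, result)
            path.pop()
        return minimum

    while True:
        result = search(0, bound)
        if result is True:
            return path                    # cheapest solution path found
        if math.isinf(result):
            return None                    # no solution exists
        bound = result                     # next threshold: smallest f that exceeded the old one

# Toy weighted graph with a zero heuristic; the cheaper path A-B-C (cost 2) beats A-C (cost 4).
edges = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1)], 'C': []}
print(ida_star('A', lambda s: s == 'C', lambda s: edges[s], lambda s: 0))   # ['A', 'B', 'C']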
Monotonicity
● For any admissible cost function f, we can construct a monotone admissible function f' which is at least as informed as f.
● We can therefore restrict our attention, without loss of generality, to cost functions which are monotonically non-decreasing along any path in the problem space.
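One standard construction, given here as a sketch since the slides do not spell it out, propagates the largest f-value seen so far along each path:

f'(n) = \max\bigl(f(n),\; f'(\mathrm{parent}(n))\bigr), \qquad f'(\mathrm{start}) = f(\mathrm{start})

Because any solution reachable through n must also pass through n's ancestors, the maximum of their f-values is still a lower bound on the cost of such a solution, so f' stays admissible while being non-decreasing along every path.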
Correctness
● Since the cost cutoff for each succeeding iteration is the minimum value which exceeded the previous cutoff, no path can have a cost which lies in a gap between two successive cutoffs.
● IDA* therefore examines nodes in order of increasing f-cost.
● Hence, IDA* finds the optimal path.
● Source: http://reference.kfupm.edu.sa/content/d/e/depth_first_iterative_deepening__an_opti_93341.pdf
Why IDA* over A*?
● Uses far less space than A*.
● Expands, asymptotically, the same number of nodes as A* in a tree search.
● Simpler to implement, since there are no open or closed lists to be managed.
Optimality
Given an admissible monotone heuristic with constant relative error, IDA* is optimal in terms of solution cost, time, and space over the class of admissible best-first searches on a tree.
An Empirical Test
● Both IDA* and A* were implemented for the Fifteen Puzzle, using the Manhattan distance heuristic (a small sketch of the heuristic follows below).
● A* could not solve most cases: it ran out of space.
● IDA* generated more nodes than A*, yet still ran faster, due to less overhead per node.
● Also refer: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.8560
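For reference, a small sketch of the Manhattan distance heuristic for the Fifteen Puzzle in Python. The row-major tuple representation of the board is an assumption for illustration.

def manhattan_distance(board, goal):
    # Sum over all tiles of |row - goal_row| + |col - goal_col|; the blank (0) is excluded.
    goal_pos = {tile: divmod(i, 4) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(board):
        if tile == 0:
            continue
        row, col = divmod(i, 4)
        goal_row, goal_col = goal_pos[tile]
        total += abs(row - goal_row) + abs(col - goal_col)
    return total

goal = tuple(range(1, 16)) + (0,)                      # 1..15 followed by the blank
start = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 11, 13, 14, 15, 0)
print(manhattan_distance(start, goal))                 # prints 2: tiles 11 and 12 are each one step away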
Application to Game Trees
● We want to maximize search depth subject to fixed time and space constraints.
● Since IDA* minimizes, at least asymptotically, time and space for any given search depth, it maximizes the depth of search possible for any fixed time and space restrictions as well.
Thank You. Questions??
References
● www.cs.umd.edu/~nau/cmsc828n/game-tree-search.pdf
● http://reference.kfupm.edu.sa/content/d/e/depth_first_iterative_deepening__an_opti_93341.pdf
● http://www.cs.nott.ac.uk/~ajp/courses/g51iai/004heuristicsearches/intro-to-iterative-deepening.ppt
● http://www.cs.nott.ac.uk/~ajp/courses/g51iai/003blindsearches/ids.ppt
● http://homepages.ius.edu/rwisman/C463/html/Chapter3.htm