Pruned Search Strategies CS344 : AI - Seminar, 20th January 2011. TL Nishant Totla, RM Pritish Kamath, M1 Garvit Juniwal, M2 Vivek Madan. Guided by Prof. Pushpak Bhattacharyya

Outline ● Two player games ● Game trees ● MiniMax algorithm ● α-β pruning ● A demonstration : Chess ● Iterative Deepening A*

A Brief History ● Computer considers possible lines of play (Babbage, 1846) ● Minimax theorem (von Neumann, 1928) ● First chess program (Turing, 1951) ● Machine learning to improve evaluation accuracy (Samuel, 1952–57) ● Pruning to allow deeper search (McCarthy, 1956) ● Deep Blue wins 6-game chess match against Kasparov (Hsu et al., 1997) ● Checkers solved (Schaeffer et al., 2007)

Two player games ● The game is played by two players, who take alternate turns to change the state of the game. ● The game has a starting state S. ● The game ends when a player does not have a legal move. ● Both players end up with a score at an end state.
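As a concrete (invented) example of this setup, here is a minimal Python sketch of a Nim-like two-player game with a start state, alternating turns, legal moves, a terminal test, and end-state scores. The class and method names are illustrative only, not taken from the seminar.

```python
# A minimal sketch of a two-player game: a Nim-like pile game.
# All names here (PileGame, legal_moves, ...) are illustrative.

class PileGame:
    """State: number of stones left and whose turn it is (0 = Max, 1 = Min)."""

    def __init__(self, stones, player=0):
        self.stones = stones
        self.player = player

    def legal_moves(self):
        # A player may remove 1, 2, or 3 stones, but not more than remain.
        return [k for k in (1, 2, 3) if k <= self.stones]

    def play(self, k):
        # Taking k stones produces a new state with the turn handed to the other player.
        return PileGame(self.stones - k, 1 - self.player)

    def is_over(self):
        # The game ends when the player to move has no legal move.
        return self.stones == 0

    def score(self):
        # The player left without a move loses: +1 for Max, -1 for Min (zero-sum).
        return -1 if self.player == 0 else +1


if __name__ == "__main__":
    g = PileGame(5)
    print(g.legal_moves())   # [1, 2, 3]
    print(g.play(3).stones)  # 2
```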

Classification of 2-player games ● Sequential: players move one-at-a-time ● Zero-sum game: the sum of the scores assigned to the players at any end state equals 0.
Deterministic, perfect information: chess, checkers, go, Othello
Chance, perfect information: backgammon, monopoly, roulette
Deterministic, imperfect information: battleship, kriegspiel, rock-paper-scissors
Chance, imperfect information: bridge, poker

Game Tree ● A move changes the state of the game. ● This naturally induces a graph with the set of states as the vertices, and moves represented by the edges. ● A game tree is a graphical representation of a finite, sequential, deterministic, perfect-information game.

Tic-Tac-Toe Game Tree

Strategy for 2-player games? ● How does one go about playing 2-player games? Choose a move. Look at all possible moves that the opponent can play. Choose a move for each of the opponent's possible moves, and so on... ● Consider an instance of a Tic-Tac-Toe game played between Max and Min. ● The following two images describe strategies for each player.

Best Strategy? The MiniMax Algorithm -- taken from Wikipedia
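The Wikipedia figure is not reproduced in this transcript; the following is a minimal Python sketch of the MiniMax recursion on an explicitly given game tree. The nested-list tree below is the standard small textbook example, not a tree from the slides.

```python
# A minimal MiniMax sketch: a game tree is given explicitly as nested lists,
# with numeric leaves giving the utility for Max.

def minimax(node, maximizing):
    if not isinstance(node, list):      # leaf: return its utility for Max
        return node
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)

# Max moves at the root; Min replies at the next level, and so on.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, maximizing=True))   # 3 -- the value of the root for Max
```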

But what about the heuristics? ● The heuristics used at the leaves of the MiniMax tree depend on the rules of the game and our understanding of the same. ● A heuristic is an objective way to quantify the “goodness” of a particular state. ● For example, in chess you can use the weighted sum of pieces remaining on the board.
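As an illustration of such a heuristic, here is a small Python sketch of the weighted piece-sum evaluation mentioned above. The piece weights are the conventional 1/3/3/5/9 values and the board encoding is invented for this example.

```python
# A sketch of the "weighted sum of pieces" heuristic: positive favours
# White (Max), negative favours Black (Min). Weights are conventional values.

PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}   # king excluded

def material_eval(board):
    """board: iterable of piece letters, uppercase = White, lowercase = Black."""
    score = 0
    for piece in board:
        value = PIECE_VALUE.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

# White is up a rook for a knight:
print(material_eval(["K", "Q", "R", "P", "k", "q", "n", "p"]))  # 2
```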

Properties of the MiniMax algorithm ● Space complexity: O(bh), where b is the average fanout and h is the maximum search depth. ● Time complexity: O(b^h). ● For chess, b ≈ 35, h ≈ 100 for 'reasonable' games ⇨ roughly 10^154 nodes. ● This is about 10^74 times the number of particles in the observable Universe (about 10^80) ⇨ no way to examine every node! ● But do we really need to examine every node? Let's now see an improved idea.
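A quick back-of-the-envelope check of the node count above (assuming b ≈ 35 and h ≈ 100):

```python
# 35^100 = 10^(100 * log10(35)), so the exponent is about 154.
import math
print(math.log10(35) * 100)   # ~154.4, i.e. roughly 10^154 nodes
```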

Improvements? ● α-β Pruning : “Stop exploring unfavourable moves if you have already found a more favourable one.”

α-β pruning (execution) - taken from Wikipedia

d-depth α-β pruning (pseudocode)
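Since the pseudocode figure itself is not reproduced in this transcript, here is a hedged Python sketch of depth-limited MiniMax with α-β pruning over the same nested-list trees as before. The cutoff heuristic used here is an invented stand-in for a real evaluation function.

```python
# Depth-limited minimax with alpha-beta pruning over nested-list game trees.
import math

def alphabeta(node, depth, alpha, beta, maximizing):
    if not isinstance(node, list):            # leaf: exact utility
        return node
    if depth == 0:                            # cutoff: fall back to a heuristic
        return heuristic(node)
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                 # beta cut-off: Min will never allow this
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:                 # alpha cut-off: Max will never allow this
                break
        return value

def heuristic(node):
    # Invented stand-in for a real evaluation function at the depth cutoff:
    # recursively averages the values of this node's children.
    if not isinstance(node, list):
        return node
    return sum(heuristic(child) for child in node) / len(node)

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, depth=4, alpha=-math.inf, beta=math.inf, maximizing=True))  # 3
```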

Resource Limits ● Even after pruning, Chess has too large a state space, and hence search depths must be restricted. ● Fact: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue searches 200 million positions per second, uses a very sophisticated evaluation function, and undisclosed methods for extending some lines of search up to 40 ply.

Demonstration! ● We shall now demonstrate a chess program that uses the MiniMax algorithm with α-β pruning. ● The code is written in Scheme (a functional programming language). ● After this, we move to a different pruned search strategy for general graphs.

Iterative Deepening A*

Search Strategies ● Two types of search algorithms: – Brute force (breadth-first, depth-first, etc.) – Heuristic (A*, heuristic depth-first, etc.)

Definitions ● Node branching factor (b): Maximum fan-out of the nodes of the search tree. ● Depth (d): Length of the shortest path from the initial state to a goal state. ● Maximum depth (m): Maximum depth of the tree.

Breadth First Search

Depth First Search

Iterative Deepening DFS Source:

Iterative Deepening DFS Source:
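The figures referenced above are not included in the transcript; the following Python sketch shows the iterative deepening idea on a small invented graph: repeat a depth-limited DFS with limits 0, 1, 2, ..., so the shallowest goal is found first while memory stays proportional to the depth.

```python
# Iterative deepening DFS on an explicit graph (graph and goal are made up).

GRAPH = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": ["G"], "F": [], "G": [],
}

def depth_limited(node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in GRAPH[node]:
        found = depth_limited(child, goal, limit - 1, path + [child])
        if found is not None:
            return found
    return None

def iddfs(start, goal, max_depth=10):
    # Repeat depth-first search with limits 0, 1, 2, ...
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, limit, [start])
        if result is not None:
            return result
    return None

print(iddfs("A", "G"))   # ['A', 'B', 'E', 'G']
```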

IDA* Algorithm ● IDA* works like iterative deepening depth-first search, except that it is based on increasing limits on total cost (f = g + h) rather than increasing depths. ● At each iteration, perform a depth-first search, cutting off a branch when its total cost exceeds a given threshold. ● The threshold is initially set to h(start). ● The threshold for the next iteration is the minimum f-value among all nodes that exceeded the current threshold. ● IDA* always finds a cheapest solution if the heuristic is admissible.
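Here is a hedged Python sketch of the IDA* loop just described, on a small invented weighted graph with an admissible heuristic; node names, edge costs, and h-values are made up for illustration.

```python
# IDA*: depth-first search bounded by a threshold on f = g + h,
# with the threshold raised to the smallest exceeding f-value each iteration.
import math

GRAPH = {                      # node -> list of (neighbour, edge cost)
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 5)],
    "B": [("G", 1)],
    "G": [],
}
H = {"S": 4, "A": 3, "B": 1, "G": 0}   # admissible heuristic h(n)

def ida_star(start, goal):
    threshold = H[start]                        # initial cutoff is h(start)
    while True:
        result, next_threshold = _dfs(start, goal, 0, threshold, [start])
        if result is not None:
            return result                       # cheapest path found
        if next_threshold == math.inf:
            return None                         # search space exhausted, no solution
        threshold = next_threshold              # smallest f-value that exceeded the cutoff

def _dfs(node, goal, g, threshold, path):
    f = g + H[node]
    if f > threshold:
        return None, f                          # report the exceeding f-value
    if node == goal:
        return path, f
    minimum = math.inf
    for child, cost in GRAPH[node]:
        found, t = _dfs(child, goal, g + cost, threshold, path + [child])
        if found is not None:
            return found, t
        minimum = min(minimum, t)
    return None, minimum

print(ida_star("S", "G"))   # ['S', 'A', 'B', 'G'], total cost 4
```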

Monotonicity ● For any admissible cost function f, we can construct a monotone admissible function f' which is at least as informed as f. ● Hence, without loss of generality, we restrict our attention to cost functions that are monotonically non-decreasing along any path in the problem space.

Correctness ● Since the cost cutoff for each succeeding iteration is the minimum value which exceeded the previous cutoff, no paths can have a cost which lies in a gap between two successive cutoffs. ● IDA* examines nodes in order of increasing f-cost. ● Hence, IDA* finds the optimal path. ● Source:

Why IDA* over A*? ● Uses far less space than A*. ● Expands, asymptotically, the same number of nodes as A* in a tree search. ● Simpler to implement, since there are no open or closed lists to be managed.

Optimality Given an admissible monotone heuristic with constant relative error, IDA* is optimal in terms of solution cost, time, and space, over the class of admissible best-first searches on a tree.

An Empirical Test ● Both IDA* and A* were implemented for the Fifteen Puzzle, using the Manhattan distance heuristic. ● A* couldn't solve most cases: it ran out of space. ● IDA* generated more nodes than A*, but still ran faster than A*, due to less overhead per node. ● Also refer:

Application to Game Trees ● We want to maximize search depth subject to fixed time and space constraints. ● Since IDA* minimizes, at least asymptotically, time and space for any given search depth, it maximizes the depth of search possible for any fixed time and space restrictions as well.

Thank You. Questions??

References ● ve_deepening__an_opti_93341.pdf ● arches/intro-to-iterative-deepening.ppt ● es/ids.ppt