Group 1: Ashutosh Pushkar Ameya Sudhir

Motivation
- Game playing was one of the first tasks undertaken in AI
- The study of games brings us closer to:
  - Machines capable of logical deduction
  - Machines for making strategic decisions
  - Analyzing the limitations of machines relative to human thought processes
- Games are an idealization of worlds:
  - The world state is fully accessible
  - Actions and outcomes are well-defined

Outline of the presentation
- Evaluation Functions
- Algorithms
- Deep Blue
- Conclusions from Deep Blue
- Conclusion
- References

Chess
- Neither too simple nor too difficult for a satisfactory solution
- Requires "thinking" from a skilled player
- Designing a chess-playing program:
  - Playing perfect chess: INTRACTABLE
  - Playing merely legal chess: TRIVIAL
  - Playing a tolerably good game: requires SKILL

Evaluation Function

- Utility function
  - Requires the whole game tree to be explored: a computationally expensive task!
- Evaluation function
  - Estimates the expected utility of a state
  - Cuts off the exploration depth by estimating whether a state will lead to a win or a loss

Evaluation Function (cont.)
A good evaluation function should:
- Not take too long to compute
- Preserve the ordering of the terminal states; otherwise it will lead to bad decision making
- Credit strategic moves that lead to long-term advantages

Evaluation Function (cont.)
Typically includes:
- Material advantage (difference in total material of both sides):
  f(P) = 200(k − k′) + 9(q − q′) + 5(r − r′) + 3(b − b′) + (p − p′) + g(P) + h(P) + …
- Positions of pieces: a rook on an open file, doubled rooks, a rook on the seventh rank, etc., and their relative positions
- Pawn formation
- Mobility
The evaluation function is an attempt to write a mathematical formula for intelligence.
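The material term f(P) above can be sketched directly. A minimal sketch in Python, assuming piece counts are supplied as plain dictionaries; the knight weight of 3, and the omission of the positional terms g(P) and h(P), are assumptions, since the slide's formula elides them with "…".

```python
# Shannon-style material weights: king 200, queen 9, rook 5,
# bishop/knight 3, pawn 1. Knights assumed at 3 (elided in the slide).
WEIGHTS = {"k": 200, "q": 9, "r": 5, "b": 3, "n": 3, "p": 1}

def material_score(own, opp):
    """Weighted material difference between the two sides.

    own, opp: dicts mapping piece letter -> count, e.g. {"q": 1, "p": 8}.
    """
    return sum(w * (own.get(piece, 0) - opp.get(piece, 0))
               for piece, w in WEIGHTS.items())

# e.g. being up a queen: material_score({"k": 1, "q": 1}, {"k": 1}) -> 9
```

In a full engine this term would be combined with the positional terms (pawn structure, mobility) that the slide groups into g(P) and h(P).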

Algorithms

Games as Search Problems
- Initial state: where the game starts (the initial position in chess)
- Successor function: the list of all legal moves from the current position
- Terminal states: where the game is concluded
- Utility function: a numeric value for every terminal state
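The four components above can be made concrete with a toy game. A hedged sketch, assuming a trivial game invented for illustration (a pile of stones; each player removes one or two; whoever takes the last stone wins); none of these names come from the slides.

```python
INITIAL_STATE = (5, "MAX")            # initial state: 5 stones, MAX to move

def successors(state):
    """Successor function: all legal moves from the current position."""
    stones, player = state
    other = "MIN" if player == "MAX" else "MAX"
    return [(stones - take, other) for take in (1, 2) if take <= stones]

def is_terminal(state):
    """Terminal state: the game is concluded when no stones remain."""
    return state[0] == 0

def utility(state):
    """Numeric value for a terminal state: +1 if MAX took the last stone
    (so MIN is the player now to move), else -1."""
    return 1 if state[1] == "MIN" else -1
```

Chess fits the same interface; only the state representation and the move generator change.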

Minimax Strategy

- The optimal strategy, under the assumption that the opponent always plays his best possible move
- Pick the option that minimizes the damage the opponent can do, i.e. does the most damage to the opponent
- Idea:
  - For each of our moves, find the minimum minimax value the opponent can force
  - Choose the move with the maximum of these values
  - This guarantees the best achievable value against the most damaging play by the opponent

Strategy of Minimax
- The opponent tries to reduce the value of our utility function
- For each move the computer considers, assume the opponent replies with the move that reduces the value the most
- Choose the computer's move with the maximum such (minimized) value
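The strategy above is a short recursive procedure. A minimal sketch, assuming the game tree is given explicitly as nested lists whose numeric leaves are utility values for MAX; a real program would generate children from a successor function instead.

```python
def minimax(node, maximizing=True):
    """Value of `node` with MAX to move if `maximizing`, else MIN to move."""
    if isinstance(node, (int, float)):          # terminal state: its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    # MAX picks the largest child value, MIN the smallest
    return max(values) if maximizing else min(values)
```

For the tree [[3, 12], [2, 4]] the two MIN nodes are worth 3 and 2, so the root MAX value is 3.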

Analysis
- The algorithm is complete only if the game tree is finite
- By definition, not the best strategy against an irrational opponent
- Time complexity: O(b^m)
  - b = maximum number of possible moves (branching factor)
  - m = maximum depth of the tree
- In chess, even in the average case, b ≈ 35 and m ≈ 100, so the time required far exceeds practical limits
- The number of states grows exponentially with the number of moves played

α-β Pruning

- The problem with minimax search: the number of states to examine is exponential in the number of moves
- α-β pruning returns the same move as minimax does, but prunes away branches that cannot influence the final decision
- α: the value of the best (highest-value) choice found so far for MAX
- β: the value of the best (lowest-value) choice found so far for MIN
- The order in which successors are considered determines how much can be pruned

Algorithm for α-β Pruning
- At the root (a MAX node), α holds the best (highest) value found so far among the fully evaluated moves
- For each move, the opponent's replies are searched to find β, the lowest value the opponent can force from that move
- While computing β for the next move, if some reply value falls below the current α, that move can only end up worse than one we already have, so its remaining replies are not examined
- After a move is fully evaluated, update α = max(α, β for that move)
- After all moves have been considered, the final value of α is the value of the best move
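The steps above can be sketched as the standard alpha-beta procedure. Same assumption as the earlier minimax sketch: an explicit nested-list tree with numeric leaves; the test `alpha >= beta` implements the "remaining replies are not examined" step.

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Same value as plain minimax, but stop expanding a node's children
    as soon as alpha >= beta: the remaining successors cannot matter."""
    if isinstance(node, (int, float)):          # terminal: its utility
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)           # best (highest) choice so far for MAX
            if alpha >= beta:                   # MIN above would never allow this line
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)                 # best (lowest) choice so far for MIN
        if alpha >= beta:
            break
    return value
```

On the tree [[3, 12], [8, 2], [14, 5, 2]], once the second MIN node finds the reply worth 2 (below α = 3), its remaining replies are skipped.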

α-β Pruning (2)  If m is better than n for Player, we will never reach n in actual play, so the subtree below n can simply be pruned.

Analysis of α-β Pruning
- Does not affect the final result
- It is worthwhile to examine first the successor that is likely to be best
- With good move ordering, time complexity: O(b^(m/2))
  - Effective branching factor √b, i.e. about 6 rather than 35 for chess
- With random ordering, the total number of nodes examined is of the order O(b^(3m/4))
- A significant improvement over the plain minimax algorithm

Transposition table
- A form of dynamic programming
- Multiple move orders can reach the same position (a transposition)
- Savings through memoization: use a hash table of already-evaluated positions
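A minimal sketch of such a table, assuming dictionary-based positions and a stand-in evaluation function; the key construction and the evaluation counter are illustrative only, not from the slides.

```python
class TranspositionTable:
    """Memoize evaluated positions so a position reached by different
    move orders is scored only once."""

    def __init__(self, evaluate):
        self.evaluate = evaluate      # the (expensive) static evaluation
        self.table = {}               # position key -> cached value
        self.evaluations = 0          # how many real evaluations actually ran

    def lookup(self, position):
        key = tuple(sorted(position.items()))   # canonical, hashable form
        if key not in self.table:
            self.evaluations += 1
            self.table[key] = self.evaluate(position)
        return self.table[key]
```

Real engines use Zobrist hashing for the key and bound the table's size, but the caching idea is the same.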

Iterative Deepening
- Chess is often played under a strict time limit
- The depth of the search depends on the time available
- Run depth-limited searches with successively increasing depth (the positions are effectively visited in breadth-first order)
- Advantage: the program always knows which move was best at the previous, shallower level
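The scheme above can be sketched as repeated depth-limited searches against a deadline. Assumptions: the explicit nested-list tree of the earlier examples, a crude 0 estimate at the cutoff standing in for a real static evaluation, and invented names throughout.

```python
import time

def depth_limited_value(node, depth, maximizing=True):
    """Minimax restricted to `depth` plies."""
    if isinstance(node, (int, float)):
        return node
    if depth == 0:
        return 0                      # crude stand-in for a static evaluation
    values = [depth_limited_value(c, depth - 1, not maximizing) for c in node]
    return max(values) if maximizing else min(values)

def iterative_deepening(tree, time_limit=0.05, max_depth=10):
    """Deepen one ply at a time until the clock runs out, keeping the last
    completed result so an answer is always available when interrupted."""
    deadline = time.monotonic() + time_limit
    value = None
    for depth in range(1, max_depth + 1):
        if time.monotonic() >= deadline:
            break
        value = depth_limited_value(tree, depth)
    return value
```

The repeated shallow passes cost little extra, because each deeper pass dominates the total work; their results can also seed the move ordering that makes α-β pruning effective.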

Horizon Effect
- A problem with fixed-depth search: a significant event lying just beyond the search depth is invisible to the program
- Positive horizon effect
- Negative horizon effect

Quiescence Search
- Search until a "quiet" position is reached
- Quiet position: one where no pending move would change the current evaluation drastically
- Example: no capture of any piece, no check, no pawn promotions or threats
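A hedged sketch of the idea, in negamax form: at the depth cutoff, keep expanding only "noisy" successors (captures, checks) until the position is quiet, then let the static score stand. The state representation, `evaluate`, and `noisy_moves` are all invented stand-ins.

```python
def quiescence(state, evaluate, noisy_moves,
               alpha=float("-inf"), beta=float("inf")):
    """`evaluate` scores `state` for the side to move; `noisy_moves` yields
    only the successor states of unstable moves (captures, checks)."""
    stand_pat = evaluate(state)       # score if the side to move stops here
    if stand_pat >= beta:
        return beta                   # fail-hard cutoff
    alpha = max(alpha, stand_pat)
    for child in noisy_moves(state):
        # negamax: the child's score for the opponent, negated for us
        score = -quiescence(child, evaluate, noisy_moves, -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```

When `noisy_moves` returns nothing the position is quiet and the static evaluation is returned unchanged, which is exactly the behavior the slide describes.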

State of the art : DEEP BLUE

Defeats Garry Kasparov
- Won the match in 1997
- Brute-force computing power: a massive, parallel architecture with special-purpose hardware for chess
- The parameters of the evaluation function were learnt by studying many master games
- Different evaluation functions for different kinds of positions
- Made heavy use of large endgame databases

Humans vs. Computers

  Humans                       Computers
  Lower computational speed    Higher computational speed
  Errors possible              Error-free
  Tend to be instinctive       No instincts
  Imaginative                  Not imaginative
  High learning capabilities   Limited learning capabilities
  Inductive                    Not inductive

Some intricacies of a chess playing system
- The program should not repeat the same sequence of moves:
  - A player who wins a match against the computer could simply replay the same sequence of moves
  - Hence, a statistical (randomizing) element is required
- Otherwise the opponent can also learn the algorithm used by the computer; hence, again, the need for a statistical element
- Different game play is needed during different phases: start game, mid game, end game

Conclusion
- Computer chess as a search problem
- "Good enough" decisions rather than perfect play
- Simulation of "skill" by "knowledge"
- Limitations of computers compared to humans
- Future work:
  - Better evaluation functions through learning
  - Need for different AI techniques to play chess

References
- Claude E. Shannon: "Programming a Computer for Playing Chess," Philosophical Magazine, Ser. 7, Vol. 41, No. 314, March.
- S. Russell & P. Norvig: Artificial Intelligence: A Modern Approach, 2nd ed., Prentice Hall, ISBN-10:
- Wikipedia

Thank You