CIS 350 – I Game Programming Instructor: Rolf Lakaemper.

Classic AI: Search Trees

Overview Search trees are the underlying technique of round-based games with a (very) limited number of moves per round, e.g. BOARDGAMES

Overview A search tree stores a certain game state (position) in each node; the children of a node hold the positions reachable by the possible moves of the single active player. Let's name player 1 'MAX' and player 2 'MIN'.

Search Trees [Figure: game tree with alternating levels MAX, MIN, MAX; edges labeled Min's move and Max's move]

Search Trees [Figure: MAX/MIN/MAX game tree] The basic idea: compute (all) possible moves and evaluate the result (the leaves)

Search Trees [Figure: MAX/MIN/MAX game tree with leaf values 5, 7, 3, 2] Now fill the tree bottom up, using the max. or min. values of the child nodes

Search Trees [Figure: MAX/MIN/MAX game tree] Now fill the tree bottom up, using the max. or min. values of the child nodes. A high value is good for MAX, so MIN would choose the min. move!

Search Trees [Figure: MAX/MIN/MAX game tree] Now fill the tree bottom up, using the max. or min. values of the child nodes. A high value is good for MAX, so MAX would choose the max. move!
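The bottom-up fill described above can be sketched in a few lines of Python. This is a minimal sketch, assuming the slide's example leaf values were 5, 7, 3 and 2; the tree encoding (a leaf is an int, an inner node a list of children) and the function name are mine, not the slides' notation.

```python
def minimax(node, maximizing):
    """node is either a leaf score (int) or a list of child nodes."""
    if isinstance(node, int):                      # leaf: static evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX to move at the root, MIN replies one level down.
tree = [[5, 7], [3, 2]]
print(minimax(tree, True))                         # MIN picks 5 and 2, MAX picks 5
```

With these leaf values MIN reduces the two subtrees to 5 and 2, and MAX then chooses 5, exactly the bottom-up fill the slide describes.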

Search Trees Problem: lots of nodes to evaluate. Example: CHESS has an average branching factor of 35, so a search to depth d must evaluate roughly 35^d positions (already about 1.5 million at depth 4).

Search Trees Optimizing tree search, idea 1: limit the search depth. The easiest idea, and the worst playing strength.

Search Trees Optimizing tree search, idea 2: alpha-beta pruning. A safe idea and a pure win!

Alpha beta pruning [Figure: MAX/MIN/MAX game tree] What happens at this point?

Alpha beta pruning [Figure: MAX/MIN/MAX game tree] Since 3 < 5, and evaluating the remaining children could only push this MIN node's value to ≤ 3, the node would NEVER be chosen by MAX

Alpha beta pruning [Figure: MAX/MIN/MAX game tree] We can stop searching this branch!

Alpha beta pruning Alpha value: the best value achievable for MAX so far, hence the maximum found so far. Beta value: the best value achievable for MIN so far, hence the minimum found so far. At a MIN level: compare the node's value V to alpha. If V ≤ alpha, MAX already has a better alternative elsewhere: pass the value to the parent node and BREAK. At a MAX level: compare V to beta. If V ≥ beta, MIN already has a better alternative elsewhere: pass the value to the parent node and BREAK.
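The cutoff rules above translate directly into code. A minimal sketch on the same toy tree encoding as before (leaf = int, inner node = list of children); the name alphabeta and the fail-soft style are my choices, not the slides':

```python
def alphabeta(node, alpha, beta, maximizing):
    """Alpha-beta search on a toy tree: leaf = int, inner node = list."""
    if isinstance(node, int):
        return node
    if maximizing:
        v = float('-inf')
        for child in node:
            v = max(v, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, v)
            if v >= beta:          # MIN has a better alternative elsewhere: cut off
                break
        return v
    v = float('inf')
    for child in node:
        v = min(v, alphabeta(child, alpha, beta, True))
        beta = min(beta, v)
        if v <= alpha:             # MAX has a better alternative elsewhere: cut off
            break
    return v

tree = [[5, 7], [3, 2]]
print(alphabeta(tree, float('-inf'), float('inf'), True))   # 5
```

On this tree the left subtree establishes alpha = 5; in the right MIN node the first leaf 3 satisfies V ≤ alpha, so the leaf 2 is never visited, exactly the cutoff from the example slide.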

Alpha beta pruning Alpha-beta pruning is a pure win, but its benefit depends heavily on the move ordering!

Improvements Further improvements: Quiescence search ('don't leave a mess' strategy). For evaluating the leaves at depth 0, instead of calling the evaluation function directly, a special search is run that follows only special moves (e.g. captures), to arbitrary depth, until a quiet position is reached. Guarantees, e.g., that the queen will not be captured right after a move at depth 0.
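A minimal quiescence-search sketch, under an assumed toy encoding: a position is a pair (static score for the side to move, list of positions reachable by captures only). The negamax sign-flipping and the "stand pat" bound are standard choices, not the slides' notation.

```python
def quiescence(pos, alpha, beta):
    """pos = (static_score, capture_children); scores are always from the
    viewpoint of the side to move in that position."""
    stand_pat, captures = pos
    if stand_pat >= beta:              # already good enough without capturing
        return beta
    alpha = max(alpha, stand_pat)
    for child in captures:             # follow only 'noisy' moves (captures)
        score = -quiescence(child, -beta, -alpha)   # negamax: flip sign and window
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

# Standing pat scores 0, but one capture reaches a quiet position worth -5
# for the opponent, i.e. +5 for us:
print(quiescence((0, [(-5, [])]), -1000, 1000))
```

Because only captures are generated, the extra tree below the horizon stays small while material left hanging at depth 0 is still resolved.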

Improvements Iterative deepening: First try depth n = 1. If time is left, try depth n + 1. Order the moves of depth n when trying depth n + 1! Since alpha-beta is order sensitive, this can speed up the search. Fills the available time and doesn't need a predefined depth parameter. Drawback: creates the same positions over and over, but…
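The deepening loop with the move re-ordering can be sketched as a small driver. This is a minimal sketch: the (move, subtree) list format and the search(subtree, depth) callback are assumptions for illustration, not an interface from the slides.

```python
def iterative_deepening(root_moves, max_depth, search):
    """root_moves: list of (move, subtree); search(subtree, depth) returns a
    score for MAX.  After each iteration the moves are re-sorted by score, so
    the order-sensitive alpha-beta sees the best candidates first next pass."""
    best = None
    for depth in range(1, max_depth + 1):
        scored = sorted(((search(sub, depth), move, sub)
                         for move, sub in root_moves), reverse=True)
        root_moves = [(move, sub) for _, move, sub in scored]
        best = scored[0][1]            # always have a best move if time runs out
    return best
```

In a real engine the loop would stop on a timer rather than at a fixed max_depth; the point is that a usable best move exists after every completed iteration.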

Improvements Example for multiply generated positions. Assumption (worst case): no alpha-beta pruning, branching factor 10.

Iteration   Steps     Total
1                10        10
2               110       120
3             1,110     1,230
4            11,110    12,340
5           111,110   123,450

A direct depth-5 search visits 111,110 positions; iterative deepening to depth 5 visits 123,450. 123,450 / 111,110 ≈ 1.11 => only 11% additional positions (worst case)
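The slide's worst-case arithmetic can be reproduced in two lines, assuming branching factor 10 and that each depth-d search visits 10 + 10^2 + … + 10^d positions:

```python
b, max_depth = 10, 5
single = sum(b**k for k in range(1, max_depth + 1))      # one direct depth-5 search
iterative = sum(sum(b**k for k in range(1, d + 1))       # depths 1..5 in turn
                for d in range(1, max_depth + 1))
print(single, iterative, iterative / single)             # 111110 123450 ~1.11
```

The ratio shrinks further as the branching factor grows, since the deepest iteration dominates the total even more.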

Improvements Improvement: Aspiration windows. Extension of iterative deepening. Basic idea: feed the alpha and beta values of the previous search into the current search. Assumption: the new value won't differ too much. Extend alpha and beta by a +/- window value.
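A minimal aspiration-window driver, assuming search(alpha, beta) is a fail-hard alpha-beta that clamps its result into [alpha, beta]; the window size 50 and the re-search policy are illustrative assumptions:

```python
def aspiration(search, prev_score, window=50):
    """Narrow the alpha-beta window around the previous iteration's score;
    re-search with an open bound if the true score falls outside it."""
    alpha, beta = prev_score - window, prev_score + window
    score = search(alpha, beta)
    if score <= alpha:                         # fail low: true score below window
        score = search(float('-inf'), beta)
    elif score >= beta:                        # fail high: true score above window
        score = search(alpha, float('inf'))
    return score
```

The narrow window makes cutoffs much more frequent; the price is an occasional re-search when the assumption "the new value won't differ too much" fails.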

Improvements Improvement: Null move forward pruning. Idea: if the fighter can't knock down the opponent even with a free shot, the position should be pretty good! Instead of evaluating the player's move, evaluate the opponent's move again (the free shot). If the value is still good enough, don't continue the search. Search this null-move subtree at a reduced depth (D − 2).
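The "free shot" test can be sketched inside a negamax alpha-beta. This is a minimal sketch with hypothetical helpers: evaluate(pos) scores pos for the side to move, children(pos) yields the positions after our moves, and make_null(pos) is the same position with the turn passed; none of them come from the slides.

```python
def search(pos, depth, alpha, beta, evaluate, children, make_null, R=2):
    """Negamax alpha-beta with null-move forward pruning (reduction R)."""
    if depth <= 0:
        return evaluate(pos)
    if depth > R:
        # The free shot: let the opponent move again, searched only to
        # depth - 1 - R (the slide's D - 2 for R = 1, here R = 2).
        null = -search(make_null(pos), depth - 1 - R, -beta, -beta + 1,
                       evaluate, children, make_null, R)
        if null >= beta:       # still above beta even after a free shot: prune
            return beta
    best = float('-inf')
    for child in children(pos):
        best = max(best, -search(child, depth - 1, -beta, -alpha,
                                 evaluate, children, make_null, R))
        alpha = max(alpha, best)
        if alpha >= beta:
            break
    return best
```

The reduced depth keeps the extra null-move search cheap; the technique is unsound in zugzwang-like positions where passing is actually the best "move", which is why real engines restrict when it is applied.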