Artificial Intelligence for Games and Puzzles: Artificially Narrow Alpha & Beta


Slide 1: Artificially Narrow Alpha & Beta

AlphaBeta cuts off the search of the rest of a node's children if/when it encounters one child whose minimaxed value exceeds β (hope). Reduced values of β result in more child nodes being cut off. Searching a tree, or any subtree, with a narrower window between α and β will cause more cutoffs: the search may be more efficient, but it may no longer return the right answer! Increased values of α become reduced values of β at the next deeper level, so increasing α results in more grandchildren being cut off.
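A minimal fail-soft negamax alpha-beta sketch of this point (the nested-list tree, names, and values here are illustrative, not from the slides). With the full window the root value is 5; with an artificially narrow window the search cuts off after the first child and returns only a bound.

```python
INF = float("inf")

def alphabeta(node, alpha, beta):
    """Fail-soft negamax alpha-beta over a nested-list game tree:
    ints are leaf evaluations (from the side to move), lists are nodes."""
    if isinstance(node, int):
        return node
    best = -INF
    for child in node:
        best = max(best, -alphabeta(child, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:   # a child's value reached beta: cut off
            break           # the remaining children are never searched
    return best

tree = [[3, 12], [2, 4], [14, 5]]
full = alphabeta(tree, -INF, INF)   # true minimax value: 5
narrow = alphabeta(tree, 0, 2)      # narrow window: fails high, returns 3
```

The narrow search stops at the first child's value of 3 (which already exceeds β = 2), so it reports 3 rather than the true value 5.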

Slide 2: Aspiration Search - at the root of a tree

If you are about to search a tree to some depth D, and you have a fair idea V_guess of what the minimax value is likely to be (you can use the result of a search to depth D-1), then you can search with the window (V_guess - Δ, V_guess + Δ) instead of (-∞, +∞).
If you are right, you will obtain the true value V_true with less search. The smaller Δ is, the less search you do, but the less likely you are to be right.
If you are wrong and the true value V_true ≤ V_guess - Δ (Fail Low): search again using the window (-∞, V_guess).
If you are wrong and the true value V_true ≥ V_guess + Δ (Fail High): search again using the window (V_guess, +∞).

Slide 3: Iterative-Deepening Aspiration Search Algorithm

{int guess = 0, depth = 1, delta = Pick_A_Number();
 while Resources_Available()
   {int score, alpha = guess - delta, beta = guess + delta;
    score := αβ(alpha, beta, depth);
    if score ≥ beta  /* Fail High */  then score := αβ(score, +∞, depth)
    else if score ≤ alpha  /* Fail Low */  then score := αβ(-∞, score, depth);
    guess := score;
    depth := depth + 1}
 return guess}
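A runnable Python sketch of the same loop, under my own illustrative assumptions: a nested-list toy tree, a crude `evaluate` for depth-0 cutoffs, and `delta = 5` standing in for Pick_A_Number().

```python
INF = float("inf")

def evaluate(node):
    """Crude static evaluation for non-terminal cut-off nodes:
    just take the first leaf below (purely illustrative)."""
    while not isinstance(node, int):
        node = node[0]
    return node

def alphabeta(node, alpha, beta, depth):
    """Depth-limited fail-soft negamax alpha-beta over a nested-list tree."""
    if isinstance(node, int):
        return node
    if depth == 0:
        return evaluate(node)
    best = -INF
    for child in node:
        best = max(best, -alphabeta(child, -beta, -alpha, depth - 1))
        alpha = max(alpha, best)
        if alpha >= beta:
            break
    return best

def aspiration_search(tree, max_depth, delta=5):
    """Iterative deepening: search each depth with the aspiration window
    (guess - delta, guess + delta), re-searching on fail high / fail low."""
    guess = 0
    for depth in range(1, max_depth + 1):
        alpha, beta = guess - delta, guess + delta
        score = alphabeta(tree, alpha, beta, depth)
        if score >= beta:                           # Fail High
            score = alphabeta(tree, score, INF, depth)
        elif score <= alpha:                        # Fail Low
            score = alphabeta(tree, -INF, score, depth)
        guess = score
    return guess

tree = [[3, 12], [2, 4], [14, 5]]
```

On this tree the depth-2 iteration fails high against its window and is re-searched, after which the guess settles on the true value.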

Slide 4: Null windows

If β = α + 1 (and if evaluations return integers, not reals), αβ search will always either fail high or fail low. Such a search can be thought of as answering the boolean question: is V_true ≤ V_guess? Such a search is used in NegaScout / PVS to confirm cheaply that subsequent move choices are in fact inferior to the move believed to be best; but when subsequent moves are not in fact inferior, they must be searched again.
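As a sketch (the toy tree and function names are mine, not the slides'), a null-window probe reduces a full-valued search to a yes/no test:

```python
INF = float("inf")

def negamax(node, alpha, beta):
    """Fail-soft negamax alpha-beta over a nested-list tree (ints = leaves)."""
    if isinstance(node, int):
        return node
    best = -INF
    for child in node:
        best = max(best, -negamax(child, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:
            break
    return best

def exceeds(node, guess):
    """Null-window probe with beta = alpha + 1 (integer evaluations assumed):
    answers only the boolean question 'is the true value > guess?'."""
    return negamax(node, guess, guess + 1) > guess

tree = [[3, 12], [2, 4], [14, 5]]   # true value 5
```

The probe never yields the exact value, only which side of `guess` it lies on, which is exactly what PVS needs to confirm that a move is inferior.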

Slide 5: NegaScout / PVS algorithm

PVS(α, β, d)   /* α = -∞, β = +∞ at root node */
{if d = 0 then return evaluation(α, β)
 else
 {mk(1); score := -PVS(-β, -α, d-1); unmk(1);
  if score < β then
    for m from 2 to Move_Count()
      {LB := max(α, score); UB := LB + 1;
       mk(m); temp := -PVS(-UB, -LB, d-1);
       if (temp ≥ UB and temp < β) then temp := -PVS(-β, -temp, d-1);
       unmk(m);
       score := max(score, temp);
       if temp ≥ β then break}
  return score}
}
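A runnable Python rendering of this pseudocode over an explicit nested-list tree (an illustrative sketch: the tree representation replaces mk/unmk, and integer leaves stand in for evaluation(α, β)):

```python
INF = float("inf")

def pvs(node, alpha, beta):
    """Principal Variation Search (NegaScout): full-window search of the
    first child, null-window probes of the rest, re-search on fail high."""
    if isinstance(node, int):
        return node                        # leaf evaluation
    score = -pvs(node[0], -beta, -alpha)   # first move: full window
    if score < beta:
        for child in node[1:]:
            lb = max(alpha, score)
            ub = lb + 1
            temp = -pvs(child, -ub, -lb)            # cheap null-window probe
            if ub <= temp < beta:                   # not inferior after all:
                temp = -pvs(child, -beta, -temp)    # search again
            score = max(score, temp)
            if temp >= beta:
                break
    return score

tree = [[3, 12], [2, 4], [14, 5]]
```

On this tree the third child's probe fails high (it is better than the move believed best), triggering the wider re-search before the correct value is returned.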

Slide 6: NegaScout / PVS effect

(α = -20, β = +30.) Suppose a negamax search of the suppressed subtrees would give the indicated values for the child nodes; then the top node would have value +14.

Slides 7-15: NegaScout / PVS effect (continued)

[A sequence of tree diagrams stepping through the example with root window (α = -20, β = +30). The first child is searched with the full window (-30, +20) and returns +9; each remaining child is probed with the null window (-10, -9); a probe that fails high is re-searched with the widened window (-30, +12), establishing the root value of +14.]

Slide 16: The SCOUT algorithm

Takes to an extreme the idea that minimal-window αβ searches are cheap. Replaces general αβ with a divide-and-conquer approach: first guess midway in the range of possible values, say [-512, +512].
- Guess 0; use window [0, +1]. A result of, say, +24 means only that the true result exceeds 0.
  - Try again with window [+256, +257]. A result of, say, +24 means only that the true result is no more than 256.
    - Try again with window [+128, +129],
    - and so on, cutting the range in half each time,
    - eventually finding the N such that value > N-1 and value ≤ N.
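A sketch of this idea in Python, simplified to plain bisection over null-window tests rather than SCOUT's full TEST/EVAL machinery (the toy tree and names are illustrative):

```python
INF = float("inf")

def negamax(node, alpha, beta):
    """Fail-soft negamax alpha-beta (ints = leaf evaluations)."""
    if isinstance(node, int):
        return node
    best = -INF
    for child in node:
        best = max(best, -negamax(child, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:
            break
    return best

def bisect_value(node, lo=-512, hi=512):
    """Locate the exact (integer) value by halving [lo, hi] with
    minimal-window tests: each probe only asks 'is the value > mid?'."""
    while lo < hi:
        mid = (lo + hi) // 2
        if negamax(node, mid, mid + 1) > mid:   # true value exceeds mid
            lo = mid + 1
        else:                                   # true value is at most mid
            hi = mid
    return lo

tree = [[3, 12], [2, 4], [14, 5]]
```

Each probe is cheap, but without a transposition table the tree is re-traversed from scratch on every halving step.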

Slide 17: MTD(f) Algorithm

SCOUT's divide-and-conquer approach does not take advantage of the result of a previous iterative-deepening search to make a good first guess. MTD(f) does, then creeps toward the correct answer.

score := 0; d := 0;
while resources permit,
  {d := d + 1; LB := -∞; UB := +∞;
   while LB < UB
     {if score = LB then window := score else window := score - 1;
      score := αβ(window, window+1, d);
      if score ≤ window then UB := score else LB := score}
  }
return score
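The inner loop above, as runnable Python (illustrative: a fixed toy tree is searched in full rather than to depth d, and the names are my own):

```python
INF = float("inf")

def negamax(node, alpha, beta):
    """Fail-soft negamax alpha-beta (ints = leaf evaluations)."""
    if isinstance(node, int):
        return node
    best = -INF
    for child in node:
        best = max(best, -negamax(child, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:
            break
    return best

def mtdf(node, first_guess=0):
    """MTD(f): repeated null-window searches, each placed at the current
    bound, zeroing in on the exact value from the first guess."""
    score, lb, ub = first_guess, -INF, INF
    while lb < ub:
        window = score if score == lb else score - 1
        score = negamax(node, window, window + 1)
        if score <= window:
            ub = score      # failed low: score is an upper bound
        else:
            lb = score      # failed high: score is a lower bound
    return score

tree = [[3, 12], [2, 4], [14, 5]]
```

A closer first guess (e.g. from the previous iterative-deepening depth) means fewer null-window passes before the lower and upper bounds meet.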

Slide 18: Reflection on αβ variants

Aspiration Search, NegaScout, SCOUT, MTD(f): all these variations on αβ may involve repeating searches with different choices of α and β. This can make savings, but probably only if a transposition table is also used, to allow the results of previous partial searches to be reused. For chess, MTD(f) seems to need on average a few iterations to converge, and saves 5%-15% in terms of the number of nodes generated. [ref Schaeffer: ] αβ is fundamentally a depth-first tree search algorithm; Iterative Deepening gives an effect rather like breadth-first search.