Intelligence for Games and Puzzles


Minimax to fixed depth

Where the game tree is too large to be exhaustively searched, the fixed-depth minimax algorithm is a start.
In chess, assuming around 10^3 possibilities per white-and-black pair of moves, and around 40 moves per player in a typical game, means around 10^(3x40) = 10^120 possible games.
Even at one game (choice) per nanosecond, this would take around 3x10^103 years.
 parallelism? Even a processor for every electron on Earth (roughly 10^51 of them) would leave around 10^52 years
 speedups? Even allowing generations of hardware speedups, the answer stays astronomical
 compare: the estimated age of the universe is only around 10^10 years

Minimax to fixed depth: the algorithm

Minimax (node Node, int Height, bool Maxing)
  if Height = 0 or no moves are possible from Node
  then Return Evaluation (Node)   /* Big values favour Maxer */
  else {real Temp, Score := if Maxing then -∞ else +∞;
        for each move M at node Node,
          {generate NewNode from Node & M;
           Temp := Minimax (NewNode, Height-1, not Maxing);
           Destroy NewNode;
           if Maxing then Score := max (Temp, Score)
           else Score := min (Temp, Score)}
        Return Score}

Even at 1 nanosecond per move, searching just from height 8 would take around 10^12 nanoseconds: around 20 minutes.
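The pseudocode above can be sketched in Python on a toy game tree. Nested lists stand for positions with moves and plain numbers for leaf evaluations; the tree, and the rule that an interior node cut off at the depth limit scores 0, are illustrative assumptions, not part of the original slides.

```python
import math

def evaluation(node):
    # Toy static evaluator (an assumption): a leaf is its own value;
    # an interior node cut off at the depth limit scores 0.
    return node if not isinstance(node, list) else 0

def minimax(node, height, maxing):
    """Fixed-depth minimax; big values favour the maximiser."""
    children = node if isinstance(node, list) else []
    if height == 0 or not children:
        return evaluation(node)
    score = -math.inf if maxing else math.inf
    for child in children:
        temp = minimax(child, height - 1, not maxing)
        score = max(temp, score) if maxing else min(temp, score)
    return score

# Height-2 tree: Max picks a branch, Min then picks the worst leaf for Max.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, 2, True))  # → 3
```

Min answers 3, 2 and 2 in the three branches, so Max's best guaranteed value is 3.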

Negamax

Negamax is a simple variation which treats both players alike. Though we may continue to call them Max and Min, this can be misleading at odd heights: both players seek to maximise the negation of the values at the level below.

Negamax (node Node, int Height)
  if Height = 0 or no moves possible from Node
  then Return Evaluation (Node)   /* From the perspective of the player to move! */
  else {real Temp, Score := -∞;
        for each move M at node Node,
          {generate NewNode from Node & M;
           Temp := -Negamax (NewNode, Height-1);
           Destroy NewNode;
           Score := max (Score, Temp)}
        Return Score}
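On the same toy tree (an illustrative assumption), negamax returns the same root value as minimax, provided leaf values are read from the perspective of the player to move there; here the leaves sit at even depth, so their values are already from Max's viewpoint.

```python
import math

def evaluation(node):
    # Toy static evaluator (an assumption), signed for the side to move.
    return node if not isinstance(node, list) else 0

def negamax(node, height):
    """Both players maximise the negation of the values one level down."""
    children = node if isinstance(node, list) else []
    if height == 0 or not children:
        return evaluation(node)
    score = -math.inf
    for child in children:
        score = max(score, -negamax(child, height - 1))
    return score

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(negamax(tree, 2))  # → 3, the same value minimax gives
```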

[Figure sequence: a worked negamax game tree, built up value by value across several slides, with levels labelled Maximiser's choice, Minimiser's choice, Maximiser's choice; the numeric node values were lost in transcription. Commentary from the slides: "This one is better for Minimiser, who is the player 'on the move'." "Every node bears the value to the player with a choice of moves from that node." A final slide asks: which nodes have good move ordering of the tree beneath?]

Iterative Deepening

Minimax (and Negamax) need to be told a depth to which the tree should be searched. This is an arbitrary parameter. Iterative deepening (also "progressive deepening") means using a depth-first search technique like Minimax and friends to a fixed depth and then, if time allows, repeating it at a greater depth, and then, if time allows, repeating it at a still greater depth, and so on.
Intuitively this seems crazy: it seems a waste to duplicate the work of the searches at lower depth limits.

Advantages of iterative deepening

1. Actually, very little work is wasted. If in chess there really are around 30-40 moves possible in each position, then searching to one extra ply involves generating about 30-40 times more positions than before. Wasting the work of a search to one less ply wastes only 1/30th to 1/40th of the work.
2. If there is time to perform a deeper search, fine, go ahead and do it; but if not, with iterative deepening you have a search result ready to play.
3. With Alpha-Beta search, coming up next, good move ordering gives much bigger savings than random move ordering. Iterative deepening reveals the Principal Variation up to depth N, which can be used to order moves up to depth N when searching to depth N+1; so it usually saves time overall!
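Point 2 can be sketched as a time-limited loop around the earlier negamax, again on the assumed toy tree; a real engine would also reorder moves at each pass using the principal variation found by the previous, shallower pass.

```python
import math
import time

def evaluation(node):
    # Toy static evaluator (an assumption for illustration).
    return node if not isinstance(node, list) else 0

def negamax(node, height):
    children = node if isinstance(node, list) else []
    if height == 0 or not children:
        return evaluation(node)
    return max(-negamax(child, height - 1) for child in children)

def iterative_deepening(node, max_height, seconds):
    """Search to depth 1, 2, 3, ..., keeping the deepest completed result,
    so a finished answer is always in hand when time runs out."""
    deadline = time.time() + seconds
    best = None
    for height in range(1, max_height + 1):
        best = negamax(node, height)
        if time.time() >= deadline:
            break
    return best

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(iterative_deepening(tree, 2, 1.0))  # → 3
```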

AlphaBeta pruning

Alpha-Beta always gives the same value to the top node of a game tree as Minimax (when given infinite initial bounds), typically at much less cost. It is an algorithm, not a heuristic. It achieves efficiency by recognising situations where searching part of the game tree could not alter the value higher up, and pruning away the useless branches.
Two parameters are maintained in a preorder traversal of a game tree:
 α - best score known to be achievable by the choosing player
 β - best score that can be hoped for by the choosing player
Why play a move demonstrably worse than another? (α for choice)
Why expect your opponent to let you off more lightly than possible? (β cutoff)

Alpha Beta algorithm, in NegaMax style

AlphaBeta (node Node, int Ht, real Achievable, real Hope)
  if Ht = 0 or no moves exist then Return Evaluation (Node)
  else {real Temp;
        for each move M at node Node,
          {generate NewNode from Node & M;
           Temp := -AlphaBeta (NewNode, Ht-1, -Hope, -Achievable);
           Destroy NewNode;
           If Temp >= Hope then Return Temp;
           Achievable := Max (Temp, Achievable)}
        Return Achievable}
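A Python rendering of the negamax-style alpha-beta above, on the same assumed toy tree; note that once Max is assured of 3 from the first branch, the later branches are cut off as soon as Min can hold them to 2 or less.

```python
import math

def evaluation(node):
    # Toy static evaluator (an assumption for illustration).
    return node if not isinstance(node, list) else 0

def alphabeta(node, height, achievable, hope):
    """Negamax-framed alpha-beta: 'Achievable' is alpha, 'Hope' is beta."""
    children = node if isinstance(node, list) else []
    if height == 0 or not children:
        return evaluation(node)
    for child in children:
        temp = -alphabeta(child, height - 1, -hope, -achievable)
        if temp >= hope:           # opponent will never allow this line:
            return temp            # beta cutoff
        achievable = max(temp, achievable)
    return achievable

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, 2, -math.inf, math.inf))  # → 3
```

In the second branch the leaves 4 and 6 are never visited: the leaf 2 already refutes that branch for Max.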

[Figure sequence: the alpha-beta algorithm traced step by step on the same game tree, starting from the window (-∞, +∞); each slide repeats the algorithm and shows the (Achievable, Hope) window narrowing as nodes are evaluated. The numeric details were lost in transcription. Commentary from the slides: "Maximiser previously did not know any way to get more than -∞, but now knows that at least -8 can be obtained." "For Minimiser, any value >= 8 is equivalent; there is no point finding out how much better than +8 can be obtained here, he won't be given the chance."]

Move Ordering

Alpha-Beta achieved negligible savings in that example. If moves are fairly accurately sorted, with the best move for each player usually the first considered, alpha-beta achieves much better pruning. In the best case it generates and evaluates, approximately, only the square root of the number of nodes that minimax does. (If the moves could be reliably sorted in best-first order, then no search would be needed: one could just pick the first move.)
In iterative deepening, the principal variation found by a shallower alpha-beta search can be used to order the very best moves. Very little memory is required for this. Using more memory, the moves at all interior nodes could be ordered.

[Figure sequence: move-ordering illustrations - an unordered negamax game tree; the same tree with the principal variation reordered at the 1st level; at the 2nd level (no change); at the 3rd level; and finally alpha-beta run on the ideally reordered negamax tree.]

Horizon Effect

When a minimax (or similar) search is carried out to a particular depth, the evaluation function is asked for a (numerical) assessment of the position.
In a tactically quiet position this may be fair enough: (for chess) no threats of capture, no checks on a king; the dust has settled, count up material and other advantages.
But in a tactically complex position it asks a lot of the evaluation function: for each possible capture, recaptures or unrelated captures or checks may follow; for each move defending a king, further dangers may await.
The main purpose of a minimax search is to consider the possible outcomes in a tactically complex position, thereby avoiding the need for knowledge-intensive, computationally expensive evaluation. Speed is prized in a static evaluation function.

Horizon Effect manifestations

A search that stops at a particular depth has no information about moves beyond that depth except what the static evaluation function returns. "What the static evaluator don't see, the search don't grieve over." The search compares the outcomes of different lines of play within its horizon. If one line of play results in a loss within the horizon, and another line of play results in a smaller loss within the horizon, the program will think the second line better. But this can amount merely to delaying the inevitable: making ineffectual sacrifices, pushing the bad news over the horizon while still sailing towards it.

Delaying the inevitable

In this chess position, with black to move, the loss of the bishop at a2 is inevitable. It can be delayed, however, by a pawn move to check the white king; the natural reply is KxP, after which another pawn can give check and delay the bishop capture even further. By losing pawns, black does not save the bishop; it just throws more pawns away.

Delaying the inevitable

In this chess position, with black to move, the loss of the queen at g7 is inevitable. It can be delayed quite a long time: five pawns in succession can each delay the capture by 2 plies. By losing pawns, black still does not save the queen; it just throws away pawns. A program playing black may sacrifice these pawns. A program playing white may not realise that the queen is doomed.

What lies over the horizon?

Quiescence Search is the accepted way to deal with this kind of problem. Upon reaching the limiting depth of a search, determine whether the position is quiet (usually, in chess, meaning no checks and no capture opportunities).
If the position is quiet, apply the static evaluation function.
If not, generate a partial* extra ply, considering only* capture moves, checks, and escape-from-check moves, and repeat the process (perhaps tolerating checks this time), continuing if necessary (this will terminate since, at worst, all pieces are eventually captured).
* these are lies in the case of Null-Move Quiescence Search

Selective Quiescence Search

Quiesce (node Node, real Achievable, real Hope)
 {real Temp, Score;
  Score := Evaluate (Node);
  if Score >= Hope then {return Score}
  else {for each interesting move M at node Node
         {generate NewNode from Node and M;
          Temp := -Quiesce (NewNode, -Hope, -Achievable);
          Destroy NewNode;
          if (Temp > Score) then {Score := Temp};
          if (Score > Achievable) then {Achievable := Score};
          if (Score >= Hope) then {break}}}
  return Score}
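A Python sketch of the Quiesce routine, on toy nodes of the form (static value, interesting successors). Evaluate and the generator of "interesting" moves (captures, checks, escapes) are game-specific, so both are stand-in assumptions here; the stand-pat score acts as a floor, since the side to move may decline to capture.

```python
import math

def quiesce(node, achievable, hope, evaluate, interesting_moves):
    """Selective quiescence search, negamax-framed (a sketch)."""
    score = evaluate(node)                    # stand-pat evaluation
    if score >= hope:
        return score                          # already enough: cutoff
    achievable = max(achievable, score)
    for child in interesting_moves(node):     # captures, checks, escapes only
        temp = -quiesce(child, -hope, -achievable, evaluate, interesting_moves)
        if temp > score:
            score = temp
        achievable = max(achievable, score)
        if score >= hope:
            break
    return score

# Toy nodes: (static value for the side to move, list of interesting successors)
evaluate = lambda n: n[0]
interesting = lambda n: n[1]
node = (0, [(-5, []), (3, [])])   # standing pat is worth 0; one capture wins 5
print(quiesce(node, -math.inf, math.inf, evaluate, interesting))  # → 5
```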

Null-move Heuristic

In α-β search the Null-Move heuristic is a valuable technique in its own right. It also bears on quiescence search, and the lies just uttered. It applies even in games, like chess, where null moves ("passes") are not legal! The idea is that you almost always do better by making a move than you would by allowing your opponent two moves in a row.
In Chess there are rare zugzwang positions where all moves are undesirable compared to the current situation; usually they occur when few pieces remain. In Go, players reach a stage where all further moves are either futile or counterproductive; passing is legal, and when both players pass the game ends.
By imagining what the opponent could do with two moves in a row, you get a lower bound on the value of your position. If this lower bound is still greater than β (Hope), you get an early and cheap β cutoff (since null-move generation costs virtually nothing).
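The early cutoff can be sketched as a variant of the negamax alpha-beta from earlier. The reduced-depth "pass" search with a minimal window and the depth reduction R are the standard ingredients; the toy tree, the 0-valued interior evaluator, and the choice R=2 are assumptions for illustration, and as the slide warns, the heuristic is unsound in zugzwang positions.

```python
import math

def evaluation(node):
    # Toy static evaluator (an assumption for illustration).
    return node if not isinstance(node, list) else 0

def alphabeta_null(node, height, achievable, hope, R=2):
    """Negamax alpha-beta with a null-move test (a sketch). Before trying
    real moves, the side to move 'passes': the same position is searched at
    reduced depth with a minimal window. If even giving the opponent a free
    move scores >= hope, cut off cheaply."""
    children = node if isinstance(node, list) else []
    if height == 0 or not children:
        return evaluation(node)
    if height > R and hope < math.inf:
        null_score = -alphabeta_null(node, height - 1 - R, -hope, -hope + 1, R)
        if null_score >= hope:
            return null_score          # early, cheap beta cutoff
    for child in children:
        temp = -alphabeta_null(child, height - 1, -hope, -achievable, R)
        if temp >= hope:
            return temp
        achievable = max(temp, achievable)
    return achievable

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta_null(tree, 2, -math.inf, math.inf))  # → 3
```

On a tree this shallow the null-move test never fires; its payoff comes in deeper searches with already-narrowed windows.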

Null Move in Quiescence Search

This position, with White to play, is not quiet: the White Knight can capture the Pawn. Then Black can recapture: bad for White. But White does not have to capture the Pawn. Generally, even though there may be many possible captures, it is legitimate and perhaps better that no capture takes place. A way to handle this possibility is to consider the evaluation of the null move along with captures, checks, and escapes.

The Shape of a Game Tree Fragment

Standard α-β is an algorithm which performs the same computation as minimax for a given tree, avoiding generating useless parts of that tree. Quiescence Search can be seen as a method for defining the shape of a tree by means other than truncating it at a fixed depth.
Quiescence Search can be used as an evaluation function at the leaves of an α-β search (or any other, for that matter; even itself: Second Order Quiescence). It allows further search to be used as if it were a static evaluation function.
Null-Move Quiescence Search generates at least a fringe of null moves one ply beyond the normal fixed depth of the tree (though null moves are very cheap).

See

Don F. Beal (1990), "A Generalised Quiescence Search Algorithm", Artificial Intelligence 43, pp. 85-98.
Donald E. Knuth & Ronald W. Moore (1975), "An Analysis of Alpha-Beta Pruning", Artificial Intelligence 6, pp. 293-326.