Adversarial Search Chapter 5 Adapted from Tom Lenaerts’ lecture notes

Outline
What are games?
Optimal decisions in games
–Which strategy leads to success?
α-β pruning
Games of imperfect information
Games that include an element of chance

What are games, and why study them?
Games are a form of multi-agent environment
–What do other agents do, and how do they affect our success?
–Cooperative vs. competitive multi-agent environments
–Competitive multi-agent environments give rise to adversarial problems, a.k.a. games
Why study games?
–Fun; historically a major subject in AI
–Interesting subject of study because they are hard
–Easy to represent, and agents are restricted to a small number of actions

Relation of Games to Search
Search: no adversary
–Solution is a (heuristic) method for finding a goal
–Heuristics and CSP techniques can find the optimal solution
–Evaluation function: estimate of the cost from start to goal through a given node
–Examples: path planning, scheduling activities
Games: adversary
–Solution is a strategy (a strategy specifies a move for every possible opponent reply)
–Time limits force an approximate solution
–Evaluation function: evaluates the "goodness" of a game position
–Examples: chess, checkers, Othello, backgammon

Types of Games
Games are classified along two axes: deterministic vs. chance, and perfect vs. imperfect information. Chess and checkers are deterministic with perfect information; backgammon and monopoly add chance; bridge and poker add imperfect information.

Game setup
Two players: MAX and MIN. MAX moves first, and they take turns until the game is over. The winner receives a reward, the loser a penalty.
Games as search:
–Initial state: e.g. the board configuration of chess
–Successor function: list of (move, state) pairs specifying legal moves
–Terminal test: is the game finished?
–Utility function: gives a numerical value for terminal states, e.g. win (+1), lose (-1), draw (0)
MAX uses a search tree to determine its next move. A minimal interface sketch follows below.
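To make these four components concrete, here is a minimal Python sketch of the interface the later algorithms will assume; the class and method names (successors, terminal, utility) are hypothetical, chosen to mirror the slide's terminology.

class Game:
    """Hypothetical interface for a two-player, turn-taking game."""

    def initial_state(self):
        # e.g. the starting chess board configuration
        raise NotImplementedError

    def successors(self, state):
        # list of (move, state) pairs specifying legal moves
        raise NotImplementedError

    def terminal(self, state):
        # has the game finished?
        raise NotImplementedError

    def utility(self, state):
        # value of a terminal state for MAX: win +1, lose -1, draw 0
        raise NotImplementedError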

Partial Game Tree for Tic-Tac-Toe

Optimal strategies
Find the contingent strategy for MAX assuming an infallible MIN opponent.
Assumption: both players play optimally!
Given a game tree, the optimal strategy can be determined from the minimax value of each node:

MINIMAX-VALUE(n) =
  UTILITY(n)                                      if n is a terminal state
  max_{s ∈ successors(n)} MINIMAX-VALUE(s)        if n is a MAX node
  min_{s ∈ successors(n)} MINIMAX-VALUE(s)        if n is a MIN node

Two-Ply Game Tree
The minimax decision: minimax maximizes the worst-case outcome for MAX. (Figure: a two-ply tree with MAX to move at the root and MIN at the second level.)

What if MIN does not play optimally?
The definition of optimal play for MAX assumes MIN plays optimally: it maximizes the worst-case outcome for MAX. If MIN does not play optimally, MAX will do at least as well, and possibly better. (This can be proved.)

Minimax Algorithm

function MINIMAX-DECISION(state) returns an action
  inputs: state, current state in game
  v ← MAX-VALUE(state)
  return the action in SUCCESSORS(state) with value v

function MAX-VALUE(state) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← -∞
  for a, s in SUCCESSORS(state) do
    v ← MAX(v, MIN-VALUE(s))
  return v

function MIN-VALUE(state) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← +∞
  for a, s in SUCCESSORS(state) do
    v ← MIN(v, MAX-VALUE(s))
  return v
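The pseudocode translates directly to Python. A minimal runnable sketch, assuming the hypothetical Game interface from the "Game setup" slide (successors returning (move, state) pairs, terminal, utility):

import math

def minimax_decision(state, game):
    # Return the move whose resulting state has the highest minimax value.
    return max(game.successors(state),
               key=lambda pair: min_value(pair[1], game))[0]

def max_value(state, game):
    if game.terminal(state):
        return game.utility(state)
    v = -math.inf
    for _move, s in game.successors(state):
        v = max(v, min_value(s, game))
    return v

def min_value(state, game):
    if game.terminal(state):
        return game.utility(state)
    v = math.inf
    for _move, s in game.successors(state):
        v = min(v, max_value(s, game))
    return v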

Properties of minimax
Complete? Yes (if the tree is finite)
Optimal? Yes (against an optimal opponent)
Time complexity? O(b^m)
Space complexity? O(bm) (depth-first exploration)
For chess, b ≈ 35 and m ≈ 100 for "reasonable" games, so an exact solution is completely infeasible.

Multiplayer games
Games can involve more than two players. Single minimax values then become vectors with one utility per player; at each node, the player to move chooses the successor that maximizes its own component. (Figure: a three-player game tree for players A, B and C, with utility vectors such as (1,2,6) at the leaves.)

Problem of minimax search
The number of game states is exponential in the number of moves.
–Solution: do not examine every node ==> alpha-beta pruning
α = value of the best choice found so far at any choice point along the path for MAX
β = value of the best choice found so far at any choice point along the path for MIN
Revisit the example...

Alpha-Beta Example
Do depth-first search until the first leaf. Each node carries a range of possible values, initially [-∞, +∞]. Evaluating the first MIN node's leaves (3, 12, 8) fixes its value at 3, so the root's range narrows from [-∞, +∞] to [3, +∞].

Alpha-Beta Example (continued)
The second MIN node's first leaf evaluates to 2, so that node is worth at most 2: worse for MAX than the 3 already guaranteed. Its remaining successors are pruned (marked X in the figure).

Alpha-Beta Example (continued)
The third MIN node's first leaf evaluates to 14, giving the range [3, 14]; the next leaf, 5, narrows this to [3, 5]; the final leaf, 2, fixes the node's value at 2, again worse for MAX. The root's minimax value is therefore 3.

Alpha-Beta Algorithm

function ALPHA-BETA-SEARCH(state) returns an action
  inputs: state, current state in game
  v ← MAX-VALUE(state, -∞, +∞)
  return the action in SUCCESSORS(state) with value v

Alpha-Beta Algorithm

function MAX-VALUE(state, α, β) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← -∞
  for a, s in SUCCESSORS(state) do
    v ← MAX(v, MIN-VALUE(s, α, β))
    if v ≥ β then return v
    α ← MAX(α, v)
  return v

function MIN-VALUE(state, α, β) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← +∞
  for a, s in SUCCESSORS(state) do
    v ← MIN(v, MAX-VALUE(s, α, β))
    if v ≤ α then return v
    β ← MIN(β, v)
  return v
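Again as a minimal Python sketch under the same hypothetical Game interface as before (not a tuned implementation):

import math

def alpha_beta_search(state, game):
    # Return the move with the best backed-up value, tightening alpha as we go.
    best_move, alpha = None, -math.inf
    for move, s in game.successors(state):
        v = min_value(s, game, alpha, math.inf)
        if v > alpha:
            best_move, alpha = move, v
    return best_move

def max_value(state, game, alpha, beta):
    if game.terminal(state):
        return game.utility(state)
    v = -math.inf
    for _move, s in game.successors(state):
        v = max(v, min_value(s, game, alpha, beta))
        if v >= beta:          # MIN would never let play reach this node
            return v
        alpha = max(alpha, v)
    return v

def min_value(state, game, alpha, beta):
    if game.terminal(state):
        return game.utility(state)
    v = math.inf
    for _move, s in game.successors(state):
        v = min(v, max_value(s, game, alpha, beta))
        if v <= alpha:         # MAX already has a better option elsewhere
            return v
        beta = min(beta, v)
    return v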

General alpha-beta pruning
Consider a node n somewhere in the tree. If the player has a better choice (say, node m) either at the parent of n or at any choice point further up, then n will never be reached in actual play. Hence, once enough is known about n, it can be pruned.

Final Comments about Alpha-Beta Pruning
Pruning does not affect the final result; entire subtrees can be pruned.
Good move ordering improves the effectiveness of pruning. With "perfect ordering," the time complexity drops to O(b^(m/2)):
–an effective branching factor of √b!
–alpha-beta can look ahead twice as far as minimax in the same amount of time.
Repeated states are again possible.
–Store them in memory = transposition table (see the sketch below).
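A hedged sketch of the transposition-table idea for plain minimax: cache each position's backed-up value so transpositions are evaluated only once. Here game.key(state) is a hypothetical hashable encoding of the position; real engines use Zobrist hashing and store value bounds so the table stays correct under alpha-beta windows.

table = {}

def minimax_value(state, game, maximizing):
    key = (game.key(state), maximizing)
    if key in table:                      # transposition: reuse the cached value
        return table[key]
    if game.terminal(state):
        v = game.utility(state)
    else:
        values = [minimax_value(s, game, not maximizing)
                  for _move, s in game.successors(state)]
        v = max(values) if maximizing else min(values)
    table[key] = v
    return v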

Imperfect, real-time decisions
Minimax and even alpha-beta pruning require too many leaf-node evaluations, which may be impractical within a reasonable amount of time.
Shannon (1950) proposed:
–Cut off search earlier (replace TERMINAL-TEST by CUTOFF-TEST)
–Apply a heuristic evaluation function EVAL (replacing the utility function of alpha-beta)

Cutting off search
Change:
–if TERMINAL-TEST(state) then return UTILITY(state)
into:
–if CUTOFF-TEST(state, depth) then return EVAL(state)
This introduces a fixed depth limit, selected so that the time used will not exceed what the rules of the game allow. When the cutoff occurs, the heuristic evaluation is returned instead of the true utility.
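In code the change is a two-line edit to the alpha-beta sketch above; cutoff and eval are hypothetical methods standing in for CUTOFF-TEST and EVAL. The search is started with depth = 0.

import math

def max_value(state, game, alpha, beta, depth):
    # The only change from plain alpha-beta: stop at a fixed depth and
    # return a heuristic estimate instead of the true terminal utility.
    if game.cutoff(state, depth):      # replaces TERMINAL-TEST
        return game.eval(state)        # replaces UTILITY
    v = -math.inf
    for _move, s in game.successors(state):
        v = max(v, min_value(s, game, alpha, beta, depth + 1))
        if v >= beta:
            return v
        alpha = max(alpha, v)
    return v

def min_value(state, game, alpha, beta, depth):
    if game.cutoff(state, depth):
        return game.eval(state)
    v = math.inf
    for _move, s in game.successors(state):
        v = min(v, max_value(s, game, alpha, beta, depth + 1))
        if v <= alpha:
            return v
        beta = min(beta, v)
    return v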

Heuristic EVAL
Idea: produce an estimate of the expected utility of the game from a given position. Performance depends on the quality of EVAL.
Requirements:
–EVAL should order terminal nodes in the same way as UTILITY.
–The computation must not take too long.
–For non-terminal states, EVAL should be strongly correlated with the actual chance of winning.
EVAL is only reliable for quiescent states (those with no wild swings in value in the near future).

Heuristic EVAL example

Most evaluation functions compute separate scores for each feature, then combine them.
–Material value for chess: 1 for each pawn, 3 for each knight or bishop, 5 for each rook, and 9 for the queen.
–Other features: "good pawn structure," "king safety."
A weighted linear function (see the sketch below):
Eval(s) = w1·f1(s) + w2·f2(s) + … + wn·fn(s)
–fi: feature i of the state s
–wi: weight of fi
–This assumes the features are independent of one another (and of the rest of the state).
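As a minimal sketch, a material-only chess evaluation under a hypothetical state.count(player, piece) interface; the weights are the standard material values from the slide.

PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def material_eval(state):
    # Eval(s) = sum_i w_i * f_i(s), where f_i is MAX's count of piece i
    # minus MIN's count, and w_i is the piece's material value.
    return sum(weight * (state.count("MAX", piece) - state.count("MIN", piece))
               for piece, weight in PIECE_VALUES.items())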

Heuristic difficulties
A heuristic that only counts material can be misled: two positions may have identical piece counts, yet in one of them the opponent is about to capture the queen on the next move. (Figure: two such positions.)

Horizon effect
A fixed-depth search may believe it can avoid a bad outcome that is actually inevitable, e.g. using rook checks to push the opponent's queening move beyond the search horizon. (Figure: a position illustrating the effect.)

Games that include chance
In backgammon, the legal moves depend on the dice roll; for the roll shown, the possible moves are (5-10, 5-11), (5-11, 19-24), (5-10, 10-16) and (5-11, 11-16). The game tree therefore contains chance nodes in addition to MAX and MIN nodes.

Games that include chance
Each of the six doubles, [1,1] through [6,6], has probability 1/36; each of the other 15 distinct rolls has probability 1/18. With chance nodes we cannot calculate a definite minimax value, only an expected value.

Expected minimax value

EXPECTED-MINIMAX-VALUE(n) =
  UTILITY(n)                                                   if n is a terminal state
  max_{s ∈ successors(n)} EXPECTED-MINIMAX-VALUE(s)            if n is a MAX node
  min_{s ∈ successors(n)} EXPECTED-MINIMAX-VALUE(s)            if n is a MIN node
  Σ_{s ∈ successors(n)} P(s) · EXPECTED-MINIMAX-VALUE(s)       if n is a chance node

These equations can be backed up recursively all the way to the root of the game tree.
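A minimal recursive sketch, assuming the hypothetical Game interface plus node_type and probability methods for distinguishing and weighting chance nodes:

def expectiminimax(state, game):
    # Backed-up value of a node in a tree with MAX, MIN and chance nodes.
    if game.terminal(state):
        return game.utility(state)
    children = [s for _move, s in game.successors(state)]
    kind = game.node_type(state)           # "max", "min" or "chance"
    if kind == "max":
        return max(expectiminimax(s, game) for s in children)
    if kind == "min":
        return min(expectiminimax(s, game) for s in children)
    # Chance node: probability-weighted average of successor values.
    return sum(game.probability(s) * expectiminimax(s, game) for s in children)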

Position evaluation with chance nodes
With chance nodes, the move chosen can change when the evaluation values are rescaled, even if their order is preserved. Behavior is preserved only under a positive linear transformation of EVAL. (Figure: a MAX-DICE-MIN tree where rescaling the leaf values changes the best move.)

Games of imperfect information
E.g. card games, where the opponent's initial cards are unknown.
Idea: compute the minimax value of each action for every possible deal, then choose the action with the highest expected value over all deals.

Commonsense example
Day 1:
–Road A leads to a small heap of gold pieces.
–Road B leads to a fork: take the left fork and you'll find a mound of jewels; take the right fork and you'll be run over by a bus.
Day 2:
–Road A leads to a small heap of gold pieces.
–Road B leads to a fork: take the left fork and you'll be run over by a bus; take the right fork and you'll find a mound of jewels.
Day 3:
–Road A leads to a small heap of gold pieces.
–Road B leads to a fork: guess correctly and you'll find a mound of jewels; guess incorrectly and you'll be run over by a bus.
What should you do on Day 3?

Proper analysis
The intuition that the value of an action is the average of its values over all actual states is WRONG. On Days 1 and 2 you know which fork is safe, so Road B is best; on Day 3 you don't, and averaging over the two possible worlds wrongly suggests Road B is still fine, when the right choice is the guaranteed gold on Road A. With partial observability, the value of an action depends on the agent's information state, and one should collect more information during the game.

Summary
Games are fun (and dangerous).
They illustrate several important points about AI:
–Perfection is unattainable -> must approximate
–It is a good idea to think about what to think about
–Uncertainty constrains the assignment of values to states
Games are to AI as Grand Prix racing is to automobile design.