Artificial Intelligence
Chapter 6: Adversarial Search
Michael Scherger, Department of Computer Science, Kent State University

Games
Multiagent environment
Cooperative vs. competitive
– A competitive environment is one in which the agents' goals are in conflict
– Competitive environments give rise to adversarial search problems
Game theory
– A branch of economics
– Views any multiagent environment as a game, provided each agent's impact on the others is significant, regardless of whether the agents are competitive or cooperative

Properties of Games
Game theorists
– Deterministic, turn-taking, two-player, zero-sum games of perfect information
AI
– Deterministic
– Fully observable
– Two agents whose actions must alternate
– Utility values at the end of the game are equal and opposite: in chess, one player wins (+1) and the other loses (-1)
– It is this opposition between the agents' utility functions that makes the situation adversarial

Why Games?
Small, well-defined set of rules
Well-defined knowledge set
Easy to evaluate performance
Large search spaces
– Too large for exhaustive search
Fame and fortune
– e.g. chess and Deep Blue

Games as Search Problems
Games can be formulated as state-space search
– Each potential board or game position is a state
– Each possible move is an operator leading to another state
– The state space can be huge: a large branching factor (about 35 for chess) and deep terminal states (around 50 moves per player in chess)

Games vs. Search Problems
Unpredictable opponent
Solution is a strategy
– Specifies a move for every possible opponent reply
Time limits
– Unlikely to reach the goal, so the agent must approximate

Types of Games

                          Deterministic                    Chance
Perfect information       Chess, checkers, Go, Othello     Backgammon, Monopoly
Imperfect information     Bridge, poker, Scrabble, nuclear war

Example Computer Games
Chess – Deep Blue (world champion, 1997)
Checkers – Chinook (world champion, 1994)
Othello – Logistello
– Beginning, middle, and ending strategy
– Generally accepted that humans are no match for computers at Othello
Backgammon – TD-Gammon (top three)
Go – Goemate and Go4++ (weak amateur)
Bridge – Bridge Baron (1997), GIB (2000)
– Imperfect information
– Multiplayer, with two teams of two

Optimal Decisions in Games
Consider games with two players (MAX and MIN)
Initial state
– The board position, plus which player is to move
Successor function
– Returns a list of (move, state) pairs, each a legal move and the resulting state
Terminal test
– Determines when the game is over (at terminal states)
Utility function
– Also called an objective or payoff function; gives a numeric value for terminal states, e.g. (+1, -1) or (+192, -192)
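To make the formalization concrete, here is a minimal Python sketch of this game description; the Game class and its method names are illustrative assumptions, not code from the slides.

```python
# Hypothetical sketch of the two-player game formalization described above.
# Class and method names are illustrative, not taken from the slides.

class Game:
    """Abstract two-player, zero-sum game between MAX and MIN."""

    def initial_state(self):
        """Board position plus the player to move."""
        raise NotImplementedError

    def successors(self, state):
        """Return a list of (move, resulting_state) pairs for the legal moves."""
        raise NotImplementedError

    def terminal_test(self, state):
        """True when the game is over."""
        raise NotImplementedError

    def utility(self, state, player):
        """Numeric payoff of a terminal state for the given player, e.g. +1 / -1."""
        raise NotImplementedError
```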

Game Trees
The root of the tree is the initial state
– The next level is all of MAX's moves
– The next level after that is all of MIN's moves
– …
Example: tic-tac-toe
– The root has 9 blank squares (MAX to move)
– Level 1 has 8 blank squares (MIN to move)
– Level 2 has 7 blank squares (MAX to move)
– …
Utility function:
– A win for X is +1
– A win for O is -1
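As a concrete illustration of the successor function for tic-tac-toe, here is a small sketch; the 9-cell list representation of the board is an assumption made for the example.

```python
# Sketch: successors for tic-tac-toe, assuming the board is a list of 9 cells
# containing 'X', 'O', or None, and that it is `player`'s turn to move.
def tic_tac_toe_successors(board, player):
    moves = []
    for i, cell in enumerate(board):
        if cell is None:                 # each blank square is a legal move
            child = list(board)
            child[i] = player            # place the mark
            moves.append((i, child))     # (move, resulting state) pair
    return moves

# The root (MAX to move) has 9 successors, each of those (MIN to move) has 8, and so on.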

Game Trees (figure)

Minimax Strategy
Basic idea:
– Choose the move with the highest minimax value: the best achievable payoff against best play
– Choose moves that will lead to a win, even though MIN is trying to block
– MAX's goal is to reach +1; MIN's goal is to reach -1
Minimax value of a node N (its backed-up value):
– If N is terminal, use its utility value
– If N is a MAX node, take the maximum of its successors' values
– If N is a MIN node, take the minimum of its successors' values
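The backed-up value can be written directly as a recursion. This sketch assumes the hypothetical Game interface from earlier (terminal_test, utility, successors).

```python
# Sketch of the minimax (backed-up) value, using the assumed Game interface.
def minimax_value(game, state, maximizing):
    if game.terminal_test(state):
        return game.utility(state, 'MAX')        # +1 for a MAX win, -1 for a MIN win
    values = [minimax_value(game, child, not maximizing)
              for _, child in game.successors(state)]
    return max(values) if maximizing else min(values)   # MAX maximizes, MIN minimizes

def minimax_decision(game, state):
    # MAX picks the move whose resulting state (MIN to move) has the highest value.
    return max(game.successors(state),
               key=lambda ms: minimax_value(game, ms[1], maximizing=False))[0]
```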

Minimax Strategy (figure)

Minimax Algorithm (figure)

Properties of Minimax
Complete?
– Yes, if the tree is finite (chess, for example, has specific rules that guarantee this)
Optimal?
– Yes, against an optimal opponent; otherwise?
Time complexity
– O(b^m)
Space complexity
– O(bm), with depth-first exploration of the state space

Resource Limits
Suppose we have 100 seconds per move and can explore 10^4 nodes per second: that is roughly 10^6 nodes per move
Standard approach
– Cutoff test: e.g. a depth limit, possibly with quiescence search (keep searching positions whose values are still changing)
– Evaluation function: an estimate of a position's desirability, used in place of the exact utility

Evaluation Functions
Example, chess:
– The typical evaluation function is a weighted linear sum of features
– Eval(s) = w1 f1(s) + w2 f2(s) + … + wn fn(s)
– For example, w1 = 9 with f1(s) = (number of white queens) – (number of black queens), and so on
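A sketch of such a weighted linear evaluation; the piece weights are the usual material values and the counts mapping is an assumed board representation, not the slides' exact function.

```python
# Sketch: Eval(s) = w1*f1(s) + w2*f2(s) + ... + wn*fn(s) as a weighted material count.
# The `counts` argument is an assumed representation: (color, piece) -> number on the board.
WEIGHTS = {'queen': 9, 'rook': 5, 'bishop': 3, 'knight': 3, 'pawn': 1}

def linear_eval(counts):
    # f_i(s) = (number of white pieces of kind i) - (number of black pieces of kind i)
    return sum(w * (counts.get(('white', piece), 0) - counts.get(('black', piece), 0))
               for piece, w in WEIGHTS.items())

# Example: a position where White is up a queen for a rook scores 9 - 5 = +4.
print(linear_eval({('white', 'queen'): 1, ('black', 'rook'): 1}))  # 4
```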

Alpha-Beta Pruning
The problem with minimax search is that the number of game states it has to examine is exponential in the number of moves
Use pruning to eliminate large parts of the tree from consideration
– Alpha-beta pruning

Alpha-Beta Pruning
Recognize when a position can never be chosen in minimax, no matter what its children are
– Max(3, Min(2, x, y), …) is always ≥ 3
– Min(2, Max(3, x, y), …) is always ≤ 2
– We know this without knowing x and y!

Alpha-Beta Pruning
Alpha = the value of the best (highest-value) choice found so far for MAX
Beta = the value of the best (lowest-value) choice found so far for MIN
At a MIN node, prune the remaining successors once the node's value is Alpha or lower: MAX will never let play reach it
At a MAX node, prune the remaining successors once the node's value is Beta or higher: MIN will never let play reach it
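A sketch of minimax with these alpha-beta cutoffs added, again assuming the hypothetical Game interface from earlier.

```python
# Sketch: minimax with alpha-beta cutoffs, using the assumed Game interface.
def alphabeta_value(game, state, alpha, beta, maximizing):
    if game.terminal_test(state):
        return game.utility(state, 'MAX')
    if maximizing:
        value = float('-inf')
        for _, child in game.successors(state):
            value = max(value, alphabeta_value(game, child, alpha, beta, False))
            if value >= beta:              # MIN above would never allow this: prune
                return value
            alpha = max(alpha, value)      # best option for MAX found so far
        return value
    else:
        value = float('inf')
        for _, child in game.successors(state):
            value = min(value, alphabeta_value(game, child, alpha, beta, True))
            if value <= alpha:             # MAX above would never allow this: prune
                return value
            beta = min(beta, value)        # best option for MIN found so far
        return value
```

Called as alphabeta_value(game, state, float('-inf'), float('inf'), True), this returns the same value as plain minimax while usually examining far fewer nodes.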

Alpha-Beta Pruning Example (figures)

A Few Notes on Alpha-Beta
Effectiveness depends on the order of the successors (compare the middle vs. last node of the 2-ply example)
If we can evaluate the best successor first, search is O(b^(d/2)) instead of O(b^d)
This means that in the same amount of time, alpha-beta search can search twice as deep!

A Few More Notes on Alpha-Beta
Pruning does not affect the final result
Good move ordering improves the effectiveness of pruning
With "perfect ordering", time complexity is O(b^(m/2))
– This doubles the depth of search
– A program can then easily reach depth 8 and play good chess (an effective branching factor of about 6 instead of 35)
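One simple way to approximate "perfect ordering" is to sort successors by a cheap static evaluation before recursing into them. The sketch below assumes an evaluate function such as the linear evaluation sketched earlier.

```python
# Sketch: order successors by a cheap static evaluation before the alpha-beta
# recursion, so the likely-best moves are searched first. `evaluate` is assumed
# to be a heuristic such as the linear evaluation sketched earlier.
def ordered_successors(game, state, evaluate, maximizing):
    children = game.successors(state)
    # MAX wants high evaluations searched first, MIN wants low ones first.
    return sorted(children, key=lambda ms: evaluate(ms[1]), reverse=maximizing)
```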

Optimizing Minimax Search
Use alpha-beta cutoffs
– Evaluate the most promising moves first
Remember prior positions and reuse their backed-up values
– Transposition table (like the closed list in A*)
Avoid generating equivalent states (e.g. the 4 different first corner moves in tic-tac-toe)
But we still can't search a game like chess to the end!
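A sketch of the transposition-table idea: cache backed-up values keyed by position so that positions reached through different move orders are evaluated only once. The hashable encoding of a state is an assumption for the example.

```python
# Sketch: a transposition table that caches backed-up values by position,
# assuming states can be reduced to a hashable key (e.g. a tuple of board cells).
transposition_table = {}

def minimax_with_table(game, state, maximizing):
    key = (tuple(state), maximizing)          # assumed hashable encoding of the position
    if key in transposition_table:
        return transposition_table[key]       # reuse the previously backed-up value
    if game.terminal_test(state):
        value = game.utility(state, 'MAX')
    else:
        values = [minimax_with_table(game, child, not maximizing)
                  for _, child in game.successors(state)]
        value = max(values) if maximizing else min(values)
    transposition_table[key] = value
    return value
```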

Cutting Off Search
Replace the terminal test (end of game) with a cutoff test (don't search deeper)
Replace the utility function (win/lose/draw) with a heuristic evaluation function that estimates the result on the best path below this board
– As in A* search, good evaluation functions mean good results (and vice versa)
Replace the move generator with a plausible-move generator (don't consider "dumb" moves)
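Putting the cutoff test and the heuristic evaluation together gives a depth-limited variant of minimax; this sketch assumes a simple depth limit as the cutoff test and an evaluate function like the one sketched earlier.

```python
# Sketch: minimax with the terminal test replaced by a depth-based cutoff test
# and the utility replaced by a heuristic evaluation (both assumed, as above).
def h_minimax(game, state, depth, depth_limit, evaluate, maximizing):
    if game.terminal_test(state):
        return game.utility(state, 'MAX')
    if depth >= depth_limit:                  # cutoff test: don't search deeper
        return evaluate(state)                # estimate instead of exact utility
    values = [h_minimax(game, child, depth + 1, depth_limit, evaluate, not maximizing)
              for _, child in game.successors(state)]
    return max(values) if maximizing else min(values)
```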

Alpha-Beta Algorithm (figure)

Nondeterministic Games
In nondeterministic games, chance is introduced by dice or card shuffling
Simplified example: coin flipping

Nondeterministic Games (figure)

Algorithm for Nondeterministic Games
Expectiminimax gives perfect play
– Just like minimax, except it also has to handle chance nodes:
– if state is a MAX node then return the highest Expectiminimax-Value of Successors(state)
– if state is a MIN node then return the lowest Expectiminimax-Value of Successors(state)
– if state is a CHANCE node then return the (probability-weighted) average Expectiminimax-Value of Successors(state)
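A sketch of expectiminimax along these lines; the node_type and chance_successors methods, and the (probability, state) pairs at chance nodes, are assumptions made for the example.

```python
# Sketch of expectiminimax. Assumes the game reports each state's node type and
# that a chance node's successors come as (probability, child_state) pairs.
def expectiminimax(game, state):
    if game.terminal_test(state):
        return game.utility(state, 'MAX')
    kind = game.node_type(state)              # 'MAX', 'MIN', or 'CHANCE' (assumed method)
    if kind == 'MAX':
        return max(expectiminimax(game, child) for _, child in game.successors(state))
    if kind == 'MIN':
        return min(expectiminimax(game, child) for _, child in game.successors(state))
    # CHANCE node: probability-weighted average of the successors' values.
    return sum(p * expectiminimax(game, child) for p, child in game.chance_successors(state))
```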

Summary
Games are fun to work on! (and dangerous)
They illustrate several important points about AI
– Perfection is unattainable, so we must approximate
– It is a good idea to "think about what to think about"
– Uncertainty constrains the assignment of values to states
Games are to AI as the Grand Prix is to automobile design