Adversarial Search Chapter 5

Outline
Optimal decisions for deterministic, zero-sum games of perfect information: minimax
α-β pruning
Imperfect, real-time decisions: cut-off
Stochastic games

Games vs. search problems
"Unpredictable" opponent → specifying a move for every possible opponent reply
Time limits → unlikely to find goal, must approximate

Game tree (2-player, deterministic, turns): tic-tac-toe

Minimax
Perfect play for deterministic games
E.g., 2-ply game:

Minimax algorithm
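As a concrete illustration (not the original slide's pseudocode figure), here is a minimal Python sketch of minimax. It assumes a hypothetical GameState interface with is_terminal(), utility() (from MAX's point of view), moves(), and result(move):

```python
def minimax_decision(state):
    """Return the move for MAX with the highest minimax value (MIN replies next)."""
    return max(state.moves(), key=lambda m: min_value(state.result(m)))

def max_value(state):
    if state.is_terminal():
        return state.utility()
    return max(min_value(state.result(m)) for m in state.moves())

def min_value(state):
    if state.is_terminal():
        return state.utility()
    return min(max_value(state.result(m)) for m in state.moves())
```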

Properties of minimax
Complete? Yes (if tree is finite)
Optimal? Yes (against an optimal opponent)
Time complexity? O(b^m)
Space complexity? O(bm) (depth-first exploration)
For chess, b ≈ 35, m ≈ 100 for "reasonable" games → exact solution completely infeasible

More than two players
3 players: utility(s) is a vector <u1, u2, u3> representing the value of state s for p1, p2, p3
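A minimal sketch of this generalization (often called "max^n": each player maximizes its own component of the utility vector). It assumes the same hypothetical GameState interface as before, plus to_move() returning the index of the player to move and utility_vector() returning <u1, ..., un>:

```python
def maxn_value(state):
    """Return the utility vector of the outcome the player to move prefers."""
    if state.is_terminal():
        return state.utility_vector()          # e.g. <u1, u2, u3>
    player = state.to_move()                   # index of the player whose turn it is
    best = None
    for move in state.moves():
        value = maxn_value(state.result(move))
        if best is None or value[player] > best[player]:
            best = value                       # keep the whole vector, compare own component
    return best
```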

α-β pruning example (stepped through on a 2-ply tree in the original slides; figures not included in the transcript)

Properties of α-β
Pruning does not affect the final result
Good move ordering improves effectiveness of pruning
With "perfect ordering," time complexity = O(b^(m/2)) → doubles depth of search
A simple example of the value of reasoning about which computations are relevant (a form of metareasoning)

Why is it called α-β?
α is the value of the best (i.e., highest-value) choice found so far at any choice point along the path for max
If v is worse than α, max will avoid it → prune that branch
Define β similarly for min

The α-β algorithm (shown as pseudocode figures on the original slides; see the sketch below)
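A minimal Python sketch of α-β search, under the same hypothetical GameState interface as the minimax sketch above. The pruning tests correspond directly to the α and β definitions on the previous slide:

```python
import math

def alphabeta_decision(state):
    """Return the move chosen for MAX by α-β search."""
    best_move, alpha, beta = None, -math.inf, math.inf
    for move in state.moves():
        value = ab_min_value(state.result(move), alpha, beta)
        if value > alpha:
            alpha, best_move = value, move   # α: best value found so far for MAX
    return best_move

def ab_max_value(state, alpha, beta):
    if state.is_terminal():
        return state.utility()
    value = -math.inf
    for move in state.moves():
        value = max(value, ab_min_value(state.result(move), alpha, beta))
        if value >= beta:
            return value                     # MIN above would never allow this node: prune
        alpha = max(alpha, value)
    return value

def ab_min_value(state, alpha, beta):
    if state.is_terminal():
        return state.utility()
    value = math.inf
    for move in state.moves():
        value = min(value, ab_max_value(state.result(move), alpha, beta))
        if value <= alpha:
            return value                     # MAX above would never allow this node: prune
        beta = min(beta, value)
    return value
```

Trying the most promising moves first tightens α and β sooner, which is why good move ordering improves the effectiveness of pruning.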

Resource limits – imperfect real-time decisions
Suppose we have 100 secs, explore 10^4 nodes/sec → 10^6 nodes per move
Standard approach:
cutoff test: e.g., depth limit (perhaps add quiescence search)
evaluation function: Eval(s) = estimated desirability of position s

Cutting off search
H-Minimax is identical to Minimax except:
Terminal? is replaced by Cutoff?
Utility is replaced by Eval
Does it work in practice? b^m = 10^6, b = 35 → m ≈ 4
4-ply lookahead is a hopeless chess player!
4-ply ≈ human novice
8-ply ≈ typical PC, human master
12-ply ≈ Deep Blue, Kasparov
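A sketch of the substitution described above: depth-limited minimax where a Cutoff? test replaces Terminal? and Eval replaces Utility. The fixed depth limit and the Eval placeholder are illustrative assumptions; to_move() is assumed to return 'MAX' or 'MIN':

```python
def h_minimax(state, depth, limit):
    """Depth-limited minimax: Cutoff? replaces Terminal?, Eval replaces Utility."""
    if cutoff(state, depth, limit):
        return evaluate(state)
    values = [h_minimax(state.result(m), depth + 1, limit) for m in state.moves()]
    return max(values) if state.to_move() == 'MAX' else min(values)

def cutoff(state, depth, limit):
    # Simplest cutoff test: terminal state or a fixed depth limit.
    # (Quiescence search would refine this by not cutting off in "unquiet" positions.)
    return state.is_terminal() or depth >= limit

def evaluate(state):
    # Placeholder Eval: estimated desirability of the position, e.g. the
    # weighted feature sum sketched on the evaluation-functions slide below.
    return state.utility() if state.is_terminal() else 0.0
```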

Evaluation functions
For chess, typically a linear weighted sum of features:
Eval(s) = w1·f1(s) + w2·f2(s) + … + wn·fn(s)
e.g., w1 = 9 with f1(s) = (number of white queens) – (number of black queens), etc.
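As a sketch, such a weighted feature sum could look like the following. The features, weights, and the piece_count helper are illustrative assumptions, not values from a real engine; this could serve as the Eval used by the cutoff search sketched above:

```python
# Illustrative material features: white count minus black count for a few piece types.
# piece_count(state, piece, color) is a hypothetical helper, not a real library call.
def features(state):
    return [
        piece_count(state, 'queen', 'white') - piece_count(state, 'queen', 'black'),  # f1
        piece_count(state, 'rook',  'white') - piece_count(state, 'rook',  'black'),  # f2
        piece_count(state, 'pawn',  'white') - piece_count(state, 'pawn',  'black'),  # f3
    ]

WEIGHTS = [9, 5, 1]  # e.g. w1 = 9 for the queen-difference feature, as on the slide

def evaluate(state):
    """Eval(s) = w1*f1(s) + w2*f2(s) + ... + wn*fn(s)"""
    return sum(w * f for w, f in zip(WEIGHTS, features(state)))
```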

How to get a good evaluation function
Get good features (patterns, counts of pieces, etc.)
Find good ways to combine the features (linear combination? non-linear combination?)
Machine learning could be used to learn:
new features
good ways (or good coefficients, like w1, …, wn) to combine the known features
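One simple way the coefficients could be learned (an illustrative sketch, not a method specified on the slides): fit w1, …, wn by least squares against target values such as game outcomes or deeper-search scores for a set of training positions.

```python
import numpy as np

def fit_weights(feature_matrix, targets):
    """Least-squares fit of w1..wn so that feature_matrix @ w ≈ targets.

    feature_matrix: shape (num_positions, num_features), row i = [f1(s_i), ..., fn(s_i)]
    targets:        shape (num_positions,), e.g. game outcomes or deep-search scores
    """
    weights, *_ = np.linalg.lstsq(feature_matrix, targets, rcond=None)
    return weights
```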

Search versus look-up
Game-playing programs often use table look-ups (instead of game-tree search) for:
Opening moves – good moves from game books and human expert experience
End-games (when there are only a few pieces left)
For chess: tables for all end-games with at most 5 pieces are available (online)
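A minimal sketch of the look-up idea; the table contents and the position_key() encoding are placeholders, and the fallback is the α-β search sketched earlier:

```python
# Hypothetical tables keyed by a canonical position string (e.g., a FEN-like encoding).
OPENING_BOOK = {
    # position_key -> book move, taken from game books / expert experience
}
ENDGAME_TABLE = {
    # position_key -> (perfect-play move, game-theoretic value), precomputed offline
}

def choose_move(state):
    key = state.position_key()              # hypothetical canonical encoding of the position
    if key in OPENING_BOOK:
        return OPENING_BOOK[key]            # look-up: no search needed in the opening
    if key in ENDGAME_TABLE:
        return ENDGAME_TABLE[key][0]        # look-up: perfect play in small endgames
    return alphabeta_decision(state)        # otherwise fall back to game-tree search
```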

Deterministic games in practice
Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. It used a precomputed endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 444 billion positions.
Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue searched 200 million positions per second, used a very sophisticated evaluation function, and used undisclosed methods for extending some lines of search up to 40 ply.
Othello: human champions refuse to compete against computers, which are too good.
Go: In Go, b > 300, and brute-force search is hopeless. In March 2016, the AlphaGo program by Google DeepMind won a match (4 to 1) against Lee Sedol, one of the world's best Go players. https://en.wikipedia.org/wiki/AlphaGo

Stochastic games Game of Backgammon

Game tree for Backgammon
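With chance nodes like the dice rolls in this tree, minimax generalizes to expectiminimax: a chance node's value is the probability-weighted average of its children's values. A minimal sketch, assuming the hypothetical GameState also exposes chance_outcomes() as (outcome, probability) pairs:

```python
def expectiminimax(state):
    """Value of a state in a game tree with MAX, MIN, and chance nodes."""
    if state.is_terminal():
        return state.utility()
    node = state.to_move()                  # 'MAX', 'MIN', or 'CHANCE' (hypothetical)
    if node == 'CHANCE':
        # e.g., a dice roll: probability-weighted average over the possible outcomes
        return sum(p * expectiminimax(state.result(outcome))
                   for outcome, p in state.chance_outcomes())
    values = [expectiminimax(state.result(m)) for m in state.moves()]
    return max(values) if node == 'MAX' else min(values)
```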

Order-preserving transform of leaf values could change the move selection
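A small illustrative example: suppose move A leads to a chance node with equally likely leaves 3 and 3 (expected value 3), while move B leads to equally likely leaves 1 and 4 (expected value 2.5), so A is chosen. The order-preserving relabeling 1→1, 3→3, 4→100 leaves A at 3 but raises B to 50.5, so B is now chosen. With chance nodes the magnitudes of the leaf values matter, not just their order, so the evaluation function must be a positive linear transformation of the expected utility of the position.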

Summary
Games are fun to work on!
They illustrate several important points about AI:
perfection is unattainable → must approximate
good idea to think about what to think about