Ch. 5 – Adversarial Search


Ch. 5 – Adversarial Search Supplemental slides for CSE 327 Prof. Jeff Heflin

Two-player Zero-Sum Games A game is formally defined by six components: S0 – the initial state Player(s) – whose turn it is in state s Actions(s) – the legal moves in state s Result(s, a) – the transition model: the state that results from taking action a in state s Terminal-Test(s) – true when the game is over Utility(s, p) – the numeric payoff to player p in terminal state s
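The six components above can be sketched in Python for Tic-Tac-Toe. This is a minimal sketch under an assumed representation (not given on the slides): a state is a 9-tuple of cells, each 'X', 'O', or None, with 'X' moving first.

```python
# Formal game components for Tic-Tac-Toe (assumed 9-tuple board representation)

S0 = (None,) * 9                    # S0: the empty board

def player(s):                      # Player(s): whose turn it is in s
    return 'X' if s.count('X') == s.count('O') else 'O'

def actions(s):                     # Actions(s): indices of empty cells
    return [i for i in range(9) if s[i] is None]

def result(s, a):                   # Result(s, a): transition model
    board = list(s)
    board[a] = player(s)
    return tuple(board)

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(s):
    for i, j, k in LINES:
        if s[i] is not None and s[i] == s[j] == s[k]:
            return s[i]
    return None

def terminal_test(s):               # Terminal-Test(s): win or full board
    return winner(s) is not None or all(c is not None for c in s)

def utility(s, p):                  # Utility(s, p): +1 win, -1 loss, 0 draw
    w = winner(s)
    return 0 if w is None else (1 if w == p else -1)
```

Note that Player, Actions, and Result together generate the full game tree from S0; Terminal-Test and Utility are only consulted at the leaves.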

Tic-Tac-Toe Transition Model [Figure: from a board where it is O's turn, each legal move (O to top-left, O to top-center, O to top-right, O to bottom-center) maps through Result(s, a) to a distinct successor board.]

Minimax Algorithm

function Minimax-Decision(state) returns an action
   return argmax_{a ∈ Actions(state)} Min-Value(Result(state, a))

function Max-Value(state) returns a utility value
   if Terminal-Test(state) then return Utility(state)
   v ← −∞
   for each a in Actions(state) do
      v ← Max(v, Min-Value(Result(state, a)))
   return v

function Min-Value(state) returns a utility value
   if Terminal-Test(state) then return Utility(state)
   v ← +∞
   for each a in Actions(state) do
      v ← Min(v, Max-Value(Result(state, a)))
   return v

From Figure 5.3, p. 166
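The pseudocode translates nearly line-for-line into Python. The sketch below runs it on a small hypothetical game tree (the tree itself and the toy Terminal-Test/Utility/Actions/Result helpers are illustrative assumptions, not part of the slides); internal nodes alternate MAX and MIN, and leaves carry utilities for MAX.

```python
import math

# Hypothetical game tree: 'A' is a MAX node, 'B'/'C'/'D' are MIN nodes,
# and integers are terminal states whose value is their utility for MAX.
GAME_TREE = {
    'A': ['B', 'C', 'D'],
    'B': [3, 12, 8],
    'C': [2, 4, 6],
    'D': [14, 5, 2],
}

def terminal_test(s):  return isinstance(s, int)
def utility(s):        return s
def actions(s):        return GAME_TREE[s]
def result(s, a):      return a   # in this toy tree, an action IS the child

def max_value(state):
    if terminal_test(state):
        return utility(state)
    v = -math.inf
    for a in actions(state):
        v = max(v, min_value(result(state, a)))
    return v

def min_value(state):
    if terminal_test(state):
        return utility(state)
    v = math.inf
    for a in actions(state):
        v = min(v, max_value(result(state, a)))
    return v

def minimax_decision(state):
    # MAX picks the action whose MIN-subtree has the highest backed-up value
    return max(actions(state), key=lambda a: min_value(result(state, a)))
```

On this tree the three MIN nodes back up values 3, 2, and 2, so the root's minimax value is 3 and Minimax-Decision chooses the move to 'B'.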

Utility-Based Agent [Figure: sensors tell the agent "what the world is like now"; models of "how the world evolves" and "what my actions do" project "what it will be like if I do action A"; a utility function answers "how happy will I be in such a state"; the agent then decides "what action I should do now" and executes it through actuators.]
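The decision step of that architecture can be sketched in a few lines. Everything here is an illustrative assumption: a toy one-number world, a made-up transition model `project` standing in for "what my actions do", and a made-up `utility` preferring states near 10.

```python
def project(state, action):
    # "What it will be like if I do action A" (toy transition model)
    return state + action

def utility(state):
    # "How happy will I be in such a state" (toy preference: be near 10)
    return -abs(state - 10)

def choose_action(state, available_actions):
    # "What action I should do now": maximize utility of the projected state
    return max(available_actions, key=lambda a: utility(project(state, a)))
```

Minimax fits this template: for a game-playing agent, the projection is the game tree and the utility of a move is the backed-up value of the resulting subtree.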

Minimax with Cutoff Limit

function Minimax-Decision(state) returns an action
   return argmax_{a ∈ Actions(state)} Min-Value(Result(state, a), 0)

function Max-Value(state, depth) returns a utility value
   if Cutoff-Test(state, depth) then return Eval(state)
   v ← −∞
   for each a in Actions(state) do
      v ← Max(v, Min-Value(Result(state, a), depth + 1))
   return v

function Min-Value(state, depth) returns a utility value
   if Cutoff-Test(state, depth) then return Eval(state)
   v ← +∞
   for each a in Actions(state) do
      v ← Min(v, Max-Value(Result(state, a), depth + 1))
   return v
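A runnable sketch of the depth-limited version, using a hypothetical subtraction game as the domain (take 1 to 3 stones from a pile; whoever takes the last stone wins). The game, the depth limit, and the Eval heuristic (a pile that is a multiple of 4 is lost for the player to move) are all illustrative assumptions.

```python
import math

LIMIT = 2  # cutoff depth for the demo

def cutoff_test(pile, depth):
    # Cutoff-Test: stop at terminal states or at the depth limit
    return pile == 0 or depth >= LIMIT

def evaluate(pile, to_move):
    # Eval: heuristic value from MAX's viewpoint; a pile that is a
    # multiple of 4 (including 0) is losing for the player to move
    winning = pile % 4 != 0
    if to_move == 'MAX':
        return 1 if winning else -1
    return -1 if winning else 1

def max_value(pile, depth):
    if cutoff_test(pile, depth):
        return evaluate(pile, 'MAX')
    v = -math.inf
    for take in range(1, min(3, pile) + 1):
        v = max(v, min_value(pile - take, depth + 1))
    return v

def min_value(pile, depth):
    if cutoff_test(pile, depth):
        return evaluate(pile, 'MIN')
    v = math.inf
    for take in range(1, min(3, pile) + 1):
        v = min(v, max_value(pile - take, depth + 1))
    return v

def minimax_decision(pile):
    # MAX chooses the take that maximizes the backed-up (cutoff) value
    return max(range(1, min(3, pile) + 1),
               key=lambda take: min_value(pile - take, 1))
```

From a pile of 5, the agent takes 1 (leaving the losing pile 4); from 6 it takes 2. With a heuristic this accurate even LIMIT = 2 plays optimally, which is exactly the point of a good Eval: it substitutes for the search below the cutoff.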