Game playing: types of games, deterministic vs. chance

Game playing
Types of games: deterministic vs. chance; perfect vs. imperfect information.
Game playing is an active area of research. Why? Clear criteria for success, interesting and hard problems, and it's fun.
A typical game: two players, zero-sum, players alternate moves, perfect information, no chance. Examples?

Classic problem: N queens
Can queens be placed on a chess board so that no two queens attack each other? It is easy to place two queens; what about 8 queens? Make the board N x N and this is the N queens problem.
Place one queen per column. How many different rows might be tried per column?
Backtracking: use the "current" row in a column; if it works, try the next column; if it fails, back up and try the next row.

Backtracking idea with N queens
Try to place a queen in each column in turn. Try the first row in column C; if that works, move on to the next column. If the rest of the board is solved, great; otherwise try the next row in column C, place the queen there, and move on to the next column. The previously placed queen must be unplaced to keep going.
What happens when we start in a column; where do we start? If we fail, we move back to the previous column (which remembers where it is and where it failed). When starting in a column anew, start at the beginning; when backing up, try the next location, not the beginning.
Backtracking in general: record an attempt and go forward; if going forward fails, undo the record and back up.

Basic ideas in backtracking search
We need to be able to enumerate all possible choices/moves. We try these choices in order, committing to a choice; if the choice doesn't pan out we must undo it. This is the backtracking step, so choices must be undoable.
The process is inherently recursive, so we need to know when the search finishes: when all columns have been tried in N queens, when we have found the exit in a maze, when every possible move has been tried in tic-tac-toe or chess? Is there a difference between these games?
Summary: enumerate choices, try a choice, undo a choice. This is brute-force search: try everything.
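As a self-contained illustration of the enumerate / try / undo pattern, here is a sketch on a made-up problem (find a subset of values that sums to a target); the names and the problem are hypothetical, not part of the course code.

    // Minimal sketch of generic backtracking: enumerate choices, commit, undo.
    #include <iostream>
    #include <vector>

    bool findSubset(const std::vector<int>& values, size_t index, int target,
                    std::vector<int>& chosen)
    {
        if (target == 0) return true;              // search finished: solution found
        if (index == values.size()) return false;  // no more choices to enumerate
        // Choice 1: take values[index]
        chosen.push_back(values[index]);           // commit to a choice
        if (findSubset(values, index + 1, target - values[index], chosen)) return true;
        chosen.pop_back();                         // choice didn't pan out: undo it
        // Choice 2: skip values[index]
        return findSubset(values, index + 1, target, chosen);
    }

    int main()
    {
        std::vector<int> values{8, 6, 7, 5, 3};
        std::vector<int> chosen;
        if (findSubset(values, 0, 11, chosen)) {
            for (int v : chosen) std::cout << v << " ";   // prints "8 3"
            std::cout << "\n";
        }
    }

The push_back commits to a choice, the recursive call goes forward, and the pop_back is the backtracking step that undoes the record.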

N queens backtracking: nqueens.cpp

    bool Queens::SolveAtCol(int col)
    // pre:  queens placed at columns 0,1,...,col-1
    // post: returns true if a queen can be placed in column col
    //       and the N queens problem is solved (N is the square board size)
    {
        int k;
        int rows = myBoard.numrows();
        if (col == rows) return true;
        for (k = 0; k < rows; k++) {
            if (NoQueensAttackingAt(k, col)) {
                myBoard[k][col] = true;       // place a queen
                if (SolveAtCol(col + 1)) {
                    return true;
                }
                myBoard[k][col] = false;      // unplace the queen
            }
        }
        return false;
    }
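The helper NoQueensAttackingAt is named on the slide but not shown. A plausible stand-alone sketch, assuming the board is a grid of booleans where true marks a queen and columns are filled left to right (this is a guess at the helper, not the actual nqueens.cpp):

    #include <vector>

    // Hypothetical stand-alone attack test: board[r][c] == true means a queen
    // sits at row r, column c. Only columns 0..col-1 are occupied, so only the
    // row and the two left-going diagonals need checking.
    bool noQueensAttackingAt(const std::vector<std::vector<bool>>& board,
                             int row, int col)
    {
        int rows = static_cast<int>(board.size());
        for (int c = 0; c < col; c++) {
            int dist = col - c;
            if (board[row][c]) return false;                              // same row
            if (row - dist >= 0   && board[row - dist][c]) return false;  // "/" diagonal
            if (row + dist < rows && board[row + dist][c]) return false;  // "\" diagonal
        }
        return true;
    }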

Computer v. Human in Games
Computers can explore a large search space of moves quickly. How many moves are possible in chess, for example? Computers cannot explore every move (why not?), so they must use heuristics: rules of thumb about position, strategy, and board evaluation. Try a move, undo it, try another, and track the best move.
What do humans do well in these games? What about computers? What about at Duke?

Backtracking, minimax, game search
We'll use tic-tac-toe to illustrate the idea, though it is too simple a game to show the real power of the method. What games might be better? What problems would they raise?
Minimax idea: two players, one maximizes the score and the other minimizes it; search the complete or partial game tree for the best possible move. In tic-tac-toe we can search to the end of the game, but this isn't possible in general. Why not? Instead, use static board evaluation functions rather than searching all the way until the game ends.
Minimax leads to alpha-beta search, and then to other rules and heuristics.
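As an example of a static board evaluation function, here is a hedged sketch for a chess-like game that simply counts material; the piece letters, values, and the staticEval name are illustrative assumptions, not part of ttt.cpp or the course code.

    #include <cctype>
    #include <string>
    #include <unordered_map>

    // Hedged sketch of a static board evaluator (illustrative only, not course
    // code). Pieces are given as letters; uppercase belongs to the maximizing
    // side, lowercase to the minimizing side. Score is material balance.
    int staticEval(const std::string& pieces)
    {
        static const std::unordered_map<char, int> value = {
            {'P', 100}, {'N', 320}, {'B', 330}, {'R', 500}, {'Q', 900}
        };
        int score = 0;
        for (char c : pieces) {
            char piece = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
            auto it = value.find(piece);
            if (it == value.end()) continue;   // kings, empty squares, etc.: ignore
            score += std::isupper(static_cast<unsigned char>(c)) ? it->second : -it->second;
        }
        return score;   // positive favors the maximizer, negative the minimizer
    }

A real evaluator would also weigh position, mobility, and so on; the point is only that the function scores the board as it stands instead of searching further.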

Minimax for tic-tac-toe (see ttt.cpp)
Players alternate; one might be the computer and one a human (or both might be computer players). Simple rules: a win scores +10, a loss scores -10, a tie is zero. X maximizes, O minimizes. Assume the opponent plays smart; what happens otherwise?
As the game tree is explored, is there redundant search? What can we do about this?
[Slide shows a partially filled tic-tac-toe board with one O and two Xs.]
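A hedged sketch of the "game over" check that this scoring implies, assuming the board is stored as nine characters ('X', 'O', or '.'); the gameOverScore name and layout are illustrative, not the actual ttt.cpp.

    #include <array>
    #include <optional>

    // Hedged sketch (not the actual ttt.cpp): returns +10 if X has won, -10 if
    // O has won, 0 for a tie, and no value if the game is still in progress.
    std::optional<int> gameOverScore(const std::array<char, 9>& board)
    {
        static const int lines[8][3] = {
            {0,1,2}, {3,4,5}, {6,7,8},   // rows
            {0,3,6}, {1,4,7}, {2,5,8},   // columns
            {0,4,8}, {2,4,6}             // diagonals
        };
        for (const auto& line : lines) {
            char a = board[line[0]];
            if (a != '.' && a == board[line[1]] && a == board[line[2]]) {
                return a == 'X' ? +10 : -10;   // three in a row: win or loss
            }
        }
        for (char c : board) {
            if (c == '.') return std::nullopt; // empty square left: not over yet
        }
        return 0;                              // board full, no winner: tie
    }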

Backtracking/minimax from ttt.cpp

    int Game::bestMove(Board::Player p, int & move)
    {
        // check for game over or too deep in search first
        int best = (p == Board::X ? COMPUTER_WIN : HUMAN_WIN);  // sentinel score for p
        int score;
        int dontCareMove;
        int k;
        for (k = 0; k < myBoard.size(); k++) {
            if (myBoard.isClear(k)) {                  // can we move here?
                myBoard.place(k, p);                   // try the move
                score = bestMove(opposite(p), dontCareMove);
                myBoard.unplace(k);                    // undo it (backtrack)
                if (scoreIsBetter(score, best, p)) {   // best score for p so far?
                    best = score;
                    move = k;
                }
            }
        }
        return best;
    }
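The helpers opposite and scoreIsBetter are called above but not shown on the slide. A minimal sketch of what they might look like, assuming X maximizes (toward +10), O minimizes (toward -10), and best starts at the worst possible score for player p; these bodies are guesses, not the actual ttt.cpp.

    // Hedged sketches of helpers used by bestMove; not the actual ttt.cpp code.
    Board::Player Game::opposite(Board::Player p)
    {
        return p == Board::X ? Board::O : Board::X;
    }

    bool Game::scoreIsBetter(int score, int best, Board::Player p)
    {
        // X prefers larger scores, O prefers smaller ones.
        return p == Board::X ? score > best : score < best;
    }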

Caching or Memoization
In tic-tac-toe, do we see the same board more than once?
[Slide shows partially filled tic-tac-toe boards illustrating the same position reached by different move orders.]
What are the repercussions in terms of the search tree? Does avoiding repeated search result in significant savings? How can we easily do this? Hint: maps!
The lesson applies more widely: more storage results in lower runtime, a general tradeoff. Can we have too much of a good thing?
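A hedged sketch of the map idea, assuming a hypothetical minimaxScore function and boards encoded as nine-character strings; this is illustrative, not the course's ttt.cpp.

    #include <string>
    #include <unordered_map>

    // Hedged sketch of memoizing search results with a map (illustrative only).
    int minimaxScore(const std::string& board, char toMove);   // assumed to exist

    static std::unordered_map<std::string, int> scoreCache;

    int cachedScore(const std::string& board, char toMove)
    {
        std::string key = board + toMove;        // include whose turn it is
        auto it = scoreCache.find(key);
        if (it != scoreCache.end()) {
            return it->second;                   // same board seen before: no search
        }
        int score = minimaxScore(board, toMove); // otherwise search as usual
        scoreCache[key] = score;                 // trade storage for runtime
        return score;
    }

This is exactly the storage-for-time tradeoff on the slide; for games much bigger than tic-tac-toe the cache itself can become too large to keep.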

Heuristics
We can prune the search (see alpha-beta), but the world will still be too big: checkers has on the order of 10^40 states, chess on the order of 10^120.
A heuristic is a rule of thumb: it doesn't always work and isn't guaranteed to work, but it is useful in many or most cases. Search problems that are "big" can often be approximated or solved with the right heuristics.
Examples: checkers: Chinook; chess: Deep Blue; backgammon: TD-Gammon.
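Since the slide points to alpha-beta, here is a hedged, self-contained sketch of the pruning idea on an explicit game tree; the Node structure and alphaBeta name are illustrative assumptions, not code from Chinook, Deep Blue, or the course.

    #include <algorithm>
    #include <limits>
    #include <vector>

    // A Node is either a leaf with a static value, or an internal node with children.
    struct Node {
        int value = 0;                    // used only if children is empty
        std::vector<Node> children;
    };

    // Returns the minimax value of node; alpha/beta bound the scores the
    // maximizer/minimizer are already guaranteed elsewhere in the tree.
    int alphaBeta(const Node& node, int alpha, int beta, bool maximizing)
    {
        if (node.children.empty()) return node.value;          // leaf: static value
        if (maximizing) {
            int best = std::numeric_limits<int>::min();
            for (const Node& child : node.children) {
                best = std::max(best, alphaBeta(child, alpha, beta, false));
                alpha = std::max(alpha, best);
                if (alpha >= beta) break;                       // prune: min won't allow this
            }
            return best;
        } else {
            int best = std::numeric_limits<int>::max();
            for (const Node& child : node.children) {
                best = std::min(best, alphaBeta(child, alpha, beta, true));
                beta = std::min(beta, best);
                if (alpha >= beta) break;                       // prune: max won't allow this
            }
            return best;
        }
    }

Called on the root with alpha = numeric_limits<int>::min() and beta = numeric_limits<int>::max(), it returns the same value as plain minimax while skipping subtrees that cannot change the answer.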

Anita Borg, 1949-2003
"Dr. Anita Borg tenaciously envisioned and set about to change the world for women and for technology. … she fought tirelessly for the development of technology with positive social and human impact."
"Anita Borg sought to revolutionize the world and the way we think about technology and its impact on our lives."
Founded Systers in 1987 and the Institute for Women and Technology (http://www.iwt.org) in 1997.