Introduction to Cognitive Science, Lecture 17: Game-Playing Algorithms (November 10, 2009)

Decision Trees

Many classes of problems can be formalized as search problems in decision trees. We are going to look at two different applications of search trees: first, we will solve the n-queens problem using backtracking search; second, we will discuss how you can write a program that beats you at your favorite board game.

Decision Trees

Example I: The n-queens problem. How can we place n queens on an n × n chessboard so that no two queens can capture each other? A queen can move any number of squares horizontally, vertically, or diagonally. (In the slide's figure, the possible target squares of a queen Q are marked with an x.)

Decision Trees

Let us consider the 4-queens problem. Question: How many possible configurations of a 4 × 4 chessboard containing 4 queens are there? Answer: There are 16!/(12! · 4!) = (13 · 14 · 15 · 16)/(2 · 3 · 4) = 13 · 7 · 5 · 4 = 1820 possible configurations. Shall we simply try them out one by one until we encounter a solution? No, it is generally useful to think about a search problem more carefully and discover constraints on the problem's solutions. Such constraints can dramatically speed up the search process.
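
As a quick check of this count, here is a small illustrative Python snippet (not part of the lecture) that computes the same number both via the binomial coefficient and by brute-force enumeration:

```python
from math import comb
from itertools import combinations

# Number of ways to place 4 indistinguishable queens on 16 squares
print(comb(16, 4))                                 # 1820

# The same count by explicitly enumerating all board configurations
print(sum(1 for _ in combinations(range(16), 4)))  # 1820
```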

Decision Trees

Obviously, in any solution of the n-queens problem, there must be exactly one queen in each column of the board; otherwise, some column would contain two queens, which could capture each other. Therefore, we can describe the solution of this problem as a sequence of n decisions:

Decision 1: Place a queen in the first column.
Decision 2: Place a queen in the second column.
...
Decision n: Place a queen in the n-th column.

Backtracking in Decision Trees

There are problems that require us to perform an exhaustive search of all possible sequences of decisions in order to find the solution. We can solve such problems by constructing the complete decision tree and then finding a path from its root to a leaf that corresponds to a solution of the problem (breadth-first search often requires the construction of an almost complete decision tree). In many cases, the efficiency of this procedure can be dramatically increased by a technique called backtracking (depth-first search).

Backtracking in Decision Trees

Idea: Start at the root of the decision tree and move downwards, that is, make a sequence of decisions, until you either reach a solution or enter a situation from which no solution can be reached by any further sequence of decisions. In the latter case, backtrack to the parent of the current node and take a different path downwards from there. If all paths from this node have already been explored, backtrack to its parent. Continue this procedure until you find a solution or establish that no solution exists (there are no more paths to try out).

Backtracking in Decision Trees

[Figure: decision tree for the 4-queens problem, starting from the empty board and placing the 1st, 2nd, 3rd, and 4th queen level by level.]
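
The procedure sketched above can be written down compactly. The following is an illustrative Python sketch (not from the lecture) that places one queen per column and backtracks as soon as a placement conflicts with an earlier queen:

```python
def solve_n_queens(n, placed=()):
    """Return one solution as a tuple of row indices (one per column), or None."""
    col = len(placed)
    if col == n:                      # all n decisions made: solution found
        return placed
    for row in range(n):
        # Check the new queen against every queen placed in earlier columns.
        safe = all(row != r and abs(row - r) != col - c
                   for c, r in enumerate(placed))
        if safe:
            result = solve_n_queens(n, placed + (row,))
            if result is not None:    # a solution lies below this decision
                return result
    return None                       # dead end: backtrack to the parent decision

print(solve_n_queens(4))              # prints (1, 3, 0, 2)
```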

Two-Player Games with Complete Trees

Example II: Let us use similar search algorithms to write "intelligent" programs that play games against a human opponent. Just consider this extremely simple (and not very exciting) game:

At the beginning of the game, there are seven coins on a table.
One move consists of removing 1, 2, or 3 coins.
Player 1 makes the first move, then player 2, then player 1 again, and so on.
The player who removes all remaining coins wins.

Two-Player Games with Complete Trees

Let us assume that the computer has the first move. Then the game can be described as a series of decisions, where the first decision is made by the computer, the second one by the human, the third one by the computer, and so on, until all coins are gone. The computer wants to make decisions that guarantee its victory (in this simple game). The underlying assumption is that the human always finds the optimal move.

Two-Player Games with Complete Trees

[Figure: the complete game tree for the seven-coin game, with levels alternating between computer (C) and human (H) moves of 1, 2, or 3 coins.]
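
The outcome of this complete tree can also be computed directly. The sketch below (illustrative Python, not code from the slides) labels each number of remaining coins as a win or a loss for the player about to move, and then reads off the computer's best opening move:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(coins):
    """True if the player to move can force a win with this many coins left."""
    if coins == 0:
        return False      # the previous player removed the last coin and has won
    # A position is winning if some move leads to a losing position for the opponent.
    return any(not wins(coins - take) for take in (1, 2, 3) if take <= coins)

best = [take for take in (1, 2, 3) if not wins(7 - take)]
print(wins(7), best)      # True [3]: the computer should take 3 coins, leaving 4
```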

Two-Player Games with Complete Trees

The previous example showed how we can use a variant of backtracking to determine the computer's best move in a given situation and to prune the search tree, that is, avoid unnecessary computation. However, for more interesting games such as chess or Go, it is impossible to check every possible sequence of moves. The computer player then only looks ahead a certain number of moves and estimates the chance of winning after each possible sequence.

Two-Player Games

Therefore, we need to define a static evaluation function e(p) that tells the computer how favorable the current game position p is from its perspective. In other words, the value of e(p) will be positive if a position is likely to result in a win for the computer, and negative if it predicts its defeat. In any given situation, the computer will make a move that guarantees a maximum value for e(p) after a certain number of moves. Again, the opponent is assumed to make optimal moves.
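
A standard way to combine this bounded look-ahead with e(p) is depth-limited minimax. The sketch below is illustrative Python, not code from the lecture; the parameters moves, apply_move, and evaluate are hypothetical stand-ins for the game-specific functions:

```python
def minimax(position, depth, computer_to_move, moves, apply_move, evaluate):
    """Value of `position` when both players play optimally for `depth` plies.

    `moves(position)` lists the legal moves, `apply_move(position, m)` returns
    the resulting position, and `evaluate(position)` is the static e(p),
    always taken from the computer's perspective.
    """
    legal = moves(position)
    if depth == 0 or not legal:           # look-ahead horizon reached or game over
        return evaluate(position)
    values = [minimax(apply_move(position, m), depth - 1,
                      not computer_to_move, moves, apply_move, evaluate)
              for m in legal]
    # The computer maximizes e(p); the (optimal) opponent minimizes it.
    return max(values) if computer_to_move else min(values)
```

The computer would then play the legal move whose resulting position receives the highest minimax value.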

Two-Player Games

For example, let us consider Tic-Tac-Toe (although it would still be possible to search the complete game tree for this game). What would be a suitable evaluation function for this game? We could use the number of lines that are still open for the computer (X) minus the number of lines that are still open for its opponent (O), as in the sketch below.
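
Here is a minimal sketch of that evaluation function (illustrative Python; the 3 × 3 board representation with 'X', 'O', and None entries is an assumption, not something specified on the slide):

```python
# Hypothetical board representation: a 3 x 3 list with entries 'X', 'O', or None.
ROWS  = [[(r, c) for c in range(3)] for r in range(3)]
COLS  = [[(r, c) for r in range(3)] for c in range(3)]
DIAGS = [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]
LINES = ROWS + COLS + DIAGS          # the 8 winning lines of Tic-Tac-Toe

def evaluate(board):
    """e(p): lines still open for X minus lines still open for O."""
    open_x = sum(all(board[r][c] != 'O' for r, c in line) for line in LINES)
    open_o = sum(all(board[r][c] != 'X' for r, c in line) for line in LINES)
    return open_x - open_o

empty_board = [[None] * 3 for _ in range(3)]
print(evaluate(empty_board))         # 0, since all 8 lines are open for both players
```

Positions that are already decided would additionally be assigned +∞ or -∞, as the examples on the next slide indicate.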

Two-Player Games

[Figure: example Tic-Tac-Toe positions with their evaluations. For the empty board, e(p) = 8 - 8 = 0. In a sample position, X has 6 open lines and O has 2, so e(p) = 6 - 2 = 4. Another position with e(p) = 2 - 2 = 0 shows the weakness of this e(p). Positions that are certain wins for X are assigned e(p) = +∞, and certain wins for O are assigned e(p) = -∞.]

Two-Player Games

In summary, we need three functions to make a computer play our favorite game:

a function that generates all possible moves in a given situation,
a static evaluation function e(p), and
a tree search algorithm.

This concept underlies most game-playing programs. It is very powerful, as demonstrated, for example, by the program Deep Blue beating the world chess champion Garry Kasparov in 1997.