Quoridor and Artificial Intelligence


Quoridor and Artificial Intelligence Jeremy Alberth

Quoridor Quoridor is played on a 9x9 grid. Starting positions are shown for two players.

Quoridor Red moves his pawn down. The objective for both players is to be first to reach the opposite side.

Quoridor Blue moves his pawn up. Players may either move their pawn or place a wall on a move.

Quoridor Red places a wall horizontally in front of blue’s pawn. Each wall is two squares long, blocking movement between two pairs of adjacent squares (four squares in all).

Quoridor Blue moves his pawn left so that he is no longer impeded on his journey upward.

Quoridor Blue places a wall vertically to the right of red’s pawn. Wall orientations can be horizontal or vertical.

Quoridor Red moves his pawn down.

Quoridor Blue places a wall horizontally in front of red’s pawn. Each player is limited to ten walls.

Quoridor Red moves his pawn left, continuing on his shortest path to his goal row.

Quoridor Blue places a wall to the left of red’s pawn, continuing his devious wall-placing behavior.

Quoridor Blue eventually wins the game by being the first to move his pawn to the opposite side of the board.
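The rules above can be captured in a small game-state structure. This is a hypothetical minimal sketch, not the author's implementation; the class and field names are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class QuoridorState:
    """Hypothetical minimal Quoridor position (illustrative names)."""
    pawns: dict = None          # player -> (row, col) on the 9x9 grid
    walls: set = field(default_factory=set)   # (row, col, 'H' or 'V') wall anchors
    walls_left: dict = None     # player -> walls remaining (ten each at the start)
    to_move: str = 'red'

def initial_state():
    # Pawns start centered on opposite edges; no walls placed yet.
    return QuoridorState(
        pawns={'red': (0, 4), 'blue': (8, 4)},
        walls=set(),
        walls_left={'red': 10, 'blue': 10},
        to_move='red',
    )
```

On each turn the player to move either moves the pawn or, if any walls remain, places one of the ten walls.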

My Work
Created an implementation of Quoridor.
Implemented AI players using the minimax algorithm.
Modified minimax and the AI strategies.
Analyzed the performance of the computer players against one another and against a random player.

Minimax Minimax finds the best move using adversarial tree search. The game tree represents every possible move for both players. The branching factor is the number of moves available at each step (here, branching factor = 3).
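Plain minimax can be sketched over a game tree given as nested lists, where leaves hold terminal values. This is a generic illustration of the algorithm, not the author's Quoridor-specific code:

```python
def minimax(node, maximizing):
    """Plain minimax over a game tree given as nested lists.

    A leaf is a number (the value of that terminal state); an
    internal node is a list of child subtrees.
    """
    if not isinstance(node, list):        # leaf: return its value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Tiny two-ply tree with branching factor 3, as on the slide.
tree = [[3, 5, 2], [1, 4, 6], [0, 7, 2]]
best = minimax(tree, maximizing=True)     # maximizer moves first -> 2
```

The maximizer picks the child whose minimum (the opponent's best reply) is largest: min values 2, 1, 0, so the result is 2.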

Static Evaluation In complex games, a depth-limited search is used. Upon reaching the depth cutoff, the search applies a static evaluation function. This function assigns a value to a game state, typically based on the board position and the player to move.
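Depth-limited search with a static evaluation at the cutoff can be sketched as follows. The game-supplied `moves` and `evaluate` callables, and the toy game below, are illustrative assumptions:

```python
def depth_limited_minimax(state, depth, maximizing, moves, evaluate):
    """Minimax cut off at `depth`; at the cutoff (or a terminal state)
    the static evaluation function scores the position."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)            # static evaluation at the cutoff
    values = [depth_limited_minimax(c, depth - 1, not maximizing,
                                    moves, evaluate)
              for c in children]
    return max(values) if maximizing else min(values)

# Toy game: a state is an int, each move adds 1 or 3; score = the value.
toy_moves = lambda s: [s + 1, s + 3] if s < 10 else []
toy_eval = lambda s: s

val = depth_limited_minimax(0, 2, True, toy_moves, toy_eval)
```

In the toy game the maximizer's move to 3 leads to a guaranteed 4 after the minimizer replies, so the search returns 4.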

Managing the Tree The branching factor is initially 132. Five moves ahead: 132^5 = 40,074,642,432 states. Minimax must therefore be modified to use a restricted move set. The branching factor can then be reduced to a manageable size of ~10. Five moves ahead: 10^5 = 100,000 states.

Wall Selection The best strategy for shrinking the move set is reducing the number of walls considered, using a heuristic to decide which walls to keep. Walls close to or directly next to the opposing pawn are a way to prevent an opponent’s quick victory. A player might also not consider wall placements by the opponent.
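One way to sketch such a wall heuristic is to keep only walls anchored near the opponent's pawn. The radius and the distance measure are illustrative choices, not the author's; note that 128 wall anchors plus 4 pawn moves gives the slide's initial branching factor of 132:

```python
def candidate_walls(all_walls, opponent_pos, radius=2):
    """Restrict the wall move set to walls anchored within Chebyshev
    distance `radius` of the opponent's pawn (heuristic sketch)."""
    r, c = opponent_pos
    return [w for w in all_walls
            if max(abs(w[0] - r), abs(w[1] - c)) <= radius]

# Walls are (row, col, orientation) anchors; an 8x8 grid of anchors
# with two orientations gives 128 possible placements on a 9x9 board.
all_walls = [(i, j, o) for i in range(8) for j in range(8)
             for o in ('H', 'V')]
near = candidate_walls(all_walls, opponent_pos=(4, 4), radius=2)
```

With the opponent at the center, only the 25 nearby anchors (50 placements) survive the filter, instead of all 128.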

Problem (three slides of board diagrams; the images are not preserved in this transcript)

Solution Computer players may simply not consider wall placements by the opponent. Considerations must also be made for repeated states: minimax can avoid repeating game states by assigning undesirable values to them, and the game itself can prevent cycles by forcing a draw after a certain number of repeated states.
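The repeated-state handling described above can be sketched as follows. The penalty value, draw threshold, and function names are illustrative assumptions, not the author's implementation:

```python
REPEAT_PENALTY = -1000   # undesirable value attached to repeats (illustrative)
DRAW_AFTER = 3           # force a draw once a position recurs this often

def score_with_repeat_check(state_key, history, base_score):
    """Penalize positions already seen, so minimax steers away from
    cycles. `state_key` is any hashable encoding of the position;
    `history` counts how often each position has occurred."""
    seen = history.get(state_key, 0)
    return base_score + REPEAT_PENALTY * seen if seen else base_score

def is_forced_draw(state_key, history):
    """The game-level rule: declare a draw after repeated states."""
    return history.get(state_key, 0) >= DRAW_AFTER
```

The evaluator-level penalty discourages repetition during search, while the game-level check guarantees termination even if both players still cycle.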

Strategies and Evaluations Strategies for the computer players were defined by their static evaluators:
[P] Shortest path: considers shortest-path values for both players.
[B] Bird’s eye: considers the distance to the goal row without regard to walls.
[C] Close distance: considers only one player’s path.
[PR] Shortest path with a random element.
[BR] Bird’s eye with a random element.

Do We Consider Opponent’s Wall Placement?
(rows: player not considering opponent walls; columns: P, wall / B, wall / C, wall / PR, wall / BR, wall)
P, no: 183 199 167 195
B, no: 21 117 74 147 159
C, no: 133 172 198 187 196
PR, no: 23 82 136
BR, no: 14 53 72 137 120
No.

AI Effectiveness

AI Outcomes Strategies with random elements were the worst, followed by the bird’s eye strategy. The shortest-path and close-distance strategies outperformed the others.
(Win-count table for P, B, C, PR, BR; cell alignment lost in transcription: 48 98 55 99 42 31 67 79 43 88 47 97 1 32 15 3 39)

Data Trends
AIs using the wall heuristic were not successful: they considered walls that were not useful.
The repeated-state flag generated more non-draw outcomes.
Shortest path was the most effective strategy.
Players not considering the opponent’s walls were able to path more successfully.
Randomness added variation but often reduced effectiveness.

References
Abramson, B. 1989. Control strategies for two-player games. ACM Computing Surveys 21, 2 (Jun. 1989), 137-161. DOI: http://doi.acm.org/10.1145/66443.66444
Thuente, D. J. and Jones, R. P. 1991. Beyond minimaxing for games with draws. In Proceedings of the 19th Annual Conference on Computer Science (San Antonio, Texas). CSC '91. ACM Press, New York, NY, 551-557. DOI: http://doi.acm.org/10.1145/327164.328771
Slagle, J. R. and Dixon, J. E. 1969. Experiments with some programs that search game trees. J. ACM 16, 2 (Apr. 1969), 189-207. DOI: http://doi.acm.org/10.1145/321510.321511

Previous Quoridor Software Work
Xoridor (Java Quoridor project)
Glendenning: genetic algorithms research
Mertenz: AI comparisons
These used different board representations, strategies, evaluations, and random elements.