Xi Breakthrough Player CSE 486 B Fall 2012 Miami University

Team: Mike Jacobs, Jiang Nuo, Reuben Smith

Components
– Search
– Threading
– Depth control
– Storage optimization
– Features
– Feature weight optimization
– Funny messages

Search
Negamax with alpha-beta
– Negamax is minimax “simplified”
– State evaluation is from the perspective of the player associated with the current depth
– Scores are negated as the search moves up the tree
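The presentation does not include the actual search code, so the following is a minimal Java sketch of negamax with alpha-beta pruning under the rules this slide describes. The GameState and Move interfaces are assumptions standing in for the project's real types; evaluate() is assumed to score from the side to move, matching the "perspective of the player at the current depth" rule.

```java
import java.util.List;

// Hypothetical game-state abstraction; the real project's types are not shown.
interface GameState {
    List<Move> legalMoves();
    GameState apply(Move m);      // returns a new state; the original is untouched
    boolean isTerminal();
    int evaluate();               // score from the perspective of the side to move
}

interface Move {}

final class Negamax {
    static int search(GameState state, int depth, int alpha, int beta) {
        if (depth == 0 || state.isTerminal()) {
            return state.evaluate();
        }
        int best = Integer.MIN_VALUE + 1;   // +1 so the value can be safely negated
        for (Move m : state.legalMoves()) {
            // Negate the child's score: a good position for the opponent
            // is a bad position for us.
            int score = -search(state.apply(m), depth - 1, -beta, -alpha);
            best = Math.max(best, score);
            alpha = Math.max(alpha, score);
            if (alpha >= beta) {
                break;                      // beta cutoff: the opponent avoids this line
            }
        }
        return best;
    }
}
```

Because the evaluation is always from the side to move, one recursive function replaces minimax's separate max and min cases; the negation on the way back up the tree flips the perspective at each ply.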

Threading
Threading is used to split the search workload evenly between search workers
– The number of workers created is one less than the number of available processor cores
– Workers use copies of the state, so no concurrency issues are possible
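The slides do not show how the workers are organized; one common arrangement consistent with them is to split the root moves across a thread pool, giving each task its own copy of the position. The sketch below assumes that approach and reuses the hypothetical GameState/Move/Negamax types from the previous sketch.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

final class ParallelRoot {
    static Move bestMove(GameState root, int depth) throws Exception {
        // "Cores minus one" worker threads, as on the slide.
        int workers = Math.max(1, Runtime.getRuntime().availableProcessors() - 1);
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            List<Move> moves = root.legalMoves();
            Map<Move, Future<Integer>> scores = new LinkedHashMap<>();
            for (Move m : moves) {
                GameState child = root.apply(m);   // each task owns an independent copy
                Callable<Integer> task = () -> -Negamax.search(
                        child, depth - 1, Integer.MIN_VALUE + 1, Integer.MAX_VALUE);
                scores.put(m, pool.submit(task));
            }
            Move best = null;
            int bestScore = Integer.MIN_VALUE;
            for (Map.Entry<Move, Future<Integer>> e : scores.entrySet()) {
                int s = e.getValue().get();
                if (s > bestScore) { bestScore = s; best = e.getKey(); }
            }
            return best;
        } finally {
            pool.shutdown();
        }
    }
}
```

Since every task searches its own copy, the workers never share mutable state, which matches the slide's "no concurrency issues" claim; the trade-off is that alpha-beta bounds are not shared between root moves.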

Depth Control
We dynamically adjust the depth limit after each move
targetMoveTime = gameTimeLimit / averageGameMoves
– targetMoveTime is calculated at the start of each game
– averageGameMoves is updated after each game

Depth Control, cont.
– If a search finishes in less than targetMoveTime, the depth limit increases
– If a search finishes in more than targetMoveTime, the depth limit decreases
– Depth control uses coefficients to affect how easily the depth limit can be changed again
– This sometimes lets us search to a maximum observed depth of 25+; it can also drop the search as shallow as depth 1
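Putting the two Depth Control slides together, the controller could look roughly like the Java sketch below. Only the targetMoveTime formula and the increase-if-fast/decrease-if-slow rule come from the slides; the starting depth, the "resistance" factor standing in for the slides' coefficients, and all names are assumptions.

```java
// Illustrative adaptive depth controller (not the project's actual code).
final class DepthController {
    private final long targetMoveTimeMs;
    private int depthLimit = 4;          // assumed starting depth
    private double resistance = 1.0;     // stands in for the slides' damping coefficients

    DepthController(long gameTimeLimitMs, double averageGameMoves) {
        // targetMoveTime = gameTimeLimit / averageGameMoves, computed once per game.
        this.targetMoveTimeMs = (long) (gameTimeLimitMs / averageGameMoves);
    }

    /** Call after each completed search with the time it actually took. */
    void update(long elapsedMs) {
        if (elapsedMs < targetMoveTimeMs / resistance) {
            depthLimit++;                // finished quickly: look deeper next move
            resistance *= 1.5;           // make the next change harder to trigger
        } else if (elapsedMs > targetMoveTimeMs * resistance) {
            depthLimit = Math.max(1, depthLimit - 1);   // too slow: back off, never below 1
            resistance *= 1.5;
        }
    }

    int depthLimit() { return depthLimit; }
}
```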

Storage Optimization
Instead of creating a move object for each possible move at every node in the search tree, move data is encoded inside a short
– Bits 0-2 represent the starting row
– Bits 3-5 represent the starting column
– Bits 6-8 represent the ending row
– Bits 9-11 represent the ending column
– The remaining high bits represent move flags (like capture)
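A packing scheme like that might be written as follows. The flag bit range was not legible in the transcript, so the sketch assumes the flags occupy the remaining bits 12-15 of the 16-bit short, and the CAPTURE flag value is illustrative. Rows and columns fit in 3 bits each because Breakthrough is played on an 8x8 board.

```java
// Pack a move into a 16-bit short, following the bit layout on the slide.
final class PackedMove {
    static final int CAPTURE_FLAG = 1;   // example flag value (assumed)

    static short encode(int fromRow, int fromCol, int toRow, int toCol, int flags) {
        return (short) ((fromRow & 0x7)
                      | (fromCol & 0x7) << 3
                      | (toRow   & 0x7) << 6
                      | (toCol   & 0x7) << 9
                      | (flags   & 0xF) << 12);
    }

    static int fromRow(short m) { return m & 0x7; }
    static int fromCol(short m) { return (m >> 3) & 0x7; }
    static int toRow(short m)   { return (m >> 6) & 0x7; }
    static int toCol(short m)   { return (m >> 9) & 0x7; }
    static int flags(short m)   { return (m >> 12) & 0xF; }
}
```

Packing moves into primitives avoids allocating and garbage-collecting millions of small objects during a deep search.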

Features
7 features implemented
Notable:
– Squares owned
– Penetration
– Spread
– Conflict
– Cover

Parametric Feature Weights
Feature weights can also be passed into the object
This can be used to easily set a difficulty level for the engine
– Ex: easy = only 1 feature on; hard = multiple features, optimally weighted
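The evaluation code itself is not shown in the slides, but a weighted linear combination of the named features, with the weights injected from outside, might look like this sketch; the Feature interface, the map-based wiring, and the reuse of the hypothetical GameState type are assumptions.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical feature abstraction: each feature scores the position for the side to move.
interface Feature {
    double value(GameState state);   // e.g. squares owned, penetration, spread...
}

final class Evaluator {
    private final Map<Feature, Double> weights;

    /** Weights are passed in, so a caller can turn features on/off or re-weight
        them to set a difficulty level (easy = one feature, hard = the full set). */
    Evaluator(Map<Feature, Double> weights) {
        this.weights = new LinkedHashMap<>(weights);
    }

    int evaluate(GameState state) {
        double score = 0.0;
        for (Map.Entry<Feature, Double> e : weights.entrySet()) {
            score += e.getValue() * e.getKey().value(state);
        }
        return (int) Math.round(score);
    }
}
```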

Feature Weight Optimization
To optimize feature weights, we could use hill climbing
– We would be hill climbing in two different places:
1. Hill climb to determine the optimal weight for each feature in a set of features
2. Hill climb to choose the set of features that are turned on (we would choose sets of 2, 3, or 4 features)
– Hill climb #1 finds a high peak, which scores a trial in hill climb #2
– Momentum would be used to terminate an instance of a climb
This would take a lot of computational power and a lot of time… so we didn’t do it…
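Since this optimization was only proposed, not built, the following is purely an illustrative sketch of what the inner climb (#1) could look like. playMatch is a hypothetical callback that plays trial games with a candidate weight vector and returns its win rate; the fixed step size is an assumption, and the slides' momentum-based termination is not modeled here.

```java
import java.util.function.ToDoubleFunction;

final class WeightHillClimb {
    /** Greedy coordinate hill climb: perturb one weight at a time and keep
        any change that raises the measured win rate; stop when nothing helps. */
    static double[] climb(double[] start, double step,
                          ToDoubleFunction<double[]> playMatch) {
        double[] best = start.clone();
        double bestScore = playMatch.applyAsDouble(best);
        boolean improved = true;
        while (improved) {
            improved = false;
            for (int i = 0; i < best.length; i++) {
                for (double delta : new double[] { +step, -step }) {
                    double[] candidate = best.clone();
                    candidate[i] += delta;
                    double score = playMatch.applyAsDouble(candidate);
                    if (score > bestScore) {   // keep the neighbor only if it wins more
                        bestScore = score;
                        best = candidate;
                        improved = true;
                    }
                }
            }
        }
        return best;
    }
}
```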

Funny Taunts
Finally, we also send intimidating messages to your AI at the beginning of the game
– “Hello, %s. Nice to beat you.”
– “Spill the blood of the innocent!”

?