1/27 High-level Representations for Game-Tree Search in RTS Games Alberto Uriarte and Santiago Ontañón Drexel University Philadelphia October 3, 2014

2/27 Outline
- Motivation
- High-level Abstraction in RTS Games
- High-level Game-Tree Search
- Evaluation
  - Bot Performance
  - Simulation Accuracy
- Conclusions

3/27 Motivation
RTS properties:
- Simultaneous moves
- "Real-time"
- Partially observable
- Non-deterministic

4/27 Game complexity
State-Space Complexity: the number of legal game positions reachable from the initial position of the game.
- StarCraft map: 128x128
- Maximum number of units: 400
- Considering only unit positions: (128x128)^400 = 16384^400 ≈ 10^1685
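(Checking the arithmetic; this derivation is not on the slide: log10(16384^400) = 400 x 14 x log10(2) ≈ 1685.8, so the state space is on the order of 10^1685.)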

5/27 Motivation
Game-Tree Complexity: the number of leaf nodes in a full-width search tree of the minimal solution depth.
Estimation using the branching factor (b) and the depth (d) of a game: b^d
- Units: 50-200
- Actions per unit: 30
- Branching factor: 30^50 to 30^200
- Length of a game: 25 minutes; 25 min x 60 sec x 24 iterations per sec = 36,000 decision points (d)
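(Filling in the implied arithmetic; this derivation is not on the slide: with 50-200 units and about 30 actions per unit, b ranges from 30^50 ≈ 10^73 to 30^200 ≈ 10^295, and with d ≈ 36,000 decision points, b^d is far beyond anything exhaustive search could explore.)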

6/27 High-level Abstraction in RTS games
Levels of decision making:
- Strategy: the whole army and buildings.
- Tactics: groups of units.
- Reactive control: one unit.
We focus on tactical decisions!

7/27 High-level Abstraction in RTS games Two different abstractions: 1. Map abstraction

8/27 High-level Abstraction in RTS games Two different abstractions: 1. Map abstraction Perkins’ algorithm to decompose a map into regions and chokepoints.
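A minimal sketch of the resulting map abstraction as a region graph (an assumed representation for illustration, not Perkins' actual data structures): regions are nodes and chokepoints are edges connecting two regions.

    #include <vector>

    struct Chokepoint {
        int regionA;
        int regionB;
        double width;   // chokepoint width, useful as a pathing heuristic
    };

    struct RegionGraph {
        int numRegions = 0;
        std::vector<Chokepoint> chokepoints;

        // Regions reachable from 'region' by crossing a single chokepoint.
        std::vector<int> neighbors(int region) const {
            std::vector<int> result;
            for (const Chokepoint& c : chokepoints) {
                if (c.regionA == region) result.push_back(c.regionB);
                else if (c.regionB == region) result.push_back(c.regionA);
            }
            return result;
        }
    };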

9/27 High-level Abstraction in RTS games
Two different abstractions: 2. Unit group abstraction
- Hit Points (shield)
- Position
- Order: move, attack, stop, patrol, repair, build, siege
- Size
- Damage (points and type)

10/27 High-level Abstraction in RTS games
Two different abstractions: 2. Unit group abstraction
- Player: which player controls this group
- Type: the type of units in this group
- Size: the number of units forming this group
- Region: which region this group is in
- Order: which order the group is currently performing (Move, Attack, Idle)
- Target: the ID of the target region
- End: the game frame at which the order is estimated to finish

11/27 High-level Abstraction in RTS games
Two different abstractions: 2. Unit group abstraction
Example game state shown as a table with columns Group | Player | Type | Size | Region | Order | Target | End; each row is one group with its current order (Move or Idle), target region, and estimated end frame.
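A minimal sketch of one such group in code, with fields mirroring the list and table above (the field types and enum values are assumptions for illustration, not the authors' implementation):

    enum class Order { Move, Attack, Idle };

    struct UnitGroup {
        int   player;  // which player controls this group
        int   type;    // type of the units in this group (e.g., a unit type ID)
        int   size;    // number of units forming this group
        int   region;  // ID of the region the group is currently in
        Order order;   // order the group is currently performing
        int   target;  // ID of the target region (for Move / Attack)
        int   end;     // game frame at which the order is estimated to finish
    };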

12/27 High-level Abstraction in RTS games Experiments with 4 different abstractions: 1. A-RC Regions, Chokepoints, NO Buildings

13/27 High-level Abstraction in RTS games Experiments with 4 different abstractions: 2. A-RCB Regions, Chokepoints, Buildings

14/27 High-level Abstraction in RTS games Experiments with 4 different abstractions: 3. A-R Regions, NO Chokepoints, NO Buildings

15/27 High-level Abstraction in RTS games Experiments with 4 different abstractions: 4. A-RB Regions, NO Chokepoints, Buildings
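For reference, the four variants above can be encoded as two independent switches (an illustrative encoding, not the authors' code):

    struct AbstractionConfig {
        bool useChokepoints;    // also treat chokepoints as nodes of the graph
        bool includeBuildings;  // include building groups in the abstract state
    };

    const AbstractionConfig A_RC  = { true,  false };  // regions + chokepoints
    const AbstractionConfig A_RCB = { true,  true  };  // regions + chokepoints + buildings
    const AbstractionConfig A_R   = { false, false };  // regions only
    const AbstractionConfig A_RB  = { false, true  };  // regions + buildings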

16/27 High-level Game-Tree Search Alpha-Beta MCTS

17/27 High-level Game-Tree Search
Alpha-Beta → ABCD (Alpha-Beta Considering Durations)
MCTS → UCTCD (UCT Considering Durations)
Our approach: MCTSCD (MCTS Considering Durations)

18/27 High-level Game-Tree Search MCTSCD

19/27 High-level Game-Tree Search MCTSCD
1. State forwarding (simulator)
We estimate the game frame at which each group will finish its current order:
- Moving: unit velocity and distance to the target region
- Attacking: DPS (damage per second) between the groups involved
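A minimal sketch of these two duration estimates (the formulas and names are assumptions based on the slide, not the authors' exact code):

    #include <cmath>
    #include <limits>

    // Frames for a group to reach its target region: distance between region
    // centers divided by the group's movement speed (pixels per frame).
    int framesToMove(double distancePixels, double speedPixelsPerFrame) {
        return static_cast<int>(std::ceil(distancePixels / speedPixelsPerFrame));
    }

    // Frames for an attacking group to destroy its target: total hit points of
    // the target divided by the attacker's aggregate damage per frame (DPF).
    int framesToKill(double targetTotalHP, double attackerDPF) {
        if (attackerDPF <= 0.0) return std::numeric_limits<int>::max();
        return static_cast<int>(std::ceil(targetTotalHP / attackerDPF));
    }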

20/27 High-level Game-Tree Search MCTSCD
1. State forwarding (simulator)
We estimate the game frame at which each group will finish its current order:
- Moving: unit velocity and distance to the target region
- Attacking: DPS (damage per second) between the groups involved
2. State evaluation
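The slide does not detail the evaluation; a minimal sketch of one plausible evaluation over the abstract state, assuming each group is summarized by a single strength score (the metric used in the paper may differ):

    #include <vector>

    struct GroupScore {
        int    player;  // 0 = max player, 1 = min player
        double score;   // strength contributed by this group (e.g., size x per-unit value)
    };

    // Returns a value in [-1, 1]: positive when player 0 is ahead, 0 when even.
    double evaluate(const std::vector<GroupScore>& groups) {
        double s0 = 0.0, s1 = 0.0;
        for (const GroupScore& g : groups) {
            (g.player == 0 ? s0 : s1) += g.score;
        }
        if (s0 + s1 == 0.0) return 0.0;
        return (s0 - s1) / (s0 + s1);
    }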

21/27 Evaluation settings
- Games limited to 20 minutes (28,800 frames)
- MCTSCD called every 400 frames
MCTSCD parameters:
- Tree policy: ε-greedy (ε = 0.2)
- Default policy: random move selection
- Simultaneous moves: Alt policy
- Tree policy depth: limited to 10
- 1,000 playouts, each limited to 2,880 game frames (2 simulated minutes)
- No fog of war (handling partial observability is future work)
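A minimal sketch of the ε-greedy tree policy with ε = 0.2 (the Node layout is assumed for illustration):

    #include <cstddef>
    #include <limits>
    #include <random>
    #include <vector>

    struct Node {
        double totalReward = 0.0;
        int    visits      = 0;
    };

    // With probability epsilon pick a random child, otherwise the child with
    // the best average reward so far (unvisited children are tried first).
    std::size_t epsilonGreedy(const std::vector<Node>& children,
                              double epsilon, std::mt19937& rng) {
        std::uniform_real_distribution<double> coin(0.0, 1.0);
        if (coin(rng) < epsilon) {
            std::uniform_int_distribution<std::size_t> pick(0, children.size() - 1);
            return pick(rng);  // explore
        }
        std::size_t best = 0;
        double bestAvg = -std::numeric_limits<double>::infinity();
        for (std::size_t i = 0; i < children.size(); ++i) {
            double avg = (children[i].visits > 0)
                             ? children[i].totalReward / children[i].visits
                             : std::numeric_limits<double>::infinity();
            if (avg > bestAvg) { bestAvg = avg; best = i; }
        }
        return best;  // exploit
    }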

22/27 Bot Performance MCTSCD with different abstractions

23/27 Simulation accuracy Jaccard index computed every 400 frames
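A sketch of how this accuracy measure could be computed, assuming the predicted and observed abstract states are compared as sets of (player, unit type, region) triples (the exact comparison in the paper may differ):

    #include <cstddef>
    #include <set>
    #include <tuple>

    using Occupancy = std::set<std::tuple<int, int, int>>;  // (player, type, region)

    // Jaccard index |A ∩ B| / |A ∪ B|; defined as 1.0 when both sets are empty.
    double jaccard(const Occupancy& a, const Occupancy& b) {
        if (a.empty() && b.empty()) return 1.0;
        std::size_t common = 0;
        for (const auto& x : a) {
            if (b.count(x) > 0) ++common;
        }
        return static_cast<double>(common) / (a.size() + b.size() - common);
    }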

24/27 Simulation accuracy Jaccard index computed every 400 frames

25/27 Simulation accuracy Jaccard index computed every 400 frames

26/27 Conclusions and Future Work
Conclusions:
- A robust methodology to evaluate the accuracy of a simulator.
- It is better to keep the abstraction simple in order to get better predictions (no chokepoints).
Future work:
- Improve the game-tree search algorithm: different bandit strategies, dealing with partial observability.
- Explore more abstractions and their tradeoffs.
- Improve the game simulator by learning during the course of a game.

27/27 High-level Representations for Game-Tree Search in RTS Games Alberto Uriarte Santiago Ontañón