1/23 A Benchmark for StarCraft Intelligent Agents Alberto Uriarte and Santiago Ontañón Drexel University Philadelphia November 15, 2015.


1/23 A Benchmark for StarCraft Intelligent Agents Alberto Uriarte and Santiago Ontañón Drexel University Philadelphia November 15, 2015

2/23 Outline  Motivation  Metrics  Benchmark Scenarios  Reactive Control  Tactics  Strategy  Experiments & Conclusions  Future Work

3/23 Motivation Lots of tournaments. Perfect for engaging new researchers and measuring overall bot performance. But how do we evaluate a specific AI technique?

4/23 Motivation Each researcher uses their own maps, scripts, and metrics to evaluate an algorithm (e.g., Q-Learning vs. a Genetic Algorithm). Hard to compare results!!

5/23 Motivation The lack of a uniform benchmark suite was a topic at the last AIIDE (2014) AI in Adversarial Real-Time Games Workshop.

6/23 Goal Uniform benchmark suite: a set of StarCraft¹ scenarios that capture different aspects of RTS gameplay. A standard way to compare:  Set of metrics to evaluate performance.  Set of scenarios simulating RTS-specific problems. ¹ StarCraft has emerged as the main test-bed for RTS research

7/23 Metrics Survivor’s life The sum of the square roots of the hit points remaining of each unit, divided by the amount of time it took to complete the scenario, and normalized by bounds (the lower bound is when player A is defeated in the minimum time without dealing any damage to player B). All metrics are designed to be normalized to [0,1] or [-1,1].
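As a rough sketch of how such a metric might be computed (the function name, arguments, and bounds here are illustrative assumptions, not the paper’s actual code):

```python
import math

def survivors_life(hp_remaining, frames, lower_bound, upper_bound):
    # Sum of the square roots of each surviving unit's remaining hit points,
    # divided by the time (in frames) the scenario took to complete.
    raw = sum(math.sqrt(hp) for hp in hp_remaining) / frames
    # Normalize into [0, 1] using the scenario's precomputed bounds.
    return (raw - lower_bound) / (upper_bound - lower_bound)

# Example: two surviving units with 100 and 25 HP after 50 frames,
# with hypothetical bounds of 0 and 1: (10 + 5) / 50 = 0.3
score = survivors_life([100, 25], 50, 0.0, 1.0)
```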

8/23 Metrics Time survived The time the agent survived, normalized by a predefined timeout. Time needed Start a timer when a certain event happens (e.g., a building is destroyed); stop it after a timeout or when a condition is triggered. Units lost The difference in units lost between the two players.
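The simpler metrics could be sketched along the same lines (again, the names and the exact normalization are assumptions for illustration):

```python
def time_survived(frames_alive, timeout_frames):
    # Survival time capped at and normalized by the scenario timeout: [0, 1].
    return min(frames_alive, timeout_frames) / timeout_frames

def units_lost_score(lost_by_agent, lost_by_opponent, agent_total, opponent_total):
    # Difference in the fraction of units each player lost, in [-1, 1];
    # positive means the agent lost proportionally fewer units.
    return lost_by_opponent / opponent_total - lost_by_agent / agent_total
```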

9/23 Benchmark Scenarios Problems:  Reactive control (short-term decision making)  Tactics (medium-term)  Strategy (long-term) Scenario constraints:  Regular Game (start with a base and workers)  Melee Game (only military units)

10/23 Benchmark Scenarios – Reactive Control Short-term decision-making. Custom handcrafted scenarios:  Unit formation (Young et al. 2012; Danielsiek et al. 2008)  Unit survivability (Uriarte and Ontañón 2012; Nguyen, Wang, and Thawonmas 2013)  Target selection (Churchill and Buro 2013) Maps from professional players (Young and Hawes 2014)

11/23 Benchmark Scenarios – Reactive Control RC1: Perfect Kiting Tests whether the agent is able to exploit its mobility and ranged attack against a stronger but slower unit. It is possible to win without taking any damage. Metric: Survivor’s life Map layout: A (big area), B (3 connected regions) Configurations: VZ, V6Z, 3V6Z, V9Zg
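The reactive behavior these kiting scenarios probe can be summarized by a minimal hit-and-run policy; the sketch below is purely illustrative (no bot in the benchmark uses exactly this code):

```python
import math

def kite_action(my_pos, enemy_pos, attack_range, weapon_ready):
    # Minimal kiting policy: fire whenever the weapon is ready and the
    # enemy is in range; otherwise retreat while the weapon cools down,
    # staying outside the slower unit's (shorter) reach.
    dist = math.dist(my_pos, enemy_pos)
    if weapon_ready:
        return "attack" if dist <= attack_range else "approach"
    return "retreat"
```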

12/23 Benchmark Scenarios – Reactive Control RC2: Kiting Tests whether the agent is able to exploit its mobility and ranged attack against a stronger but slower unit. It is NOT possible to win without taking any damage. Metric: Survivor’s life Map layout: A (big area) Configurations: 3D3Z, 2D3H

13/23 Benchmark Scenarios – Reactive Control RC3: Sustained Kiting NO chance to win; stay alive as long as possible. Metric: Time survived Map layout: C (two regions, one with resources)

14/23 Benchmark Scenarios – Reactive Control RC4: Symmetric armies In equal conditions, positioning and target selection are the key aspects that can determine a player’s success in a battle. Metric: Survivor’s life Map layout: A (big area) Configurations:  5 Vultures (Terran ranged)  9 Zealots (Protoss melee)  12 Dragoons (Protoss ranged)  12 Mutalisks (Zerg air ranged with splash damage)  20 Marines and 8 Medics (Terran ranged)  5 Zealots and 8 Dragoons (Protoss melee + ranged)

15/23 Benchmark Scenarios – Tactics Medium-term decision-making  Solving qualitative navigation problems (Hagelbäck 2012)  Terrain analysis to exploit chokepoints or reason about the immediate enemy threat (Muñoz-Avila, Dannenhauer, and Cox 2015)  Optimizing resource gathering (Christensen et al. 2012; de Oliveira, Goldbarg, and Goldbarg 2014)  Building placement (Certicky 2013; Richoux, Uriarte, and Ontañón 2014)

16/23 Benchmark Scenarios – Tactics T1: Dynamic obstacles Measures how well an agent can navigate when chokepoints are blocked by dynamic obstacles (e.g., neutral buildings). Metric: Time needed to reach a starting position Map: Heartbreak Ridge (figure: start, goal, and a chokepoint whose path is blocked)
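Computing a baseline for the "time needed" in such a scenario amounts to shortest-path search over walkable terrain with blocked tiles; a minimal BFS sketch (the grid encoding is an assumption for illustration, not the benchmark's actual representation):

```python
from collections import deque

def shortest_path_steps(grid, start, goal):
    """BFS over a walkable-tile grid where '#' marks blocked tiles
    (e.g., a chokepoint closed off by neutral buildings).
    Returns the number of steps, or None when the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None

# A wall with a gap at the bottom forces a detour around the "chokepoint".
grid = [".#.",
        ".#.",
        "..."]
steps = shortest_path_steps(grid, (0, 0), (0, 2))  # detour of 6 steps
```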

17/23 Benchmark Scenarios – Strategy S1: Building placement This scenario simulates a Zealot rush and is designed to test whether the agent is able to stop it (intuitively, the only option is to build a wall). Metric: Units lost Map layout: C (two regions, one with resources)

18/23 Benchmark Scenarios – Strategy S2: Plan recovery Tests whether the AI is able to recover from the opponent disrupting its build order (the refinery is destroyed right after it is built). Metric: Time needed Map layout: C (two regions, one with resources) All benchmark scenarios are available online.

19/23 Experiments & Conclusions All scenarios were tested with bots that participated in previous AIIDE tournaments. (Not all bots can play melee maps!!)

20/23 Experiments & Conclusions  Newer bots have improved micromanagement (FreScBot was the winner of the micro AIIDE tournament in 2010).  Nova performs well in kiting scenarios (as expected).  None of the bots passed the Tactics or Strategy scenarios.

21/23 Experiments & Conclusions Other researchers are already using the benchmark!! “Q-learnings in RTS game's micro-management”, Angel Camilo Palacios Garzón, 2015.

22/23 Future Work More scenarios: gas stealing, more scripted AIs rather than the default script, using transports to avoid well-protected chokepoints, optimizing mining, … More metrics: bot blocked by supplies, money unspent, … Deterministic vs. Stochastic evaluation.

23/23 A Benchmark for StarCraft Intelligent Agents Alberto Uriarte Santiago Ontañón