Automatic Learning of Combat Models for RTS Games


Alberto Uriarte (albertouri@cs.drexel.edu) and Santiago Ontañón (santi@cs.drexel.edu)

Motivation & Goal
Game-tree search algorithms require a forward model, or "simulator". In some games (like StarCraft) no such model is available, and the most complex part of a forward model for an RTS game is combat. The goal of this paper is to obtain a fast, high-level combat model whose parameters can be either learned or hardcoded.
● Why fast? To use algorithms like MCTS, thousands of combats must be simulated very quickly.
● Why high-level? Even an "attrition game" (an abstraction of a combat game in which units cannot move) is EXPTIME, so this is already a hard problem; a high-level model reduces the branching factor.

Combat Parameters Abstraction
Units are abstracted into groups by player and unit type, each with a size. The slide's example lists groups g1 (red Worker, size 1), g2 (red Marine, size 2), g3 (red Tank, size 3), and analogous groups g4–g6 for the blue player.

Unit DPF
● Hardcoded: computed from the weapon damage and the time between shots.
● Learned: when a unit is killed, compute (unit's HP / time spent attacking the unit) / number of attackers.

Target Policy
● Hardcoded: sort units by kill score (a resource-cost metric).
● Learned: use the Borda count method to award points to a unit type each time it is chosen as a target.

Combat Records
Pipeline: game replays from professional players → extract combat records → learn (or hardcode) the model parameters → high-level combat model over the abstracted game state → combat outcome prediction.

Combat Parser
Start tracking a new combat when a military unit is aggressive or exposed and is not already in a combat:
● aggressive: the unit has an order to attack or is inside a transport.
● exposed: the unit has an aggressive enemy unit within its attack range.
In the slide figure, the filled squares are the units involved in a combat triggered by unit u.

Results
Accuracy is measured as the similarity between the prediction of our forward model (B′) and the actual outcome of the combat in the dataset (B).
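The two learned parameters above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the event formats, the averaging of DPF samples per attacker type, and the Borda scoring of only the chosen target are assumptions.

```python
from collections import defaultdict

def learn_dpf(kill_events):
    """Learned DPF per attacker unit type.

    Each kill event (attacker_type, victim_hp, frames_attacking, n_attackers)
    contributes (victim HP / time attacking the unit) / number of attackers,
    as on the slide; averaging the samples per type is an assumption.
    """
    samples = defaultdict(list)
    for attacker_type, victim_hp, frames, n_attackers in kill_events:
        samples[attacker_type].append((victim_hp / frames) / n_attackers)
    return {t: sum(s) / len(s) for t, s in samples.items()}

def learn_target_policy(choice_events):
    """Borda-style target preference.

    Each event is (chosen_type, candidate_types): the chosen type earns one
    point per candidate it was preferred over. Returns unit types sorted
    from most to least preferred.
    """
    scores = defaultdict(int)
    for chosen, candidates in choice_events:
        scores[chosen] += len(candidates) - 1
    return sorted(scores, key=scores.get, reverse=True)

# Toy usage with hypothetical numbers:
kills = [("Marine", 40, 80, 2), ("Marine", 60, 100, 3), ("Tank", 150, 50, 1)]
dpf = learn_dpf(kills)                     # -> {"Marine": 0.225, "Tank": 3.0}
choices = [("Tank", ["Tank", "Marine", "Worker"]),
           ("Tank", ["Tank", "Worker"]),
           ("Marine", ["Marine", "Worker"])]
priority = learn_target_policy(choices)    # -> ["Tank", "Marine"]
```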
Sustained DPF model
1. Compute how much time each army needs to destroy the other, using the Damage Per Frame (DPF) of each group.
2. Remove the army that takes longer to destroy its enemy.
3. Remove casualties from the winning army using a target policy.
Simpler and faster.

Decreased DPF model
1. Compute the time needed to kill one enemy unit.
2. Remove the killed unit and reduce the HP of the survivors.
3. Return to step 1 until one army is destroyed.
Can be stopped at any time to obtain a prediction after X frames; gives more accurate predictions.

Model accuracy after learning from more than 1,500 combats extracted from replays:

                     Hardcoded   Learned
    Sustained Model  0.861       0.848
    Decreased Model  0.905       0.888

Model accuracy and time compared with a low-level model:

                         Accuracy   Time (sec)
    Sustained Model      0.874      0.033
    Decreased Model      0.885      0.039
    SparCraft (AC)       0.891      1.681
    SparCraft (NOK-AV)   0.875      1.358
    SparCraft (KC)       0.850      6.873

The high-level models are up to 43× faster than the low-level SparCraft simulator.
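The three steps of the sustained DPF model can be sketched as follows, under assumptions not spelled out on the slide: an army is a list of groups with per-unit count, HP, and DPF, and the loser's total damage output over the fight is converted into whole-unit casualties in target-policy order.

```python
def sustained_combat(army_a, army_b, target_order):
    """Sustained DPF model (sketch).

    An army is a list of dicts {"type", "count", "hp", "dpf"}, with hp and
    dpf per unit. target_order lists every unit type, most preferred first.
    Returns the winning army's surviving groups.
    """
    def total_hp(army):
        return sum(g["count"] * g["hp"] for g in army)

    def total_dpf(army):
        return sum(g["count"] * g["dpf"] for g in army)

    # Step 1: time each army needs to destroy the other.
    t_a = total_hp(army_b) / total_dpf(army_a)
    t_b = total_hp(army_a) / total_dpf(army_b)

    # Step 2: the army that takes longer to destroy its enemy is removed.
    winner, loser, t = (army_a, army_b, t_a) if t_a <= t_b else (army_b, army_a, t_b)

    # Step 3: damage the loser deals before dying becomes whole-unit
    # casualties in the winner's ranks, assigned in target-policy order.
    damage = t * total_dpf(loser)
    survivors = []
    for g in sorted(winner, key=lambda g: target_order.index(g["type"])):
        killed = min(g["count"], int(damage // g["hp"]))
        damage -= killed * g["hp"]
        if g["count"] - killed > 0:
            survivors.append({**g, "count": g["count"] - killed})
    return survivors

# Toy usage: 4 red Marines vs 2 blue Marines (hypothetical stats).
red = [{"type": "Marine", "count": 4, "hp": 40, "dpf": 0.25}]
blue = [{"type": "Marine", "count": 2, "hp": 40, "dpf": 0.25}]
survivors = sustained_combat(red, blue, ["Marine"])  # red wins, one casualty
```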