1
Introduction to the Project. AbdelRahman Al Ogail, Omar Khaled Enayet. Under the Supervision of: Dr. Ibrahim Fathy Moawad
2
What's Game AI? Why an AI Engine? Structure of the AI Engine. What are RTS Games? Elements That Need AI in RTS Games. Areas That Need Improvement in RTS AI. Commonly Used Techniques in the AI Engine. So why work on this project (what's new)?
3
Why is AI Development Slow in RTS Games? AI Areas Needing More Research in RTS Games. Latest Research: Introduction. Research Papers and Theses: ▪ Introduction ▪ The Papers: Intro ▪ Case-Based Planning ▪ Reinforcement Learning ▪ Genetic Algorithms ▪ Hybrid Approaches ▪ Opponent Modeling Approaches ▪ Misc. Approaches
4
Let the computer think. The goal of Game AI: entertainment, NOT perfection. How does that guy find the right answer? The Deeper Blue example.
5
AI Engine. That's our guy.
7
Real-Time Strategy (RTS) games can be viewed as simplified military simulations. Several players struggle over resources scattered across a terrain by setting up an economy, building armies, and guiding them into battle in real time. The current AI performance in commercial RTS games is poor by human standards. RTS games are characterized by enormous state spaces, large decision spaces, and asynchronous interactions. They also require reasoning at several levels of granularity: production and economic ability (usually expressed as resource management and technological development) as well as the tactical skills necessary for combat.
8
Workers (peons, gatherers). Individual units (soldiers, tanks…). Town building: how to lay out my town to maximize its benefits. Pathfinding: what's the best (not necessarily shortest) way to get from A to B?
9
Low-level strategies: which pathfinding algorithm should I use? Medium-level strategies: how do I achieve the high-level strategies? High-level strategies: what are my goals?
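As a rough illustration of how these three layers could fit together in code, here is a minimal sketch; the class names, methods, and thresholds are hypothetical and not taken from the project.

class StrategyManager:           # high level: decides what the goals are
    def choose_goal(self, world):
        return "expand" if world["enemy_strength"] < world["own_strength"] else "defend"

class TacticsManager:            # medium level: turns a goal into concrete orders
    def orders_for(self, goal):
        if goal == "defend":
            return ["build_barracks", "train_soldiers"]
        return ["build_town_hall", "train_peons"]

class PathPlanner:               # low level: picks a concrete algorithm (e.g. A*) per order
    def route(self, unit, target):
        return [unit, target]    # placeholder for an actual pathfinding call

world = {"enemy_strength": 40, "own_strength": 25}
goal = StrategyManager().choose_goal(world)
print(goal, TacticsManager().orders_for(goal))
print(PathPlanner().route("soldier_1", (10, 4)))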
10
Terrain Analysis (know the terrain). Opponent Modeling (know your enemy). Resource Management (take control of resources). Diplomacy Systems (always have allies).
11
Determine when an AI element is stuck. Opponent modeling. More strategy, less tactics. Construct a consistent army (soldiers, tanks, planes). Think about supply lines. How to retreat. Set up and detect ambushes.
12
Learning. Some areas of learning: ▪ Don't let the AI opponent fall into the same trap repeatedly ▪ Know safe map locations and stay away from kill zones ▪ Know how the human player attacks and which units he favors ▪ Does the player rush? ▪ Does the player rely on units that require certain resources? ▪ Does he frequently build critical structures in a poorly defended place? ▪ Are his attacks balanced? (rock-paper-scissors example)
13
Categories of used techniques: Decision-Making Techniques, and Other Techniques: ▪ Data-Driven Techniques ▪ Perception Techniques ▪ Communication Techniques
14
When to use: to represent states
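Assuming this slide refers to finite state machines (FSMs, listed later among the common decision-making systems), a minimal sketch of a unit FSM might look like the following; the states and transitions are illustrative only.

# Minimal finite state machine for a single unit.
TRANSITIONS = {
    ("idle", "enemy_spotted"): "attacking",
    ("idle", "resources_low"): "gathering",
    ("attacking", "low_health"): "retreating",
    ("retreating", "healed"): "idle",
    ("gathering", "enemy_spotted"): "attacking",
}

def step(state, event):
    # Stay in the current state if no transition is defined for this event.
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["enemy_spotted", "low_health", "healed"]:
    state = step(state, event)
    print(event, "->", state)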
15
When to use: to represent several states at the same time
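Assuming this slide refers to fuzzy state machines (FuSMs, mentioned on slide 33), where a unit is partially in several states at once, a minimal sketch; the membership formulas are invented for illustration.

# Each state has a degree of membership in [0, 1]; the unit is in several states at once.
def update_memberships(health, enemy_distance):
    aggression = max(0.0, min(1.0, 1.0 - enemy_distance / 100.0))
    fear = max(0.0, min(1.0, 1.0 - health / 100.0))
    return {"attack": aggression * (1.0 - fear), "flee": fear, "patrol": 1.0 - aggression}

memberships = update_memberships(health=35, enemy_distance=20)
print(memberships)                            # e.g. attack ~0.28, flee 0.65, patrol 0.2
print(max(memberships, key=memberships.get))  # the strongest tendency drives the action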
16
Used to find the best solutions to a given problem. The genetic process relies on the idea of reproduction. Example use: finding the optimal number of peons working in each area (area = building, money, wood, stone…).
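A minimal sketch of the peon-allocation example; the fitness weights and resource names are assumptions for illustration, not values from the project.

import random

AREAS = ["building", "gold", "wood", "stone"]
TOTAL_PEONS = 20

def fitness(alloc):
    # Hypothetical fitness: reward valuable areas, penalize allocations that don't sum to 20.
    weights = {"building": 1.0, "gold": 2.0, "wood": 1.5, "stone": 0.8}
    return sum(weights[a] * n for a, n in zip(AREAS, alloc)) - 5 * abs(sum(alloc) - TOTAL_PEONS)

def random_alloc():
    cuts = sorted(random.randint(0, TOTAL_PEONS) for _ in range(len(AREAS) - 1))
    return [b - a for a, b in zip([0] + cuts, cuts + [TOTAL_PEONS])]

def crossover(a, b):
    point = random.randrange(1, len(AREAS))
    return a[:point] + b[point:]

def mutate(alloc):
    i, j = random.sample(range(len(AREAS)), 2)
    if alloc[i] > 0:
        alloc[i] -= 1
        alloc[j] += 1
    return alloc

population = [random_alloc() for _ in range(30)]
for _ in range(50):                         # evolve for a fixed number of generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]               # selection: keep the fittest allocations
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children
print(dict(zip(AREAS, max(population, key=fitness))))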
17
Relies on simulating the human brain. Used in: classification, opponent modeling.
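As an illustration of the classification use, a minimal perceptron sketch; the features and training data are made up, and a real opponent model would be trained on recorded games.

# Tiny perceptron: classify an opponent as a "rusher" (1) or not (0)
# from two hypothetical features: early army size and number of early buildings.
def predict(weights, bias, features):
    activation = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if activation > 0 else 0

def train(samples, labels, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, label in zip(samples, labels):
            error = label - predict(weights, bias, features)
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

samples = [[8, 1], [9, 0], [2, 5], [1, 6]]   # [early army size, early buildings]
labels = [1, 1, 0, 0]                        # 1 = rusher, 0 = economic player
w, b = train(samples, labels)
print(predict(w, b, [7, 2]))                 # classified as a rusher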
18
ALife (Artificial Life) is the search for "governing principles" of life, much as Newton's laws govern physics. ALife techniques: Cellular Automata, Steering Behaviors. They add creativity to the AI opponent.
19
Simple rules that produce emergent behaviors. Boids research by Craig Reynolds. Used: to simulate real life, to produce emergent behavior, and to provide autonomous agents.
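A minimal 2D flocking sketch in the spirit of Boids: cohesion pulls units toward the group centre and separation pushes apart units that get too close (alignment omitted for brevity). The coefficients and positions are arbitrary assumptions.

def flock_step(positions, cohesion=0.05, separation=0.2, min_dist=2.0):
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    new_positions = []
    for x, y in positions:
        vx, vy = (cx - x) * cohesion, (cy - y) * cohesion    # steer toward the centre
        for ox, oy in positions:
            dx, dy = x - ox, y - oy
            if 0 < abs(dx) + abs(dy) < min_dist:             # too close: push away
                vx, vy = vx + dx * separation, vy + dy * separation
        new_positions.append((x + vx, y + vy))
    return new_positions

units = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0), (5.2, 8.1)]
for _ in range(3):
    units = flock_step(units)
print(units)   # units drift toward each other while the two crowded ones separate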
20
Planning is deciding upon a course of action before acting. Usage in games: pathfinding algorithms, setting plans for high-level strategies in RTS games, anticipating ambushes. Some planning techniques: A*, Means-Ends Analysis, Path Recalculation, Minimax.
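Since A* is the most concrete of these techniques, here is a minimal A* sketch on a small grid; the map layout and unit-cost model are assumptions for illustration.

import heapq

def a_star(grid, start, goal):
    # grid: 2D list, 0 = walkable, 1 = obstacle; Manhattan distance as heuristic.
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, cost, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = node[0] + dx, node[1] + dy
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                heapq.heappush(open_set, (cost + 1 + h((nx, ny)), cost + 1,
                                          (nx, ny), path + [(nx, ny)]))
    return None  # no path found

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))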
21
As in Prolog. Used techniques: forward chaining, backward chaining.
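A minimal forward-chaining sketch over hypothetical game facts and rules; this is not Prolog syntax, just the idea of firing rules until no new facts can be derived.

# Each rule: (set of required facts, fact to conclude).
RULES = [
    ({"enemy_has_air_units"}, "need_anti_air"),
    ({"need_anti_air", "have_barracks"}, "train_archers"),
    ({"low_on_gold"}, "send_more_peons_to_gold"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:                      # keep firing rules until nothing new is derived
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"enemy_has_air_units", "have_barracks"}))
# derives need_anti_air, then train_archers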
22
Complex IF-ELSE statements represented as a tree. Usage in games: player modeling, high-level strategies.
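A minimal hard-coded decision tree for choosing a high-level strategy; the tested attributes, thresholds, and strategy names are illustrative only.

# Each internal node tests one attribute of the game state; leaves are strategies.
def choose_strategy(state):
    if state["enemy_army"] > state["own_army"]:
        if state["own_resources"] > 1000:
            return "turtle_and_tech"      # defend while out-producing the enemy
        return "harass_economy"           # can't fight head-on, slow the enemy down
    if state["scouted_expansion"]:
        return "attack_expansion"
    return "all_in_attack"

print(choose_strategy({"enemy_army": 30, "own_army": 20,
                       "own_resources": 1500, "scouted_expansion": False}))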
23
Scripting Systems: using an external resource (not hard-coded) that controls the AI opponent. Advantage: adds extensibility to the game.
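A minimal sketch of the data-driven idea: the build order lives in an external resource rather than in code, so designers can extend it without recompiling. The file contents, command set, and engine interface here are invented (real engines often embed a language such as Lua).

# Contents of a hypothetical external file, e.g. "orc_opening.script":
SCRIPT_TEXT = """
build town_hall
train peon 5
build barracks
train grunt 3
"""

def run_script(text, game):
    for line in text.strip().splitlines():
        command, *args = line.split()
        getattr(game, command)(*args)     # dispatch to the engine's build/train handlers

class FakeGame:                           # stand-in for the real engine interface
    def build(self, structure):
        print("building", structure)
    def train(self, unit, count):
        print("training", count, unit)

run_script(SCRIPT_TEXT, FakeGame())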
24
Provides communication between game objects.
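A minimal message-dispatcher sketch for communication between game objects; the class and message names are hypothetical.

# Objects subscribe to message types; the dispatcher decouples sender from receiver.
class MessageDispatcher:
    def __init__(self):
        self.listeners = {}
    def subscribe(self, msg_type, handler):
        self.listeners.setdefault(msg_type, []).append(handler)
    def dispatch(self, msg_type, payload):
        for handler in self.listeners.get(msg_type, []):
            handler(payload)

dispatcher = MessageDispatcher()
dispatcher.subscribe("under_attack", lambda pos: print("send reinforcements to", pos))
dispatcher.subscribe("under_attack", lambda pos: print("mark", pos, "as a kill zone"))
dispatcher.dispatch("under_attack", (42, 17))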
25
LBI: Location-Based Information systems. A perception technique that keeps track of the world's attributes. Common techniques: Influence Maps, Terrain Analysis, Smart Terrain.
26
Usage in games: helps with obstacle avoidance, detecting player and resource locations, danger specification (keeping track of kill zones), and discovering critical points in the world (such as bridges).
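A minimal influence-map sketch: each unit projects influence that decays with distance, and the sign of a cell suggests who controls it. The grid size, unit strengths, and decay function are assumptions.

# Positive influence = friendly control, negative = enemy control.
def influence_map(width, height, units):
    grid = [[0.0] * width for _ in range(height)]
    for (ux, uy), strength in units:
        for y in range(height):
            for x in range(width):
                dist = abs(x - ux) + abs(y - uy)
                grid[y][x] += strength / (1 + dist)      # influence decays with distance
    return grid

units = [((1, 1), +10.0),    # friendly soldier
         ((6, 2), -8.0)]     # enemy soldier
grid = influence_map(8, 4, units)
for row in grid:
    print(" ".join(f"{v:5.1f}" for v in row))
# Cells near (1, 1) come out strongly positive (safe); cells near (6, 2) negative (kill zone).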
27
The future of war is going to be more robotic: sharing & validating plans. MIT called for research in this area (10-2008). Alex J.C. said: "there's no real learning and adaptation in commercial games". Research in this area is very active! Papers range from 2003 to 2009.
28
RTS game worlds feature many objects, imperfect information, micro actions, and fast-paced action. By contrast, world-class AI players mostly exist for slow-paced, turn-based, perfect-information games in which the majority of moves have global consequences and planning abilities can therefore be outsmarted by mere enumeration.
Market-dictated AI resource limitations. Up to now, popular RTS games have been released solely by game companies, who naturally are interested in maximizing their profit. Because graphics drives game sales and companies strive for large market penetration, only about 15% of the CPU time and memory is currently allocated to AI tasks. On the positive side, as graphics hardware gets faster and memory gets cheaper, this percentage is likely to increase, provided game designers stop making RTS game worlds ever more realistic.
Lack of AI competition. In classic two-player games, tough competition among programmers has driven AI research to unmatched heights. Currently, however, there is no such competition among real-time AI researchers in games other than computer soccer. The considerable manpower needed for designing and implementing RTS games and the reluctance of game companies to incorporate AI APIs in their products are big obstacles to AI competition in RTS games.
29
Adversarial real-time planning. In fine-grained, realistic simulations, agents cannot afford to think in terms of micro actions such as "move one step North". Instead, abstractions of the world state have to be found that allow AI programs to conduct forward searches in a manageable abstract space and to translate found solutions back into action sequences in the original state space. Because the environment is also dynamic, hostile, and smart, adversarial real-time planning approaches need to be investigated.
Decision making under uncertainty. Initially, players are not aware of the enemies' base locations and intentions. It is necessary to gather intelligence by sending out scouts and to draw conclusions in order to adapt. If no data about opponent locations and actions is available yet, plausible hypotheses have to be formed and acted upon.
Opponent modeling, learning. One of the biggest shortcomings of current (RTS) game AI systems is their inability to learn quickly. Human players need only a couple of games to spot opponents' weaknesses and to exploit them in future games. New, efficient machine learning techniques have to be developed to tackle these important problems.
30
Spatial and temporal reasoning. Static and dynamic terrain analysis, as well as understanding temporal relations of actions, is of utmost importance in RTS games, and yet current game AI programs largely ignore these issues and fall victim to simple common-sense reasoning.
Resource management. Players start the game by gathering local resources to build up defenses and attack forces, to upgrade weaponry, and to climb up the technology tree. At any given time the players have to balance the resources they spend in each category. For instance, a player who chooses to invest too many resources into upgrades will become prone to attacks because of an insufficient number of units. Proper resource management is therefore a vital part of any successful strategy.
31
Collaboration. In RTS games, groups of players can join forces and share intelligence. How to coordinate actions effectively by communication among the parties is a challenging research problem. For instance, in the case of mixed human/AI teams, the AI player often behaves awkwardly because it does not monitor the human's actions, cannot infer the human's intentions, and fails to synchronize attacks.
33
Current implementations of RTS games make extensive use of FSMs, which makes them highly predictable. Adaptation is achieved through learning, planning, or a mixture of both. Planning is beginning to appear in commercial games such as Demigod and the latest Total War game. Learning has had limited success so far. Developers are experimenting with replacing the ordinary decision-making systems (FSM, FuSM, Scripting, Decision Trees, and Markov Systems) with learning techniques.
34
More than 30 papers/theses discuss Planning and Learning in RTS games. The three major approaches to AI research in RTS games concerning learning and planning are Case-Based Planning, Reinforcement Learning (with its different techniques), and Genetic Algorithms. Some papers use a hybrid of these techniques; others use other planning formalisms such as PDDL, opponent modeling techniques, or other miscellaneous techniques.
3 papers encourage research in this field.
9 papers use a Case-Based Planning approach (2003-2009); 1 uses a hybrid CBR/GA approach (2008); 1 uses a hybrid CBR/RL approach (2007).
10 papers use Reinforcement Learning in its different forms (Monte-Carlo, Dynamic Scripting, and TD-Learning); 1 uses TD-Learning with GA; 1 uses Dynamic Scripting with GA.
3 papers use Genetic Algorithms.
3 papers apply opponent modeling techniques.
35
RTS Games and Real-Time AI Research – 2003
RTS Games: A New AI Research Challenge – 2003
Call for AI Research in RTS Games – 2004
36
Case-based planning is the reuse of past successful plans in order to solve new planning problems. It’s an application of Case-Based Reasoning in planning.
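A minimal case-based planning sketch: retrieve the stored plan whose game situation is most similar to the current one, then adapt it slightly. The case features, similarity measure, and plans are invented for illustration.

# A case = (situation features, plan that worked in that situation).
CASE_BASE = [
    ({"enemy_air": 1, "enemy_rush": 0, "map_size": 2}, ["build_towers", "train_archers"]),
    ({"enemy_air": 0, "enemy_rush": 1, "map_size": 1}, ["wall_in", "train_footmen"]),
    ({"enemy_air": 0, "enemy_rush": 0, "map_size": 3}, ["expand", "tech_up"]),
]

def similarity(a, b):
    # Simple negated Manhattan distance over shared features.
    return -sum(abs(a[k] - b[k]) for k in a)

def retrieve_and_adapt(situation):
    features, plan = max(CASE_BASE, key=lambda case: similarity(situation, case[0]))
    plan = list(plan)
    if situation["enemy_air"] and "train_archers" not in plan:
        plan.append("train_archers")      # trivial adaptation step
    return plan

print(retrieve_and_adapt({"enemy_air": 1, "enemy_rush": 1, "map_size": 1}))
# retrieves the anti-rush plan and adapts it for the air threat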
37
The David Aha Research Thread:
On the Role of Explanation for Hierarchical Case-Based Planning in RTS Games – after 2004
Learning to Win: Case-Based Plan Selection in a RTS Game – 2005
Defeating Novel Opponents in a Real-Time Strategy Game – 2005
The Santiago Ontanon Research Thread:
Case-Based Planning and Execution for RTS Games – 2007
Learning from Human Demonstrations for Real-Time Case-Based Planning – 2008
On-Line Case-Based Plan Adaptation for RTS Games – 2008
Situation Assessment for Plan Retrieval in RTS Games – 2009
Other Papers:
Case-based plan recognition for RTS games – after 2003
Mining Replays of RTS Games to learn player strategies – 2007
38
It is a sub-area of machine learning concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward. Reinforcement learning differs from the supervised learning problem in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected. Further, there is a focus on on-line performance, which involves finding a balance between exploration and exploitation.
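A minimal tabular Q-learning sketch on an abstracted game state; the states, actions, and reward function are toy assumptions, and real RTS work uses much richer abstractions.

import random

ACTIONS = ["attack", "defend", "expand"]

def reward(state, action):
    # Toy reward: attacking a weak enemy pays off; attacking a strong one backfires.
    if state == "enemy_weak" and action == "attack":
        return 1.0
    if state == "enemy_strong" and action == "defend":
        return 0.5
    if state == "enemy_strong" and action == "attack":
        return -1.0
    return 0.1

q = {(s, a): 0.0 for s in ["enemy_weak", "enemy_strong"] for a in ACTIONS}
alpha, epsilon = 0.1, 0.2
for episode in range(2000):
    state = random.choice(["enemy_weak", "enemy_strong"])
    if random.random() < epsilon:                       # exploration
        action = random.choice(ACTIONS)
    else:                                               # exploitation
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    # One-step update (no successor state in this toy episodic setting).
    q[(state, action)] += alpha * (reward(state, action) - q[(state, action)])

print(max(ACTIONS, key=lambda a: q[("enemy_weak", a)]))    # learns "attack"
print(max(ACTIONS, key=lambda a: q[("enemy_strong", a)]))  # learns "defend"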
39
Dynamic Scripting:
Goal-Directed Hierarchical Dynamic Scripting for RTS Games – 2006
Automatically Acquiring Domain Knowledge For Adaptive Game AI Using Evolutionary Learning – 2008
Monte-Carlo Planning:
UCT (Monte-Carlo) for Tactical Assault Battles in Real-Time Strategy Games – 2003
Monte Carlo Planning in RTS Games – after 2004
Temporal-Difference Learning:
Learning Unit Values in Wargus Using Temporal Differences – 2005
Establishing an Evaluation Function for RTS games – after 2005
Dynamic Scripting vs. Monte-Carlo Planning:
Adaptive reinforcement learning agents in RTS games – 2008
Hierarchical Reinforcement Learning:
Hierarchical Reinforcement Learning in Computer Games – after 2006
Hierarchical Reinforcement Learning with Deictic repr. in a computer game – after 2006
40
Genetic algorithms are a particular class of evolutionary algorithms (EA) that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover.
41
Human-like Behavior in RTS Games – 2003
Co-evolving Real-Time Strategy Game Playing Influence Map Trees with genetic algorithms
Co-Evolution in Hierarchical AI for Strategy Games – after 2004
42
Genetic Algorithms + Dynamic Scripting:
Improving Adaptive Game AI With Evolutionary Learning – 2004
Automatically Acquiring Domain Knowledge For Adaptive Game AI using Evolutionary Learning – 2005
Genetic Algorithms + TD-Learning:
Neural Networks in RTS AI – 2001
Genetic Algorithms + Case-Based Planning:
Stochastic Plan Optimization in Real-Time Strategy Games – 2008
Case-Based Reasoning + Reinforcement Learning:
Transfer Learning in Real-Time Strategy Games Using Hybrid CBR-RL – 2007
43
Hierarchical Opponent Models for Real-Time Strategy Games – 2007
Opponent modeling in real-time strategy games – after 2007
Design of Autonomous Systems: Learning Adaptive playing a RTS game – 2009
44
Supervised Learning:
Player Adaptive Cooperative Artificial Intelligence for RTS Games – 2007
PDDL:
A First Look at Build-Order Optimization in RTS games – after 2006
Finite-State Machines:
SORTS: A Human-Level Approach to Real-Time Strategy AI – 2007
Others:
Real-time challenge balance in an RTS game using rtNEAT – 2008
AI Techniques in RTS Games – September 2006
45
Thank You!
46
Books:
AI Game Engine Programming – 2009
Artificial Intelligence for Games – 2009
Papers:
RTS Games and Real-Time AI Research – Michael Buro & Timothy M. Furtak – 2003
Call for AI Research in RTS Games – Michael Buro – 2004
Web Resources:
AIGameDev Forums
GameDev.Net Forums
Wikipedia
Others