More precisely, the topic is the branch of AI behind interactive games.
Are interactive games an area of human-level AI research? Is AI used in interactive games?
Human-level capabilities:
- Real-time response
- Robustness
- Autonomous, intelligent interaction with the environment
- Planning
- Communication in natural language
- Common-sense reasoning
- Creativity
- Learning
Game genres:
- Action games
- Role-playing games
- Adventure games
- Strategy games
- God games
- Team sports games
- Individual games
Roles for AI characters in games:
- Tactical enemies
- Partners
- Support characters
- Story directors
- Strategic opponents
- Units
- Commentators
Core AI techniques involved: search, planning, logic.
Computer games draw on planning, vision, search, logic, and learning.
Focus: game tactics in real-time strategy games. How is AI used to enhance game tactics? The AI tools used are evolutionary computation and reinforcement learning.
AI components used: evolutionary computation (genetic algorithms) and reinforcement learning (a learning technique driven by a mathematical reward function).
In a real-time strategy (RTS) game, the player controls armies to defeat all opposing forces on a virtual battlefield. The key to winning lies in efficiently collecting and managing resources, and in appropriately allocating those resources over the various action elements. Famous examples: Age of Empires, World of Warcraft.
Key terms:
- Action: an atomic transformation of the game state.
- Tactic: a sequence of one or more primitive actions in a given game state (e.g., improve weaponry, then attack).
- Strategy: a sequence of tactics used to play the entire game.
AI components in the game: in RTS games, the AI determines all decisions of the computer opponents. Traditionally it is encoded in the form of scripts, called static scripts, which fix one tactic per state (e.g., state 1 uses tactic A, state 2 tactic B, state 3 tactic C).
Dynamic scripting:
- Each state has multiple possible tactics.
- Each tactic has a relative weight assigned to it; the highest weight marks the currently best tactic.
- Weights are adjusted to adapt to the given situation.
- New tactics can evolve on the fly.
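The tactic-per-state selection described above is usually done by weighted (roulette-wheel) random choice, so higher-weighted tactics are picked more often without the lower ones vanishing entirely. A minimal sketch, with hypothetical tactic names and weights:

```python
import random

def select_tactic(weights, rng):
    """Pick a tactic for a state with probability proportional to its
    weight (roulette-wheel selection over the state's tactic weights)."""
    tactics = list(weights)
    total = sum(weights[t] for t in tactics)
    r = rng.random() * total
    acc = 0.0
    for t in tactics:
        acc += weights[t]
        if r <= acc:
            return t
    return tactics[-1]  # guard against floating-point rounding

# Hypothetical weights for one state: Tactic B is currently favoured.
state1 = {"Tactic A": 0.4, "Tactic B": 0.6}
rng = random.Random(0)
picks = [select_tactic(state1, rng) for _ in range(10_000)]
print(round(picks.count("Tactic B") / len(picks), 2))  # roughly 0.6
```

Weighted selection rather than a greedy argmax keeps the agent exploring, which is what lets the weight adjustments later discover that a different tactic has become better.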
Example: initial tactic weights per state.

            State 1   State 2
  Tactic A    0.4       0.7
  Tactic B    0.6       0.3
After weight adjustment, Tactic A is now preferred in State 1:

            State 1   State 2
  Tactic A    0.8       0.7
  Tactic B    0.2       0.3
Another real example (World of Warcraft). AI: "I don't care about available resources. Attack at the earliest! Ha ha ha!" Human: "I have to develop my army well first; only then can I attack. This will take a while."
Another real example, continued (World of Warcraft). AI: "I have suffered heavy losses. Now I need to increase my strength first; small attacks are of no use." Human: "The AI is gathering resources and preparing for a heavy assault."
Dynamic Scripting
C_end is a parameter set to less than 0.5. The contribution of the state reward is kept larger than that of the global reward. P_max and R_max are the maximum penalty and maximum reward, respectively.
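The weight-adjustment formula itself is not reproduced in the transcript; the sketch below shows one common form of the dynamic-scripting update, consistent with the parameters named above (maximum penalty P_max, maximum reward R_max) and an assumed break-even point b for the fitness F. Treat the exact shape as an assumption, not the slide's formula:

```python
def weight_delta(fitness, b, p_max, r_max):
    """Dynamic-scripting-style weight adjustment (assumed form):
    tactics used in a poor game (F < b) are penalised, tactics used in a
    good game rewarded, scaled by how far F sits from the break-even b."""
    if fitness < b:
        return -p_max * (b - fitness) / b        # penalty, at most -p_max
    return r_max * (fitness - b) / (1.0 - b)     # reward, at most +r_max

# Hypothetical numbers: break-even 0.3, P_max = 30, R_max = 100.
print(weight_delta(0.0, 0.3, 30, 100))   # -30.0 (maximum penalty)
print(weight_delta(1.0, 0.3, 30, 100))   # 100.0 (maximum reward)
```

Note how the update is clamped between -P_max and +R_max by construction, which is what keeps a single lucky or unlucky game from swinging a tactic's weight too far.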
Automatically generating tactics: the Evolutionary State-based Tactics Generator (ESTG) is an application of genetic algorithms. Candidate counter-strategies are "played" against training scripts, and only the fittest are allowed into the next generation.
The genetic algorithm has three ingredients: chromosome encoding, genetic operators, and a fitness function.
Chromosome encoding: the evolutionary algorithm works with a population of chromosomes, each representing a static strategy. A chromosome is divided into m states: Start, State 1, State 2, ..., State m, End.
Each state consists of a state marker, followed by the state number and a series of genes; each gene in turn carries its parameter values.
There are four types of genes, identified by their ID letters: build genes (B), research genes (R), economy genes (E), and combat genes (C). The gene ID is followed by the values of the parameters that gene needs.
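The chromosome layout described above can be sketched with a few plain data types. The concrete gene parameters shown here are hypothetical; only the structure (states holding typed genes with parameter values) follows the text:

```python
from dataclasses import dataclass, field

@dataclass
class Gene:
    kind: str      # "B"uild, "R"esearch, "E"conomy or "C"ombat gene ID
    params: tuple  # parameter values the gene needs

@dataclass
class State:
    number: int                          # the state marker's number
    genes: list = field(default_factory=list)

@dataclass
class Chromosome:
    states: list                         # Start, State 1 .. State m, End

# Hypothetical partial chromosome: state 1 gathers resources and trains
# troops, state 2 constructs a building.
chromo = Chromosome(states=[
    State(1, [Gene("E", (4,)), Gene("C", (2, "soldier"))]),
    State(2, [Gene("B", ("blacksmith",))]),
])
print(len(chromo.states))  # 2
```

Keeping genes typed by their ID letter makes the genetic operators easy to express later, since gene-replace and gene-biased mutation act only on particular gene types.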
(Figure: partial example of a chromosome.)
Fitness Function
Our goal is to generate a chromosome with a fitness exceeding a target value. When such a chromosome is found, the evolution process ends; this is the fitness-stop criterion. Because there is no guarantee that such a chromosome will be found, evolution also ends after a maximum number of chromosomes has been generated; this is the run-stop criterion. Suitable values for both criteria can be determined by experimentation.
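The two stop conditions above can be sketched as the outer loop of the evolution process. Selection and the genetic operators are abstracted behind a `new_chromosome` callback here, and the toy bit-string problem is purely illustrative:

```python
import random

def evolve(fitness_fn, new_chromosome, target, max_evals, rng):
    """Run evolution until a chromosome's fitness exceeds `target`
    (fitness-stop) or `max_evals` chromosomes have been generated
    (run-stop). Returns the best chromosome and its fitness."""
    best, best_fit = None, float("-inf")
    for _ in range(max_evals):            # run-stop criterion
        c = new_chromosome(rng)
        f = fitness_fn(c)
        if f > best_fit:
            best, best_fit = c, f
        if f > target:                    # fitness-stop criterion
            break
    return best, best_fit

# Toy stand-in problem: maximise the number of ones in an 8-bit string.
rng = random.Random(42)
best, fit = evolve(sum, lambda r: [r.randint(0, 1) for _ in range(8)],
                   target=6, max_evals=500, rng=rng)
print(len(best), fit)
```

The run-stop bound is what makes the process terminate even when the fitness target turns out to be unreachable, which is exactly why the text says both criteria must be tuned experimentally.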
Genetic operators:
- State crossover: selects two parents and copies states from either parent to the child chromosome.
- Gene-replace mutation: selects one parent and replaces economy, research, or combat genes with 25% probability.
- Gene-biased mutation: selects one parent and mutates the parameters of existing economy or combat genes with 50% probability.
- Randomisation: randomly generates a new chromosome.
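Three of the four operators can be sketched compactly. The chromosome representations are assumptions for illustration (a list of states for crossover; a flat list of `(gene_id, params)` pairs for the gene-level mutations); the per-gene probabilities follow the list above:

```python
import random

def state_crossover(p1, p2, rng):
    """Copy each state position from either parent into the child."""
    return [rng.choice(pair) for pair in zip(p1, p2)]

def gene_replace(parent, fresh_gene, rng, p=0.25):
    """Replace economy/research/combat genes with probability p."""
    return [fresh_gene(rng) if g[0] in "ERC" and rng.random() < p else g
            for g in parent]

def gene_biased_mutation(parent, mutate_params, rng, p=0.5):
    """Mutate parameters of existing economy/combat genes with probability p."""
    return [(g[0], mutate_params(g[1], rng))
            if g[0] in "EC" and rng.random() < p else g
            for g in parent]
```

Randomisation (the fourth operator) is simply generating a brand-new chromosome, as the evolution-loop sketch earlier already does; its role is to keep diversity in the population.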
KT: state-based knowledge transfer. The tactics possible during a game mainly depend on the available units and technology, which in RTS games typically depend on the buildings the player possesses. Tactics are therefore distinguished using the Wargus states. All genes grouped in an activated state (one that includes at least one activated gene) in a chromosome are considered a single tactic. In this way, the evolved chromosomes are distilled into state-specific knowledge bases of tactics.
Extracting tactics for a state: the example chromosome contains two tactics. State 1 holds Gene 1.1 (a combat gene that trains a defensive army) and Gene 1.2 (a build gene that constructs a blacksmith); this tactic is inserted into the knowledge base for state 1. Because Gene 1.2 spawns a state change (constructing a blacksmith causes a transition to state 3, as indicated by the state marker in the example chromosome), the subsequent genes become part of a tactic for state 3.
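The extraction walk can be sketched as follows. The `(gene, new_state)` input format is an assumption for illustration: each gene optionally names the state it transitions to, standing in for the chromosome's state markers:

```python
def extract_tactics(genes, start_state=1):
    """Group consecutive genes into one tactic per state, cutting a new
    tactic whenever a gene spawns a state change.
    genes: list of (gene, new_state) with new_state None if no change.
    Returns {state number: list of tactics}."""
    knowledge_bases = {}
    state, tactic = start_state, []
    for gene, new_state in genes:
        tactic.append(gene)
        if new_state is not None:        # gene spawns a state change
            knowledge_bases.setdefault(state, []).append(tactic)
            state, tactic = new_state, []
    if tactic:                            # flush the trailing tactic
        knowledge_bases.setdefault(state, []).append(tactic)
    return knowledge_bases

# The slide's example: two genes in state 1, the blacksmith jumping to
# state 3, after which further genes belong to a state-3 tactic.
kb = extract_tactics([("train defensive army", None),
                      ("construct blacksmith", 3),
                      ("train knights", None)])
print(sorted(kb))  # [1, 3]
```

The resulting per-state knowledge bases are exactly what dynamic scripting then draws its weighted tactics from.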
Performance of dynamic scripting, experiment scenario: the performance of the adaptive agent (controlled by dynamic scripting using the evolved knowledge bases) in Wargus is evaluated by playing it against a static agent. Each game lasted until one of the agents was defeated, or until a certain period of time had elapsed; if the game ended due to the time restriction, the agent with the highest score was considered to have won. After each game, the adaptive agent's policy was adapted.
A sequence of 100 games constituted one experiment. We ran 10 experiments against each of four strategies for the static agent:
- Small Balanced Land Attack (SBLA), on a small map
- Large Balanced Land Attack (LBLA), on a large map
- Soldier's Rush (SR)
- Knight's Rush (KR)
Performance analysis: RTP is the number of the first game in which the adaptive agent outperforms the static agent, so a low RTP value indicates good efficiency for dynamic scripting. The chart plotted the average RTP value for each opponent strategy. The three bars that reached 100 represent runs where no RTP was found (i.e., dynamic scripting was unable to statistically outperform the specified opponent).
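The published experiments locate the turning point with a statistical test; as a deliberately simplified stand-in, the sketch below reports the first game after which the adaptive agent's wins outnumber its losses over a trailing window. The window size and the win-count criterion are assumptions, not the paper's test:

```python
def turning_point(results, window=10):
    """results: per-game outcomes, 1 = adaptive agent won, 0 = lost.
    Returns the 1-based number of the first game at which wins exceed
    losses over the trailing window, or None if never reached."""
    for i in range(window, len(results) + 1):
        wins = sum(results[i - window:i])
        if wins > window - wins:          # more wins than losses
            return i
    return None

# An agent that starts out losing, then adapts and wins consistently:
history = [0] * 8 + [1] * 20
print(turning_point(history))  # 14
```

A low turning point on such a history is the intuition behind "low RTP means efficient adaptation": the fewer games the agent needs before it reliably beats the static opponent, the better.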
Human-level capabilities, revisited (each marked on the slide as achieved or not achieved):
- Real-time response
- Robustness
- Autonomous, intelligent interaction with the environment
- Planning
- Communication in natural language
- Common-sense reasoning
- Creativity
- Learning
Open problems:
- Removing the "cheating" factor from interactive games, i.e., giving undue advantages to AI agents.
- Introducing creativity in AI agents.
- Giving AI agents the capability to reason with human-like common sense.
References:
- Ponsen, M. & Spronck, P. (2006). Automatically generating game tactics via evolutionary learning.
- Spronck, P., Sprinkhuizen-Kuyper, I. & Postma, E. (2004). Online adaptation of game opponent AI with dynamic scripting.
- Sutton, R. & Barto, A. (1998). Reinforcement Learning: An Introduction.