UCT for Tactical Assault Battles in Real-Time Strategy Games
Radha-Krishna Balla
19 February 2009
Overview
I. Introduction
II. Related Work
III. Method
IV. Experiments & Results
V. Conclusion
I. Introduction
Domain
- RTS games
- Resource production
- Tactical planning
- Tactical Assault battles
RTS game - Wargus
[Screenshot of a typical battle scenario in Wargus, showing two battles between labeled friendly and enemy groups]
Planning problem
- Large state space
- Temporal actions
- Spatial reasoning
- Concurrency
- Stochastic actions
- Changing goals
II. Related Work
Related Work
- Board games (bridge, poker, Go, etc.): Monte Carlo simulations
- RTS games:
  - Resource production: means-ends analysis
  - Tactical planning: Monte Carlo simulations, Nash strategies, reinforcement learning
- Bandit-based problems, Go: UCT
Our Approach
- Monte Carlo simulations
- UCT algorithm
Advantages
- Complex plans from simple abstract actions
- Exploration/exploitation tradeoff
- Handles changing goals
III. Method
Method
- Planning architecture
- UCT algorithm
- Search space formulation
- Monte Carlo simulations
- Challenges
Online Planning Framework
- Components: UCT planner, action dispatcher, Stratagus engine
- Current game state: unit locations and hit points
- Ground actions: Move(unit1, pos1, pos2), Attack(unit1, unit2)
- Abstract game state: group locations and hit points
- Abstract actions: Join(f1, f2), Attack(f1, e1)
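To make the data flow concrete, here is a minimal sketch of how the action dispatcher might expand an abstract Join action into ground Move actions. All class names, field names, and the "march to the other group's position" policy are illustrative assumptions, not the actual planner's code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Move:
    """Ground action: Move(unit1, pos1, pos2)."""
    unit: str
    src: tuple
    dst: tuple

@dataclass(frozen=True)
class Join:
    """Abstract action: Join(f1, f2)."""
    group_a: str
    group_b: str

def dispatch_join(join, groups):
    """Expand an abstract Join into ground Moves: every unit in
    group_b marches to group_a's location (one simple possible policy)."""
    target = groups[join.group_a]["pos"]
    return [Move(u, groups[join.group_b]["pos"], target)
            for u in groups[join.group_b]["units"]]
```

In the full framework the dispatcher also works in the other direction, abstracting unit locations and hit points back into group state for the planner.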
Abstraction
- Abstract state space: grouping of units
- Abstract actions: Join(G), Attack(f, e)
[Diagram: friendly groups f1, f2, f3 and enemy groups e1, e2]
UCT Algorithm
- Monte Carlo simulation to generate subsequent states
- Search tree:
  - Root node: current state
  - Edges: available actions
  - Intermediate nodes: subsequent states
  - Leaf nodes: terminal states
- Rollout-based construction
- Value estimates
- Exploration/exploitation tradeoff
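The rollout-based construction above can be sketched as a one-ply UCT chooser: simulate, update value estimates, and balance exploitation against exploration. This is a minimal illustration under assumed constants and a flat (single-level) tree, not the thesis implementation.

```python
import math

def uct_choose(actions, simulate, n_rollouts=2000, c=1.4):
    """Repeatedly pick the action with the best value-plus-exploration
    score, run a Monte Carlo simulation, update its running mean, and
    finally return the most-visited action."""
    counts = {a: 0 for a in actions}
    values = {a: 0.0 for a in actions}
    for t in range(1, n_rollouts + 1):
        def score(a):
            if counts[a] == 0:
                return float("inf")         # try every action at least once
            exploration = c * math.sqrt(math.log(t) / counts[a])
            return values[a] + exploration  # exploitation + exploration
        a = max(actions, key=score)
        r = simulate(a)                     # Monte Carlo rollout reward
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]  # incremental mean
    return max(actions, key=lambda a: counts[a])
```

A deeper tree repeats the same selection rule at every node along each rollout before backing the reward up to the root.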
UCT
[Search-tree illustration with node visit counts and value estimates. Source: Achieving Master Level Play in Computer Go - Sylvain Gelly and David Silver]
UCT Algorithm - Formulae
- Action selection
- Value update
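The two formulae were rendered as images on the original slide. In standard UCT (Kocsis and Szepesvari's UCB1-based rule) they take the following form; this is a reconstruction of the standard equations, not necessarily the exact notation of the talk.

```latex
% Action selection: maximize value estimate plus exploration bonus
a^{*} = \operatorname*{arg\,max}_{a} \left[ Q(s,a) + c \sqrt{\frac{\ln n(s)}{n(s,a)}} \right]

% Value update: incremental mean of the rollout rewards R
n(s,a) \leftarrow n(s,a) + 1, \qquad
Q(s,a) \leftarrow Q(s,a) + \frac{R - Q(s,a)}{n(s,a)}
```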
Exploitation
[Search-tree illustration of the exploitation step. Source: Achieving Master Level Play in Computer Go - Sylvain Gelly and David Silver]
Exploration
[Search-tree illustration of the exploration step. Source: Achieving Master Level Play in Computer Go - Sylvain Gelly and David Silver]
Search Space - Join Actions
[Diagram: Join(f1, f2) merges two friendly groups into a single group f2', transforming state {f1, f2, f3, e1, e2} into {f1, f2', e1, e2}]
Search Space - Attack Actions
[Diagram: Attack(f2', e2) transforms state {f1, f2', e1, e2} into {f1, f2'', e1}, with e2 eliminated and f2' reduced to f2'']
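The branching factor of this search space follows directly: one Join per unordered pair of friendly groups plus one Attack per friendly/enemy pair. A small sketch (the function name and tuple encoding are illustrative) that matches the action counts in Table 1:

```python
from itertools import combinations

def available_actions(friendly, enemy):
    """All abstract actions in a state: Join(fi, fj) for each unordered
    pair of friendly groups, Attack(f, e) for each friendly/enemy pair."""
    joins = [("Join", a, b) for a, b in combinations(friendly, 2)]
    attacks = [("Attack", f, e) for f in friendly for e in enemy]
    return joins + attacks
```

For the 3vs2 scenario this gives C(3,2) = 3 Joins and 3 x 2 = 6 Attacks, 9 actions in total, as listed in the table.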
Monte Carlo Simulations
- Domain-specific, using actual game play
- Join actions
- Attack actions
- Reward calculation via objective function: time or hit points
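The two objective functions can be sketched as rollout-scoring rules, one per planner variant: UCT(t) rewards winning quickly, UCT(hp) rewards winning with hit points to spare. The slide does not give the exact scaling used in the thesis, so these particular forms are assumptions.

```python
def reward_time(elapsed_cycles, won):
    """UCT(t)-style reward: win as fast as possible; a loss scores 0."""
    return 1.0 / (1.0 + elapsed_cycles) if won else 0.0

def reward_hitpoints(remaining_hp, total_hp, won):
    """UCT(hp)-style reward: win keeping as many hit points as possible."""
    return remaining_hp / total_hp if won else 0.0
```

Either function scores the terminal state of a simulated battle, and that scalar is what the UCT value update backs up the tree.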
Domain-specific Challenges
- State space abstraction: grouping of units (proximity-based)
- Concurrency: aggregation of actions
  - Join actions: simple
  - Attack actions: complex (partial simulations)
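Proximity-based grouping can be pictured as connected-components clustering: units closer than some radius end up in the same abstract group. A sketch under assumed details (the radius value and Euclidean-distance criterion are illustrative):

```python
def dist(p, q):
    """Euclidean distance between two (x, y) positions."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def group_units(positions, radius=3.0):
    """positions: {unit_id: (x, y)}. Returns a list of sets of unit ids.
    Each new unit absorbs every existing group it comes within `radius`
    of, so chains of nearby units collapse into one group."""
    groups = []
    for u in positions:
        merged, rest = {u}, []
        for g in groups:
            if any(dist(positions[u], positions[v]) <= radius for v in g):
                merged |= g          # u bridges this group: absorb it
            else:
                rest.append(g)
        groups = rest + [merged]
    return groups
```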
Planning problem - revisited
- Large state space: Abstraction
- Temporal actions, Spatial reasoning: Monte Carlo simulations
- Stochastic actions: UCT (online planning)
- Changing goals: UCT (objective functions)
- Concurrency: Aggregation of actions
IV. Experiments & Results
Experiments

| #  | Scenario | Friendly groups | Friendly composition | Enemy groups | Enemy composition | Join actions | Attack actions | Total actions |
|----|----------|-----------------|----------------------|--------------|-------------------|--------------|----------------|---------------|
| 1  | 2vs2     | 2 | {6,6}     | 2 | {5,5}       | 1 | 4  | 5  |
| 2  | 3vs2     | 3 | {6,2,4}   | 2 | {5,5}       | 3 | 6  | 9  |
| 3  | 4vs2_1   | 4 | {2,4,2,4} | 2 | {5,5}       | 6 | 8  | 14 |
| 4  | 4vs2_2   | 4 | {2,4,2,4} | 2 | {5,5}       | 6 | 8  | 14 |
| 5  | 4vs2_3   | 4 | {2,4,2,4} | 2 | {5,5}       | 6 | 8  | 14 |
| 6  | 4vs2_4   | 4 | {2,4,2,4} | 2 | {5,5}       | 6 | 8  | 14 |
| 7  | 4vs2_5   | 4 | {2,4,2,4} | 2 | {5,5}       | 6 | 8  | 14 |
| 8  | 4vs2_6   | 4 | {2,4,2,4} | 2 | {5,5}       | 6 | 8  | 14 |
| 9  | 4vs2_7   | 4 | {3,3,6,4} | 2 | {5,9}       | 6 | 8  | 14 |
| 10 | 4vs2_8   | 4 | {3,3,3,6} | 2 | {5,8}       | 6 | 8  | 14 |
| 11 | 2vs4_1   | 2 | {9,9}     | 4 | {4,5,5,4}   | 1 | 8  | 9  |
| 12 | 2vs4_2   | 2 | {9,9}     | 4 | {5,5,5,5}   | 1 | 8  | 9  |
| 13 | 2vs4_3   | 2 | {9,9}     | 4 | {5,5,5,5}   | 1 | 8  | 9  |
| 14 | 2vs5_1   | 2 | {9,9}     | 5 | {5,5,5,5,5} | 1 | 10 | 11 |
| 15 | 2vs5_2   | 2 | {10,10}   | 5 | {5,5,5,5,5} | 1 | 10 | 11 |
| 16 | 3vs4     | 3 | {12,4,4}  | 4 | {5,5,5,5}   | 3 | 12 | 15 |

Table 1: Details of the different game scenarios
Planners - UCT
- UCT(t): minimize time
- UCT(hp): maximize hit points
- Number of rollouts: 5000
- Results averaged over 5 runs
Planners - Baselines
- Random
- Attack-Closest
- Attack-Weakest
- Stratagus-AI
- Human
Video - Planning in action
Simple scenario (2 vs 2): UCT(t) optimizes time; UCT(hp) optimizes hit points
Video - Planning in action
Complex scenario (3 vs 4): UCT(t) optimizes time; UCT(hp) optimizes hit points
Results Figure 1: Time results for UCT(t) and baselines.
Results Figure 2: Hit point results for UCT(t) and baselines.
Results Figure 3: Time results for UCT(hp) and baselines.
Results Figure 4: Hit point results for UCT(hp) and baselines.
Results - Comparison
Figures 1-4: Comparison between UCT(t) and UCT(hp) on the time and hit-point metrics.
Results Figure 5: Time results for UCT(t) with varying rollouts.
V. Conclusion
Conclusion
- Hard planning problem
- Requires little expert knowledge
- Supports different objective functions
Future Work
- Computational time (engineering aspects)
- Machine learning techniques
- Beyond Tactical Assault
Thank you