Peer-to-Peer Support for Massively Multiplayer Games
[INFOCOM 2004] Bjorn Knutsson, Honghui Lu, Wei Xu, Bryan Hopkins, UPenn
Zone Federation of Game Servers: a Peer-to-Peer Approach to Scalable Multi-player Online Games
[SIGCOMM'04] Takuji Iimura, Hiroaki Hazeyama, Youki Kadobayashi, Nara Institute of Science and Technology
Outline
- One-line summary
- Motivation
- Solution Approach 1: Peer-to-Peer Support for Massively Multiplayer Games
- Solution Approach 2: Zone Federation of Game Servers
- Experiment
- Critique
One-line summary
These papers present a peer-to-peer overlay network for Massively Multiplayer Games (MMGs) that addresses the scalability problem by exploiting locality of interest.
Motivation
Massively Multiplayer Games (MMGs)
- Most MMGs are RPGs
- Ex> In "The Lord of the Rings", you are Legolas. >_<
- Ex> Lineage, World of Warcraft, etc.: 2M players, 180K concurrent players
Motivation
Existing clustered server-client architecture
- Zone-based partitioning
- Single point of failure
- Flash crowds
- Over-provisioning
- Lack of flexibility
[Figure: clustered server-client architecture]
Motivation
Characteristics of MMGs
- Large shared game world
  - Immutable landscape information (terrain)
  - Mutable objects (food, tools, NPCs)
- Locality of interest
  - Limited vision & sensing capabilities
  - Limited movement
  - Interaction with nearby objects and players
- Self-organizing groups by location
  - Party play in RPGs
  - Ex> the Fellowship of the Ring in "The Lord of the Rings"
Solution Approach 1
P2P overlay
- Scales up and down dynamically
- Self-organizing, decentralized system
Divide the entire game world into several regions (peer groups)
- Hash each region name into the P2P key space
- A coordinator manages each region
  - Coordinates the shared objects
  - Acts as the root of the multicast tree
  - Serves as the distribution server for the map
  - Is also one of the players
Built on SCRIBE (multicast support) over PASTRY (P2P overlay)
[Figure: game world divided into Region 1, 2, 3; legend: Player, Object (NPCs or food), Coordinator, Multicast, Direct connection]
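To make the region-to-key mapping concrete, here is a minimal sketch (not from either paper) of hashing a region name into a Pastry-style 160-bit key space; every peer computes the same key locally, with no central directory:

```python
import hashlib

def region_key(region_name: str) -> int:
    """Hash a region name into a 160-bit circular key space,
    as Pastry does with SHA-1. Any peer can compute this locally."""
    digest = hashlib.sha1(region_name.encode("utf-8")).digest()
    return int.from_bytes(digest, "big")

# Every player derives the same key for "Region 1" without asking a server:
print(hex(region_key("Region 1")))
```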
Scenario: mapping regions to coordinators
- Regions and player machines are both mapped into the key space
- A region is managed by its successor machine in the key space
[Figure: ring key space with player machines A, B, C and three regions, each assigned to its successor node]
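A minimal sketch of the successor rule, using a toy ring loosely modeled on the slide's figure (the node and region keys here are illustrative, not the paper's):

```python
import bisect

def successor(node_ids, key):
    """Return the first node at or after `key` on the ring,
    wrapping around past the highest id."""
    ids = sorted(node_ids)
    i = bisect.bisect_left(ids, key)
    return ids[i % len(ids)]

# Toy ring: player machines at keys 3, 8, 14; regions hashed to 1, 5, 10.
nodes = [3, 8, 14]
for region, key in [("A", 1), ("B", 5), ("C", 10)]:
    print(region, "-> node", successor(nodes, key))
# A -> node 3, B -> node 8, C -> node 14
```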
Scenario: interaction between nodes
- Blue arrows denote communication between a player and a coordinator
- Every node is both a coordinator and a player (except nodes E and F)
[Figure: ring key space showing player-coordinator communication]
Scenario: node join
- Player D on node 7 joins
- The system relies on the DHT to relocate the peer server
[Figure: node 7 joins the ring and takes over the region whose key falls to it]
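Continuing the toy ring from the sketch above (same illustrative keys), a join is just a new id on the ring; any region whose key now resolves to the newcomer migrates automatically:

```python
# Node 7 (hosting player D) joins the toy ring from the previous sketch.
nodes.append(7)
for region, key in [("A", 1), ("B", 5), ("C", 10)]:
    print(region, "-> node", successor(nodes, key))
# Region B (key 5) now resolves to node 7; regions A and C are unaffected.
```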
Scenario: node leave
- When a node leaves, its peer server is relocated to the succeeding node
[Figure: a node leaves the ring and its region is taken over by the succeeding node]
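A leave is the mirror image, again continuing the toy ring: removing an id makes its regions resolve to the succeeding node, which already holds a replica of the state (next slide):

```python
# Node 14 leaves the toy ring; lookups simply skip the missing id.
nodes.remove(14)
for region, key in [("A", 1), ("B", 5), ("C", 10)]:
    print(region, "-> node", successor(nodes, key))
# Region C (key 10) wraps around to node 3, its succeeding node on the ring.
```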
Scenario: replica and coordinator migration
- Game state is replicated at the succeeding nodes (replicas); the coordinator keeps the replicas consistent on every update
- The new coordinator forwards updates to the old one until the game-state transfer is completed
- Recovery time depends on both the size of the game state and the network
[Figure: ring key space with replicas placed on the succeeding nodes]
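A minimal sketch of the hand-off logic described above (a hypothetical class with direct method calls standing in for network messages, not the paper's code): while the state transfer is in flight, the new coordinator applies each update locally and forwards it to the old coordinator, so neither copy misses a write:

```python
class Coordinator:
    def __init__(self, state, old_coordinator=None):
        self.state = state                      # region's mutable game state
        self.old = old_coordinator              # set only during migration
        self.transfer_done = old_coordinator is None

    def update(self, obj_id, value):
        self.state[obj_id] = value
        if not self.transfer_done:
            self.old.update(obj_id, value)      # keep the old copy consistent

    def finish_transfer(self):
        self.transfer_done = True               # state fully copied over
        self.old = None                         # old coordinator released
```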
Solution Approach 2
Division of the coordinator's role
- Zone owner
  - Sends state changes to zone members
  - Aggregates modifications from zone members
  - Keeps the changes consistent
- Data holder
  - Stores the zone name, the zone owner's address, and the zone data
Solution Approach 2
Separation of the zoning layer from the DHT network
- Flexible zone ownership
  - One node can own the zones of several data holders (really?)
  - Enables dynamic zone allocation
- Direct connections
  - Reduce latency: no crossing of several hops on the DHT
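A minimal sketch of the lookup path, assuming a generic `dht.get` interface (the paper does not specify this API): the data holder is reached with one DHT lookup, and everything after that is a direct connection to the owner:

```python
import hashlib

def zone_key(zone_name: str) -> int:
    """The data holder is the DHT node responsible for hash(zone name)."""
    return int.from_bytes(hashlib.sha1(zone_name.encode()).digest(), "big")

def find_zone_owner(dht, zone_name: str) -> str:
    """One DHT hop to the data holder returns the owner's address;
    game updates then flow over a direct connection, not the DHT."""
    record = dht.get(zone_key(zone_name))   # {'owner_addr': ..., 'data': ...}
    return record["owner_addr"]
```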
Scenario: zone owner and data holder
- Data holder: the node at the same location (key) as the coordinator
- Zone owner: whoever first updates the data at the data holder
[Figure: a zone member looks up the owner at the data holder, then sends data updates to the owner directly]
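"Whoever first updates the data becomes the owner" is essentially an atomic test-and-set at the data holder. A minimal sketch, reusing `zone_key` from the sketch above and assuming a hypothetical atomic `dht.put_if_absent` primitive (not an API from the paper):

```python
def try_become_owner(dht, zone_name: str, my_addr: str) -> str:
    """First writer claims ownership; later writers learn who the owner is."""
    existing = dht.put_if_absent(zone_key(zone_name), {"owner_addr": my_addr})
    if existing is None:
        return my_addr                  # we won the race: we are the owner
    return existing["owner_addr"]       # someone else already owns the zone
```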
Experimental assumptions
Prototype implementation of "SimMud"
Game states: RPG play modeled to generate its own trace
- Position change: multicast within a group (region) every 150 ms (Ex> Quake 2: every 50 ms)
- Player-object interaction (coordinator-player): eat every 20 sec
- Player-player interaction: fight every 20 sec
- Changing region (multicast group): every 40 sec
Setup
- Region: 200x300 grid
- Map and object size: 2 * 6 KB
- Maximum simulation size constrained by memory to 4000 virtual nodes (players)
- No player join & leave, no optimizations
Experimental result
- Around 6 hops per message; maximum delay about 400 ms
- Average message rate 70/sec: ~7 position updates per second (one per 150 ms) multicast to ~10 players in a region
- At the same density, population growth makes little difference; delay increases slowly, on the order of log n
Experimental result
- 99% of messages are position updates
- Region changes take the most bandwidth
- Effect of population density (run with 1000 players, 25 regions)
  - Position updates increase linearly per node
  - Non-uniform player distribution hurts performance
- Message rate of object updates is higher than that of player-player interactions
  - Object updates are multicast within the region
  - Object updates are also sent to the replicas
  - Player-player interactions affect only the players involved
Experimental result
[Figure-only slide: results plot not preserved]
Summary of results
Feasibility of the design
- Average delay: 150 ms
- Bandwidth requirement: 7.2 KB/sec average, 22.34 KB/sec peak
Critique – first paper
Strong points
- P2P approach to MMG architecture design
- Good evaluation
  - SimMud prototype
  - Real surveys of many game players
  - Estimation of the characteristics of RPG games
Weak points
- Scalability
  - Static zone partitioning
  - The coordinator node carries too heavy a burden
  - Heterogeneous peer nodes
Critique – second paper
Strong points
- Enables dynamic zone allocation: one node can own several data holders (zones)
- Reduces latency between the zone owner and the members
Weak points
- Robustness: if the owner fails, its state is lost (replicas?)
- Lacks experiments on zone-owner changes
  - The old owner's update cost
  - The new owner's download cost
  - The time delay of succession
- Cheating problem: the node that updates the zone data becomes the zone owner
Critique – new idea
Dynamic division of zones (see the sketch below)
- A data holder's coverage is statically assigned, but the zone owner's coverage is dynamic
- Sometimes users crowd into one specific place
  - Ex> a Thrall attack group of 300 users
  - Ex> the defending group of another 120 users
- One zone then needs more than one owner
  - Split the data holder (zone): allocate several zone owners to one zone
  - Several instances of one zone: without dividing the world across servers, a crowded zone may be split into two parallel zones
  - Several "Caribbean Bay" instances!!
- Already offered by ArenaNet's "Guild Wars"; however, it is not a P2P approach!
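A minimal sketch of the proposed instancing idea (the threshold and splitting policy here are invented for illustration): when a zone's population exceeds what one owner can handle, carve the player list into parallel instances, each with its own owner:

```python
MAX_PLAYERS_PER_OWNER = 150     # hypothetical capacity of a single zone owner

def split_zone(players, limit=MAX_PLAYERS_PER_OWNER):
    """Carve a crowded zone's players into parallel instances of the zone,
    each small enough to be handled by a single zone owner."""
    return [players[i:i + limit] for i in range(0, len(players), limit)]

# 300 attackers plus 120 defenders -> three parallel instances of one zone.
instances = split_zone([f"player{i}" for i in range(420)])
print([len(inst) for inst in instances])    # [150, 150, 120]
```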