Peer-to-Peer Support for Massively Multiplayer Games [INFOCOM 2004]: Bjorn Knutsson, Honghui Lu, Wei Xu, Bryan Hopkins, UPenn
Zone Federation of Game Servers: a Peer-to-Peer Approach to Scalable Multi-player Online Games [SIGCOMM'04]: Takuji Iimura, Hiroaki Hazeyama, Youki Kadobayashi, Nara Institute of Science and Technology
Outline
  One-line summary
  Motivation
  Solution Approach 1: Peer-to-Peer Support for Massively Multiplayer Games
  Solution Approach 2: Zone Federation of Game Servers
  Experiment
  Critique
One-line comment
  These papers present peer-to-peer overlay networks for Massively Multiplayer Games (MMGs), solving the scalability problem by exploiting locality of interest.
Motivation
  Massively Multiplayer Games (MMG)
    Almost all MMGs are RPGs
    Ex> In "The Lord of the Rings", you are Legolas
    Ex> Lineage, World of Warcraft, etc.
      2M players, 180K concurrent players
Motivation
  Existing clustered server-client architecture
    Zone-based partition
    Single point of failure
    Flash crowds
    Over-provisioning
    Lack of flexibility
  [Figure: clustered server-client architecture]
Motivation
  Characteristics of MMG
    Large shared game world
      Immutable landscape information (terrain)
      Mutable objects (food, tools, NPCs)
    Locality of interest
      Limited vision & sensing capabilities
      Limited movement
      Interaction with nearby objects and players
    Self-organizing groups by location
      Party play in RPGs
      Ex> the Fellowship of the Ring in "The Lord of the Rings"
Solution Approach
  P2P overlay
    Scales up and down dynamically
    Self-organizing decentralized system
  Divide the entire game world into several regions (peer groups)
    Hash each region name into the P2P key space
  A coordinator manages each region
    Coordinates shared objects
    Root of the region's multicast tree
    Distribution server for the map
    Also one of the players
  Built on PASTRY (P2P overlay) and SCRIBE (multicast support)
  [Figure: game world split into Regions 1-3; players, objects (NPCs or food), one coordinator per region, multicast within a region, direct connections between players]
Scenario: mapping region to coordinator
  Regions and player machines are mapped to the key space
  A region is managed by the successor machine in the key space (sketch below)
  [Figure: nodes A, B, C on the key ring; a region hashed to key 314 is managed by its successor]
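A minimal sketch of this mapping, assuming a toy 32-bit key space and the successor rule the slide describes (Pastry actually uses 128-bit keys and routes to the numerically closest node; all names here are hypothetical):

```python
import hashlib
from bisect import bisect_left

KEY_BITS = 32  # toy key space; Pastry really uses 128-bit keys

def key(name: str) -> int:
    """Hash a region name or node name onto the circular key space."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest[:4], "big") % (1 << KEY_BITS)

class Ring:
    """Minimal consistent-hashing ring standing in for the Pastry DHT."""
    def __init__(self, node_names):
        self.nodes = sorted((key(n), n) for n in node_names)

    def successor(self, k: int) -> str:
        """First node at or after key k, wrapping around the ring."""
        keys = [nk for nk, _ in self.nodes]
        i = bisect_left(keys, k)
        return self.nodes[i % len(self.nodes)][1]

    def coordinator_of(self, region: str) -> str:
        """A region's coordinator is the successor of its hashed name."""
        return self.successor(key(region))

ring = Ring(["A", "B", "C"])
print(ring.coordinator_of("region(3,1,4)"))  # prints one of A, B, C
```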
Scenario: interaction between nodes
  Blue arrows denote communication between a player and a coordinator
  Except for nodes E and F, every node is both a coordinator and a player
  [Figure: nodes A, B, C each coordinating one region while playing in another]
Scenario: node join
  Player D on node 7 joins
  Rely on the DHT to relocate the peer-server
  [Figure: node D (key 7) joins the ring and takes over the keys that now map to it]
Scenario: node leave
  Node A (key 9) leaves
  The peer-server is relocated to the succeeding node (key 10)
  [Figure: node A leaves; its region is taken over by the successor]
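Continuing the toy Ring sketch above (reusing its key() helper and Ring class), join and leave reduce to recomputing successors; only regions whose keys fall between the affected node and its neighbor change coordinator:

```python
def join(ring: Ring, new_node: str, regions) -> None:
    """Node joins: recompute coordinators; only regions whose keys fall
    between the new node and its predecessor change hands."""
    before = {r: ring.coordinator_of(r) for r in regions}
    ring.nodes = sorted(ring.nodes + [(key(new_node), new_node)])
    for r in regions:
        now = ring.coordinator_of(r)
        if now != before[r]:
            print(f"{r}: handed over {before[r]} -> {now}")

def leave(ring: Ring, node: str, regions) -> None:
    """Node leaves: its regions fall to the succeeding node on the ring."""
    before = {r: ring.coordinator_of(r) for r in regions}
    ring.nodes = [(k, n) for k, n in ring.nodes if n != node]
    for r in regions:
        now = ring.coordinator_of(r)
        if now != before[r]:
            print(f"{r}: handed over {before[r]} -> {now}")

regions = ["region(3,1,4)", "region(1,2,3)"]
join(ring, "D", regions)   # D takes over regions now mapping to it
leave(ring, "A", regions)  # A's regions fall to its successor
```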
Scenario: replica and coordinator migration
  Game states are replicated at replicas (the succeeding nodes)
  The coordinator keeps the replicas consistent on every update
  After migration, the new coordinator forwards updates to the old one until the game-state transfer is completed
  Recovery time depends on both the size of the game state and the network
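A hedged sketch of that handoff rule (class and method names are hypothetical): the new coordinator serves updates immediately but mirrors them to the old coordinator until the bulk transfer is done, so a failure mid-transfer loses nothing:

```python
class OldCoordinator:
    """Stub for the previous coordinator; stays authoritative mid-transfer."""
    def __init__(self, state):
        self.state = dict(state)
    def on_update(self, obj_id, value):
        self.state[obj_id] = value

class NewCoordinator:
    """Applies updates immediately, but mirrors them to the old coordinator
    until the bulk game-state transfer has completed."""
    def __init__(self, old: OldCoordinator):
        self.old = old
        self.state = {}
        self.transfer_done = False

    def on_update(self, obj_id, value):
        self.state[obj_id] = value
        if not self.transfer_done:
            self.old.on_update(obj_id, value)  # keep the old copy consistent

    def on_transfer_complete(self, snapshot):
        # Updates applied during the transfer win over the snapshot.
        for obj_id, value in snapshot.items():
            self.state.setdefault(obj_id, value)
        self.transfer_done = True

old = OldCoordinator({"apple": "on_ground"})
new = NewCoordinator(old)
new.on_update("apple", "eaten")        # arrives mid-transfer
new.on_transfer_complete(old.state)    # bulk transfer finishes
assert new.state["apple"] == "eaten"
```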
Solution Approach (Zone Federation)
  Division of the coordinator role
    Zone owner
      Sends state changes to zone members
      Aggregates modifications from zone members
      Keeps changes consistent
    Data holder
      Stores the zone name, the current zone owner, and the zone data
Solution Approach (Zone Federation)
  Separation of the zoning layer from the DHT network
    Flexibility of zone ownership
      One node can own several data holders (really?)
      Enables dynamic zone allocation
    Direct connection between owner and members
      Reduces latency
      No crossing several DHT hops per update
Scenario: zone owner and data holder
  Data holder: placed at the same location the coordinator would be (by DHT key)
  Zone owner: whichever node first updates the data at the data holder
  [Figure: players look up the owner via the data holder, then send data updates to the owner directly]
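A sketch of that two-step protocol under the same assumptions (hypothetical names): resolve the zone's data holder in the DHT, ask it for the current owner (claiming ownership on first update), then send updates to the owner over a direct connection:

```python
import hashlib

def zone_key(name: str) -> int:
    """Zone names hash into the DHT key space, placing the data holder."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest()[:4], "big")

class Player:
    def __init__(self, name: str):
        self.name = name
        self.view = {}              # this player's copy of the zone state
    def apply(self, update: dict):
        self.view.update(update)

class DataHolder:
    """DHT-placed record for one zone: remembers the current zone owner."""
    def __init__(self, zone: str):
        self.zone = zone
        self.owner = None
    def resolve_owner(self, requester: Player) -> Player:
        if self.owner is None:      # first updater becomes the zone owner
            self.owner = requester
        return self.owner

def send_update(holders: dict, zone: str, player: Player, update: dict):
    holder = holders[zone]          # stand-in for a DHT lookup by zone_key(zone)
    owner = holder.resolve_owner(player)
    owner.apply(update)             # direct connection: no per-update DHT hops

holders = {"forest": DataHolder("forest")}
alice, bob = Player("alice"), Player("bob")
send_update(holders, "forest", alice, {"apple": "eaten"})  # alice becomes owner
send_update(holders, "forest", bob, {"sword": "dropped"})  # routed to alice
assert holders["forest"].owner is alice
```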
Experimental assumptions
  Prototype implementation of "SimMud"
  Game states: RPG play modeled to generate its own trace
    Position changes: multicast within the group (region) every 150 ms (Ex> Quake 2: every 50 ms)
    Player-object interaction (coordinator-player): eat every 20 sec
    Player-player interaction: fight every 20 sec
    Changing region (multicast group): every 40 sec
  Region: 200x300 grid
  Map and object size: 2 * 6 KB
  Maximum simulation size constrained by memory to 4000 virtual nodes (players)
  No player join & leave, no optimizations
Experimental results
  Around 6 hops maximum; delay about 400 ms
  Average message rate: 70/sec (10 position updates * 7 per sec)
  At the same density, population growth makes little difference
  Delay also increases slowly [O(log n)]
Experimental results
  99% of messages are position updates; region changes take the most bandwidth
  Effect of population: ran with 1000 players, 25 per region; position updates increase linearly
  Non-uniform player distribution hurts performance
  Message rate of object updates is higher than that of player-player interaction:
    object updates are multicast in the region and sent to the replicas,
    while player-player interaction affects only the players involved
Summary of results
  Feasibility of the design
  Average delay: 150 ms
  Bandwidth requirement: 7.2 KB/sec average, 22.34 KB/sec peak
Critique – first paper
  Strong points
    P2P approach to MMG architecture design
    Good evaluation
      SimMud prototype
      Real surveys of several game players
      Estimation of RPG game characteristics
  Weak points
    Scalability
      Static zone partition
      Coordinator node bears too much burden
    Heterogeneous peer nodes
Critique – second paper
  Strong points
    Enabled dynamic zone allocation
      One node can own several data holders (zones)
    Reduced latency between zone owner and members
  Weak points
    Robustness
      Owner fault: state goes missing; replicas?
    Lack of experiments
      Zone-owner change: old owner's update cost, new owner's download cost, time delay of succession
    Cheating problem
      The node that updates zone data becomes the zone owner
Critique
  New idea: dynamic division of zones
    Data holder coverage is statically assigned, but data owner coverage is dynamic
    Sometimes users crowd into a specific place
      Ex> a Thrall attack group of 300 users; the defending group of 120 users
      One zone then needs more than one owner
    Split the data holder (zone)
      Allocate several zone owners to one zone
    Several instances of one zone
      No world-server division; a crowded zone may be split into two parallel zones
      Several "Caribbean Bay" instances!!
      Already done by "Guild Wars" (ArenaNet); however, it is not a P2P approach!