Bandwidth Requirement & State Consistency in Three Multiplayer Game Architectures
Joseph Pellegrino - University of Delaware
Constantinos Dovrolis - Georgia Tech

BZflag

Overview
- Review two existing gaming architectures: Client-Server (CS) and Peer-to-Peer (PP)
- Evaluate two key factors: bandwidth requirement at the players/server, and the state consistency resolution mechanism & latency
- Introduce a new architecture: Peer-to-Peer with Central Arbiter (PP-CA)
- Experimental results using BZflag

Existing Architectures
Client-Server (CS):
- Simpler state consistency resolution
- Low client bandwidth requirement
- Server is a single point of failure
- High bandwidth requirement at the server (scalability issue)
- Server increases communication latency
Peer-to-Peer (PP):
- Lower communication latency
- Distributed state consistency resolution
- Requires clock synchronization
- Less administrative control for subscription, anti-cheating, etc.

Proposed Architecture
Peer-to-Peer with Central Arbiter (PP-CA):
- Simple consistency resolution (as in the CS model)
- Remove the server's bandwidth bottleneck
- Avoid the extra communication delay of the CS model

State Consistency Resolution
- State S_i(t): the set of all game attributes as seen by player i at time t
- Ideally, all players should have the same state at all times: S_i(t) = S_j(t) for all players i, j and all times t
- Network latency prevents ideal state consistency
- Goal: minimize the duration of inconsistency between any two players
- T_I: Maximum Inconsistency Period (MIP)

Components of the Inconsistency Resolution Period
Player loop (sketched below): read input, receive updates, compute the local view, render
- Loop duration: Player Update Period = T_U
- T_U is fairly stable on the same machine
Server latency T_S: read a player update, check consistency, relay the update to all players
Round-Trip Time (RTT): minimum network latency between a player and the server (excluding any client or server latency)
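
A minimal, runnable sketch of the player loop just described, with hypothetical stand-ins for BZflag's input, network, and rendering calls; it only illustrates where the player update period T_U comes from, not BZflag's actual code.

```python
import time

# Hypothetical stand-ins for BZflag's real input, network, and rendering calls.
def read_input():
    return {"dx": 1.0, "dy": 0.0}          # pretend the player pressed "forward"

def receive_updates():
    return []                              # no remote updates in this toy run

def render(state):
    pass                                   # rendering is a no-op here

def player_loop(iterations=100):
    """Sketch of the player loop; one iteration's duration is the update period T_U."""
    state = {"x": 0.0, "y": 0.0}
    periods = []
    for _ in range(iterations):
        start = time.monotonic()
        cmd = read_input()                 # 1. read local input
        for upd in receive_updates():      # 2. apply updates from the server/peers
            state.update(upd)
        state["x"] += cmd["dx"]            # 3. compute the local view (toy physics)
        state["y"] += cmd["dy"]
        render(state)                      # 4. render the frame
        periods.append(time.monotonic() - start)
    return sum(periods) / len(periods)     # measured T_U, fairly stable per machine

if __name__ == "__main__":
    print(f"average T_U = {player_loop() * 1000:.4f} ms")
```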

Experimental Methodology
Experiments were conducted using BZflag:
- Multiplayer open-source 3D tank battle game, developed in 1992
- Uses the client-server model (multicast if available)
- No state consistency provided
Testbed:
- Pentium III based PCs, Red Hat Linux 7.0, Fast Ethernet switch
- Four players, 10-minute game, 10,000 transactions

Modifications to BZflag
- Original implementation: CS with "smart clients", no consistency resolution
- Created a PP implementation (easy to do with smart clients)
- Implemented the PP-CA architecture:
  - Added state consistency checks at the server/CA (make sure that players do not occupy the same position)
  - Expanded packet headers with timestamps to measure RTT and the inconsistency resolution latency (see the sketch below)
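
A sketch of the kind of header extension mentioned above: a send timestamp carried in each update so that RTT can be computed when the update is echoed back. The field layout and names are illustrative assumptions, not BZflag's actual wire format.

```python
import struct
import time

# Illustrative header: (player_id, sequence, send_timestamp) -- not BZflag's real format.
HEADER = struct.Struct("!HIq")   # uint16 id, uint32 seq, int64 microseconds

def pack_update(player_id: int, seq: int, payload: bytes) -> bytes:
    ts_us = int(time.monotonic() * 1_000_000)        # timestamp taken at send time
    return HEADER.pack(player_id, seq, ts_us) + payload

def rtt_on_echo(echoed: bytes) -> float:
    """RTT in microseconds, computed when our own timestamp comes back."""
    _, _, ts_us = HEADER.unpack(echoed[:HEADER.size])
    return time.monotonic() * 1_000_000 - ts_us

# Example: a loopback "echo" gives an RTT close to zero.
pkt = pack_update(player_id=1, seq=42, payload=b"position update")
print(f"measured RTT ~ {rtt_on_echo(pkt):.1f} us")
```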

Client-Server Architecture
- The server distributes updates between players
- The server maintains the global state
- The server resolves any inconsistencies by accepting/rejecting player updates

CS Bandwidth Requirement
- Players send an update of size L_U to the server in every update period T_U
- The server relays the updates to each of the N players in every update period
- Player bandwidth requirement: roughly L_U / T_U upstream and (N-1) · L_U / T_U downstream
- Server bandwidth requirement: roughly N · L_U / T_U inbound and N · (N-1) · L_U / T_U outbound
- Key point: the server bandwidth requirement increases quadratically with the number of players
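
A small sketch of this arithmetic under the assumptions above (one L_U-byte update per T_U per player, relayed by the server to every other player); the update size and period are illustrative numbers, not measured values.

```python
def cs_bandwidth(n_players: int, l_u_bytes: int = 100, t_u_sec: float = 0.025):
    """Client-server bandwidth model (sketch): per-player and server rates in bytes/s."""
    rate = l_u_bytes / t_u_sec                        # one update stream = L_U / T_U
    player_up = rate                                  # each player sends its own update
    player_down = (n_players - 1) * rate              # server relays the other players' updates
    server_in = n_players * rate                      # server receives N update streams
    server_out = n_players * (n_players - 1) * rate   # and relays N-1 streams to each of N players
    return player_up, player_down, server_in, server_out

for n in (4, 8, 16, 32):
    _, _, s_in, s_out = cs_bandwidth(n)
    print(f"N={n:2d}: server out = {s_out / 1000:8.1f} kB/s (grows roughly as N^2)")
```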

Bandwidth Measurements at Server
Measured bandwidth at the server increases quadratically with the number of players

Consistency Resolution in CS
- The server receives updates from all players
- The server compares the requested moves to the current state
- The server builds a consistent new global state and returns updates to the players
- MIP: T_I = max_i { T_U + T_S + R_i,S }
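
A minimal sketch of this accept/reject step. The "two players cannot occupy the same position" rule comes from the BZflag modifications slide; everything else (names, the grid positions) is an illustrative assumption.

```python
# Sketch of the server-side consistency check: accept an update only if the
# requested position is free; otherwise reject it and keep the global state.
def resolve(global_state: dict, player: str, requested_pos: tuple):
    occupied = {pos for p, pos in global_state.items() if p != player}
    if requested_pos in occupied:                 # inconsistency: two tanks in one cell
        return ("reject", global_state[player])  # player must revert to its old position
    global_state[player] = requested_pos          # build the new consistent global state
    return ("accept", requested_pos)

state = {"p1": (0, 0), "p2": (1, 0)}
print(resolve(state, "p1", (1, 0)))   # ('reject', (0, 0)) -- position already occupied
print(resolve(state, "p1", (2, 0)))   # ('accept', (2, 0))
```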

BZflag Measurements
- Player Update Period T_U: 24-26 ms
- Server Latency T_S: 6-8 µs

RTT & Inconsistency Period Measurements
- RTT: ~120 µs
- T_I = max_i { T_U + T_S + R_i,S }
- Measured MIP: 27 ms (dominated by the player update period, since the server and network latencies are only microseconds)

Peer-to-Peer Architecture
- Players communicate directly with each other
- State consistency requires a distributed agreement protocol

PP Bandwidth Requirement
- A player sends an update to every other player
- A player receives an update from every other player
- Bandwidth requirement per player: (N-1) · L_U / T_U upstream and (N-1) · L_U / T_U downstream

Consistency Resolution in PP
- A player i sends an update to all other players
- Player j checks whether player i's update is consistent with j's state
- If not, player j sends an update to all players, including player i, invalidating i's update
- MIP: roughly two player update periods plus two one-way network latencies (i's update reaching j, plus j's invalidation reaching the other players), i.e. T_I ≈ max_{i,j,k} { T_U,i + R_i,j + T_U,j + R_j,k }
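
A sketch of this distributed check, with assumed message shapes and a toy position-conflict rule: each peer validates incoming updates against its own state and broadcasts an invalidation when it sees a conflict.

```python
# Sketch of peer-to-peer consistency resolution: player j validates player i's
# update against its own state and broadcasts an invalidation on conflict.
def on_update(my_state: dict, sender: str, pos: tuple, broadcast):
    occupied = {p for pid, p in my_state.items() if pid != sender}
    if pos in occupied:
        # Conflict with j's view: tell everyone (including i) to discard i's move.
        broadcast({"type": "invalidate", "player": sender, "revert_to": my_state[sender]})
    else:
        my_state[sender] = pos                    # accept the move locally

sent = []
state_j = {"i": (0, 0), "j": (1, 0)}
on_update(state_j, "i", (1, 0), sent.append)      # i tries to move onto j's position
print(sent)   # [{'type': 'invalidate', 'player': 'i', 'revert_to': (0, 0)}]
```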

Peer-to-Peer with Central Arbiter
- Players communicate with each other as in PP
- The Central Arbiter performs three tasks:
  - Receive updates from all players
  - Compute the new global state
  - If a player's update creates an inconsistency, send a corrected update to all players
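
A sketch of those three tasks in a single step, again with assumed names and a toy conflict rule: the arbiter stays silent for consistent updates and only transmits when a correction is needed.

```python
# Sketch of the PP-CA central arbiter: it receives every update, recomputes the
# global state, and only sends traffic when it has to correct an inconsistency.
def arbiter_step(global_state: dict, sender: str, pos: tuple, send_to_all):
    occupied = {p for pid, p in global_state.items() if pid != sender}
    if pos in occupied:
        # Correction: broadcast the authoritative (unchanged) position of the sender.
        send_to_all({"type": "correction", "player": sender, "pos": global_state[sender]})
    else:
        global_state[sender] = pos                # silent accept: no outbound traffic

out = []
state = {"p1": (0, 0), "p2": (1, 0)}
arbiter_step(state, "p1", (1, 0), out.append)     # conflict -> correction broadcast
arbiter_step(state, "p2", (2, 0), out.append)     # consistent -> no message at all
print(out)
```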

Bandwidth Requirement at PP-CA
Player bandwidth requirement:
- Upstream: roughly N · L_U / T_U (updates to the N-1 other players plus the arbiter)
- Downstream: roughly (N-1) · L_U / T_U from the other players, plus any corrections from the arbiter
Arbiter (server) bandwidth requirement:
- Inbound: roughly N · L_U / T_U (one update stream per player)
- Outbound: depends on the inconsistency rate
  - Best case: close to zero (no inconsistencies, so no corrections are sent)
  - Worst case: on the order of N² · L_U / T_U (every update is corrected and re-broadcast, as in CS)
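
A sketch of this model under the same illustrative assumptions as the CS sketch above, with f the fraction of updates that need correction (f = 0 is the best case, f = 1 the worst case; the experiment on the next slide uses f = 0.5).

```python
def pp_ca_bandwidth(n: int, f: float, l_u: int = 100, t_u: float = 0.025):
    """PP-CA bandwidth model (sketch): f = fraction of updates needing correction."""
    rate = l_u / t_u                                     # one update stream = L_U / T_U
    player_up = n * rate                                 # to the N-1 other players + the arbiter
    player_down = (n - 1) * rate + f * (n - 1) * rate    # peer updates + arbiter corrections
    arbiter_in = n * rate                                # the arbiter hears every player's stream
    arbiter_out = f * n * (n - 1) * rate                 # only corrected updates are re-broadcast
    return player_up, player_down, arbiter_in, arbiter_out

for f in (0.0, 0.5, 1.0):                                # best case, the 50% experiment, worst case
    up, down, a_in, a_out = pp_ca_bandwidth(n=8, f=f)
    print(f"f={f:.1f}: arbiter out = {a_out / 1000:7.1f} kB/s, player down = {down / 1000:5.1f} kB/s")
```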

PP-CA Bandwidth Measurements
- Inconsistencies: location conflicts
- Introduced a 50% inconsistency rate

Maximum Inconsistency Period in PP-CA
- Inconsistencies are detected as in the CS model
- Same MIP as in CS: T_I = max_i { T_U + T_CA + R_i,CA }

Conclusions
The PP-CA model combines the best features of the CS and PP models:
- Lower player-to-player latency (as in PP)
- Simple state consistency resolution protocol (as in CS)
- Improved bandwidth scalability, as long as state inconsistencies are rare