Chapter Five - Managing Dynamic Shared State


1 Chapter Five - Managing Dynamic Shared State
The Consistency-Throughput Tradeoff 101
Proof of the Tradeoff 104
Design Implications of the Tradeoff 105
Maintaining Shared State Inside Centralized Repositories 107
A File Repository 108
A Repository in Server Memory 110
Virtual Repositories 112
Advantages and Drawbacks to Centralized Repositories

2 Chapter Five - Managing Dynamic Shared State
Reducing Coupling through Frequent State Regeneration 117
Explicit Entity Ownership 118
Systems Using Frequent State Regeneration 122
Reducing the Broadcast Scope 124
Advantages and Drawbacks of Frequent State Regeneration 125

3 Chapter Five - Managing Dynamic Shared State
Dead Reckoning of Shared State 127
Prediction and Convergence 128
Prediction Using Derivative Polynomials 129
Object-Specialized Prediction 134
Convergence Algorithms 137
Addressing Consistency-Throughput through Nonregular Transmissions 140
Advantages and Limitations of Dead Reckoning 142
Conclusion 143
References 144

4 Overview
Dynamic Shared State
Consistency-Throughput Tradeoff
Centralized/Shared Repository
Frequent State Regeneration (Blind Broadcast)
Dead Reckoning

5 What is dynamic shared state?
The dynamic information that multiple hosts must maintain about the net-VE.
Accurate dynamic shared state is fundamental to creating realistic virtual environments; it is what makes a VE “multi-user”.
Managing it is one of the most difficult challenges facing the net-VE designer.
The tradeoff is between resources and realism.

6 The Problem - Latency
(Diagram) Player A sends a position update; after 100 ms of network latency the update arrives at Player B, but by then Player A has already moved on.

7 Consistency-Throughput Tradeoff
“It is impossible to allow dynamic shared state to change frequently and guarantee that all hosts simultaneously access identical versions of that state.”
We can have either a dynamic world or a consistent world, but not both.
Reliable (gets there)
Scalable (group size)
Real-time (on time)

8 Another Example
Player A moves and generates a location update. To ensure consistency, Player A must await acknowledgments. If the one-way network lag is 100 ms, an acknowledgment arrives no earlier than 200 ms after the update was sent. Therefore, Player A can update his state only 5 times per second.
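The arithmetic above can be sketched directly: if every update must be acknowledged before the next is sent, the round trip (twice the one-way lag) caps the consistent update rate.

```python
def max_consistent_update_rate(one_way_latency_s):
    """Maximum updates per second when each update must be
    acknowledged before the next is sent: the round trip is
    twice the one-way latency."""
    return 1.0 / (2.0 * one_way_latency_s)

# 100 ms one-way lag -> 200 ms round trip -> 5 updates per second
print(max_consistent_update_rate(0.100))  # 5.0
```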

9 Design Implications
For a highly dynamic shared state, hosts must transmit more frequent data updates.
To guarantee consistent views of the shared state, hosts must employ reliable data delivery.
Available network bandwidth must be split between these two constraints.

10 Tradeoff Spectrum
System characteristic: Absolute Consistency vs. High Update Rate
View consistency: identical at all hosts | determined by data received at each host
Dynamic data support: low (limited by consistency protocol) | high (limited only by available bandwidth)
Network infrastructure requirements: low latency, high reliability, limited variability | heterogeneous network possible
Number of participants supported: low | potentially high

11 Managing Shared States
Techniques, from more consistent to more dynamic:
Shared Repositories → Blind Broadcast → Dead Reckoning

12 Centralized / Shared Repositories
Maintain shared state data in a centralized location.
Protect shared states via a lock manager to ensure ordered writes.
Three techniques:
Shared File Directory
Repository in Server Memory
Distributed Repository (Virtual Repository)

13 Shared File Directory
Absolute consistency! Only one host can write data to the same file at a time, so locks are required.
Slow! Does not support many users.

14 Shared File Directory (diagram)
Users read and update state held in a centralized data store (the shared file directory); synchronization locks serialize the updates.

15 Server Memory
Faster than a shared file directory because each host does not have to open and close each file remotely.
Locks are not needed; the server arbitrates.
A server crash is catastrophic.
Maintaining constant connections may strain server resources.

16 Server Memory (diagram)
Users read and update state held in the memory of a centralized server, which arbitrates the requests.

17 Virtual Repository
Tries to reduce the bottleneck at the server.
Hosts communicate directly with each other following a protocol of information sharing (you can even tailor who you talk to).
Better fault tolerance (depending on the protocol creating the “virtual files”).
Eventual consistency.

18 Virtual Repository

19 Advantages of Centralized/Shared Repositories
Provides an easy programming model.
Guarantees information consistency.
No sense of data ownership is imposed; any host can update any piece of the shared state.

20 Disadvantages of Shared Repositories
Data access and update operations require an unpredictable amount of time to complete.
Requires considerable communications overhead due to reliable data delivery.
Vulnerable to a single point of failure.

21 Disadvantages of Shared Repositories (cont.)
Push systems may send information where it is not needed.
Limited number of users (or else you overload the server or the network).
One slow user can drag everyone down.

22 Frequent State Regeneration / Blind Broadcasts
The owner of each state transmits the current value asynchronously and unreliably at regular intervals.
Clients cache the most recent update for each piece of the shared state.
Hopefully, frequent state updates compensate for lost packets.
No assumptions are made about what information the other hosts have.
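A minimal owner-side sketch of blind broadcasting: the full entity state is packed and sent over UDP every tick, with no acknowledgment and no delivery guarantee. The wire format and field names here are our own illustration, not a protocol from the chapter.

```python
import socket
import struct

# Illustrative wire format: uint32 entity id plus three float32s
# (x, y, heading), in network byte order.
STATE_FMT = "!Ifff"

def pack_state(entity_id, x, y, heading):
    return struct.pack(STATE_FMT, entity_id, x, y, heading)

def unpack_state(payload):
    return struct.unpack(STATE_FMT, payload)

def broadcast_state(sock, addr, entity_id, x, y, heading):
    # Fire and forget: no acknowledgment, no retransmission.
    sock.sendto(pack_state(entity_id, x, y, heading), addr)

# The owner of entity 42 regenerates its whole state each tick, "blind".
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
broadcast_state(sock, ("127.0.0.1", 9999), 42, 1.0, 2.0, 0.5)
sock.close()
```

A receiver would simply unpack each datagram and overwrite its cached copy of that entity's state, keeping only the most recent update.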

23 Frequent State Regeneration (cont.)
The broadcast is sent “blind” to everyone.
Usually the entire entity state is sent.
No acknowledgements, no assurances of delivery, no ordering of updates.

24 Why use it?
Can’t afford the overhead of a centralized repository.
May not have demanding consistency requirements.

25 Explicit Object Ownership
With blind broadcasts, multiple hosts must not attempt to update an object at the same time.
Each host takes explicit ownership of one piece of the shared state (usually the user’s avatar).
To update an unowned piece of the shared state, either proxy updates or ownership transfer is employed.

26 Explicit Object Ownership (cont.)
Unlike “locks” in shared file servers, multiple updates are allowed until ownership is transferred.
Commonly used in online gaming (DOOM, Diablo), tele-surgery, and in simulating video conferencing.

27 Explicit Object Ownership - Gaining Ownership (diagram)
Host A requests the ball lock from the lock manager and is granted it; Host B’s request is rejected; Host A then updates the ball position.

28 Explicit Object Ownership - Proxy Update (diagram)
Host A requests and is granted the ball lock. Host B asks Host A to update the ball position, and Host A issues the “update ball position” on Host B’s behalf.

29 Explicit Object Ownership - Transferring Ownership (diagram)
Host A updates the ball position while it holds the lock. Host B requests ball ownership; the lock manager notifies Host A of the lock transfer, Host A acknowledges, and ownership is granted to Host B, which then updates the ball position.
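The three message sequences above can be condensed into a small sketch. The class below is a toy lock manager of our own devising (the names and methods are illustrative, not from the chapter): it grants ownership to the first requester, rejects others, and supports explicit transfer.

```python
class LockManager:
    """Toy lock manager granting exclusive ownership of objects."""

    def __init__(self):
        self.owners = {}  # object name -> owning host

    def request(self, obj, host):
        # Grant if unowned (or already owned by the requester);
        # reject otherwise.
        if obj not in self.owners:
            self.owners[obj] = host
        return self.owners[obj] == host

    def transfer(self, obj, from_host, to_host):
        # Only the current owner may hand the object to another host.
        if self.owners.get(obj) != from_host:
            return False
        self.owners[obj] = to_host
        return True

mgr = LockManager()
print(mgr.request("ball", "A"))        # True: Host A gains ownership
print(mgr.request("ball", "B"))        # False: Host B is rejected
print(mgr.transfer("ball", "A", "B"))  # True: ownership moves to B
print(mgr.request("ball", "B"))        # True: B may now update the ball
```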

30 Reducing Broadcast Scope
Each host sends updates to all participants in the net-VE, but reception of extraneous updates consumes bandwidth and CPU cycles.
Need to filter updates, perhaps at a central server (e.g. the RING system), which forwards updates only to those who can “see” each other.
VEOS takes an epidemic approach: each host sends information to specific neighbors.

31 Advantages of Blind Broadcasts
Simple to implement; requires no servers, consistency protocols, or lock manager (except for filtering or shared items).
Can support a larger number of users at a higher frame rate and faster response time.

32 Disadvantages of Blind Broadcasts
Requires a large amount of bandwidth.
Network latency impedes timely reception of updates and leads to incorrect decisions by remote hosts.
Network jitter impedes steady reception of updates, leading to “jerky” visual behavior.
Assumes everyone broadcasts at the same rate; when that is not the case, users may notice (especially between local users and distant destinations).

33 Dead Reckoning Protocols
Transmit state updates less frequently by using past updates to estimate the true shared state.
Prediction: how the object’s current state is computed based on previously received packets.
Convergence: how the object’s estimated state is corrected when another update is received.

34 Dead Reckoning Protocols
Each host estimates entity locations based on past data.
No need for a central server.
Sacrifices accuracy of shared state for more participants.

35 Illustration (diagram)
Between updates, the remote host displays the entity at a predicted position extrapolated from its last known state; when an update arrives, the displayed position converges from the predicted position to the updated position.

36 Prediction Using Derivative Polynomials
Zero order (simplest): x(t + Δt) = x(t) (really state regeneration; assumes the object doesn’t move)
First order (velocity): x(t + Δt) = x(t) + vx Δt
Second order (acceleration): x(t + Δt) = x(t) + vx Δt + ½ ax (Δt)²
Higher order approximations
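The three predictors above, as code. The slide's garbled second-order formula is restored here to the standard Taylor form, including the ½ coefficient on the acceleration term (an assumption on our part; some implementations fold the ½ into the transmitted value).

```python
def predict_zero(x, dt):
    """Zero order: assume the object does not move."""
    return x

def predict_first(x, v, dt):
    """First order: extrapolate along the last known velocity."""
    return x + v * dt

def predict_second(x, v, a, dt):
    """Second order: also account for the last known acceleration."""
    return x + v * dt + 0.5 * a * dt ** 2

# Last update: position 10.0, velocity 4.0/s, acceleration 2.0/s^2.
# Where do we draw the entity 0.5 s later?
print(predict_first(10.0, 4.0, 0.5))        # 12.0
print(predict_second(10.0, 4.0, 2.0, 0.5))  # 12.25
```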

37 Hybrid Polynomial Prediction
Need not be absolute: instead of using a fixed prediction scheme, dynamically choose between first or second order based on the object’s history.
Use first order when acceleration is negligible or acceleration information is unreliable (changes frequently).

38 Hybrid Polynomial Prediction
The Position History-Based Dead Reckoning protocol (PHBDR) only includes position in its updates, requiring the host machine to calculate velocity and acceleration from past positions.
This is actually more accurate, as “snapshot” velocities and accelerations can be misleading.
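A sketch of the position-history idea: given only the last three (time, position) samples, derive velocity and acceleration by finite differences. This illustrates the principle, not the published PHBDR algorithm.

```python
def derivatives_from_history(positions, times):
    """Estimate current velocity and acceleration from the last
    three position samples via finite differences."""
    x0, x1, x2 = positions[-3:]
    t0, t1, t2 = times[-3:]
    v01 = (x1 - x0) / (t1 - t0)   # average velocity over first gap
    v12 = (x2 - x1) / (t2 - t1)   # average velocity over second gap
    accel = (v12 - v01) / ((t2 - t0) / 2.0)
    return v12, accel

# Positions 0, 1, 3 at times 0 s, 1 s, 2 s: the object is speeding up.
v, a = derivatives_from_history([0.0, 1.0, 3.0], [0.0, 1.0, 2.0])
print(v, a)  # 2.0 1.0
```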

39 Limitations of Derivative Polynomials
Why not use more terms?
Greater bandwidth required.
Greater computational complexity.
Less accurate prediction, since higher-order terms are harder to estimate and errors have disparate impact.
They also do not take into account the capabilities or limitations of objects.

40 Object-Specialized Prediction
Object behavior may simplify the prediction scheme.
A plane’s orientation angle is determined solely by its forward velocity and acceleration.
Land-based objects need only two dimensions specified.

41 Object-Specialized Prediction
Desired level of detail: we often do not need to be precise with some aspects.
Do we have to accurately model the flicker of the flames of a burning vehicle, or is it enough to say it is on fire?
The same goes for smoke: some VEs need to accurately model smoke, others do not.

42 Convergence Algorithms
Tell us how to correct an inexact prediction: a tradeoff between computational complexity and the perceived smoothness of displayed entities.
(Diagram: the gap between the predicted position and the updated position is the prediction error.)

43 Convergence Algorithms
Zero order, or snap, convergence (diagram): the entity jumps directly from the predicted position to the updated position.

44 Convergence Algorithms
Zero order or snap convergence:
Advantage: simple.
Disadvantage: poorly models the real world; “jumping” entities may distract users.

45 Convergence Algorithms
Linear convergence (diagram): a convergence path blends the predicted track into the updated track.

46 Convergence Algorithms
Linear convergence:
Advantage: avoids jumping.
Disadvantage: does not prevent “sudden” or unrealistic changes in speed or direction.
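The two simplest convergence strategies can be sketched as follows. In `linear_converge` the correction is spread over a fixed number of display frames toward a fixed target, for simplicity; a real implementation would converge toward a moving point on the updated track.

```python
def snap_converge(predicted, updated):
    """Zero-order (snap) convergence: jump straight to the update."""
    return updated

def linear_converge(predicted, target, steps):
    """Linear convergence: interpolate from the mispredicted
    position to the target over a fixed number of frames."""
    return [predicted + (target - predicted) * i / steps
            for i in range(1, steps + 1)]

# Correct a 4-unit prediction error over 4 frames.
print(linear_converge(0.0, 4.0, 4))  # [1.0, 2.0, 3.0, 4.0]
```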

47 Convergence Algorithms
Cubic spline (diagram): a smooth convergence path is fit from points on the predicted track (T−1, T) to points on the updated track (C, C+1).

48 Convergence Algorithms
Cubic spline:
Advantage: smoothest-looking convergence.
Disadvantage: computationally expensive.

49 Convergence Algorithms
The choice of convergence algorithm may vary within a VE depending on the entity type and characteristics.
Hybrids may be used (as in PHBDR).

50 Non-Regular Updates
Slow the update rate when the prediction at the remote hosts is within an error tolerance.
The source host models the prediction algorithm used by the remote hosts and only transmits an update when an error threshold is reached or a timeout period expires.
Entities must have a “heartbeat”; otherwise you cannot distinguish live entities from ones that have left the system.
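The source-side logic above can be sketched in a few lines: mirror the remote hosts' predictor locally and transmit only when the prediction error exceeds a threshold or the heartbeat interval expires. The class, names, and parameters are illustrative, not from the chapter.

```python
class DeadReckonedSource:
    """Source-side sketch of threshold-plus-heartbeat transmission."""

    def __init__(self, threshold, heartbeat_s):
        self.threshold = threshold
        self.heartbeat_s = heartbeat_s
        self.last_sent = None  # (time, position, velocity)

    def should_send(self, now, true_pos):
        if self.last_sent is None:
            return True                       # first update always goes out
        t0, p0, v0 = self.last_sent
        if now - t0 >= self.heartbeat_s:
            return True                       # heartbeat timeout
        predicted = p0 + v0 * (now - t0)      # what remote hosts display
        return abs(true_pos - predicted) > self.threshold

    def maybe_send(self, now, true_pos, true_vel):
        if self.should_send(now, true_pos):
            self.last_sent = (now, true_pos, true_vel)
            return True   # an update packet would be transmitted here
        return False

src = DeadReckonedSource(threshold=1.0, heartbeat_s=5.0)
print(src.maybe_send(0.0, 0.0, 2.0))   # True  (first update)
print(src.maybe_send(1.0, 2.1, 2.0))   # False (error 0.1 within tolerance)
print(src.maybe_send(2.0, 6.0, 2.0))   # True  (error 2.0 exceeds threshold)
```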

51 Advantages of Dead Reckoning
Reduces bandwidth requirements because updates are sent less frequently.
Supports a potentially larger number of players.
Each host does independent calculations.

52 Disadvantages of Dead Reckoning
Not all hosts share identical state about each entity.
Protocols are more complex to develop, maintain, and evaluate.
Must be customized for object behavior to achieve the best results.

53 Disadvantages of Dead Reckoning
Must have convergence to cover prediction errors; poor convergence methods lead to jerky movements and distract from immersion.
Collision detection is difficult to implement.

54 Conclusions
Shared state maintenance is governed by the Consistency-Throughput Tradeoff.
Three broad types of maintenance:
Centralized/Shared Repository
Frequent State Regeneration (Blind Broadcast)
Dead Reckoning
The correct choice relies on balancing many issues, including bandwidth, latency, data consistency, reproducibility, and computational complexity.

