Slide 1: RAMCloud: Assumptions, Hypotheses, Hopes
Mendel Rosenblum, Stanford University
April 1, 2010 · RAMCloud Assumptions

Slide 2: Talk Agenda
● Present the assumptions we are making in RAMCloud
  - Infrastructure for data centers 5 years in the future
● Hardware and workload assumptions
● Elicit feedback
  - Unconscious assumptions
  - Hopelessly wrong assumptions
● Disclaimer: Goal #1 of RAMCloud is research
  - Research agenda: pushing latency and scale
  - Significant risk is OK
  - Version 1.0 design decisions (OK to leave stuff out, oversimplify)
Slide 3: Network Assumptions
● Assumption: network latency is going down
  - Microsecond latencies require low-latency switches & NICs
  - Encouraging: Arista, 0.6 μs/switch
  - Lots of discouraging signs: chicken-and-egg problem
● Assumption: no significant oversubscription
  - Scale-based solutions require high bisection bandwidth (example: recovery)
● Actively engaging networking researchers and industry partners
● Vote: Are we crazy?
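The microsecond-latency assumption can be sanity-checked with simple arithmetic. Only the 0.6 μs/switch Arista figure comes from the slide; the NIC and software overheads below are hypothetical placeholders:

```python
# Back-of-the-envelope RPC latency budget inside a data center.
# Assumed figures: 0.6 us/switch (from the slide); NIC and software
# overheads are illustrative guesses, not measurements.

def rpc_latency_us(switch_hops, switch_us, nic_us, software_us):
    """Round-trip estimate: each direction crosses every switch hop,
    both NICs, and pays per-end software overhead."""
    one_way = switch_hops * switch_us + 2 * nic_us + software_us
    return 2 * one_way

# Three-hop path (ToR -> spine -> ToR) with cut-through switches.
total = rpc_latency_us(switch_hops=3, switch_us=0.6, nic_us=0.5, software_us=1.0)
print(f"round-trip estimate: {total:.1f} us")  # prints 7.6 us
```

Even with sub-microsecond switches, NIC and software overheads dominate the budget, which is why low-latency NICs appear alongside switches in the assumption.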
Slide 4: Workload Assumptions
● Less than 2-second recovery time is fast enough
● Applications will tolerate 2-second hiccups
● Vote: Will enough important applications tolerate 2-second pauses when failures happen?
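The 2-second target only works together with the scale assumption from the previous slide: recovery must be spread across many machines. A sketch, with hypothetical sizes and bandwidths (the slide gives neither):

```python
# Why sub-2-second recovery needs massive parallelism: a single node
# cannot stream an entire crashed server's DRAM fast enough.
# 64 GB server size and 1 GB/s per-node bandwidth are assumed figures.

def recovery_time_s(data_gb, peers, per_node_gb_per_s):
    """Time to reload a crashed server's data when `peers` nodes each
    stream an equal shard in parallel."""
    shard_gb = data_gb / peers
    return shard_gb / per_node_gb_per_s

print(recovery_time_s(64, peers=1, per_node_gb_per_s=1.0))    # one node: 64 s
print(recovery_time_s(64, peers=100, per_node_gb_per_s=1.0))  # 100 nodes: 0.64 s
```

Scattering each server's backup data across ~100 peers brings recovery under the 2-second bound, but all those shards move at once, which is exactly the high-bisection-bandwidth requirement.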
Slide 5: Server Assumptions
● Assumption: simultaneous failure of all servers is unlikely (i.e. all DRAMs can't lose their contents at once)
  - Requires an uninterruptible power supply (example: Google's battery backup per machine)
  - Need only enough power to transfer write buffers to non-volatile storage
  - Want to avoid non-volatile storage latency in the write request path
● Vote: What do you think?
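The "only enough to transfer write buffers" point implies the battery hold-up time is small. A rough illustration, with assumed buffer and disk figures (the slide states only that a brief hold-up suffices):

```python
# How long must per-machine battery power last to drain buffered writes
# to non-volatile storage? Buffer size and disk bandwidth are assumptions.

def holdup_seconds(buffer_mb, disk_mb_per_s):
    """Seconds of battery needed to flush the write buffer to disk."""
    return buffer_mb / disk_mb_per_s

# A few hundred MB of buffered log data vs ~100 MB/s sequential disk writes.
print(holdup_seconds(buffer_mb=256, disk_mb_per_s=100))  # 2.56 s
```

Seconds of battery per machine, not minutes of data-center UPS, is what makes the per-machine approach cheap.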
Slide 6: Assumption: Latency Is Goodness
● High latency is a major application inhibitor
  - Increases complexity, decreases innovation
  - RAMCloud will enable new applications
● Low latency is underappreciated
  - Users won't ask for it but will love it
● Vote: Are we right?
Slide 7: Low Latency: Stronger Consistency?
● Might be able to help with scaling challenges
  - Example: durability, updating multiple in-memory replicas
● Cost of consistency rises with transaction overlap: O ~ R * D
  - O = number of overlapping transactions
  - R = arrival rate of new transactions
  - D = duration of each transaction
● R increases with system scale
  - Eventually, scale makes consistency unaffordable
● But D decreases with lower latency
  - Stronger consistency becomes affordable at larger scale
● Is this phenomenon strong enough to matter?
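The O ~ R * D argument can be made concrete with a small calculation. The specific rates and durations below are hypothetical; only the relationship itself is from the slide:

```python
# Illustrating O ~ R * D: the expected number of concurrently in-flight
# transactions grows with arrival rate R but shrinks with duration D, so
# lower latency buys back consistency at larger scale.
# The 1M txns/s rate and the 10 ms / 10 us durations are assumed examples.

def overlap(arrival_rate_per_s, duration_s):
    """Expected number of overlapping transactions, O = R * D."""
    return arrival_rate_per_s * duration_s

# Disk-era transactions (~10 ms) vs RAM-era transactions (~10 us):
print(overlap(1_000_000, 10e-3))  # ~10,000 overlapping transactions
print(overlap(1_000_000, 10e-6))  # ~10 overlapping transactions
```

A thousand-fold drop in D cancels a thousand-fold rise in R, which is the slide's claim that stronger consistency stays affordable as the system scales.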
Slide 8: Locality Assumptions
● Locality is getting harder to find and exploit (example: Facebook usage)
● Flat model rather than tiered storage
● Vote: Right?
Slide 9: Read/Write Ratio Assumption
● Read requests dominate writes
● Postulate: fast reads will cause applications to use more information, leading to even more reads than writes
● Vote: What do you think?
Slide 10: Replication Assumptions
● Replication isn't needed for performance
  - High throughput and load balancing eliminate the need for object replication
  - Few applications need more than 1M ops/second to a single object
● Vote: Right?
Slide 11: WAN Replication Assumption
● There are interesting applications that don't require cross-data-center replication
  - The speed of light is not our friend
Slide 12: Transaction Assumption
● Multi-object update support is required
  - Distributed applications need support for concurrent accesses
Slide 13: Security Assumptions
● Threat model: RAMCloud servers and network are physically secure
  - Don't need encryption on RPCs, DoS defense, etc.
Slide 14: Data Model Assumption
● A simple table/object model with indexing is sufficient
  - Dictated by speed
  - SQL/relational model can be built on top
Slide 15: Thank You