CS5412: ANATOMY OF A CLOUD Ken Birman 1 CS5412 Spring 2012 (Cloud Computing: Birman) Lecture VII

How are clouds structured? CS5412 Spring 2012 (Cloud Computing: Birman) 2  Clients talk to clouds using web browsers or the web services standards  But this only gets us to the outer “skin” of the cloud data center, not the interior  Consider Amazon: it can host entire company web sites (like Target.com or Netflix.com), data (S3), servers (EC2) and even user-provided virtual machines!

Big picture overview CS5412 Spring 2012 (Cloud Computing: Birman) 3  Client requests are handled in the “first tier” by  PHP or ASP pages  Associated logic  These lightweight services are fast and very nimble  Much use of caching: the second tier [Diagram: first-tier services backed by second-tier cache shards, which sit in front of an index and a database]
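To make the caching idea concrete, here is a minimal Python sketch of the cache-aside pattern a first-tier page might follow. The CacheClient and db_lookup names are invented for the example; a real deployment would use a memcached pool or similar cache service and a real database client.

```python
# Minimal cache-aside sketch: a first-tier handler consults the second-tier
# cache before touching the inner database.  CacheClient and db_lookup are
# hypothetical stand-ins for whatever cache and database are actually used.
import time

class CacheClient:
    """Toy in-process cache standing in for a second-tier cache service."""
    def __init__(self):
        self._store = {}              # key -> (value, expiry time)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None or entry[1] < time.time():
            return None               # miss or expired
        return entry[0]

    def put(self, key, value, ttl=30):
        self._store[key] = (value, time.time() + ttl)

def db_lookup(key):
    """Stand-in for the slow, authoritative inner-tier database."""
    time.sleep(0.05)                  # pretend this is expensive
    return f"value-of-{key}"

cache = CacheClient()

def handle_request(key):
    """First-tier logic: answer from cache if possible, else fill the cache."""
    value = cache.get(key)
    if value is None:                 # cache miss: go to the database once...
        value = db_lookup(key)
        cache.put(key, value)         # ...and cache it for later requests
    return value

if __name__ == "__main__":
    print(handle_request("product:42"))   # slow path, fills the cache
    print(handle_request("product:42"))   # fast path, served from cache
```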

Many styles of system CS5412 Spring 2012 (Cloud Computing: Birman) 4  Near the edge of the cloud the focus is on vast numbers of clients and rapid response  Inside we find high-volume services that operate in a pipelined manner, asynchronously  Deep inside the cloud we see a world of virtual computer clusters that are scheduled to share resources and on which applications like MapReduce (Hadoop) are very popular

In the outer tiers replication is key CS5412 Spring 2012 (Cloud Computing: Birman) 5  We need to replicate  Processing: each client has what seems to be a private, dedicated server (for a little while)  Data: as much as possible, that server has copies of the data it needs to respond to client requests without any delay at all  Control information: the entire structure is managed in an agreed-upon way by a decentralized cloud management infrastructure

What about the “shards”? CS5412 Spring 2012 (Cloud Computing: Birman) 6  The caching components running in tier two are central to the responsiveness of tier-one services  The basic idea is to always use cached data if at all possible, so the inner services (here, a database and a search index stored in a set of files) are shielded from “online” load  We need to replicate data within our cache to spread loads and provide fault-tolerance  But not everything needs to be “fully” replicated. Hence we often use “shards” with just a few replicas [Diagram: cache shards in front of the index and database]
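A small sketch of what “a shard with just a few replicas” can mean in practice: each key hashes to one shard, and each shard lives on a small replica group rather than on every node. The node names, shard count, and replication factor below are assumptions made for the example.

```python
# Illustrative sharding: each key hashes to one shard, and each shard is held
# by a small replica group rather than by every node.  The node names, shard
# count, and replication factor are assumptions made for the example.
import hashlib

NODES = [f"cache-node-{i}" for i in range(12)]   # hypothetical tier-two nodes
NUM_SHARDS = 4
REPLICAS_PER_SHARD = 3

def shard_of(key: str) -> int:
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def replicas_for(key: str) -> list:
    """Return the small group of nodes holding copies of this key's shard."""
    start = shard_of(key) * REPLICAS_PER_SHARD
    return NODES[start:start + REPLICAS_PER_SHARD]

if __name__ == "__main__":
    for key in ("cart:alice", "cart:bob", "index:q=shoes"):
        print(key, "->", replicas_for(key))
```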

Sharding used in many ways CS5412 Spring 2012 (Cloud Computing: Birman) 7  The second tier could be any of a number of caching services:  Memcached: a sharable in-memory key-value store  Other kinds of DHTs that use key-value APIs  Dynamo: a service created by Amazon as a scalable way to represent the shopping cart and similar data  BigTable: a very elaborate key-value store created by Google and used not just in tier two but throughout their “GooglePlex” for sharing information  The notion of sharding is cross-cutting  Most of these systems replicate data to some degree
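Dynamo and most DHTs go a step further and place keys on a hash ring (consistent hashing), so nodes can join or leave without remapping most keys. Below is a simplified sketch of that idea; it is not Amazon's or Google's actual implementation, and the node names are made up.

```python
# Simplified consistent-hashing ring of the kind Dynamo-style stores and DHTs
# use: keys and nodes hash onto the same ring, and a key is owned by the next
# few nodes clockwise from its position.  Node names here are made up.
import bisect
import hashlib

def _hash(s: str) -> int:
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, replicas=3):
        self.replicas = replicas
        self.points = sorted((_hash(n), n) for n in nodes)   # ring positions
        self.keys = [p for p, _ in self.points]

    def owners(self, key):
        """The `replicas` nodes clockwise from the key's position on the ring."""
        i = bisect.bisect(self.keys, _hash(key)) % len(self.points)
        return [self.points[(i + j) % len(self.points)][1]
                for j in range(self.replicas)]

if __name__ == "__main__":
    ring = Ring([f"node-{i}" for i in range(8)])
    print(ring.owners("cart:alice"))   # the three nodes responsible for this key
```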

Do we always need to shard data? CS5412 Spring 2012 (Cloud Computing: Birman) 8  Imagine a tier-one service running on 100k nodes  Can it ever make sense to replicate data on the entire set?  Yes, if some kinds of information are so valuable that almost every external request touches them  Must think hard about patterns of data access and use  Some information needs to be heavily replicated to offer blindingly fast access on vast numbers of nodes  The principle is similar to the way Beehive operates: even if we don’t make a dynamic decision about the level of replication required, we want the level of replication to match the level of load and the degree to which the data is needed on the critical path

And it isn’t just about updates CS5412 Spring 2012 (Cloud Computing: Birman) 9  We should also be thinking about patterns that arise when doing reads (“queries”)  Some can just be performed by a single representative of a service  But others might need the parallelism of having several (or even a huge number of) machines do parts of the work concurrently  The term sharding is used for data, but here we might talk about “parallel computation on a shard”
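A sketch of “parallel computation on a shard”: a read-only query is scattered to the servers holding different shards and the partial results are merged. The query_shard function below is a placeholder for whatever RPC a real service would issue.

```python
# Scatter-gather sketch: fan a read-only query out to the servers that hold
# different shards, then merge the partial results.  query_shard() is a
# placeholder for whatever RPC the real service would issue.
from concurrent.futures import ThreadPoolExecutor

def query_shard(shard_id: int, term: str) -> list:
    # Placeholder: pretend each shard returns the matching documents it holds.
    return [f"{term}-result-{shard_id}-{k}" for k in range(2)]

def parallel_query(term: str, num_shards: int = 4) -> list:
    with ThreadPoolExecutor(max_workers=num_shards) as pool:
        partials = pool.map(lambda s: query_shard(s, term), range(num_shards))
    return sorted(doc for partial in partials for doc in partial)  # merge step

if __name__ == "__main__":
    print(parallel_query("diabetes"))
```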

What does “critical path” mean? CS5412 Spring 2012 (Cloud Computing: Birman) 10  Focus on delay until a client receives a reply  The critical path consists of the actions that contribute to this delay [Diagram: a client asks a service instance to “Update the monitoring and alarms criteria for Mrs. Marsh as follows…” and receives “Confirmed”; the response delay seen by the end user would include Internet latencies plus the service response delay]

What if a request triggers updates? CS5412 Spring 2012 (Cloud Computing: Birman) 11  If the updates are done “asynchronously” we might not experience much delay on the critical path  Cloud systems often work this way  Avoids waiting for slow services to process the updates but may force the tier-one service to “guess” the outcome  For example, we could optimistically apply the update to a value from the cache and just hope this was the right answer  Many cloud systems use these sorts of “tricks” to speed up response time
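A sketch of that optimistic trick: the first tier updates its cached copy and replies immediately, while a background worker pushes the authoritative write later. The cache, queue, and db_write function are all stand-ins invented for the example.

```python
# Optimistic asynchronous update: reply using the cached value right away and
# let a background worker push the authoritative write later.  The cache,
# queue, and db_write() below are stand-ins invented for the example.
import queue
import threading
import time

cache = {"inventory:widget": 10}
pending_writes = queue.Queue()

def db_write(key, value):
    time.sleep(0.1)                       # pretend the inner database is slow
    print(f"[db] {key} persisted as {value}")

def background_writer():
    while True:
        key, value = pending_writes.get()
        db_write(key, value)              # happens off the critical path
        pending_writes.task_done()

threading.Thread(target=background_writer, daemon=True).start()

def handle_purchase(key):
    cache[key] -= 1                       # optimistic: assume the write will succeed
    pending_writes.put((key, cache[key]))
    return f"confirmed, {cache[key]} left"   # reply without waiting for the DB

if __name__ == "__main__":
    print(handle_purchase("inventory:widget"))
    pending_writes.join()                 # demo only: wait for the async write
```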

First-tier parallelism CS5412 Spring 2012 (Cloud Computing: Birman) 12  Parallelism is vital to speeding up first-tier services  Key question:  Request has reached some service instance X  Will it be faster… … For X to just compute the response … Or for X to subdivide the work by asking subservices to do parts of the job?  Glimpse of an answer  Werner Vogels, CTO at Amazon, commented in one talk that many Amazon pages have content from 50 or more parallel subservices that ran, in real-time, on your request!
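A sketch of that kind of fan-out: page fragments are requested from many subservices in parallel, and anything that has not arrived by the deadline is replaced with placeholder content. The subservice names, latencies, and deadline are invented; this is not Amazon's actual page pipeline.

```python
# Fan-out sketch: ask many subservices for page fragments in parallel and
# assemble whatever arrives within the deadline.  The subservices, latencies,
# and 8-way fan-out below are invented for the example.
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed
from concurrent.futures import TimeoutError as FuturesTimeout

def call_subservice(name: str) -> tuple:
    time.sleep(random.uniform(0.01, 0.2))        # simulated service latency
    return name, f"<fragment from {name}>"

def render_page(subservices, deadline=0.1):
    """Assemble a page from whichever fragments arrive within `deadline`."""
    fragments = {name: "<placeholder>" for name in subservices}
    with ThreadPoolExecutor(max_workers=len(subservices)) as pool:
        futures = [pool.submit(call_subservice, s) for s in subservices]
        try:
            for fut in as_completed(futures, timeout=deadline):
                name, html = fut.result()
                fragments[name] = html
        except FuturesTimeout:
            pass               # stragglers keep their placeholder content
    return fragments           # (a real service would abandon the late calls)

if __name__ == "__main__":
    services = [f"subservice-{i}" for i in range(8)]
    for name, html in render_page(services).items():
        print(name, html)
```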

What does “critical path” mean? CS5412 Spring 2012 (Cloud Computing: Birman) 13  In this example of a parallel read-only request, the critical path centers on the middle “subservice” [Diagram: the same Mrs. Marsh request now fans out to several subservices before the “Confirmed” reply; the critical path runs through the middle subservice, and the end-user delay again includes Internet latencies plus the service response delay]

With replicas we just load balance CS5412 Spring 2012 (Cloud Computing: Birman) 14 [Diagram: the Mrs. Marsh request again; each subservice is now replicated, so requests can be load-balanced across the replicas before the “Confirmed” reply is returned]

But when we add updates…. CS5412 Spring 2012 (Cloud Computing: Birman) 15 Now the delay associated with waiting for the multicasts to finish could impact the critical path even in a single service [Diagram: execution timeline for an individual first-tier replica of a soft-state first-tier service with replicas A, B, C, D; the replica sends its update multicasts before replying “Confirmed”, and the response delay seen by the end user would also include Internet latencies not measured in our work]

What if we send updates without waiting? CS5412 Spring 2012 (Cloud Computing: Birman) 16  Several issues now arise  Are all the replicas applying updates in the same order? This might not matter unless the same data item is being changed, but then we clearly do need some “agreement” on order  What if the leader replies to the end user but then crashes, and it turns out that the updates were lost in the network? Data center networks are surprisingly lossy at times, and bursts of updates can queue up  Such issues result in inconsistency
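A tiny illustration of why order can matter: the same two updates applied in different orders leave two replicas of one data item with different values.

```python
# Why update order matters: the same two updates, applied in different orders
# at two replicas, leave the replicas with different values.
def apply_all(initial, updates):
    value = initial
    for op in updates:
        value = op(value)
    return value

set_to_five = lambda v: 5          # "set the stock level to 5"
double_it   = lambda v: v * 2      # "double the stock level"

replica_a = apply_all(10, [set_to_five, double_it])   # 5, then doubled -> 10
replica_b = apply_all(10, [double_it, set_to_five])   # doubled, then 5  -> 5

print(replica_a, replica_b)        # 10 5: the replicas have diverged
```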

Eric Brewer’s CAP theorem CS5412 Spring 2012 (Cloud Computing: Birman) 17  In a famous 2000 keynote talk at ACM PODC, Eric Brewer proposed that “you can have just two from Consistency, Availability and Partition Tolerance”  He argues that data centers need very snappy response, hence availability is paramount  And they should be responsive even if a transient fault makes it hard to reach some service. So they should use cached data to respond faster even if the cached entry can’t be validated and might be stale!  Conclusion: weaken consistency for faster response

CAP theorem CS5412 Spring 2012 (Cloud Computing: Birman) 18  A proof of CAP was later introduced by MIT’s Seth Gilbert and Nancy Lynch  Suppose a data center service is active in two parts of the country with a wide-area Internet link between them  We temporarily cut the link (“partitioning” the network)  And present the service with conflicting requests  The replicas can’t talk to each other so can’t sense the conflict  If they respond at this point, inconsistency arises
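The same scenario in miniature, as a toy simulation: two replicas that cannot reach each other both choose to stay available, both accept writes to the same item, and their states diverge. The Replica class is invented purely for illustration.

```python
# The partition scenario in miniature: two replicas that cannot talk both stay
# available, both accept writes to the same item, and their states diverge.
class Replica:
    def __init__(self, name):
        self.name, self.data, self.peer = name, {}, None

    def write(self, key, value, partitioned):
        self.data[key] = value                    # stay available: accept the write
        if not partitioned and self.peer:
            self.peer.data[key] = value           # normally we would propagate it

    def read(self, key):
        return self.data.get(key)

east, west = Replica("east"), Replica("west")
east.peer, west.peer = west, east

east.write("seat:12A", "alice", partitioned=True)    # the link is down, so...
west.write("seat:12A", "bob", partitioned=True)      # ...neither side sees the conflict

print(east.read("seat:12A"), west.read("seat:12A"))  # alice bob: inconsistent replies
```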

Is inconsistency a bad thing? CS5412 Spring 2012 (Cloud Computing: Birman) 19  How much consistency is really needed in the first tier of the cloud?  Think about YouTube videos. Would consistency be an issue here?  What about the Amazon “number of units available” counters. Will people notice if those are a bit off?  Puzzle: can you come up with a general policy for knowing how much consistency a given thing needs?

THE WISDOM OF THE SAGES 20 CS5412 Spring 2012 (Cloud Computing: Birman)

eBay’s Five Commandments 21  As described by Randy Shoup at LADIS 2008 Thou shalt… 1. Partition Everything 2. Use Asynchrony Everywhere 3. Automate Everything 4. Remember: Everything Fails 5. Embrace Inconsistency CS5412 Spring 2012 (Cloud Computing: Birman)

Vogels at the Helm 22  Werner Vogels is CTO at Amazon.com…  He was involved in building a new shopping cart service  The old one used strong consistency for replicated data  New version was built over a DHT, like Chord, and has weak consistency with eventual convergence  This weakens guarantees… but  Speed matters more than correctness CS5412 Spring 2012 (Cloud Computing: Birman)

James Hamilton’s advice 23  Key to scalability is decoupling, loosest possible synchronization  Any synchronized mechanism is a risk  His approach: create a committee  Anyone who wants to deploy a highly consistent mechanism needs committee approval …. They don’t meet very often CS5412 Spring 2012 (Cloud Computing: Birman)

Consistency CS5412 Spring 2012 (Cloud Computing: Birman) 24 Consistency technologies just don’t scale!

But inconsistency brings risks too!  Inconsistency causes bugs  Clients would never be able to trust servers… a free-for-all  Weak or “best effort” consistency?  Strong security guarantees demand consistency  Would you trust a medical electronic-health records system or a bank that used “weak consistency” for better scalability? [Cartoon: Tommy Tenant, holding a Sept 2009 rent check to Jason Fane Properties: “My rent check bounced? That can’t be right!”] CS5412 Spring 2012 (Cloud Computing: Birman) 25

Puzzle: Is CAP valid in the cloud? CS5412 Spring 2012 (Cloud Computing: Birman) 26  Facts: data center networks don’t normally experience partitioning failures  Wide-area links do fail  But most services are designed to do updates in a single place and mirror read-only data at others  So the CAP scenario used in the proof can’t arise  Brewer’s argument about not waiting for a slow service to respond does make sense  Argues for using any single replica you can find  But does this preclude that replica being consistent?

What does “consistency” mean? CS5412 Spring 2012 (Cloud Computing: Birman) 27  We need to pin this basic issue down!  As used in CAP, consistency is about two things  First, that updates to the same data item are applied in some agreed-upon order  Second, that once an update is acknowledged to an external user, it won’t be forgotten  Not all systems need both properties
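A sketch that bundles those two properties into one “primary copy”: every update receives a sequence number (a single agreed-upon order) and is appended to a log before it is acknowledged (so an acknowledged update is not forgotten). The log path and record format are made up for the example.

```python
# Sketch of the two properties bundled into "consistency" here: a primary copy
# gives every update a sequence number (one agreed-upon order) and appends it
# to a log before acknowledging (so an acknowledged update is not forgotten).
# The log path and record format are invented for the example.
import json
import os

class PrimaryCopy:
    def __init__(self, log_path="updates.log"):
        self.log_path = log_path
        self.seq = 0
        self.state = {}

    def update(self, key, value):
        self.seq += 1                               # property 1: agreed-upon order
        record = {"seq": self.seq, "key": key, "value": value}
        with open(self.log_path, "a") as log:       # property 2: durable before ack
            log.write(json.dumps(record) + "\n")
            log.flush()
            os.fsync(log.fileno())
        self.state[key] = value
        return self.seq                             # only now is the ack returned

if __name__ == "__main__":
    primary = PrimaryCopy()
    print(primary.update("alarm:marsh", "check glucose hourly"))
```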

What properties are needed in remote medical care systems? 28 [Diagram: a home healthcare application built on cloud infrastructure; an integrated glucose monitor and insulin pump receives instructions wirelessly, a motion sensor / fall-detector watches the patient, a medication station tracks and dispenses pills, and a healthcare provider monitors large numbers of remote patients]

Which matters more: fast response, or durability of the data being updated? 29  Need: Strong consistency and durability for data [Diagram: a doctor instructs the cloud-hosted service: “Mrs. Marsh has been dizzy. Her stomach is upset and she hasn’t been eating well, yet her blood sugars are high. Let’s stop the oral diabetes medication and increase her insulin, but we’ll need to monitor closely for a week”; the update flows through the cloud infrastructure to the patient records DB]

What if we were doing online monitoring? 30  An online monitoring system might focus on real-time response and value consistency, yet be less concerned with durability [Diagram: execution timeline for an individual first-tier replica of a soft-state first-tier service with replicas A, B, C, D; the replica sends its update multicasts, flushes, and then replies “Confirmed”; the local response delay is short, and the response delay seen by the end user would also include Internet latencies]

Why does monitoring have weaker needs? CS5412 Spring 2012 (Cloud Computing: Birman) 31  When a monitoring system goes “offline”, the device turns on a red light or some other warning indicator  Later, on recovery, the monitoring policy may have changed and a node would need to reload it anyway  Moreover, with in-memory replication we may have a strong enough guarantee for most purposes  Thus if durability costs enough to slow us down, we might opt for a weaker form of durability in order to gain better scalability and faster responses!

This illustrates a challenge! CS5412 Spring 2012 (Cloud Computing: Birman) 32  Cloud systems just can’t be approached in a one-size-fits-all manner  For performance-intensive scalability scenarios we need to look closely at tradeoffs  Cost of a stronger guarantee, versus  Cost of being faster but offering a weaker guarantee  If systems builders blindly opt for strong properties when not needed, we just incur other costs!  Amazon: each 100ms of delay reduces sales by 1%!

Properties we might want CS5412 Spring 2012 (Cloud Computing: Birman) 33  Consistency: Updates in an agreed order  Durability: Once accepted, won’t be forgotten  Real-time responsiveness: Replies with bounded delay  Security: Only permits authorized actions by authenticated parties  Privacy: Won’t disclose personal data  Fault-tolerance: Failures can’t prevent the system from providing desired services  Coordination: actions won’t interfere with one another

Preview of things to come CS5412 Spring 2012 (Cloud Computing: Birman) 34  We’ll see (but later in the course) that a mixture of mechanisms can actually offer consistency and still satisfy the “goals” that motivated CAP!  Data replicated in outer tiers of the cloud, but each item has a “primary copy” to which updates are routed  Asynchronous multicasts used to update the replicas  The “virtual synchrony” model to manage replica set  Pause, just briefly, to “flush” the communication channels before responding to the outside user
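A toy sketch of the last two steps, sending asynchronously but flushing before replying. The ReplicaGroup class is invented for illustration and is not the Isis2 or virtual-synchrony API the course covers later; it only shows the shape of the idea: updates stream to the replicas without waiting, and a brief flush confirms they are all held in memory before the “Confirmed” reply goes out.

```python
# Toy sketch of "send asynchronously, flush before replying": updates stream
# to the replica group without waiting, and only the final flush() blocks
# until every replica holds the update in memory.  ReplicaGroup is invented
# for illustration; it is not the virtual-synchrony API discussed later.
import queue
import threading

class ReplicaGroup:
    def __init__(self, names):
        self.queues = {n: queue.Queue() for n in names}
        self.stores = {n: {} for n in names}
        for n in names:                      # one delivery thread per replica
            threading.Thread(target=self._deliver, args=(n,), daemon=True).start()

    def _deliver(self, name):
        while True:
            key, value = self.queues[name].get()
            self.stores[name][key] = value   # applied in the order sent
            self.queues[name].task_done()

    def multicast(self, key, value):         # asynchronous: returns immediately
        for q in self.queues.values():
            q.put((key, value))

    def flush(self):                         # brief pause: wait for in-memory delivery
        for q in self.queues.values():
            q.join()

group = ReplicaGroup(["A", "B", "C"])

def handle_update(key, value):
    group.multicast(key, value)              # off the critical path...
    group.flush()                            # ...until just before we reply
    return "Confirmed"

if __name__ == "__main__":
    print(handle_update("alarm:marsh", "monitor closely for a week"))
    print(group.stores)                      # every replica saw the update
```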

Fast response with consistency 35 This mixture of features gives us consistency, an in-memory replication guarantee (“amnesia freedom”), but not full durability [Diagram: the same first-tier timeline as before: the replica sends its update multicasts to A, B, C, D, flushes, and only then replies “Confirmed”; the local response delay stays short, while the end-user delay would also include Internet latencies]

Does CAP apply deeper in the cloud? CS5412 Spring 2012 (Cloud Computing: Birman) 36  The principle of wanting speed and scalability certainly is universal  But many cloud services have strong consistency guarantees that we take for granted but depend on  Marvin Theimer at Amazon explains:  Avoid costly guarantees that aren’t even needed  But sometimes you just need to guarantee something  Then, be clever and engineer it to scale  And expect to revisit it each time you scale out 10x

Cloud services and their properties CS5412 Spring 2012 (Cloud Computing: Birman) 37 Service: properties it guarantees  Memcached: no special guarantees  Google’s GFS: file is current if locking is used  BigTable: shared key-value store with many consistency properties  Dynamo: Amazon’s shopping cart; eventual consistency  Databases: snapshot isolation with log-based mirroring (a fancy form of the ACID guarantees)  MapReduce: uses a “functional” computing model, within which it offers very strong guarantees  Zookeeper: Yahoo! file system with sophisticated properties  PNUTS: Yahoo! database system; sharded data, spectrum of consistency options  Chubby: locking service… very strong guarantees

Is there a conclusion to draw? CS5412 Spring 2012 (Cloud Computing: Birman) 38  One thing to notice about those services…  Most of them cost tens or hundreds of millions of dollars to create!  Huge investment is required to build strongly consistent, scalable, high-performance solutions  Oracle’s current parallel database: billions invested  CAP isn’t about telling Oracle how to build a database product…  CAP is a warning to you that strong properties can easily lead to slow services  But thinking in terms of weak properties is often a successful strategy that yields a good solution and requires less effort

Core problem? CS5412 Spring 2012 (Cloud Computing: Birman) 39  When can we safely sweep consistency under the rug?  If we weaken a property in a safety critical context, something bad can happen!  Amazon and eBay do well with weak guarantees because many applications just didn’t need strong guarantees to start with!  By embracing their weaker nature, we reduce synchronization and so get better response behavior  But what happens when a wave of high assurance applications starts to transition to cloud-based models?

Course “belief”? CS5412 Spring 2012 (Cloud Computing: Birman) 40  High assurance cloud computing is just around the corner!  Experts are already doing it in a plethora of services  The main obstacle is that typical application developers can’t use the same techniques  As we develop better tools and migrate them to the cloud platforms developers use, options will improve  We’ll see that these really are solvable problems!