CAN CLOUD COMPUTING SYSTEMS OFFER HIGH ASSURANCE WITHOUT LOSING KEY CLOUD PROPERTIES? Ken Birman, Cornell University. CS6410.

High Assurance in Cloud Settings
 A wave of applications that need high assurance is fast approaching
   Control of the “smart” electric power grid
   mHealth applications
   Self-driving vehicles…
 To run these in the cloud, we’ll need better tools
 Today’s cloud is inconsistent and insecure by design
 Issues arise at every layer (client… Internet… data center), but we’ll focus on the data center today

Isis2 System
 Core functionality: groups of objects
   … fault-tolerance, speed (parallelism), coordination
 Intended for use in very large-scale settings
 The local object instance functions as a gateway
   Read-only operations are performed on the local state
   Update operations update all the replicas
[Diagram: a new member issues “join myGroup” and receives a state transfer; subsequent updates go to every replica of myGroup]

Isis2 Functionality
 We implement a wide range of basic functions
   Multicast (many “flavors”) to update replicated data
   Multicast “query” to initiate parallel operations and collect the results
   Lock-based synchronization
   Distributed hash tables
   Persistent storage…
 Easily integrated with application-specific logic

Example: Cloud-Hosted Service
[Diagram: a standard Web-Services method invocation arrives at a service replicated on nodes A, B, C and D; a distributed SafeSend request updates the group “state”, and the response is then returned. SafeSend is a version of Paxos.]

Isis 2 System  Elasticity (sudden scale changes)  Potentially heavily loads  High node failure rates  Concurrent (multithreaded) apps  Long scheduling delays, resource contention  Bursts of message loss  Need for very rapid response times  Community skeptical of “assurance properties”  C# library (but callable from any.NET language) offering replication techniques for cloud computing developers  Based on a model that fuses virtual synchrony and state machine replication models  Research challenges center on creating protocols that function well despite cloud “events” 6

Isis2 makes the developer’s life easier
 Benefits of using a formal model:
   The formal model permits us to achieve correctness
   Think of Isis2 as a collection of modules, each with rigorously stated properties
   These help in debugging (model checking)
 Importance of sound engineering:
   The Isis2 implementation needs to be fast, lean and easy to use, in many ways
   The developer must see it as easier to use Isis2 than to build from scratch
   Need great performance under “cloudy conditions”

Isis2 makes the developer’s life easier

Group g = new Group("myGroup");
Dictionary<string, double> Values = new Dictionary<string, double>();
g.ViewHandlers += delegate(View v) {
    Console.Title = "myGroup members: " + v.members;
};
g.Handlers[UPDATE] += delegate(string s, double v) { Values[s] = v; };
g.Handlers[LOOKUP] += delegate(string s) { g.Reply(Values[s]); };
g.SetSecure(myKey);
g.Join();
g.SafeSend(UPDATE, "Harry", 20.75);
List<double> resultlist = new List<double>();
int nr = g.Query(ALL, LOOKUP, "Harry", EOL, resultlist);

 First sets up the group
 Join makes this entity a member. State transfer isn’t shown
 Then the app can multicast and query. The runtime calls back to the “delegates” as events arrive
 Easy to request security (g.SetSecure), persistence, tunnelling on TCP...
 The “consistency” model dictates the ordering seen for event upcalls and the assumptions the user can make
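
The state transfer that Join triggers is omitted in the slide. A minimal sketch of what it could look like, assuming the checkpoint callbacks described in the Isis2 manual (MakeChkpt / SendChkpt / EndOfChkpt on an existing member, LoadChkpt on the joiner); the exact delegate signatures here are approximate:

// Existing member: asked to produce a checkpoint when someone joins
g.MakeChkpt += delegate(View v) {
    g.SendChkpt(Values);    // ship the current key/value state to the joiner
    g.EndOfChkpt();         // mark the checkpoint as complete
};
// Joining member: installs the checkpoint before seeing any new updates
g.LoadChkpt += delegate(Dictionary<string, double> vals) {
    Values = vals;
};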

Consistency model: Virtual synchrony meets Paxos (and they live happily ever after…)
 Membership epochs: each begins when a new configuration is installed and reported by delivery of a new “view” and associated state
 Protocols run “during” a single epoch: rather than overcoming failures, we reconfigure when a failure occurs
[Figure: the same four updates (A=3, B=7, B=B-A, A=A+1) shown in a synchronous execution, a virtually synchronous execution, and a non-replicated reference execution]

Exact comparison
 What I am calling a synchronous (by which I mean “step by step”) execution actually matches what Paxos offers; but Paxos, as we will see, uses quorum operations to implement this without group views
 Virtual synchrony has managed group membership, but also some optimistic steps (early message delivery, which speeds things up but comes at the price of needing a “flush” to sync with the network)
 Analogy: when you write to a file, the IO system often buffers; until you do a file sync, the data might not yet be certain to have reached the disk

Formalizing the model
 We must express the picture in temporal logic equations
 Closely related to state machine replication, but optimistic early delivery of multicasts (optional!) is tricky
 What can one say about the guarantees in that case?
   Either I’m going to be allowed to stay in the system, in which case all the properties hold
   … or the majority will kick me out. Then some properties are still guaranteed, but others might not actually hold for those optimistic early-delivery events
 The user is expected to combine optimistic actions with Flush to mask speculative lines of execution that could turn out to be risky

Core issue: How is replicated data used?
 High availability
 Better capacity through load-balanced read-only requests, which can be handled by a single replica
 Concurrent parallel computing on consistent data
 Fault-tolerance through “warm standby”

Do users find the formal model useful?
 The developer keeps the model in mind and can easily visualize the possible executions that might arise
 Each replica sees the same events
   … in the same order
   … and even sees the same membership when an event occurs. Failures and joins are reported just like multicasts
 All sorts of reasoning is dramatically simplified

But why complicate it with optimism?
 Optimistic early delivery breaks the model somewhat, although Flush allows us to hide the effects
 To reason about such a system, one must (more or less) erase speculative events not covered by Flush. Then you are left with a more standard state machine model
 Yet this standard model, while simpler to analyze, is actually too slow for demanding use cases

Roles for formal methods
 Proving that SafeSend is a correct “virtually synchronous” implementation of Paxos?
 I worked with Robbert van Renesse and Dahlia Malkhi to optimize Paxos for the virtual synchrony model. Despite the optimizations, the protocol is still bisimulation-equivalent to Paxos
 Robbert later coded it in 60 lines of Erlang. His version can be proved correct using NuPRL
 Leslie Lamport was initially involved too. He suggested we call it “virtually synchronous Paxos”
Reference: Virtually Synchronous Methodology for Dynamic Service Replication. Ken Birman, Dahlia Malkhi, Robbert van Renesse. MSR technical report, November 18. Appears as Appendix A in Guide to Reliable Distributed Systems: Building High-Assurance Applications and Cloud-Hosted Services, K.P. Birman, Springer, 2012.

The resulting theory is of limited value
 If we apply it only to Isis2 itself, we can generally get quite far. The model is valuable for debugging the system code because we can detect bad runs
 If we apply it to a user’s application plus Isis2, the theory is often “incomplete”, because it typically omits any model of what it means for the application to achieve its end-user goals
 This pervasive tendency to ignore the user is a continuing issue throughout the community even today. It represents a major open research topic

The fundamental issue
 How do we formalize the notion of application state?
 How do we formalize the composition of a protocol such as SafeSend with an application (such as a replicated DB)?
 No obvious answer… just (unsatisfying) options:
   A composition-based architecture: interface types (or perhaps phantom types) could signal user intentions. This is how our current tool works (see the sketch below)
   An annotation scheme: in-line pragmas (executable “comments”) would tell us what the user is doing
   Some form of automated runtime code analysis
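
As a purely hypothetical illustration of the first option (none of these names exist in Isis2 or in the tool the slide mentions): interface types could separate read-only operations from updates, so a checker can verify that state mutations happen only inside replicated-delivery upcalls:

// Invented marker interfaces: a checker could require that IUpdateOp.Apply
// is invoked only from a multicast delivery handler, never directly.
interface IReadOnlyOp<TState> { object Evaluate(TState state); }  // must not mutate state
interface IUpdateOp<TState>  { void Apply(TState state); }        // sole mutation path

class SetReading : IUpdateOp<Dictionary<string, double>> {
    public string Patient; public double Value;
    public void Apply(Dictionary<string, double> state) { state[Patient] = Value; }
}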

A further issue: Performance causes complexity
 A one-size-fits-all version of SafeSend wouldn’t be popular with “real” cloud developers because it would lack necessary flexibility
   Speed and elasticity are paramount
 SafeSend is just too slow and too rigid: this is the basis of Brewer’s famous CAP conjecture (and theorem)
 Let’s look at a use case in which being flexible is key to achieving performance and scalability

Building an online medical care system
[Diagram: a home healthcare application. An integrated glucose monitor and insulin pump receives instructions wirelessly; a motion sensor / fall-detector and a medication station that tracks and dispenses pills report to the cloud infrastructure, whose monitoring subsystem lets the healthcare provider monitor large numbers of remote patients]

Two replication cases that arise
 Replicating the database of patient records
   Goal: availability despite crash failures, durability, consistency and security
   Runs in an “inner” layer of the cloud: a back-end database (the patient records DB)
 Replicating the state of the “monitoring” framework
   It monitors huge numbers of patients (the cloud platform will monitor many, intervene rarely)
   Goal is high availability and high capacity for “work”
   Probably runs in the “outer tier” of the cloud

Real systems demand tradeoffs
 The database of medical prescription records needs strong replication with consistency and durability
   The famous ACID properties. A good match for Paxos
 But what about the monitoring infrastructure?
   A monitoring system is an online infrastructure
   In the soft-state tier of the cloud, durability isn’t available
   Paxos works hard to achieve durability. If we use Paxos, we’ll pay for a property we can’t really use

Why does this matter?
 Durability is expensive
   Basic Paxos always provides durability
   SafeSend is like Paxos and also has this guarantee
 If we weaken durability, we get better performance and scalability, but we no longer mimic Paxos
 Generalization of Brewer’s CAP conjecture: one-size-fits-all won’t work in the cloud. You always confront tradeoffs

Weakening properties in Isis2
 SafeSend: ordered + durable
 OrderedSend+Flush: ordered, but with “optimistic” delivery
 Send, CausalSend + Flush: FIFO or causal order
 RawSend: unreliable, not virtually synchronous
 Out-of-band file transfer: uses RDMA to move big objects asynchronously; the Isis2 application talks “about” these objects but doesn’t move the bytes (and might not even touch them)
A code sketch of this spectrum follows below.
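
In application code the spectrum looks roughly like this; the calls use the primitives just listed, in the style of the earlier example, and the comments paraphrase the listed guarantees:

g.SafeSend(UPDATE, "Harry", 20.75);    // ordered + durable (Paxos-like)
g.OrderedSend(UPDATE, "Harry", 20.75); // total order, optimistic early delivery
g.CausalSend(UPDATE, "Harry", 20.75);  // causal order
g.Send(UPDATE, "Harry", 20.75);        // FIFO order
g.Flush();                             // pause until prior optimistic sends are stable
g.RawSend(UPDATE, "Harry", 20.75);     // unreliable, not virtually synchronous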

Monitoring in a soft-state service with a primary owner issuing the updates
[Diagram: a client asks the soft-state first-tier service (replicas A, B, C, D) to “update the monitoring and alarms criteria for Mrs. Marsh as follows…”. The primary issues a Send, then a flush, and only then returns “Confirmed”. The execution timeline for an individual first-tier replica shows the local response delay; the response delay seen by the end user would also include Internet latencies]
 g.Send is an optimistic, early-delivering, virtually synchronous multicast. Like the first phase of Paxos
 Flush pauses until prior Sends have been acknowledged and become “stable”. Like the second phase of Paxos
 In our scenario, g.Send + g.Flush ≈ g.SafeSend
 In this situation we can replace SafeSend with Send+Flush. But how do we prove that this is really correct? (A sketch of the pattern follows below.)
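
A minimal sketch of the primary-owner pattern from the timeline, assuming a single primary issues all updates; ReplyToClient is a hypothetical stand-in for whatever Web-Services response mechanism the front end uses:

// Runs at the primary owner when new monitoring criteria arrive.
void UpdateCriteria(string patient, double alarmThreshold)
{
    g.Send(UPDATE, patient, alarmThreshold); // optimistic virtually synchronous multicast
    g.Flush();                               // wait until prior Sends are stable group-wide
    ReplyToClient("Confirmed");              // hypothetical: respond only after Flush returns
}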

Isis2: Send vs. SafeSend
[Graph] Send scales best, but SafeSend with modern disks (RAM-like performance) and small numbers of acceptors isn’t terrible

Jitter: how “steady” are latencies?
[Graph: variance from the mean latency, 32-member case] The “spread” of latencies is much better (tighter) with Send: the two-phase SafeSend protocol is sensitive to scheduling delays

Flush delay as a function of shard size
[Graph] Flush is fairly fast if we only wait for acks from 3-5 members, but slow if we wait for acks from all members. After we saw this graph, we changed Isis2 to let users set the threshold (sketch below)
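
In code, the resulting knob might be used like this; the property name FlushAckThreshold is hypothetical (the slide says the threshold became settable but does not show the API):

g.FlushAckThreshold = 3;          // hypothetical name: acks from 3 members suffice
g.Send(UPDATE, "Harry", 20.75);
g.Flush();                        // now returns once 3 members have acknowledged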

What does the data tell us?
 With g.Send + g.Flush we can have
   Strong consistency, fault-tolerance and rapid responses
   Guarantees similar to Paxos (but not identical)
   Remarkable scalability, with high speed
 The experiment isn’t totally fair to Paxos
   Even 5 years ago, hardware was actually quite different
   With RDMA and NVRAM the numbers all get (much) better!

Sinfonia
 A more recent system somewhat in the same style, but with a very different API and programming model
 Starts with a kind of atomic transaction model, which more recent work (this year’s SOSP!) has made more explicit

Key Sinfonia idea
 Allow the application to submit “mini-transactions”
   Not full SQL with begin / commit / abort, but rather “RISC” in style
 They consist of (see the sketch after this list):
   Precomputation: the application prepares a mini-transaction however it likes
   Validation step: a list of objects and their expected versions; the mini-transaction will not be performed (will abort) if any of these objects has since been updated
   Action step: if validation succeeds, a series of updates to those objects, which generate new versions. The actions are done atomically
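
Sinfonia’s real interface is a C++ API over addressed memory nodes; the toy C# sketch below (all names invented) just makes the compare-then-act structure concrete for a single versioned store:

using System.Collections.Generic;

// Toy, single-process illustration of a mini-transaction: validate versions,
// then atomically apply writes, bumping versions. Real Sinfonia distributes
// this across memory nodes via a totally ordered commit protocol.
class VersionedStore {
    Dictionary<string, (long ver, object val)> data =
        new Dictionary<string, (long ver, object val)>();
    readonly object gate = new object();

    public bool MiniTransaction(List<(string key, long expectedVer)> compares,
                                List<(string key, object newVal)> writes) {
        lock (gate) {                          // stands in for the ordered round
            foreach (var c in compares) {
                if (!data.TryGetValue(c.key, out var cur) || cur.ver != c.expectedVer)
                    return false;              // validation failed: abort with no changes
            }
            foreach (var w in writes) {
                long oldVer = data.ContainsKey(w.key) ? data[w.key].ver : 0;
                data[w.key] = (oldVer + 1, w.newVal);  // atomic apply creates a new version
            }
            return true;                       // commit
        }
    }
}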

Illustration
 The server members are exact replicas, so all either perform the action or reject it. The data replicas therefore stay in identical states
[Diagram: a mini-transaction reaches the Sinfonia core (a kind of state-machine-replicated process group) via a totally ordered protocol, like OrderedSend. Check that: X.ver = 1661, Y.ver = 73, Z.ver = 9908. If so: X = “apple”, Y = 1.22, Z = 2000. Result: Success!]

Precomputation step
 This is what gives Sinfonia its remarkable scalability
 The idea is that we can keep cached copies of the system state, or even entire read-only replicas, and run any code we wish against them
   State = any collection of data with some form of records we can identify, and version numbers on each record
   Code = a database transaction, a graph crawl, whatever…

Why does this give scalability?
 At the edge, we soak up the potentially slow, complex compute costs
   Transactions can be very complex to carry out (joins, projections, aggregation operations, complex test logic…)
   All of this can be done “offline” from the perspective of the core
 Then we either commit the request all at once if the versions still match, or abort it all at once if not, so the Sinfonia core stays in a consistent state
 In fact, the edge can manage perfectly well with a slightly stale cache! (See the sketch below.)
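
A sketch of the resulting edge-side loop, building on the toy VersionedStore above; the cache and the computation are invented for illustration:

// Edge node: precompute against a possibly stale snapshot, then try to commit.
bool committed = false;
while (!committed) {
    var (xVer, xVal) = cache.Read("X");         // hypothetical, possibly stale cache
    object newX = ExpensiveComputation(xVal);   // heavy work, off the core's critical path
    committed = store.MiniTransaction(
        compares: new List<(string key, long expectedVer)> { ("X", xVer) },
        writes:   new List<(string key, object newVal)> { ("X", newX) });
    if (!committed) cache.Refresh();            // versions moved on: refresh and retry
}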

Generality?
 The paper explains how this model can support a great variety of use cases from the web, standard databases, financial settings (banking or stock trading), etc.
 Basically, you just need an adaptor to “represent” your data in Sinfonia format, with data records and version numbering
 And in recent work at VMware, they add sharding (partitioning), automatic support for commutative actions and many other features, and get even more impressive performance

Summary?
 We set out to bring formal assurance guarantees to the cloud
   And succeeded: many systems like Isis2 now exist and are in wider and wider use (Corfu, ZooKeeper, Zab, Raft, libPaxos, Sinfonia, and the list goes on)
   Industry is also reporting successes (e.g., the entire SOSP program this year)
   Formal tools are also finding a major role now (model checking and constructive logic used to prove these kinds of systems correct)
 Can the cloud “do” high assurance?
   At Cornell, and in Silicon Valley, the evidence now is “yes”
   … but even so, much more research is still needed: such systems are slow “on first try”, and much optimization is generally required to make them fast