Presentation transcript:

Depot: Cloud Storage with Minimal Trust (OSDI 2010)
Prince Mahajan, Srinath Setty, Sangmin Lee, Allen Clement, Lorenzo Alvisi, Mike Dahlin, and Michael Walfish
The University of Texas at Austin
Presented by: Masoud Saeida Ardekani, 24/11/2011

Motivation
Cloud services are:
– Fault-prone
– Black-box
Clients:
– Hesitate to trust cloud services
– Rely on end-to-end checks of properties

What is Depot?
A cloud storage system with minimal trust.
Eliminates trust for:
– Put availability
– Eventual consistency
– Staleness detection
– Dependency preservation
Minimizes trust for:
– Get availability
– Durability

Depot Overview
Consistency despite faults:
– Add metadata to Puts
– Add local state to nodes
– Add checks on received metadata
[Diagram: nodes N1 and N2 exchange updates via gossip; each Put(k, ·) produces a signed update of the form {nodeID, key, H(value), localClock, History}, which Get(k) can later be checked against]
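To make the metadata concrete, here is a minimal Python sketch (not Depot's actual code) of the signed update a client could attach to each Put. The field names, the serialize helper, and the use of an HMAC in place of Depot's public-key signatures are all illustrative assumptions.

```python
# Illustrative sketch of the per-Put metadata {nodeID, key, H(value),
# localClock, History}. An HMAC stands in for Depot's public-key
# signatures so the example stays self-contained.
import hashlib
import hmac
from dataclasses import dataclass

@dataclass
class Update:
    node_id: str        # node issuing the Put
    key: str            # key being written
    value_hash: str     # H(value)
    local_clock: int    # issuing node's logical clock
    history: dict       # node_id -> clock: updates this Put depends on
    signature: bytes = b""

def serialize(u: Update) -> bytes:
    # Canonical byte encoding of the signed fields (illustrative).
    return repr((u.node_id, u.key, u.value_hash, u.local_clock,
                 sorted(u.history.items()))).encode()

def make_update(node_id: str, signing_key: bytes, key: str, value: bytes,
                local_clock: int, history: dict) -> Update:
    u = Update(node_id, key, hashlib.sha256(value).hexdigest(),
               local_clock, dict(history))
    u.signature = hmac.new(signing_key, serialize(u), hashlib.sha256).digest()
    return u
```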

Checks upon receiving an update
Accept an update u sent by node N only if:
– u is properly signed
– There are no omissions: all updates in u's history are also in the local history
– N's history is not modified: u is newer than any prior update accepted from N
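A sketch of these three checks, continuing the illustrative Update type above; local_history (node_id to highest accepted clock), received, and the verify_sig callback are assumed bookkeeping, not Depot's actual interfaces.

```python
# Receive-time checks: properly signed, no omissions, newer than any
# prior update from the same node. Returns True if u is accepted.
def accept(u, local_history: dict, received: list, verify_sig) -> bool:
    # 1. u must be properly signed by its issuing node.
    if not verify_sig(u):
        return False
    # 2. No omission: every update in u's history is already in the
    #    local history (we hold at least that clock from each node).
    for node, clock in u.history.items():
        if local_history.get(node, -1) < clock:
            return False
    # 3. History not modified: u is newer than any prior update
    #    accepted from the same node.
    if u.local_clock <= local_history.get(u.node_id, -1):
        return False
    local_history[u.node_id] = u.local_clock
    received.append(u)
    return True
```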

But faults can cause forks!
Forks:
– Each node's local view is consistent
– Views across different nodes are inconsistent
– Forks prevent eventual consistency
[Diagram: a faulty node F presents diverging histories to N1 and N2]

Join forks for eventual consistency
A faulty node is treated as two (correct) virtual nodes.
[Diagram: N1 and N2 each adopt one branch of the faulty node's forked history]
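A rough sketch of the fork-join idea under the same illustrative types: two signed updates from one node that do not include each other reveal a fork, and the faulty node is thereafter tracked as two virtual nodes, one per branch. The branch naming and bookkeeping here are assumptions for illustration only.

```python
def is_fork(u_a, u_b) -> bool:
    # Two updates signed by the same node, neither present in the
    # other's history: proof that the node equivocated.
    return (u_a.node_id == u_b.node_id and
            u_a.history.get(u_a.node_id, -1) < u_b.local_clock and
            u_b.history.get(u_b.node_id, -1) < u_a.local_clock)

def join_fork(u_a, u_b, local_history: dict) -> None:
    # Replace the faulty node's entry with two virtual nodes, one per
    # branch, so updates from either branch can still be accepted and
    # correct nodes keep converging.
    faulty = u_a.node_id
    last_common_clock = local_history.pop(faulty, -1)
    local_history[faulty + "/a"] = last_common_clock
    local_history[faulty + "/b"] = last_common_clock
```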

Faults vs. Concurrency
Converting faults into concurrency:
– Allows correct nodes to converge
Concurrency can introduce conflicts:
– Conflicts are already possible due to decentralized servers
– Applications built for high availability already allow concurrent writes
Depot exposes conflicts to the application:
– A Get returns the set of most recent concurrent updates
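The conflict-exposing Get could look roughly like this sketch (same illustrative types as above): it returns every update to the key that no other update to the key causally supersedes, so the application sees all most-recent concurrent writes.

```python
def happened_before(u_a, u_b) -> bool:
    # u_a is included in u_b's history, i.e. u_b causally depends on u_a.
    return u_b.history.get(u_a.node_id, -1) >= u_a.local_clock

def get(key: str, received: list) -> list:
    candidates = [u for u in received if u.key == key]
    # Keep only updates not superseded by any other update to this key:
    # the set of most recent concurrent updates.
    return [u for u in candidates
            if not any(happened_before(u, v) for v in candidates if v is not u)]
```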

Consistency
Causal Consistency (CC):
– If update u1 by a node depends on an update u0 by any node, then u0 becomes observable before u1 at any node.
Fork-Join Causal (FJC) Consistency:
– If update u1 by a correct node depends on an update u0 by any node, then u0 becomes observable before u1 at any correct node.

Ensuring Properties
Safety (FJC consistency):
– Local checks
Liveness:
– Reduce failures to concurrency
– Join forks

Evaluation Setup
8 clients + 4 servers:
– Quad-core Intel Xeon, 2.4 GHz
– 8 GB RAM
– Two local 7200 RPM disks
– 2 clients connected to each other via a 1 Gbps link
Each client issues 1 request per minute.

Variants
Baseline:
– Clients trust the servers (no local data, no checks)
B+hash:
– Clients attach hashes of values and verify hashes
B+hash+Signing:
– Clients sign the values and verify signatures
B+hash+Signing+Store:
– Like B+hash+Signing, plus clients locally store the values they put

Latency of Depot

Cost of Depot

Behavior Under Faults
– 50% Puts, 50% Gets
– Total server failure after 300 seconds

Fork by faulty clients
– 50% reads, 50% writes
– Failure after 300 seconds
– No effect on Get or Put

Conclusion
Depot: cloud storage with minimal trust.
Any node can fail in any way.
Eliminates trust for:
– Put availability
– Eventual consistency
Minimizes trust for:
– Get availability