CSE 451: Operating Systems
Distributed File Systems
© 2007 Gribble, Lazowska, Levy, Zahorjan

Distributed File Systems
- The most common distributed services: printing, email, files, computation
- Basic idea of distributed file systems: support network-wide sharing of files and devices (disks)
- Generally provide a “traditional” view: a centralized, shared local file system
- But with a distributed implementation: read blocks from remote hosts, instead of from local disks

Issues
- What is the basic abstraction?
  - A remote file system? open, close, read, write, …
  - A remote disk? read block, write block
- Naming
  - How are files named?
  - Are those names location transparent? (is the file location visible to the user?)
  - Are those names location independent? (do the names change if the file moves? do they change if the user moves?)

Caching
- Caching exists for performance reasons
  - Where are file blocks cached? On the file server? On the client machine? Both?
- Sharing and coherency
  - What are the semantics of sharing?
  - What happens when a cached block/file is modified?
  - How does a node know when its cached blocks are out of date?

Replication
- Replication can exist for performance and/or availability
  - Can there be multiple copies of a file in the network?
  - If there are multiple copies, how are updates handled?
  - What if there’s a network partition and clients work on separate copies?
Performance
- What is the cost of a remote operation?
- What is the cost of file sharing?
- How does the system scale as the number of clients grows?
- What are the performance limitations: network, CPU, disks, protocols, data copying?

Example: Sun Network File System (NFS)
- The Sun Network File System (NFS) has become a common standard for distributed UNIX file access
- NFS runs over LANs (even over WANs, slowly)
- Basic idea: allow a remote directory to be “mounted” (spliced) onto a local directory
  - Gives access to that remote directory and all its descendants as if they were part of the local hierarchy
- Pretty much exactly like a “local mount” or “link” on UNIX, except for implementation and performance (no, we didn’t really learn about these, but they’re obvious)

For instance:
- I mount /u4/bershad on Node1 onto /students/foo on Node2
  - Users on Node2 can then access this directory as /students/foo
  - If I had a file /u4/bershad/myfile, users on Node2 see it as /students/foo/myfile
- Just as, on a local system, I might link /cse/www/education/courses/451/06wi/ as /u4/bershad/451 to allow easy access to my web data from my home directory
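To make the mount idea concrete, here is a minimal Python sketch of the path translation involved; the mount table and the resolve helper below are invented for illustration and are not part of NFS itself.

```python
# Hypothetical illustration of NFS-style mount-point translation:
# a path under the local mount point maps to a path exported by the server.

MOUNTS = {
    # local mount point -> (server, remote directory)
    "/students/foo": ("Node1", "/u4/bershad"),
}

def resolve(local_path: str):
    """Translate a client path into (server, server-side path)."""
    for mount_point, (server, remote_dir) in MOUNTS.items():
        if local_path == mount_point or local_path.startswith(mount_point + "/"):
            suffix = local_path[len(mount_point):]
            return server, remote_dir + suffix
    return None, local_path  # not under any NFS mount: purely local

print(resolve("/students/foo/myfile"))   # ('Node1', '/u4/bershad/myfile')
print(resolve("/etc/passwd"))            # (None, '/etc/passwd')
```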

NFS implementation
- NFS defines a set of RPC operations for remote file access:
  - searching a directory
  - reading directory entries
  - manipulating links and directories
  - reading/writing files
- Every node may be both a client and a server

NFS defines new layers in the Unix file system
- The virtual file system (VFS) provides a standard interface, using v-nodes as file handles
- A v-node describes either a local or a remote file
- [Diagram: the system call interface sits on top of the VFS; the VFS dispatches to UFS for local files (through the buffer cache / i-node table) or to NFS for remote files (via RPCs to other server nodes); a server also handles RPC requests from remote clients and sends responses]
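To make the layering concrete, here is a minimal Python sketch of the VFS idea: a common v-node interface with one implementation backed by the local file system and one that would issue RPCs to an NFS server. The class and method names are illustrative, not the actual kernel interface.

```python
# Sketch of VFS-style dispatch: the system-call layer talks only to v-nodes,
# and each v-node knows whether it is local (UFS) or remote (NFS).

from abc import ABC, abstractmethod

class VNode(ABC):
    """Common interface the VFS layer sees for every open file."""
    @abstractmethod
    def read(self, offset: int, length: int) -> bytes: ...
    @abstractmethod
    def write(self, offset: int, data: bytes) -> int: ...

class LocalVNode(VNode):
    """Backed by the local file system (UFS, via the buffer cache)."""
    def __init__(self, path: str):
        self.path = path
    def read(self, offset, length):
        with open(self.path, "rb") as f:
            f.seek(offset)
            return f.read(length)
    def write(self, offset, data):
        with open(self.path, "r+b") as f:
            f.seek(offset)
            return f.write(data)

class NFSVNode(VNode):
    """Backed by a remote server; each operation becomes an RPC."""
    def __init__(self, server: str, file_handle: bytes):
        self.server, self.file_handle = server, file_handle
    def read(self, offset, length):
        # In a real client this would be a READ RPC sent to self.server.
        raise NotImplementedError("send READ RPC(file_handle, offset, length)")
    def write(self, offset, data):
        raise NotImplementedError("send WRITE RPC(file_handle, offset, data)")
```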

NFS caching / sharing
- On an open, the client asks the server whether its cached blocks are up to date
- Once a file is open, different clients can write it and get inconsistent data
- Modified data is flushed back to the server every 30 seconds
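A rough sketch of that client-side policy (validation on open, periodic write-back); the cache structure and the server calls are made up for illustration, and real NFS clients validate with attribute RPCs and per-file timers.

```python
import time

FLUSH_INTERVAL = 30  # seconds between pushes of dirty blocks to the server

class CachedFile:
    def __init__(self, name, server):
        self.name, self.server = name, server
        self.blocks = {}          # block number -> data
        self.dirty = set()        # block numbers modified locally
        self.cached_mtime = None  # server modify time when blocks were fetched
        self.last_flush = time.monotonic()

    def open(self):
        # On open, ask the server whether cached blocks are still current.
        mtime = self.server.get_mtime(self.name)   # assumed server API
        if mtime != self.cached_mtime:
            self.blocks.clear()                    # discard stale blocks
            self.cached_mtime = mtime

    def write_block(self, num, data):
        self.blocks[num] = data
        self.dirty.add(num)
        self.maybe_flush()

    def maybe_flush(self):
        # Dirty data goes back roughly every FLUSH_INTERVAL seconds, so other
        # clients may read stale data in the meantime.
        if time.monotonic() - self.last_flush >= FLUSH_INTERVAL:
            for num in sorted(self.dirty):
                self.server.write_block(self.name, num, self.blocks[num])  # assumed server API
            self.dirty.clear()
            self.last_flush = time.monotonic()
```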

Example: CMU’s Andrew File System (AFS)
- Developed at CMU to support all of its student computing
- Consists of workstation clients and dedicated file server machines (differs from NFS)
- Workstations have local disks, used to cache files being used locally (originally whole files, subsequently 64 KB file chunks) (differs from NFS)
- Andrew has a single name space: your files have the same names everywhere in the world (differs from NFS)
- Andrew is good for distant operation because of its local disk caching: after a slow startup, most accesses are to the local disk

AFS caching / sharing
- The need to scale required reducing client-server message traffic
- Once a file is cached, all operations are performed locally
- On close, if the file has been modified, it replaces the copy on the server
- The client assumes that its cache is up to date, unless it receives a callback message from the server saying otherwise
  - On file open, if the client has received a callback on the file, it must fetch a new copy; otherwise it uses its locally cached copy (differs from NFS)
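A minimal sketch of that callback logic; the client class, the server calls, and the bookkeeping are invented for illustration.

```python
class AFSClient:
    def __init__(self, server):
        self.server = server
        self.cache = {}               # file name -> locally cached contents
        self.callback_broken = set()  # files the server has invalidated

    def receive_callback(self, name):
        # The server tells us another client stored a new version of this file.
        self.callback_broken.add(name)

    def open(self, name):
        # Use the cached copy unless a callback has been received for it.
        if name not in self.cache or name in self.callback_broken:
            self.cache[name] = self.server.fetch(name)   # assumed server API
            self.callback_broken.discard(name)
        return self.cache[name]

    def close(self, name, data, modified):
        self.cache[name] = data
        if modified:
            # On close, the modified file is stored back at the server, which
            # then breaks callbacks held by other clients for this file.
            self.server.store(name, data)                # assumed server API
```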

Example: Berkeley Sprite File System
- A Unix file system developed at UCB for diskless workstations with large memories (differs from NFS, AFS)
- Treats memory as a huge cache of disk blocks
  - Memory is shared between the file system and VM
- Files are permanently stored on servers
  - Servers have large memories that act as caches as well
- Several workstations can cache blocks of read-only files
- If a file is being written by more than one machine, client caching is turned off and all requests go to the server (differs from NFS, AFS)
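The server-side rule (disable client caching when a file is concurrently write-shared) can be sketched as follows; the bookkeeping below is hypothetical, not Sprite’s actual implementation.

```python
class SpriteServer:
    def __init__(self):
        self.readers = {}  # file -> set of clients with the file open for reading
        self.writers = {}  # file -> set of clients with the file open for writing

    def open(self, client, name, for_write):
        (self.writers if for_write else self.readers).setdefault(name, set()).add(client)
        return self.caching_allowed(name)

    def caching_allowed(self, name):
        # Client caching is fine for read-only sharing; once a writer shares
        # the file with anyone else, all clients must go through the server.
        w = len(self.writers.get(name, ()))
        r = len(self.readers.get(name, ()))
        return w == 0 or (w == 1 and r == 0)

srv = SpriteServer()
print(srv.open("A", "/data/log", for_write=False))  # True: read-only so far
print(srv.open("B", "/data/log", for_write=True))   # False: writer + reader
```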

Other Approaches
- Serverless: xFS, Farsite
- Highly available: GFS
- Mostly read-only: WWW
- State, not files: SQL Server, BigTable

GFS: A Storage Architecture for Large Data Center Applications

Workstation vs. Data Center
- Workstation: independence, small scale, many users, many programs
- Data center: cooperation, large scale, few users, few programs

GFS: Google File System
- Why yet another DFS (YADFS)? Google has unique FS requirements:
  - Huge read/write bandwidth
  - Reliability over thousands of nodes
  - Mostly operating on large data blocks
  - Need for efficient distributed operations
- Unfair advantage: Google has control over its applications, libraries, and operating system

GFS Ideology
- Huge amounts of data
- Ability to access data efficiently
- Large quantities of cheap machines
- Bandwidth is more important than latency
- Component failures are the norm rather than the exception
- An atomic append operation so that multiple clients can append concurrently

GFS Usage @ Google
- 50+ clusters
- Filesystem clusters of up to 1000+ machines
- Pools of 1000+ clients
- 1+ PB filesystems
- 10 GB/s read/write load (in the presence of frequent HW failures)

Files in GFS
- Files are huge by traditional standards
- Most files are mutated by appending new data rather than overwriting existing data
- Once written, files are only read, and often only sequentially
- Appending becomes the focus of performance optimization and atomicity guarantees

GFS Setup
- [Diagram: multiple clients, a GFS master (with master replicas and misc. servers), and chunkservers 1..N, each storing replicated chunks C0, C1, C2, C3, C5, …]
- The master manages metadata
- Data transfers happen directly between clients and chunkservers
- Files are broken into chunks (typically 64 MB)

Architecture
- A GFS cluster consists of a single master and multiple chunkservers, and is accessed by multiple clients
- Each of these is typically a commodity Linux machine running a user-level server process
- Files are divided into fixed-size chunks, identified by an immutable and globally unique 64-bit chunk handle
- For reliability, each chunk is replicated on multiple chunkservers
- The master maintains all file system metadata
  - The master periodically communicates with each chunkserver in HeartBeat messages to give it instructions and collect its state
- Neither the client nor the chunkserver caches file data, eliminating cache coherence issues
  - Clients do cache metadata, however
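A toy sketch of the master’s in-memory state and HeartBeat handling, based only on the description above; the data structures and message contents are simplified guesses, not the real implementation.

```python
class GFSMaster:
    def __init__(self):
        self.namespace = {}        # file name -> ordered list of chunk handles
        self.chunk_locations = {}  # chunk handle -> set of chunkserver addresses
        self.known_chunks = set()  # handles the master has allocated

    def heartbeat(self, chunkserver, held_chunks):
        """Periodic HeartBeat: the chunkserver reports which chunks it holds;
        the master refreshes its location map and piggybacks instructions
        (e.g., delete a chunk it no longer knows about) in the reply."""
        instructions = []
        for handle in held_chunks:
            if handle not in self.known_chunks:
                instructions.append(("delete", handle))   # orphaned chunk
            else:
                self.chunk_locations.setdefault(handle, set()).add(chunkserver)
        return instructions

    def lookup(self, file_name, chunk_index):
        """Return (chunk handle, replica locations) for a (file, index) pair."""
        handle = self.namespace[file_name][chunk_index]
        return handle, sorted(self.chunk_locations.get(handle, ()))
```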

Architecture (cont.)
- [Figure: GFS architecture diagram showing clients, the master, and chunkservers]

Read Process
- A single master vastly simplifies the design
- Clients never read and write file data through the master; instead, a client asks the master which chunkservers it should contact
  - Using the fixed chunk size, the client translates the file name and byte offset specified by the application into a chunk index within the file
  - It sends the master a request containing the file name and chunk index
  - The master replies with the corresponding chunk handle and the locations of the replicas; the client caches this information using the file name and chunk index as the key
  - The client then sends a request to one of the replicas, most likely the closest one; the request specifies the chunk handle and a byte range within that chunk
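A minimal sketch of that client-side read path; the master lookup and the chunkserver call are placeholders for the real RPCs, and the example assumes the requested range lies within a single chunk.

```python
CHUNK_SIZE = 64 * 1024 * 1024  # fixed 64 MB chunk size

class GFSClient:
    def __init__(self, master):
        self.master = master
        self.location_cache = {}   # (file name, chunk index) -> (handle, replicas)

    def read(self, file_name, offset, length):
        # 1. Translate (file name, byte offset) into a chunk index.
        chunk_index = offset // CHUNK_SIZE
        # 2./3. Ask the master (or the local cache) for the chunk handle and
        #       replica locations, keyed by (file name, chunk index).
        key = (file_name, chunk_index)
        if key not in self.location_cache:
            self.location_cache[key] = self.master.lookup(file_name, chunk_index)
        handle, replicas = self.location_cache[key]
        # 4. Read the byte range directly from one replica (ideally the closest);
        #    the request carries the chunk handle, not the file name.
        start = offset % CHUNK_SIZE
        replica = replicas[0]   # stand-in for "closest replica" selection
        return read_from_chunkserver(replica, handle, start, length)  # assumed RPC

def read_from_chunkserver(replica, handle, start, length):
    raise NotImplementedError("chunkserver READ RPC(handle, start, length)")
```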

Specifications
- Chunk size = 64 MB
- Chunks are stored as plain Unix files on chunkservers
- Clients keep a persistent TCP connection to the chunkserver over an extended period of time (reduces network overhead)
- Clients cache all the chunk location information to facilitate small random reads
- The master keeps the metadata in memory
- Disadvantage: small files can become hotspots
  - Solution: higher replication for such files
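To see why keeping all metadata in memory is feasible, a back-of-the-envelope calculation; the 64-bytes-per-chunk figure is the commonly cited ballpark from the GFS paper, used here as an assumption.

```python
# Rough estimate of master memory needed for chunk metadata.
PB = 10**15
chunk_size = 64 * 2**20            # 64 MB chunks
bytes_per_chunk_metadata = 64      # assumed ballpark per-chunk metadata cost

file_data = 1 * PB                 # a 1 PB filesystem
chunks = file_data // chunk_size
metadata = chunks * bytes_per_chunk_metadata
print(f"{chunks:,} chunks -> ~{metadata / 2**30:.1f} GiB of chunk metadata")
# roughly 15 million chunks -> under 1 GiB: easily held in one machine's RAM
```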

Summary
- There are a number of issues to deal with:
  - What is the basic abstraction?
  - Naming
  - Caching
  - Sharing and coherency
  - Replication
  - Performance
- No right answer! Different systems make different tradeoffs

Summary (continued)
- Performance is always an issue
  - There is always a tradeoff between performance and the semantics of file operations (e.g., for shared files)
- Caching of file blocks is crucial in any file system
  - Maintaining coherency is a crucial design issue
- Newer systems are dealing with issues such as disconnected operation for mobile computers