CSC 456 Operating Systems Seminar Presentation (11/13/2012) Leon Weingard, Liang Xin The Google File System
Outline 1. Background 2. Distributed File Systems Overview 3. GFS Architecture 4. Fault Tolerance
Background - Introduction
Google is a search-engine company whose applications process enormous amounts of data, so it needed a file system to match. Its solution: the Google File System (GFS).
The largest GFS cluster to date provides hundreds of terabytes of storage across thousands of disks on over a thousand machines, and it is concurrently accessed by hundreds of clients.
Background - Motivation
Goal: a large, distributed, highly fault-tolerant file system.
1. Fault tolerance and auto-recovery need to be built into the system.
2. Standard I/O assumptions (e.g. block size) have to be re-examined.
3. Record appends are the prevalent form of writing.
4. Google applications and GFS should be co-designed.
Distributed File Systems
In computing, a distributed file system (DFS) is any file system that allows multiple hosts to share and access files over a computer network.
Performance: measured by the amount of time needed to satisfy service requests.
Transparency: the multiplicity and dispersion of its servers and storage devices should be made invisible to users.
Network File System (NFS)
Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984.
Architecture:
– NFS is a collection of protocols that provide clients with a distributed file system.
– Remote Access Model (as opposed to the Upload/Download Model).
– Every machine can be both a client and a server.
– Servers export directories for access by remote clients.
– Clients access exported directories by mounting them remotely.
Protocols: mounting; file and directory access.
In practice, NFS is a workgroup network file service: any Unix machine can easily be a server, and machines can be both client and server. My files live on my disk and your files on yours, yet everybody in the group can access all files. This model has serious trust and scaling problems.
The Andrew File System (AFS)
The Andrew File System was introduced by researchers at Carnegie Mellon University in the 1980s.
The basic tenet of all versions of AFS is whole-file caching on the local disk of the client machine that is accessing a file.
Goal: scalability.
Features: uniform name space; location-independent file sharing; client-side caching with cache consistency; secure authentication via Kerberos; scalability. The "no write conflict" model was only a partial success.
All the files in AFS are distributed among the servers; the set of files on one server is referred to as a volume. If a request cannot be satisfied from this set of files, the Vice server informs the client where it can find the required file.
Differences between AFS and NFS
NFS: distributed with the OS; client-side caching is optional; clear-text passwords on the network; does not scale well; uses standard UNIX permissions; less secure; arguably more reliable than AFS.
AFS: an add-on product; client-side caching is standard; authenticated challenges on the network; scales well; uses Access Control Lists (ACLs); more secure than NFS; arguably less reliable than NFS.
GFS Architecture In the GFS: A master process maintains the metadata. A lower layer (i.e. a set of chunkservers) stores the data in units called “chunks”.
GFS Architecture What is a master? A single process running on a separate machine. Stores all metadata File namespace File to chunk mappings Chunk location information Access control information Chunk version numbers …
GFS Architecture What is a chunk? Analogous to a block, except larger. Size: 64 MB! Stored on a chunkserver as a plain file. A chunk handle (essentially the chunk's file name) is used to reference the chunk. Each chunk is replicated across multiple chunkservers. Note: there are hundreds of chunkservers in a GFS cluster, distributed over multiple racks.
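Because chunks have a fixed 64 MB size, translating a file byte offset into a chunk index is simple integer arithmetic. A minimal sketch of that translation (the function names are illustrative, not GFS APIs):

```python
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, the fixed GFS chunk size


def chunk_index(byte_offset: int) -> int:
    """Translate a file byte offset into a chunk index."""
    return byte_offset // CHUNK_SIZE


def chunk_range(byte_offset: int, length: int) -> list[int]:
    """Return every chunk index a read of `length` bytes touches."""
    first = chunk_index(byte_offset)
    last = chunk_index(byte_offset + length - 1)
    return list(range(first, last + 1))
```

A read that straddles a 64 MB boundary touches two chunks, which is why clients may need locations for more than one chunk per request.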
Read Algorithm (the original slide's diagram is not reproduced here): the client translates a (file name, byte offset) pair into a chunk index, asks the master for the chunk handle and replica locations, caches the reply, and then reads the data directly from one of the chunkservers.
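The read path can be sketched as follows. This is a hedged, in-memory sketch of the protocol described in the GFS paper, not Google's implementation; all class and method names (`Master.lookup`, `ChunkServer.read`, `Client.read`) are illustrative:

```python
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB chunks, as in GFS


class Master:
    """Holds only metadata: chunk mappings and replica locations."""

    def __init__(self):
        self.file_chunks = {}      # filename -> list of chunk handles
        self.chunk_locations = {}  # chunk handle -> list of chunkserver ids

    def lookup(self, filename, chunk_idx):
        handle = self.file_chunks[filename][chunk_idx]
        return handle, self.chunk_locations[handle]


class ChunkServer:
    """Stores chunk data; the master never sees file contents."""

    def __init__(self):
        self.chunks = {}  # chunk handle -> bytes

    def read(self, handle, offset, length):
        return self.chunks[handle][offset:offset + length]


class Client:
    def __init__(self, master, chunkservers):
        self.master = master
        self.chunkservers = chunkservers  # id -> ChunkServer
        self.cache = {}  # (filename, chunk_idx) -> (handle, replicas)

    def read(self, filename, offset, length):
        idx = offset // CHUNK_SIZE
        key = (filename, idx)
        if key not in self.cache:            # contact the master only on a miss
            self.cache[key] = self.master.lookup(filename, idx)
        handle, replicas = self.cache[key]
        server = self.chunkservers[replicas[0]]  # real clients pick the closest
        return server.read(handle, offset % CHUNK_SIZE, length)
```

The cache is what keeps the single master from becoming a bottleneck: repeated reads of the same chunk never touch it again until the cached entry is dropped.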
Primary and Leases Master grants a chunk lease to one replica. Replica is called the primary. Primary picks an order for mutations to the chunk. Leases expire in 60 seconds. Primary can request an extension if chunk is being mutated. Master can revoke a lease before it expires. Master can assign a new lease if it loses contact with the primary.
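The lease rules above (grant, 60-second expiry, extension by the primary, early revocation by the master) can be sketched in a few lines. This is an illustrative model, assuming an injectable clock for testability; the names are not GFS APIs:

```python
import time

LEASE_SECONDS = 60  # lease term from the GFS paper


class LeaseManager:
    """Master-side bookkeeping of which replica is primary for each chunk."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.leases = {}  # chunk handle -> (primary replica id, expiry time)

    def grant(self, handle, replica):
        self.leases[handle] = (replica, self.clock() + LEASE_SECONDS)

    def extend(self, handle, replica):
        # Only the current primary may extend, while the chunk is mutated.
        primary, _ = self.leases[handle]
        if primary == replica:
            self.leases[handle] = (primary, self.clock() + LEASE_SECONDS)

    def revoke(self, handle):
        self.leases.pop(handle, None)  # master may revoke before expiry

    def primary(self, handle):
        entry = self.leases.get(handle)
        if entry and entry[1] > self.clock():
            return entry[0]
        return None  # expired or never granted: master may grant a new lease
```

Expiry is what makes the scheme safe: if the master loses contact with a primary, it simply waits out the lease before granting a new one, so two primaries can never be active at once.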
Write Algorithm (the original slide's diagram is not reproduced here): the client pushes the data to all replicas, then sends the write request to the primary; the primary assigns the mutation a serial number, applies it locally, and forwards the request to the secondaries, which apply it in the same serial order.
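The key idea, separating the data flow (push to every replica) from the control flow (the primary picks one serial order that all replicas follow), can be sketched as below. This is an illustrative model of the ordering described in the GFS paper; the `Replica`/`Primary` classes are assumptions, not real GFS code:

```python
class Replica:
    def __init__(self):
        self.buffered = {}  # data id -> bytes pushed but not yet applied
        self.applied = []   # mutations, in the order the primary chose

    def push(self, data_id, data):
        """Data flow: the client pushes data to every replica first."""
        self.buffered[data_id] = data

    def apply(self, serial, data_id):
        """Control flow: apply a buffered mutation at the given serial."""
        self.applied.append((serial, self.buffered.pop(data_id)))


class Primary(Replica):
    """The lease holder: the only replica that assigns serial numbers."""

    def __init__(self, secondaries):
        super().__init__()
        self.secondaries = secondaries
        self.next_serial = 0

    def write(self, data_id):
        serial = self.next_serial
        self.next_serial += 1
        self.apply(serial, data_id)     # apply locally first
        for s in self.secondaries:      # then forward the chosen order
            s.apply(serial, data_id)
        return serial
```

Because every secondary applies mutations in the primary's serial order, all replicas of a chunk end up with the same mutation history even when many clients write concurrently.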
Fault Tolerance: Chunks
Fast recovery: the master and chunkservers are designed to restart and restore state in a few seconds.
No persistent log of chunk locations in the master; syncing chunkservers and master is unnecessary under this model. Alternate view: each chunkserver has the final say on which chunks it holds on its disk.
Chunk replication: across multiple machines and across multiple racks.
Fault Tolerance: Master
Data structures are kept in memory, so the master must be able to recover from system failure.
Operation log: a log of all changes made to metadata. Log records are batched before being flushed to disk. A checkpoint of the state is taken when the log grows too big; the latest checkpoint plus the log are used to recover state. Both the log and the checkpoints are replicated on multiple machines.
Master state is replicated on multiple machines; "shadow" masters can serve reads if the "real" master is down.
Data integrity: each chunk has an associated checksum.
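The checkpoint-plus-log recovery scheme can be sketched as follows. This assumes simple dict-valued metadata and elides the real master's record batching and replication; the `MasterState` class is illustrative:

```python
class MasterState:
    """Sketch of recovery from the latest checkpoint plus the operation log."""

    def __init__(self):
        self.metadata = {}    # in-memory state served to clients
        self.log = []         # mutations since the last checkpoint
        self.checkpoint = {}  # snapshot of metadata at checkpoint time

    def mutate(self, key, value):
        self.log.append((key, value))  # log the change before applying it
        self.metadata[key] = value

    def take_checkpoint(self):
        self.checkpoint = dict(self.metadata)
        self.log = []  # the log restarts after each checkpoint

    def recover(self):
        state = dict(self.checkpoint)  # start from the latest checkpoint...
        for key, value in self.log:    # ...then replay the logged mutations
            state[key] = value
        return state
```

Checkpointing bounds recovery time: replaying only the mutations since the last checkpoint is much faster than replaying the entire history from the beginning.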
Summary
Distributed file systems: NFS, AFS, and the differences between them.
GFS architecture: a single master holding metadata and many chunkservers holding data.
GFS is designed to be fault tolerant: the master must be able to recover, and chunk locations are not kept persistently.
Most operations are reads and appends, so a record-append operation was added so that appends can be done efficiently.
(These slides are modified from Alex Moshchuk, University of Washington, used during the Google lecture series.)
References
1. The Google File System, SOSP '03.
2. Presentation slides for the GFS paper at SOSP '03.
3. www.delmarlearning.com/companions/content/.../Ch18.ppt
4. lyle.smu.edu/cse/8343/fall_2003/.../DFS_Presentation.ppt
Thank you!