CSE 486/586 Distributed Systems: Distributed File Systems
Steve Ko, Computer Sciences and Engineering, University at Buffalo

Recap

Distributed transactions with replication:
– One-copy serializability
– Primary-copy replication
– Read-one/write-all replication
– Available-copies replication

Local File Systems

File systems provide file management:
– Name space
– API for file operations (create, delete, open, close, read, write, append, truncate, etc.)
– Physical storage management & allocation (e.g., block storage)
– Security and protection (access control)

The name space is usually hierarchical: files and directories.

File systems are mounted, so different file systems can live in the same name space.
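The operations in the API above map directly onto a local file system's interface; a minimal sketch in Python (the file name is illustrative):

```python
import os

path = "demo.txt"

# create & write
with open(path, "w") as f:
    f.write("hello")

# append
with open(path, "a") as f:
    f.write(" world")

# read
with open(path) as f:
    data = f.read()
assert data == "hello world"

# truncate to 5 bytes, then delete
with open(path, "r+") as f:
    f.truncate(5)
assert os.path.getsize(path) == 5
os.remove(path)
```

A distributed file system aims to preserve exactly this interface while the data lives elsewhere.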

Traditional Distributed File Systems

Goal: emulate local file system behaviors.
– Files are not replicated.
– No hard performance guarantees.

But:
– Files are located remotely on servers.
– Multiple clients access the servers.

Why?
– Users with multiple machines
– Data sharing among multiple users
– Consolidated data management (e.g., in an enterprise)

Requirements

Transparency: a distributed file system should appear as if it were a local file system.
– Access transparency: it should support the same set of operations, i.e., a program that works with a local file system should work with a DFS.
– (File) location transparency: all clients should see the same name space.
– Migration transparency: if files move to another server, the move shouldn't be visible to users.
– Performance transparency: it should provide reasonably consistent performance.
– Scaling transparency: it should be able to scale incrementally by adding more servers.

Requirements

– Concurrent updates should be supported.
– Fault tolerance: servers may crash, messages can be lost, etc. Consistency needs to be maintained.
– Security: access control for files & authentication of users

File Server Architecture

(Diagram: a client computer runs application programs on top of a client module; the server computer hosts the flat file service and the directory service.)

Components

Directory service:
– Metadata management
– Creates and updates directories (hierarchical file structures)
– Provides mappings between users' names for files and the unique file IDs in the flat file structure.

Flat file service:
– Actual data management
– File operations (create, delete, read, write, access control, etc.)

These can be independently distributed, e.g., a centralized directory service & a distributed flat file service.
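The split can be sketched as two independent services: a directory service that maps human-readable names to unique file IDs (UFIDs), and a flat file service that stores data keyed only by UFID. Class and method names here are illustrative, not part of any real protocol:

```python
import itertools

class FlatFileService:
    """Stores file contents keyed by unique file ID (UFID) only."""
    def __init__(self):
        self._files = {}
        self._ids = itertools.count(1)

    def create(self):
        ufid = next(self._ids)
        self._files[ufid] = b""
        return ufid

    def write(self, ufid, data):
        self._files[ufid] = data

    def read(self, ufid):
        return self._files[ufid]

class DirectoryService:
    """Maps text names to UFIDs; knows nothing about file contents."""
    def __init__(self):
        self._names = {}

    def add_name(self, name, ufid):
        self._names[name] = ufid

    def lookup(self, name):
        return self._names[name]

# A client composes the two services to recover a conventional API.
ffs, ds = FlatFileService(), DirectoryService()
ufid = ffs.create()
ds.add_name("/home/alice/notes.txt", ufid)
ffs.write(ds.lookup("/home/alice/notes.txt"), b"hello")
assert ffs.read(ufid) == b"hello"
```

Because the two services share only UFIDs, they can be deployed on different machines independently.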

Sun NFS

(Diagram: on the client, an application program calls into the Virtual File System inside the UNIX kernel, which dispatches to the local UNIX file system, other file systems, or the NFS client; the NFS client talks to the NFS server over the NFS protocol, and the server's Virtual File System passes requests on to its UNIX file system.)

VFS

A translation layer that makes file systems pluggable & able to coexist, e.g., NFS, EXT2, EXT3, ZFS, etc.
– Keeps track of file systems that are available locally and remotely.
– Passes requests to the appropriate local or remote file system.
– Distinguishes between local and remote files.
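At its core the VFS is a dispatch table keyed by mount point; a toy sketch of that idea (the two registered file systems and their names are hypothetical):

```python
class VFS:
    def __init__(self):
        self._mounts = {}   # mount point -> file system object

    def mount(self, point, fs):
        self._mounts[point] = fs

    def _resolve(self, path):
        # Longest-prefix match picks the file system owning this path.
        point = max((p for p in self._mounts if path.startswith(p)), key=len)
        return self._mounts[point], path[len(point):] or "/"

    def read(self, path):
        fs, rel = self._resolve(path)
        return fs.read(rel)

class LocalFS:
    def read(self, rel): return f"local:{rel}"

class NFSClient:
    def read(self, rel): return f"remote:{rel}"

vfs = VFS()
vfs.mount("/", LocalFS())
vfs.mount("/mnt/nfs", NFSClient())
assert vfs.read("/etc/hosts") == "local:etc/hosts"
assert vfs.read("/mnt/nfs/data") == "remote:/data"
```

Applications call one uniform interface; whether a path is local or remote is decided entirely by this lookup.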

NFS Mount Service

Each server keeps a record of local files available for remote mounting. Clients use a mount command for remote mounting, providing name mappings.

(Diagram: a client's name space with subtrees remote-mounted from two servers; exported directories on Server 1 and Server 2 appear under directories in the client's local tree.)

NFS Basic Operations

Client:
– Transfers blocks of files to and from the server via RPC

Server:
– Provides a conventional RPC interface at a well-known port on each host
– Stores files and directories

Problems?
– Performance
– Failures

Improving Performance

Let's cache!

Server-side:
– Typically done by the OS & disks anyway
– A disk usually has a cache built in.
– The OS caches file pages, directories, and file attributes that have been read from the disk in a main-memory buffer cache.

Client-side:
– On accessing data, cache it locally.

What's a typical problem with caching?
– Consistency: cached data can become stale.

(General) Caching Strategies

Read-ahead (prefetch):
– Read strategy
– Anticipates read accesses and fetches the pages following those that have most recently been read.

Delayed-write:
– Write strategy
– New writes are stored locally.
– Periodically, or when another client accesses the file, send the updates back to the server.

Write-through:
– Write strategy
– Writes go all the way to the server's disk.

This is not an exhaustive list!
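Delayed-write can be sketched as a client cache that buffers updates and pushes them to the server only when flushed (a toy model with a dict standing in for the server, not any real client's code):

```python
class DelayedWriteCache:
    def __init__(self, server):
        self.server = server        # dict standing in for the server's store
        self.cache = {}             # locally cached blocks
        self.dirty = set()          # blocks written locally, not yet flushed

    def read(self, block):
        if block not in self.cache:             # miss: fetch from server
            self.cache[block] = self.server.get(block, b"")
        return self.cache[block]

    def write(self, block, data):
        self.cache[block] = data                # new writes stay local...
        self.dirty.add(block)

    def flush(self):
        for block in self.dirty:                # ...until flushed (e.g., by a
            self.server[block] = self.cache[block]  # periodic timer or close())
        self.dirty.clear()

server = {}
c = DelayedWriteCache(server)
c.write("b0", b"v1")
assert "b0" not in server        # delayed: the server hasn't seen the write
c.flush()
assert server["b0"] == b"v1"     # now it has
```

Write-through is the degenerate case where every write() is immediately followed by a flush().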

NFS Client-Side Caching

Writes go through to the server, but only at close():
– Not on every single write
– Helps performance

Other clients periodically check whether there are any new writes (next slide).

Multiple writers:
– No guarantee
– The result could be any combination of writes, which leads to inconsistency.

Validation

A client checks with the server about cached blocks. Each block has a timestamp:
– If the remote block is newer, the client invalidates the local cached block.

Always check again after some period of time:
– 3 seconds for files
– 30 seconds for directories

Written blocks are marked as "dirty."
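The freshness check can be sketched as: trust a cached block for a fixed window, then revalidate its timestamp with the server (3 s for files, 30 s for directories, as above; the helper names are illustrative):

```python
import time

FRESH_FILE, FRESH_DIR = 3.0, 30.0   # seconds before revalidation

class CachedBlock:
    def __init__(self, data, server_mtime, is_dir=False):
        self.data = data
        self.server_mtime = server_mtime
        self.checked_at = time.monotonic()
        self.window = FRESH_DIR if is_dir else FRESH_FILE

def is_usable(block, fetch_server_mtime):
    """Return True if the cached block may be used without refetching."""
    now = time.monotonic()
    if now - block.checked_at < block.window:
        return True                       # still inside the freshness window
    # Window expired: ask the server for the current modification time.
    if fetch_server_mtime() == block.server_mtime:
        block.checked_at = now            # unchanged: renew the window
        return True
    return False                          # stale: caller must refetch

b = CachedBlock(b"data", server_mtime=100)
assert is_usable(b, lambda: 100)          # fresh: no server round trip needed
b.checked_at -= 10                        # pretend 10 s have passed
assert not is_usable(b, lambda: 101)      # server changed: invalidate
```

The window bounds how stale a read can be: at most 3 s for files, at most 30 s for directories.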

Failures

Two design choices: stateful & stateless.

Stateful:
– The server maintains all client information (which file, which block of the file, the offset within the block, file locks, etc.)
– Good for the client-side process (just send requests!)
– Behaves almost like a local file system (e.g., locking is easy to implement)

Problem?
– A server crash loses the client state.
– Dealing with failures becomes complicated.

Failures

Stateless:
– Clients maintain their own information (which file, which block of the file, the offset within the block, etc.)
– The server does not know anything about what a client does.
– Each request contains complete information (file name, offset, etc.)
– Easier to deal with server crashes (nothing to lose!)

This is NFS's choice. Problem?
– Locking becomes difficult.
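Statelessness means every request is self-describing; a sketch of a stateless read handler (the request fields mirror the idea, not NFS's actual wire format):

```python
class StatelessServer:
    """Holds file data only; no per-client session state."""
    def __init__(self, files):
        self.files = files

    def handle_read(self, request):
        # Every request carries complete information: which file, where to
        # start, how much to read. Nothing is remembered between requests,
        # so a server restart loses nothing the client depends on.
        data = self.files[request["file"]]
        off, n = request["offset"], request["count"]
        return data[off:off + n]

srv = StatelessServer({"/a.txt": b"hello world"})
assert srv.handle_read({"file": "/a.txt", "offset": 0, "count": 5}) == b"hello"

# A "crash" is just a fresh instance over the same files; clients simply retry.
srv = StatelessServer({"/a.txt": b"hello world"})
assert srv.handle_read({"file": "/a.txt", "offset": 6, "count": 5}) == b"world"
```

Contrast with a stateful design, where the second instance would have lost the open-file table and the retry would fail.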

NFS

– Client-side caching for improved performance
– Write-through at close(): consistency issue
– Stateless server: easier to deal with failures; locking is not supported (later versions of NFS do support locking, though)
– Simple design, which led to a simple implementation, acceptable performance, easier maintenance, etc., and ultimately to its popularity

Administrivia

Survey!

New Trends in Distributed Storage

Geo-replication: replication across multiple data centers
– Latency: serving nearby clients
– Fault tolerance: disaster recovery

Power efficiency: power-efficient storage
– Going green!
– Data centers consume lots of power.

Power Consumption

– eBay: 16K servers, ~0.6 * 10^5 MWh, ~$3.7M
– Akamai: 40K servers, ~1.7 * 10^5 MWh, ~$10M
– Rackspace: 50K servers, ~2 * 10^5 MWh, ~$12M
– Microsoft: > 200K servers, > 6 * 10^5 MWh, > $36M
– Google: > 500K servers, > 6.3 * 10^5 MWh, > $38M
– USA (2006): 10.9M servers, 610 * 10^5 MWh, $4.5B

Year over year, this amounts to 1.7%–2.2% of total electricity use in the US.

Question: can we reduce the energy footprint of distributed storage while preserving performance?
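As a back-of-the-envelope check on these figures, eBay's numbers imply roughly 0.43 kW of average draw per server and about $62 per MWh (assuming a full 8,760-hour year; the rates are derived, not from the slide):

```python
servers = 16_000          # eBay
energy_mwh = 0.6e5        # ~60,000 MWh per year
cost_usd = 3.7e6          # ~$3.7M per year
hours_per_year = 8_760

# Average electrical draw per server, in kW.
kw_per_server = energy_mwh * 1_000 / (servers * hours_per_year)
# Implied electricity price.
usd_per_mwh = cost_usd / energy_mwh

assert 0.4 < kw_per_server < 0.5     # ~0.43 kW average draw per server
assert 55 < usd_per_mwh < 65         # ~$62 per MWh
```

The other rows in the table imply broadly similar per-server and per-MWh figures, which is a useful sanity check on the data.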

Flash (Solid State Disk)

Unlike magnetic disks, there are no mechanical parts.
– Disks have motors that rotate the platters & arms that move and read.

Efficient I/O:
– Less than 1 watt of power consumption
– Magnetic disks draw over 10 watts.

Fast random reads:
– << 1 ms
– Up to 175 times faster than random reads on magnetic disks

Flash (Solid State Disk)

The smallest unit of operation (read/write) is a page:
– Typically 4KB
– Initially all 1s
– A write involves setting some bits to 0.

A write is fundamentally constrained: individual bits cannot be reset to 1.
– That requires an erasure operation that resets all bits to 1.
– An erasure is done over a large block (e.g., 128KB), i.e., over multiple pages together.
– Typical latency: 1.5 ms

Blocks wear out with each erasure:
– 100K or 10K cycles, depending on the technology.
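The asymmetry, where writes can only clear bits to 0 and only a block-wide erasure resets them to 1, can be modeled directly (a toy model; real pages and blocks are 4 KB and ~128 KB as above):

```python
PAGES_PER_BLOCK = 4   # toy size; real blocks hold many 4 KB pages

class FlashBlock:
    def __init__(self):
        self.pages = [0xFF] * PAGES_PER_BLOCK   # erased state: all bits 1
        self.erase_count = 0                    # wear: ~10K-100K cycles total

    def write(self, page, value):
        # A program operation can only clear bits (1 -> 0), never set them.
        self.pages[page] &= value

    def erase(self):
        # Erasure resets the whole block to 1s; there is no per-page erase.
        self.pages = [0xFF] * PAGES_PER_BLOCK
        self.erase_count += 1

b = FlashBlock()
b.write(0, 0b1010_1010)
assert b.pages[0] == 0b1010_1010
b.write(0, 0b1111_0000)           # overwrite without erase: bits only clear
assert b.pages[0] == 0b1010_0000  # not 0b1111_0000: the 0 bits stuck
b.erase()
assert b.pages[0] == 0xFF and b.erase_count == 1
```

This is why a rewrite of one 4 KB page cannot happen in place: the stale 0 bits can only be cleared by erasing the entire block, and each erase consumes one of the block's limited wear cycles.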

Flash (Solid State Disk)

Early design limitations:
– Slow writes: a write to a random 4KB page forces the entire 128KB erase block to be erased and rewritten, so write performance suffers.
– Uneven wear: imbalanced writes result in uneven wear across the device.

Any ideas on how to solve this?

Flash (Solid State Disk)

Recent designs are log-based. The disk exposes a logical structure of pages & blocks (called the Flash Translation Layer):
– Internally maintains a remapping of blocks.

For a rewrite of a random 4KB page:
– Read the entire surrounding 128KB erase block into the disk's internal buffer.
– Update the 4KB page in the disk's internal buffer.
– Write the entire block to a new or previously erased physical block.
– Additionally, carefully choose the new physical block to minimize uneven wear.
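The steps above can be sketched as a minimal flash translation layer that keeps a logical-to-physical map, writes the updated block to a fresh location, and redirects the mapping (a simplification of real FTLs; names are illustrative):

```python
class SimpleFTL:
    """Toy flash translation layer: remap logical blocks on rewrite."""
    def __init__(self, num_physical, pages_per_block):
        self.ppb = pages_per_block
        self.physical = [[None] * pages_per_block for _ in range(num_physical)]
        self.l2p = {}                 # logical block -> physical block
        self.free = list(range(num_physical))

    def write_page(self, lblock, page, data):
        if lblock not in self.l2p:               # first write: claim a block
            self.l2p[lblock] = self.free.pop(0)
            self.physical[self.l2p[lblock]][page] = data
            return
        old = self.l2p[lblock]
        new = self.free.pop(0)
        # 1) read the surrounding block, 2) update the page in the buffer,
        # 3) write the whole buffer to an erased physical block,
        buf = list(self.physical[old])
        buf[page] = data
        self.physical[new] = buf
        self.l2p[lblock] = new
        # 4) garbage-collect (erase) the old block and return it to the pool.
        self.physical[old] = [None] * self.ppb
        self.free.append(old)

    def read_page(self, lblock, page):
        return self.physical[self.l2p[lblock]][page]

ftl = SimpleFTL(num_physical=3, pages_per_block=2)
ftl.write_page(0, 0, "A")
ftl.write_page(0, 1, "B")   # rewrite path: the whole block migrates
assert ftl.read_page(0, 0) == "A" and ftl.read_page(0, 1) == "B"
```

Wear leveling amounts to choosing which entry of the free list to pop, e.g., the physical block with the lowest erase count rather than the first one available.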

Flash (Solid State Disk)

Example: sequential writes up to block 2, then a rewrite of a page in block 1:
1) Read the old block into the buffer.
2) Update the page.
3) Write the buffer to a different block location.
4) Garbage-collect the old block.

(Diagram: the logical structure's block 1 is remapped to a new physical block, and the old physical block is reclaimed.)

Summary

NFS:
– Caching with a write-through policy at close()
– Stateless server

One power-efficient design: flash storage

Acknowledgements

These slides contain material developed and copyrighted by Indranil Gupta (UIUC).