Network File Systems
Victoria Krafft, CS 614, 10/4/05
General Idea
- People move around
- Machines may want to share data
- Want a system with:
  - No new interface for applications
  - No need to copy all the data
  - No space-consuming version control
Network File Systems
(Diagram from http://www.cs.binghamton.edu/~kang/cs552/note11.ppt)
A Brief History
- Network File System (NFS), 1984: simple client-server model; some problems
- Andrew File System (AFS), 1985: better performance, more client-side caching
- SFS, 1999: NFS can be run over untrusted networks
Lingering Issues
- The central server is a major bottleneck
- All of these choices still require lots of bandwidth
- LANs are getting faster and lower latency
- Remote memory is faster than local disk
- ATM gets faster as more nodes send data
Cooperative Caching
- Michael D. Dahlin, Randolph Y. Wang, Thomas E. Anderson, and David A. Patterson, 1994
- ATM and Myrinet provide faster, lower-latency networks
- This makes remote memory 10-20x faster than disk
- Goal: get data from the memory of other clients rather than the server's disk
Cooperative Caching
Data can be found in:
1. Local memory
2. Server memory
3. Other client memory
4. Server disk
How should we distribute cache data?
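Below is a minimal sketch of this lookup order in Python; the local_cache and server objects and the lookup call are hypothetical stand-ins, since the paper specifies only the ordering, not an API.

```python
# Hypothetical sketch of the cooperative-caching read path.
def read_block(block_id, local_cache, server):
    # 1. Local memory: fastest, no network traffic.
    if block_id in local_cache:
        return local_cache[block_id]
    # 2-4. Ask the server: it answers from its own memory cache, forwards the
    # request to another client that caches the block, or reads from its disk.
    data = server.lookup(block_id)
    # Cache the block locally; the eviction policy (e.g. N-chance forwarding,
    # described later) decides what happens to the block it displaces.
    local_cache[block_id] = data
    return data
```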
Design Decisions
[Design-space diagram: private vs. global cooperative cache, coordinated vs. uncoordinated cache entries, static vs. dynamic partitioning, and block-location policy (weighted LRU, hash, N-chance) distinguish Direct Client Cooperation, Greedy Forwarding, Centrally Coordinated Caching, and N-Chance Forwarding.]
Direct Client Cooperation
- Active clients use idle clients' memory as a backing store
- Simple
- But clients don't get data from other active clients
Greedy Forwarding
- Each client manages its local cache greedily
- The server keeps track of the contents of client caches and forwards requests to clients holding the block
- Still potentially large amounts of data duplication
- Performance improvements come at no major cost
Centrally Coordinated Caching
- Each client's cache is split into two parts: a locally managed portion and a globally coordinated portion managed by the server
N-Chance Forwarding
- Clients prefer to cache singlets: blocks stored in only one client cache
- Instead of discarding a singlet, set its recirculation count to n and pass it to a random other client
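A rough sketch of that eviction rule, assuming block objects with a singlet flag and a list of peer clients (names are illustrative, not from the paper):

```python
import random

RECIRCULATION_LIMIT = 2   # the "n" in N-chance; value chosen for illustration

def evict(block, cache, peers):
    """Illustrative N-chance eviction for one client cache."""
    if block.is_singlet:
        if block.recirculation_count is None:
            # First eviction of the only cached copy: give it n more chances.
            block.recirculation_count = RECIRCULATION_LIMIT
        if block.recirculation_count > 0:
            # Forward the last copy to a random peer instead of dropping it.
            block.recirculation_count -= 1
            random.choice(peers).accept(block)
            cache.remove(block)
            return
    # Duplicated blocks, or singlets out of chances, are simply discarded;
    # the server's disk still holds the data.
    cache.remove(block)
```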
Sensitivity
[Figures: variation in response time with client cache size; variation in response time with network latency]
Simulation Results
[Figures: average read response time; server load]
Simulation Results
[Figure: slowdown]
Results
- N-Chance forwarding comes close to the best possible performance
- Requires clients to trust each other
- Requires a fast network
Serverless NFS
- Thomas E. Anderson, Michael D. Dahlin, Jeanna M. Neefe, David A. Patterson, Drew S. Roselli, and Randolph Y. Wang, 1995
- Eliminates the central server
- Takes advantage of ATM and Myrinet
Starting Points
- RAID: redundancy if nodes leave or fail
- LFS: recovery when the system fails
- Zebra: combines LFS and RAID for distributed systems
- Multiprocessor cache consistency: invalidating stale cache info
To Eliminate Central Servers
- Scalable distributed metadata, which can be reconfigured after a failure
- Scalable division into groups for efficient storage
- Scalable log cleaning
How it works
Each machine has one or more roles:
1. Client
2. Storage server
3. Manager
4. Cleaner
- Management is split among metadata managers
- Disks are clustered into stripe groups for scalability
- Cooperative caching among clients
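A hedged sketch of how a client might locate a file's metadata manager: xFS uses a small, globally replicated manager map, but the indexing scheme shown here is an assumption for illustration.

```python
# Illustrative only: map a file to its metadata manager through a replicated
# manager map, which can be recomputed and redistributed after a failure.
def manager_for(file_index_number, manager_map):
    entry = hash(file_index_number) % len(manager_map)   # assumed indexing scheme
    return manager_map[entry]

# e.g. with manager_map = ["node2", "node5", "node7"], every client resolves
# the same file to the same manager without consulting a central server.
```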
xFS
- xFS is a prototype of the serverless network file system
- It lacks a couple of features:
  - Recovery is not completed
  - It doesn't calculate or distribute new manager or stripe group maps
  - No distributed cleaner
File Read
File Write
- Writes are buffered into segments in local memory
- The client commits the segment to storage
- The client notifies managers of the modified blocks
- Managers update index nodes and imaps
- Periodically, managers log changes to stable storage
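A simplified sketch of this write path, with assumed objects for the stripe group and the manager; the real xFS client runs in the kernel and also computes parity for each segment.

```python
SEGMENT_SIZE = 512 * 1024   # assumed flush threshold, in bytes

class WriteBuffer:
    """Illustrative log-structured write buffering on an xFS-style client."""

    def __init__(self, stripe_group, manager):
        self.stripe_group = stripe_group   # assumed storage-server group stub
        self.manager = manager             # assumed metadata-manager stub
        self.blocks = []

    def write(self, block_id, data):
        self.blocks.append((block_id, data))
        if sum(len(d) for _, d in self.blocks) >= SEGMENT_SIZE:
            self.flush()

    def flush(self):
        if not self.blocks:
            return
        # Commit the whole segment to the storage servers in the stripe group,
        # then tell the manager which blocks moved so it can update index
        # nodes and the imap.
        segment_id = self.stripe_group.store_segment(self.blocks)
        self.manager.notify_modified(segment_id, [bid for bid, _ in self.blocks])
        self.blocks = []
```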
Distributing File Management
First Writer: management of a file goes to whoever created it.
(*does not include all local hits)
Cleaning
- Segment utilization is maintained by the segment writer
- Segment utilization is stored in s-files
- Cleaning is controlled by the stripe group leader
- Optimistic concurrency control resolves cleaning/writing conflicts
Recovery
Several steps are O(N²), but they can be run in parallel.
[Figure: steps for recovery]
xFS Performance
[Figures: aggregate bandwidth writing 10 MB files (NFS max with 2 clients, AFS max with 32 clients) and reading 10 MB files (NFS max with 2 clients, AFS max with 12 clients)]
xFS Performance Average time to complete the Andrew benchmark, varying the number of simultaneous clients
System Variables
[Figures: aggregate large-write bandwidth with different storage server configurations; variation in average small-file creation speed with more managers]
Possible Problems
- The system relies on a secure network between machines and trusted kernels on the distributed nodes
- Testing was done on Myrinet
Low-Bandwidth NFS
- Want efficient remote access over slow or wide-area networks
- A file system is better than CVS or copying all the data over
- Want close-to-open consistency
LBFS
- Large client cache containing the user's working set of files
- Don't send all the data: reconstitute files from previous data and send only the changes
File Indexing
- Files are divided into non-overlapping chunks between 2 KB and 64 KB
- Chunk boundaries are chosen using 48-byte Rabin fingerprints
- Chunks are identified by their SHA-1 hash, indexed on the first 64 bits
- The index is stored in a database; hashes are recomputed before use to avoid synchronization issues
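The following is a self-contained sketch of this content-defined chunking. A real implementation computes Rabin fingerprints over a 48-byte window; the simple rolling hash and boundary mask below are stand-ins chosen only to keep the sketch runnable.

```python
import hashlib

MIN_CHUNK, MAX_CHUNK = 2 * 1024, 64 * 1024   # 2 KB - 64 KB chunks
WINDOW = 48                                   # bytes examined at each position
BOUNDARY_MASK = (1 << 13) - 1                 # illustrative boundary condition

def window_hash(window_bytes):
    # Stand-in for a Rabin fingerprint over the 48-byte window.
    h = 0
    for b in window_bytes:
        h = (h * 31 + b) & 0xFFFFFFFFFFFF
    return h

def chunk_file(data):
    """Split bytes into chunks, each keyed by the first 64 bits of its SHA-1."""
    chunks, start = [], 0
    for i in range(len(data)):
        size = i - start + 1
        at_boundary = (
            size >= MIN_CHUNK
            and (window_hash(data[max(start, i - WINDOW + 1):i + 1]) & BOUNDARY_MASK)
            == BOUNDARY_MASK
        )
        if at_boundary or size >= MAX_CHUNK or i == len(data) - 1:
            piece = data[start:i + 1]
            key = hashlib.sha1(piece).digest()[:8]   # first 64 bits of SHA-1
            chunks.append((key, piece))
            start = i + 1
    return chunks
```

Because boundaries depend only on nearby file contents, an insertion or edit changes only the chunks around it, so most chunk hashes stay the same between versions of a file.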
Protocol
- Based on NFS, with added RPCs: GETHASH, MKTMPFILE, TMPWRITE, CONDWRITE, COMMITTMP
- Security infrastructure from SFS
- Whole-file caching
- Reads retrieve the file from the server unless a valid copy is in the cache
- Writes go back to the server when the file is closed
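A hedged sketch of the write-back path using these RPCs; the server stub and its return conventions are assumptions for illustration, not LBFS's exact wire protocol.

```python
def write_back(server, target_fh, data):
    """Illustrative LBFS-style write-back on close (assumed server stub)."""
    tmp_fd = server.MKTMPFILE(target_fh)        # temporary file on the server
    offset = 0
    for key, piece in chunk_file(data):         # chunk_file() from the sketch above
        # CONDWRITE asks the server to fill this range from a chunk it already
        # holds with this hash; only missing chunks cross the slow link.
        if not server.CONDWRITE(tmp_fd, offset, len(piece), key):
            server.TMPWRITE(tmp_fd, offset, piece)
        offset += len(piece)
    # Atomically install the temporary file as the target when the file closes.
    server.COMMITTMP(tmp_fd, target_fh)
```

Reads work symmetrically: GETHASH returns the chunk hashes of the server's copy, and the client fetches only the chunks it does not already have.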
File Reads
File Writes
Implementation
- The LBFS server accesses the file system as an NFS client
- The server creates a trash directory for temporary files
- The server is inefficient when files are overwritten or truncated; lower-level file system access could fix this
- The client uses the xfs driver
Evaluation
Bandwidth Consumption
Much higher bandwidth is consumed for the first build.
Application Performance
Bandwidth and Round Trip Time
Conclusions
- New technologies open up new possibilities for network file systems
- The cost of increased traffic over Ethernet may cause problems for xFS and cooperative caching