Distributed File Systems
Sarah Diesburg
Operating Systems, CS 3430
Distributed File System
Provides transparent access to files stored on a remote disk
Recurring design issues:
- Failure handling
- Performance optimizations
- Cache consistency
No Client Caching
Use RPC to forward every file system request (open, seek, read, write) to the remote server
[Diagram: the server holds X in its cache; Clients A and B read and write X directly through the server.]
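To make the cost concrete, here is a minimal sketch of this design in C, assuming a hypothetical rpc_call() stub (the path and argument strings are illustrative): with no client cache, every file operation becomes a synchronous round trip to the server.

```c
#include <stdio.h>

/* Hypothetical RPC stub: a real one would marshal the request, send it
 * over the network, and block until the server replies. Here it just
 * logs the round trip. */
static int rpc_call(const char *op, const char *arg) {
    printf("client -> server: %s(%s)\n", op, arg);
    return 0;  /* pretend the server succeeded */
}

int main(void) {
    /* With no client cache, even a small read costs four round trips. */
    rpc_call("open",  "/shared/notes.txt");
    rpc_call("seek",  "offset=0");
    rpc_call("read",  "count=512");
    rpc_call("close", "/shared/notes.txt");
    return 0;
}
```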
No Client Caching
+ The server always has a consistent view of the file system
- Poor performance
- The server is a single point of failure
Network File System (NFS)
Uses client caching to reduce network load
Built on top of RPC
[Diagram: the server caches X; Clients A and B each hold a cached copy of X.]
Network File System (NFS)
+ Better performance than no caching
- Has to handle failures
- Has to handle consistency
Failure Modes
If the server crashes:
- Uncommitted data in memory are lost
- Current file positions may be lost
- The client may ask the server to perform unacknowledged operations again
If a client crashes:
- Modified data in the client cache may be lost
NFS Failure Handling
1. Write-through caching
2. Stateless protocol: the server keeps no state about the client
- Each read request carries everything that open, seek, read, and close would otherwise establish
- No server recovery is needed after a failure
3. Idempotent operations: repeated operations get the same result (see the sketch below)
- No static variables
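A minimal sketch of point 3, using POSIX pread() as an analogy (the file name is illustrative): because the request names an explicit offset instead of consuming a server-side cursor, replaying it after a lost reply returns the same bytes.

```c
/* Sketch: idempotent reads via explicit offsets. An NFS READ carries
 * (file handle, offset, count), so a retried request returns the same
 * result; pread() has the same shape. */
#define _XOPEN_SOURCE 500
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.txt", O_RDONLY);  /* illustrative file */
    if (fd < 0) { perror("open"); return 1; }

    char buf[64];
    /* Explicit offset: issuing this twice yields identical results. */
    ssize_t n1 = pread(fd, buf, sizeof buf, 128);
    ssize_t n2 = pread(fd, buf, sizeof buf, 128);  /* a "retry" */
    printf("first: %zd bytes, retry: %zd bytes\n", n1, n2);

    /* Contrast: plain read() consumes a cursor, so a blind retry would
     * return the next bytes rather than the same ones. */
    close(fd);
    return 0;
}
```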
NFS Failure Handling
4. Failures are transparent to clients
Two options:
- The client waits until the server comes back
- The client returns an error to the user application
Do you check the return value of close()?
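The question is not rhetorical. Here is a sketch of why it matters, assuming a hypothetical NFS mount at /mnt/nfs: the flush to the server may only happen at close(), so an unchecked close() can silently lose data.

```c
/* Sketch: on an NFS mount, a failed write may be reported only at
 * close(), when the client flushes to the server. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* Hypothetical NFS-mounted path. */
    int fd = open("/mnt/nfs/report.txt", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char *msg = "important data\n";
    if (write(fd, msg, strlen(msg)) < 0) perror("write");

    /* The flush to the server can happen here; check it. */
    if (close(fd) < 0) {
        perror("close");  /* e.g., server unreachable, quota exceeded */
        return 1;
    }
    return 0;
}
```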
NFS Weak Consistency Protocol
A write updates the server immediately
Other clients poll the server periodically for changes (sketched below)
No guarantees for multiple writers
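A minimal sketch of the polling idea, assuming a hypothetical mount path and an illustrative 30-second period (real NFS clients cache attributes for roughly a few seconds to a minute): the client revalidates its cached copy by comparing the server-reported modification time.

```c
/* Sketch of NFS-style revalidation: re-read the cached file only when
 * the server reports a newer modification time. */
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    const char *path = "/mnt/nfs/shared.txt";  /* hypothetical NFS mount */
    struct stat st;
    if (stat(path, &st) != 0) { perror("stat"); return 1; }
    time_t cached_mtime = st.st_mtime;  /* attribute of our cached copy */

    for (;;) {
        sleep(30);                      /* poll period, illustrative */
        if (stat(path, &st) != 0) { perror("stat"); continue; }
        if (st.st_mtime != cached_mtime) {
            printf("file changed on server; refetching\n");
            cached_mtime = st.st_mtime; /* re-read the data here */
        }
    }
}
```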
NFS Summary
+ Simple and highly portable
- May occasionally become inconsistent, though this does not happen very often
Andrew File System (AFS)
Developed at CMU
Design principles:
- Files are cached on each client's disk (NFS caches only in client memory)
- Callbacks: the server records who holds a copy of each file
- Write-back caching on file close; the server then notifies every client that holds an old copy
- Session semantics: updates are visible only on close
A sketch of the server-side callback bookkeeping follows.
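Here is a minimal sketch of that bookkeeping, with illustrative names and fixed sizes (not the real AFS code): the server tracks the holders of each file and, when a modified copy is closed, breaks the callback to every other holder.

```c
/* Sketch of AFS-style callbacks: record which clients hold a copy of a
 * file; when a writer closes a modified copy, notify the other holders
 * that their copies are stale. */
#include <stdio.h>

#define MAX_CLIENTS 8

typedef struct {
    char file[32];
    int  holders[MAX_CLIENTS];  /* client ids holding a cached copy */
    int  nholders;
} CallbackList;

static void add_holder(CallbackList *cb, int client) {
    for (int i = 0; i < cb->nholders; i++)
        if (cb->holders[i] == client) return;  /* already recorded */
    cb->holders[cb->nholders++] = client;
}

/* Called when `writer` closes a modified copy of the file. */
static void break_callbacks(CallbackList *cb, int writer) {
    for (int i = 0; i < cb->nholders; i++)
        if (cb->holders[i] != writer)
            printf("notify client %d: your copy of %s is stale\n",
                   cb->holders[i], cb->file);
    cb->holders[0] = writer;  /* only the writer's copy stays valid */
    cb->nholders = 1;
}

int main(void) {
    CallbackList cb = { .file = "X" };
    add_holder(&cb, 1);       /* client A reads X */
    add_holder(&cb, 2);       /* client B reads X */
    break_callbacks(&cb, 1);  /* client A writes X and closes it */
    return 0;
}
```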
AFS Illustrated
[Diagram sequence: the server holds X in its cache.
1. Client A reads X; the server adds Client A to the callback list for X, and A caches X on its disk.
2. Client B reads X; the server adds Client B to the callback list, and B caches X.
3. Client A writes X, producing a modified copy in its cache.
4. Client A closes the file; the modified X is written back to the server, which breaks the callback to Client B, invalidating B's stale copy.
5. Client B later opens X and fetches the fresh copy from the server.]
AFS Failure Handling
If the server crashes, it asks all clients to help reconstruct its callback state
AFS vs. NFS
AFS:
- Less server load, thanks to the clients' disk caches
- The server is not involved for read-only files
Both AFS and NFS:
- The server is a performance bottleneck
- The server is a single point of failure
Serverless Network File Service (xFS)
Idea: construct the file system as a parallel program and exploit the high-speed LAN
Four major pieces:
- Cooperative caching
- Write-ownership cache coherence
- Software RAID
- Distributed control
Cooperative Caching
Uses remote memory to avoid going to disk
On a cache miss, check local memory, then remote memory, before going to disk (see the sketch below)
Before discarding the last in-memory copy of a block, send its contents to remote memory if possible
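A toy sketch of the miss path, with tiny hard-coded caches standing in for real client memories: a block is served from a peer's memory when possible, and the disk is touched only when every memory copy misses.

```c
/* Sketch of cooperative caching: local memory, then a peer's memory,
 * then disk. Remote RAM over a fast LAN beats a disk seek. */
#include <stdbool.h>
#include <stdio.h>

/* Toy caches of block ids; -1 means "slot empty". */
static int local_cache[4]  = { 7, -1, -1, -1 };  /* this client's RAM */
static int remote_cache[4] = { 3, 9, -1, -1 };   /* a peer's RAM */

static bool in_cache(const int *cache, int n, int block_id) {
    for (int i = 0; i < n; i++)
        if (cache[i] == block_id) return true;
    return false;
}

static void read_block(int block_id) {
    if (in_cache(local_cache, 4, block_id))
        printf("block %d: local memory hit\n", block_id);
    else if (in_cache(remote_cache, 4, block_id))
        printf("block %d: served from a peer's memory\n", block_id);
    else
        printf("block %d: miss everywhere, reading from disk\n", block_id);
}

int main(void) {
    read_block(7);   /* local hit */
    read_block(9);   /* remote (cooperative) hit */
    read_block(42);  /* disk */
    return 0;
}
```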
Cooperative Caching Illustrated
[Diagram sequence: Client A caches X; Clients B, C, and D start with empty caches. Client C misses locally and requests X; the block is served from Client A's memory, so C now caches X without any disk access.]
Write-Ownership Cache Coherence
A client that writes a file is declared its owner
While a file has an owner, no other client may hold a copy
Write-Ownership Illustrated
[Diagram sequence: Client A holds X as owner, read-write; Clients B, C, and D start with empty caches.
1. Client C reads X: Client A is demoted to a read-only copy, and C receives a read-only copy of X.
2. Client C then writes X: Client A's copy is invalidated, and C becomes the owner with read-write access.]
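The sequence above amounts to a small per-block state machine. Below is a toy version with illustrative names (not the actual xFS protocol): a read demotes the owner to read-only, and a write invalidates all other copies and transfers ownership.

```c
/* Sketch of write-ownership coherence for one cached block. */
#include <stdio.h>

#define NCLIENTS 4

typedef enum { INVALID, READ_ONLY, OWNER } State;
static State state[NCLIENTS];  /* per-client state for one block */

static void do_read(int c) {
    for (int i = 0; i < NCLIENTS; i++)
        if (state[i] == OWNER) state[i] = READ_ONLY;  /* demote owner */
    if (state[c] == INVALID) state[c] = READ_ONLY;
    printf("client %c reads (read-only copy)\n", 'A' + c);
}

static void do_write(int c) {
    for (int i = 0; i < NCLIENTS; i++)
        if (i != c) state[i] = INVALID;  /* no one else keeps a copy */
    state[c] = OWNER;
    printf("client %c writes (sole owner, read-write)\n", 'A' + c);
}

int main(void) {
    do_write(0);  /* A owns X */
    do_read(2);   /* C reads: A demoted to read-only, C shares X */
    do_write(2);  /* C writes: A's copy invalidated, C owns X */
    return 0;
}
```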
Other Components
Software RAID: stripe data redundantly over multiple disks (placement arithmetic sketched below)
Distributed control: file system managers are spread across all machines
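A quick sketch of the striping arithmetic (redundancy/parity omitted, stripe width illustrative): logical block i of a file lands on disk i mod N at stripe i / N.

```c
/* Sketch: mapping logical blocks onto a stripe group of N disks. */
#include <stdio.h>

int main(void) {
    const int ndisks = 4;  /* illustrative stripe width */
    for (int block = 0; block < 8; block++)
        printf("logical block %d -> disk %d, stripe %d\n",
               block, block % ndisks, block / ndisks);
    return 0;
}
```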
xFS Summary
Built from small, unreliable components
Data, metadata, and control can live on any machine
If one machine goes down, everything else continues to work
When machines are added, xFS starts to use their resources
xFS Summary
- Complexity and the associated performance degradation
- Hard to upgrade the software while keeping everything running