Slide 1: Caching
Slide 2: Overview
- Andrew Security
- Andrew Scale and Performance
- Sprite Performance
Slide 3: Andrew File System
Slide 4: Sprite
Slide 5: Network File System
Slide 6: Andrew File System
- AFS, AFS2, Coda: 1983 to present; Satya (M. Satyanarayanan) its champion
- Ideas spread to other systems, e.g. NT
Slide 7: Security Terms
- Release, Modification, Denial of Service
- Mutual suspicion, Modification, Conservation, Confinement, Initialization
- Identification, Authentication, Privacy, Nonrepudiation
Slide 8: System Components
- Vice: secure servers
- Virtue: protected workstations
- Venus: virtual file system (client side)
- Authentication Server
Slide 9: Andrew Encryption
- DES, private (secret) keys: E[msg, key], D[msg, key]
- Local copy of the secret key
- Exchange of keys doesn't scale
  - Web of trust extends to lots of servers
  - Pairwise keys unwieldy
Slide 10: Andrew Authentication
- Username sent in the clear
- Random-number exchange
  - E[X, key] sent to the server (Vice)
  - D[E[X, key], key] = X
  - E[X+1, key] returned to the client (Venus)
- BIND exchanges session keys
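The random-number exchange above can be sketched as a symmetric challenge-response. This is a minimal sketch, not AFS's implementation: `xor_crypt` is a toy keystream cipher standing in for DES (the deck notes XOR was in fact used for session encryption), and all function names are illustrative.

```python
import os

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher standing in for DES: XOR with a repeating key.
    E[msg, key] and D[msg, key] are the same operation."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def client_challenge(key: bytes):
    """Venus picks a random X and sends E[X, key] to Vice."""
    x = int.from_bytes(os.urandom(4), "big") >> 1  # keep X + 1 within 4 bytes
    return x, xor_crypt(x.to_bytes(4, "big"), key)

def server_respond(challenge: bytes, key: bytes) -> bytes:
    """Vice recovers X = D[E[X, key], key] and proves knowledge of the
    shared key by returning E[X + 1, key]."""
    x = int.from_bytes(xor_crypt(challenge, key), "big")
    return xor_crypt((x + 1).to_bytes(4, "big"), key)

def client_verify(x: int, response: bytes, key: bytes) -> bool:
    """Venus accepts iff the response decrypts to X + 1."""
    return int.from_bytes(xor_crypt(response, key), "big") == x + 1

key = b"per-user"  # secret key known to both client and server
x, challenge = client_challenge(key)
assert client_verify(x, server_respond(challenge, key), key)
```

Because only a holder of the shared key can transform E[X, key] into E[X+1, key], a correct response authenticates the server to the client without the key ever crossing the wire.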
Slide 11: Authentication Tokens
- Description of the user: ID, timestamp, valid/invalid
- Used to coordinate what should be available from Vice (server) to Virtue (client)
Slide 12: Access Control
- Hierarchical groups
  - Project/shared accounts discouraged
- Positive/negative rights: U(+) minus U(-)
- VMS: linear list and rights IDs
- NT: Prolog engine
- NetWare has better admin feedback
Slide 13: Resource Usage
- Network not an issue
  - Distributed DoS 'hard'
- Server: high-water mark
  - Violations by SU programs tolerated
  - Daemon processes given 'stem' account
- Workstations not an issue
  - User files live in Vice
Slide 14: Other Security Issues
- XOR for session encryption
- PC support via a special server
- Diskless workstations avoided
Slide 15: Enhancements
- Cells (compare NT domains)
- Kerberos
- Protection Server for user administration
Slide 16: Sprite Components
(diagram: client and server, each with a cache; client local disk and server disk)
Slide 17: Sprite Design
- Cache in client and server RAM
- Kernel file-system modification
  - Affects system/paging and user files
- Cache size negotiated with the VM system
- Delayed 30 s write-back
  - Called 'laissez-faire' by the Andrew authors
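Sprite's delayed write-back can be sketched as a dirty-block map plus a periodic flush daemon. This is a minimal sketch under assumed names (`DelayedWriteCache`, a dict standing in for the server); the real policy operates on kernel cache blocks, not whole files.

```python
import time

class DelayedWriteCache:
    """Sketch of Sprite-style delayed write-back: dirty data sits in the
    client cache and is flushed only after `delay` seconds (30 s in Sprite),
    so short-lived files may never reach the server at all."""
    def __init__(self, backing_store, delay=30.0, clock=time.monotonic):
        self.backing = backing_store   # stand-in for the server
        self.delay = delay
        self.clock = clock
        self.dirty = {}                # name -> (data, time written)

    def write(self, name, data):
        self.dirty[name] = (data, self.clock())

    def read(self, name):
        if name in self.dirty:
            return self.dirty[name][0]     # hit on not-yet-flushed data
        return self.backing.get(name)

    def delete(self, name):
        self.dirty.pop(name, None)         # short-lived file: never written back
        self.backing.pop(name, None)

    def sync(self):
        """Periodic daemon: flush entries older than `delay`."""
        now = self.clock()
        for name, (data, t) in list(self.dirty.items()):
            if now - t >= self.delay:
                self.backing[name] = data
                del self.dirty[name]
```

The payoff is that temporary files created and deleted within the window generate no server traffic, at the cost of losing up to 30 s of writes on a crash, which is what the Andrew authors meant by 'laissez-faire'.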
Slide 18: NFS Comparison
- Presumed optimized: RPC, access semantics
  - NFS uses UDP; the others TCP
- Sprite targets 100+ nodes
- Andrew targets 5,000+ nodes
Slide 19: Andrew Scale and Performance
- Dedicated server process per client
- Directory redirection for content
- Whole-file copy in the cache
Slide 20: Problems already...
- Context switching in the server
- TCP connection overhead
  - Session handled by the kernel
- Painful to move parts of the VFS to other servers
  - The volume abstraction fixed this later
Slide 21: Cache Management
- Andrew: write on close; concurrent write not supported; versioning; user level
- Sprite: delayed write; cache disabled on concurrent write; versioning; kernel level
Slide 22: Function Distribution
- TestAuth (validate cache): 61.7%
- GetFileStat (file status): 26.8%
- Fetch (server to client): 4.0%
- Store (client to server): 2.1%
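The call mix above, with cheap TestAuth validations dwarfing actual Fetch/Store transfers, falls out of the whole-file caching open path, sketched below. This is an illustrative sketch, not Venus's code; the class and method names (`WholeFileClient`, `test_auth`, `fetch`) are assumptions.

```python
class WholeFileClient:
    """Sketch of the client open path with whole-file caching: every open
    of a cached file issues a (cheap) TestAuth to validate the copy, and
    the (expensive) Fetch runs only on a miss or a stale copy."""
    def __init__(self, server):
        self.server = server
        self.cache = {}                # fid -> (version, data)
        self.calls = {"TestAuth": 0, "Fetch": 0}

    def open(self, fid):
        if fid in self.cache:
            version, data = self.cache[fid]
            self.calls["TestAuth"] += 1
            if self.server.test_auth(fid, version):   # cached copy still valid?
                return data
        self.calls["Fetch"] += 1                      # miss or stale: whole-file copy
        version, data = self.server.fetch(fid)
        self.cache[fid] = (version, data)
        return data

class Server:
    """Toy stand-in for Vice: files keyed by fid, each with a version."""
    def __init__(self, files):
        self.files = files            # fid -> (version, data)
    def test_auth(self, fid, version):
        return self.files[fid][0] == version
    def fetch(self, fid):
        return self.files[fid]
```

Since files change rarely, almost every open becomes a TestAuth round trip that answers "still valid"; AFS2's callbacks later inverted this, letting the server promise to notify on change so the client could skip the check entirely.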
Slide 23: Performance Improvements
- Virtue caches directories
- Local copy assumed correct
- File IDs, not names, exchanged
- Lightweight processes (LWPs)
  - Context data record on the server
Slide 24: Andrew Benchmarks
Slide 25: Sprite Throughput
Slide 26: Sprite Benchmarks
Slide 28: Cache Impact - Client
Slide 29: Cache Impact - Server
Slide 30: Cache Impact - Net
Slide 31: Comparison
Slide 32: General Considerations
- 17-20% slower than local
- Server is the bottleneck
- Scan-for-files and read workloads: almost all local
- 6-8x faster than no cache
- Server cache extends the local cache
- Remote paging as fast as local disk!
- 5x users per server
Slide 33: Fini