1
Separating Abstractions from Resources in a Tactical Storage System Douglas Thain University of Notre Dame http://www.nd.edu/~ccl
2
Abstract
Users of distributed systems encounter many practical barriers between their jobs and the data they wish to access.
Problem: Users have access to many resources (disks), but are stuck with the abstractions (cluster NFS) provided by administrators.
Solution: Tactical Storage Systems allow any user to create, reconfigure, and tear down abstractions without bugging the administrator.
3
The Standard Model
[Diagram: a cluster's transparent distributed filesystem, backed by a single shared disk.]
4
[Diagram: two clusters, each with its own transparent distributed filesystem on a shared disk, while the private disk on every node sits unused; between the clusters, users fall back on FTP, SCP, RSYNC, HTTP,...]
5
Problems with the Standard Model
Users encounter partitions in the WAN.
–Easy to access data inside the cluster, hard outside.
–Must use different mechanisms on different links.
–Difficult to combine resources together.
Resources go unused.
–Disks on each node of a cluster.
–Unorganized resources in a department/lab.
Unnecessary cross-talk between users.
–User A demands async NFS for performance.
–User B demands sync NFS for consistency.
A global file system is not possible!
6
What if...
Users could easily access any storage?
I could borrow an unused disk for NFS?
An entire cluster could be used as storage?
Multiple clusters could be combined?
I could reconfigure structures without root?
–(Or without bugging the administrator daily.)
Solution: Tactical Storage System (TSS)
7
Outline
Problems with the Standard Model
Tactical Storage Systems
–File Servers, Catalogs, Abstractions, Adapters
Applications:
–Remote Dynamic Linking in HEP Simulation
–Remote Database Access in HEP Simulation
–Expandable Filesystem for Astrophysics Data
–Expandable Database for Mol. Dynamics Simulation
Final Thoughts
8
Tactical Storage Systems (TSS)
A TSS allows any node to serve as a file server or as a file system client.
All components can be deployed without special privileges – but with security.
Users can build up complex structures.
–Filesystems, databases, caches,...
Two Independent Concepts:
–Resources – the raw storage to be used.
–Abstractions – the organization of storage.
9
[Diagram: the TSS architecture. UNIX workstations each run a file server; owners control policy on each machine. A cluster exports its storage through file servers; the cluster administrator controls policy on all storage in the cluster. Applications reach the file servers through adapters, which implement abstractions such as a central filesystem, a distributed filesystem, and a distributed database.]
10
Components of a TSS:
1 – File Servers
2 – Catalogs
3 – Abstractions
4 – Adapters
11
1 – File Servers
Unix-Like Interface
–open/close/read/write
–getfile/putfile to stream whole files
–opendir/stat/rename/unlink
Complete Independence
–choose friends
–limit bandwidth/space
–evict users?
Trivial to Deploy
–run server + setacl
–no privilege required
–can be thrown into a grid system
Flexible Access Control
[Diagram: file server A and file server B each export a local file system over the Chirp protocol; the owner of each server sets its own policy.]
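A file server is just a user-level process; a minimal sketch of deploying one and talking to it, assuming the chirp_server and chirp commands from the cctools distribution (option names are from memory of the cctools manuals and may differ by version; hostnames and paths are illustrative):

    # Start a file server exporting an ordinary directory; no root needed.
    chirp_server -r /home/dthain/scratch

    # From any machine, drive it with the chirp client:
    chirp myhost.cs.nd.edu put local.dat remote.dat   # stream a whole file in
    chirp myhost.cs.nd.edu get remote.dat copy.dat    # stream a whole file out
    chirp myhost.cs.nd.edu ls /                       # directory listing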
12
Access Control in File Servers
Unix Security is not Sufficient
–No global user database is possible or desirable.
–Mapping external credentials to Unix gets messy.
Instead, Make External Names First-Class
–Perform access control on remote, not local, names.
–Types: Globus, Kerberos, Unix, Hostname, Address
Each directory has an ACL:
globus:/O=NotreDame/CN=DThain RWLA
kerberos:dthain@nd.edu RWL
hostname:*.cs.nd.edu RL
address:192.168.1.* RWLA
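These entries are managed with the setacl step mentioned on the previous slide; a hedged sketch using the cctools chirp client (the client conventionally spells rights in lowercase; server name and paths are illustrative):

    # Give one Kerberos user read/write/list on a directory,
    # and give any host in cs.nd.edu read/list access.
    chirp myhost.cs.nd.edu setacl /mydata kerberos:dthain@nd.edu rwl
    chirp myhost.cs.nd.edu setacl /mydata 'hostname:*.cs.nd.edu' rl

    # Inspect the resulting ACL.
    chirp myhost.cs.nd.edu getacl /mydata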
13
Problem: Shared Namespace
[Diagram: a file server grants globus:/O=NotreDame/* RWLAX on one directory; every user's files (a.out, test.c, test.dat, cms.exe) land in the same shared namespace.]
14
Solution: The Reservation (V) Right
[Diagram: the ACL grants O=NotreDame/CN=* V(RWLA) – "mkdir only!" Any Notre Dame user may only create a new directory; the creator then holds RWLA within it. /O=NotreDame/CN=Monk and /O=NotreDame/CN=Ted each mkdir their own directory, keeping their a.out and test.c files separate.]
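The V right carries, in parentheses, the rights the creator will receive inside the new directory; a sketch of setting the policy above with the chirp client (syntax assumed from the slide's V(RWLA) notation):

    # Any Notre Dame certificate holder may only mkdir at the top level;
    # whoever creates a directory then holds rwla within it.
    chirp myhost.cs.nd.edu setacl / 'globus:/O=NotreDame/CN=*' 'v(rwla)'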
15
2 – Catalogs
[Diagram: file servers send periodic UDP updates to a pair of catalog servers; clients query the catalogs over HTTP, receiving XML, TXT, or ClassAds.]
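Because the catalog answers plain HTTP, any client can enumerate the known servers; a sketch assuming the conventional cctools catalog host and port and a /query.text path for the TXT format (host, port, and path are assumptions, not taken from the slide):

    # Fetch the current list of file servers as plain text;
    # the same data is also served as XML or ClassAds.
    curl http://catalog.cse.nd.edu:9097/query.text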
16
3 – Abstractions
An abstraction is an organizational layer built on top of one or more file servers.
End users choose what abstractions to employ.
Working Examples:
–CFS: Central File System
–DSFS: Distributed Shared File System
–DSDB: Distributed Shared Database
Others Possible?
–Distributed Backup System
–Striped File System (RAID/Zebra)
17
CFS: Central File System
[Diagram: an application's file operations pass through the adapter to a single file server, which holds the file.]
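Under the adapter, CFS is nothing more than a path; a sketch assuming Parrot's /chirp/<host>/<path> namespace (server name illustrative; the command is spelled parrot_run in later cctools releases):

    # Run an unmodified shell under the adapter; the file server
    # appears as an ordinary directory tree.
    parrot tcsh
    % cat /chirp/myhost.cs.nd.edu/mydata/results.txt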
18
DSFS: Distributed Shared File System
[Diagram: the adapter first looks up a file's location via a pointer held on one file server, then accesses the data directly on the file server that stores it; files are spread across several servers.]
19
DSDB: Distributed Shared Database
[Diagram: to insert, the adapter creates a file on a file server and registers it in the index kept by a database server; to query, the application asks the database server, then accesses matching files directly on the file servers.]
20
4 – Adapters
The adapter, Parrot, acts like an OS kernel:
–Tracks processes, files, etc.
–Adds new capabilities.
–Enforces the storage owner's policies.
Delegated System Calls:
–Trapped via the ptrace interface.
–Action taken by Parrot on the process's behalf.
–Resources charged to Parrot.
The User Chooses the Abstraction (CFS – DSFS – DSDB):
–Appears to applications as an ordinary filesystem.
–Options: timeout tolerance, consistency semantics, which servers to use, authentication mechanisms.
[Diagram: unmodified programs such as tcsh, cat, and vi run above Parrot, which keeps its own process table and file table.]
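The options listed above surface as command-line switches on Parrot; a hedged sketch (the -M mount syntax appears on slide 32 of this talk; the -a authentication switch is an assumption from memory of the cctools manuals, and names/paths are illustrative):

    # Mount a remote directory into the private namespace, pick an
    # authentication mechanism, and run an unmodified application.
    parrot -M /data=/chirp/myhost.cs.nd.edu/mydata -a hostname ./analysis /data/input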
27
[Diagram repeated from slide 9: applications use adapters to reach central-filesystem, distributed-filesystem, and distributed-database abstractions built from file servers on UNIX workstations (owners control per-machine policy) and on a cluster (the administrator controls policy on all cluster storage).]
28
Performance Summary
Nothing comes for free!
–System calls: an order of magnitude slower.
–Memory bandwidth overhead: extra copies.
–TSS can drive the network/switch to its limits.
Compared to the NFS Protocol:
–TSS slightly better on small operations (no lookup).
–TSS much better in network bandwidth (TCP).
–NFS caches, TSS doesn't (today): a mixed blessing.
On Real Applications:
–Measurable slowdown.
–Benefit: far more flexible and scalable.
29
Outline
Problems with the Standard Model
Tactical Storage Systems
–File Servers, Catalogs, Abstractions, Adapters
Applications:
–Remote Dynamic Linking in HEP Simulation
–Remote Database Access in HEP Simulation
–Expandable Filesystem for Astrophysics Data
–Expandable Database for Mol. Dynamics Simulation
Final Thoughts
30
Remote Dynamic Linking
Modular Simulation Needs Many Libraries
–Developed on workstations, then ported to the grid.
–Selection of library depends on the analysis technique.
–Each job selects several MB from 60 GB of libraries.
Solution: Dynamic Linking with TSS and FTP
–LD_LIBRARY_PATH=/ftp/server.name/libs
–Send the adapter along with the job.
[Diagram: the application's ld.so, running under the adapter, fetches liba.so, libb.so, and libc.so from an FTP server over the WAN via the adapter's FTP driver, using anonymous login.]
Credit: Igor Sfiligoi @ Fermi National Lab
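Concretely, the job wrapper only needs to point the dynamic linker at a Parrot path before launching the simulation; a sketch built from the slide's LD_LIBRARY_PATH setting (the sim.exe name is illustrative):

    # ld.so resolves liba.so, libb.so, libc.so through the adapter's
    # FTP driver, pulling only the few MB actually needed.
    export LD_LIBRARY_PATH=/ftp/server.name/libs
    parrot ./sim.exe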
31
Related Work
Lots of file services for the Grid:
–GridFTP, FreeLoader, NeST, IBP, SRB, RFIO,...
–The adapter interfaces with many of these!
Why have another file server?
–Reason 1: Must have precise Unix semantics!
Apps distinguish ENOENT vs EACCES vs EISDIR.
FTP always returns error 550, regardless of the error.
–Reason 2: TSS is focused on easy deployment.
No privilege required, no config files, no rebuilding, flexible access control,...
32
Remote Database Access
HEP Simulation Needs Direct DB Access
–App is linked against the Objectivity DB.
–Objectivity accesses the filesystem directly.
–How to distribute the application securely?
Solution: Remote Root Mount via TSS, with GSI authentication:
–parrot -M /=/chirp/fileserver/rootdir
–DB code can read/write/lock files directly.
[Diagram: the simulation script runs under the adapter using CFS; libdb.so and sim.exe reach the DB data on a TSS file server across the WAN.]
Credit: Sander Klous @ NIKHEF
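Putting the slide's pieces together, the whole job runs unmodified under a remote root; a sketch (the mount command is from the slide; selecting GSI with -a globus is an assumption about the option spelling):

    # Mount the file server's rootdir as /, authenticate with GSI,
    # and run the Objectivity-linked simulation unchanged.
    parrot -M /=/chirp/fileserver/rootdir -a globus ./sim.exe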
33
Performance on EDG Testbed
Setup     Time to Init   Time/Event
Unix      446 +/- 46     64s
LAN/NFS   4464 +/- 172   113s
LAN/TSS   4505 +/- 155   113s
WAN/TSS   6275 +/- 330   88s
34
Expandable Filesystem for Experimental Data
Project GRAND (http://www.nd.edu/~grand)
Credit: John Poirier @ Notre Dame Astrophysics Dept.
[Diagram: detector data flows onto a buffer disk (10 GB/day today – could be lots more!), then onto daily tapes in a 25-year archive; the analysis code can only analyze the most recent data, still on the buffer disk.]
35
Expandable Filesystem for Experimental Data
Project GRAND (http://www.nd.edu/~grand)
Credit: John Poirier @ Notre Dame Astrophysics Dept.
[Diagram: the same pipeline, but the daily tapes are staged onto a distributed shared filesystem built from four file servers; through the adapter, the analysis code can analyze all data over large time scales.]
36
Appl: Distributed MD Database
State of Molecular Dynamics Research:
–Easy to run lots of simulations!
–Difficult to understand the "big picture."
–Hard to systematically share results and ask questions.
Desired Questions and Activities:
–"What parameters have I explored?"
–"How can I share results with friends?"
–"Replicate these items five times for safety."
–"Recompute everything that relied on this machine."
GEMS: Grid Enabled Molecular Simulations
–A distributed database for MD simulation at Notre Dame.
–XML database for indexing; TSS for storage and policy.
37
GEMS Distributed Database
[Diagram: database servers, tracked by catalog servers, map XML descriptions of simulations to replica locations (e.g. host1:fileA, host7:fileB, host3:fileC). A query such as Temp>300K, Mol==CH4 returns the matching replicas (host5:fileZ, host6:fileX), whose data is then accessed directly on the file servers.]
Credit: Jesus Izaguirre and Aaron Striegel, Notre Dame CSE Dept.
38
Active Recovery in GEMS
39
GEMS and Tactical Storage
Dynamic System Configuration
–Add/remove servers, discovered via the catalog.
Policy Control in File Servers
–Groups can collaborate within constraints.
–Security is implemented within the file servers.
Direct Access via Adapters
–Unmodified simulations can use the database.
–Alternate web/visualization interfaces for users.
40
Outline
Problems with the Standard Model
Tactical Storage Systems
–File Servers, Catalogs, Abstractions, Adapters
Applications:
–Remote Dynamic Linking in HEP Simulation
–Remote Database Access in HEP Simulation
–Expandable Filesystem for Astrophysics Data
–Expandable Database for Mol. Dynamics Simulation
Final Thoughts
41
Tactical Storage Systems
Separate Abstractions from Resources
Components:
–Servers, catalogs, abstractions, adapters.
–Completely user level.
–Performance acceptable for real applications.
Independent but Cooperating Components:
–Owners of file servers set policy.
–Users must work within those policies.
–Within policies, users are free to build.
42
Ongoing Work
Malloc() for the Filesystem
–Resource owners want to limit users (quota).
–End users need space assurance (alloc).
–Need per-user allocations, not just global limits.
Dynamic System Management
–Add a node, delete a node, reconfigure.
–Need tools that allow rebalancing as needed.
Distributed Access Control
–ACLs refer to group definitions stored elsewhere.
–What's new? Fault tolerance and policy management.
Processing in Storage (PINS)
–Move computation to the data.
–Needs a new programming (scripting) model.
44
Acknowledgments
Science Collaborators:
–Jesus Izaguirre
–Sander Klous
–Peter Kunszt
–Erwin Laure
–John Poirier
–Igor Sfiligoi
–Aaron Striegel
CSE Graduate Students:
–Paul Brenner
–James Fitzgerald
–Jeff Hemmes
–Paul Madrid
–Chris Moretti
–Phil Snowberger
–Justin Wozniak
45
For more information...
Cooperative Computing Lab: http://www.cse.nd.edu/~ccl
Cooperative Computing Tools: http://www.cctools.org
Douglas Thain
–dthain@cse.nd.edu
–http://www.cse.nd.edu/~dthain
46
Extra Slides
–Different sized disks
–Check contents
–Black stinks
47
Performance – System Calls
48
Performance – Applications
[Chart: application benchmarks, parrot only.]
49
Performance – I/O Calls
50
Performance – Bandwidth
51
Performance – DSFS