xrootd Update - Andrew Hanushevsky, Stanford Linear Accelerator Center, 15-Feb-05

1 xrootd Update
Andrew Hanushevsky, Stanford Linear Accelerator Center
15-Feb-05
http://xrootd.slac.stanford.edu

2 Goals
- High-performance file-based access
  - Scalable, extensible, usable
  - Fault tolerant: servers may be dynamically added and removed
- Flexible security
  - Allows use of almost any authentication protocol
- Simplicity
  - Can run xrootd out of the box (see the sketch after this slide)
  - No config file needed for simple or small installations
- Generality
  - Can configure xrootd for ultimate performance
  - Meant for intermediate to large-scale sites
- rootd compatibility
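As a rough illustration of the "out of the box" point, a small installation can start a standalone server without writing a configuration file. This is a hedged sketch: the exported path is a placeholder, 1094 is the registered xrootd port, and flags may differ slightly between releases.

    # minimal sketch; /data is a placeholder export path, 1094 is the registered xrootd port
    xrootd -p 1094 /data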

3 How is high performance achieved?
- Rich but efficient server protocol
  - Combines file serving with peer-to-peer elements
  - Allows client hints for improved performance (pre-read, prepare, client access and processing hints)
  - Multiplexed request stream: multiple parallel requests allowed per client
- An extensible base architecture
  - Heavily multi-threaded; clients are given dedicated threads whenever possible
  - Extensive use of OS I/O features: async I/O, memory-mapped I/O, device polling, etc. (see the sketch after this slide)
  - Load-adaptive reconfiguration
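To make the "extensive use of OS I/O features" point concrete, here is a minimal sketch of POSIX asynchronous I/O, one of the OS facilities the slide lists. It is illustrative only and is not xrootd server code.

    // Minimal POSIX asynchronous I/O sketch (illustrative only; not xrootd server code).
    #include <aio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int fd = open("/etc/hosts", O_RDONLY);           // any readable file will do
        if (fd < 0) { perror("open"); return 1; }

        char buf[4096];
        struct aiocb cb;
        std::memset(&cb, 0, sizeof(cb));
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof(buf);
        cb.aio_offset = 0;

        if (aio_read(&cb) < 0) { perror("aio_read"); return 1; }   // queue the read and return at once

        // A server thread is free to service other clients here while the kernel does the I/O.

        const struct aiocb *list[1] = { &cb };
        aio_suspend(list, 1, nullptr);                   // block only when the result is needed
        ssize_t n = aio_return(&cb);                     // bytes actually read
        std::printf("read %zd bytes asynchronously\n", n);
        close(fd);
        return 0;
    }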

4 How performant is it?
- Can deliver data at disk speed (streaming mode)
  - Assuming a good network, a properly sized TCP buffer, and asynchronous I/O
- Low CPU overhead
  - About 75% less CPU than NFS for the same data load
- General requirements
  - A middling-speed machine with a matched NIC
  - Rule of thumb: 1 MHz of CPU per 1 Mb/s of network, plus 20% (minimum); a worked example follows this slide
  - The more CPUs the better
  - 1 GB of RAM
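Applying the rule of thumb to a common case (the numbers here are mine, not from the slide): a server expected to keep a 1 Gb/s NIC busy needs roughly 1000 MHz of CPU for the data flow, plus the 20% margin.

    1000 Mb/s x (1 MHz per 1 Mb/s) = 1000 MHz
    1000 MHz x 1.20 = 1.2 GHz of CPU (minimum)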

5 Measured performance (performance chart; data not reproduced in this transcript)

6 xrootd vs. NFS (2/9/2005, Gregory Schott, DESY) (comparison chart; data not reproduced in this transcript)

7 xrootd Server Architecture (layer diagram)
- Protocol & thread manager (included in distribution)
- Protocol layer
- Filesystem logical layer
- Filesystem physical layer
- Filesystem implementation
The diagram labels the core of this plug-in stack the server's "p2p heart"; a hypothetical sketch of the layering idea follows this slide.
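The sketch below illustrates the idea of the layered stack above: each layer is an interface that drives the one beneath it. The interface and class names are mine for illustration, not xrootd's real classes.

    // Hypothetical layering sketch; the names here are illustrative, not xrootd's real interfaces.
    #include <string>
    #include <vector>

    // Filesystem physical layer: how bytes actually reach disk (or a mass storage system).
    struct PhysicalStore {
        virtual ~PhysicalStore() = default;
        virtual std::vector<char> read(const std::string& path, long offset, int length) = 0;
    };

    // Filesystem logical layer: name space and file semantics, built on a physical store.
    struct LogicalFs {
        explicit LogicalFs(PhysicalStore& store) : store_(store) {}
        std::vector<char> read(const std::string& path, long offset, int length) {
            return store_.read(path, offset, length);   // the real layer adds naming, access checks, etc.
        }
    private:
        PhysicalStore& store_;
    };

    // Protocol layer: decodes client requests and drives the logical layer.
    struct ProtocolHandler {
        explicit ProtocolHandler(LogicalFs& fs) : fs_(fs) {}
        std::vector<char> handleReadRequest(const std::string& path, long offset, int length) {
            return fs_.read(path, offset, length);
        }
    private:
        LogicalFs& fs_;
    };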

8 Acronyms, Entities & Relationships (cluster diagram)
- Data clients, redirectors, and data servers; each redirector and data server node runs an xrootd paired with an olbd
- Control network: managers and servers exchange resource information and file location
- Data network: redirectors steer clients to data; data servers provide the data

9 Closer Look Into Now & The Future
- Adaptive reconfiguration
- Load balancing
- Fault tolerance
- Proxy services
- Security & privacy
- Monitoring & client tuning
- File services (MSS, SRM, & Grid)

10 Adaptive Reconfiguration
- The server dynamically adjusts its configuration
  - Number of threads
  - Count and sizes of buffers and objects
- Finally got the algorithms right: 400 clients can be accommodated in about 80 MB of RAM (roughly 200 KB per client)
- High-latency connections are rescheduled, which affects how async I/O is done
- Future: make the algorithms CPU-speed sensitive for an appropriate time/space trade-off

11 Load Balancing
- xrootd scales in multiple dimensions: multiple load-balanced xrootd's can be run
- Architected as self-configuring structured peer-to-peer (SP2) data servers
  - Servers can be added and removed at any time
- The client (XTNetFile) understands SP2 configurations
  - xrootd informs the client when running in this mode
  - The client has more recovery options in the event of failure

12 Example: SLAC Configuration (cluster diagram: client machines; data server nodes kan01, kan02, kan03, kan04, ... kanxx; olbd hosts bbr-olb03 and bbr-olb04; kanolb-a)

13 Next Load Balancing Step
- Fully deploy the scalable p2p architecture (on the order of 2^16 servers)
  - Accommodates swarms of up to 64,000 servers, perhaps more
- Self-organizing (no config file changes needed)
  - Servers self-organize into cells of 64
  - Cells cluster into a minimum spanning tree
- Testing alpha code now; some message semantics still need work

14 Fault Tolerance
- Servers and resources may come and go
- Load balancing is used to effect recovery
  - New servers can be added at any time, for any reason
  - Files can be moved around in real time
  - The client simply adjusts to the new configuration; the XTNetFile object handles the recovery protocol (a sketch follows this slide)
- Future: provide a protocol "unresponsive" (UnResp) interface that can be used to perform reactive client scheduling
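A hypothetical sketch of the client-side recovery loop described above: on a failure, the client goes back to the redirector and retries on whichever data server it is steered to. The function and type names are mine; the real logic lives in XTNetFile.

    // Hypothetical client-side failover sketch; not the actual XTNetFile code.
    #include <cstdio>
    #include <stdexcept>
    #include <string>

    // Stand-ins for the real client machinery (illustrative only).
    struct Connection { std::string server; };

    Connection openViaRedirector(const std::string& path) {
        // The real client asks a redirector, which steers it to a live data server.
        return Connection{"data-server.example.org"};
    }

    int readBlock(Connection& conn, char* buf, long offset, int length) {
        // The real client issues a read request over the xrootd protocol; here we pretend it succeeded.
        (void)conn; (void)buf; (void)offset;
        return length;
    }

    int faultTolerantRead(const std::string& path, char* buf, long offset, int length) {
        for (int attempt = 0; attempt < 3; ++attempt) {          // bounded retries
            try {
                Connection conn = openViaRedirector(path);       // redirector picks a server
                return readBlock(conn, buf, offset, length);     // read from that server
            } catch (const std::exception&) {
                // Server vanished or the file moved: go back to the redirector and try again.
            }
        }
        throw std::runtime_error("no data server available for " + path);
    }

    int main() {
        char buf[1024];
        std::printf("read %d bytes\n", faultTolerantRead("/store/file.root", buf, 0, 1024));
        return 0;
    }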

15 Bottom-Heavy System
- The olbd/xrootd architecture is bottom heavy
- Allows super-rapid client dispersal
- Ideal scaling characteristics

16 Current Proxy Support (deployment diagram: client machines, firewall, proxy xrootd with olbd, data servers data01-data04; IN2P3/INDRA example)

17 Future: Comprehensive Proxies
- Current support addresses external access only
- A solution is needed when clients are also behind a firewall
  - Allow proxy-to-proxy connections (a local proxy xrootd talking to a remote proxy xrootd, which reaches data servers such as data02, data03, data04)
  - Direct connections will still be needed for performance

18 Flexible Security
- Negotiated security protocol
  - Allows client and server to agree on a protocol
  - Can be easily extended
- Multi-protocol authentication support
  - Protocols implemented as dynamic plug-ins (see the sketch after this slide)
- Future: add GSI and encryption interfaces
  - Foundation for distributed proxy security
  - Accommodates data encryption for privacy
  - Geri Ganis, CERN, is working on this
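For illustration, authentication protocols are selected in the server configuration file as loadable plug-ins. The snippet below is a hedged sketch: the library path, keytab, and principal are placeholders, and the exact directive arguments can vary by release.

    # illustrative security configuration sketch; paths and parameters are placeholders
    xrootd.seclib /opt/xrootd/lib/libXrdSec.so
    sec.protocol  krb5 /etc/krb5.keytab xrootd/host.example.org@EXAMPLE.ORG
    sec.protocol  host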

19 Scalable Proxy Security (diagram: SLAC proxy, RAL proxy, data servers)
1. Authenticate and develop a session key
2. Distribute the session key to authenticated subscribers
3. Data servers can log into each other using the session key

20 Monitoring
- Today xrootd can provide event traces (see the sketch after this slide)
  - Login, file open, close, disconnect
  - Application markers and client information
  - All read/write requests
  - A highly scalable architecture is used
- Future: a complete toolkit for utilizing the data
  - Critical for tuning applications
  - Jacek Becla, SLAC, is working on this
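Event tracing is enabled in the server configuration and streamed to a collector. The line below is a hedged sketch: the collector host, port, and intervals are placeholders, and the option names may vary by release.

    # illustrative monitoring sketch; collector host and intervals are placeholders
    xrootd.monitor all flush 30s window 5s dest files io user monhost.example.org:9930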

21 MSS Support
- Lightweight, MSS-agnostic interfaces are provided (a configuration sketch follows this slide)
  - oss.mssgwcmd command: invoked for each create, dirlist, mv, rm, stat
  - oss.stagecmd |command: a long-running command using a request-stream protocol, used to populate the disk cache (i.e., "stage-in")
- (diagram: the xrootd oss layer drives the MSS through mssgwcmd and stagecmd)
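Putting the two directives together, a configuration stub might look like the sketch below. Only the directive names and the "|" prefix for the long-running command come from the slide; the script paths are placeholders.

    # illustrative MSS interface sketch; script paths are placeholders
    oss.mssgwcmd /opt/xrootd/scripts/mssgw.sh
    oss.stagecmd |/opt/xrootd/scripts/stagein.sh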

22 SRM Direction
- The MSS interface is the ideal spot for an SRM hook
  - Can simply use the existing hooks: mssgwcmd & stagecmd
  - Or define a new long-running hook: oss.srm |command (sketched after this slide)
  - It would process all external disk cache management requests
- (diagram: the xrootd oss layer talking to the MSS through an SRM)
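As proposed on the slide, the new hook would sit alongside the existing MSS directives. The line below shows the shape of that proposal; oss.srm is the slide's proposed directive, not necessarily a shipped feature, and the path is a placeholder.

    # illustrative sketch of the proposed SRM hook; the path is a placeholder
    oss.srm |/opt/xrootd/scripts/srm-agent.sh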

23 Future: SRM
- Great interest in SRM, which provides an external data management interface (MSS and Grid)
- Unfortunately, the SRM interface is in flux (heavy vs. light protocol)
- Two sites are looking at providing an SRM layer: BNL and IN2P3
- Will work with the LLNL team to speed the effort

24 Conclusion
- xrootd provides high-performance file access
  - It appears to be one of the best in its class
  - Unique performance, usability, scalability, security, compatibility, and recoverability characteristics
  - One server can easily support over 600 parallel clients
  - New software architecture
- Challenges: maintain scaling while interfacing to external systems
- Opportunities: the ability to provide data access at the LHC scale

