High Performance Grid Data Transfer Bill Allcock, ANL BNL Technology Meeting 26 April 2004.


1 High Performance Grid Data Transfer Bill Allcock, ANL BNL Technology Meeting 26 April 2004

2 Outline
- Transport and Related Services
- GridFTP and XIO
- Alternate Transport Protocols
- Optical Networking
- Firewalls

3 Transport and Related Services

4 Services: What's the Big Deal?
- Grids (Globus, anyway) have been service oriented from the start.
- However, every service had its own unique, non-standard, and not necessarily well documented protocol.
- Web services began to gain traction, and via SOAP and WSDL provided a nice interface, good tooling, etc.
- But they did not deal with some issues critical to the Grid, such as state and lifetime management.

5 Open Grid Services Infrastructure (OGSI)
- The first crack at melding the Grid with Web services.
- Largely driven by Globus and IBM, but the OGSI working group in GGF was very active in its creation.
- Worked well, and gained lots of support
- Except from the Web service stack vendors…

6 Grid and Web Services: Convergence?
[Diagram: Grid (GT1, GT2, OGSI) and Web (HTTP; WSDL, WS-*; WSDL 2, WSDM) started far apart in apps & tech and have been converging. Will they meet?]
However, despite enthusiasm for OGSI, adoption within the Web community turned out to be problematic.

7 Three Major Web Services Concerns about OGSI
- Too much stuff in one specification
- Does not work well with existing Web services tooling
- Too "object oriented"
  - Read: "It didn't fit their model"

8 Concerns Addressed
- Too much stuff in one specification
  - WSRF partitions OGSI v1.0 functionality into a family of composable specifications
- Does not work well with existing Web services tooling
  - WSRF tones down the usage of XML Schema
- Too object oriented
  - WSRF makes an explicit distinction between the "service" and the stateful "resources" acted upon by that service

9 Grid and Web Services: Convergence: Yes!
[Diagram: the converging Grid (GT1, GT2, OGSI) and Web (HTTP; WSDL, WS-*; WSDL 2, WSDM) tracks meet at WSRF.]
The definition of WSRF means that the Grid and Web communities can move forward on a common base.

10 From OGSI to WSRF: Refactoring and Evolution**

  OGSI                          WSRF
  Grid Service Reference        WS-Addressing Endpoint Reference
  Grid Service Handle           WS-Addressing Endpoint Reference
  HandleResolver portType       WS-RenewableReferences
  Service data defn & access    WS-ResourceProperties
  GridService lifetime mgmt     WS-ResourceLifeCycle
  Notification portTypes        WS-Notification
  Factory portType              Treated as a pattern
  ServiceGroup portTypes        WS-ServiceGroup
  Base fault type               WS-BaseFaults

**Draft document at www.globus.org/wsrf this week

11 GridFTP and XIO

12 What is GridFTP?
- A secure, robust, fast, efficient, standards-based, widely accepted data transfer protocol
- A protocol:
  - Multiple independent implementations can interoperate
  - This works. Both the Condor Project at the University of Wisconsin and Fermilab have home-grown servers that work with ours.
  - Lots of people have developed clients independent of the Globus Project.
- We also supply a reference implementation:
  - Server
  - Client tools (globus-url-copy)
  - Development libraries

13 wuftpd-based GridFTP

Existing Functionality
- Security
- Reliability / restart
- Parallel streams
- Third-party transfers
- Manual TCP buffer size
- Server-side processing
- Partial file transfer
- Large file support
- Data channel caching
- Integrated instrumentation
- De facto standard on the Grid

New Functionality in 3.2
- Server improvements
- Structured file info (MLST, MLSD)
- Checksum support
- chmod support
- globus-url-copy changes
- File globbing support
- Recursive dir moves
- RFC 1738 support
- Control of restart
- Control of DC security

14 New GridFTP Implementation
- 100% Globus code. No licensing issues.
- GT3.2 alpha had a very minimal implementation.
- GT3.2 final does *NOT* have the new server. It will be released in 4.0.
- wuftpd-specific functionality, such as virtual domains, will NOT be present.
- Will have IPv6 support included (EPRT, EPSV).
- Based on XIO.
- Extremely modular, to allow integration with a variety of data sources (files, mass stores, etc.).
- Striping will also be present in 4.0.

15 GridFTP at SC'2000: Long-Running Dallas-Chicago Transfer
[Throughput plot annotated with events: SciNet power failure; other demos starting up (congestion); parallelism increases (demos); backbone problems on the SC floor; DNS problems; transitions between files (throughput not zero due to averaging).]

16 GridFTP: Standards Based
- The FTP protocol is defined by several IETF RFCs
- Start with the most commonly used subset
  - Standard FTP: get/put etc., 3rd-party transfer
- Implement standard but often unused features
  - GSS binding, extended directory listing, simple restart
- Extend in various ways, while preserving interoperability with existing servers
  - Striped/parallel data channels, partial file, automatic & manual TCP buffer setting, progress monitoring, extended restart

17 GridFTP: Standards Based (cont.)
- Existing standards
  - RFC 959: File Transfer Protocol
  - RFC 2228: FTP Security Extensions
  - RFC 2389: Feature Negotiation for the File Transfer Protocol
  - Draft: FTP Extensions
- New drafts
  - "GridFTP: Protocol Extensions to FTP for the Grid"
    - Grid Forum Recommendation GFD.20
    - http://www.ggf.org/documents/GWD-R/GFD-R.020.pdf

18 GridFTP: Caveats
- The protocol requires that the sending side do the TCP connect (possible firewall issues)
- Client / server
  - Currently there is no server library, so peer-to-peer type apps are VERY difficult
  - Generally needs a pre-installed server
  - Looking at a "dynamically installable" server
- The striped server is a prototype, though a heavily used and well tested prototype
  - Alpha of the new server with striping by mid-summer
  - Should be released by Fall 2004

19 Striped Server
- Multiple nodes work together and act as a single GridFTP server.
- An underlying parallel file system stores blocks of the file, usually in round-robin fashion, across all of the nodes.
- Each node then moves only the pieces of the file that it is responsible for.
- The other side then writes the file in the same way, block round-robin on a parallel file system.
- This allows multiple levels of parallelism: CPU, bus, NIC, disk, etc.
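The round-robin block layout described above can be sketched as follows. This is a minimal model for illustration; the block size and node count are arbitrary assumptions, not values taken from any GridFTP release.

```python
# Round-robin striping: block i of the file lives on node i mod NUM_NODES.
# BLOCK_SIZE and NUM_NODES are illustrative assumptions.
BLOCK_SIZE = 1 << 20   # 1 MiB
NUM_NODES = 4

def node_for_offset(offset):
    """Index of the node responsible for the block containing this offset."""
    return (offset // BLOCK_SIZE) % NUM_NODES

def blocks_for_node(node, file_size):
    """(start, end) byte ranges that a given node sends or receives."""
    ranges = []
    offset = 0
    while offset < file_size:
        if node_for_offset(offset) == node:
            ranges.append((offset, min(offset + BLOCK_SIZE, file_size)))
        offset += BLOCK_SIZE
    return ranges
```

Because each node touches a disjoint set of blocks, the per-node CPU, NIC, and disk work all shrink as nodes are added, which is the multiple-levels-of-parallelism point above.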

20

21 Extensible IO (XIO) System
- Provides a framework that implements a read/write/open/close abstraction
- Drivers are written that implement the functionality (file, TCP, UDP, GSI, etc.)
- Different functionality is achieved by building protocol stacks
- GridFTP drivers will allow third-party applications to easily access files stored under a GridFTP server
- Other drivers could be written to allow access to other data stores
- Changing drivers requires minimal change to the application code
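The driver-stack idea can be illustrated with a toy model. The class and method names below are invented for illustration; they are not the real Globus XIO C API, which works in terms of registered drivers and handles.

```python
# Toy model of an XIO-style stack: every driver exposes the same
# abstraction and passes data to the driver below it, so behavior is
# changed by restacking drivers rather than rewriting the application.
import base64

class Driver:
    def __init__(self, below=None):
        self.below = below
    def write(self, data):
        return self.below.write(data) if self.below else data

class MemoryTransport(Driver):
    """Bottom of the stack: a pretend wire that records what was sent."""
    def __init__(self):
        super().__init__()
        self.sent = b""
    def write(self, data):
        self.sent += data
        return data

class Base64Driver(Driver):
    """Transform driver, standing in for e.g. a security or compression driver."""
    def write(self, data):
        return self.below.write(base64.b64encode(data))

# Application code only ever calls write() on the top of the stack.
transport = MemoryTransport()
stack = Base64Driver(below=transport)
stack.write(b"hello")
```

Swapping `Base64Driver` for a different transform, or `MemoryTransport` for a TCP driver, changes what happens on the wire without touching the application's `write()` call, which is the point of the driver-stack design.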

22 Optical Networking

23 Optical Networking
- Will have (is having) a profound effect on the way we do computing.
- Networking speeds are increasing much faster than Moore's Law.
  - The network is no longer necessarily the weak link.
- Network bandwidth now exceeds the bus bandwidth of your computer.
  - Explode the computer out over the network.
- Current economics make it reasonable to have your own fiber.

24 Optical Networking
- What's the big deal?
  - Real QoS available
    - Can carve out an OC-12 that is all yours
  - It's all yours; use any protocol you want
    - i.e., ditch TCP
  - Optical switches are cheap
  - Bringing about a change in the business model
    - Users can request reconfiguration and get it within minutes or even seconds.

25 Optical Networking
- What does it mean to you?
  - User:
    - It should be nothing more than a service to request a reservation for a path.
    - Should be able to get guaranteed, or at least reasonable and consistent, network performance.
  - Developer:
    - For middleware or network developers, some people think the level of exposure will go down to every optical switch (a MEMS or individual path switch, not a switch in the sense of the entire box).
    - Applications can (and should) become network (or, if you like buzzwords, "photonic") aware. They can alter their behavior based on what network resources are available.

26 Optical Networking Caveat
- The following is all IMHO:
  - We will never go to a fully circuit-switched network
    - It just does not make sense for most web traffic
    - It will be treated as a premium service
  - Most people will not have the luxury of true end-to-end optical switching
    - Will often do packet switching on-site to the optical gear
    - Often it will be virtual circuits over MPLS or GMPLS
    - In those cases you still need proper traffic engineering
  - This may seem obvious, but this will only work if it is profitable.

27 Optical Networking Picture

28 Protocols

29
- Current TCP variants are great for web traffic, but not for bulk data transfer.
- Attempts to improve this fall into a few categories:
  - Evolution of AIMD
  - Rate based
  - Reliable UDP based
  - XCP

30 What's Wrong with TCP?
- It uses packet loss to signal congestion
  - Loss is not necessarily caused by congestion
  - The feedback is too coarse
  - You only know when it is too late
- Additive Increase, Multiplicative Decrease (AIMD) as its control mechanism
- It needs a buffer equal to the bandwidth-delay product
- The interaction between AIMD and the BWDP

31 AIMD
- To first order, the algorithm:
  - Begins in "slow start" by putting one packet on the network and then doubling (exponentially increasing) the number of unacknowledged packets (the congestion window, or CWND) each time an ACK is received, until it gets a congestion event
  - When there is a congestion event, enters congestion avoidance:
    - Cut the CWND in half
    - Linearly increase: one or a few packets per ACK
  - Note that the CWND size is equivalent to the max bandwidth
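The shape of that algorithm can be modeled in a few lines. This is a deliberate simplification that counts the window in packets and grows it once per RTT; it ignores ACK clocking, timeouts, and fast recovery.

```python
# Slow start doubles cwnd each RTT; a congestion event halves it;
# congestion avoidance then adds one packet per RTT.
def cwnd_after(loss_threshold, rtts_after_loss):
    cwnd = 1
    while cwnd < loss_threshold:      # slow start (exponential increase)
        cwnd *= 2
    cwnd //= 2                        # multiplicative decrease at the loss
    return cwnd + rtts_after_loss     # additive increase, 1 packet per RTT
```

For example, a flow that hits congestion at a 64-packet window drops to 32 and needs 32 more RTTs of additive increase just to get back where it was; that slow climb is what the recovery-time slide below quantifies.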

32 BWDP
- TCP is reliable and must keep a copy of every packet until the receiver acknowledges it has been received.
- Use a tank as an analogy:
  - I can keep putting water in until it is full.
  - After that, I can only put in one gallon for each gallon removed.
  - The volume of the tank is the cross-sectional area times the height.
  - Think of the bandwidth as the area and the RTT as the length of the network pipe.
```
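In the analogy above, the tank's volume is the bandwidth-delay product: the number of bytes in flight on the path at full rate, and hence the TCP buffer size needed to keep the pipe full.

```python
# Bandwidth-delay product in bytes, from bandwidth in bits/sec and RTT
# in seconds; the /8 converts bits to bytes.
def bdp_bytes(bandwidth_bps, rtt_seconds):
    return bandwidth_bps * rtt_seconds / 8

# A GigE path with 50 ms RTT holds 6,250,000 bytes (6250 KB);
# a T1 (1.544 Mb/s) with the same RTT holds under 10 KB.
```

These two numbers are exactly the buffer sizes quoted for GigE and T1 elsewhere in this talk.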

33 Recovery Time

34 Recovery Time for a Single Congestion Event
- T1 (1.544 Mb/s) with 50 ms RTT → BWDP ≈ 10 KB
  - Recovery time (1500 MTU): 0.16 sec
- GigE with 50 ms RTT → BWDP = 6250 KB
  - Recovery time (1500 MTU): 104 seconds
- GigE to Amsterdam (100 ms) → BWDP = 12,500 KB
  - Recovery time (1500 MTU): 416 seconds
- GigE to CERN (160 ms) → BWDP = 20,000 KB
  - Recovery time (1500 MTU): 1066 sec (17.8 min)
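Those recovery times follow directly from AIMD: after a single loss the window is half the bandwidth-delay product, and it regains one MTU-sized packet per RTT. A quick check:

```python
# Time to return to full rate after one congestion event:
# (BDP / 2) / MTU packets to regain, at one packet per RTT.
def recovery_time_s(bandwidth_bps, rtt_s, mtu_bytes=1500):
    bdp = bandwidth_bps * rtt_s / 8        # bytes in flight at full rate
    return (bdp / 2) / mtu_bytes * rtt_s   # one MTU regained per RTT

# GigE at 50 ms RTT -> ~104 s; GigE to CERN at 160 ms -> ~1066 s,
# matching the figures on this slide.
```

Note the RTT appears twice (once in the BDP, once as the per-packet recovery step), which is why recovery time grows with the square of the RTT.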

35 Evolution of AIMD
- Still do additive increase, multiplicative decrease, but change the rate at which you increase and decrease:
  - HighSpeed TCP (RFC ?, Sally Floyd)
    - After a set point (typically 38 packets), a table is used and more aggressive AIMD parameters are set
  - Scalable TCP
    - Increase is exponential and decrease is 0.125
  - H-TCP (Leith)
    - Uses time between congestion events to differentiate a high-speed network from a low-speed one
  - A good paper that compares these and others:
    - http://dsd.lbl.gov/DIDC/PFLDnet2004/papers/Bullot.pdf

36 Rate-Based Control
- Flow control is based on changes in RTT
  - Assume an increase in RTT means buffers in the router are filling up
- This was tried back in the '70s, but TCP won out.
- FAST TCP out of Caltech (Steven Low) uses this approach.
- So far, results look very promising
  - Good performance
  - Fairly TCP friendly
- Still TCP, so socket libraries work

37 Reliable UDP
- Send the data over UDP and use some form of retransmit for lost data:
  - Reliable UDP (RFC ?)
    - Retransmit and control on a TCP channel
  - Reliable Blast UDP (UIC/EVL/Leigh)
    - Retransmit and control on a TCP channel
    - File based (must know the number of bytes being transmitted)
  - Tsunami (IU/ANML/Wallace)
    - Retransmit and control on a TCP channel
    - File based (must know the number of bytes being transmitted)
  - SABUL/UDT (UIC/Grossman)
    - SABUL (UDP data / TCP control), UDT (UDP only)
    - General transport, not file based
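What these schemes share is the bookkeeping: blast numbered blocks over UDP, report the missing block numbers back over the control channel, and resend only those. A sketch of that logic (socket I/O omitted; the function names are invented for illustration):

```python
# Receiver side: which block numbers still need retransmission.
def missing_blocks(total_blocks, received):
    return sorted(set(range(total_blocks)) - set(received))

# Count blast rounds until every block arrives, given which blocks
# the network drops in each round.
def retransmit_rounds(total_blocks, drops_per_round):
    outstanding = set(range(total_blocks))
    rounds = 0
    for lost in drops_per_round:
        if not outstanding:
            break
        rounds += 1
        outstanding &= lost      # only blocks sent AND dropped remain
    return rounds, sorted(outstanding)
```

The file-based variants above need the total block count up front precisely so the receiver can compute the missing set; UDT's stream orientation removes that restriction.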

38 Reliable UDP
- Issues:
  - TCP friendliness
    - Only SABUL/UDT attempts to back off in the face of congestion
    - This means all the others will shut out TCP flows
    - This is why UDT went to UDP control; the others strangle their own control / retransmit flows
  - Standardization
  - Some routers won't let UDP through
- A great choice over an optical circuit, though
- Can't use the socket libraries
  - Can use XIO

39 XCP
- XCP (Katabi / MIT)
  - Generalization of Explicit Congestion Notification (ECN)
  - Decouples utilization control from fairness control
  - Based on control systems theory
  - Requires router modifications, unfortunately
- Again, IMHO:
  - This is the way to go
  - You need real feedback and a proper control system

40 Firewalls

41 What Security People Would Like
[Diagram: Client connected to a GridFTP server across "The NETWORK"]

42 What Application People Want
[Diagram: Client connected to a GridFTP server by a big, fat pipe]

43 Where Are We at Today?
- Applications often can't run, and if they are high-bandwidth apps, the firewall often limits performance.
- Today, it means manual configuration to open holes, but then they stay open too long, and anyone can exploit them.
- We need something better…
- We need to open holes in the firewall:
  - Only while absolutely necessary
  - For a specific party
  - With confidence that the use of that port falls within authorized behavior

44 How Can We Do That?
- We envision two services:
  - A service proxy
    - Intercepts incoming service requests (SOAP, in Web services)
    - Validates / authorizes the request
    - A pluggable framework, so it can be easily extended
    - Once authorized, forwards the request to the service
  - A secure, dynamic firewall "automatic garage door opener"
    - Temporarily opens holes through the firewall
    - Uses lifetime management to ensure the holes close
    - Ideally can specify the exact host and service that will connect
    - Possibly no monitoring of packets after that?
    - But it could work with an IDS, if it were fast enough
- The two could, in theory, be used separately
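The lifetime-management part of the "garage door opener" could look like the sketch below. The class, its rule store, and keying holes by (host, port) are all assumptions for illustration; a real opener would drive the actual firewall rule table rather than a dictionary.

```python
import threading
import time

# Hypothetical opener: each hole is opened for one specific peer and
# carries a lease; a timer closes the hole when its lifetime expires,
# so a forgotten hole cannot stay open indefinitely.
class FirewallOpener:
    def __init__(self):
        self._holes = {}   # (host, port) -> expiry timestamp

    def open_hole(self, host, port, lifetime_s):
        """Open a hole for one peer, closing automatically after lifetime_s."""
        self._holes[(host, port)] = time.time() + lifetime_s
        t = threading.Timer(lifetime_s, self.close_hole, args=(host, port))
        t.daemon = True
        t.start()

    def close_hole(self, host, port):
        self._holes.pop((host, port), None)

    def is_open(self, host, port):
        expiry = self._holes.get((host, port))
        return expiry is not None and time.time() < expiry
```

Checking the stored expiry in `is_open`, in addition to running the timer, means a hole reads as closed the instant its lease runs out even if the timer has not fired yet.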

45 A Picture
[Diagram: the client sends a request to the proxy; the proxy forwards the authorized request to the service and tells the opener to open the firewall for "n" minutes]

46 Issues
- You might notice the picture does not show who talks to the opener; this has significant security and effort impacts:
  - The proxy?
  - The service?
  - The client?
- Stability / security of plug-ins
- What about non-SOAP requests?
- Would this be secure enough to let the fat pipe run unmonitored?

47 Fusion Collaboratory

48 Even with Optical Paths…

