1
End-to-End Provisioned Network Testbed for eScience
Team: Malathi Veeraraghavan (University of Virginia); Nagi Rao, Bill Wing, Tony Mezzacappa (ORNL); John Blondin (NCSU); Ibrahim Habib (CUNY)
NSF EIN: Experimental Infrastructure Network
Project: Jan – Dec. 2007
Grant: $3.5M
2
Project goals
Develop the infrastructure and networking technologies to support a broad class of eScience projects, and specifically the Terascale Supernova Initiative (TSI):
Optical network testbed
Transport protocols
Middleware and applications
3
Provide rate-controlled connectivity
[Figure: computers and storage systems interconnected by the network]
Shouldn't a "network" be able to provide connectivity between two nodes at some requested bandwidth level?
From the application's perspective, is there value in such a network? Answer: yes!
4
Long way to go!
Network switches are available off-the-shelf with the capability to provide bandwidth on demand
Is it sufficient to just buy these and hook them together? Answer: no!
Implement a "socket" to enable applications to request bandwidth on demand and release it when done
Need to integrate this "socket" with applications
Compare bandwidth to a computer:
You can't stop at giving a scientist a computer; you need to give scientists implemented applications (a toolkit) that let them use the computer
The same holds for bandwidth
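A minimal sketch of the kind of bandwidth-on-demand "socket" wrapper the slide envisions: the names (request_circuit, release_circuit, circuit_socket) and the signaling mechanism are hypothetical illustrations, not the project's actual API.

```python
# Hypothetical bandwidth-on-demand "socket" wrapper (illustration only).
import socket
from contextlib import contextmanager

def request_circuit(dst_host: str, rate_mbps: int) -> str:
    """Placeholder for circuit signaling: ask the network for a dedicated
    end-to-end circuit to dst_host at rate_mbps; returns a circuit ID."""
    # A real implementation would contact a signaling entity (e.g. a UNI
    # client on the MSPP); here we simply pretend the request succeeded.
    return f"circuit-{dst_host}-{rate_mbps}Mbps"

def release_circuit(circuit_id: str) -> None:
    """Placeholder: tear the circuit down when the application is done."""
    pass

@contextmanager
def circuit_socket(dst_host: str, dst_port: int, rate_mbps: int):
    """Open a connection that rides on a freshly provisioned circuit and
    release the circuit when the application closes the socket."""
    circuit_id = request_circuit(dst_host, rate_mbps)
    sock = socket.create_connection((dst_host, dst_port))
    try:
        yield sock
    finally:
        sock.close()
        release_circuit(circuit_id)

# Usage sketch: the application asks for 1000 Mbps for one transfer.
# with circuit_socket("remote.example.org", 5000, 1000) as s:
#     s.sendall(b"data...")
```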
5
Long way to go (contd.)!
Test the applications, with the integrated BW-on-demand "socket", on a lab BW-on-demand network testbed
Finally, "take" the network wide-area
6
Project concept
Network: CHEETAH (Circuit-switched High-Speed End-to-End Transport ArcHitecture)
Create a network that offers an end-to-end BW-on-demand service
Make it a PARALLEL network to existing high-speed IP networks – NOT AN ALTERNATIVE!
Transport protocols:
Designed to take advantage of dual end-to-end paths: the IP path and the end-to-end circuit
TSI applications:
High-throughput file transfers
Interactive applications such as remote visualization
Remote computational steering
Multipoint collaboration
7
Network specifics
Circuit: high-speed Ethernet mapped to an Ethernet-over-SONET (EoS) circuit
Leverage existing strengths:
100 Mbps / 1 Gbps Ethernet in LANs
SONET in MANs/WANs
Availability of Multi-Service Provisioning Platforms (MSPPs):
Can map Ethernet to Ethernet-over-SONET
Can be crossconnected dynamically
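A back-of-the-envelope for the Ethernet-to-EoS mapping that appears later in the deck ("GbE/21 OC-1 circuit"), using nominal SONET payload rates and ignoring GFP/VCAT overheads; a sketch, not a provisioning rule.

```python
# Why a GbE stream maps to 21 virtually concatenated STS-1s (STS-1-21v).
import math

OC1_LINE_RATE_MBPS = 51.84     # STS-1/OC-1 line rate
STS1_PAYLOAD_MBPS = 49.536     # nominal STS-1 payload capacity (SPE minus path overhead)
GBE_RATE_MBPS = 1000.0         # Gigabit Ethernet

n_sts1 = math.ceil(GBE_RATE_MBPS / STS1_PAYLOAD_MBPS)
print(f"STS-1s needed for GbE: {n_sts1}")                        # -> 21
print(f"Circuit payload: {n_sts1 * STS1_PAYLOAD_MBPS:.1f} Mbps")  # -> ~1040 Mbps
print(f"SONET capacity consumed: {n_sts1 * OC1_LINE_RATE_MBPS:.1f} Mbps")
```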
8
Dynamic circuit sharing
[Figure: PCs 1–4 attached to MSPPs; the MSPPs connect through SONET crossconnects (XCs) with UNI-N/NNI signaling; on a bandwidth request, an Ethernet/EoS circuit is set up across this parallel circuit-based testbed, alongside the Internet path]
Setup steps:
Route lookup
Resource availability checking and allocation
Program the switch fabric for the crossconnection
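A toy sketch of the three setup steps listed above (route lookup, resource check and allocation, crossconnect programming). The topology, routes, capacities, and function names are illustrative only, not the testbed's actual signaling code.

```python
# Toy circuit setup: route lookup -> admission check -> crossconnect programming.
TOPOLOGY = {
    # link -> available capacity in Mbps (made-up values, e.g. OC-48-class links)
    ("MSPP-A", "XC-1"): 2488,
    ("XC-1", "XC-2"): 2488,
    ("XC-2", "MSPP-B"): 2488,
}
ROUTES = {("PC1", "PC3"): ["MSPP-A", "XC-1", "XC-2", "MSPP-B"]}

def setup_circuit(src, dst, rate_mbps):
    # Step 1: route lookup
    path = ROUTES.get((src, dst))
    if path is None:
        return None
    links = list(zip(path, path[1:]))
    # Step 2: resource availability check, then allocation
    if any(TOPOLOGY[link] < rate_mbps for link in links):
        return None                       # block the request: insufficient capacity
    for link in links:
        TOPOLOGY[link] -= rate_mbps
    # Step 3: program the switch fabric (here, just report the crossconnects)
    for a, b in links:
        print(f"crossconnect {a} <-> {b} at {rate_mbps} Mbps")
    return links

circuit = setup_circuit("PC1", "PC3", 1000)
```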
9
TSI application: construct local visualization environment
Added 6 cluster nodes, expanded RAID to 1.7 TB
Installed a dedicated server for network monitoring
Began constructing the visualization cluster
Wrote software to distribute data on the cluster
Supernova science:
Generated a TB data set on the Cray at ORNL
Tested an ORNL/NCSU collaborative visualization session
John M. Blondin
10
LAN and WAN testing
[Figure: ORNL (27-tile display wall, SGI Altix; operational April 1) and NC State (6-panel LCD display, Linux cluster; operational March 1), with the same 1 TB supernova model on disk at both NCSU and ORNL]
Currently testing visualization on the Altix + cluster using single-screen graphics
John M. Blondin
11
Applications we will upgrade for the TSI project
To enable scientists to enjoy the rate-controlled connectivity of the CHEETAH network:
GridFTP
Visualization tool: EnSight or Aspect/ParaView
12
Transport protocol: file transfers
Tested various rate-based transport solutions: SABUL, UDT, Tsunami, RBUDP
Testbed: two Dell 2.4 GHz PCs with 100 MHz 64-bit PCI buses, connected directly to each other via a GbE link
Emulates a dedicated GbE-EoS-GbE link
Disk bottleneck: IDE 7200 rpm disks
Why rate-based? Not for congestion control (not needed once the circuit is set up), but for flow control
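A minimal sketch of the rate-based (paced) UDP sending idea behind tools such as SABUL, UDT, Tsunami, and RBUDP on a dedicated circuit: packets are clocked out at a configured rate rather than governed by TCP congestion control. The destination, rate, and payload size below are illustrative, not values from the slides.

```python
# Paced UDP sender sketch: space datagrams evenly to hit a target bit rate.
import socket, time

def paced_send(data: bytes, dst=("127.0.0.1", 9000), rate_mbps=800, payload=1400):
    """Send `data` over UDP at roughly rate_mbps by pacing datagrams."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = (payload * 8) / (rate_mbps * 1e6)   # seconds between datagrams
    next_tx = time.monotonic()
    for off in range(0, len(data), payload):
        sock.sendto(data[off:off + payload], dst)
        next_tx += interval
        delay = next_tx - time.monotonic()
        if delay > 0:                              # real tools use finer-grained timers
            time.sleep(delay)
    sock.close()

# paced_send(b"\x00" * 10_000_000)   # ~10 MB at ~800 Mbps to a local receiver
```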
13
Rate-based flow control
Receive-buffer overflows: a necessary evil
Either play it safe and set a low rate, avoiding/eliminating receive-buffer losses,
or send data at higher rates but recover from the resulting losses
(Experiment parameters: MTU = 1500 B, UDP buffer size = 256 KB, SABUL data block size = 7.34 MB)
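Rough arithmetic behind the tradeoff: with the 256 KB UDP receive buffer quoted on the slide, even a small mismatch between sending rate and the receiver's drain rate fills the buffer in tens of milliseconds. The 1000 and 900 Mbps figures are illustrative assumptions, not project measurements.

```python
# How fast a 256 KB receive buffer overflows under a rate mismatch (sketch).
BUFFER_BYTES = 256 * 1024        # UDP receive buffer (value from the slide)
send_rate_mbps = 1000.0          # assumed sending rate
drain_rate_mbps = 900.0          # assumed application/disk drain rate

surplus_bps = (send_rate_mbps - drain_rate_mbps) * 1e6   # bits/s accumulating
time_to_overflow = (BUFFER_BYTES * 8) / surplus_bps       # seconds
print(f"Buffer overflows after ~{time_to_overflow * 1000:.1f} ms")   # ~21 ms

# Hence the choice: cap the rate near the worst-case drain rate (safe but
# leaves circuit bandwidth idle), or send faster and retransmit the losses.
```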
14
Oak Ridge National Laboratory
PIs: Nageswara S. V. Rao, Anthony Mezzacappa, William R. Wing
Overall project task: develop protocols and application interfaces for the interactive visualization and computational steering tasks of the TSI eScience application
20 Mbps on the ORNL-GaTech IP connection
On-going activities:
Stabilization protocols for visualization control streams
Developed and tested stochastic approximation methods for implementing stable application-to-application streams
Modularization and channel-separation framework for visualization
Developed an architecture for decomposing the visualization pipeline into modules, measuring effective bandwidths, and mapping them onto the network
Dedicated channel testbed
Set up a dedicated ORNL-ATL-ORNL GigE-SONET channel
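To illustrate the general idea of stochastic approximation for stabilizing an application-to-application stream: adjust the sending rate from noisy end-to-end measurements using diminishing gains, so the rate settles at a stable operating point. This is only a Robbins-Monro-style sketch under assumed numbers; it is not ORNL's actual protocol, and measure_goodput() is a stand-in for a real measurement.

```python
# Robbins-Monro-style rate stabilization sketch (illustrative, not ORNL's method).
import random

def measure_goodput(rate_mbps: float) -> float:
    """Stand-in for a noisy end-to-end measurement of received goodput."""
    capacity = 950.0                                   # assumed bottleneck (Mbps)
    return min(rate_mbps, capacity) * random.uniform(0.95, 1.0)

def stabilize(target_mbps=800.0, rate=400.0, steps=100):
    """Nudge the sending rate until noisy goodput measurements settle around
    the target, using diminishing gains a_k = 1/k."""
    for k in range(1, steps + 1):
        goodput = measure_goodput(rate)
        a_k = 1.0 / k
        rate = max(1.0, rate + a_k * (target_mbps - goodput))
    return rate

print(f"Stabilized sending rate: {stabilize():.0f} Mbps")
```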
15
Planned activities
Control channel protocols for interactive visualization and computation:
Develop and test stochastic approximation methods on dedicated channels
Implementation of modularized and channel-separated visualization:
Automatic decomposition and mapping of the visualization pipeline onto dedicated channels
Integrate control channel modules
Develop rate controllers to avoid channel overflows
Dedicated ORNL-ATL-ORNL connection over a GigE-SONET channel:
Test protocols and visualizations
Integration with the TSI visualization application:
ORNL and NCSU visualization clusters; the visualization pipeline will be automatically distributed among the channels (visualization channel, control channel) and nodes
16
Taking it wide-area: three possible approaches
Collocate high-speed circuit switches at POPs and lease circuits from a commercial service provider or NLR
Use MPLS tunnels through Abilene
Collocate switches at Abilene POPs and share router links – after thorough testing
17
Router-to-router link leverage: UVA/CUNY
[Figure: Abilene backbone (ATLA, WASH) with OC-192 backbone links and OC-48 links toward SOX, ORNL, and NCSU/NCNI; on a setup request (1 Gbps), a GbE/21-OC-1 circuit is established through the Cisco routers and released when done; the shared router link is rate-limited to 27 OC-1s]
UVA equipment award from Internet2/Cisco: two high-end Cisco routers
Solutions: link bundling; rate limiting
Question: what is the impact of such link rate reductions on ongoing TCP flows?
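A back-of-the-envelope for the "rate limit to 27 OC-1s" annotation: if a GbE circuit occupies 21 OC-1s of a shared router link, IP traffic can be rate-limited to what remains. The assumption that the shared link is an OC-48 (48 OC-1 equivalents) is an inference from the figure, not an explicit statement on the slide.

```python
# Remaining capacity on a shared OC-48 link after a 21-OC-1 GbE circuit is set up.
OC1_MBPS = 51.84
OC48_IN_OC1 = 48          # an OC-48 carries 48 OC-1 equivalents (assumed link size)
CIRCUIT_OC1 = 21          # GbE mapped to 21 OC-1s (STS-1-21v)

remaining_oc1 = OC48_IN_OC1 - CIRCUIT_OC1
print(f"Remaining for IP traffic: {remaining_oc1} OC-1s "
      f"(~{remaining_oc1 * OC1_MBPS:.0f} Mbps)")   # -> 27 OC-1s, ~1400 Mbps
```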
18
Summary
Implement the piece parts needed for TSI scientists to take advantage of rate-guaranteed channels
Demonstrate these modified applications on a local-area, dynamically shared, high-speed circuit-switched network
Take it to the wide area