1
High Performance Data Transfer on High Bandwidth-Delay Product Networks
Masaki Hirabaru, Internet Architecture Group
GL Meeting, March 19, 2004
2
VLBI (Very Long Baseline Interferometry) (from the CRL Kashima space radio applications group web page)
3
Motivations
- MIT Haystack – CRL Kashima e-VLBI experiment on August 27, 2003, to measure UT1-UTC within 24 hours:
  - 41.54 GB, CRL => MIT at 107 Mbps (~50 min); 41.54 GB, MIT => CRL at 44.6 Mbps (~120 min)
  - RTT ~220 ms; UDP throughput 300-400 Mbps, yet TCP only ~6-8 Mbps (per session, tuned)
  - BBFTP with 5 x 10 TCP sessions was used to regain performance
- HUT – CRL Kashima Gigabit VLBI experiment:
  - RTT ~325 ms; UDP throughput ~70 Mbps, yet TCP ~2 Mbps (as is), ~10 Mbps (tuned)
  - NetAnts (5 TCP sessions with an FTP stream-restart extension)
These applications need high-performance transfer of huge data sets: high-speed, real-time, reliable, and long-haul. (The window arithmetic behind the UDP/TCP gap is sketched below.)
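The UDP-versus-TCP gap above follows from TCP's window-limited throughput bound (this check is my addition, not from the slides; the 64 KB figure is the classic default socket-buffer limit, an assumption, and any loss pushes real TCP throughput lower still):

\[
\text{throughput} \le \frac{W}{\mathrm{RTT}}, \qquad
\frac{64\ \text{KB} \times 8}{220\ \text{ms}} \approx 2.4\ \text{Mbps},
\]

while filling the measured 300-400 Mbps UDP path at 220 ms RTT needs a sustained window of roughly \(400\ \text{Mbps} \times 0.22\ \text{s} \approx 11\ \text{MB}\).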
4
Purpose
Measure, analyze, and improve end-to-end performance on high bandwidth-delay product networks:
- to support networked science applications
- to help operations find a bottleneck
- to evaluate advanced transport protocols (e.g. Tsunami, SABUL, HSTCP, FAST, XCP, ikob)
Improve TCP under easier conditions.
5
Assumptions
- Packet-switching network: shared, best-effort
- End-to-end principle: no hard state

Contents
- Advanced TCP evaluation on TransPAC / Internet2
- Advanced TCP evaluation in the laboratory
- Research topics in 2004
6
Network Diagram for TransPAC/I2 Measurement (Oct. 2003)
[Network map: the measurement path runs Kashima (0.1G access) – Tokyo XP – TransPAC (2.5G, ~9,000 km) – Los Angeles – Abilene (10G, via Chicago, Indianapolis, New York) – MIT Haystack. Branches reach KOREN in Korea (Seoul XP, Daejon, Taegu, Busan, Kwangju; 2.5G SONET) via Genkai XP/Fukuoka on APII/JGN, and HUT Helsinki (~7,000 km) via GEANT/NorduNet/Funet (0.6-2.4G). Servers (general and e-VLBI) sit at the edges; Abilene Observatory: servers at each NOC; CMM: common measurement machines.]
- Sender: Mark5, Linux 2.4.7 (RH 7.1), P3 1.3 GHz, 256 MB memory, GbE SK-9843
- Receiver: PE1650, Linux 2.4.22 (RH 9), Xeon 1.4 GHz, 1 GB memory, GbE Intel Pro/1000 XT
- Iperf UDP: ~900 Mbps (no loss)
7
TransPAC/I2 #2: HighSpeed TCP (60 min) [throughput plot]
8
Evaluating Advanced TCPs
- New Reno (Linux TCP, web100 version)
  - ACK: w = w + 1/w; loss: w = w - (1/2)w
- HighSpeed TCP (included in web100)
  - ACK: w = w + a(w)/w; loss: w = w - b(w)*w (a, b from the HSTCP response table; a = 1, b = 1/2 reduces to Reno)
- FAST TCP (binary, provided by Caltech)
  - w = (1/2)(w_old * baseRTT/avgRTT + alpha + w_current)
- Limited slow-start (included in web100)
Note: the differences are on the sender side only.
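As a reading aid (my addition, not on the slides), the window-update rules above in executable form. a_fn and b_fn are passed in as placeholders for the RFC 3649 response tables, and the FAST line transcribes the slide's formula (equivalent to FAST with gamma = 1/2):

    # Per-ACK / per-loss congestion-window updates, windows in packets.
    # a_fn/b_fn stand in for the RFC 3649 HighSpeed TCP response tables.

    def reno_ack(w):                 # congestion avoidance: +1 packet per RTT
        return w + 1.0 / w

    def reno_loss(w):                # halve the window on loss
        return w - 0.5 * w

    def hstcp_ack(w, a_fn):          # a(w) grows with w: faster growth at large w
        return w + a_fn(w) / w

    def hstcp_loss(w, b_fn):         # b(w) shrinks with w: gentler backoff at large w
        return w - b_fn(w) * w

    def fast_update(w_old, w_current, base_rtt, avg_rtt, alpha):
        # Delay-based update from the slide; at equilibrium it keeps
        # about alpha packets queued in the path.
        return 0.5 * (w_old * base_rtt / avg_rtt + alpha + w_current)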
9
Path
[Diagram: sender – access link (B2) – backbone (B1) – access link (B3) – receiver]
a) Without a bottleneck queue: B1 <= B2 and B1 <= B3
b) With a bottleneck (congestion) queue: B1 > B2 or B1 > B3; the queue builds at the bottleneck link
10
Test in a Laboratory – with Bottleneck
- Setup: sender (PE2650, GbE/SX) – L2 switch (12GCF) – PacketSphere emulator – receiver (PE1650, GbE/T)
- Emulated bottleneck: bandwidth 800 Mbps, buffer 256 KB, delay 88 ms, loss 0
- Tests: #4 Reno => Reno; #5 HighSpeed TCP => Reno; #6 FAST TCP => Reno
- 2*BDP = 16 MB (checked below)
- #4, #5: data obtained on the sender; #6: data obtained on the receiver
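As a sanity check on the 2*BDP figure (my addition):

\[
\mathrm{BDP} = 800\ \text{Mbps} \times 88\ \text{ms} = 7.04 \times 10^{7}\ \text{bits} \approx 8.8\ \text{MB} \approx 8.4\ \text{MiB},
\]

so \(2 \times \mathrm{BDP} \approx 16.8\ \text{MiB}\), matching the slide's 16 MB. A socket buffer of roughly this size lets the sender keep the 800 Mbps path full even while recovering from a halved window.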
11
Laboratory #4, #5, #6: 800 Mbps bottleneck [throughput plots: Reno, HighSpeed, FAST (default settings)]
12
Laboratory #5: HighSpeed TCP (limiting) [throughput plots, curve labels:]
- Window size (16 MB)
- Rate control: cwnd clamp, 270 us every 10 packets
- With limited slow-start (1000) (95%)
- With limited slow-start (100)
- With limited slow-start (1000)
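For reference (my addition), limited slow-start as specified in RFC 3742; I am assuming the slide's (100) and (1000) labels are max_ssthresh values in packets:

    # Limited slow-start (RFC 3742), window in packets. Caps slow-start
    # growth at ~max_ssthresh/2 packets per RTT once cwnd exceeds
    # max_ssthresh, avoiding the huge loss bursts of pure doubling on LFNs.

    def limited_slow_start_ack(cwnd, max_ssthresh):
        if cwnd <= max_ssthresh:
            return cwnd + 1                      # standard slow start
        k = int(cwnd / (0.5 * max_ssthresh))     # k >= 2 beyond max_ssthresh
        return cwnd + 1.0 / k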
13
Laboratory #6: FAST TCP (alpha=17, beta=18, gamma=2) [throughput plot]
14
Research Topics in 2004
[Diagram: sender – 1G – bottleneck router – 100M – receiver, with the ACK path adding ~100 ms delay; a 100 x 1 KB = 100 KB packet train takes ~1 ms at Gbps rate; throughput-vs-window-size plot]
- Parameter auto-tuning (with packet pair/train) (see the sketch after this list)
- TCP analysis web tool with web100
- Router queue length: RED on the bottleneck; signal from the bottleneck
- Mobile TCP (mobile router support)
- Multi-homing TCP (ID + locator)
- I2 / piPEs collaboration (bwctl)
- APII / CMM collaboration with Korea
- Real-time Gbps e-VLBI experiment between CRL Kashima and MIT Haystack
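A minimal sketch (my addition) of the packet-pair idea behind the auto-tuning item: packets sent back-to-back leave the bottleneck spaced by one packet's serialization time there, so the receiver-side dispersion estimates bottleneck capacity. The function name and values are illustrative; real tools average many pairs or use trains:

    # Packet-pair capacity estimate: capacity ~= packet_size / dispersion.

    def packet_pair_estimate(t1, t2, packet_bytes):
        dispersion = t2 - t1                      # receiver-side arrival gap, s
        return packet_bytes * 8 / dispersion      # bits per second

    # 1 KB packets arriving 80 us apart => 1e8 bps = 100 Mbps bottleneck.
    print(packet_pair_estimate(0.0, 80e-6, 1000))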