1
PingER: Methodology, Uses & Results
Les Cottrell (SLAC), Warren Matthews (GATech)
Extending the Reach of Advanced Networking: Special International Workshop, Arlington, VA, April 22, 2004
This presentation introduces the methodology used by the PingER project to measure end-to-end Internet performance. We then illustrate the use of PingER to show overall Internet performance trends and differences to most regions of the world over the last 9 years. This is followed by specific illustrations of how PingER has been used to inform policy decisions, and the results of those decisions. We conclude with some of the challenges and the overall state of Internet end-to-end performance across the Digital Divide.
Partially funded by the DOE/MICS Field Work Proposal on Internet End-to-end Performance Monitoring (IEPM); also supported by IUPAP.
2
Outline
What is PingER
World Internet performance trends
Regions and the Digital Divide
Examples of use
Challenges
Summary of uses
3
Methodology: use the ubiquitous ping
Every 30 minutes, from each monitoring site to each target: 1 ping to prime caches; by default send 11 x 100-byte packets followed by 10 x 1000-byte packets
Low network impact + no software to install/configure/maintain at remote sites + no passwords/accounts needed = good for developing sites/regions
Record loss & RTT (+ reorders, duplicates); derive throughput, jitter, unreachability … (see the measurement-cycle sketch below)
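A hedged sketch of one such measurement cycle, not the actual PingER code: the ping flags are Unix-style, the packet sizes approximate the slide's "100-byte/1000-byte" defaults, and example.org is just an illustrative target.

```python
import re
import subprocess
import time

def ping_burst(host, count, size):
    """Send `count` pings with `size`-byte payloads (Unix-style flags);
    return (loss_fraction, list_of_rtts_ms)."""
    out = subprocess.run(
        ["ping", "-c", str(count), "-s", str(size), host],
        capture_output=True, text=True).stdout
    rtts = [float(m) for m in re.findall(r"time=([\d.]+)", out)]
    return 1 - len(rtts) / count, rtts

def measure(host):
    """One PingER-style cycle, following the slide's defaults:
    1 priming ping, then 11 x 100-byte and 10 x 1000-byte packets."""
    ping_burst(host, 1, 100)                          # prime caches/route; result discarded
    return {"100B": ping_burst(host, 11, 100),        # 11 x 100-byte packets
            "1000B": ping_burst(host, 10, 1000)}      # 10 x 1000-byte packets

if __name__ == "__main__":
    while True:                        # every 30 minutes, per the slide
        print(measure("example.org"))  # illustrative target host
        time.sleep(30 * 60)
```

From the recorded loss and RTT, quantities such as derived throughput and jitter can then be computed offline, which is what keeps the per-probe network impact low.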
4
Architecture: hierarchical rather than full mesh
[Architecture diagram: ~35 monitoring hosts each ping a share of ~550 remote hosts (one measurement per monitor-remote host pair); archive hosts at SLAC and FNAL gather the data from the monitoring sites over HTTP (via a cache) and publish ping reports and data on the web.]
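A hedged sketch of the archive-side data flow just described; this is not the real PingER archive code, and the URL layout and file names are invented for illustration. The point is that only the archive pulls from the ~35 monitoring hosts over HTTP, rather than every site probing every other site (full mesh).

```python
import urllib.request

# Hypothetical catalogue of monitoring-site data URLs; the real archive
# maintains its own list of monitoring hosts and their published files.
MONITORING_SITES = [
    "http://monitor1.example.edu/pinger/data.txt",
    "http://monitor2.example.org/pinger/data.txt",
]

def harvest(sites):
    """Archive-side collection: fetch each monitoring host's data over HTTP."""
    collected = {}
    for url in sites:
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                collected[url] = resp.read().decode("utf-8", "replace")
        except OSError as exc:
            # Sites whose data is unreachable get followed up manually
            # (see the Challenges slide).
            collected[url] = f"UNAVAILABLE: {exc}"
    return collected
```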
5
Regions Monitored
Monitoring sites in ~35 countries
Recently added NIIT (PK) as a monitoring site
White = no host monitored in that country; colors indicate regions
Also have affinity groups (VOs), e.g. AMPATH, Silk Road, CMS, XIWT, and can select multiple groups
Worksheet: \\Zwinsan2\c\cottrell\regions-mapland.xls
6
World Trends: increase in sites with Good (<1%) loss
25% increase in sites monitored
Big focus on Africa: 4 => 19 countries
Silk Road (graph annotation)
Spreadsheet: \cottrell\iepm\world-loss-quality.xls
7
Trends S.E. Europe, Russia: catching up
Latin America, Mid East, China: keeping up
India, Africa: falling behind
Derived throughput ~ MSS / (RTT * sqrt(loss)) (worked example below)
Graph annotations: Silk Road, NaukaNet/Gloriad, AMPATH
Spreadsheet: \cottrell\iepm\esnet-to-all-longterm.xls
CERN data only goes back to Aug-01. It confirms that S.E. Europe & Russia are catching up, and that India & Africa are falling behind.
Note: for Africa there was originally only one host, in Uganda. We have actually been adding hosts (now 5 countries), but there is considerable disparity in performance, so as hosts from less developed countries are added, the aggregate performance measured to Africa is dropping! Ghana, Nigeria and Uganda are all satellite links with long RTTs. The losses to Ghana & Nigeria are 8-12%, while to Uganda they are 1-3%. The routes are different: from SLAC to Ghana via ESnet-Worldcom-UUNET, to Nigeria via CalREN-Qwest-Telianet-New Skies satellite, and to Uganda via ESnet-Level3-Intelsat. For both Ghana and Nigeria there are no losses (over 100 pings) until the last hop, where over 40 of 100 packets were lost. For Uganda the losses (3 in 100 packets) also occur at the last hop.
Worksheets: for trends \\Zwinsan2\c\cottrell\iepm\esnet-to-all-longterm.xls; for Africa \\Zwinsan2\c\cottrell\iepm\africa.xls
[Traceroute listings to the Ghana host, asoju.oauife.edu.ng (Nigeria) and mail2.starcom.co.ug (Uganda) lost their IP addresses and per-hop times in extraction. The recoverable points: the Ghana route runs SLAC via ESnet and UUNET/AlterNet (through Copenhagen) to the satworks.gw.dk.uu.net satellite gateway, with ~43% loss on 100 pings at the last hop and none before; the Nigeria route runs SLAC via CalREN, Qwest, Telia and New Skies satellite, with ~44% loss on 100 pings at the last hop only; the Uganda route runs SLAC via ESnet, Level3 and Intelsat (globalconnex.net), with ~3% loss for both 100- and 1400-byte packets, again at the last hop.]
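To make the derived-throughput (Mathis-style) formula above concrete, here is a minimal worked sketch. It is not PingER code; the 600 ms RTT is an assumed satellite-like value (the slide's RTT figures were lost), and the 10% loss is in the range quoted for Ghana/Nigeria.

```python
import math

def derived_throughput_kbps(mss_bytes, rtt_ms, loss_fraction):
    """Mathis-style TCP throughput estimate: throughput ~ MSS / (RTT * sqrt(loss))."""
    rtt_s = rtt_ms / 1000.0
    bytes_per_s = mss_bytes / (rtt_s * math.sqrt(loss_fraction))
    return bytes_per_s * 8 / 1000.0   # convert to kbits/s

# Illustrative numbers only: a 1460-byte MSS, 600 ms RTT and 10% loss
# give roughly 62 kbits/s - well below even the "Bad" (<200 kbits/s) category.
print(round(derived_throughput_kbps(1460, 600, 0.10)))
```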
8
Current State – Aug ‘03 thruput ~ MSS / (RTT * sqrt(loss))
Worksheet: \\zwinsan2\c\cottrell\iepm\table-thru-aug03.xls
Within-region performance is better, e.g. Ca|EDU|GOV-NA, Hu-S.E. Eu, Eu-Eu, Jp-E Asia, Au-Au, Ru-Ru|Baltics
Africa, Caucasus, Central & S. Asia are all bad
Derived-throughput categories:
  Bad: < 200 kbits/s (below DSL)
  Poor: 200-500 kbits/s
  Acceptable: 500-1000 kbits/s
  Good: > 1000 kbits/s
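A minimal sketch (not a PingER tool; the function name is ours) that maps a derived-throughput figure onto the categories above:

```python
def quality_category(throughput_kbps):
    """Map a derived throughput (kbits/s) onto the slide's categories."""
    if throughput_kbps < 200:
        return "Bad"          # below typical DSL
    if throughput_kbps < 500:
        return "Poor"
    if throughput_kbps < 1000:
        return "Acceptable"
    return "Good"

print(quality_category(62))   # e.g. the satellite example above -> "Bad"
```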
9
Examples of Use
Need for constant upgrades
Upgrades
Filtering
Pakistan
10
Usage Examples Identify need to upgrade and effects
BW increase by a factor of 300: from 2 Mbps to 622 Mbps in 6 years
Multiple sites track one another (gives a rationale for beacons); improvements around Xmas, summer and end-of-year holidays = students away (most sites are universities)
Selecting ISPs for DSL/cable services for home users
Monitoring accessibility of routers etc. from a site: long-term behavior and changes
Trouble shooting:
  Identifying whether a reported problem is probably network related
  Identifying when it started and whether it is still happening or has been fixed
  Looking for patterns: step functions; periodic behavior, e.g. due to congestion; multiple sites with simultaneous problems, e.g. a common problem link/router … (see the step-change sketch below)
Provide quantitative information to ISPs
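As one hedged illustration of hunting for the "step function" pattern mentioned above (this is not PingER's analysis code; the window and jump threshold are arbitrary choices), compare the median loss before and after each point of a loss time series:

```python
from statistics import median

def find_step_changes(loss_pct, window=48, min_jump=2.0):
    """Flag indices where the median packet loss (%) shifts by >= min_jump
    between the preceding and following `window` samples
    (48 half-hourly PingER samples ~= one day)."""
    steps = []
    for i in range(window, len(loss_pct) - window):
        before = median(loss_pct[i - window:i])
        after = median(loss_pct[i:i + window])
        if abs(after - before) >= min_jump:
            steps.append((i, before, after))
    return steps

# Toy series: loss jumps from ~0.5% to ~6% halfway through.
series = [0.5] * 100 + [6.0] * 100
steps = find_step_changes(series)
print(steps[0][0], steps[-1][0])   # a band of flagged samples straddling the jump at index 100
```

Periodic (e.g. daily congestion) behavior and simultaneous steps at multiple sites would need further checks, but a step detector like this is enough to timestamp an event for reporting to an ISP NOC.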
11
Russia Examples
Russian losses improved by a factor of 5 in the last 2 years, due to multiple upgrades
E.g. upgrade of the KEK-BINP link from 128 kbps to 512 kbps in May '02 (funded by KEK, BINP and US DoE): loss improved from a few % to ~0.1%; little change in RTT, big improvement in loss
Shows the importance of monitoring
Spreadsheets: \\Zwinsan2\c\cottrell\iepm\russia-sep03.xls; S:\www\grp\scs\net\papers\ictp\binp-may02.xls
12
Usage Examples: peering problems took a long time to identify/fix
To North America: TEN-155 became operational on December 11; Smurf filters installed on NORDUnet's US connection (upgrades & ping filtering)
To Western Europe: PingER identifies the time of occurrence, so it can be reported to ISP NOCs; the peering problems still took a long time to identify/fix
13
Pakistan Example
Big performance differences between sites, depending on the ISP (at least 3 ISPs seen for Pakistan A&R sites)
To NIIT (Rawalpindi): getting about 300 Kbps, possibly 380 Kbps at best
  Verified that the bottleneck appeared to be in Pakistan
  There is often congestion (packet loss & extended RTTs) during busy periods each weekday
  Video will probably be sensitive to packet loss, so it may depend on the time of day
  H.323 (typically needs 384 Kbps + 64 Kbps) would appear marginal at best at any time (quick check below)
Requested an upgrade to 1 Mbps, and verified it was delivered (Feb '04)
No peering in Pakistan between NIIT and NSC
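A quick hedged check of the H.323 headroom argument above, using only the numbers quoted on the slide (an illustration, not a PingER tool):

```python
H323_VIDEO_KBPS = 384    # typical H.323 video stream, per the slide
H323_AUDIO_KBPS = 64     # plus audio
available = (300, 380)   # measured range to NIIT, Kbps

needed = H323_VIDEO_KBPS + H323_AUDIO_KBPS
for avail in available:
    verdict = "ok" if avail >= needed else "marginal/insufficient"
    print(f"{avail} Kbps available vs {needed} Kbps needed -> {verdict}")
# Both 300 and 380 Kbps fall short of the ~448 Kbps needed, hence
# "marginal at best"; the Feb '04 upgrade to 1 Mbps provides headroom.
```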
14
Example: S. Asia
Factor-of-six differences and large variations between countries, and even between sites in the same city
Nepal, Sri Lanka & Bangladesh are worse off
15
Challenges 1 of 2
Ping blocking:
  A complete block is easy to identify; then contact the site to try to bypass it (can be frustrating for the 3rd world)
  Partial blocks are trickier: compare ping results with TCP SYN/ACK probing (see the sketch below)
Effort:
  Remote hosts: negligible
  Monitoring host: < 1 day to install and configure; occasional updates to remote-host tables and problem response
  Archive host: 20% FTE; code is stable but could do with an upgrade; contact monitoring sites whose data is inaccessible
  Analysis: your decision; usually, for long-term details, download the data and use Excel
  Trouble-shooting: usually reactive; a user reports a problem, then look at the PingER data
  Working on automating alerts; data is available for download
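To illustrate the "compare with synack" idea for spotting partial ping blocking, here is a sketch under assumptions: it is not PingER's code, the ping flags are Unix-style, the target host and port are illustrative, and the 40-point gap used as the tell-tale is arbitrary.

```python
import socket
import subprocess

def icmp_ok(host, count=5):
    """Return the fraction of ICMP echo replies received (uses the system ping)."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    return out.count("bytes from") / count

def tcp_ok(host, port=80, attempts=5, timeout=3):
    """Return the fraction of successful TCP handshakes (SYN/ACK received)."""
    ok = 0
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                ok += 1
        except OSError:
            pass
    return ok / attempts

host = "example.org"   # illustrative target
ping_rate, syn_rate = icmp_ok(host), tcp_ok(host)
if syn_rate > ping_rate + 0.4:
    print(f"{host}: TCP succeeds ({syn_rate:.0%}) but ping mostly fails "
          f"({ping_rate:.0%}) - likely ICMP filtering rather than real loss")
```

If ICMP and TCP success rates track each other, the loss is probably genuine; a large gap suggests the site or an intermediate network is filtering pings.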
16
Challenges 2 of 2
Funding: DoE development/research funding ended in 2003
Looking for alternate funding sources
Sustain, maintain & extend the databases & measurements to more countries
Get measurements FROM & within developing regions
New analyses; preparing & presenting reports
Making contacts; coordinating efforts
17
Uses
Near real time results: trouble shooting, detect problems and see when they occur
Long term trends:
  Set expectations, planning
  Give sites/regions a better idea of how good or bad things are
  Input to policy and funding agencies; assist in deciding where help is needed and how to provide it
  Measure before & after upgrades: is it working right, did we get our money's worth?
18
More Information
PingER: www-iepm.slac.stanford.edu/pinger/
MonaLisa: monalisa.cacr.caltech.edu/
GGF/NMWG: www-didc.lbl.gov/NMWG/
ICFA/SCIC Network Monitoring report, Jan 03
Monitoring the Digital Divide, CHEP03 paper: arxiv.org/ftp/physics/papers/0305/ pdf
Human Development Index
Network Readiness Index