1 Overview of IEPM-BW – Bandwidth Testing of Bulk Data Transfer Tools
Connie Logg & Les Cottrell – SLAC/Stanford University
Presented at Internet2, May 8, 2002
Partially funded by the DOE/MICS Field Work Proposal on Internet End-to-end Performance Monitoring (IEPM); also supported by IUPAP

2 Why?
Grid computing will require reliable, scalable, predictable, and automatable transfer tools to distribute large volumes of data all over the world
We need to understand the requirements, characteristics, and complications of performing such transfers in order to optimize the use of existing tools, and/or to design and develop new ones
We need to know how to schedule and configure the automated transfers
We need to understand how to monitor performance, test applications, and troubleshoot performance issues

3 What?
We are developing a framework for testing and analyzing various bandwidth sensors and data transfer tools for Grid computing
These tools are being used to gather, reduce, analyze, and publicly report on the results. The reports include:
–Web-accessible data
–Tables
–Time series plots
–Scatter plots to see correlations
–Histograms
–Comparisons of the active and passive measurements

4 What – Cont.
These tools will be useful for:
–Testing new transfer applications and sensors
–Analyzing performance to new domains
–Baselining performance
–Forecasting performance
–Performing continuous measurements when needed due to performance and/or other changes
–Evaluating passive vs. active performance measurements

5 Where?
To the world! We currently run the tests to 34 nodes in 8 countries around the world
We plan on adding more

6 [World map of monitored nodes]
Sites: SLAC, LANL, NERSC, ORNL, LBNL, KEK, ANL, FNAL, TRIUMF, NIKHEF, IN2P3, CERN, BNL, RAL, DL, INFN/Milan, Roma, Stanford, SDSC, Caltech, UTDallas, Rice, UFL, SOX, NASA, WISC, RIKEN, JLAB, IU, KAIST, UDEL
Networks: ESnet, CalREN & Internet2
Grid projects: PPDG (Particle Physics Data Grid), GriPHyN (Grid Physics Network), EDG (European Data Grid)

7 Infrastructure Overview
Must get a system and accounts allocated for testing at each remote site
Master configuration file with specifications for setting up and configuring the tests to each node
“remoteos.pl” uses the master configuration file to set up the remote hosts and push out the latest releases of the sensors
“run-bw-tests” script runs the tests approximately every 90 minutes (the same code runs from the command line as well as from cron)
“codeanal” analyzes the performance of the “run-bw-tests” code itself
“post test processing” extracts the data and produces the plots and analysis
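
As a rough illustration of the role the master configuration file plays, here is a minimal Perl sketch that parses a hypothetical per-node file; the file name, layout, and field names are assumptions for illustration, not the actual IEPM-BW format:

#!/usr/bin/perl
# Hypothetical sketch: parse a master configuration file with one line per
# remote node of the form:  host  account  tests(comma-separated)
# The file name and layout here are assumed for illustration only.
use strict;
use warnings;

my %config;
open my $cfg, '<', 'iepm-bw.conf' or die "cannot open iepm-bw.conf: $!";
while (<$cfg>) {
    next if /^\s*(#|$)/;                  # skip comments and blank lines
    my ($host, $account, $tests) = split;
    $config{$host} = { account => $account, tests => [ split /,/, $tests ] };
}
close $cfg;

# List what would be run on each remote node
for my $host (sort keys %config) {
    print "$host ($config{$host}{account}): @{ $config{$host}{tests} }\n";
}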

8 “run-bw-tests”
Sequentially runs the following sensors, using the info in the configuration file:
–Ping
–Traceroute
–Iperf (10 seconds)
–Bbcp memory to memory (10 seconds)
–Bbcp disk to disk (file size derived from the memory-to-memory result)
–Bbftp disk to disk (same file size as bbcp)
–Pipechar (being phased out)
All text output from the sensor runs is saved to a log file
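
To make the sequential pattern concrete, a minimal sketch follows. This is not the actual run-bw-tests code: the host argument, log file name, and sensor list are placeholders, and the bbcp/bbftp invocations are elided since their options depend on the file sizes chosen at run time:

#!/usr/bin/perl
# Minimal sketch: run sensors one after another against a remote host and
# append all of their text output to a log file, as described above.
use strict;
use warnings;

my $host = shift @ARGV or die "usage: $0 remote-host\n";
open my $log, '>>', "bwtests-$host.log" or die "cannot open log: $!";

# bbcp and bbftp runs would follow the same pattern; their file sizes are
# derived from the memory-to-memory result, so they are omitted here.
my @sensors = (
    "ping -c 10 $host",
    "traceroute $host",
    "iperf -c $host -t 10",          # 10-second TCP throughput test
);

for my $cmd (@sensors) {
    print $log "=== $cmd === " . localtime() . "\n";
    print $log scalar `$cmd 2>&1`;   # capture stdout and stderr verbatim
}
close $log;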

9 “codeanal”
Looks at the logs from run-bw-tests to analyze how well the test code itself performed
Makes a summary web page
Useful for getting a picture of how things are working and of patterns of failure

10 “codeanal” Analysis
Diagnostic codes:
–NR – test not run
–NN – test timed out
–CTO – connection timed out
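
A minimal sketch of how such codes could be tallied for the summary page (the log format assumed here, with codes appearing as whole words in the text, is an illustration rather than the actual codeanal input):

#!/usr/bin/perl
# Hypothetical sketch: count the diagnostic codes listed above in a log
# stream read from stdin or from files named on the command line.
use strict;
use warnings;

my %tally;
while (my $line = <>) {
    $tally{$1}++ while $line =~ /\b(NR|NN|CTO)\b/g;
}
printf "%-4s %d\n", $_, $tally{$_} for sort keys %tally;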

11 Analysis, Displays and Results
Time series plots
Scatterplot panels for visualizing correlations
Histogram panels for visualizing the distribution of the data values
Scatterplots of all data for each sensor
Correlation tables
“Forecasting” experiments
Passive vs. active measurement comparisons

12 Time Series Plots
[Figure: all sensors overplotted on one time series]

13 Scatterplot Panel
Shows correlations by plotting the sensors versus each other
[Scatterplot figure: Iperf vs. Bbcp]

14 Histogram Panel for Each Node
Shows the distribution of results

15 Overplot All Sensor Results for All Nodes
[Figures: Bbcpmem vs. Iperf for all nodes; Bbcpdisk vs. Iperf for all nodes]

16 Compare Sensors on Different Speed Links
[Figures: link with high available bandwidth (left); link with low available bandwidth (right)]
In the left (high available bandwidth) example the limiting factor is disk speed: BBCPdisk < BBCPmem
Low-speed links track well

17 “Forecasting”
[Figure] Red with error bars is the average of the 5 previous measurements ± their standard deviation; blue is the actual value
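
A minimal sketch of this forecasting rule, assuming one measurement per input line (the input format is an assumption for illustration):

#!/usr/bin/perl
# Minimal sketch of the forecast described above: predict the next value as
# the mean of the 5 previous measurements, with the sample standard
# deviation as the error bar, then compare with the actual value.
use strict;
use warnings;

my @window;
while (defined(my $x = <>)) {
    chomp $x;
    if (@window == 5) {
        my $mean = 0;
        $mean += $_ / 5 for @window;
        my $var = 0;
        $var += ($_ - $mean) ** 2 / 4 for @window;   # sample variance (n-1)
        printf "forecast %.2f +/- %.2f  actual %.2f\n", $mean, sqrt($var), $x;
        shift @window;                               # slide the window
    }
    push @window, $x;
}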

18 Active vs. Passive Measurements
All the traffic going in and out of SLAC is recorded via Netflow by the Cisco switch at our border
We are just starting to compare these passive measurements with our active measurements of the same traffic
Preliminary results look promising

19 Active vs. Passive
Compare the active measurements with the passive (Netflow) measurements of the same active test traffic
[Figure: Iperf, SLAC to Caltech (Feb–Mar ’02)]

20 Passive vs. Active from SLAC to ORNL
[Figure: active and passive measurements vs. time (21 days)]
The passive measurements “track” the active ones; correlation coefficients:
Iperf R = 0.98
Bbcp Mem R = 0.75
Bbcp Disk R = 0.92
Bbftp R = 0.4
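
For reference, the R quoted here is presumably the standard Pearson correlation coefficient; a minimal sketch of computing it from paired samples (assuming one active and one passive value per input line, which is an illustrative format, not the project's data layout):

#!/usr/bin/perl
# Minimal sketch: Pearson correlation R between paired active and passive
# throughput samples, two whitespace-separated numbers per input line.
use strict;
use warnings;

my (@a, @p);
while (<>) {
    my ($active, $passive) = split;
    push @a, $active;
    push @p, $passive;
}
my $n = @a;
die "need at least 2 pairs\n" if $n < 2;

my ($ma, $mp) = (0, 0);
$ma += $_ / $n for @a;
$mp += $_ / $n for @p;

my ($cov, $va, $vp) = (0, 0, 0);
for my $i (0 .. $n - 1) {
    $cov += ($a[$i] - $ma) * ($p[$i] - $mp);
    $va  += ($a[$i] - $ma) ** 2;
    $vp  += ($p[$i] - $mp) ** 2;
}
printf "R = %.2f\n", $cov / sqrt($va * $vp);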

21 Futures
Expand deployment
–Port to Linux
–Other sites
Integrate with Web100 (retries, packet loss)
Add more sensors (GridFTP, pathrate, pathload)
Investigate further the comparison between active and passive measurements
Look at passive measurements of users’ transfers