The Internet2 Network Observatory


The Internet2 Network Observatory
Rick Summerhill, Director, Network Research, Architecture, and Technologies
Brian Cashman, Network Planning Manager
Matt Zekauskas, Senior Engineer
Eric Boyd, Director, Performance Architecture and Technologies
Internet2 Fall Member Meeting, 6 December 2006, Chicago, IL

Agenda
- Introduction
  - History and Motivation
  - What is the Observatory?
  - Examples of Research Projects
- The New Internet2 Observatory
  - Initial Observatory Realization
  - Measurement Capabilities
  - Hardware Deployment in New Racks
- Observatory Usage
  - Uses to date
  - Network Research Considerations
  - Future uses (and collections)
- Sharing Observatory Data and Tools for Inter-domain Use
  - perfSONAR

History and Motivation
- Original Abilene racks included measurement devices
  - Included a single (somewhat large) PC
  - Early OWAMP and Surveyor measurements
  - Optical splitters at some locations
- Motivation was primarily operations, monitoring, and management: understanding the network and how well it operates
  - Data was collected and maintained whenever possible
  - Primarily a NOC function
  - Available to other network operators to understand the network
- It became apparent that the datasets were valuable as a network research tool

The Abilene Upgrade Network
[Network map]

Upgrade of the Abilene Observatory
- An important decision was made during the Abilene upgrade process (Juniper T-640 routers and OC-192c)
  - Two racks, one of which was dedicated to measurement
  - Potential for the research community to collocate equipment
- Two components to the Observatory
  - Collocation: network research groups are able to collocate equipment in the Abilene router nodes
  - Measurement: data is collected by the NOC, the Ohio ITEC, and Internet2, and made available to the research community

An Abilene router node
[Rack diagram: T-640 router and M-5, out-of-band Ethernet switch, 48VDC power, measurement machines (nms), and space for collocation in the measurement (Observatory) rack]

Dedicated servers at each node
[Photo: the Houston router node, showing measurement machines and collocated PlanetLab machines]

Example Research Projects
- Collocation projects
  - PlanetLab: nodes installed in all Abilene router nodes
  - The Passive Measurement and Analysis Project (PMA): the router clamp
- Projects using collected datasets
  - "Modular Strategies for Internetwork Monitoring"
  - "Algorithms for Network Capacity Planning and Optimal Routing Based on Time-Varying Traffic Matrices"
  - "Spatio-Temporal Network Analysis"
  - "Assessing the Presence and Incidence of Alpha Flows in Backbone Networks"

The New Internet2 Network
- Expanded Layer 1, 2, and 3 facilities
  - Includes SONET and wave equipment
  - Includes Ethernet services
  - Greater IP services
- Requires a new type of Observatory

The New Internet2 Network

The New Internet2 Observatory
- Seek input from the community, both engineers and network researchers
- Current thinking is to support three types of services:
  - Measurement (as before)
  - Collocation (as before)
  - Experimental servers to support specific projects, for example Phoebus (this is new)
- Support different types of nodes, as illustrated in the following diagrams:
  - Optical nodes
  - Router nodes
- Brian, Eric, and Matt will talk further about the Observatory nodes

Router Nodes
[Diagram]

Optical Nodes
[Diagram]

The New York Node - First Installment

Existing Observatory Capabilities
- One-way latency, jitter, loss: IPv4 and IPv6 ("owamp")
- Regular TCP/UDP throughput tests, ~1 Gbps: IPv4 and IPv6; on-demand available ("bwctl")
- SNMP: octets, packets, errors; collected once per minute
- Flow data: addresses anonymized by zeroing the low-order 11 bits
- Routing updates: both IGP and BGP; the measurement device participates in both
- Router configuration: visible backbone; collected once per hour from all routers
- Dynamic updates: syslog; also alarm generation (~nagios); polling via router proxy
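The flow anonymization above can be sketched in a few lines: zero the low-order 11 bits of each IPv4 address before the data leaves the collector. This is an illustrative sketch, not the Observatory's actual collector code; the function name is ours.

```python
import ipaddress

ANON_BITS = 11  # low-order bits zeroed before flow data is shared

def anonymize(addr: str, bits: int = ANON_BITS) -> str:
    """Zero the low-order `bits` of an IPv4 address."""
    ip = int(ipaddress.IPv4Address(addr))
    mask = (2 ** 32 - 1) ^ (2 ** bits - 1)  # 0xFFFFF800 for 11 bits
    return str(ipaddress.IPv4Address(ip & mask))

print(anonymize("10.1.2.3"))     # 10.1.0.0
print(anonymize("192.0.2.200"))  # 192.0.0.0
```

Zeroing 11 bits blurs each address into a block of 2048, which keeps flows attributable to a campus-sized prefix while hiding individual hosts.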

Observatory Functions

Device     Function      Details
nms-rthr1  Measurement   BWCTL on-demand 1 Gbps router throughput, Thrulay
nms-rthr2  Measurement   BWCTL on-demand 10 Gbps router throughput, Thrulay
nms-rexp   Experimental  NDT/NPAD
nms-rpsv   Measurement   Netflow collector
nms-rlat   Measurement   OWAMP with locally attached GPS timing
nms-rpho   Experimental  Phoebus, 2 x 10GE to Multiservice Switch
nms-octr   Management    Controls Multiservice Switch
nms-oexp   Experimental  NetFPGA
nms-othr   Measurement   On-demand Multiservice Switch 10 Gbps throughput

Router Nodes

Optical Nodes

Observatory Hardware
- Dell 1950 and Dell 2950 servers
  - Dual-core 3.0 GHz Xeon processors
  - 2 GB memory
  - Dual RAID 146 GB disks
  - Integrated 1 GE copper interfaces
  - 10 GE interfaces
- Hewlett-Packard 10 GE switches
- 9 servers at router sites, 3 at optical-only sites

Observatory Databases – Data Types
- Data is collected locally and stored in distributed databases
- Databases:
  - Usage Data
  - Netflow Data
  - Routing Data
  - Latency Data
  - Throughput Data
  - Router Data
  - Syslog Data

Sub-outline: Uses and Futures
- Some uses of existing datasets and tools
  - Quality Control
  - Network Diagnosis
  - Network Characterization
  - Network Research
- Consultation with researchers
- Open questions

Recall: Datasets (QC = Quality Control, ND = Network Diagnosis, NC = Network Characterization, NR = Network Research)
- Usage Data: ND, NR
- Netflow Data: ND, NC, NR
- Routing Data: NR
- Latency Data: QC, ND, NR
- Throughput Data: ND, NR
- Router Data: NR
- Syslog Data
And, of course, most are used for operations.

Quality Control: e-VLBI
- When starting to connect telescopes, needed to verify inter-site paths
- Set up throughput testing among sites (using the same Observatory tool: bwctl)
  - Kashima, JP
  - Onsala, SE
  - Boston, MA (Haystack)
- Collect and graph data; distribute via web
- Quick QC check before application tests start

Network Diagnosis: e-VLBI
- Target at the time: 50 Mbps
- Oops: Onsala to Boston at 1 Mbps
- Divide and conquer
  - Verify that Abilene backbone tests look good
  - Use the Abilene test point in Washington, DC
  - Eliminated the European and trans-Atlantic pieces
- Focus on the problem: found an oversubscribed link

Quality Control: IP Backbone
- Machines with 1 GE interfaces, 9000-byte MTU
- Full mesh, IPv4 and IPv6
- Expect > 950 Mbps TCP
- Keep a list of the "Worst 10" paths
- If any path stays below 900 Mbps for two successive testing intervals, throw an alarm

Quality Control: Peerings
- Internet2 and ESnet have been watching latency across peering points for some time
- Internet2 and DREN have been preparing to do throughput and latency testing
- During this setup, found interesting routing and MTU-size issues

Network Diagnosis: End Hosts
- NDT and NPAD servers
- Quick check from any host that has a browser
- Easily eliminate (or confirm) last-mile problems (buffer sizing, duplex mismatch, …)
- NPAD can find switch limitations, provided the server is close enough

Network Diagnosis: Generic
- Generally looking for configuration problems and loss
- Don't forget security appliances
- Is there connectivity and reasonable latency? (ping -> OWAMP)
- Is routing reasonable? (traceroute, router proxy)
- Is the host reasonable? (NDT, NPAD)
- Is the path reasonable? (BWCTL)
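The questions above form an ordered checklist: stop at the first layer that fails, since later results are unreliable until earlier layers pass. A sketch of that control flow, with placeholder lambdas standing in for real wrappers around ping/OWAMP, traceroute, NDT/NPAD, and BWCTL:

```python
def diagnose(checks):
    """Run checks in order; return the first failing layer, or None if all pass."""
    for name, check in checks:
        if not check():
            return name
    return None

# Placeholder checks; real ones would invoke the tools named in the comments.
checks = [
    ("connectivity/latency", lambda: True),   # ping -> OWAMP
    ("routing",              lambda: True),   # traceroute, router proxy
    ("host tuning",          lambda: False),  # NDT / NPAD
    ("path throughput",      lambda: True),   # BWCTL
]
print(diagnose(checks))  # host tuning
```

Here the simulated host-tuning check fails, so the path-throughput test is never run, mirroring the divide-and-conquer approach used in the e-VLBI case.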

Network Characterization
- Flow data collected with the flow-tools package
- All data not used for security alerts and analysis [REN-ISAC] is anonymized
- Reports from anonymized data are available (note the truncated addresses)
- Additionally, some engineering reports

Network Research Projects
- Major consumption: flows, routes, configuration
- Nick Feamster (while at MIT)
- Dave Maltz (while at CMU)
- Papers in SIGCOMM, INFOCOM
- Hard to track folks who just pull data off of web sites

Network Research Facilities Grant
- Thanks to NSF funds, access to network researchers for 1.5 years
  - Interviews
  - Presentations at network research conferences and workshops
- This material is based in part on work supported by the National Science Foundation (NSF) under Grant No. SCI. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.

Grant Result Snippets
- Liked the Abilene Observatory. Keep passive!
- Biggest thing: more data
  - But network research project driven
  - Security-related: want payload
- Want some way to get more information from flow data
  - Alternate anonymization techniques
  - Community consensus on passive measurement

Grant Result Snippets (continued)
- Want a pool of researcher-developed access tools (sharing among researchers)
- Want the ability to request new data sets
  - Both new sources and derived data
- Extend to cover new facilities (they were thinking HOPI and L2VPNs, but…)

Lots of Work to be Done
- Internet2 Observatory realization inside racks set for initial deployment, including new research projects (NetFPGA, Phoebus)
  - Software and links easily changed
  - Could add or change hardware depending on costs
- Researcher tools, new datasets
- Consensus on passive data

Not Just Research
- Operations and characterization of new services
  - Finding problems with stitched-together VLANs
- Collecting and exporting data from the Dynamic Circuit Service
  - Ciena performance counters
  - Control-plane setup information
  - Circuit usage (not utilization, although that is also nice)
  - Similar for the underlying Infinera equipment
- And consider inter-domain issues

Observatory Requirements Strawman
- Small group: Dan Magorian, Joe Metzger, and Internet2
- Want to start a working group under the new Network Technical Advisory Committee (NTAC)
- Interested? Talk to Matt, or watch the NTAC wiki on wiki.internet2.edu; the measurement page will also have some information…

Strawman: Potential New Focus Areas
- Technology issues
  - Is it working? How well? How to debug problems?
- Economy issues: inter-domain circuits
  - How are they used? Are they used effectively?
  - Monitor violation of any rules (e.g., for short-term circuits)
  - Compare with "vanilla" IP services?

Strawman: Potential High-Level Goals
- Extend research datasets to new equipment
- Circuit "weathermap"; optical proxy
- Auditing circuits
  - Who requested them (at suitable granularity)
  - What for? (e.g., bulk data, streaming media, experiment control)
  - Why? (additional bandwidth, required characteristics, application isolation, security)

Inter-Domain Issues Important
- New services (various circuits)
- New control plane
  - That must work across domains
  - Will require some agreement among various providers
- Want to allow for diversity…

Sharing Observatory Data
We want to make Internet2 Network Observatory data:
- Available
  - Access to existing active and passive measurement data
  - Ability to run new active measurement tests
- Interoperable
  - Common schema and semantics, shared across other networks
  - Single format
  - XML-based discovery of what's available
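To make "XML-based discovery" concrete, the sketch below builds a small metadata record in the NM-WG style that perfSONAR follows: a subject (what was measured) plus an event type (what kind of measurement). The namespace, element names, and the host name are illustrative; the authoritative schema is defined by the perfSONAR/NM-WG documents, not by this snippet.

```python
import xml.etree.ElementTree as ET

# Assumed namespace in the NM-WG style; consult the perfSONAR schema docs
# for the exact, current definition.
NMWG = "http://ggf.org/ns/nmwg/base/2.0/"

def metadata(subject_host: str, event_type: str) -> str:
    """Build a minimal discovery-style metadata record as an XML string."""
    md = ET.Element(f"{{{NMWG}}}metadata", id="meta1")
    subj = ET.SubElement(md, f"{{{NMWG}}}subject")
    ET.SubElement(subj, "host").text = subject_host  # hypothetical host name
    ET.SubElement(md, f"{{{NMWG}}}eventType").text = event_type
    return ET.tostring(md, encoding="unicode")

print(metadata("nms-rlat.example.net", "owamp"))
```

A client can then ask "what measurements exist for this host?" by matching such records, which is what makes a single common format valuable across domains.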

What is perfSONAR? Performance Middleware
- perfSONAR is an international consortium in which Internet2 is a founder and leading participant
- perfSONAR is a set of protocol standards for interoperability between measurement and monitoring systems
- perfSONAR is a set of open-source web services that can be mixed and matched and extended to create a performance-monitoring framework

perfSONAR Design Goals
- Standards-based
- Modular
- Decentralized
- Locally controlled
- Open source
- Extensible
- Applicable to multiple generations of network monitoring systems
- Grows "beyond our control"
- Customized for individual science disciplines

perfSONAR Integrates
- Network measurement tools
- Network measurement archives
- Discovery
- Authentication and authorization
- Data manipulation
- Resource protection
- Topology

perfSONAR Credits
perfSONAR is a joint effort:
- ESnet (including ESnet/LBL staff and Fermilab)
- GÉANT2 JRA1 (including Arnes, Belnet, Carnet, Cesnet, CYNet, DANTE, DFN, FCCN, GRNet, GARR, ISTF, PSNC, Nordunet (Uninett), Renater, RedIRIS, Surfnet, SWITCH)
- Internet2 (including University of Delaware, Georgia Tech, SLAC, and Internet2 staff)
- RNP

perfSONAR Adoption
- R&E networks
  - Internet2
  - ESnet
  - GÉANT2
  - European NRENs
  - RNP
- Application communities
  - LHC
  - GLORIAD distributed virtual NOC
  - Roll-out to other application communities in 2007
- Distributed development
  - Individual projects (10 before first release) write components that integrate into the overall framework
  - Individual communities (5 before first release) write their own analysis and visualization software

Proposed Data to be Made Available via perfSONAR
- First priorities
  - Link status (Ciena and Infinera data)
  - VLAN
  - SONET (severely errored seconds, etc.)
  - Light levels
  - SNMP data
  - OWAMP
  - BWCTL
- Second priorities
  - Flow data
- Feedback? Alternate priorities?

What will (eventually) consume the data?
- We intend to create a series of web pages that will display the data
- Third-party analysis/visualization tools
  - European and Brazilian UIs
  - SLAC-built analysis software
  - More…
- Real applications
  - Network-aware applications
  - Consume performance data
  - React to network conditions
  - Request dynamic provisioning
- Future example: Phoebus