An Introduction to ESnet and its Services

1 An Introduction to ESnet and its Services
William E. Johnston, ESnet Manager and Senior Scientist
Michael S. Collins, Stan Kluz, Joseph Burrescia, and James V. Gagliardi, ESnet Leads
and the ESnet Team

2 Outline
- Forward
- ESnet science drivers
- 30 second tutorial on networking
- ESnet physical and logical infrastructure
- Not just one network
- Services for science collaboration
- ESnet is fairly unique
- ESnet is complex in several dimensions
- Operating critical science mission infrastructure
- Asset management
- Future directions
- Conclusions

3 ESnet
- An infrastructure that is critical to DOE's science mission and that serves all of DOE
- Focused on the Office of Science Labs
- Complex and specialized – both in the network engineering and the network management
- You can't go out and buy this – ESnet integrates commercial products and in-house software into a complex management system for operating the net
- You can't go out and take a class in how to run this sort of network – it is specialized and is learned from experience
- Extremely reliable in several dimensions
- ESnet has functioned flawlessly during the current turmoil

4 Stakeholders
- DOE MICS Office, ESnet program
- ESnet Steering Committee (ESSC) – represents the Science Offices (strategic needs)
- ESnet Coordinating Committee (ESCC) – site representatives (operational issues)
- Users
  - Mostly DOE Office of Science
  - NNSA / Defense Programs
  - DOE collaborators
  - A few others (e.g. the NSF LIGO site)

5 Outline
- Forward
- ESnet science drivers
- 30 second tutorial on networking
- ESnet physical and logical infrastructure
- Not just one network
- Services for science collaboration
- ESnet is fairly unique
- ESnet is complex in several dimensions
- Operating critical science mission infrastructure
- Asset management
- Future directions
- Conclusions

6 Several Workshops Have Solicited Input from the Science Community
August 13-15, 2002
DOE Organizing Committee: Mary Anne Scott (Chair), Dave Bader, Steve Eckstrand, Marvin Frazier, Dale Koelling, Vicky White
Workshop Panel Chairs: Ray Bair and Deb Agarwal; Bill Johnston and Mike Wilde; Rick Stevens; Ian Foster and Dennis Gannon; Linda Winkler and Brian Tierney; Sandy Merola and Charlie Catlett
Focused on science drivers for:
- Advanced Infrastructure
- Middleware Research
- Network Research
- Network Provisioning Model
- Network Governance Model

7 Eight Major DOE Science Areas Analyzed at the August ’02 Workshop
Discipline: Climate. For each time horizon, the table gives the vision for the future, the characteristics of the process of science that motivate high speed nets, and the networking and middleware requirements.

Climate (near term):
- Vision: analysis of model data by selected communities that have high speed networking (e.g. NCAR and NERSC)
- Characteristics: a few data repositories, many distributed computing sites; NCAR – 20 TBy, NERSC – 40 TBy, ORNL – 40 TBy
- Networking: authenticated data streams for easier site access through firewalls; server side data processing (computing and cache embedded in the net)
- Middleware: information servers for global data catalogues

Climate (5 yr):
- Vision: enable the analysis of model data by all of the collaborating community; add many simulation elements/components as understanding increases
- Characteristics: 100 TBy / 100 yr generated simulation data, 1-5 PBy / yr (just at NCAR); distribute to major users in large chunks for post-simulation analysis
- Networking: robust access to large quantities of data
- Middleware: reliable data/file transfer across system / network failures

Climate (5+ yr):
- Vision: integrated climate simulation that includes all high-impact factors
- Characteristics: 5-10 PBy/yr (at NCAR); add many diverse simulation elements/components, including from other disciplines – this must be done with distributed, multidisciplinary simulation; virtualized data to reduce storage load
- Networking: robust networks supporting distributed simulation – adequate bandwidth and latency for remote analysis and visualization of massive datasets; quality of service guarantees for distributed simulations
- Middleware: virtual data catalogues and work planners for reconstituting the data on demand
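To put the table's data volumes in network terms, some illustrative arithmetic (this assumes an ideal, fully utilized link; real transfers achieve far less than line rate):

```python
def transfer_days(terabytes: float, link_gbps: float) -> float:
    """Days to move `terabytes` of data over a `link_gbps` link at 100% utilization."""
    bits = terabytes * 1e12 * 8           # TB -> bits (decimal units)
    seconds = bits / (link_gbps * 1e9)    # link rate in bits/second
    return seconds / 86400

# Moving NCAR's 20 TBy archive over an OC48 (~2.5 Gb/s) vs an OC192 (~10 Gb/s):
oc48_days = transfer_days(20, 2.5)    # roughly three-quarters of a day, ideal case
oc192_days = transfer_days(20, 10)
```

Even in this best case, the 5-10 PBy/yr volumes projected for the 5+ yr horizon clearly motivate the high-speed requirements in the table.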

8 Outline
- Forward
- ESnet science drivers
- 30 second tutorial on networking
- ESnet physical and logical infrastructure
- Not just one network
- Services for science collaboration
- ESnet is fairly unique
- ESnet is complex in several dimensions
- Operating critical science mission infrastructure
- Asset management
- Future directions
- Conclusions

9 ESnet Architecture and Terminology
Applications
- Application level transport (HTTP, FTP, Telnet, etc.)
- TCP – Internet reliable transport protocol
- UDP – Internet unreliable transport protocol
- RTP, Group Communication, etc.
- IP – addressing, routing, and basic packet-based data transport (64 Kilobyte max packet size)
- ATM – Asynchronous Transfer Mode: 53 Byte cells (5 B header, 48 B data payload); IP packets are fragmented into ATM data payloads; ATM cells are routed between ATM switches; the physical layer is mostly local optical fiber or SONET
- Ethernet – IP encapsulated in Ethernet "packets" for transport of IP packets; the physical layer is local area copper "twisted pair" or local or wide area optical fabric; frame size 1200 By to 9000 By
- Packet-Over-SONET – IP encapsulated in SONET frames for transport on optical fabric (frame size KB to 10s of KBs)
- Telecomm SONET
- ESnet Dense Wave Division Multiplexed ("DWDM" / "lambda") optical fabric (e.g. the Qwest/ESnet OC48/OC192 ring is 2 lambdas – one receive channel and one transmit channel)
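The ATM fragmentation described above can be quantified with a simplified model (this ignores the AAL5 trailer and padding, so real cell counts are slightly higher):

```python
import math

ATM_CELL = 53        # bytes per ATM cell on the wire
ATM_PAYLOAD = 48     # data bytes per cell (5-byte header)

def atm_cells(ip_packet_bytes: int) -> int:
    """Cells needed to carry one IP packet (simplified; no AAL5 trailer/padding)."""
    return math.ceil(ip_packet_bytes / ATM_PAYLOAD)

def wire_overhead(ip_packet_bytes: int) -> float:
    """Fraction of wire bytes that are not IP payload."""
    wire_bytes = atm_cells(ip_packet_bytes) * ATM_CELL
    return 1 - ip_packet_bytes / wire_bytes
```

For a 1500-byte IP packet this gives 32 cells and roughly 12% overhead, which is why ATM's "cell tax" mattered for high-throughput science traffic.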

10 Terminology
- Network bandwidth is typically given in bits/second – e.g. 1 Gigabit/sec ("Gb/s") is 1000 Megabits/sec
- Data transport rates are typically given in Bytes/month – e.g. 1 Terabyte/month is 1,000,000 Megabytes/month
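The two unit systems convert as follows; a small sketch (decimal units throughout, 30-day month):

```python
def gbps_to_tb_per_month(gbps: float, utilization: float = 1.0, days: int = 30) -> float:
    """Terabytes/month carried by a link of `gbps` at the given average utilization."""
    bytes_per_sec = gbps * 1e9 / 8                     # bits/s -> bytes/s
    return bytes_per_sec * utilization * days * 86400 / 1e12

# A fully utilized 1 Gb/s link carries about 324 TB in a 30-day month.
full_gig = gbps_to_tb_per_month(1)
```

This is why the two unit systems coexist: link capacity is quoted in bits/second, while monthly accepted/delivered traffic is naturally quoted in Bytes/month.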

11 Outline
- Forward
- ESnet science drivers
- 30 second tutorial on networking
- ESnet physical and logical infrastructure
- Not just one network
- Services for science collaboration
- ESnet is fairly unique
- ESnet is complex in several dimensions
- Operating critical science mission infrastructure
- Asset management
- Future directions
- Conclusions

12 ESnet Physical Infrastructure Interconnects Essentially Every Major DOE Facility
[Map: the ESnet backbone ring with hubs at SEA, SNV, ELP, ALB, ATL, DC, NYC, and CHI; DOE sites including PNNL, LBNL, NERSC, SLAC, LLNL, SNLL, JGI, LANL, SNLA, GA, FNAL, ANL, AMES, ORNL, ORAU, OSTI, SRS, JLAB, PPPL, BNL, MIT, INEEL, and others; and international peers including CA*net4, KDDI (Japan), Sinet (Japan), GEANT (Germany, France, Italy, UK, etc.), CERN, Taiwan (TANet2, ASCC), Singaren, Russia (BINP), Australia, and the Netherlands.]
Link speeds: International (high speed); OC192 (10 Gb/s optical); OC48 (2.5 Gb/s optical); Gigabit Ethernet (1 Gb/s); OC12 ATM (622 Mb/s); OC12; OC3 (155 Mb/s); T3 (45 Mb/s); T1-T3; T1 (1.5 Mb/s)
42 end user sites: Office of Science sponsored (22); NNSA sponsored (12); joint sponsored (3); other sponsored (2: LIGO, NOAA); laboratory sponsored (6)

13 ESnet Site Architecture
[Diagram: the backbone optical fiber ring connecting hubs at New York (AOA), Chicago (CHI), Washington, DC (DC), Atlanta (ATL), El Paso (ELP), and Sunnyvale (SNV). The hubs are the backbone routers and local loop connection points, and have lots of connections (42 in all). A local loop runs from the hub to each site, ending at the ESnet border router; ESnet's responsibility ends there, and the site gateway router, DMZ, and site LAN are the site's responsibility.]

14 While There is One Backbone Provider, there are Many Local Loop Providers to Get to the Sites
[Map: the same hub-and-site map as slide 12, color-coded by local loop provider.]
Local loop providers: Qwest owned; Qwest contracted; Touch America contracted/owned; MCI contracted/owned; site contracted/owned; SBC (PacBell) contracted/owned; FTS2000 contracted/owned; SPRINT contracted/owned

15 ESnet Logical Infrastructure Connects the DOE Community With its Collaborators
ESnet connects to most universities via high-speed Abilene peering points. There are many commercial peers (logical network connections through a fairly small number of physical connections) because there are lots of commercial nets.

16 ESnet Has Experienced Exponential Growth Since 1992
Annual growth has increased over the past five years from 1.7x to just over 2.0x.
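Compounding those annual factors shows why the growth matters; at 2.0x per year, traffic grows 32-fold in five years:

```python
def projected_traffic(current: float, annual_factor: float, years: int) -> float:
    """Traffic level after `years` of compounding annual growth."""
    return current * annual_factor ** years

# Relative growth over five years at the two rates quoted on this slide:
slow = projected_traffic(1, 1.7, 5)   # about 14x
fast = projected_traffic(1, 2.0, 5)   # exactly 32x
```

Exponential growth at these rates is what forces the backbone upgrades (OC48 to OC192 and beyond) discussed later in the deck.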

17 Who Generates Traffic, and Where Does it Go?
ESnet Inter-Sector Traffic Summary, Jan 2003
[Chart: traffic flows between ESnet sites and the Commercial, R&E, and International sectors via the peering points. Green = ESnet ingress traffic, blue = ESnet egress traffic; percentages are shares of total ingress or egress traffic. Annotation: DOE is a new supplier of data because DOE facilities are used by university and commercial researchers as well as by DOE researchers.]
This slide shows a break-down of traffic flows between ESnet sites and other sectors of the Internet for Jan 2003. It shows that 72% of incoming (or accepted) traffic came from ESnet sites, while the remaining 28% of incoming traffic came from the various sectors shown above. Similarly, 53% of the outgoing (or delivered) traffic went to sites, while the remaining 47% went to the three external sectors. This would indicate that DOE is a net exporter of data – i.e. more data is sent than received.
The data flowing between sites and to the R&E and International sectors can clearly be considered scientific activity. Data flowing to the Commercial sector is a mix of direct scientific activity and activity in support of science, such as making research data available to the general public.
It is important to note that external sector traffic can only flow to or from ESnet sites; traffic between external sectors cannot flow over ESnet. This is one major distinction between ESnet and a commercial ISP. The fact that ESnet does not need to provide bandwidth for transit traffic between external sectors is one factor in its cost effectiveness. A second factor is that ESnet does not pay for traffic to/from the external sectors, apart from the costs of connecting to the peering points.
ESnet Appropriate Use Policy (AUP): all ESnet traffic must originate and/or terminate at an ESnet site (no transit traffic is allowed) – e.g. a commercial site cannot exchange traffic with an international site across ESnet. This is effected via routing restrictions.
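The AUP's no-transit rule amounts to a one-line predicate; a sketch (the function name is ours, and real enforcement is done with routing policy, not per-packet checks):

```python
def aup_allows(src_is_esnet_site: bool, dst_is_esnet_site: bool) -> bool:
    """ESnet AUP: traffic must originate and/or terminate at an ESnet site.

    Traffic between two external sectors (neither endpoint an ESnet site)
    would be transit traffic, which is not allowed.
    """
    return src_is_esnet_site or dst_is_esnet_site

# A commercial host talking to a DOE site: allowed.
# A commercial host talking to an international peer across ESnet: blocked.
```

This is the property that lets ESnet avoid provisioning bandwidth for transit traffic, one of the cost-effectiveness factors cited above.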

18 ESnet Has a Service Compact With its Users
- Low and relatively constant latency for packet delivery is essential for the smooth functioning of distributed applications
- The network core is engineered for less than 50 ms average latency, and less than 150 ms when the network partitions
- Re-routes are mostly due to scheduled maintenance; the latency history also indicates the latency if the ring partitions in various places (this graph actually shows an unusually high number of Qwest maintenance outages)

19 Outline
- Forward
- ESnet science drivers
- 30 second tutorial on networking
- ESnet physical and logical infrastructure
- Not just one network
- Services for science collaboration
- ESnet is fairly unique
- ESnet is complex in several dimensions
- Operating critical science mission infrastructure
- Asset management
- Future directions
- Conclusions

20 ESnet is Not Just One Network
Part of the complexity of ESnet is that it is actually four networks:
- The IP Internet (IPv4) network that most people see (as described above)
- SecureNet, which serves the NNSA / Defense Programs Labs (encrypted, encapsulated ATM)
- The IPv6 network backbone (next generation Internet protocol network)
- IP Multicast
Each of these uses a different routing and/or addressing mechanism.

21 SecureNet
- SecureNet connects 9 NNSA (Defense Programs) sites (a 10th site at HQ is being added)
- The NNSA sites exchange encrypted ATM traffic
- The data is unclassified when ESnet gets it, because it is encrypted before it leaves the NNSA sites with an NSA certified encrypter
- SecureNet runs over the ESnet core backbone as a layer 2 overlay – that is, the SecureNet encrypted ATM is transported over ESnet's Packet-Over-SONET infrastructure by encapsulating the ATM in a special protocol (MPLS)

22 SecureNet – Mid 2003
[Diagram: primary and backup SecureNet paths across the backbone hubs (AOA, CHI, SNV, DC, ATL, ELP), connecting LLNL, SNLL, LANL, SNLA, KCP, DOE-AL, Pantex, ORNL, and SRS.]
SecureNet encapsulates the payload-encrypted ATM in MPLS using the Juniper router Circuit Cross Connect (CCC) feature.

23 IPv6-ESnet Backbone
[Diagram: the IPv6 backbone overlay – Cisco 7206 routers at Sunnyvale, Chicago, New York, DC, Albuquerque, Atlanta, and El Paso, with peerings to Abilene, the 6BONE, the distributed 6TAP, StarLight/StarTap, and PAIX (peer counts shown range from 6 to 18); sites include LBNL, SLAC, FNAL, ANL, TWC, and BNL. Links are marked IPv6 only, IPv4/IPv6, or IPv4 only.]
IPv6 is the next generation Internet protocol, and ESnet is working on addressing deployment issues
- one big improvement is that while IPv4 has 32-bit addresses – about 4x10^9, which we are running short of – IPv6 has 128-bit addresses – about 3.4x10^38, which we are not ever likely to run short of
- another big improvement is native support for encryption of data
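The address-space comparison is easy to check directly:

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_addresses = 2 ** 32     # about 4 x 10^9
ipv6_addresses = 2 ** 128    # about 3.4 x 10^38

# The IPv6 space is larger by a factor of 2^96 – about 7.9 x 10^28.
expansion_factor = ipv6_addresses // ipv4_addresses
```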

24 Outline
- Forward
- ESnet science drivers
- 30 second tutorial on networking
- ESnet physical and logical infrastructure
- Not just one network
- Services for science collaboration
- ESnet is fairly unique
- ESnet is complex in several dimensions
- Operating critical science mission infrastructure
- Asset management
- Future directions
- Conclusions

25 Services for Science Collaboration
- Seamless voice, video, and data teleconferencing is important for geographically dispersed collaborators
- ESnet currently provides voice conferencing, videoconferencing (H.320/ISDN scheduled, H.323/IP ad-hoc), and data collaboration services to more than a thousand DOE researchers worldwide
- These are heavily used services, averaging around:
  - 4600 port hours per month for H.320 videoconferences
  - 2000 port hours per month for audio conferences
  - 1100 port hours per month for H.323
  - approximately 200 port hours per month for data conferencing
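Summing the quoted figures gives the total monthly conferencing load (figures from the slide; a "port hour" counts one connected endpoint for one hour):

```python
# Average monthly collaboration-service usage quoted on this slide.
monthly_port_hours = {
    "H.320 videoconference": 4600,
    "audio conference": 2000,
    "H.323 videoconference": 1100,
    "data conference": 200,
}

total = sum(monthly_port_hours.values())   # 7900 port hours per month
```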

26 Voice, Video, and Data Collaboration
- Web-based registration and scheduling for all of these services authorizes users and efficiently lets them schedule meetings
- Such an automated approach is essential for a scalable service – ESnet staff could never handle all of the reservations manually

27 Public Key Infrastructure
Digital Identity certificates issued by ESnet DOEGrids CA are essential for the trust management needed for cross-site resource sharing (e.g. international HEP collaborations) The rapidly expanding customer base of this service will soon make it ESnet’s largest collaboration service by customer count

28 Services for Science Collaboration
Public Key Infrastructure supports cross-site, cross-organization, and international trust relationships that permit sharing computing and data resources:
- Digital identity certificates for people, hosts, and services – an essential core service for Grid middleware
- Provides formal and verified trust management – an essential service for widely distributed heterogeneous collaboration, e.g. in the international High Energy Physics community
- Policy Management Authority – negotiates and manages the formal trust instrument (Certificate Policy – CP)
- Certificate Authority (CA) – validates users against the CP and issues digital identity certs; Certificate Revocation Lists are provided
This service was the basis of the first routine sharing of HEP computing resources between the US and Europe.
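The issue/validate/revoke lifecycle described above can be modeled as a toy sketch. This is purely conceptual – real DOEGrids certificates are X.509 documents with cryptographic signatures, and the class and method names here are invented for illustration:

```python
class ToyCA:
    """Toy model of a Certificate Authority operating under a Certificate Policy."""

    def __init__(self, policy: str):
        self.policy = policy       # the Certificate Policy (CP), the formal trust instrument
        self.issued = {}           # serial number -> subject
        self.crl = set()           # revoked serials (the Certificate Revocation List)
        self._next_serial = 1

    def issue(self, subject: str, vetted_by_policy: bool) -> int:
        """The CA validates the user against the CP before issuing a certificate."""
        if not vetted_by_policy:
            raise ValueError("subject does not meet the CP requirements")
        serial = self._next_serial
        self._next_serial += 1
        self.issued[serial] = subject
        return serial

    def revoke(self, serial: int) -> None:
        """Revocations are published via the CRL."""
        self.crl.add(serial)

    def is_valid(self, serial: int) -> bool:
        """A relying party accepts a cert only if issued and not revoked."""
        return serial in self.issued and serial not in self.crl
```

The point of the sketch is the separation of duties: the Policy Management Authority owns the CP, while the CA mechanically enforces it and publishes revocations.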

29 Outline
- Forward
- ESnet science drivers
- 30 second tutorial on networking
- ESnet physical and logical infrastructure
- Not just one network
- Services for science collaboration
- ESnet is fairly unique
- ESnet is complex in several dimensions
- Operating critical science mission infrastructure
- Asset management
- Future directions
- Conclusions

30 ESnet is Different from a Commercial ISP or University Network
- A fairly small number of very high bandwidth sites (commercial ISPs have thousands of low b/w sites)
- Runs SecureNet as an overlay network
- Provides direct support of DOE science through various collaboration services
- ESnet "owns" all network trouble tickets (even from end users) until they are resolved – one-stop shopping for user network problems

31 Outline
- Forward
- ESnet science drivers
- 30 second tutorial on networking
- ESnet physical and logical infrastructure
- Not just one network
- Services for science collaboration
- ESnet is fairly unique
- ESnet is complex in several dimensions
- Operating critical science mission infrastructure
- Asset management
- Future directions
- Conclusions

32 ESnet is Complex – There are 6 Databases for the State of the Network and Several More for Performance
Databases: Performance, Topology, OSPF Metrics, Hub Configuration, SecureNet, IBGP Mesh
- The Physical Topology database documents devices and their connections, including interface names and addresses
- The Backbone Map shows our connections to Qwest
- The SecureNet map shows the PVPs in use between SecureNet sites and encapsulation points
- The OSPF map shows how we have manually set OSPF metrics to optimize routing
- The IBGP map shows where we are using full meshing and where we are using route reflection
- The LANWAN system is another interface into all the site diagrams, showing equipment and interconnections at each site
The Engineering Web Site maps and diagrams are all clickable, allowing drilldown to the finest levels of detail of the underlying databases.

33 Drill Down into the Performance DB to Every Physical and Logical Interface level for Every Router
- Real-time monitoring of the traffic levels of some 4400 network entities is one of the primary network diagnosis tools
- 1 and 2 min, 2 hr, and daily averages
- hours to months of historical data kept on-line
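The averaging scheme above is typical of SNMP counter polling: sample each interface's octet counter, difference successive samples, and average over the window. A minimal sketch (the sampling interval and function names are assumptions, not ESnet's actual tooling):

```python
def rates_from_counter(samples):
    """Convert counter samples into per-interval rates.

    samples: list of (time_seconds, octet_counter) tuples, in time order.
    Returns a list of bits/second values, one per interval.
    (Counter wraparound is ignored in this sketch.)
    """
    rates = []
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        rates.append((c1 - c0) * 8 / (t1 - t0))   # octets -> bits, per second
    return rates

def window_average(rates):
    """One averaging window (e.g. 2 min, 2 hr, or daily) over interval rates."""
    return sum(rates) / len(rates)
```

Stacking windows of different lengths over the same counter stream yields the 1-2 min, 2 hr, and daily averages the monitoring system keeps.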

34 Drill Down into the Topology DB to Operating Characteristics of Every Device
e.g. inlet, hot-point, and exhaust cooling air temperature

35 Drill Down into the Hub Configuration DB for Every Wire Connection
Equipment rack detail at AOA, NYC Hub (one of the core optical ring sites)

36 Equipment wiring detail for one module at the AOA, NYC Hub
The Hub Configuration Database: equipment wiring detail for one module at the AOA, NYC Hub (this particular module allows remote power cycling of all of the equipment)

37 ESnet Equipment @ Qwest, 32 AofA HUB, NYC, NY (~$1.8M total, list)
- Juniper T320 AOA-CR1 (core router) ($1,133,000 list)
- Juniper OC192 Optical Ring Interface (the AOA end of the OC192 to CHI) ($195,000 list)
- Juniper OC48 Optical Ring Interface (the AOA end of the OC48 to DC-HUB) ($65,000 list)
- Juniper M20 AOA-PR1 (peering router) ($353,000 list)
- Cisco 7206 AOA-AR1 (low speed links to MIT & PPPL) ($38,150 list)
- AOA Performance Tester ($4,800 list)
- Lightwave Secure Terminal Server ($4,800 list)
- Sentry power 48v 30/60 amp panel ($3,900 list)
- Sentry power 48v 10/25 amp panel ($3,350 list)
- DC / AC Converter ($2,200 list)
- Qwest DS3 DCX

38 Outline
- Forward
- ESnet science drivers
- 30 second tutorial on networking
- ESnet physical and logical infrastructure
- Not just one network
- Services for science collaboration
- ESnet is fairly unique
- ESnet is complex in several dimensions
- Operating critical science mission infrastructure
- Asset management
- Future directions
- Conclusions

39 Operating Science Mission Critical Infrastructure
- ESnet is a visible and critical piece of DOE science infrastructure – if ESnet fails, tens of thousands of DOE and university users know it within minutes if not seconds
- This requires high reliability and high operational security in the systems that are integral to the operation and management of the network
- Secure and redundant mail and Web systems are central to the operation and security of ESnet
  - trouble tickets are handled by email, engineering communication is by email, and the engineering database interface is via the Web
- Secure network access to hub equipment
- Backup secure telephony access to hub equipment
- 24x7 help desk (joint with NERSC)
- 24x7 on-call network engineer

40 Disaster Recovery and Stability
- The network operational services must be kept available even if, for example, the West Coast is disabled by a massive earthquake
- Network engineers in four locations across the country
- Full and partial engineering databases and network operational service replicas in three locations
- Telephone modem backup access to all hub equipment

41 Disaster Recovery and Stability
[Diagram: engineers and replicated servers (Spectrum network management, DNS, engineering, load, config, and public Web servers) distributed across the hubs (SEA, NYC, SNV, CHI, DC, ALB, ATL, ELP) and sites (BNL, AMES, PPPL, LBNL, TWC, SDSC), plus remote engineers.]
All core network hubs are co-located in commercial telecommunication facilities with backup power. The ESnet backbone operated without interruption through the N. Calif. blackout, the 9/11 attacks, and the 9/03 NE States blackout.

42 Maintaining Science Mission Critical Infrastructure in the Face of Attack
- A phased security architecture is being implemented to protect the network and the sites
- The phased response ranges from blocking certain site traffic to a complete isolation of the network, which allows the sites to continue communicating among themselves in the face of the most virulent attacks
- Separate the ESnet core routing functionality from our external Internet connections by means of a "peering" router that can have a policy different from the core routers
- Provide a rate-limited path to the external Internet that will ensure site-to-site communication during an external denial of service attack
- Allow for "lifeline" connectivity that permits downloading of patches, exchange of email, and viewing of web pages (i.e. email, dns, http, https, ssh, etc.) with the external Internet prior to full isolation of the network
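The phased response above can be sketched as a simple admission policy. The phase names and port numbers are illustrative assumptions, not ESnet's actual configuration; the slide names email, dns, http, https, and ssh as the lifeline services:

```python
# Assumed "lifeline" services (well-known ports): email (SMTP), dns, http, https, ssh.
LIFELINE_PORTS = {25, 53, 80, 443, 22}

def external_traffic_allowed(phase: str, dst_port: int) -> bool:
    """Decide whether traffic to/from the external Internet is admitted.

    Phases (illustrative names):
      "normal"   - no restrictions
      "lifeline" - rate-limited path carrying only essential services
      "isolated" - full isolation: only site-to-site traffic continues
    """
    if phase == "normal":
        return True
    if phase == "lifeline":
        return dst_port in LIFELINE_PORTS
    if phase == "isolated":
        return False
    raise ValueError(f"unknown phase: {phase}")
```

Note that even in the "isolated" phase, traffic between ESnet sites is unaffected; only the external paths are cut.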

43 ESnet WAN Security and Cybersecurity
ESnet security for its own network equipment is provided by:
- secure access to devices
- patching router operating systems
- confidentiality of configuration data, etc.

44 ESnet WAN Security and Cybersecurity
Cybersecurity is a new dimension of ESnet security Security is now inherently a global problem As the entity with a global view of the network, ESnet has an important role in overall security 30 minutes after the Sapphire/Slammer worm was released, 75,000 hosts running Microsoft's SQL Server were infected. (“The Spread of the Sapphire/Slammer Worm,” David Moore (CAIDA & UCSD CSE), Vern Paxson (ICIR & LBNL), Stefan Savage (UCSD CSE), Colleen Shannon (CAIDA), Stuart Staniford (Silicon Defense), Nicholas Weaver (Silicon Defense & UC Berkeley EECS) )

45 ESnet and Cybersecurity
The Sapphire/Slammer worm infection hits, creating a traffic spike of almost 1 Gb/s on the ESnet backbone

46 ESnet and Cybersecurity
- ESnet protects itself and other sites – infected ESnet sites can be blocked, partially or completely
- ESnet can also come to the aid of an ESnet site with temporary filters on incoming traffic, etc., if necessary
- This is one of the very few areas where ESnet might participate directly in site security
  - the request must come from the Site Coordinator
  - not a substitute for good site security

47 ESnet and Cybersecurity
[Graph: ESnet-site border router traffic. The Sapphire/Slammer worm infection hits at approximately 9:30 PM PST, Friday night, 25 Jan 03; Slammer traffic flows to the site, and attacks coming from the site are blocked at the hub; ESnet applies filters at both the hub and the site to block the attack; the site responds.]

48 Outline
- Forward
- ESnet science drivers
- 30 second tutorial on networking
- ESnet physical and logical infrastructure
- Not just one network
- Services for science collaboration
- ESnet is fairly unique
- ESnet is complex in several dimensions
- Operating critical science mission infrastructure
- Asset management
- Future directions
- Conclusions

49 Asset Management
- The ESnet Asset Management System tracks all ESnet network and computing equipment throughout the country
- Approximately 270 assets at 50 locations in the US are tracked in a Remedy database
- "Cradle-to-Grave" asset movement tracking:
  - received equipment is documented in Sunflower (the LBL property database) and in Remedy
  - LBL shipping documents are created electronically
  - all assets are tracked through the carrier's tracking system
  - Return Merchandise Authorizations with vendors are set up and monitored
  - surplusing

50 Asset Management – e.g. first locations of 50 (from the Remedy database)
Location / # equip. items:
- ALBQ-HUB, Albuquerque, NM – 4
- ALLIED-KCP, Kansas City, MO – 5
- ANL, Argonne, IL – 3
- AOA-HUB, New York, NY – 8
- ATL-HUB, Forest Park, GA – 7
- BECHTEL-NV, N Las Vegas, NV – 2
E.g. the AOA Hub items (item / s/n or DOE # / name):
- Cisco 7206 – aoa-ar1
- Juniper M20 – 21050 – aoa-pr1
- ProTester – aoa-pt1
- Juniper T320 – 26745 – aoa-cr1
- misc. cables
- Netgear Hub
- STS – aoa-sts1
- Sentry 4820-XL8 – 219247
- Sentry 4870-XL-4 – 219432

51 Outline
- Forward
- ESnet science drivers
- 30 second tutorial on networking
- ESnet physical and logical infrastructure
- Not just one network
- Services for science collaboration
- ESnet is fairly unique
- ESnet is complex in several dimensions
- Operating critical science mission infrastructure
- Asset management
- Future directions
- Conclusions

52 Potential Future Capabilities are Continually Investigated
One of the biggest current problems is upgrading the site local loops so that sites are not starved for bandwidth relative to the backbone's circuit capacity

53 Potential Future Capabilities are Continually Investigated
- Optical Metropolitan Area Networks (MANs) are being investigated in the SF Bay and Chicago areas as an alternative to expensive local carriers for site connections
- An optical fiber ring is purchased or leased from a fiber provider that can reach the major sites (e.g. SLAC, LLNL, SNLL, LBNL, and NERSC in the SF Bay Area; FNAL, ANL, and StarLight in the Chicago area)
- A single connection is made from the ESnet core ring to the local ring, which avoids local telecomm carriers
- Probably only feasible in major metropolitan areas

54 Conclusions
- ESnet is an infrastructure that is critical to DOE's science mission and that serves all of DOE
- Focused on the Office of Science Labs
- Complex and specialized – both in the network engineering and the network management
- You can't go out and buy this – ESnet integrates commercial products and in-house software into a complex management system for operating the net
- You can't go out and take a class in how to run this sort of network – it is specialized and learned from experience
- Extremely reliable in several dimensions

55 The ESnet Team
- William E. Johnston – ESnet Manager
- Mike Collins – Network Engineering Services Group Lead
- Stan Kluz – Network Technical Services Group Leader
- Gizella Kapus – Acting Project Administrator
- KaRynn Kelly – ESnet Administrator

56 The ESnet Team: Network Engineering Services Group (NESG)
- Joe Burrescia – Deputy NESG lead; Multicast; Spectrum; Performance Centers
- Yvonne Hines – Peering Coordinator; DNS Assignments; v4/v6 Addressing; Publishing Stats
- Kevin Oberman – DNS Management; Config Management; Eng Tools; ESnet LAN support
- Chin Guok – Statistics & Metrics; Performance Centers; MRTG; Web Servers; Eng Tools
- Joe Metzger – Eng Web Servers; Eng Database; Dashboard; Eng Tools
- Mike O'Connor – Multicast; Spectrum; Eng Tools; ESnet LAN support

57 The ESnet Team: Network Technical Services Group (NTSG)
- Jim Gagliardi – Network Support Team lead & network on-call
- John Paul Jones – network on-call & ESnet LAN
- Chris Cavallo – network on-call & assets mgt
- Mark Redman – network on-call & config lab
- Clint Wadsworth – network on-call & collaboration
- Dan Peterson – network on-call, MS Windows & security
- Scott Mason – assets mgt, Windows

58 The ESnet Team: Unix, Database, and Collaboration Support
- John Webster – UNIX, team leader
- Don Varner – UNIX, AFS, security
- Ken Pon – UNIX, security
- Roberto Morelli – systems design
- Marcy Kamps – Web, Oracle, Remedy
- Mike Pihlman – collaboration

59 The ESnet Team: PKI Project
- Tony Genovese – Project Lead
- Mike Helm – Security Architect
- Dhivakaran Muruganantham (Dhiva) – Software Engineer

