Networking updates: CHECO


1 Networking updates: CHECO
Marla Meehl, UCAR/FRGP/BiSON/WRN/RMCMOA/RMRISC Manager, NETS/FRGP/BiSON

2 Acronym soup
University Corporation for Atmospheric Research (UCAR) / National Center for Atmospheric Research (NCAR)
Front Range GigaPoP (FRGP)
Bi-State Optical Network (BiSON)
Western Regional Network (WRN)
Rocky Mountain Cyberinfrastructure Mentoring and Outreach Alliance (RMCMOA)
Rocky Mountain Regional Internet Security Collaboration (RMRISC)
Questions and discussion throughout as needed. As for acronym soup ingredients, please ask if I miss one, or check the decoder page at


4 Current FRGP voting participants - 11
Colorado School of Mines
Colorado State University - Fort Collins
National Oceanic and Atmospheric Administration - Boulder
State of Colorado
University Corporation for Atmospheric Research (UCAR/NCAR)
University of Colorado - Boulder
University of Colorado - Colorado Springs
University of Colorado - Denver
University of Denver
University of Northern Colorado
University of Wyoming

5 FRGP participants – non-voting - 20
Auraria Higher Education Center
Colorado College
Colorado Community College System
City of Boulder
Colorado Department of Higher Education
Colorado Mountain College
Colorado State University - Pueblo
Colorado Telehealth Network
Community College of Denver
Educational Access Gateway Learning Environment Network
Johnson & Wales University
Fort Lewis College
I2-USDA
JeffcoSD
Metropolitan State University of Denver
University Information Systems (CU system)
NEON
NREL
State of Wyoming
University NAVSTAR Consortium (UNAVCO)

6 Front Range GigaPoP (FRGP)
Continued and expanded LANDER support and partnership
perfSONAR in the FRGP continues to expand and be a valuable tool
31 participants
CU-CS, CC, CSU-P, JeffcoSD - fully active on BiSON and FRGP at 10G
USGS - in discussions on fiber and TIC options
SHEPC - in discussions on fiber build
City of Golden - in discussions on fiber connectivity
Pikes Peak Library District (PPLD) - pending approval

7 FRGP Cont'd
Adding NOAA TIC colo - May 2016
CDOT fiber spliced to expand BiSON ring to Golden
910 15th: Level3 colo agreement, including cross connects, signed
Level3 fiber leases: all paths renewed for 10 years (Boulder/Denver, DREAM, Boulder/Longmont, Center Green)
Continued steady growth in all traffic (peering, commodity, I2, caching, etc.)
Hiring two Network Engineer IIs

8 FRGP Cont'd - Peering
Google - 2x10G running full, so may increase in the fall
Comcast - little progress on 10G and peering
TR/CPS - still the largest single FRGP usage
Akamai - continues high utilization; upgrading servers
Netflix - continues high utilization
Coresite/Any2 - upgraded to 2x10G
Direct peering with Syringa
Open to other peering opportunities
Member port maximum and 95th percentiles
Adding aggregate port size comparison
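The "member port maximum and 95th percentiles" reporting mentioned above is conventionally computed from periodic (e.g. 5-minute) traffic samples: sort the samples, discard the top 5%, and report the highest remaining value, so short bursts don't dominate. A minimal sketch of that nearest-rank calculation; the sample values here are invented purely for illustration:

```python
def percentile_95(samples_mbps):
    """Return the 95th-percentile value of a list of traffic samples.

    Sorts the samples and discards the top 5% (the usual burstable-
    billing style calculation), reporting the highest remaining value.
    """
    if not samples_mbps:
        raise ValueError("need at least one sample")
    ordered = sorted(samples_mbps)
    # Index of the 95th-percentile sample (nearest-rank method).
    idx = max(0, int(len(ordered) * 0.95) - 1)
    return ordered[idx]

# Hypothetical month of samples: mostly ~400 Mbps with a few short bursts.
samples = [400] * 95 + [900, 950, 980, 990, 1000]
print(percentile_95(samples))   # -> 400: bursts above the 95th percentile are ignored
print(max(samples))             # -> 1000: the port maximum still shows the peak
```

This is why the slide tracks both numbers: the maximum shows whether a port is ever saturated, while the 95th percentile reflects sustained load.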


10 Statistics 2015
FRGP commodity traffic 2007-2015
FRGP peering traffic
FRGP "Other" traffic
FRGP total traffic (including caching)

11 Bi-State Optical Network (BiSON)
Total BiSON participant shares = 9: CC (.167), CSM (1), CSU-FC (1), CSU-P (.167), CU-B (1), CU-CS (.167), FRGP (.5), I2-USDA (.5), JeffcoSD (1), NREL (.5), NOAA (1), UCAR (1), UW (1)
Expanded BiSON to:
Jefferson County School District - full ring complete
CSM - full ring complete
CU-UIS - in design discussions
ADVA 10G research wave for CSU-P - pending installation
UCAR NWSC 100G wave on order
CDOT Higher Ed agreement in review: US 36, US 93, I25 North, I25 CSU Ag, I25 South/US 50
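The "= 9" above is the sum of the fractional shares listed, not a count of the names (there are 13 named participants). A quick check, with the shares transcribed from the list:

```python
# BiSON participant shares as listed on the slide
# (.167 is a one-sixth share, rounded on the slide).
shares = {
    "CC": .167, "CSM": 1, "CSU-FC": 1, "CSU-P": .167, "CU-B": 1,
    "CU-CS": .167, "FRGP": .5, "I2-USDA": .5, "JeffcoSD": 1,
    "NREL": .5, "NOAA": 1, "UCAR": 1, "UW": 1,
}
print(len(shares))                     # -> 13 named participants
print(round(sum(shares.values()), 2))  # -> 9.0 total shares
```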


14 Internet2
Marla co-chairing I2 Diversity Initiative - a session planned at the I2 Global Summit
Survey out to CIOs
Working on expanding the effort - webinars, workshops, blogs
Awards for Global Summit announced - CSM awarded
Non-Member NET+ Program
Webinar recording
Slides

15 Internet2 Tempe Future Strategies Meeting
A meeting of the Internet2 connector/network member principals to discuss the strategic direction of Internet2 Network Services
Level setting about what the community faces in the next two years
What the community will need through 2023
Marla and Jeff attended
The following themes were identified from RON papers:
Improve network security and operations
Member engagement and membership expansion
Enhance research support
Expand the network and improve general broadband access
Deliver shared/collaborative services

16 WRN
WTC and WPC calls as needed
100G I2 ports in LA, Seattle, and Chicago
Investigating further commodity sharing/aggregation
Considering full 10G to Level3
Make sure TSIC and Level3 can back each other up
Pacific Wave expanded to include WRN; Pete attended meeting to discuss expansion


19 The Quilt
The Quilt continues to grow: now at 46 members (36 participants, 10 affiliates)
Next CIS RFP 2016
Authorized Quilt Providers (AQPs) from the 2016 CIS RFP: CenturyLink, Cogent, Level 3, NTT America, TATA Communications, Telia Carrier (formerly TeliaSonera International Carrier), Verizon

20 Quilt items
Quilt strategic relationships:
Convened multiple meetings in Washington DC among Quilt leaders and the FCC
Ongoing relationship building with NSF
FCC, NTIA, and NSF all had reps attend the last Quilt member meeting
ESnet and NSF colocated meetings with the Quilt in Austin in September - very successful
Quilt security working group:
Quilt submitted regional security proposal to NSF
Best practices

21 EAGLE-Net
Partnered with Zayo to operate and manage the network
support-network-for-colorados-eagle-net-alliance-2/
Signed network operator agreement with Zayo (11/30/15)
FRGP working in partnership on fiber and service opportunities
Decommissioned Deproduction/Open Media Foundation
Transitioning ECBOCES to EAGLE-Net
Zayo fiber opportunities
E-rate

22 Rocky Mountain Cyberinfrastructure Mentoring and Outreach Alliance (RMCMOA)
NSF CC*IIE Grant
PIs/Organizations: Colorado State University (Pat Burns), UCAR (Marla Meehl), University of Colorado (Thomas Hauser), University of Utah (Joe Breen), IRON (Michael Guryan, Senior Personnel)

23 RMCMOA Cont'd
Third RMCMOA workshop held in conjunction with the Westnet meeting, 1/5/16
Agenda with presentations
Well attended, and well received per the survey
Final RMCMOA workshop 8/9-8/11 at CSU, in conjunction with RMACC - technical engineering focus
Submitted SC '15 women funding supplement - WINS (more on that later)
"ACI's plan remains to have another CC* solicitation in I regret not being able to discuss an accurate estimate of a release timeframe, and I think the community will be asked to have just a little more patience with the process."

24 CC*DNI Related Proposals Submitted - 3/24/15
Boise State - CI Engineer - AWARDED
Boise State - Network Infrastructure - AWARDED
CSM - CI Engineer - AWARDED
CSU-P - Network Infrastructure - AWARDED
NMSU - CI Engineer - AWARDED
Operating Innovative Networks (OIN) - ESnet, I2, IU - AWARDED
UH Mauna Loa - AWARDED
UNM - Regional - AWARDED
UW - CI Engineer - AWARDED
CU-Boulder/Wyoming/NCAR DIBBS - NOT AWARDED

25 Women in IT Networking at SC (WINS)
Submitted 3-year follow-on unsolicited proposal to NSF
Fund at least 5 women each year for 3 years
Encourage awardees to attend in following years
Attempt to acquire SC or vendor funding for returning awardees
Build a pipeline that becomes self-sustaining
Invitations to submit coming out soon
Please consider having women in your organization apply

26 Rocky Mountain Regional Internet Security Collaboration (RMRISC)
NSF CICI Regional Security Proposal
PIs/Organizations: Colorado State University (Scott Baily/Christos Papadopoulos), UCAR (Marla Meehl/Scot Colburn/Paul Dial/John Hernandez), University of Colorado (Thomas Hauser/Erin Shelton), University of Utah (Joe Breen), University of Wyoming (Dane Skow/Matt Kelly)
Partners: CSM, ESnet, I2, The Quilt, UETN

27 Rocky Mountain Regional Internet Security Collaboration (RMRISC)
During the two-year award term, the project team will conduct at least one regional workshop per year focused on securing network infrastructure. The team will lead, or partner with The Quilt, the Energy Sciences Network (ESnet), and Internet2 to present, quarterly focused security webinars for the region. It will provide resources for research and education sites by developing online materials covering security best practices, tools, and other resources. On request, the team will conduct up to eight security reviews and assessments during the project. The team will also work closely with researchers at CSU to provide a real-world testbed for deploying newly developed data analytics tools to detect, alert on, and divert attacks.

28 Cheyenne (NWSC-2) HPC System Details
Cheyenne will be built by Silicon Graphics International Corp. (SGI), with centralized file system and data storage components provided by DataDirect Networks (DDN)
5.34-petaflops cluster
4,032 2-socket nodes with 2.3 GHz Intel processors
Total of 145,152 cores (36 cores/node)
864 nodes with 128 GB of memory and 3,168 with 64 GB
313 TB total memory
100 Gbps EDR (Enhanced Data Rate) InfiniBand interconnect
Linux operating system
Workload Manager
Intel Parallel Studio XE compiler suite
Cluster Management
Fabric Manager
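The node, core, and memory figures above are internally consistent; a quick sanity check using the numbers from the slide:

```python
# Cheyenne (NWSC-2) figures as listed above.
nodes_128gb, nodes_64gb = 864, 3168
cores_per_node = 36

total_nodes = nodes_128gb + nodes_64gb
total_cores = total_nodes * cores_per_node
# Total memory in decimal TB (1 TB = 1000 GB), matching the slide's "313 TB".
total_mem_tb = (nodes_128gb * 128 + nodes_64gb * 64) / 1000

print(total_nodes)          # -> 4032 nodes
print(total_cores)          # -> 145152 cores
print(round(total_mem_tb))  # -> 313 TB total memory
```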

29 GLADE (NWSC-2) PFS Details
Key features of the new data storage system:
The NCAR Globally Accessible Data Environment (GLADE)
Over 20 petabytes of usable file system space
Can be expanded to over 40 petabytes by adding drives
200 gigabytes per second aggregate I/O bandwidth
3,360 x 8 TB NL-SAS drives
48 x 800 GB mixed-use SSD drives for metadata
24 NSD (Network Shared Disk) servers
Red Hat Enterprise Linux operating system
IBM GPFS file system

30 GLADE (NWSC-2) PFS Details
The new data storage system will be integrated with NCAR's existing GLADE file system
It will provide an initial total capacity of over 36 petabytes (20 PB new and 16 PB existing GLADE space), expandable to over 56 petabytes with the addition of extra drives
The new disk system will transfer data at 200 gigabytes per second, more than twice the current file system's rate of 90 gigabytes per second
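The capacity and bandwidth claims above can be cross-checked directly from the slide's figures:

```python
# GLADE augmentation figures as listed above.
new_pb, existing_pb = 20, 16   # new DDN system + existing GLADE, in PB
expansion_pb = 40              # new system is expandable to 40 PB
new_bw, old_bw = 200, 90       # aggregate I/O bandwidth, GB/s

print(new_pb + existing_pb)    # -> 36 PB initial total
print(expansion_pb + existing_pb)  # -> 56 PB expanded total
print(round(new_bw / old_bw, 2))   # -> 2.22x, i.e. "more than twice as fast"
```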

31 High Bandwidth Low Latency HPC and I/O Networks
Current NWSC HPC architecture (diagram): Yellowstone (1.50 PFLOPS peak); Geyser, Caldera, Pronghorn DAV clusters; GLADE central disk resource (16.4 PB, 90 GB/s); high-bandwidth, low-latency HPC and I/O networks (FDR InfiniBand and 10Gb Ethernet); NCAR HPSS archive (160 PB capacity, 51 PB current usage, ~12 PB/yr growth); science gateways (RDA, ESG); data transfer services (10Gb Ethernet); remote vis; partner sites; XSEDE sites
Our current (familiar) environment is a 1.5 petaflop system (Yellowstone), with the 16 petabyte GLADE file system, backed up by the HPSS archive.

32 High Bandwidth Low Latency HPC and I/O Networks
NWSC HPC architecture in January 2017 (diagram): Cheyenne (NWSC-2, ~5.34 PFLOPS peak); Yellowstone (1.50 PFLOPS peak); Geyser, Caldera, Pronghorn DAV clusters; GLADE central disk resource (36 PB, 200 GB/s); high-bandwidth, low-latency HPC and I/O networks (EDR/FDR InfiniBand and 40Gb Ethernet); NCAR HPSS archive (240 PB capacity, 66 PB expected usage, ~42 PB/yr growth); science gateways (RDA, ESG); data transfer services (10Gb Ethernet); remote vis; partner sites; XSEDE sites
By January 2017, we plan to put into production the NWSC-2 systems:
a. The new ~5 petaflop NWSC-2 supercomputer
b. A ~20 petabyte augmentation of GLADE, with 200 GB/sec aggregate I/O bandwidth to the NWSC-2 supercomputer (over 2x what's currently available with Yellowstone)
One year of overlap with Yellowstone
Other changes:
* Added EDR InfiniBand network
* Upgraded 10Gb Ethernet network to 40Gb Ethernet
* Increased storage capacity and increased storage bandwidth

33 Multi-Purpose Unified LAN Ethernet (MULE) Switch
Juniper QFX10008 Switch
Investigated other vendors and chose Juniper because:
We are largely a Juniper shop
Supports GE ports in the 8-slot QFX10008
The QFX line supports 16 slots, but that's more than we needed
Happy with the buffer capacity, which should give over 50 ms of line-rate buffer on all ports
The QSFP28 optic should be more standard, dense, and flexible than the menagerie of optics offered by other vendors

34 Multi-Purpose Unified LAN Ethernet (MULE) Switch
Juniper QFX10008 Switch
MULE serves as a bridge between the EDR and FDR InfiniBand networks
QFX10008 modular switch with 8-slot capacity can host GE or GE ports
Modules support QSFP and QSFP28 optics
Modules: 30 x 100GE (QSFP28) or 36 x 40GE (QSFP) / 12 x 100GE or 60 x 10G (SFP+)
Will use three 36x40GE modules and QSFP 40GE optics with four CWDM 10G lanes
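The planned configuration above (three 36x40GE line cards, each 40GE port carried as four CWDM 10G lanes) implies the following aggregate capacity; a quick check using only the figures from the slide:

```python
# Planned MULE configuration as listed above.
modules = 3            # three 36x40GE line cards
ports_per_module = 36  # 40GE ports per card
port_speed_gbps = 40
lanes_per_port = 4     # each QSFP 40GE optic carries four CWDM 10G lanes

total_ports = modules * ports_per_module
print(total_ports)                    # -> 108 40GE ports
print(total_ports * port_speed_gbps)  # -> 4320 Gb/s aggregate line rate
print(total_ports * lanes_per_port)   # -> 432 individual 10G lanes
```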


37 Miscellaneous
UCAR Presidential search still in progress
Implemented Aruba wireless AP system
Investigating ClearPass for cert assignment
NETS plans to use Aruba's ClearPass(TM) in several roles:
A Certificate Authority that issues X.509 certificates to authenticate devices using EAP-TLS. This is best documented in the Clearpass-Guest-User-Guide-6.5.0, in the "Onboard" section.
A RADIUS server that allows 802.1X authentication to Aruba Instant wireless Access Points via EAP-TLS. Configuration for this is described in the ClearPass Policy Manager 6.5 User Guide.
CPPM, as ClearPass Policy Manager is often called, can be an OCSP (Online Certificate Status Protocol) server, so certificates can be revoked with relative ease.

38 Discussion
Any follow-up questions or feedback? Other items?

