The extension of optical networks into the campus
Wade Hong, Office of the Dean of Science, Carleton University

Outline
- Motivation
- CA*net 4 IGT
- From the Carleton U Perspective
- Issues
- Lessons Learned

Motivation
- Large-scale distributed scientific experiments (LHC - ATLAS, SNOLab, Polaris, NEES Grid, ...)
- Access to regional distributed HPC resources (HPCVL, SharcNet, WestGrid, TRIUMF Tier 1.5, ...)
- Federating growing research-based computing resources on campus
- Allowing the end users to access these resources in an unencumbered way
- CA*net 4 customer-empowered networking in the last mile

CA*net 4 IGT
- CANARIE-funded directed research project
- Build a testbed to experiment with customer-empowered networking, pt2pt optical networks, network performance, long-haul 10 GbE, UCLP, last-mile issues, etc.
- Participants from the HEP community across Canada, the provincial ORANs, CERN, StarLight, SURFnet, and potentially others
- Set up end-to-end GbE and 10 GbE lightpaths between institutions in Canada and CERN

CA*net 4 Network

CA*net 4 IGT Sites

CA*net 4 IGT
- Interoperability testing with 10 GbE WAN PHY and OC-192
- Used IXIA traffic generators to characterize the trans-atlantic link
- Transferred real experimental data from ATLAS FCAL beam tests (GbE and 10 GbE)
- Demonstrated native end-to-end 10 GbE between CERN and Ottawa for ITU Telecom World 2003
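Filling a trans-atlantic 10 GbE path with a single TCP flow requires socket buffers sized to the bandwidth-delay product. A minimal sketch of that arithmetic, assuming an illustrative 10 Gbps line rate and ~100 ms round-trip time (neither figure is a measurement from the testbed):

```python
# Bandwidth-delay product sizing for a long fat network.
# The link rate and RTT below are illustrative assumptions,
# not measurements from the IGT testbed.

def bdp_bytes(rate_bps: float, rtt_s: float) -> float:
    """Bytes in flight needed to keep the pipe full."""
    return rate_bps * rtt_s / 8

rate = 10e9   # assumed 10 GbE line rate, bits/s
rtt = 0.100   # assumed trans-atlantic RTT, seconds

window = bdp_bytes(rate, rtt)
print(f"Required TCP window: {window / 1e6:.0f} MB")
# ~125 MB -- far beyond default TCP buffer sizes, so both
# endpoints need their socket buffers raised accordingly.
```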

Planned CA*net 4 IGT Activities
- Complete the last-mile connectivity for most of the participating Canadian sites
- Third OC-192 across Canada being brought up using Nortel OME 6500s
- Continuing long-haul native 10 GbE experiments (Foundry MG8s): TRIUMF to CERN, TRIUMF to Carleton, Carleton to CERN, CERN to Tokyo via Canada
- HEPiX Robust Transfer Challenge: sustained disk-to-disk transfers between TRIUMF and CERN

Planned CA*net 4 IGT Activities
- Real-time remote farms for ATLAS: CERN to U of Alberta
- Data transfer of End Cap Calorimeter data from the combined beam tests to several Canadian sites: one beam test just completed (~1 TB); second test to start late August (significantly more data)
- Transfer of CDF MC data from the Big Mac cluster: establish a GbE lightpath between UofT and FermiLab

Planned CA*net 4 IGT Activities
- Experimentation with bulk data transfer (sketch after this slide)
- Investigating RDMA/IP (sourcing NICs)
- Establish GbE lightpaths between Canadian sites
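As a rough illustration of the socket-level end of bulk transfer (RDMA/IP offload would replace this path entirely), a minimal sender tuned for a dedicated lightpath; the peer address, port, file name, and buffer sizes are hypothetical:

```python
# Minimal bulk-transfer sender over a dedicated pt2pt lightpath.
# Peer address, port, file name, and buffer sizes are illustrative
# assumptions; RDMA/IP (sourcing NICs) would bypass this socket path.
import socket

PEER = ("10.10.0.2", 5001)   # hypothetical non-routed private address
CHUNK = 4 * 1024 * 1024      # 4 MiB application writes

def send_file(path: str) -> None:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Set the send buffer before connecting so TCP window scaling
    # is negotiated large enough to keep a long fat pipe full
    # (see the bandwidth-delay product sizing earlier).
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 128 * 1024 * 1024)
    s.connect(PEER)
    with s, open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            s.sendall(chunk)

send_file("beam_test_run.dat")   # hypothetical data file
```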

Carleton University
- Located in Ottawa, the nation’s capital, at the southern end of the world’s longest outdoor skating rink
- Canada’s Capital University
- Student population of 22,000; 1,700 faculty and staff
- Over $100M in research funding in the past year; CFI contribution significant, about half to Physics
- Bill St. Arnaud’s alma mater

Carleton University

External Network Connectivity
- Commodity Internet: Telecom Ottawa - was the largest metro 10 GbE deployment
- R&E traffic: finally connected to ORION (Dec 2003), the new ORAN, just prior to the decommissioning of ONET
- EduNet: non-profit, OCRI-managed dial-up and high-speed Internet for higher education institutions in Ottawa; the dial-up ISP has a dedicated link back to campus

Carleton U Network Upgrade
- Campus has been planning a network upgrade for the past 3 to 4 years; several false starts
- Application to funding agencies based on the requirements of research activities; may have missed the window of opportunity
- Finally proceeding with the network upgrade; RFPs currently being evaluated

Network Upgrade Proposal
- Original proposal:
  - Phase one (Year 1): build the campus core network
  - Phase two (Year 2): build the distribution layer
  - Phase three (Year 3): rewire the buildings for access
- Not my preferred ordering!

Proposed Topology

Differing Viewpoints
- Debate over how to handle high-capacity research traffic flows: the necessity of routing traffic through the proposed high-capacity campus core
- On the other hand, optical bypasses would reduce the complexity and cost of the campus network; 4 fibre pairs between Herzberg Laboratories and Robertson Hall cost about $4K CDN - we prevailed
- Reality check: the current campus network cannot handle the high-volume, high-speed flows

Motivations Revisited
- Large-scale distributed scientific experiments

Motivations Revisited
- Access to regional distributed HPC resources: other HPCVL sites (Queens, UofO, RMC, Ryerson U), the TRIUMF ATLAS Canada computing centre, SNOLab
- Shared ORION and CA*net 4 connectivity is only at GbE; high-capacity flows probably dictate a pt2pt optical bypass (rough transfer-time arithmetic after this slide)
- Interconnectivity can be static or dynamic: fully statically meshed, or scheduled dynamic connectivity on demand - probably the latter
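To see why shared GbE is limiting, a back-of-the-envelope sketch of transfer times for a ~1 TB dataset; the achievable fractions of line rate are assumptions for illustration, not measurements:

```python
# Rough transfer-time arithmetic for a ~1 TB dataset (comparable to
# the FCAL beam-test data). Utilization figures are assumed.

DATASET = 1e12  # bytes (~1 TB)

def hours(rate_bps: float, utilization: float) -> float:
    return DATASET * 8 / (rate_bps * utilization) / 3600

# Shared GbE, competing with other campus/ORAN traffic (assumed 30%).
print(f"shared GbE:    {hours(1e9, 0.3):5.1f} h")
# Dedicated GbE lightpath (assumed 90% of line rate).
print(f"GbE lightpath: {hours(1e9, 0.9):5.1f} h")
# Dedicated 10 GbE lightpath (assumed 90% of line rate).
print(f"10 GbE path:   {hours(1e10, 0.9):5.1f} h")
```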

Motivations Revisited
- Federating growing research-based computing resources into a campus grid:
  - HPCVL Linux cluster upgrade ( CPUs)
  - Physics research cluster upgrade ( CPUs)
  - Civil Engineering (~128 CPUs)
  - Architecture/Psychology visualization cluster (>128 CPUs)
  - Systems and Computer Engineering ( 64 CPUs)
- Debating a condominium or distributed model; most likely a hybrid with optical fibre as the interconnecting fabric
- Probably static pt2pt optical bypass for ease of use and user control

Motivations Revisited
- Federated the Physics research computing cluster with part of the HPCVL Linux cluster last summer for about 2 months
- Clusters located on different floors; a pt2pt link was established - much easier than routing through the campus network
- Completed half of the MC regeneration for the third SNO paper
- Similar arrangement this summer to add part of the HPCVL cluster to the Carleton U Physics contribution to the LHC Computing Grid, until the end of the year

Issues
- Control: central management and control vs end-user empowerment - disruptive
- Network complexity: using pt2pt Ethernet links for high-capacity flows should simplify campus networks (reduce costs?)
- Security: disruptive - bypassing the DMZ; for the uses considered here, the pt2pt links are inherently secure - non-routed private subnets
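One way to picture "non-routed private subnets": each pt2pt link gets its own RFC 1918 /30 that is never advertised to the campus routers. A small sketch with illustrative block and link names:

```python
# Addressing-plan sketch for non-routed pt2pt lightpath links.
# The private block and link names are illustrative assumptions.
import ipaddress

block = ipaddress.ip_network("10.10.0.0/24")  # assumed private block
links = ["physics-hpcvl", "carleton-triumf", "carleton-cern"]

for name, subnet in zip(links, block.subnets(new_prefix=30)):
    a, b = list(subnet.hosts())   # the two endpoints of the link
    print(f"{name}: {a} <-> {b}  ({subnet})")
```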

Issues
- Why not copper? It could be, but with fibre:
  - Greater distances; fewer active devices along the path
  - Management and control - a device at each end under the control of the end users is ideal
  - Consistent device characteristics - jumbo frames, port speed, duplex, etc. (see the MTU probe after this slide)
  - Inter-building connectivity is fibre, and the planned vertical cabling will be fibre
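To check that a pt2pt link is consistently configured for jumbo frames, one can ask the kernel for its MTU estimate toward the peer. A Linux-only sketch; the peer address is a hypothetical link endpoint and the numeric constants are the standard Linux socket-option values:

```python
# Linux-only sketch: read the kernel's MTU estimate for the route
# to a pt2pt peer, to verify jumbo-frame (MTU ~9000) configuration.
# The peer address is a hypothetical link endpoint.
import socket

IP_MTU_DISCOVER, IP_PMTUDISC_DO, IP_MTU = 10, 2, 14  # Linux constants

def link_mtu(peer: str, port: int = 9) -> int:
    """Return the kernel's current MTU estimate for the route to peer."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    s.connect((peer, port))   # UDP connect: picks the route, sends nothing
    return s.getsockopt(socket.IPPROTO_IP, IP_MTU)

print(link_mtu("10.10.0.2"))  # expect ~9000 on a jumbo-clean path
```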

Issues
- Last-mile connectivity
- Demarcation point: an end-user device (NIC) or an edge device (switch, CWDM mux); location of the demarc at the end user or a common shared location
- Technology used to extend the end-to-end lightpath into the campus:
  - pt2pt GbE: optical GbE NIC - patched through to a GbE interface on the ONS; media converter - copper to optical

Issues
- pt2pt 10 GbE: LAN PHY to WAN PHY; conversion to OC-192c on ONS 15454/OME 6500
- Wavelength conversion: CWDM media converters - copper to colored wavelength; colored GBICs for a GbE switch
- Optical link characteristics: padding (attenuation), proper power budget, etc. (worked example below) - the end user shouldn’t need to be an optical networking expert
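The power-budget bookkeeping behind "padding (attenuation)" is simple arithmetic: launch power minus fibre and connector losses must land between the receiver's overload ceiling and its sensitivity floor. A sketch with illustrative figures (not specs for any particular GBIC or fibre plant):

```python
# Optical power-budget sketch. All figures are illustrative
# assumptions, not specs for any particular transceiver or fibre.

TX_POWER_DBM = -3.0          # assumed transmitter launch power
RX_OVERLOAD_DBM = -7.0       # assumed receiver overload ceiling
RX_SENSITIVITY_DBM = -21.0   # assumed receiver sensitivity floor

FIBRE_LOSS_DB_PER_KM = 0.35  # assumed SMF loss at 1310 nm
CONNECTOR_LOSS_DB = 0.5      # assumed loss per mated connector pair

def received_power(km: float, connectors: int) -> float:
    loss = km * FIBRE_LOSS_DB_PER_KM + connectors * CONNECTOR_LOSS_DB
    return TX_POWER_DBM - loss

rx = received_power(km=2.0, connectors=4)   # short campus run
print(f"received power: {rx:.1f} dBm")
if rx > RX_OVERLOAD_DBM:
    # Too hot for the receiver on a short run: insert a pad
    # (fixed attenuator) -- this is the "padding" above.
    print(f"pad needed: >= {rx - RX_OVERLOAD_DBM:.1f} dB")
elif rx < RX_SENSITIVITY_DBM:
    print("link exceeds the power budget")
```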

Lessons Learned
- Good to be rich in fibre - provides greater flexibility
- Support of the ORANs, the national R&E network, and international partners is essential - all have been very supportive
- Need to convince local campus networking folks that this is not really too disruptive - it will simplify, not burden, the campus production network
- Need a more coherent way of dealing with optical access in the last mile
- Still lots to learn!

Thank You! Wade Hong