INDIANA UNIVERSITY
High Performance Networking for Colleges and Universities: From the Last Kilometer to a Global Terabit Research Network
TERENA Networking Conference 2001, Antalya, Turkey
Michael A. McRobbie PhD, Vice President for Information Technology and Chief Information Officer
Steven Wallace, Chief Technologist and Director, Advanced Network Management Laboratory, Indiana Pervasive Computing Research Initiative

Network-Enabled Science and Research in the 21st Century
Science and research are becoming progressively more global, with network-enabled worldwide collaborative communities rapidly forming in a broad range of areas
Many are based around a few expensive – sometimes unique – instruments or distributed complexes of sensors that produce vast amounts of data
These global communities will carry out research based on this data

Network-Enabled Science and Research in the 21st Century
This data will be:
– collected via geographically distributed instruments
– analyzed by supercomputers and large computer clusters
– visualized with advanced 3-D display technology, and
– stored in massive data storage systems
All of this will be distributed globally

Examples of Network-Enabled Science
The NSF-funded Grid Physics Network's (GriPhyN) need for petascale virtual data grids (i.e. capable of analyzing petabyte datasets):
– Compact Muon Solenoid (CMS) and A Toroidal LHC Apparatus (ATLAS) experiments using the Large Hadron Collider (LHC) at CERN [> 2.5 Gb/s]
– Laser Interferometer Gravitational-Wave Observatory (LIGO) [200 GB - 5 TB data sets needing 2.5 Gb/s or greater for reasonable transfer times]
– Atacama Large Millimeter Array (ALMA)
Collaborative video (e.g. HDTV) [20 Mb/s]
Sloan Digital Sky Survey (SDSS) [> 1 Gb/s]
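The bandwidth requirements above follow directly from the dataset sizes: even over a dedicated 2.5 Gb/s path, moving the larger LIGO data sets takes hours in the ideal case. A back-of-envelope sketch in Python, ignoring protocol overhead:

    def transfer_hours(dataset_bytes, link_bits_per_second):
        """Ideal transfer time: no protocol overhead, no competing traffic."""
        return dataset_bytes * 8 / link_bits_per_second / 3600

    print(transfer_hours(5e12, 2.5e9))    # ~4.4 hours for a 5 TB LIGO data set
    print(transfer_hours(200e9, 2.5e9))   # ~0.2 hours (~11 minutes) for 200 GB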

A Vision for 21st Century Network-Enabled Science and Research
The vision is for this global infrastructure and data to be integrated into "Grids" – seamless global collaborative environments tailored to the specific needs of individual scientific communities

Components of Global Grids
High performance networks are fundamental to integrating Global Grids
There are, very broadly speaking, three components to Global Grids:
– Campus Networks (the last kilometer)
– National and Regional Research and Education Networks (NRRENs)
– Global connections between NRRENs

Impediments to Global Grids
Of these three components, on a world-wide scale, investment and engineering are adequate only for NRRENs
– Campus networks rarely provide scalable bandwidth to the desktop commensurate with the speeds of campus connections to NRRENs
– Global connections between NRRENs are major bottlenecks – they are very slow compared to NRREN backbone speeds

Building Global Grids
Building a true Global Grid requires:
– Scalable campus networks providing ubiquitous high-bandwidth connections to every desktop, commensurate with campus connections to NRRENs
– Global connectivity between NRRENs at speeds comparable to the NRREN backbones, and which is stable, persistent and of production quality like the NRRENs themselves

Presentation Overview
This talk describes:
1. Some NRRENs and their common characteristics
2. "Grid Ready" campus networks, with Indiana University's network architecture and management of IT as an example
3. A solution to the global connectivity problem that scales to a terabit global research network

1. NRRENs
Abilene
– OC48 -> OC192
– OC48 connected GigaPoPs (moving to min. OC12)
– ITN provider
– 1 Gb/s sustained data rates seen
CAnet3
US Federal networks (e.g. ESnet)
DANTE -> GEANT
APAN
CERNET

NRRENs
OC48 (2.4 Gb/s) backbone implemented today
– Moving to OC192 (9.6 Gb/s) as the next evolution
Institutions access the backbone at OC12 or greater (a few connections at OC48)
Native high-speed IPv4
Support for IPv6 (but at much lower performance due to router constraints)
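For reference, SONET OC-n designations scale linearly at 51.84 Mb/s per OC-1; the 2.4 and 9.6 Gb/s figures quoted above are approximate usable rates. A quick sketch of the line rates:

    OC1 = 51.84e6   # SONET OC-1 line rate, bits per second

    for n in (12, 48, 192, 768):
        print(f"OC-{n}: {n * OC1 / 1e9:.2f} Gb/s")
    # OC-12: 0.62   OC-48: 2.49   OC-192: 9.95   OC-768: 39.81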

NRRENs Advanced Services
– Typically run as open (visible) networks, not a commercial service
– IP multicast deployed in most backbones, but still not as production-quality as unicast, and not reliable internationally
– QoS – mixed results, still in its infancy; very little going across more than one network
Intra-regional interconnect speeds range from OC3 to OC12, soon to be OC48 in some cases
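As an illustration of why multicast matters to end sites, a receiving host only has to join a group and the network replicates the stream toward it. A minimal Python sketch, with a hypothetical group address and port:

    import socket

    GROUP = "239.1.2.3"   # hypothetical administratively scoped test group
    PORT = 5004

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # ip_mreq: group address followed by the local interface (0.0.0.0 = any);
    # the join is signalled to the local router via IGMP
    mreq = socket.inet_aton(GROUP) + socket.inet_aton("0.0.0.0")
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    data, sender = sock.recvfrom(65535)   # blocks until a datagram arrives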

2.1 Campus Networks: the Critical Last Kilometer
Grid applications will require "guaranteed" multi-Mbps bandwidth per application, end to end
That is, these speeds must be sustained from the desktop, through the various levels of the campus network, to the NRREN and then on to the application target
Thus campus networks must be architected to provide scalable levels of connectivity to NRRENs as NRREN speeds increase
It makes no sense to have a shared 10 Mbps desktop connection (delivering at most about 1 Mbps) into a 10,000 Mbps (10 Gbps) NRREN!
Providing appropriate levels of connectivity to the desktop, and scalable campus network architectures to support them, is a top priority in IT strategic planning at US universities
A sizable portion of telecommunications budgets can go to this (e.g. $25M at Indiana University in 00/01)

Grid-Ready Campus Network Architectures
The three main levels of campus network architecture are:
– Desktop/intra-building (the last 100 meters)
– Inter-building (connecting groups of buildings)
– Backbone (connecting those groups)
Each level must provide progressively more capacity, and the whole architecture must be scalable
Campus networks commonly have two external connections:
– Commercial Internet
– Regional gigaPoPs (to NRRENs)

Campus Networking (the last 100 meters)
10 Mb/s switched to the desktop
– Adequate for VHS-quality video distribution, video conferencing, and digital library content such as CD-quality music
– Gigabyte datasets transferred in 15 minutes
100 Mb/s switched to the desktop
– Cost of 10/100 Mb switch ports less than $50 USD
– Gigabyte datasets transferred in 2 minutes
– Suitable for HDTV distribution
1000 Mb/s switched to the desktop, now practical over Cat5 copper
– Cost of 100/1000 Mb switch ports less than $500 USD
– Terabyte datasets transferred in 2.5 hours
Fiber to the desktop probably reserved for 10 Gb/s and beyond, but necessary between buildings
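The transfer-time figures above are straightforward arithmetic on ideal, uncontended links; a quick check in Python:

    def transfer_minutes(dataset_bytes, link_bits_per_second):
        """Ideal transfer time, ignoring protocol overhead and contention."""
        return dataset_bytes * 8 / link_bits_per_second / 60

    print(transfer_minutes(1e9, 10e6))        # ~13 min: a gigabyte at 10 Mb/s
    print(transfer_minutes(1e9, 100e6))       # ~1.3 min: a gigabyte at 100 Mb/s
    print(transfer_minutes(1e12, 1e9) / 60)   # ~2.2 h: a terabyte at 1 Gb/s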

Grid Ready Campus Network Enablers
Gigabit wire-speed ASIC-based routers
– currently providing 1 Gb/s uplinks; the next generation will support 10 Gb/s uplinks
– QoS support in hardware
Commodity-priced Ethernet switches that support 10/100 Mb/s and 1 Gb/s connections

2.2 Indiana University's Grid Ready Campus Network
Switched 10 Mbps standard for all 55,000+ desktops
Switched 100 Mbps available on request
1 Gbps available in selected cases
OC12 (622 Mb/s) Internet2 connectivity
Native support for IP multicast in both the layer 2 Ethernet switches and the layer 3 routers
Support for DiffServ-based quality of service
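For a sense of what DiffServ support lets an application do, the host marks its packets with a DSCP value and the campus switches and routers prioritize accordingly. A minimal Python sketch for a Unix-like host (the destination address and port are hypothetical):

    import socket

    DSCP_EF = 46   # Expedited Forwarding, commonly used for latency-sensitive traffic

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The DSCP occupies the upper six bits of the former IP TOS byte
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
    sock.sendto(b"latency-sensitive payload", ("192.0.2.10", 9000))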

2.3 Managing IT in US Higher Education
In the US, IT is recognized as being:
– of central importance in higher education
– fundamental to teaching, learning & research
– essential to responsible and accountable institutional management
– a source of institutional competitive advantage
It is also a major source of expenditure in US universities: fully costed, between 5 & 10% of an institution's total budget

Responsibility for Managing IT
US universities have elevated IT to a portfolio of central importance, reporting directly to the president or chief academic officer
This portfolio tends to be the responsibility of the chief information officer (CIO)
University CIOs are typically responsible for central IT and for support of distributed IT (e.g. in departments, schools, faculties)
This parallels earlier developments in US business

Strategic Planning for IT
Given the vital importance of IT, US universities have developed IT strategic plans
These plans guide the institution's future development of and investment in IT
They are also used to leverage considerable additional public and private funding for university IT infrastructure

2.4 An Example: Indiana University
Founded in 1820
State public university with:
– $1.9B budget (99-00), 27% from the State of Indiana
– 7 campuses state-wide (the two largest and most research-intensive campuses in Bloomington and Indianapolis)
– 97,150 students
– 4,276 faculty
– 9,844 appointed staff
– 42,000+ course sections
– $1B endowment

IT at Indiana University
CIO position created in 96; reports directly to the IU President
Responsible for central IT on all campuses
– Central IT budget from all sources: $100M
– 1,200 staff
– Departments, schools & faculties expend about a further $50M
Central IT comprises:
– telecommunications (e.g. data, voice, video on campus, intra- & interstate, and internationally): 35% of the budget
– research & academic computing (e.g. supercomputing, massive data storage, large-scale VR): 13% of the budget
– teaching & learning technologies (e.g. user support/education, desktop life-cycle funding, classroom IT, enterprise software licensing, student labs, Web support): 28% of the budget
– administrative computing (enterprise information systems – e.g. student, financial, HR & library systems, enterprise databases & storage): 24% of the budget

Strategic Planning for IT at Indiana University
Goal of Indiana University: to be a leader in the "…use and application of information technology"
CIO responsible for developing the IT Strategic Plan to achieve this goal:
– first University-wide IT Strategic Plan
– used the IT committee system; 200 people involved in its preparation
– prepared December 97 to May 98, then discussed University-wide and approved by the President and Board of Trustees, December 98
– CIO responsible for implementation
– 5-year plan consisting of 10 major recommendations and 68 actions
– Implementation Plan and full costings developed in parallel
– Full cost $210M over 5 years; $120M in new funding from the State, $90M in re-programmed University funding

3.1 Towards a Global Terabit Research Network (GTRN)
Global network-enabled collaborative Grids require a true high-speed global research and education network that:
– is of production quality (managed as a production service; redundant, stable, secure)
– is persistent (based on long-term agreements with carrier(s) and others)
– provides a uniform form of connection globally through global network access points (GNAPs)
– provides interconnect speeds comparable to NRREN backbone speeds (presently OC48, going to OC192)
– scales to a terabit per second data rate during this decade

Impediments to Building Global Grids
International connections very slow compared with NRREN backbone speeds
Long-term funding uncertain (e.g. NSF HPIIS program)
Global connection effort not well coordinated
Overly reliant on transit through US infrastructure
Connections frequently via ATM or IP clouds, making management of advanced services difficult
Poor coordination of advanced service deployment
Extreme difficulty ensuring reasonable end-to-end performance

Connectivity to US Transit Infrastructure
[Diagram: Asia Pacific, Americas and Europe interconnected via US transit infrastructure]

STAR TAP and International Transit Service (ITN)
STAR TAP, CAnet3 and Abilene provide some level of international transit across North America
– Abilene offers convenient international transit at multiple landing sites; however, transit is not offered to other NRRENs (e.g. ESnet)
STAR TAP requires a connection to the AADS best-effort ATM service (reducing the ability to deploy QoS)

Indiana University
Firsthand experience with these difficulties through its Global NOC

3.2 Towards a GTRN
A single global backbone interconnecting global network access points (GNAPs) that provide peering within a country or region
Global backbone speeds comparable to those of NRRENs, i.e. OC192 in 2002
Based on stable carrier infrastructure
Persistent, based on long-term (5-10 year) agreements with carriers, router vendors and optical transmission equipment vendors

Towards a GTRN
Scalable – e.g. OC768 by 2004, multiple wavelengths running striped OC768s by 2005, terabit/sec transmission by 2006
GNAPs connect at OC48 and above, scaling up as backbone speeds scale up
Production service with 24x7x365 management through a global NOC
Coordinated global advanced service deployment (e.g. QoS, IPv6, multicast)
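As a back-of-envelope check on that scaling path (assuming roughly 40 Gb/s per OC-768 wavelength), a terabit per second of backbone capacity is on the order of 25 striped OC-768 wavelengths:

    oc768_gbps = 39.8                 # approximate OC-768 line rate per wavelength
    target_gbps = 1000                # 1 Tb/s aggregate backbone capacity
    print(target_gbps / oc768_gbps)   # ~25 striped OC-768 wavelengths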