UltraLight: An Overview for the Internet2 Spring 2004 Meeting
Shawn McKee, University of Michigan
Arlington, VA, April 20th, 2004

Presentation transcript:

UltraLight: An Overview for the Internet2 Spring 2004 Meeting
Shawn McKee, University of Michigan

UltraLight Topics
- Introduction: What is the UltraLight Program?
- History
- Program Goals and Details
- Current Status and Summary
I could give an overview talk, but I would rather have an open discussion about the topics UltraLight plans to address.

Possible Discussion Topics
- What relative roles will packet-switched, circuit-switched, and hybrid (GMPLS, ...) network modes play in future networks? In 2 years? In 4 years? Beyond?
- How can we effectively integrate new options into existing networks? Control planes? Application changes (adaptive applications)? Migration and integration strategies?
- What additions to network toolboxes are required to enable and support "UltraLight"-like networks?
- What are the metrics for "success"?
- Use cases for various application domains: how would an UltraLight-like network support various applications in meeting their requirements?
- What implied requirements does an UltraLight network impose on middleware and policy?

What is UltraLight?
- UltraLight is a program to explore the integration of cutting-edge network technology with the grid computing and data infrastructure of HEP and astronomy.
- The program intends to explore network configurations ranging from common shared infrastructure (current IP networks) through dedicated point-to-point optical paths.
- A critical aspect of UltraLight is its integration with two driving application domains in support of their national and international eScience collaborations: LHC (HEP) and eVLBI (astronomy).
- The collaboration includes: Caltech, Florida Int. Univ., MIT, Univ. of Florida, Univ. of Michigan, UC Riverside, BNL, FNAL, SLAC, UCAID/Internet2.

Some History
- The UltraLight collaboration was originally formed in Spring 2003 in response to an NSF Experimental Infrastructure in Networking (EIN) RFP in ANIR.
- After not being selected, the program was refocused on LHC/HEP and eVLBI/astronomy and submitted to "Physics at the Information Frontier" (PIF) in MPS at NSF.
- The collaboration was notified at the end of 2003 that the PIF program was being postponed one year, and it was suggested that proposals be redirected to the NSF ITR program.
- The ITR deadline was February 25th, 2004. We are awaiting word of our proposal's status.

HENP Network Roadmap
LHC physics will require large bandwidth capability over a globally distributed network. The HENP bandwidth roadmap is shown in the table below. [Roadmap table not captured in this transcript.]
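To make the scale concrete, here is a quick back-of-the-envelope calculation; the dataset size, link rate, and efficiency factor are illustrative assumptions, not figures from the original roadmap table:

```python
# Illustrative transfer-time arithmetic (assumed example values,
# not figures from the original HENP roadmap table).

def transfer_time_days(size_tb: float, rate_gbps: float, efficiency: float = 0.8) -> float:
    """Days needed to move size_tb terabytes at rate_gbps,
    derated by an assumed end-to-end efficiency."""
    size_bits = size_tb * 1e12 * 8            # terabytes -> bits
    rate_bps = rate_gbps * 1e9 * efficiency   # usable bits per second
    return size_bits / rate_bps / 86400       # seconds -> days

# A petabyte-scale (1000 TB) dataset over an effective 10 Gbps path:
print(f"{transfer_time_days(1000, 10):.1f} days")  # ~11.6 days
```

Even at the 10 Gbps range UltraLight targets, petabyte-scale movement takes on the order of weeks, which is why managed, schedulable bandwidth matters.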

eVLBI and UltraLight
- e-VLBI is a major thrust of UltraLight and directly complements LHC-HEP's mode of using the network, allowing us to explore new strategies for network conditioning and bandwidth management.
- The e-VLBI work under this proposal will be multi-pronged, in an effort to leverage the many new capabilities provided by the UltraLight network and to provide the national and international VLBI community with advanced tools and services tailored to the e-VLBI application.
- e-VLBI stands to benefit from an UltraLight infrastructure in numerous ways:
  1. Higher sensitivity
  2. Faster turnaround
  3. Lower costs
  4. Quick diagnostics and tests
  5. New correlation methods
- e-VLBI will also provide a different eScience perspective and validate the operation and efficiency of network bandwidth sharing between disparate scientific groups.

UltraLight Architecture
UltraLight envisions extending and augmenting the existing grid computing infrastructure (currently focused on CPU and storage) to include the network as an integral component. A second aspect is strengthening and extending end-to-end monitoring and planning. A sketch of what network-aware scheduling might look like follows.
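As a rough illustration of treating the network as a schedulable resource alongside CPU and storage, here is a minimal sketch of a network-aware broker; the site names, metrics, and scoring weights are hypothetical and do not come from the proposal:

```python
# Minimal sketch of network-aware resource brokering: pick the site
# whose combined CPU, storage, and *network* outlook is best for a job.
# All names, numbers, and weights are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class SiteStatus:
    name: str
    free_cpus: int          # idle worker slots
    free_storage_tb: float  # available cache space
    path_gbps: float        # measured end-to-end throughput to the data source
    path_loss: float        # recent packet-loss fraction on that path

def score(site: SiteStatus, input_size_tb: float) -> float:
    """Higher is better; a conventional broker would ignore the network terms."""
    transfer_hours = input_size_tb * 8e12 / (site.path_gbps * 1e9) / 3600
    return (site.free_cpus
            - 10 * transfer_hours       # penalize slow data delivery
            - 1000 * site.path_loss     # penalize lossy paths
            + (5 if site.free_storage_tb > input_size_tb else -1e9))

sites = [
    SiteStatus("umich", free_cpus=120, free_storage_tb=40, path_gbps=2.5, path_loss=0.001),
    SiteStatus("caltech", free_cpus=80, free_storage_tb=60, path_gbps=9.0, path_loss=0.0001),
]
best = max(sites, key=lambda s: score(s, input_size_tb=10))
print(f"schedule job at {best.name}")  # network outlook tips the choice
```

The design point is the one the slide makes: once end-to-end network measurements are first-class inputs, the "best" site is no longer simply the one with the most idle CPUs.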

I Arlington, VA April 20th, 2004 Shawn McKee UltraLight Proposal Outline

Workplan and Phased Deployment
UltraLight envisions a four-year program to deliver a new, high-performance, network-integrated infrastructure:
- Phase I will last 12 months and focus on deploying the initial network infrastructure and bringing up first services.
- Phase II will last 18 months and concentrate on implementing all the needed services and extending the infrastructure to additional sites.
- Phase III will complete UltraLight and last 18 months. The focus will be on a transition to production in support of LHC physics and eVLBI astronomy.

UltraLight Network: Phase I
- Implementation via "sharing" with HOPI/NLR
- MIT not yet "optically" coupled

UltraLight Network: Phase II
- Move toward multiple "lambdas"
- Bring in BNL and MIT

UltraLight Network: Phase III
- Move into production
- Optical switching fully enabled among primary sites
- Integrated international infrastructure

Equipment and Interconnects
- The UltraLight optical switching topology is shown. [Topology figure not captured in this transcript.]
- UltraLight plans to integrate data caches and CPU resources to provide integration testing and optimization.

UltraLight Network
UltraLight is a hybrid packet- and circuit-switched network infrastructure, employing ultrascale protocols and dynamic building of optical paths to provide efficient fair sharing on long-range networks up to the 10 Gbps range, while protecting the performance of real-time streams and enabling them to coexist with massive data transfers.
- Circuit-switched: "Intelligent photonics" (using wavelengths dynamically to construct and tear down wavelength paths rapidly and on demand through cost-effective wavelength routing) are a natural match to the peer-to-peer interactions required to meet the needs of leading-edge, data-intensive science.
- Packet-switched: Many applications can effectively utilize the existing, cost-effective networks provided by shared packet-switched infrastructure. A subset of applications require more stringent guarantees than a best-effort network can provide, so we are planning to utilize MPLS as an intermediate option. A sketch of how a flow might be steered onto one of these modes follows.
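As a rough sketch of the three-tier service model just described (best-effort IP, MPLS with guarantees, dedicated optical circuit), the following is illustrative only; the thresholds and the flow descriptor are invented assumptions, not values from the proposal:

```python
# Hypothetical sketch of steering a flow onto one of UltraLight's three
# service tiers; thresholds are illustrative assumptions, not proposal values.
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    BEST_EFFORT = "shared packet-switched IP"
    MPLS = "MPLS path with bandwidth guarantee"
    LIGHTPATH = "dedicated optical circuit"

@dataclass
class Flow:
    rate_gbps: float       # sustained rate the application needs
    realtime: bool         # jitter/loss sensitive (e.g., live e-VLBI streams)
    duration_hours: float  # expected lifetime of the flow

def choose_mode(flow: Flow) -> Mode:
    # Long-lived, multi-Gbps bulk transfers justify setting up a lightpath.
    if flow.rate_gbps >= 5 and flow.duration_hours >= 1:
        return Mode.LIGHTPATH
    # Real-time or high-rate streams need guarantees beyond best effort.
    if flow.realtime or flow.rate_gbps >= 1:
        return Mode.MPLS
    # Everything else rides the shared IP network.
    return Mode.BEST_EFFORT

print(choose_mode(Flow(rate_gbps=8, realtime=False, duration_hours=6)))  # Mode.LIGHTPATH
```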

MPLS Topology
Current network engineering knowledge is insufficient to predict what combination of best-effort packet switching, QoS-enabled packet switching, [G]MPLS, and dedicated circuits will be most effective in supporting these applications. We will use [G]MPLS and other modes of bandwidth management, along with dynamic adjustment and provisioning of optical paths, in order to develop the means to optimize end-to-end performance among a set of virtualized disk servers, a variety of real-time processes, and other traffic flows. A sketch of what a bandwidth reservation request might look like follows.
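To make the bandwidth-management idea concrete, here is a minimal sketch of a guaranteed-bandwidth path reservation against a hypothetical provisioning service; the endpoint names and the `reserve_path` interface are invented for illustration and do not correspond to any real UltraLight or [G]MPLS API:

```python
# Hypothetical path-reservation request; the service interface and
# endpoint names are invented for illustration only.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Reservation:
    src: str
    dst: str
    bandwidth_gbps: float
    start: datetime
    end: datetime

def reserve_path(res: Reservation) -> str:
    """Stand-in for a provisioning-service call that would signal an
    LSP (or set up a lightpath) and return a circuit identifier."""
    # A real implementation would talk to the [G]MPLS control plane here.
    return f"circuit-{res.src}-{res.dst}-{res.start:%Y%m%d%H%M}"

# Reserve 5 Gbps between two (hypothetical) storage endpoints for 4 hours:
start = datetime(2004, 5, 1, 12, 0)
res = Reservation("bnl-dcache", "caltech-dcache", 5.0, start, start + timedelta(hours=4))
print(reserve_path(res))
```

Advance reservation of this kind is what lets bulk transfers be scheduled around real-time streams rather than competing with them.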

Logical Diagram of UltraLight Grid-Enabled Analysis
An UltraLight user's perspective of the system. It is important to note that the system helps interpret and optimize itself while "summarizing" the details for ease of use.

Summary and Status
- UltraLight promises to deliver the critical missing component for future eScience: the integrated, managed network.
- We have a strong team in place, as well as a detailed plan, to provide the needed infrastructure and services for production use by LHC turn-on at the end of 2007.
- Currently we are awaiting the results of the ITR process.
- We will need to augment the proposal with additional grants to reach our goal of making UltraLight a pervasive and effective infrastructure for LHC physics and eVLBI astronomy.

Questions? (or Answers?)