US LHC NWG: Dynamic Circuit Services in US LHCNet. Artur Barczyk, Caltech. Joint Techs Workshop, Honolulu, 01/23/2008.



US LHCNet Overview
 Mission-oriented network: provide the trans-Atlantic network infrastructure to support the US LHC program
 Four PoPs: CERN, Starlight (→ Fermilab), Manlan (→ Brookhaven), SARA
 2008: 30 (40) Gbps trans-Atlantic bandwidth (roadmap: 80 Gbps by 2010)

The Large Hadron Collider (start in 2008)
 27 km tunnel in Switzerland & France; pp collisions at √s = 14 TeV, L = 10^34 cm^-2 s^-1
 Experiments: ALICE, ATLAS, CMS, LHCb
 Physics: Higgs, SUSY, extra dimensions, CP violation, QG plasma, … the unexpected
 Physicists & engineers from 250+ institutes in 60+ countries
 Challenges: analyze petabytes of complex data cooperatively; harness global computing, data & network resources

The LHC Data Grid Hierarchy
 Emerging vision: a richly structured, global dynamic system
 CERN/outside resource ratio ~1:4; T0/(ΣT1)/(ΣT2) ~1:2:2; ~40% of resources in Tier2s
 Tier interconnects of 10 to 40 Gbps over USLHCNet + ESnet and GEANT2 + NRENs (e.g. BNL T1, Germany T1)
 US T1s and T2s connect to US LHCNet PoPs
 Outside/CERN ratio larger; expanded role of Tier1s & Tier2s: greater reliance on networks

The Roles of Tier Centers
 Tier 0 (CERN): prompt calibration and alignment, reconstruction, store the complete set of RAW data
 Tier 1: reprocessing, store part of the processed data
 Tier 2: Monte Carlo production, physics analysis
 Tier 3: physics analysis
 11 Tier1s, over 100 Tier2s → LHC computing will be more dynamic & network-oriented
 This division of roles defines the dynamism of data transfers, and hence the requirements for dynamic circuit services in US LHCNet

CMS Data Transfer Volume (May – Aug. 2007)
 10 PetaBytes transferred over 4 months = 8.0 Gbps average (15 Gbps peak)
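As a quick consistency check of the figures above: 10 PB ≈ 8 × 10^16 bits, and four months ≈ 1.04 × 10^7 seconds, so the average rate is about 8 × 10^16 / 1.04 × 10^7 ≈ 7.7 Gbps, in line with the quoted 8.0 Gbps.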

End-System Capabilities Are Growing
 88 Gbps peak; 80+ Gbps sustainable for hours, storage-to-storage (40 Gbps in, 40 Gbps out)

Managed Data Transfers
 The scale of the problem and the capabilities of the end-systems require a managed approach with scheduled data transfer requests
 The dynamism of the data transfers defines the requirements for scheduling:
  Tier0 → Tier1, linked to the duty cycle of the LHC
  Tier1 → Tier1, whenever data sets are reprocessed
  Tier1 → Tier2, distribute data sets for analysis
  Tier2 → Tier1, distribute MC-produced data
 Transfer classes: fixed allocation, preemptible transfers, best effort
 Priorities and preemption: use LCAS to squeeze low(er)-priority circuits
 Interact with end-systems: verify and monitor capabilities
 All of this will happen “on demand” from the experiments’ data management systems
 Needs to work end-to-end: collaboration in GLIF, DICE
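The transfer classes and LCAS-style preemption can be pictured with a minimal sketch. The class names, the admit() function and the per-circuit floor rate below are illustrative assumptions, not the actual VINCI implementation:

```python
# Minimal sketch (not VINCI code) of transfer classes and priority-based
# preemption: a new, higher-priority circuit squeezes lower-priority ones
# (LCAS-style) instead of tearing them down.
from dataclasses import dataclass
from enum import IntEnum

class TransferClass(IntEnum):
    BEST_EFFORT = 0
    PREEMPTIBLE = 1
    FIXED = 2            # fixed allocation, never squeezed

@dataclass
class Circuit:
    name: str
    tclass: TransferClass
    allocated_gbps: float
    floor_gbps: float = 1.0   # assumed minimum rate kept when squeezed

def admit(new, active, link_capacity_gbps):
    """Try to admit a new circuit, squeezing lower-priority circuits first."""
    used = sum(c.allocated_gbps for c in active)
    shortfall = used + new.allocated_gbps - link_capacity_gbps
    if shortfall <= 0:
        active.append(new)
        return True
    for c in sorted(active, key=lambda c: c.tclass):   # lowest priority first
        if c.tclass >= new.tclass:
            break                                       # cannot preempt equals/higher
        take = min(c.allocated_gbps - c.floor_gbps, shortfall)
        c.allocated_gbps -= take
        shortfall -= take
        if shortfall <= 0:
            active.append(new)
            return True
    return False   # not enough preemptible capacity (rollback omitted in this sketch)

# Example: a fixed-allocation Tier0->Tier1 transfer squeezing a best-effort circuit
active = [Circuit("T2-analysis", TransferClass.BEST_EFFORT, 8.0)]
print(admit(Circuit("T0-T1-raw", TransferClass.FIXED, 6.0), active, 10.0))
```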

Managed Network Services: Operations Scenario
 Receive request, check capabilities, schedule network resources
  “Transfer N Gigabytes from A to B with target throughput R1”
  Authenticate/authorize/prioritize
  Verify end-host rate capabilities R2 (achievable rate)
  Schedule bandwidth B > R2; estimate time to complete T(0)
  Schedule path with priorities P(i) on segments S(i)
 Check progress periodically
  Compare rate R(t) to R2; update time to complete from T(i-1) to T(i)
 Trigger on behaviours requiring further action
  Errors (e.g. segment failure)
  Performance issues (e.g. poor progress, channel underutilized, long waits)
  State changes (e.g. new high-priority transfer submitted)
 Respond dynamically, to match policies and optimize throughput
  Change channel size(s)
  Build alternative path(s)
  Create new channel(s) and squeeze others in the same class
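A compact sketch of this request-handling loop is shown below, assuming hypothetical helper services (verify_end_host_rate, schedule_path, measure_rate, rebuild_path) that stand in for the real VINCI/MonALISA components:

```python
# Illustrative request lifecycle: schedule, poll progress, re-act on poor throughput.
import time

def handle_request(src, dst, gigabytes, target_gbps,
                   verify_end_host_rate, schedule_path, measure_rate, rebuild_path):
    r2 = min(target_gbps, verify_end_host_rate(src, dst))   # achievable rate R2
    path = schedule_path(src, dst, bandwidth_gbps=r2)        # schedule B > R2
    remaining_gb = gigabytes
    eta_s = remaining_gb * 8 / r2                             # initial estimate T(0)
    while remaining_gb > 0:
        time.sleep(60)                                        # periodic progress check
        rate = measure_rate(path)                             # R(t)
        remaining_gb -= rate * 60 / 8                         # Gb/s * s -> GB
        eta_s = remaining_gb * 8 / max(rate, 1e-3)            # update T(i)
        if rate < 0.5 * r2:                                   # poor-progress trigger
            path = rebuild_path(path)                         # e.g. alternative path
    return path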

Managed Network Services: End-System Integration
 Integration of network services and end-systems requires an end-to-end view of the network and end-systems, with real-time monitoring
 Robust, real-time and scalable messaging infrastructure
 Information extraction and correlation, e.g. network state, end-host state, transfer queue state
  Obtained via network services and end-host agent (EHA) interactions
  Provides sufficient information for decision support
 Cooperation of EHAs and network services
  Automate some operational decisions using accumulated experience
  Increase the level of automation to respond to increases in usage, number of users, and competition for scarce network resources
 Required for a robust end-to-end production system
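By way of illustration, an end-host agent report could look like the following; the message fields and the print-based publish() transport are assumptions for the sketch, not the MonALISA wire format:

```python
# Sketch of an end-host agent (EHA) periodically publishing host and
# transfer-queue state to the monitoring/messaging infrastructure.
import json, socket, time

def collect_state():
    """Gather the pieces of end-host state mentioned above (placeholder values)."""
    return {
        "host": socket.gethostname(),
        "timestamp": time.time(),
        "nic_speed_gbps": 10,          # placeholder: read from the NIC in practice
        "transfer_queue_depth": 3,     # placeholder: from the data-management system
        "disk_read_gbps": 4.5,         # placeholder: from local I/O measurements
    }

def publish(state):
    # Stand-in for the real messaging infrastructure (e.g. a pub/sub bus).
    print(json.dumps(state))

if __name__ == "__main__":
    while True:
        publish(collect_state())
        time.sleep(30)                 # periodic, near-real-time reporting
```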

Lightpaths in the US LHCNet Domain
 VINCI: Virtual Intelligent Networks for Computing Infrastructures in Physics (control plane and data plane)
 Dynamic setup and reservation of lightpaths has been successfully demonstrated by the VINCI project controlling optical switches

Planned Interfaces
 I-NNI: VINCI (custom) protocols
 E-NNI: Web Services (DCN IDC)
 UNI: VINCI custom protocol, client = EHA; at the far-end domains, DCN IDC? LambdaStation? TeraPaths?
 Most, if not all, LHC data transfers will cross more than one domain
  E.g. in order to transfer data from CERN to Fermilab: CERN → US LHCNet → ESnet → Fermilab
 VINCI control plane for intra-domain provisioning, DCN (DICE/GLIF) IDC for inter-domain provisioning
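For illustration only, an inter-domain reservation handed to a DCN IDC-style web service might carry fields like those below; the endpoint names and the create_reservation() call are hypothetical and do not reflect the actual DICE IDC message schema:

```python
# Hypothetical inter-domain circuit request, in the spirit of a DCN IDC web-service call.
reservation = {
    "source_endpoint": "cern-uslhcnet-edge",     # hypothetical port identifiers
    "dest_endpoint":   "fnal-esnet-edge",
    "bandwidth_mbps":  10000,
    "start_time":      "2008-02-01T00:00:00Z",
    "end_time":        "2008-02-01T06:00:00Z",
    "path_hint":       ["US LHCNet", "ESnet"],   # domains the circuit must cross
}

def create_reservation(request):
    """Stand-in for the web-service call; each domain's IDC would chain the
    request to its neighbour until the end-to-end path is provisioned."""
    # In a real deployment this would be a call to the local IDC, not a constant.
    return "reservation-id-0001"

circuit_id = create_reservation(reservation)
print(circuit_id)
```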

Protection Schemes
 Mesh protection at Layer 1
 US LHCNet links are assigned to primary users:
  CERN – Starlight for CMS
  CERN – Manlan for ATLAS
 In case of a link failure we cannot blindly use bandwidth belonging to the other collaboration
  Carefully choose protection links, e.g. use the indirect path (CERN-SARA-Manlan)
  Designated Transit Lists (DTLs) and DTL-Sets
 High-level protection features implemented in VINCI
  Re-provision lower-priority circuits
  Preemption, LCAS
 Needs to work end-to-end: collaboration in GLIF, DICE
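The ownership constraint can be expressed as a small selection rule over DTLs. The ownership table and DTL contents below are illustrative, not the configured US LHCNet values:

```python
# Sketch: pick a protection path from a DTL-Set, avoiding the failed link and
# any link whose primary user is a different experiment.
PRIMARY_USER = {
    ("CERN", "Starlight"): "CMS",
    ("CERN", "Manlan"):    "ATLAS",
    ("CERN", "SARA"):      None,      # shared
    ("SARA", "Manlan"):    None,
}

DTL_SET = {
    "CERN->Manlan": [
        [("CERN", "Manlan")],                       # direct (primary) path
        [("CERN", "SARA"), ("SARA", "Manlan")],     # indirect protection path
    ],
}

def protection_path(route, failed_link, experiment):
    """Return the first DTL that avoids the failed link and respects link ownership."""
    for dtl in DTL_SET[route]:
        if failed_link in dtl:
            continue
        if any(PRIMARY_USER.get(link) not in (None, experiment) for link in dtl):
            continue
        return dtl
    return None

# ATLAS traffic re-routed after the direct CERN-Manlan link fails:
print(protection_path("CERN->Manlan", ("CERN", "Manlan"), "ATLAS"))
```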

Basic Functionality To-Date
 Pre-production (R&D) setup: high-performance servers, Ciena CoreDirectors, US LHCNet routers, Ultralight routers
  Local domain: routing of private IP subnets onto tagged VLANs
  Core network (TDM): VLAN-based virtual circuits
 Semi-automatic intra-domain circuit provisioning
 Bandwidth adjustment (LCAS)
 End-host tuning by the End-Host Agent
 End-to-end monitoring
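A toy sketch of the local-domain part, with made-up subnets and VLAN IDs, just to show the idea of steering private IP subnets onto tagged VLANs that the TDM core carries as virtual circuits:

```python
# Illustrative mapping of private IP subnets to VLAN tags (virtual circuits).
import ipaddress

VLAN_MAP = {
    ipaddress.ip_network("10.1.1.0/24"): 3001,   # hypothetical circuit A
    ipaddress.ip_network("10.1.2.0/24"): 3002,   # hypothetical circuit B
}

def vlan_for(dst_ip):
    """Return the VLAN tag (virtual circuit) a destination address maps to."""
    addr = ipaddress.ip_address(dst_ip)
    for subnet, vlan in VLAN_MAP.items():
        if addr in subnet:
            return vlan
    return None   # fall back to the routed, best-effort IP path

print(vlan_for("10.1.2.17"))   # -> 3002
```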

MonALISA: Monitoring the US LHCNet Ciena CDCI Network
 PoPs shown: CERN (Geneva), Starlight, Manlan, SARA

Roadmap Ahead
 Current capabilities: end-to-end monitoring, intra-domain circuit provisioning, end-host tuning by the End-Host Agent
 Towards a production system (intra-domain):
  Integrate existing end-host agent, monitoring and measurement services
  Provide a uniform user/application interface
  Integration with the experiments’ Data Management Systems
  Automated fault handling
  Priority-based transfer scheduling
  Include Authorisation, Authentication and Accounting
 Towards a production system (inter-domain):
  Interface to the DCN IDC
  Work with DICE, GLIF on the IDC protocol specification
  Topology exchange, routing, end-to-end path calculation
  Extend the AAA infrastructure to multi-domain

Summary and Conclusions
 Movement of LHC data will be highly dynamic
  It follows the LHC data grid hierarchy
  Different data sets (size, transfer speed and duration), different priorities
 Data management requires network-awareness
  Guaranteed bandwidth end-to-end (storage-system to storage-system)
  End-to-end monitoring including the end-systems
 We are developing the intra-domain control plane for US LHCNet
  VINCI project, based on the MonALISA framework
  Many services and agents are already developed or in an advanced state
 Use Internet2’s IDC protocol for inter-domain provisioning
  Collaboration with Internet2, ESnet, LambdaStation, TeraPaths on end-to-end circuit provisioning