T0-T1 Networking Meeting, 16th June 2006

David Foster
LHC T0-T1 Meeting, Cambridge, June 2006

Current State

We have had a number of meetings and agreed on an overall approach, with many open questions. Some results have been published:
http://lcg.web.cern.ch/LCG/activities/networking/nw-grp.html

We also have a wiki site for gathering operational information, an important outcome of these meetings:
https://uimon.cern.ch/twiki/bin/view/LHCOPN/WebHome

As we move more and more towards providing the production infrastructure needed for the service challenges in 2006, many basic issues still remain.

Key Dates in 2006

- 31st March: 6 Tier-1s (3 via GÉANT) with their final links. We did not quite achieve this milestone.
- 30th April: SC4 setup.
- 31st May: SC4 stable service.
- 30th September: End of SC4 / start of the Initial LHC Service, requiring:
  1) 8 Tier-1s and 20 Tier-2s must have demonstrated availability better than 90% of the levels specified in Annex 3 of the WLCG MoU [adjusted for sites that do not provide a 24-hour service].
  2) Success rate of standard application test jobs greater than 90% (excluding failures due to the applications environment and non-availability of sites).
  3) Performance and throughput tests complete: the performance goal for each Tier-1 is the nominal data rate that the centre must sustain during LHC operation (CERN disk > network > Tier-1 tape). The throughput-test goal is to maintain for one week an average throughput of 1.6 GB/s from disk at CERN to tape at the Tier-1 sites; all Tier-1 sites must participate (a rough volume check is sketched below).
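To put milestone 3 in perspective, here is a back-of-the-envelope conversion of the one-week 1.6 GB/s aggregate target into a total data volume (plain Python, illustrative only):

```python
# Back-of-the-envelope check of the SC4 throughput goal: one week at a
# sustained aggregate of 1.6 GB/s from CERN disk to Tier-1 tape.
rate_gb_s = 1.6            # aggregate target from the milestone text
week_s = 7 * 24 * 3600     # seconds in one week
total_tb = rate_gb_s * week_s / 1000
print(f"One week at {rate_gb_s} GB/s moves about {total_tb:,.0f} TB to tape")
# -> roughly 968 TB across all participating Tier-1 sites
```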

Today's Meeting

The purpose of today's meeting is to focus on the problems of resilience, as agreed at the Rome meeting.

We have the primary circuits well defined, and we can purchase physically diverse backup circuits:
- Done for the US.
- Possible on the GÉANT footprint.
- Other initiatives bring opportunities: GLIF, cross-border fibre.

We need to converge on a layer-3 design that allows the redirection of traffic in case of failure and that utilises the available L1/L2 infrastructure as appropriate. We also need to understand the level of service this will provide.

The questions to be answered are:
- Can we identify, for each primary link failure, where the traffic will flow? (A toy model is sketched after this list.)
- Do we have a necessary and sufficient L3 design?
- Which circuits can be considered available and configured into the OPN?
- What additional circuits should be provisioned?
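The first question lends itself to a simple exhaustive check: model the OPN as a graph, remove each primary circuit in turn, and see which Tier-1s still reach CERN. A minimal sketch with an illustrative topology (three Tier-1s plus example cross-border-fibre backups; this is not the agreed OPN design):

```python
# Toy model: for each single-circuit failure, does every Tier-1 still
# reach CERN over some surviving path?
from collections import deque

LINKS = {
    ("CERN", "SARA"), ("CERN", "FZK"), ("CERN", "CNAF"),  # primaries
    ("SARA", "FZK"), ("FZK", "CNAF"),                     # example CBF backups
}

def reachable(edges, src, dst):
    """Breadth-first search over an undirected edge set."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, todo = {src}, deque([src])
    while todo:
        node = todo.popleft()
        if node == dst:
            return True
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return False

for failed in sorted(LINKS):
    rest = LINKS - {failed}
    broken = [t1 for t1 in ("SARA", "FZK", "CNAF")
              if not reachable(rest, "CERN", t1)]
    print(f"fail {failed}: " +
          (f"ISOLATED: {broken}" if broken else "all T1s still reachable"))
```

The same enumeration, run over the real circuit list on the next slides, would answer the failure-coverage question site by site.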

Circuits – 2007 View

 #   Source   Destination   Service      Provider           Capacity   Availability
 1   CERN     IN2P3         Dedicated    RENATER            10Gbit
 2            Netherlight                GN2
 3
 4            SARA
 5            FZK
 6            PIC
 7            NDGF
 8            CNAF
 9            RAL
10            Manlan        Commercial                      <10Gbit
11            Starlight                                     <10Gbit
12            ASGC
13            TRIUMF                     CANARIE
14   GridKa                              DFN/SWITCH         10Gbit     to be discussed!!!
15                                       DFN/SWITCH/GARR               now
16                                       DFN/RENATER                   Q4 2006
17                                       DFN/SURFNET                   Q2 2006
18                                       SURFNET/NORDUNET   10Gbit

T1 – 2007 View

Source   Destination   Primary   Primary e2e capacity   Alternate   Alternate provider   Alternate e2e capacity
CERN     IN2P3         1         10Gbit
         ASGC          2,12
         TRIUMF        3,13
         SARA          4
         FZK           5
         PIC           6
         NDGF          7
         CNAF          8
         RAL           9
         BNL           10                                             ESnet                <10Gbit
         FNAL          11                                             ESnet                <10Gbit
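Read together, the two tables map each Tier-1 to the circuits its primary path uses (ASGC and TRIUMF concatenate two circuits). A sketch of the same mapping as data, assuming blank source cells in the circuits table inherit CERN from the row above; fields the transcript leaves blank are recorded as None:

```python
# Circuit records for the rows whose fields are recoverable from the
# transcript: (source, destination, provider, capacity).
circuits = {
    1:  ("CERN", "IN2P3",       "RENATER", "10Gbit"),
    2:  ("CERN", "Netherlight", "GN2",     None),
    10: ("CERN", "Manlan",      None,      "<10Gbit"),
    11: ("CERN", "Starlight",   None,      "<10Gbit"),
    12: ("CERN", "ASGC",        None,      None),
    13: ("CERN", "TRIUMF",      "CANARIE", None),
}

# Primary path per Tier-1, as circuit numbers from the previous slide.
primary_path = {
    "IN2P3": [1], "ASGC": [2, 12], "TRIUMF": [3, 13], "SARA": [4],
    "FZK": [5], "PIC": [6], "NDGF": [7], "CNAF": [8], "RAL": [9],
    "BNL": [10], "FNAL": [11],
}

def circuits_for(t1):
    """Return the known circuit records making up a Tier-1's primary path."""
    return [circuits.get(n, ("unknown",)) for n in primary_path[t1]]

print(circuits_for("ASGC"))  # the Netherlight leg plus the ASGC leg
```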

Conclusions from April 2006

General
- T1-T1 requirements are becoming better understood and will be an issue.
- Lots of new initiatives; do we have a complete overview?
- We now need to discuss in much more detail the operational schedule, planned/unplanned outages, and backup strategies, including CBF.

Routing
- Document produced and on the Twiki.
- NOC information is on the Twiki, but clearly not 24x7; need a meeting dedicated to this.
- No T1-T1 traffic, but not excluded.
- Basic ACL monitoring in place; allowed IP addresses are in the RIPE database (a lookup sketch follows after this slide).
- Need: BGP on all sites.

Operations
- The ENOC model and integration with the NRENs and DANTE will continue.
- DANTE is deploying perfSONAR and integrating with ESnet, CANARIE and Taiwan; plan to report on the experience at the next OPN meeting.

Security
- Document has been produced. Robin to liaise and agree with site security officers; report on feedback at the next meeting.

Monitoring
- Document has been produced and published. perfSONAR will provide the basic level-1 monitoring.
- T1 volunteers sought for prototype monitoring tests; need to demonstrate value through volunteers, and will try to use this for demonstration purposes.
- Piggyback on the perfSONAR activity, but get more information?

Site Reports / Campus Status
- US labs complete; both have redundant 10G paths.
- IN2P3, NDGF, RAL, TRIUMF complete, but backup unclear.
- SARA, CNAF, FZK complete; backup via CBF with each other.
- ASGC complete; backup via the US (in 2007). T2s will have 10G links.
- PIC?

Next meeting
- 16th June, with progress reports on the above topics.
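A minimal sketch of the "allowed IP addresses in the RIPE database" check mentioned under Routing: query the RIPE whois service directly over port 43 (the plain whois protocol, RFC 3912) for a prefix announced on the OPN. The prefix below is a documentation placeholder, not a real OPN allocation:

```python
# Look up a prefix in the RIPE database over the raw whois protocol.
import socket

def ripe_whois(query: str) -> str:
    """Send one whois query to whois.ripe.net and return the full reply."""
    with socket.create_connection(("whois.ripe.net", 43), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode())
        chunks = []
        while chunk := sock.recv(4096):   # server closes when done
            chunks.append(chunk)
    return b"".join(chunks).decode(errors="replace")

print(ripe_whois("192.0.2.0/24")[:400])   # placeholder prefix (TEST-NET-1)
```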