“A Data Movement Service for the LHC”

Presentation transcript:

“A Data Movement Service for the LHC”
James Casey, IT-GD, CERN
Grid Deployment Board, NIKHEF/SARA, 13 October 2004

A Data Movement Service for the LHC: Outline
- Service overview
- CERN configuration
- Tier-1 network and storage setup
- Transfer software
- Service Challenge meeting at SARA/NIKHEF

Service Overview
- Aim is to prototype the data movement system needed for LCG startup
- Progressive milestones to have all components in place 3 months before first data from the LHC (details in Les's slides)
- Many of the components already exist, but they have not been proven together, nor at the required data rates and reliability levels
- Need to get the service teams who already look after the infrastructure connected

Projected Data Rates and Bandwidth Requirements (projections as of 30-09-04)
- Tier-1 sites: RAL, Fermilab, Brookhaven, Karlsruhe, IN2P3, CNAF, PIC
  Data rate (MB/sec): 148, 69, 152, 283
  Total bandwidth (Gb/sec): 3.6, 1.7, 3.7, 6.8, 3.5
  Assumed provisioned bandwidth: 10.00 Gb/sec
- Tier-1 sites: Taipei, Tokyo, Nordugrid, TRIUMF, SARA/NIKHEF
  Data rate (MB/sec): 142, 72, 148
  Total bandwidth (Gb/sec): 3.4, 1.7, 3.6
  Assumed provisioned bandwidth: 10.00 Gb/sec
- Note: total required bandwidth = (data rate * 1.5 (headroom)) * 2 (capacity)
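The stated sizing rule can be applied directly to a projected rate. A minimal Python sketch (the helper name is ours; only the factors and example rates come from the slide) shows how a MB/sec rate maps onto the Gb/sec figures, e.g. 148 MB/s gives roughly 3.6 Gb/s and 283 MB/s roughly 6.8 Gb/s:

```python
# Sketch of the slide's bandwidth rule of thumb:
#   required bandwidth = (data rate * 1.5 headroom) * 2 (capacity), converted MB/s -> Gb/s
def required_bandwidth_gbps(data_rate_mb_per_s: float) -> float:
    headroom = 1.5          # 50% headroom, as stated on the slide
    capacity_factor = 2.0   # doubled capacity, as stated on the slide
    mb_per_s_to_gb_per_s = 8.0 / 1000.0
    return data_rate_mb_per_s * headroom * capacity_factor * mb_per_s_to_gb_per_s

for rate in (148, 69, 283, 142):
    print(f"{rate} MB/s -> {required_bandwidth_gbps(rate):.1f} Gb/s")
# 148 -> 3.6, 69 -> 1.7, 283 -> 6.8, 142 -> 3.4, consistent with the table above
```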

Milestones
- Dec04: Service Challenge 1 complete. Mass store to mass store, CERN + 3 sites, 500 MB/sec between sites, 2 weeks sustained
- Mar05: Service Challenge 2 complete. Reliable file transfer service, mass store to mass store, CERN + 5 sites, 500 MB/sec between sites, 1 month sustained
- We will try to use the first version of the reliable file transfer service for Service Challenge 1, but the focus is on the network and storage infrastructure
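For scale, sustaining 500 MB/sec for these periods implies moving on the order of 0.6 PB in SC1 and roughly 1.3 PB in SC2. A back-of-envelope check in Python (illustrative only, assuming a 30-day month for SC2):

```python
# Data volumes implied by the 500 MB/sec sustained-rate milestones.
RATE_MB_PER_S = 500  # "500 MB/sec between sites", as on the slide

def sustained_volume_tb(days: float) -> float:
    """Total data moved at the milestone rate, sustained for `days`, in TB."""
    seconds = days * 24 * 3600
    return RATE_MB_PER_S * seconds / 1e6   # MB -> TB

print(f"SC1, 2 weeks sustained: ~{sustained_volume_tb(14):.0f} TB")   # ~605 TB
print(f"SC2, 1 month sustained: ~{sustained_volume_tb(30):.0f} TB")   # ~1296 TB
```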

CERN Configuration
- 10 x Itanium2 machines; each has a 1 Gb link aggregated into a 10 Gb switch
- All nodes run CERN SLC3
- Configured to run the following services:
  - 5 gridftp servers, non-load-balanced
  - a 4-node load-balanced SRM/gridftp system (not yet complete)
  - 1 control node, on which the transfer management software will run
- Direct connections to the external network:
  - 10 Gb connection to GEANT
  - 10 Gb link to Chicago (via StarLight)
  - 10 Gb link to SARA/NIKHEF (via SURFnet)

Current Network Layout
[Network diagram showing the CERN internal network and OpenLab, GEANT paths towards FZK, DESY and IN2P3, the SURFnet link to NIKHEF/SARA, and the DataTag STM64 link to the StarLight Force10 at the Chicago POP, with ESnet and Abilene towards BNL and Fermilab]

Required Resources at a Tier-1
- Network
  - Preferably a dedicated network connection to CERN
  - A defined, dedicated subnet which will be routed to the CERN transfer cluster
- Storage
  - Gridftp will be the initial transfer protocol
  - Desirable for a disk pool manager to manage space and load-balance between storage nodes
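For reference, an individual GridFTP transfer in such a setup is typically driven with the globus-url-copy client; the sketch below simply wraps it from Python. The hostnames, paths, and tuning values are placeholders, not the actual Service Challenge settings:

```python
# Illustrative only: driving a single GridFTP transfer by wrapping globus-url-copy.
# Hostnames, paths, and the tuning values below are placeholders.
import subprocess

def gridftp_copy(src_url: str, dst_url: str, streams: int = 10,
                 tcp_buffer_bytes: int = 2097152) -> None:
    """Copy one file between two GridFTP (gsiftp://) endpoints."""
    cmd = [
        "globus-url-copy",
        "-p", str(streams),                 # parallel TCP streams
        "-tcp-bs", str(tcp_buffer_bytes),   # TCP buffer size, important on long paths
        "-vb",                              # report transfer performance
        src_url,
        dst_url,
    ]
    subprocess.run(cmd, check=True)

# Example with placeholder endpoints:
# gridftp_copy("gsiftp://transfer01.cern.ch/data/file001",
#              "gsiftp://gridftp.tier1.example.org/pool/file001")
```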

Steps to Add a New Tier-1
Four steps to getting a new Tier-1 connected and tested:
1. Initial connectivity and routing configuration of the site to the test cluster at CERN
2. Deployment of managed storage at the site; short-term transfers carried out to test and measure untuned performance
3. Site tuning (at network and storage level)
4. "Service Challenge":
   - can last a few days for a site with a non-dedicated network; the aim is to show that the network is the bottleneck
   - a few weeks for a site with sufficient bandwidth and hardware; the aim is to show reliable transfers at the peak data rate (500 MB/s disk-to-disk)

Current Status
[Per-site status table: RAL, Fermilab, Brookhaven, Karlsruhe, IN2P3, CNAF, PIC, Taipei, Tokyo, Nordugrid, TRIUMF, SARA/NIKHEF and DESY, tracked against the four steps (1. Network Configuration, 2. Storage Configuration, 3. Site Tuning, 4. "Service Challenge"); key: Complete / In Progress]

First Measurements
- Measurements taken at the CERN router
- Most traffic is to FNAL, ~100 MB/s peak

Next Steps
- Each step of the process takes approximately 2 weeks to complete; aim to go through the process with all Tier-1s by early 2005
- First draft of the document "'Robust Data Transfer' Service Challenge Tier-1 Information":
  - describes the procedure for joining and gives details on the CERN configuration
  - sent out on the hep-forum-wan-tier1 mailing list and attached to the agenda
- Next steps:
  - start the process with the other European Tier-1s, then tackle the non-European ones (additional problems due to long network path lengths)
  - start to schedule 'slots' in December/January for sites to run their Service Challenge

Transfer Software
- Proposal for a File Movement System created by the LCG-GD and EGEE-DM teams. It aims to:
  - create an architecture that will let components from both projects interoperate
  - allow sites to use their own existing storage and transfer tools where possible
- First draft of the architecture (July 2004): https://edms.cern.ch/document/490347/2
- Based on ideas from current systems: CMS TMDB, Globus RFT, Condor Stork

Status
- Simple CLI tools created:
  lcg-move-start [--credentials cred] [--verbose]
  lcg-move-submit [--id jobId] [ --source src --dest dest || --file inputs ]
  lcg-move-job-ls [-l] [-a]
  lcg-move-cancel [--id jobId] [--verbose]
  lcg-move-get-job-summary [--id jobId] [--verbose]
  lcg-move-get-job-status [--id jobId] [--verbose]
- Simple transfer daemon exists:
  - started work with Condor Stork, but limitations showed up
  - now a simple Perl multi-process daemon is used to transfer files
- gLite is working on WS interfaces and a Web-based UI
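The production daemon was a simple Perl multi-process program. Purely as an illustration of that pattern (worker processes pulling source/destination pairs off a queue and invoking a GridFTP client), a Python analogue could look like the sketch below; the endpoints and the choice of globus-url-copy are assumptions for the example, not the actual Service Challenge code:

```python
# Illustrative Python analogue of a simple multi-process transfer daemon
# (the real one was written in Perl). File list and endpoints are placeholders.
import multiprocessing as mp
import subprocess

def transfer_worker(queue: "mp.Queue") -> None:
    """Pull (source, destination) URL pairs off the queue and copy them one by one."""
    while True:
        job = queue.get()
        if job is None:                    # sentinel: no more work
            break
        src, dst = job
        result = subprocess.run(["globus-url-copy", src, dst])
        status = "done" if result.returncode == 0 else "FAILED"
        print(f"{status}: {src} -> {dst}")

def run_daemon(jobs, workers: int = 4) -> None:
    queue: "mp.Queue" = mp.Queue()
    procs = [mp.Process(target=transfer_worker, args=(queue,)) for _ in range(workers)]
    for p in procs:
        p.start()
    for job in jobs:
        queue.put(job)
    for _ in procs:
        queue.put(None)                    # one sentinel per worker
    for p in procs:
        p.join()

if __name__ == "__main__":
    run_daemon([
        ("gsiftp://transfer01.cern.ch/data/file001",
         "gsiftp://gridftp.tier1.example.org/pool/file001"),
    ])
```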

NIKHEF/SARA Meeting
- First meeting on the Service Challenges held yesterday
- Participants from NIKHEF/SARA, CERN, UvA, SURFnet, RAL, Karlsruhe
- General presentations, including:
  - ATLAS computing model
  - LCG milestones
  - Service overview
  - NL NREN, Amsterdam and SARA network configurations
- The afternoon consisted of detailed planning for CERN to NIKHEF/SARA transfers
- See slides in the CERN Agenda for more details

Outcomes of the Meeting
- Detailed planning for CERN to NIKHEF/SARA transfers done:
  - testing via GEANT until after SC2004
  - move to the dedicated SURFnet link for tuning (2 weeks)
  - Service Challenge: early December
- Defined possible upgrade paths to achieve more bandwidth on the SURFnet link; try to get as much of the 10 Gb as possible
- Plans for Service Challenges in January with Karlsruhe, Fermi (TBC)?
- Next meeting: Karlsruhe, 2 December

Summary
- The Service Challenges have aggressive deadlines
- Basic infrastructure is in place at several sites
- Need to work on the next layer of infrastructure: the transfer service tools
- Need to get some other sites on board
- Scheduling of Service Challenge slots has started