
T1 at LBL/NERSC/Oak Ridge: General principles

RAW data flow (schematic): DAQ & HLT write the raw data into the T0 disk buffer; the files are registered in the AliEn file catalogue (FC), while condition & calibration data go to the dedicated DB. A first, full copy of the raw data goes to CERN tape (copy 1); from the T0 disk buffer the data flow out to the disk buffers of the external centres, where a second, partial tape copy is made (copy 2).
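A minimal sketch of this flow, assuming a toy in-memory catalogue (a plain dict standing in for the AliEn FC). The file names, sizes and the T1 placeholder names are invented for illustration, and the random choice of the copy-2 site only stands in for the real share assignment.

```python
# Toy model of the two-copy RAW archiving scheme described above; not AliEn code.
# File names, sizes and the T1 list are invented for illustration.
from dataclasses import dataclass, field
import random

@dataclass
class RawFile:
    name: str
    size_gb: float
    replicas: list = field(default_factory=list)            # sites holding a tape copy

T1_SITES = ["T1_A", "T1_B", "T1_C"]                          # placeholder names

def archive(raw: RawFile, catalogue: dict) -> None:
    """Register the file and create the two tape copies of the model."""
    catalogue[raw.name] = raw                                # stands in for AliEn FC registration
    raw.replicas.append("CERN::TAPE")                        # copy 1: full copy at the T0
    raw.replicas.append(random.choice(T1_SITES) + "::TAPE")  # copy 2: partial, spread over the T1s

catalogue = {}
for i in range(5):
    archive(RawFile(f"run_chunk_{i}.root", size_gb=2.0), catalogue)
for f in catalogue.values():
    print(f.name, "->", f.replicas)
```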

Data processing (schematic): reconstruction turns the raw data into ESDs (× 2), which are filtered into AODs (× 3); MC data follow the same chain; about 800 users analyse the output.

Hierarchy (schematic): Tier 0 (CERN & Budapest) at the top, the Tier 1s below it, and the Tier 2s below them. Connectivity is 10 Gb/s, anyone to anyone; the T0–T1 links run over the LHCOPN at 10 Gb/s.
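As a back-of-envelope illustration of what such links imply, the sketch below estimates bulk-transfer times over a 10 Gb/s path. The 70% sustained-efficiency factor and the sample volumes are assumptions, not figures from the slide.

```python
# How long does it take to move a given volume over a 10 Gb/s path?
# The 70% sustained-efficiency factor is an assumption, not a slide figure.
def transfer_days(volume_tb: float, link_gbps: float = 10.0, efficiency: float = 0.7) -> float:
    volume_bits = volume_tb * 1e12 * 8                  # TB -> bits
    seconds = volume_bits / (link_gbps * 1e9 * efficiency)
    return seconds / 86_400

for vol_tb in (100, 500, 1000):
    print(f"{vol_tb:5d} TB over 10 Gb/s: ~{transfer_days(vol_tb):.1f} days")
```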

Grid node (schematic): worker nodes with 3 GB of RAM and 10 GB of local HDD per core; AliEn + grid services; batch services; disk storage managed by xrootd, sized at 1 TB per 3 cores. Traffic: IN – data replicas from other centres; OUT – data to other centres and remote worker nodes' data requests (small volumes); local traffic – one copy of all locally processed data plus all analysis jobs' I/O (MC & analysis).
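The per-core figures above translate into a simple sizing rule of thumb; the sketch below applies them to a hypothetical 1,200-core site (the core count is an arbitrary example, not a recommendation).

```python
# Rough sizing sketch using the per-core figures on this slide.
def size_site(cores: int) -> dict:
    return {
        "cores": cores,
        "ram_gb": cores * 3,           # 3 GB of RAM per core
        "local_hdd_gb": cores * 10,    # 10 GB of local HDD per core
        "xrootd_disk_tb": cores / 3,   # 1 TB of xrootd-managed disk per 3 cores
    }

print(size_site(1200))
# -> {'cores': 1200, 'ram_gb': 3600, 'local_hdd_gb': 12000, 'xrootd_disk_tb': 400.0}
```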

Processing principles:
- All resources are pooled together.
- Any site performs any kind of task (except RAW data access, which is limited to the T0 & T1s), and even this is not 'a rule'.
- Data placement is guided by the topological location of the sites (storage auto-discovery).
- The job goes to the data.
- The network scales with the number of users and the amount of data.
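A minimal sketch of the "job goes to the data" principle under these assumptions: a toy replica catalogue and a made-up network-distance table. The dataset and site names do not reflect the real ALICE topology; real brokering uses the AliEn catalogue and services.

```python
# Toy illustration of 'the job goes to the data': among the sites holding a replica
# of the input dataset, run at the one closest to the submitting site.
REPLICAS = {"LHC12x_AOD": {"CERN", "KISTI", "GSI"}}       # hypothetical replica catalogue
DISTANCE = {                                              # hypothetical topological distances
    ("KISTI", "CERN"): 9, ("KISTI", "GSI"): 8, ("KISTI", "KISTI"): 0,
}

def match_site(dataset: str, submitted_from: str) -> str:
    """Pick the replica-holding site with the smallest distance to the submitter."""
    candidates = REPLICAS[dataset]
    return min(candidates, key=lambda site: DISTANCE.get((submitted_from, site), 99))

print(match_site("LHC12x_AOD", "KISTI"))   # -> 'KISTI': the job runs where the data already are
```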

Some numbers. ALICE has been collecting data since the first LHC run: pp at 0.9 – 7 TeV and Pb–Pb at 2.76 TeV (MB), L_int = 3 μb⁻¹; pp at 2.76 – 7 TeV (MB & rare) and Pb–Pb at 2.76 TeV (MB & rare), L_int = 80 μb⁻¹; pp at 8 TeV (rare) and p–Pb at 5.02 TeV (MB & rare), L_int = 30 nb⁻¹.

RAW data collection (chart of the RAW data volumes collected at each collision energy).

Processing needs: ~10 HEP-SPEC06 per core; ~50 HEP-SPEC06 × s per Pb–Pb event; ~1 TB per 50,000 Pb–Pb events.
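Taken together, these figures allow a quick estimate of the cost of a reconstruction pass. The sketch below does the arithmetic for a hypothetical sample of 10 million Pb–Pb events on a 1,000-core farm; both the sample size and the farm size are assumptions for illustration.

```python
# Back-of-envelope arithmetic with the slide's numbers; sample and farm sizes are invented.
HEPSPEC_PER_CORE = 10            # ~10 HEP-SPEC06 per core
HEPSPEC_S_PER_EVENT = 50         # ~50 HEP-SPEC06 x s per Pb-Pb event
TB_PER_EVENT = 1.0 / 50_000      # ~1 TB per 50,000 Pb-Pb events

events = 10_000_000              # hypothetical Pb-Pb sample
cores = 1_000                    # hypothetical farm size

core_seconds = events * HEPSPEC_S_PER_EVENT / HEPSPEC_PER_CORE   # 5 s per event per core
print(f"CPU work: {core_seconds / 86_400:,.0f} core-days "
      f"(~{core_seconds / (86_400 * cores):.1f} days on {cores:,} cores)")
print(f"Data volume: {events * TB_PER_EVENT:,.0f} TB")
```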

Processing capacities (table): CPU (in units of 8-core machines), disk and tape (PB) at the Tier 0, Tier 1 and Tier 2 centres; the shares between the tiers are roughly 24% / 27% / 49% and 28% / 29% / 45%.

Capacity evolution

Capacity evolution (2)

T1 definition. From the WLCG MoU:
- acceptance of an agreed share of raw data from the Tier0 Centre, keeping up with data acquisition;
- acceptance of an agreed share of first-pass reconstructed data from the Tier0 Centre;
- acceptance of processed and simulated data from other centres of the WLCG;
- recording and archival storage of the accepted share of raw data (distributed back-up);
- provision of managed disk storage providing permanent and temporary data storage for files and databases;
- provision of access to the stored data by other centres of the WLCG and by named AFs;
- operation of a data-intensive analysis facility;
- provision of other services according to agreed Experiment requirements.

T1 services:
- Ensure high-capacity network bandwidth and services for data exchange with the Tier0 Centre, as part of an overall plan agreed amongst the Experiments, Tier1 and Tier0 Centres.
- Ensure network bandwidth and services for data exchange with Tier1 and Tier2 Centres, as part of an overall plan agreed amongst the Experiments, Tier1 and Tier2 Centres.
- Administration of databases required by Experiments at Tier1 Centres.
All storage and computational services shall be "grid enabled" according to standards agreed between the LHC Experiments and the regional centres. Services must be provided on a long-term basis, with excellent reliability, a high level of availability and rapid responsiveness to problems.

QoS

Approval process (KISTI T1):
- Preparation/running of CPU, disk storage, local networking – existing.
- Setting up the tape copy through xrootd – 3 months.
- RAW data replication/reconstruction tests – 2 months.

Summary:
- Being a T1 is the only reasonable choice for a large computing centre.
- Most of the components are already in place; the additional elements add expertise.
- It is important for the progress of the centre itself.
- It can be the source of a substantial R&D programme.
- The 'prestige' factor should not be ignored – there are 150+ T2s and only 8 T1s.