Grid and Data handling. Gonzalo Merino, Port d’Informació Científica / CIEMAT. Primeras Jornadas del CPAN, El Escorial, 25/11/2009.

Disclaimer
Though the title of this talk is very generic, I will focus on describing the LHC Grid and data handling as an example. This is the community with the largest and most imminent computing needs, as well as my area of work. I will also try to address the Grid-related activities in other CPAN areas. The information presented does not aim to be a complete catalogue of Grid activities, but to give the general picture and provide a handful of URL pointers to further information.

LHC computing needs
The LHC is one of the world's largest scientific machines: a proton-proton collider, 27 km in perimeter, 100 m underground, with superconducting magnets at 1.9 K. Four detectors record the outcome of the collisions.
The 1 GHz collision rate is reduced by the trigger to near 1 GB/s written to storage, amounting to petabytes of RAW data per year. Adding up processed data, simulation and replicas, and multiplying by the years of LHC lifetime, puts the LHC in the Exabyte scale.
Managing this huge amount of data and enabling its analysis by thousands of scientists worldwide is a technological challenge. There is no way to concentrate such computing power and storage capacity at CERN, so the Grid paradigm was adopted for LHC computing.
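As a rough back-of-the-envelope illustration of how such figures arise (the trigger rate, event size and live time below are generic assumptions of that era, not numbers taken from the talk): an event rate of a few hundred Hz after the trigger, with RAW events of roughly 1.5 MB, gives

\[
R \;\approx\; 300\ \mathrm{Hz} \times 1.5\ \mathrm{MB/event} \;\approx\; 0.45\ \mathrm{GB/s},
\qquad
V_{\mathrm{year}} \;\approx\; R \times 10^{7}\ \mathrm{s} \;\approx\; 4.5\ \mathrm{PB},
\]

i.e. of order 1 GB/s to storage and petabytes of RAW data per year per experiment, before counting processed data, simulation and replicas.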

LHC Grid: layered structure
The layered structure comes from the early days (1999, MONARC), when it was mainly motivated by the limited network connectivity among sites. Today the network is no longer the issue, but the tiered model is still used to organise work and data flows.
Tier-0 at CERN: DAQ and prompt reconstruction; long-term data curation.
Tier-1 (11 centres): online to the DAQ (24x7); long-term storage of a copy of the RAW data and massive data reconstruction; connected to CERN with dedicated 10 Gbps links.
Tier-2 (>150 centres): end-user analysis and simulation; connected to the Tier-1s through general-purpose research networks.

Worldwide LHC Computing Grid
More than 170 centres in 34 countries: ~86k CPUs, 68 PB disk, 65 PB tape.
Spain contributes 1 Tier-1 and 7 Tier-2 sites (target ~5% of the total T1/T2 capacity):
Tier-1 (ATLAS, CMS, LHCb): PIC
ATLAS Tier-2: IFAE, IFIC, UAM
CMS Tier-2: CIEMAT, IFCA
LHCb Tier-2: UB, USC

Distribution of resources
Experiment computing requirements for the run at the different WLCG Tiers: more than 80% of the resources are outside CERN. The Grid MUST work from day 1!

LHC computing requirements
The computing and storage capacity needs for WLCG are enormous. Capacity planning is managed through the WLCG MoU: a yearly process where requirements and pledges are updated and agreed. (The accompanying plot shows the CPU capacity in cores as of today.)

LHC Experiments Computing Models

Experiments Computing Models
Every LHC experiment develops and maintains a Computing Model that aims to describe the organisation of the data and the computing infrastructure that is needed to process and analyse them.
Example (table on the slide): input parameters to the ATLAS Computing Model.

ATLAS Computing Model (data-flow diagram on the slide)
Tier-0 / CAF: prompt reconstruction, calibration & alignment, express-stream analysis.
Tier-1: RAW re-processing, HITS reconstruction.
Tier-2: simulation and analysis.
One of the transfer rates quoted on the diagram is 650 MB/s.

CMS Computing Model (data-flow diagram on the slide)
Tier-0 / CAF: prompt reconstruction, calibration, express-stream analysis.
Tier-1: re-reconstruction, skimming & selection.
Tier-2: simulation and analysis.

LHCb Computing Model (data-flow diagram on the slide)
Tier-0 / CAF: reconstruction, stripping, analysis; calibration and express-stream analysis.
Tier-1: reconstruction, stripping, analysis.
Tier-2: simulation.
Transfer rates quoted on the diagram include 10 MB/s and a few MB/s.

Data Analysis on the Grid
The original vision: a thin application layer interacting with a powerful middleware layer. The user submits a dataset query plus their algorithms to a "super-WMS" (a Workload Management System plus other services), which runs the work and returns the output.
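A minimal, purely illustrative sketch of that idealised interface, written as a toy in-process stand-in (no real Grid middleware or experiment software is involved; all names are invented for the example):

    # Toy stand-in for the "super-WMS" of the original vision: the user hands
    # over a dataset query plus an algorithm and simply gets the output back.
    from typing import Callable, List

    def super_wms_run(dataset_query: str,
                      algorithm: Callable[[dict], float]) -> List[float]:
        # A real system would locate the data matching the query, split the
        # work into jobs, broker them to Grid sites and merge the outputs.
        # Here we just fabricate two "events" to show the calling pattern.
        fake_events = [{"dimuon_mass": 3.1}, {"dimuon_mass": 91.2}]
        return [algorithm(event) for event in fake_events]

    def my_analysis(event: dict) -> float:
        # User algorithm: extract one quantity per event.
        return event["dimuon_mass"]

    results = super_wms_run("RAW run=142193 stream=physics", my_analysis)
    print(results)  # the user only sees query + algorithm in, results out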

Data Analysis
Using the Grid at such a large scale is not an easy business. The reality today is that the LHC experiments have built increasingly sophisticated software stacks to interact with the Grid, layered on top of the basic middleware services (CE, SE, FTS, LFC):
User analysis: a single interface for the whole analysis cycle that hides the complexity of the Grid (Ganga, CRAB, DIRAC, AliEn ...).
Workload management: pilot jobs, late scheduling, VO-steered prioritisation (DIRAC, AliEn, PanDA ...); a toy illustration of the pilot-job pattern follows below.
Data management: topology-aware higher-level tools capable of managing complex data flows (PhEDEx, DDM ...).
The resulting stack, from bottom to top: computing and storage resources, Grid middleware basic services (FTS, LFC ...), VO-specific WMS and DMS, VO-specific user interface.
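As announced above, a minimal self-contained sketch of the pilot-job / late-binding pattern (a toy queue in a single process; this is not the actual DIRAC, AliEn or PanDA code): generic pilots are submitted to the sites, and only once a pilot is running on a worker node does it pull real payloads from the experiment's central task queue.

    # Toy sketch of pilot jobs with late scheduling (not real DIRAC/PanDA code).
    import queue

    # The VO's central task queue: real analysis/production payloads wait here,
    # and the experiment is free to reorder or reprioritise them at any time.
    task_queue: "queue.Queue[str]" = queue.Queue()
    for block in range(3):
        task_queue.put(f"analyse dataset block {block}")

    def pilot(site: str) -> None:
        # A generic pilot lands on a worker node, validates the environment,
        # and only then pulls concrete work ("late binding").
        print(f"[{site}] pilot started, environment OK")
        while True:
            try:
                payload = task_queue.get_nowait()
            except queue.Empty:
                print(f"[{site}] no work left, pilot exits")
                return
            print(f"[{site}] executing: {payload}")
            task_queue.task_done()

    # Pilots would normally be submitted through the Grid middleware to many
    # sites; here we simply run two of them in turn to show the flow.
    for site in ("PIC", "CIEMAT"):
        pilot(site)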

Testing the LHC Grid

WLCG Service Challenges
Large-scale test campaigns to verify the readiness of the overall LHC Computing Service to meet the requirements of the experiments.
2005, SC3: the first one in which all Tier-1 centres participated; "dummy" data were transferred to try to reach high throughput between sites.
2006, SC4: the target transfer rate of 1.6 GB/s out of CERN was reached for one day, and 80% of this rate was sustained over long periods, with more realistic data.
2008, CCRC08: focus on having all four experiments testing all workflows simultaneously and keeping the service stable for a long period.
2009, STEP09: the last chance to stress-test the system before LHC start-up, focusing on multi-experiment workloads never tested before at large scale (e.g. massive data re-reconstruction recalling from tape).

Testing data export from CERN (throughput plots in MB/s for the CCRC08 test, June 2008, and the STEP09 test, June 2009)
Example of data export CERN → Tier-1s as tested by ATLAS:
June 2008: 2 days at 1 GB/s.
June 2009: 2 weeks at 4 GB/s.

Performance: data volumes
CMS has been transferring 100-200 TB per day (about 1 PB per week) on the Grid for more than two years.
Last June, ATLAS added 4 PB in 11 days to their total of 12 PB on the Grid.
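As a rough consistency check between the two previous slides (taking the quoted figures at face value), roughly 11 days at the 4 GB/s export rate reached in June 2009 does add up to about the 4 PB quoted:

\[
4\ \mathrm{GB/s} \times 86400\ \mathrm{s/day} \times 11\ \mathrm{days}
\;\approx\; 3.8 \times 10^{6}\ \mathrm{GB} \;\approx\; 3.8\ \mathrm{PB}.
\]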

WLCG CPU Workload
The CPU accounting of all Grid sites is centrally stored and publicly available (URL on the slide).
The plot shows the monthly CPU walltime in millions of kSI2K·hours, which can be translated into an equivalent number of simultaneously busy cores. (Data up to 22-Nov-2009.)
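The conversion from accounted walltime to "simultaneously busy cores" hinted at on the slide is straightforward. As an illustration with assumed numbers (a per-core power of about 2 kSI2K is typical of 2009 hardware, and the 100 million kSI2K·hours is purely an example value, not a figure from the talk):

\[
N_{\mathrm{cores}} \;\approx\;
\frac{W\ [\mathrm{kSI2K \cdot hr/month}]}
     {P_{\mathrm{core}}\ [\mathrm{kSI2K}] \times 720\ \mathrm{hr/month}}
\;=\;
\frac{100 \times 10^{6}}{2 \times 720}
\;\approx\; 7 \times 10^{4}\ \text{simultaneously busy cores}.
\]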

Availability
Setting up and deploying robust operational tools is crucial for building reliable services on the Grid. One of the key tools for WLCG is the Service Availability Monitor (SAM).

Improving Reliability
An increasing number of more realistic sensors, plus a powerful monitoring framework that applies peer pressure, ensures that the reliability of the WLCG service will keep improving.
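For reference, the availability and reliability figures produced by this kind of monitoring are usually defined roughly as follows (a paraphrase; the precise WLCG definitions may differ in detail), with T_up the time the site passes the SAM tests:

\[
\mathrm{availability} \;=\; \frac{T_{\mathrm{up}}}{T_{\mathrm{total}}},
\qquad
\mathrm{reliability} \;=\; \frac{T_{\mathrm{up}}}{T_{\mathrm{total}} - T_{\mathrm{scheduled\ downtime}}},
\]

so reliability discounts scheduled downtimes, which is why it is the quantity compared against the monthly targets.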

CMS ↔ PIC transfers (cumulative volume plot on the slide, since January of the period shown): PB-scale volumes into PIC and 4 PB out of PIC.
This shows the level of testing of the system: almost 10 TB moved per day on average over 3 years.
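For scale (simple arithmetic on the figure just quoted), moving about 10 TB per day for three years corresponds to

\[
10\ \mathrm{TB/day} \times 3 \times 365\ \mathrm{days} \;\approx\; 1.1 \times 10^{4}\ \mathrm{TB} \;\approx\; 11\ \mathrm{PB}
\]

of data exchanged between CMS and PIC over the period.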

Data Transfers to Tier-2s
Reconstructed data are sent to the Tier-2s for analysis. This dataflow is bursty by nature, and the experiment requirements for it are very fuzzy ("as fast as possible").
–Links to all Spanish/Portuguese Tier-2s certified for sustained transfers (rate in MB/s quoted on the slide).
–CMS Computing Model: sustained transfers to more than 40 Tier-2s worldwide.
ATLAS transfers PIC → T2s: daily average 200 MB/s. CMS transfers PIC → T2s: daily average 100 MB/s.
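For scale, the quoted sustained daily averages translate directly into daily volumes:

\[
200\ \mathrm{MB/s} \times 86400\ \mathrm{s/day} \;\approx\; 17\ \mathrm{TB/day}
\quad\text{(ATLAS, PIC} \rightarrow \text{T2s)},
\qquad
100\ \mathrm{MB/s} \;\approx\; 8.6\ \mathrm{TB/day}
\quad\text{(CMS)}.
\]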

Multi-discipline Grids for scientific research

Enabling Grids for E-sciencE (EGEE)
An EU-funded project to build a production-quality Grid infrastructure for scientific research in Europe, in three phases starting in 2004.
–Outcome: the largest, most widely used multi-disciplinary Grid infrastructure in the world.
–WLCG is built on top of EGEE (and OSG in the USA).
Many VOs and applications are registered as EGEE users; look for yours in the applications database (URL on the slide).

Enabling Grids for E-sciencE (EGEE)
The EGEE project contained all of the Grid stakeholders: infrastructure, middleware and applications.
Vision beyond EGEE-III: migrate the existing production European Grid from a project-based model to a sustainable infrastructure.
–Infrastructure: the European Grid Initiative (EGI), a federated infrastructure based on National Grid Initiatives for multi-disciplinary use. The Spanish Ministry of Science and Innovation signed the EGI MoU and designated CSIC as coordinator of the Spanish NGI.
–Applications: user community organised in Specialised Support Centres (SSCs).
–Middleware: development in a separate project; Infrastructure and Applications can become its "customers".

EGI-related projects submitted to the EU
Presented by C. Loomis at the EGEE09 workshop, Sep-2009 (link on the slide).
Astrophysics: MAGIC, et al. HEP: LHC, FAIR, et al.

Spanish Network for e-Science
A network initiative funded by the Spanish Ministry of Science and Education, officially approved in December, with UPV as the coordinating institution. More than 900 researchers, 89 research groups.
Organised in four areas: Grid infrastructure, Supercomputing infrastructure, Applications and Middleware.
The Applications area coordinates the activities of the different user communities (see the active groups and applications in the Area wiki).

Astroparticles
MAGIC (IFAE, PIC, UCM, INSA):
–Data centre at PIC: data storage, reduction and access for the collaboration; resources and tools for users' analysis (in preparation); publishing data to the Virtual Observatory (in preparation).
–Monte Carlo production "on demand".
AUGER (UAH, CETA-CIEMAT):
–Running simulations on the Grid: CORSIKA, ESAF, AIRES ...
Two astroparticle presentations were given at the last meeting of the "Red Española de e-Ciencia" (Valencia, Oct-2009, see slides).

Facility for Antiproton and Ion Research (FAIR)
One of the largest projects of the ESFRI Roadmap; it will provide high-energy, high-intensity ion and antiproton beams for basic research.
The computing and storage requirements for FAIR are expected to be of the order of those of the LHC or above; a detailed evaluation is under way. Two of the experiments (PANDA and CBM) have already started using the Grid for detector simulations.
FAIR Baseline Technical Report: 2500 scientists, 250 institutions, 44 countries. Spain is one of the 14 countries that signed the agreement for the construction of FAIR, contributing 2% of the cost. Civil construction expected to start in ...; first beam expected in 2015/16.

Summary
In recent years we have been witnessing an explosion of scientific data:
–More precise and complex experiments.
–Large international collaborations, with geographically dispersed users who need to access the data.
The LHC has largely been driving the activity in the last years, with the pressure of petabytes of data (this time for real) just around the corner.
–WLCG, the largest Grid infrastructure in the world, has been deployed and is ready for storing, processing and analysing the LHC data.
Since the early 2000s, a series of EU-funded projects (EGEE) have been at the core of the deployment of a Grid for scientific research in Europe.
–The next round of EU projects focuses on consolidating this into a sustainable infrastructure: a federated model based on NGIs.
–The projects call closed yesterday. Stay tuned for activity in the "Grid Users/Applications" arena (SSCs).

Thank you. Gonzalo Merino, Port d’Informació Científica.

Backup Slides

PIC Tier-1 Reliability
The Tier-1 reliability targets have been met in most months.

T0/T1 ↔ PIC data transfers
Data import from CERN and transfers with the other Tier-1s were successfully tested above targets (rate plots on the slide):
–ATLAS daily rate CERN → PIC, June 2009; target: 76 MB/s.
–CMS daily rate CERN → PIC, June 2009; target: 60 MB/s.
–CMS data imported from and exported to the other Tier-1s; combined ATLAS+CMS+LHCb targets of ~210 MB/s and ~100 MB/s for the two directions.

Networking Tier-1 ↔ Tier-2 in Spain

EGI-User interaction
The user community is organised into a series of Specialised Support Centres (SSCs). Goals of an SSC:
–Increase the number of active users in the community.
–Promote the use of grid technologies within the community.
–Encourage cooperation within the community.
–Safeguard the grid knowledge and expertise of the community.
–Build scientific collaboration within and between communities.
An SSC will be a central, long-lived hub for grid activities within a given scientific community. (Presented by Cal Loomis at the EGEE09 conference.)
