The LHC Computing Grid – Visit of Prof. Friedrich Wagner

1 The LHC Computing Grid – Visit of Prof. Friedrich Wagner
President-elect of the European Physical Society. Frédéric Hemmer, Deputy Head, IT Department. September 20, 2006.

2 Outline
The Challenge; The (W)LCG Project; The LCG Infrastructure; The LCG Service; Beyond LCG.

3 New frontiers in data handling
ATLAS experiment: ~150 million sensors read out at 40 MHz produce ~10 million Gigabytes per second. Massive on-line data reduction still leaves ~1 Gigabyte per second to handle.
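A rough sanity check of these rates, as a minimal Python sketch (the bytes-per-reading figure is an assumption, not stated on the slide):

```python
# Back-of-the-envelope check of the quoted ATLAS data rates.
sensors = 150e6              # ~150 million sensors (approximate)
crossing_rate = 40e6         # 40 MHz LHC bunch-crossing frequency
bytes_per_reading = 1.5      # assumed average payload per sensor per crossing

raw_rate = sensors * crossing_rate * bytes_per_reading      # bytes per second
print(f"raw detector output ~ {raw_rate / 1e9:.1e} GB/s")   # ~10 million GB/s

recorded_rate = 1e9          # ~1 GB/s left after on-line selection
print(f"on-line reduction factor ~ {raw_rate / recorded_rate:.0e}")  # ~10^7
```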

4 The Data Challenge
The accelerator will be completed in 2007 and run for years. The LHC experiments will produce roughly 15 million Gigabytes of data each year (about 20 million CDs!). LHC data analysis requires computing power equivalent to ~100,000 of today’s fastest PC processors, and therefore many cooperating computer centres, with CERN providing only ~20% of the computing resources.
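The slide’s equivalences can be checked with simple arithmetic; in the sketch below the CD capacity (~700 MB) is my assumption:

```python
# Worked check of the "20 million CDs" and "~20% at CERN" figures.
annual_data_gb = 15e6        # ~15 million Gigabytes of LHC data per year
cd_capacity_gb = 0.7         # ~700 MB per CD (assumption)
print(f"CDs per year ~ {annual_data_gb / cd_capacity_gb / 1e6:.0f} million")  # ~21 million

total_cpus = 100_000         # ~100,000 of 2006's fastest PC processors
cern_share = 0.20            # CERN provides only ~20% of the resources
print(f"CPUs needed outside CERN ~ {total_cpus * (1 - cern_share):,.0f}")     # ~80,000
```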

5 Data Handling and Computation for Physics Analysis
(flow diagram) detector → event filter (selection & reconstruction) → raw data → reconstruction → event summary data → batch physics analysis → analysis objects (extracted by physics topic) → interactive physics analysis; event reprocessing and event simulation feed back into the chain, and the intermediate outputs form the processed data.

6 CMS Tier-0 – September 2006
(CSA ’06 dataflow diagram) HLT output streams (minimum bias, signal, “HLT soup”) arrive in the Tier-0 input buffers (12 nodes); prompt calibration and prompt reconstruction, with merge steps, produce FEVT and AOD datasets; these pass through the Tier-0 export buffers (35 nodes) for Castor tape archiving and export to the Tier-1s. Pre-CSA ’06 activities are shown alongside.

7 LCG Service Hierarchy
Tier-0 – the accelerator centre: data acquisition & initial processing, long-term data curation, and distribution of data to the Tier-1 centres.
Tier-1 – “online” to the data acquisition process, so high availability is required: managed, grid-enabled mass storage; all re-processing passes; data-heavy analysis; national and regional support. Centres: Canada – TRIUMF (Vancouver); France – IN2P3 (Lyon); Germany – Karlsruhe; Italy – CNAF (Bologna); Netherlands – NIKHEF/SARA (Amsterdam); Nordic countries – distributed Tier-1; Spain – PIC (Barcelona); Taiwan – Academia Sinica (Taipei); UK – CLRC (Oxford); US – FermiLab (Illinois) and Brookhaven (NY).
Tier-2 – ~100 centres in ~40 countries: simulation; end-user analysis (batch and interactive); services such as data archive and delivery are obtained from the Tier-1s.
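Purely as an illustration of the hierarchy described above (not an LCG data format or API), the roles and sites can be written down as a small data structure:

```python
# Tier hierarchy from the slide, encoded as plain Python data for illustration.
TIERS = {
    "Tier-0": {
        "sites": ["CERN"],
        "roles": ["data acquisition & initial processing",
                  "long-term data curation",
                  "distribution of data to Tier-1 centres"],
    },
    "Tier-1": {
        "sites": ["TRIUMF", "IN2P3", "Karlsruhe", "CNAF", "NIKHEF/SARA",
                  "Nordic distributed Tier-1", "PIC", "Academia Sinica",
                  "CLRC", "FermiLab", "Brookhaven"],
        "roles": ["managed, grid-enabled mass storage",
                  "all re-processing passes",
                  "data-heavy analysis",
                  "national and regional support"],
    },
    "Tier-2": {
        "sites": ["~100 centres in ~40 countries"],
        "roles": ["simulation", "end-user analysis (batch and interactive)"],
    },
}

def tiers_with_role(keyword):
    """List the tiers whose role descriptions mention the given keyword."""
    return [t for t, info in TIERS.items()
            if any(keyword in r for r in info["roles"])]

print(tiers_with_role("analysis"))   # ['Tier-1', 'Tier-2']
```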

8 Summary of Computing Resource Requirements
All experiments, from the LCG TDR (June 2005) – in total ~100K of today’s fastest processors:

                        CERN   All Tier-1s   All Tier-2s   Total
  CPU (MSPECint2000s)     25            56            61     142
  Disk (PetaBytes)         7            31            19      57
  Tape (PetaBytes)        18            35             –      53
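The table is easy to cross-check; the short sketch below verifies that the CERN, Tier-1 and Tier-2 columns add up to the quoted totals (the zero Tier-2 tape entry is my reading of the blank cell):

```python
# Resource requirements from the LCG TDR table above, with a consistency check.
requirements = {                      # (CERN, all Tier-1s, all Tier-2s, total)
    "CPU (MSPECint2000s)": (25, 56, 61, 142),
    "Disk (PetaBytes)":    (7, 31, 19, 57),
    "Tape (PetaBytes)":    (18, 35, 0, 53),   # Tier-2 tape not quoted; assumed 0
}

for resource, (cern, tier1, tier2, total) in requirements.items():
    assert cern + tier1 + tier2 == total, resource
    print(f"{resource}: CERN fraction = {cern / total:.0%}")
```

The CPU row roughly reproduces the earlier statement that CERN provides only ~20% of the computing resources.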

9 WLCG Collaboration
The Collaboration: the 4 LHC experiments; ~120 computing centres – 12 large centres (Tier-0, Tier-1) plus 38 federations of smaller “Tier-2” centres – growing to ~40 countries.
Memorandum of Understanding: agreed in October 2005, now being signed.
Resources: commitments are made each October for the coming year, with a 5-year forward look.

10 (no transcribed text on this slide)

11 The new European Network Backbone
LCG working group with the Tier-1s and national/regional research network organisations. New GÉANT 2 research network backbone – strong correlation with the major European LHC centres; Swiss PoP at CERN.

12 LHC Computing Grid Project - a Collaboration
Building and operating the LHC Grid is a global collaboration between: the physicists and computing specialists from the LHC experiments; the national and regional projects in Europe and the US that have been developing Grid middleware; the regional and national computing centres that provide resources for LHC; and the research networks. It brings together researchers, computer scientists & software engineers, and service providers.

13 LCG depends on 2 major science grid infrastructures …
The LCG service runs on, and relies on, grid infrastructure provided by: EGEE – Enabling Grids for E-sciencE, and OSG – the US Open Science Grid.

14 LCG Service Planning
2006 – Pilot services: stable service from 1 June 06; cosmics. LHC service in operation from 1 Oct 06, with ramp-up to full operational capacity & performance over the following six months.
2007 – LHC service commissioned by 1 Apr 07; first physics.
2008 – Full physics run.

15 Service Challenge 4 (SC4) – the Pilot LHC Service from June 2006
A stable service on which the experiments can make a full demonstration of their offline chain: DAQ → Tier-0 → Tier-1 (data recording, calibration, reconstruction) and offline analysis with Tier-1 ↔ Tier-2 data exchange (simulation, batch and end-user analysis).
Sites can also test their operational readiness: LCG services (monitoring, reliability), Grid services, and mass storage services including magnetic tape, with extension to most Tier-2 sites.
Targets for the service by end September: service metrics at 90% of MoU service levels, and data distribution from CERN to tape at the Tier-1s at nominal LHC rates.

16 CERN Tier-1 Data Distribution
Sustaining ~900 MBytes/sec The data rate required during LHC running for all four experiments is 1.6 GBytes/sec – -- which was demonstrated during a test period in April ATLAS alone has moved 1 PetaByte of data during its data challenge between 19 June and 7 August CMS has moved 3.8 PetaBytes of data between sites during a four months period this week The LHC Computing Grid – September 2006
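For context, the average rates implied by these volumes can be estimated from the quoted dates (the four-month CMS window is approximated as 120 days):

```python
# Average transfer rates implied by the volumes quoted on the slide.
from datetime import date

atlas_bytes = 1e15                                        # ~1 PB in the ATLAS data challenge
atlas_days = (date(2006, 8, 7) - date(2006, 6, 19)).days  # 19 June to 7 August
print(f"ATLAS average ~ {atlas_bytes / (atlas_days * 86400) / 1e6:.0f} MB/s")  # ~240 MB/s

cms_bytes = 3.8e15                                        # ~3.8 PB between sites
cms_days = 120                                            # assumed ~four months
print(f"CMS average ~ {cms_bytes / (cms_days * 86400) / 1e6:.0f} MB/s")        # ~370 MB/s

nominal = 1.6e9                                           # required rate during LHC running
print(f"nominal LHC rate = {nominal / 1e6:.0f} MB/s")                          # 1600 MB/s
```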

17 CMS Data Transfers

18 Production Grids for LHC – what has been achieved
Basic middleware: a set of baseline services agreed and initial versions in production. Pro-active grid operation, distributed across several sites, with all major LCG sites active: ~50K jobs/day and >10K simultaneous jobs during prolonged periods on the EGEE grid. A reliable data distribution service demonstrated at 1.6 GB/sec CERN → Tier-1s, mass storage to mass storage – the nominal LHC data rate.

19 Impact of the LHC Computing Grid in Europe
LCG has been the driving force for the European multi-science Grid EGEE (Enabling Grids for E-sciencE). EGEE is now a global effort, and the largest Grid infrastructure worldwide, co-funded by the European Commission (~130 M€ over 4 years). EGEE is already used for >20 applications, including bio-informatics, education & training, and medical imaging.

20 The EGEE Project
Infrastructure operation: currently >200 sites across 40 countries, with continuous monitoring of grid services and automated site configuration/management.
Middleware: production-quality middleware distributed under a business-friendly open source licence.
User support: a managed process from first contact through to production usage – training, documentation, expertise in grid-enabling applications, an online helpdesk, and networking events (User Forum, conferences etc.).
Interoperability: expanding interoperability with related infrastructures.

21 EGEE Grid Sites: Q1 2006
(charts of site and CPU counts) Steady growth over the lifetime of the project; EGEE now comprises >180 sites in 40 countries, with >24,000 processors and ~5 PB of storage.

22 Applications on EGEE
More than 20 applications from 7 domains: Astrophysics (MAGIC, Planck); Computational Chemistry; Earth Sciences (Earth observation, solid Earth physics, hydrology, climate); Financial Simulation (E-GRID); Fusion; Geophysics (EGEODE); High Energy Physics (the 4 LHC experiments – ALICE, ATLAS, CMS, LHCb – plus BaBar, CDF, DØ, ZEUS); Life Sciences (bioinformatics: drug discovery, Xmipp_MLrefine, etc.; medical imaging: GATE, CDSS, gPTM3D, SiMRI 3D, etc.); Multimedia; Material Sciences.

23 Example: EGEE Attacks Avian Flu
EGEE was used to analyse 300,000 potential drug compounds against the bird flu virus H5N1. 2000 computers at 60 computer centres in Europe, Russia, Taiwan and Israel ran for four weeks in April – the equivalent of 100 years on a single computer. Potential drug compounds are now being identified and ranked. (Image: neuraminidase, one of the two major surface proteins of influenza viruses, which facilitates the release of virions from infected cells. Courtesy Ying-Ta Wu, Academia Sinica.)
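A quick check of the “100 years on a single computer” figure, using simple arithmetic on the slide’s numbers (the utilisation factor is my assumption):

```python
# 2000 machines running for four weeks, expressed in single-computer years.
computers = 2000
weeks = 4
raw_years = computers * weeks / 52       # ~154 computer-years of wall-clock capacity
print(f"raw capacity ~ {raw_years:.0f} computer-years")

utilisation = 0.65                       # assumed effective utilisation of the grid
print(f"effective ~ {raw_years * utilisation:.0f} computer-years")   # ~100, as quoted
```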

24 Example: Geocluster industrial application
Geocluster is the first industrial application successfully running on EGEE. It was developed by the Compagnie Générale de Géophysique (CGG) in France, which performs geophysical simulations for the oil, gas, mining and environmental industries. EGEE technology helps CGG to federate its computing resources around the globe.

25 Fusion and the GRID: Present Pilot Applications
Applications with distributed calculations: Monte Carlo, separate estimates, …
Multiple ray tracing, e.g. TRUBA (J. L. Vázquez-Poleti, “Massive Ray Tracing in Fusion Plasmas on EGEE”, User Forum 2006).
Stellarator optimization: VMEC (V. Voznesensky, “Genetic Stellarator Optimisations in Grid”, User Forum 2006).
Transport and kinetic theory: Monte Carlo codes.

26 Evolution towards European e-Infrastructure Coordination
(roadmap diagram) EDG → EGEE → EGEE-II → EGEE-III: an evolution from testbeds, through routine usage, towards a utility service and coordinated European e-Infrastructure.

27 (no transcribed text on this slide)

