Slide 1: Grid Computing / Computing at the Petascale. Jürgen Knobloch, CERN IT Department. Presented at Physics at the Terascale, DESY, Hamburg, December 4, 2007.
Slide 2: LHC gets ready …
Slide 3: … what about Computing? Outline: the challenge; starting to get the picture; going for the Grid; are we there?; what other use is the Grid?; where do we go from here?
Slide 4: LHC Computing Challenge
Slide 5: Timeline of LHC Computing. Milestones: LHC approved; ATLAS & CMS approved; ALICE approved; LHCb approved; LHC start. Successive estimates of the ATLAS (or CMS) requirements for the first year at design luminosity: ATLAS & CMS CTP: 10^7 MIPS, 100 TB disk; "Hoffmann" Review: 7×10^7 MIPS, 1,900 TB disk; Computing TDRs: 55×10^7 MIPS (140 MSi2K), 70,000 TB disk.
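The growth of these successive estimates can be checked with simple arithmetic (a sketch; the MIPS and disk figures are the ones quoted on the slide):

```python
# Growth of the ATLAS/CMS computing estimates between the CTP and the TDRs.
ctp_mips, ctp_disk_tb = 1e7, 100        # ATLAS & CMS CTP estimate
tdr_mips, tdr_disk_tb = 55e7, 70_000    # Computing TDRs estimate

cpu_growth = tdr_mips / ctp_mips        # factor by which the CPU estimate grew
disk_growth = tdr_disk_tb / ctp_disk_tb # factor by which the disk estimate grew

print(cpu_growth, disk_growth)          # 55.0 700.0
```

So between the first technical proposals and the Computing TDRs, the CPU estimate grew by a factor 55 and the disk estimate by a factor 700.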
Slide 6: Options as seen in 1996
Slide 7: Evolution of CPU Capacity at CERN (slide from Les Robertson). [Chart spanning the accelerator eras SC (0.6 GeV), PS (28 GeV), ISR (300 GeV), SPS (400 GeV), p-pbar (540 GeV), LEP (100 GeV), LEP II (200 GeV), LHC (14 TeV). Costs in 2007 Swiss Francs, including infrastructure (computer centre, power, cooling, …) and physics tapes.]
Slide 8: Requirements Match? The tape and disk requirements exceed what CERN alone can provide by more than a factor of 10; substantial computing is required outside CERN. (Data compiled by Les Robertson.)
Slide 9: Timeline - Grids. Projects: EU DataGrid; GriPhyN, iVDGL, PPDG; LCG 1, LCG 2; EGEE 1, EGEE 2; Grid3, OSG; EGEE, WLCG. Milestones: Data Challenges; Service Challenges; CCRC08 (Common Computing Readiness Challenge); cosmics; first physics.
Slide 10: Centres around the world form a global computer system. The EGEE and OSG projects are the basis of the Worldwide LHC Computing Grid project (WLCG). Inter-operation between grids is working!
Slide 11: WLCG Collaboration
Slide 12: Events at LHC. Design luminosity: 10^34 cm^-2 s^-1. Bunch-crossing rate: 40 MHz, i.e. one crossing every 25 ns, with about 20 proton-proton interactions overlaying in each crossing.
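The rates on this slide fit together as follows (a sketch using the design figures quoted above):

```python
# LHC bunch-crossing and interaction rates at design luminosity.
bunch_spacing_s = 25e-9                  # one crossing every 25 ns
crossing_rate_hz = 1 / bunch_spacing_s   # ≈ 40 MHz
pileup = 20                              # ~20 pp interactions overlay each crossing
interaction_rate_hz = crossing_rate_hz * pileup   # ≈ 8e8 interactions per second

print(crossing_rate_hz / 1e6, "MHz")
print(interaction_rate_hz, "interactions/s")
```

Eight hundred million interactions per second is what makes the trigger (next slide) indispensable: only a tiny selected fraction can ever be written to storage.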
Slide 13: Trigger & Data Acquisition
Slide 14: Tier-0 Recording: GB/sec (ions)
Slide 15: Tiers. Tier-0 (CERN): data recording, first-pass reconstruction, data distribution. Tier-1 (11 centres): permanent storage, re-processing, analysis. Tier-2 (>200 centres): simulation, end-user analysis.
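To get a feeling for the Tier-0 recording load, a back-of-the-envelope estimate (the ~320 MB/s recording rate and ~10^7 s of running per year are illustrative assumptions, not figures taken from the slides):

```python
# Back-of-the-envelope raw-data volume for one experiment per year.
# Assumed, illustrative values (not from the slides):
rate_mb_per_s = 320       # average recording rate to Tier-0
seconds_per_year = 1e7    # typical accelerator live time per year

annual_pb = rate_mb_per_s * seconds_per_year / 1e9   # MB -> PB
print(annual_pb, "PB/year for one experiment under these assumptions")
```

A few petabytes per experiment per year is the scale that makes the distributed Tier-1/Tier-2 storage and processing structure necessary.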
Slide 16: Requirements 2006. [Chart of CPU and disk requirements; CERN's share is ~10% of the total.]
Slide 17: Moore's Law Delivered for CPU & Disk (slides from 1996). Expectations fulfilled! "We are now here."
Slide 18: Network was a Concern … "We are now here."
Slide 19: Tier-0-1: Optical Private Network
Slide 20: OPN Traffic
Slide 21: Data Transfer out of Tier-0
Slide 22: Middleware
– Security: Virtual Organization Membership Service (VOMS); MyProxy
– Data management: file catalogue (LFC); File Transfer Service (FTS); Storage Element (SE); Storage Resource Management (SRM)
– Job management: Workload Management System (WMS); Logging and Bookkeeping (LB); Computing Element (CE); Worker Nodes (WN)
– Information system: BDII (Berkeley Database Information Index) and R-GMA (Relational Grid Monitoring Architecture) aggregate service information from multiple grid sites, now moved to SAM (Site Availability Monitoring); monitoring & visualization (Gridview, Dashboard, Gridmap, etc.)
Slide 23: Site Reliability
Slide 24: Site Reliability. Note: for a site to be reliable, many things have to work simultaneously!
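The note above can be quantified: if a site depends on N services that must all work at once, its overall reliability is roughly the product of the individual reliabilities (a sketch with made-up numbers):

```python
# Overall site reliability as a product of independent component reliabilities.
# The component values are made up for illustration.
components = [0.98] * 10   # e.g. storage, CE, WMS, network, information system, ...

site_reliability = 1.0
for r in components:
    site_reliability *= r

# Ten 98%-reliable services combine to only ~82% site reliability.
print(round(site_reliability, 3))   # 0.817
```

This is why per-site reliability monitoring (next slides) focuses on every service at once rather than any single one.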
Slide 25: ARDA Dashboard
Slide 26: Gridmap
Slide 27: Increasing workloads; about ⅓ of the workload is non-LHC. (Data compiled by Ian Bird.)
Slide 28: Many Grid Applications. At present there are about 20 applications from more than 10 domains on the EGEE Grid infrastructure:
– Astronomy & astrophysics: MAGIC, Planck
– Computational chemistry
– Earth sciences: Earth observation, solid-Earth physics, hydrology, climate
– Fusion
– High-energy physics: the 4 LHC experiments (ALICE, ATLAS, CMS, LHCb), BaBar, CDF, DØ, ZEUS
– Life sciences: bioinformatics (drug discovery, Xmipp_MLrefine, etc.)
– Condensed-matter physics
– Computational fluid dynamics
– Computer science/tools
– Civil protection
– Finance (through the Industry Task Force)
Slide 29: Grid Applications. [Collage: medicine, seismology, chemistry, astronomy, fusion, particle physics.]
Slide 30: Available EGEE Infrastructure. (Data compiled by Ian Bird.)
Slide 31: Ramp-up Needed for Startup. [Charts of target usage vs. actual usage and pledged vs. installed capacity over July, September and April; ramp-up factors of 2.9× and 3.7× are needed.]
Slide 32: 3D - Distributed Deployment of Databases for LCG. Oracle streaming with downstream capture (ATLAS, LHCb); Squid/Frontier web caching (CMS).
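The Squid/Frontier approach mentioned here is essentially a read-through web cache sitting between the jobs and the central database: the first request for a piece of conditions data goes to the database, and repeats are served locally. A minimal sketch of the idea (the class and key names are hypothetical illustrations, not the Frontier API):

```python
# Read-through cache sketch: repeated lookups of the same conditions-data key
# hit the local cache instead of the central database (hypothetical names).
class ReadThroughCache:
    def __init__(self, backend_fetch):
        self._fetch = backend_fetch   # call into the "central database"
        self._cache = {}
        self.backend_calls = 0        # count how often the backend is contacted

    def get(self, key):
        if key not in self._cache:    # cache miss: go to the backend once
            self.backend_calls += 1
            self._cache[key] = self._fetch(key)
        return self._cache[key]       # cache hit: served locally

db = ReadThroughCache(lambda key: f"payload-for-{key}")
db.get("alignment-run-1234")
db.get("alignment-run-1234")   # second lookup served from cache
print(db.backend_calls)        # 1
```

Because many jobs at a site request the same conditions data, this kind of caching keeps the load on the central database nearly independent of the number of jobs.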
Slide 33: The Next Step
Slide 34: EGI - Consortium Members: Johannes Kepler Universität Linz (GUP); Greek Research and Technology Network S.A. (GRNET); Istituto Nazionale di Fisica Nucleare (INFN); CSC - Scientific Computing Ltd. (CSC); CESNET, z.s.p.o. (CESNET); European Organization for Nuclear Research (CERN); Verein zur Förderung eines Deutschen Forschungsnetzes - DFN-Verein (DFN); Science & Technology Facilities Council (STFC); Centre National de la Recherche Scientifique (CNRS)
Slide 35: EGI – European Grid Initiative. The EGI Design Study started in September 2007 to establish a sustainable pan-European grid infrastructure after the end of EGEE-3 in 2010. The main foundations of EGI are 37 National Grid Initiatives (NGIs). The project is funded by the European Commission's 7th Framework Programme.
Slide 36: Tier-1 Centres: TRIUMF (Canada); GridKa (Germany); IN2P3 (France); CNAF (Italy); SARA/NIKHEF (NL); Nordic Data Grid Facility (NDGF); ASCC (Taipei); RAL (UK); BNL (US); FNAL (US); PIC (Spain). The Grid is now in operation, working on reliability, scaling up, and sustainability. Computing at the Terra-Scale.