Grid Computing
Oxana Smirnova, NDGF / Lund University
R-ECFA meeting in Sweden, Uppsala, May 9, 2008
Computing challenges at LHC
“Full chain” of HEP data processing (slide adapted from Ch. Collins-Tooth and J. R. Catmore)
ATLAS Monte Carlo data production flow (10 Mevents)
- Very different tasks/algorithms (the ATLAS experiment in this example)
- A single “job” lasts from 10 minutes to 1 day
- Most tasks require large amounts of input data and produce large output data
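To make these job characteristics concrete, here is a minimal sketch (Python) that generates an ARC xRSL job description of the kind used to submit such tasks; the wrapper script, file names and resource limits are illustrative assumptions, not actual ATLAS production values.

    # Sketch: build a minimal xRSL job description for an ARC Grid job.
    # File names and limits are illustrative, not real ATLAS production values.
    def make_xrsl(job_name, input_file, output_file, cpu_minutes):
        """Request input/output staging and a CPU-time limit for one job."""
        return (
            f'&(executable="run_simulation.sh")'   # hypothetical wrapper script
            f'(jobName="{job_name}")'
            f'(inputFiles=("{input_file}" ""))'    # "" = staged in from the client
            f'(outputFiles=("{output_file}" ""))'  # "" = kept for client retrieval
            f'(cpuTime="{cpu_minutes}")'           # minutes; 10 min to 1 day per slide
            f'(memory="2000")'                     # MB, illustrative
        )

    print(make_xrsl("atlas-mc-0001", "events.in", "events.out", 240))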
LHC computing specifics
Data-intensive tasks:
- Large datasets, large files
- Lengthy processing times
- Large memory consumption
- High throughput is necessary (see the rough estimate after this list)
Very distributed computing and storage resources:
- CERN can host only a small fraction of the needed resources and services
- Distributed computing resources are of modest size
- Produced and processed data are hence distributed, too
- Issues of coordination, synchronization, data integrity and authorization are outstanding
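A back-of-envelope estimate (Python) shows why high throughput is unavoidable; the event count, event size and target time below are assumed for illustration only.

    # Back-of-envelope: aggregate throughput needed for one production campaign.
    # All three inputs are assumed for illustration.
    n_events = 10_000_000      # events to process
    event_size_mb = 2.0        # MB per event, assumed
    wall_time_h = 24.0         # target: finish within one day

    total_tb = n_events * event_size_mb / 1e6
    rate_mb_s = n_events * event_size_mb / (wall_time_h * 3600)
    print(f"dataset: {total_tb:.0f} TB, sustained rate: {rate_mb_s:.0f} MB/s")
    # ~20 TB at ~230 MB/s sustained: beyond any single site, hence distributed resources.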
Software for HEP experiments
Massive pieces of software:
- Written by very many different authors, in different languages (C++, Java, Python, Fortran)
- Dozens of external components
- Each release occupies as much as ~10 GB of disk space
Frequent releases:
- Every experiment produces a release as often as once a month during the preparation phase (which is now for the LHC)
Difficult to set up outside the lab:
- Experiments cannot afford to support different operating systems and computer configurations
- ALICE, ATLAS, PHENIX etc. – all in many versions
- For a small university group it is very difficult to manage different software sets and maintain hardware
Solution: use the Grid (see the sketch below)
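One way the Grid helps here is ARC's "runtime environment" mechanism: a site pre-installs and validates an experiment release once, and each job merely requests it by name instead of carrying the multi-gigabyte installation along. A minimal sketch, with a hypothetical release tag:

    # Sketch: request a pre-installed experiment release via an ARC
    # "runtime environment" tag; the tag name below is hypothetical.
    xrsl = (
        '&(executable="run_analysis.sh")'               # hypothetical wrapper script
        '(runTimeEnvironment="APPS/HEP/ATLAS-14.2.0")'  # site-provided release
        '(cpuTime="600")'                               # minutes, illustrative
    )
    print(xrsl)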
Grid is a result of IT progress
- Computer speed doubles every 18 months
- Network speed doubles every 9 months
Network vs. computer performance:
- 1986 to 2000: computers 500 times faster, networks 340,000 times faster
- 2001 to 2010 (projected): computers 60 times faster, networks 4,000 times faster
Excellent wide-area networks thus provide for a distributed supercomputer – the Grid; the “operating system” of such a computer is Grid middleware.
(Graph from “The Triumph of the Light”, G. Stix, Scientific American, January 2001)
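These factors follow directly from the two doubling times; the short check below (Python) reproduces them to within rounding.

    # Check that the quoted speed-ups follow from the doubling times.
    for label, years in (("1986-2000", 14), ("2001-2010", 9)):
        cpu = 2 ** (years * 12 / 18)   # computer speed doubles every 18 months
        net = 2 ** (years * 12 / 9)    # network speed doubles every 9 months
        print(f"{label}: computers ~{cpu:.0f}x, networks ~{net:.0f}x")
    # 1986-2000: ~645x / ~416000x; 2001-2010: ~64x / ~4096x:
    # the order of magnitude of the quoted 500x/340,000x and 60x/4,000x.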
Some Grid projects (originally by Vicky White, FNAL)
Grids in LHC experiments
- Almost all Monte Carlo and data processing today is done via the Grid
- There are 20+ Grid flavors out there; almost all are tailored for a specific application and/or specific hardware
- The LHC experiments make use of 3 Grid middleware flavors: gLite, ARC and OSG
- All experiments develop their own higher-level Grid middleware layers:
  - ALICE – AliEn
  - ATLAS – PanDA and DDM
  - LHCb – DIRAC
  - CMS – ProdAgent and PhEDEx
ATLAS Experiment at CERN: a Multi-Grid Infrastructure (graphics from a slide by A. Vaniachine)
Nordic DataGrid Facility
- Provides a unique distributed “Tier1” center via NorduGrid/ARC
- Involves the 7 largest Nordic academic HPC centers, plus a handful of university centers (Tier2 service)
- Connected directly to CERN with a 10 Gbit GEANT fiber link
- Inter-Nordic shared 10 Gbit network from NORDUnet
- Budget: staff only, 2 MEUR/year, funded by the Nordic research councils
Swedish contribution: SweGrid

SweGrid investments, 2003-2007:
  Investment                                            Time      Cost, KSEK
  Six clusters (6 x 100 cores) including 12 TB FC disk  Dec 2003  10 173
  Disk storage part 1, 60 TB SATA                       May 2004   2 930
  Disk storage part 2, 86.4 TB SATA                     May 2005   2 119

Tape storage:
  Centre  Tape volume, TB  Cost, KSEK
  HPC2N   120              1 000
  PDC     120              1 000
  NSC     120              1 000

Participating centers:
  Location           Profile
  HPC2N (Umeå)       IT
  UPPMAX (Uppsala)   IT, HEP
  PDC (Stockholm)    IT
  C3SE (Gothenburg)  IT
  NSC (Linköping)    IT
  Lunarc (Lund)      IT, HEP

- Co-funded by the Swedish Research Council and the Knut and Alice Wallenberg Foundation
- One technician per center
- Middleware: ARC, gLite
- 1/3 allocated to LHC computing
SweGrid and NDGF usage
Swedish contribution to LHC-related Grid R&D
- NorduGrid (Lund, Uppsala, Umeå, Linköping, Stockholm and others): produces the ARC middleware; 3 of the core developers are in Sweden
- SweGrid: tools for Grid accounting, scheduling and distributed databases; used by NDGF and other projects
- NDGF: interoperability solutions
- EU KnowARC (Lund, Uppsala + 7 partners): a 3 MEUR, 3-year project developing the next-generation ARC; the project's technical coordinator is in Lund
- EU EGEE (Umeå, Linköping, Stockholm)
Summary and outlook
- Grid technology is vital for the success of the LHC
- Sweden contributes very substantially with hardware, operational support and R&D, with very high efficiency
- Sweden signed the MoU with the LHC Computing Grid in March 2008: a pledge of long-term computing service for the LHC
- SweGrid2 is coming, a major upgrade of SweGrid resources:
  - The Research Council granted 22.4 MSEK for investments and operation in 2007-2008
  - 43 MSEK more is being requested for the years 2009-2011
  - Covers not just Tier1, but also Tier2 and Tier3 support