Grid Computing. Oxana Smirnova, NDGF / Lund University. R-ECFA meeting in Sweden, Uppsala, May 9, 2008.



Computing challenges at LHC

“Full chain” of HEP data processing. Slide adapted from Ch. Collins-Tooth and J. R. Catmore.

ATLAS Monte Carlo data production flow (10 Mevents)  Very different tasks/algorithms (ATLAS experiment in this example)  A single “job” lasts from 10 minutes to 1 day  Most tasks require large amounts of input data and produce large output data
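To make the scale concrete, here is a minimal back-of-the-envelope sketch in Python; the events-per-job, CPU-time-per-event and output-size figures are illustrative assumptions, not official ATLAS numbers.

# Back-of-the-envelope sizing of a 10 Mevent Monte Carlo production.
# All figures below (events per job, CPU time per event, output per event)
# are illustrative assumptions, not official ATLAS numbers.

EVENTS_TOTAL = 10_000_000
EVENTS_PER_JOB = 1_000        # assumed job granularity
CPU_SEC_PER_EVENT = 3.0       # assumed; full detector simulation takes far longer
OUTPUT_MB_PER_EVENT = 2.0     # assumed average output size

jobs = EVENTS_TOTAL // EVENTS_PER_JOB
job_hours = EVENTS_PER_JOB * CPU_SEC_PER_EVENT / 3600
total_cpu_hours = EVENTS_TOTAL * CPU_SEC_PER_EVENT / 3600
output_tb = EVENTS_TOTAL * OUTPUT_MB_PER_EVENT / 1e6

print(f"{jobs} jobs of ~{job_hours:.1f} h each, "
      f"~{total_cpu_hours:,.0f} CPU-hours in total, ~{output_tb:.0f} TB of output")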

LHC computing specifics  Data-intensive tasks  Large datasets, large files  Lengthy processing times  Large memory consumption  High throughput is necessary  Very distributed computing and storage resources  CERN can host only a small fraction of needed resources and services  Distributed computing resources of modest size  Produced and processed data are hence distributed, too  Issues of coordination, synchronization, data integrity and authorization are outstanding
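As an illustration of why high throughput matters when data are distributed, the following sketch estimates how long it takes to move a large dataset between sites; the dataset size, nominal link speed and usable-bandwidth fraction are assumptions chosen only to illustrate the arithmetic.

# Why high throughput is necessary: moving a large dataset between sites.
# Dataset size, nominal link speed and usable-bandwidth fraction are
# assumptions for illustration only.

def transfer_hours(dataset_tb: float, link_gbit_s: float, efficiency: float = 0.7) -> float:
    bits = dataset_tb * 8e12                 # TB -> bits (decimal units)
    usable_bits_per_s = link_gbit_s * 1e9 * efficiency
    return bits / usable_bits_per_s / 3600

print(f"100 TB over a shared 10 Gbit/s link: ~{transfer_hours(100, 10):.0f} hours")
print(f"100 TB over a 1 Gbit/s university link: ~{transfer_hours(100, 1) / 24:.1f} days")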

Software for HEP experiments
 Massive pieces of software: written by very many different authors in different languages (C++, Java, Python, Fortran); dozens of external components; each release occupies as much as ~10 GB of disk space
 Frequent releases: every experiment produces a release as often as once a month during the preparation phase (which is now for LHC)
 Difficult to set up outside the lab: experiments cannot afford to support different operating systems and computer configurations
 ALICE, ATLAS, PHENIX etc. – all in many versions; for a small university group it is very difficult to manage the different software sets and maintain the hardware
 Solution: use the Grid
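A rough estimate, using the slide's own figures of ~10 GB per release and roughly monthly releases, shows the storage burden a small group would face without the Grid; the number of experiments supported and the retention period are assumptions.

# Storage footprint of hosting experiment software locally, using the slide's
# figures (~10 GB per release, roughly monthly releases). The number of
# experiments supported and the retention period are assumptions.

GB_PER_RELEASE = 10
RELEASES_PER_YEAR = 12      # "as often as once a month"
EXPERIMENTS = 3             # assumed: e.g. a group working on ALICE, ATLAS and PHENIX
YEARS_KEPT = 2              # assumed: old releases kept for reproducibility

total_gb = GB_PER_RELEASE * RELEASES_PER_YEAR * EXPERIMENTS * YEARS_KEPT
print(f"~{total_gb} GB of software to install, validate and keep in sync by hand")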

Grid is a result of IT progress
Graph from “The Triumph of the Light”, G. Stix, Sci. Am., January 2001
 Computer speed doubles every 18 months
 Network speed doubles every 9 months
 Network vs. computer performance:
   1986 to 2000: computers 500 times faster, networks … times faster
   2001 to 2010 (projected): computers 60 times faster, networks 4000 times faster
 Excellent wide-area networks provide for a distributed supercomputer – the Grid
 The “operating system” of such a computer is Grid middleware
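The doubling times quoted above translate into compound growth factors; the short sketch below evaluates 2 ** (12 * years / doubling_months) for a few time spans to show how quickly network performance pulls ahead of computer performance.

# Compound growth from a fixed doubling time: after `years`, performance has
# grown by a factor 2 ** (12 * years / doubling_months). This reproduces the
# slide's qualitative point that networks (doubling ~every 9 months) outpace
# computers (doubling ~every 18 months).

def growth_factor(years: float, doubling_months: float) -> float:
    return 2 ** (12 * years / doubling_months)

for years in (5, 10, 14):
    cpu = growth_factor(years, 18)
    net = growth_factor(years, 9)
    print(f"after {years:2d} years: computers x{cpu:,.0f}, "
          f"networks x{net:,.0f}, ratio {net / cpu:,.0f}")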

Some Grid projects; originally by Vicky White, FNAL

Grids in LHC experiments  Almost all Monte Carlo and data processing today is done via the Grid  There are 20+ Grid flavors out there  Almost all are tailored for a specific application and/or specific hardware  LHC experiments make use of 3 Grid middleware flavors:  gLite  ARC  OSG  All experiments develop their own higher-level Grid middleware layers:  ALICE – AliEn  ATLAS – PanDA and DDM  LHCb – DIRAC  CMS – ProdAgent and PhEDEx
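The following sketch illustrates, in schematic form only, why the experiments add their own layer on top of several Grid flavors: a single submission interface brokering jobs to interchangeable back-ends. All class and method names are hypothetical; this is not actual PanDA, DIRAC or AliEn code.

from abc import ABC, abstractmethod

class GridBackend(ABC):
    """One Grid flavor (ARC, gLite, OSG, ...) hidden behind a common interface."""
    @abstractmethod
    def submit(self, job: dict) -> str:
        """Submit a job description and return a backend-specific job ID."""

class ArcBackend(GridBackend):
    def submit(self, job: dict) -> str:
        return f"arc-{job['name']}"      # stand-in for an xRSL submission

class GliteBackend(GridBackend):
    def submit(self, job: dict) -> str:
        return f"glite-{job['name']}"    # stand-in for a JDL submission

class ExperimentWorkloadManager:
    """Experiment-level layer: users submit here and never see the Grid flavor."""
    def __init__(self, backends: list):
        self.backends = backends

    def submit_all(self, jobs: list) -> list:
        # trivial round-robin brokering, just to illustrate the layering
        return [self.backends[i % len(self.backends)].submit(job)
                for i, job in enumerate(jobs)]

wm = ExperimentWorkloadManager([ArcBackend(), GliteBackend()])
print(wm.submit_all([{"name": f"simul_{i:03d}"} for i in range(4)]))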

ATLAS Experiment at CERN – Multi-Grid Infrastructure. Graphics from a slide by A. Vaniachine.

Nordic DataGrid Facility (NDGF)
 Provides a unique distributed “Tier1” center via NorduGrid/ARC
 Involves the 7 largest Nordic academic HPC centers …plus a handful of university centers (Tier2 service)
 Connected to CERN directly with GEANT 10 Gbit fiber
 Inter-Nordic shared 10 Gbit network from NORDUnet
 Budget: staff only, 2 MEUR/year, by Nordic research councils

Swedish contribution: SweGrid
Investments (cost in KSEK):
 Six clusters (6×100 cores) including 12 TB FC disk – Dec
 Disk storage part 1, 60 TB SATA – May
 Disk storage part 2, 86.4 TB SATA – May
Tape storage per centre (volume in TB, cost in KSEK): HPC2N, PDC, NSC
SweGrid sites (location – profile):
 HPC2N (Umeå) – IT
 UPPMAX (Uppsala) – IT, HEP
 PDC (Stockholm) – IT
 C3SE (Gothenburg) – IT
 NSC (Linköping) – IT
 Lunarc (Lund) – IT, HEP
 Co-funded by the Swedish Research Council and the Knut and Alice Wallenberg Foundation
 One technician per center
 Middleware: ARC, gLite
 1/3 allocated to LHC Computing

SweGrid and NDGF usage

Swedish contribution to LHC-related Grid R&D  NorduGrid (Lund, Uppsala, Umeå, Linköping, Stockholm and others)  Produces the ARC middleware; 3 core developers are in Sweden  SweGrid: tools for Grid accounting, scheduling and distributed databases  Used by NDGF and other projects  NDGF: interoperability solutions  EU KnowARC (Lund, Uppsala + 7 partners)  A 3 MEUR, 3-year project that develops the next-generation ARC  The project's technical coordinator is in Lund  EU EGEE (Umeå, Linköping, Stockholm)

Summary and outlook  Grid technology is vital for the success of the LHC  Sweden contributes very substantially with hardware, operational support and R&D  Very high efficiency  Sweden signed an MoU with the LHC Computing Grid in March 2008  Pledge of long-term computing service for the LHC  SweGrid2 is coming  A major upgrade of SweGrid resources  The Research Council granted 22.4 MSEK for investments and operation  A further 43 MSEK is being requested for subsequent years  Includes not just Tier1, but also Tier2 and Tier3 support