Enabling e-Research over GridPP
Dan Tovey, University of Sheffield
28th March 2006

ATLAS

The Large Hadron Collider (LHC) is under construction at CERN in Geneva. When it commences operation in 2007 it will be the world's highest-energy collider. Sheffield is a key member of the ATLAS collaboration, which is building one of the two general-purpose detectors on the LHC ring. The main motivations for building the LHC and ATLAS are:
– Finding the Higgs boson
– Finding evidence for Supersymmetry, believed to be the next great discovery / layer in our understanding of the universe.

ATLAS @ Sheffield

Sheffield leads Supersymmetry (SUSY) searches at ATLAS, and also coordinates all ATLAS physics activities in the UK, including the Higgs and SUSY searches. Sheffield is responsible for building the ATLAS Semiconductor Tracker (SCT) detector and for writing event reconstruction software.
[Event displays: a Standard Model (SM) event and a SUSY event (= Nobel Prize). NB: this is a simulation!]

Construction

Event Selection

[Figure: event selection spans 9 orders of magnitude.]

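To make that selection factor concrete, here is a back-of-the-envelope sketch in Python. The 40 MHz collision rate and 100 kHz Level-1 rate come from the Data Deluge slide below; the one-in-10^9 signal fraction and the ~10^7 live seconds per year are illustrative assumptions, not numbers from this talk.

```python
# Illustration of "9 orders of magnitude" of event selection.
# Assumed (not from this talk): signal_fraction, live seconds per year.
collision_rate_hz = 40e6   # LHC collision rate (Data Deluge slide)
level1_rate_hz = 100e3     # Level-1 trigger output rate (Data Deluge slide)
signal_fraction = 1e-9     # assumed: ~1 interesting event per 10^9 collisions

live_seconds = 1e7                              # assumed live time per year
collisions = collision_rate_hz * live_seconds   # ~4e14 collisions/year
signal = collisions * signal_fraction           # ~4e5 signal events/year

print(f"Level-1 alone cuts the rate by: {collision_rate_hz / level1_rate_hz:.0f}x")
print(f"Collisions per year:            {collisions:.0e}")
print(f"Signal events among them:       {signal:.0e}")
print(f"Overall rejection needed:       {1 / signal_fraction:.0e}")
```
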
The Data Deluge

Understand/interpret data via numerically intensive simulations, e.g. 1 SUSY event (ATLAS Monte Carlo simulation) = 20 minutes / 3.5 MB on a 1 GHz PIII.
Many events:
– ~10^9 events/experiment/year
– >~1 MB/event raw data
– several passes required
Worldwide LHC computing requirement (2007):
– 100 Million SPECint2000 (= 100,000 of today's fastest processors)
– 12-14 PetaBytes of data per year (= 100,000 of today's highest-capacity HDDs)
[Diagram: trigger/DAQ data flow: 40 MHz collision rate; 16 million detector channels (charge/time/pattern); 100 kHz Level-1 trigger; 500 readout memories with 3 Gigacell buffers; 1 Terabit/s event builder (50,000 data channels); 20 TeraIPS event filter; 1 MB event data into 200 GB buffers; 500 Gigabit/s service LAN; PetaByte archive; 300 TeraIPS Grid Computing Service.]

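Plugging the slide's own numbers into a quick Python sketch shows why a single site cannot cope (the 1000-CPU farm size at the end is an arbitrary assumption for illustration):

```python
# Scale of the problem using the figures quoted on this slide.
events_per_year = 1e9          # ~10^9 events/experiment/year
raw_mb_per_event = 1.0         # >~1 MB/event raw data
sim_min_per_event = 20.0       # 1 simulated SUSY event on a 1 GHz PIII
sim_mb_per_event = 3.5         # output size of one simulated event

raw_pb = events_per_year * raw_mb_per_event / 1e9   # 1 PB = 1e9 MB
sim_pb = events_per_year * sim_mb_per_event / 1e9
cpu_years = events_per_year * sim_min_per_event / (60 * 24 * 365)

print(f"Raw data per experiment per year:  ~{raw_pb:.0f} PB")
print(f"Simulated output at 3.5 MB/event:  ~{sim_pb:.1f} PB")
print(f"Simulating 10^9 events:            ~{cpu_years:,.0f} CPU-years")
print(f"Wall-clock on an assumed 1000-CPU farm: ~{cpu_years / 1000:.0f} years")
```

Even with generous assumptions the simulation load sits in the tens of thousands of CPU-years, which is what motivates the distributed approach on the following slides.
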
LCG

We aim to use Grid techniques to solve this problem. The CERN LHC Computing Grid (LCG) project coordinates activities in Europe; similar projects exist in the US (Grid3/OSG) and the Nordic countries (NorduGrid). The LCG prototype went live in September 2003 in 12 countries, including the UK, and has been extensively tested by the LHC experiments.

What is GridPP?

19 UK Universities, CCLRC (RAL & Daresbury) and CERN, funded by the Particle Physics and Astronomy Research Council (PPARC). The UK contribution to LCG.
– GridPP1 (2001-2004, £17m): "From Web to Grid"
– GridPP2 (2004-2007, £16m): "From Prototype to Production"

GridPP in Context

[Diagram: GridPP shown within the UK Core e-Science Programme and alongside CERN (LCG, EGEE); components include the Tier-1/A centre, Tier-2 centres, institutes, experiments, applications development and integration, middleware/security/networking, and the Grid Support Centre. Not to scale!]

[Diagram: GridPP organisation: Collaboration Board (CB) and Project Management Board (PMB); a Deployment Board covering Tier-1/Tier-2, testbeds, rollout, and service specification & provision; a User Board covering requirements, application development and user feedback; middleware areas (metadata, workload, network, security, information/monitoring, storage); interfaces to the experiments, EGEE, LCG and ARDA.]

Tier Structure

Tier-0 (CERN) → Tier-1 centres (e.g. Lyon, BNL, RAL) → Tier-2 centres (in the UK: NorthGrid, SouthGrid, ScotGrid, ULGrid).

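As a minimal sketch, the hierarchy on this slide can be written down as a plain data structure (illustrative only; the real LCG topology includes more Tier-1 and Tier-2 centres than are shown here):

```python
# The tier hierarchy from this slide: Tier-0 at CERN fans out to
# national Tier-1 centres; RAL in turn serves the UK Tier-2s.
tier_structure = {
    "Tier-0 (CERN)": {
        "Tier-1 (Lyon)": [],
        "Tier-1 (BNL)": [],
        "Tier-1 (RAL)": ["NorthGrid", "SouthGrid", "ScotGrid", "ULGrid"],
    },
}

def show(node, depth=0):
    """Walk the tree, printing each centre indented by tier depth."""
    if isinstance(node, dict):
        for name, children in node.items():
            print("  " * depth + name)
            show(children, depth + 1)
    else:
        for name in node:
            print("  " * depth + name)

show(tier_structure)
```
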
UK Tier-1/A Centre: Rutherford Appleton Laboratory

– High-quality data services
– National and international role
– UK focus for international Grid development
– 1400 CPUs, 80 TB disk, 60 TB tape (capacity 1 PB)
[Charts: Grid resource discovery time = 8 hours; 2004 CPU utilisation.]

UK Tier-2 Centres

– ScotGrid: Durham, Edinburgh, Glasgow
– NorthGrid: Daresbury, Lancaster, Liverpool, Manchester, Sheffield (WRG)
– SouthGrid: Birmingham, Bristol, Cambridge, Oxford, RAL PPD, Warwick
– LondonGrid: Brunel, Imperial, QMUL, RHUL, UCL

NorthGrid

Tier-2 collaboration between Sheffield (WRG), Lancaster, Liverpool, Manchester and Daresbury Lab.

WRG & NorthGrid

The White Rose Grid contributes to NorthGrid and GridPP with a new SRIF2-funded machine at Sheffield (Iceberg). The LCG component of Iceberg provides a base of 230 kSI2k and, on demand, up to 340 kSI2k, with state-of-the-art 2.4 GHz Opteron CPUs. It delivered the second-highest GridPP Tier-2 throughput for ATLAS in 2005.
http://lcg.shef.ac.uk/ganglia

GridPP Deployment Status

Three Grids on a global scale in HEP, with similar functionality:

Grid            Sites     CPUs
LCG (GridPP)    228 (19)  17,820 (3,500)
Grid3 [USA]     29        2,800
NorduGrid       30        3,200

GridPP deployment is part of LCG, currently the largest Grid in the world.

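GridPP's share can be read straight off the table (using only the numbers on this slide):

```python
# GridPP's contribution to LCG, from the deployment table above.
lcg_sites, lcg_cpus = 228, 17820
gridpp_sites, gridpp_cpus = 19, 3500

print(f"GridPP share of LCG sites: {gridpp_sites / lcg_sites:.0%}")  # ~8%
print(f"GridPP share of LCG CPUs:  {gridpp_cpus / lcg_cpus:.0%}")    # ~20%
```
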
ATLAS Data Challenges

– DC2 (2005): 7.7M GEANT4 events and 22 TB
– DC3/CSC (2006): >20M G4 events; ongoing
– UK: ~20% of LCG production
Grid production has the largest total computing requirement, yet is still a small fraction of what ATLAS needs. Now in the Grid Production phase; LCG is now reliably used for production.

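A quick consistency check on the DC2 figures above (7.7M events, 22 TB), which lands close to the 3.5 MB per simulated event quoted on the Data Deluge slide:

```python
# Average event size implied by the DC2 numbers on this slide.
events = 7.7e6       # GEANT4 events produced in DC2
total_tb = 22.0      # total DC2 output
mb_per_event = total_tb * 1e6 / events   # 1 TB = 1e6 MB
print(f"~{mb_per_event:.1f} MB per simulated event")   # ~2.9 MB
```
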
Further Info

http://www.gridpp.ac.uk