Presentation transcript:

UK Status & Planning for LHC(b) Computing. Andrew Halley (CERN), LHCb Software Week at CERN, 26th November 1999. Outline: changes to sources of funding for computing in the UK; past and present computing resources; future plans for computing developments.

Slide 2: Funding sources for PP computing in the UK. Until recently (the last two years), UK funding for particle physics computing had two components: direct funding to the individual university groups, and central funding to the IT group of CCLRC at Rutherford Appleton Laboratory. The outcomes: the equipment installed at the universities is small in scale but well tailored to their needs; the central facility is large but needs the experiments to motivate changes. Enter the new concept from the UK Government: get the individual experiments and/or university groups to bid for (big) money.

Slide 3: New external sources of computing funding. Joint Research Equipment Initiative (JREI): the aim of JREI is to contribute to the physical research infrastructure and to enable high-quality research to be undertaken, particularly in areas of basic and strategic priority for science and technology, such as those identified by Foresight; £99M in 1999, the 4th round. Joint Infrastructure Fund (JIF): £700M over three years; the money will enable universities to finance essential building, refurbishment and equipment projects to ensure that they remain at the forefront of international scientific research.

Slide 4: Personal summary of PPARC computing JREI bids. The following represents a summary of the information available from various sources, including PPARC.

Slide 5: Personal summary of PPARC computing JIF bids. The following represents a summary of the information available from various sources, excluding PPARC.

Slide 6: Particle physics bids (1). BaBar JREI 98: awarded £800K for disk and servers at 10 UK sites; 12.5 TB of RAID (~10 TB usable); Sun won the tender, with installation soon. LHCb JREI 98: awarded MAP, the Monte Carlo Array Processor, 300 Linux PCs in custom-built chassis. CDF JIF 98: submitted December 98, postponed until the next round; T-Quarc at FNAL, 10 TB of disk, 4 SMP workstations; at RAL, 5 TB of disk, 5 TB of tape, an SMP and a line to FNAL; at 4 universities, a single-CPU machine and 1.7 TB of disk.

Slide 7: Particle physics bids (2). BaBar JIF 99: submitted April 99; a line to SLAC; computers for analysis (a big SMP at RAL, smaller SMPs at each site); and Linux farm(s) for simulation. LHCb JREI 99: submitted May 99; PCs with 1 TB of disk each, to store the data generated by MAP and analyse it.

Slide 8: Timescales and deadlines for bid procedures. The recent round had to be submitted by May 1999, with decisions not expected before the following January. The 1999 bids had to be submitted by spring 1999, with a decision expected around November. The next round's submission date is 11th October 1999, with the "decision point" expected to be March 2000.

Slide 9: Current computing resources in the UK. Considering the central facilities currently available: a large central datastore, combining a large disk pool with backing store and tape robots; and central CPU farms and servers, currently comprising a CSF facility based on Hewlett-Packard processors, a Windows NT facility based on Pentium processors, and an upgraded Linux CSF facility based on Pentium processors. In addition, the home universities have considerable power in workstation clusters and dedicated farms, often "harvested" by software "robots" which serve out tasks remotely.
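
The talk does not describe the harvesting software itself, so the following is only an illustrative sketch: a minimal Python loop, with hypothetical host names and a hypothetical run_mc.sh job script, showing how such a "robot" might serve tasks out to idle workstations over ssh.

    import subprocess

    # Hypothetical departmental workstations and a queue of Monte Carlo jobs;
    # neither the host names nor run_mc.sh come from the talk.
    WORKSTATIONS = ["ws01.ph.example.ac.uk", "ws02.ph.example.ac.uk", "ws03.ph.example.ac.uk"]
    JOB_QUEUE = [f"run_mc.sh {run}" for run in range(1, 11)]

    def load_average(host):
        """Return the 1-minute load average reported by `uptime` on a remote host."""
        out = subprocess.run(["ssh", host, "uptime"],
                             capture_output=True, text=True).stdout
        return float(out.rsplit("load average:", 1)[1].split(",")[0])

    def harvest():
        """Serve queued tasks out to whichever workstations currently look idle."""
        for host in WORKSTATIONS:
            if not JOB_QUEUE:
                break
            if load_average(host) < 0.5:      # treat a lightly loaded machine as idle
                job = JOB_QUEUE.pop(0)
                # Start the job in the background on the remote node and move on.
                subprocess.Popen(["ssh", host, f"nohup {job} >/dev/null 2>&1 &"])

    if __name__ == "__main__":
        harvest()

A real robot of that era would more likely have been a cron-driven shell or Perl script, but the logic is the same: find idle cycles, hand out a task, move on.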

Slide 10: Current usage statistics of the RAL datastore. Typically ~10-15 TB is accessible from the datastore, but only ~5 TB has been in active use at any given time recently.

Slide 11: Usage of the HP CSF facility at RAL. As an example snapshot of the use of the service, from April '99 to September '99 the average utilisation is ~80%.

Slide 12: Linux CSF farm and its usage at RAL. The Linux farm now consists of forty Pentium II 450 MHz CPUs with 256 MB of memory, 10 GB of fast local disk and 100 Mb/s fast Ethernet. [Usage plot comparing load with the farm's maximum capacity.] The farm is currently well used by the active experiments and has excellent potential for upgrades.

Slide 13: Windows NT farm and usage at RAL. Ten dual-processor machines with 450 MHz CPUs have been added to the farm; the upgrade increases the capacity of the farm by a factor of ~5. The service is used heavily by both ALEPH and LHCb for MC production, and will be used as part of LHCb plans to generate large numbers (10^6) of inclusive bbar events in the near future. Automatic job submission software has been set up for LHCb, and system software replication has been set up so that it is now very easy to extend the system as appropriate.
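
The submission software itself is not shown in the talk; as an illustration only, here is a minimal Python sketch of how a 10^6-event production might be split into fixed-size jobs. The batch size and the submit_mc command are assumptions rather than anything from the slides.

    TOTAL_EVENTS = 1_000_000    # target inclusive bbar sample quoted on the slide
    EVENTS_PER_JOB = 5_000      # assumed batch size, not stated in the talk

    def submit_production(total=TOTAL_EVENTS, per_job=EVENTS_PER_JOB):
        """Split the sample into jobs and print the submission command for each."""
        n_jobs = (total + per_job - 1) // per_job    # round up so no events are lost
        for job_id in range(n_jobs):
            # 'submit_mc' stands in for whatever farm submission tool was really used.
            cmd = ["submit_mc",
                   f"--job={job_id}",
                   f"--events={per_job}",
                   f"--seed={1000 + job_id}"]        # unique random seed per job
            print(" ".join(cmd))

    if __name__ == "__main__":
        submit_production()

Replicating the system software across the nodes, as the slide describes, is what makes adding more such jobs, or more machines, cheap.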

Slide 14: New computing resources outside of RAL. On the basis of the new funding arrangements in the UK, the University of Liverpool was given funds to build MAP, a large MC processor based on cut-down Linux nodes: 300 processors (400 MHz Pentium II, 128 MB of memory, 3 GB of disk), with D-Link 100BaseT Ethernet plus hubs; commercial units, but in custom boxes for packing and cooling. The nodes are rack mounted and run a stripped-down version of Red Hat Linux 5.2. The system is tailored for production using a 1 TB locally mounted disk, but needs a corresponding solution for analysing the data locally.

Slide 15: Computing resources outside of RAL: MAP. [Diagram and photograph ("the idea" and "in reality"): a MAP master on the external Ethernet, connected through a 100BaseT hub to the MAP slaves.] The system is scalable: it can be increased by adding more slaves and/or network hubs, and it benefits from the bulk purchase of uniform hardware.
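
The MAP control software is not described in the talk; purely as an illustrative sketch of the master/slave pattern in the diagram, here is a minimal Python task server. The port number, task commands and event counts are all invented.

    import socket
    import threading

    HOST, PORT = "0.0.0.0", 9000   # master listens on the private 100BaseT network
    # One generation task per slot; 300 matches the number of MAP nodes on slide 14.
    TASKS = [f"generate --run={i} --events=10000" for i in range(300)]
    lock = threading.Lock()

    def serve_slave(conn):
        """Hand the connecting slave one task, or 'DONE' once the queue is empty."""
        with conn:
            with lock:
                task = TASKS.pop(0) if TASKS else "DONE"
            conn.sendall(task.encode())

    def master():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen()
            while True:
                conn, _addr = srv.accept()
                threading.Thread(target=serve_slave, args=(conn,), daemon=True).start()

    if __name__ == "__main__":
        master()

A slave would simply connect, read one task, run it against its local disk, and reconnect for the next; adding more slaves or hubs needs no change on the master, which is the scalability point the slide is making.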

Slide 16: Future plans for CPU upgrades etc. The intention is to develop the Linux farm at RAL: order 30 new dual-processor 600 MHz nodes to be added to the existing cluster, and add more hardware around April/May of the next financial year to keep up with demand. There are also plans to augment MAP at Liverpool with subsystems at additional LHCb UK sites, as well as to develop COMPASS, a model for LHC analyses, and to use a fast Linux server to check large disk-pool read/write speeds of 50/20 MB/s with over 1 TB of data space attached.
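
How the read/write check was done is not described; the sketch below is a generic Python throughput test of the kind one might run against such a disk pool, in which the file path, block size and 2 GB test size are all assumed.

    import os
    import time

    TEST_FILE = "/datapool/throughput.test"   # hypothetical path on the disk pool
    BLOCK = 8 * 1024 * 1024                   # 8 MB blocks
    N_BLOCKS = 256                            # 2 GB in total

    def write_speed():
        """Sequential write rate in MB/s, forcing the data out to the disks."""
        buf = os.urandom(BLOCK)
        t0 = time.time()
        with open(TEST_FILE, "wb") as f:
            for _ in range(N_BLOCKS):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())
        return BLOCK * N_BLOCKS / (time.time() - t0) / 1e6

    def read_speed():
        """Sequential read rate in MB/s; for a fair figure the file should be
        larger than RAM (or the page cache dropped) so reads really hit the disks."""
        t0 = time.time()
        with open(TEST_FILE, "rb") as f:
            while f.read(BLOCK):
                pass
        return BLOCK * N_BLOCKS / (time.time() - t0) / 1e6

    if __name__ == "__main__":
        w = write_speed()
        r = read_speed()
        os.remove(TEST_FILE)
        print(f"write: {w:.0f} MB/s, read: {r:.0f} MB/s")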

Slide 17: Future "plans" for LHC computing in the UK. Given the new funding arrangements in the UK, and the challenges facing us with the LHC computing needs [diagram: the tiered model running from CERN through Tier-1 and Tier-2 regional centres and service centres down to the institutes]: the UK plans to operate a Tier 1 Regional Centre at RAL, with several Tier 2 centres (such as MAP/COMPASS) at the universities, and to submit an LHC-wide UK JIF bid for capital funding for the years through the LHC start-up.

Slide 18: Ramping up the UK resources for the LHC. The resources needed depend somewhat on the computing models adopted by the experiments, but currently: an additional tape robot will be purchased in 2003, allowing the datastore to be extended to 320 TB; and network bandwidth to CERN is assumed to be 50 Mb/s in 2003, with similar performance achieved to the Tier 2 centres, increasing thereafter to 500 Mb/s.
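
To put those link speeds in perspective, here is a small back-of-the-envelope Python calculation; it is my own illustration rather than the talk's, and the 10 TB sample size and 50% usable-bandwidth assumption are arbitrary.

    def transfer_days(data_tb, link_mbit_s, efficiency=0.5):
        """Days needed to move data_tb terabytes over a link of link_mbit_s Mbit/s,
        assuming only a fraction `efficiency` of the nominal bandwidth is usable."""
        bits = data_tb * 1e12 * 8                        # terabytes -> bits
        seconds = bits / (link_mbit_s * 1e6 * efficiency)
        return seconds / 86400.0

    # Example: a 10 TB sample over the assumed 2003 and post-2003 links.
    for link in (50, 500):
        print(f"{link:>4} Mb/s: {transfer_days(10, link):.1f} days")

Under these assumptions a 10 TB sample takes of order a month to ship at 50 Mb/s and a few days at 500 Mb/s.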

Slide 19: Tentative conclusions and summary. Clearly, the field is evolving quickly. The status can be broken down into: upgraded Linux (and NT?) farms roughly doubling in capacity every year or so, plus increases in datastore size; new massive simulation facilities like MAP coming online, with analysis engines being developed to cope with the generated data rates; and the development of Tier 1 and 2 data centres, with two orders of magnitude increases in stored data and CPU power, and two to three orders of magnitude improvement in network-access bandwidth.