UCL HEP Computing Status
HEPSYSMAN, RAL, 2005-04-27

Computers
- Desktop PCs: ~50, from 733 MHz to 3.2 GHz
- Various laptops, including a few iBooks
- Group batch farm: 10 dual 2.4 GHz Dell rackmount machines
- CDF batch farm: 10 dual 2.4 GHz Dell rackmount machines
- ATLAS trigger testbed: 6 rackmount machines
- Mail and web servers
- LCG front-end nodes (CE, SE, MON)
- Windows Terminal Server
- Various dedicated and development machines

Operating Systems
- Desktops, farms and servers almost all SLC3
  – recently completed changeover from RH 7.3
- Windows 2000 Terminal Server
  – some problems with Samba
- Windows machines for AutoCAD and hardware control
- Laptops are a mix of Linux, Windows and OS X

LCG Front-End Machines
- Dedicated service nodes: CE, SE, MON
- Jobs normally go to the HEP batch farm
- Support for a separate CE and batch server
  – not provided as part of standard LCG
  – relies on recipes from third parties
  – is getting easier
  – well documented for LCG by Steve Traylen
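
The idea of the separate CE and batch server is roughly that the CE accepts Grid jobs and hands them to a batch system running on another host. The Python fragment below sketches that hand-off in the simplest possible way (SSH plus qsub); the hostname, queue and paths are assumptions made for the example, and the actual LCG recipes configure the batch system itself rather than wrapping submission like this.

#!/usr/bin/env python
"""Illustrative sketch only: push a job script from the CE to a separate
batch server over SSH and submit it there with qsub. The hostname, queue
name and paths are invented for the example; the real LCG recipes
configure the batch system itself rather than wrapping submission."""

import os
import subprocess
import sys

BATCH_SERVER = "batch.example.ucl.ac.uk"  # hypothetical batch server host
QUEUE = "lcg"                             # hypothetical queue name

def forward_job(script_path):
    """Copy the job script to the batch server and submit it with qsub."""
    remote_path = "/tmp/" + os.path.basename(script_path)
    # Copy the script across; assumes passwordless SSH between the hosts.
    subprocess.check_call(["scp", script_path, f"{BATCH_SERVER}:{remote_path}"])
    # Submit on the batch server and return the job identifier qsub prints.
    out = subprocess.check_output(
        ["ssh", BATCH_SERVER, "qsub", "-q", QUEUE, remote_path])
    return out.decode().strip()

if __name__ == "__main__":
    print(forward_job(sys.argv[1]))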

Central Computing Cluster
- UCL SRIF-funded facility
- 96 dual 2.8 GHz P4 nodes
- Half dedicated to LCG
- Other half uses Sun Grid Engine (non-HEP use)
- Managed by Information Systems
- Hope to integrate the HEP and non-HEP parts
  – use the SGE information provider from LeSC
  – but need to change again for gLite?
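
To indicate what an SGE information provider has to gather, the sketch below pulls per-queue slot usage out of "qstat -f". It assumes SGE's qstat is available on the PATH and is only an illustration of the data involved, not the LeSC provider itself.

#!/usr/bin/env python
"""Sketch of the kind of query an SGE information provider needs to make:
per-queue slot usage taken from 'qstat -f'. Assumes SGE's qstat is on the
PATH; an illustration only, not the LeSC provider itself."""

import subprocess

def sge_slot_summary():
    """Return {queue_instance: (used_slots, total_slots)} from 'qstat -f'."""
    output = subprocess.check_output(["qstat", "-f"]).decode()
    slots = {}
    for line in output.splitlines():
        fields = line.split()
        # Queue instance lines look like:
        #   all.q@node01  BIP  0/2/4  0.50  lx24-amd64
        # (older SGE releases print used/total without the reserved count).
        if len(fields) >= 5 and "@" in fields[0] and "/" in fields[2]:
            parts = fields[2].split("/")
            slots[fields[0]] = (int(parts[-2]), int(parts[-1]))
    return slots

if __name__ == "__main__":
    for queue, (used, total) in sorted(sge_slot_summary().items()):
        print(f"{queue}: {total - used} of {total} slots free")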

Storage
- User file server with ~300 GB
- One 1.2 TB IDE RAID array for data
- One 1.2 TB IDE RAID array for backup
- One 3.2 TB RAID array for data/backup
- Various RAID arrays bought by MINOS and CDF
- Tape drive for backup
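
A simple way to keep an eye on how full these volumes are is a periodic free-space check; the sketch below shows one, with invented mount points and an arbitrary warning threshold rather than the real UCL paths.

#!/usr/bin/env python
"""Minimal free-space check over the data and backup volumes. The mount
points and threshold are invented placeholders, not the real UCL paths."""

import os

MOUNT_POINTS = ["/data/raid1", "/data/raid2", "/backup/raid"]  # hypothetical
WARN_BELOW_GB = 50  # arbitrary threshold for the example

def free_gb(path):
    """Free space, in GB, on the filesystem containing 'path'."""
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize / 1e9

if __name__ == "__main__":
    for mount in MOUNT_POINTS:
        free = free_gb(mount)
        note = "  <-- getting full" if free < WARN_BELOW_GB else ""
        print(f"{mount}: {free:.1f} GB free{note}")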

File Backup
- Up to now:
  – selected areas backed up to disk (tar)
  – secondary backup to tape
- Problem:
  – the tape solution is expensive
  – more money would be needed to use it with SLC
- Solution:
  – RLBackup
  – currently keeping the old disk backup alongside RLBackup
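
In outline, the "backup selected areas to disk (tar)" step amounts to writing dated tar archives of chosen directories onto the backup RAID, as in the sketch below; the selected areas and destination are placeholders, and RLBackup itself is configured quite differently.

#!/usr/bin/env python
"""Outline of the 'backup selected areas to disk (tar)' step. The source
areas and destination directory are placeholders; the real selection and
the RLBackup configuration are not reproduced here."""

import os
import tarfile
import time

BACKUP_AREAS = ["/home", "/etc", "/var/spool/mail"]  # hypothetical selection
DEST_DIR = "/backup/raid"                            # hypothetical backup RAID

def backup_to_disk():
    """Write one dated .tar.gz archive per selected area to the backup disk."""
    stamp = time.strftime("%Y%m%d")
    for area in BACKUP_AREAS:
        name = area.strip("/").replace("/", "_") or "root"
        dest = os.path.join(DEST_DIR, f"{name}-{stamp}.tar.gz")
        with tarfile.open(dest, "w:gz") as tar:
            tar.add(area)
        print(f"wrote {dest}")

if __name__ == "__main__":
    backup_to_disk()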

Changes
- Bob Cranfield retiring later this year
- Ben Waugh to be group computing coordinator
- Gianfranco Sciacca started recently
- Machine room moving to a new building (early 2006?)
- Moving behind the campus firewall

Issues
- Desktops vs. laptops
- Management of laptops
  – OS choice
  – support
  – updates
- Remote administration of servers and farms
- Management and support using fractions of people's time