Cambridge Site Report John Hill 20 June 2013 SouthGrid Face to Face

Presentation transcript:

Cambridge Site Report John Hill 20 June 2013 SouthGrid Face to Face

Hardware (1): Worker Nodes
Mix of hardware:
- DELL PE (GHz) (2007)
- Viglen HX2225i (2.5GHz) (2008)
- SUN Fire X (GHz) (2010)
- DELL PER (GHz) (2010)
- DELL PER (GHz) (2011)
- DELL PER (GHz) (2012)
- DELL PEC (GHz) (2013)
Recent purchase of DELL PEC6220 (4 motherboards in 2U, 64 cores in total) was with group funds.
Now have 204 CPU cores in 21 servers.
Provides SI2K = 2583 HS06.
Running SL5/EMI-2.
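
A quick back-of-the-envelope check of what those totals imply per core and per server (the totals come from the slide; the derived figures are just arithmetic):

```python
# Figures taken from the slide above: 204 cores in 21 servers, 2583 HS06 in total.
TOTAL_CORES = 204
TOTAL_SERVERS = 21
TOTAL_HS06 = 2583

hs06_per_core = TOTAL_HS06 / TOTAL_CORES        # ~12.7 HS06 per core
cores_per_server = TOTAL_CORES / TOTAL_SERVERS  # ~9.7 cores per server (mixed generations)

print(f"{hs06_per_core:.1f} HS06/core, {cores_per_server:.1f} cores/server on average")
```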

Hardware (2): Storage
Pool nodes are again a mix of hardware:
- Viglen HS216a (2007)
- Viglen HS316i (2008)
- Viglen HS224a (2009 and 2010)
- Viglen HX425S-36i (2011)
- DELL PER510 (2012)
All running SL5/EMI-2 (DPM).
Total of 278 TiB available.
So far we have been able to manage a smooth and gradual decommissioning of the 2007 kit.
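
As a rough illustration of the kind of capacity check that goes with gradually decommissioning old pool nodes, the sketch below totals up space across a set of pool filesystems. The mount points are hypothetical placeholders, not the site's actual DPM layout, and a real DPM site would normally query the head node instead.

```python
import shutil

# Hypothetical pool filesystem mount points; a real pool node's paths
# depend entirely on the site's DPM configuration.
POOL_FILESYSTEMS = ["/storage/fs01", "/storage/fs02", "/storage/fs03"]

TIB = 1024 ** 4  # bytes in a tebibyte

total_bytes = 0
free_bytes = 0
for path in POOL_FILESYSTEMS:
    usage = shutil.disk_usage(path)  # named tuple: total, used, free (in bytes)
    total_bytes += usage.total
    free_bytes += usage.free

print(f"{total_bytes / TIB:.1f} TiB total, {free_bytes / TIB:.1f} TiB free")
```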

Hardware (3): Service Nodes
- SE on bare metal PE1950: SL5/EMI-2 (DPM). WebDAV and xrootd have been enabled and appear to work as far as I can tell.
- perfSONAR on a PER610 (part of the DRI purchase).
- Other service nodes on VMs (underlying servers are PER410 or PER610).
- Most service nodes running SL5/EMI-2. The main exception is the Site BDII, which is SL6/EMI-2.
- The ARGUS server, which is installed but not yet in production, is also SL6/EMI-2.
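
For the WebDAV endpoint, "appears to work" can be checked with something as simple as an authenticated PROPFIND against the SE over HTTPS. This is only a sketch: the hostname, VO path and proxy location below are placeholders, and a real test would use the site's actual SE name and a valid grid proxy.

```python
import requests  # third-party HTTP library (pip install requests)

# Placeholders: substitute the real SE hostname, VO path and grid proxy file.
SE_URL = "https://se.example.ac.uk/dpm/example.ac.uk/home/atlas/"
PROXY = "/tmp/x509up_u1000"   # proxy file containing both certificate and key

# PROPFIND with Depth: 1 asks the WebDAV server for a one-level directory listing.
resp = requests.request(
    "PROPFIND",
    SE_URL,
    cert=PROXY,
    verify="/etc/grid-security/certificates",  # directory of trusted CA certificates
    headers={"Depth": "1"},
    timeout=30,
)
print(resp.status_code)  # 207 (Multi-Status) would indicate the listing worked
```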

Hardware (4): Network
- 10Gbps local backbone (courtesy of DRI). Most of the storage, the perfSONAR server and the latest service VM server are connected at 10Gbps; WNs are only at 1Gbps at present.
- 10Gbps connection to the University backbone (CUDN), with no throttling! The CUDN backbone is 10Gbps with a fully-redundant mesh.
- JANET connection is 10Gbps (with a second 10Gbps link acting as fallback). This summer the plan is to upgrade to 20Gbps (via 2x10Gbps bonded).
- Also this summer the Computing Service is moving to West Cambridge (and the main JANET connection is moving with it).
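
To put the planned 10Gbps to 20Gbps upgrade in context, here is a small idealised calculation of how long a bulk transfer would take at each speed. It ignores protocol overhead and other traffic on the link, and the dataset size is just an example value.

```python
# Idealised wide-area transfer time at the current and planned link speeds.
# Ignores protocol overhead and contention; the dataset size is an example value.
DATASET_TB = 10                           # example dataset size (decimal terabytes)
dataset_bits = DATASET_TB * 1e12 * 8      # convert TB to bits

for gbps in (10, 20):
    seconds = dataset_bits / (gbps * 1e9)
    print(f"{DATASET_TB} TB at {gbps} Gbps: at least {seconds / 3600:.1f} hours")
```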

Network Traffic to/from Site (plots of incoming and outgoing traffic)

Recent Availability/Reliability (chart)

Miscellaneous
VOs:
- Mainly support ATLAS and LHCb.
- Also have a local "camont" VO. In the past this supported Camtology and Imense. In future it might support either or both of the two "older" computational radiotherapy projects the group is involved in: VoxTox and Accel-RT. We also now have some involvement (via Frederic Brochu) in a related project called GHOST (Geant Human Oncology Simulation Tool); the use of CamGrid is being actively investigated, and GridPP might be looked at as well.
- Might also support the ILC VO in future: we have an active local user working on SiW calorimeter simulations who is interested in using the Grid.
Staff:
- Currently 0.5 FTE of GridPP funding, used to part-fund me.
- I'm the only Cambridge person supporting Grid work at present.

Future Plans
- Upgrade to SL6/EMI-3 once the dust created by the early adopters has settled a little. Also move to using ARGUS/gLExec at the same time.
- Already using Puppet (set up by Santanu just before he left) to configure and maintain the Grid nodes. Will need to understand whether what he has set up is also appropriate for configuring the Grid software, or whether it is better to start again from scratch.
- Andy Parker becomes Head of Department on 1 August. I don't expect this to have an impact on our commitment to GridPP, though.
- The University is planning to build a new Data Centre on the West Cambridge site (currently scheduled for completion ~end-2014). There will be a lot of pressure on departments to move computing equipment into it; it is already essentially impossible to have an "official" computer room in any new building.

Issues
- Obvious manpower questions beyond GridPP4. If there is no manpower support in GridPP5, we would try to find resources from the consolidated grant (there is no other obvious source at present). If there is little or no equipment support in GridPP5, the situation becomes more difficult: we wouldn't wish to put in significant group funds if GridPP5 wasn't prepared to support us financially.
- Site-wide power cut to deal with on 20 July.
- The Computing Service move may cause disruption, especially as the primary JANET connection has to move a couple of km. On the positive side, they are moving to about 250m away from us (into the old Microsoft Research building). In a sane world, of course, they wouldn't move until the Data Centre was ready…