Winnie Lacesso Bristol Site Report May 2006

2 Scope
User Support / Servers / Config
Security / Network
UKI-SOUTHGRID-BRIS-HEP
Upcoming: major infrastructure mods
Assimilation of/into, and/or contention for, the upcoming IS HPC cluster

3 User Support
Inventory/asset management (needs initialising!), budget, hardware replenishment/upgrades
– Desktop: XP, RH7/9, SL3/4, FC4/5, Mac
– Rhys Morris (Astro): 25% PP support
– MS support: Danny Yarde, JP Melot
– Macs: users mostly do their own support
– Lots of laptops on & off site (security??)
No user account / disk space management yet

4 PP Servers
2 login servers; one is also the main NFS server – needs to be separated
Auth via krb4 to an antique MS ADS server – security concerns – can't upgrade (IS politics)
Other servers: PBS, NFS, compute, licence – each with unique hardware, OS/software/config
Critical AFS server runs Win2K (eeek); TO-DO: migrate to Unix
Map services & disk: consolidate & retire

5 Standard Config
Security: iptables & /etc/hosts.{deny,allow}, but no central logging or monitoring yet (TO-DO)
sendmail broken => no logwatch notices
Nightly MS/RH/SL updates
Only 4 filesystems regularly backed up (TO-DO: ensure everything that needs backup is being backed up)
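
As an illustration of the host-based controls mentioned above, a minimal sketch assuming TCP wrappers plus iptables on the Linux nodes; the 192.0.2.0/24 range is a documentation placeholder, not the real campus network:

    # /etc/hosts.deny -- default deny for tcp-wrapped services
    ALL: ALL

    # /etc/hosts.allow -- allow ssh only from a trusted range (placeholder network)
    sshd: 192.0.2.0/255.255.255.0

    # Roughly equivalent iptables rules
    iptables -A INPUT -p tcp --dport 22 -s 192.0.2.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP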

6 Security
Nodes run the standard Linux crons, e.g. logwatch, but sendmail never sends the reports due to security restrictions; ~97% of it is unwanted "all OK" noise – need filtering, because we really do want that other 3%. Help me...
Needed: secure, monitored central logging server; secure gateway/login server
Encrypted (Unix) offsite backups (done)
MS backups by JP Melot to the floor backup servers
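
A minimal sketch of the central-logging piece listed as needed above, assuming the stock sysklogd shipped with RH/SL at the time; loghost.phy.bris.ac.uk is a placeholder name:

    # On each node: forward all syslog traffic to the central log server
    # (append to /etc/syslog.conf and restart syslogd)
    *.*     @loghost.phy.bris.ac.uk

    # On the loghost (RH/SL): enable reception of remote messages
    # /etc/sysconfig/syslog
    SYSLOGD_OPTIONS="-r -m 0"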

7 Network
JP Melot manages all Physics networking
Grid nodes (CE/SE/MON) on a 1Gb network since March 2006; the rest still 100Mb
Still mapping network & devices: Phys => IS => SWERN => JaNET

8 PHYSICS => BRIS => SWERN => JANET
[Network diagram: grid nodes (ce, se, mon, ui, wn) on the 1Gb and 100Mb PP switches behind the Phys Dept router, then the Bris backbone router, Bris gateway router, SWERN gateway router and JaNET gateway router; netmon attached at the BRIS 1Gb router]

9 UKI-SOUTHGRID-BRIS-HEP
Pete G & Yves C built CE/SE/MON/WN (1 WN = 2 CPUs, 1 reserved for SFT)
WL online; Pete G & Yves C came to tutor the UI build. Yves still a font of wisdom! (LCG: complex)
LFC, DPM, upgrade to LCG 2.7.0, +7 WN (scavenged from the PP PBS cluster)
Next: security review & monitoring, backups; XFS? SL4.x?
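
A minimal sketch of what registering the scavenged worker nodes with the PBS/Torque server behind the CE could look like; the source only says the WNs came from the PP PBS cluster, so the hostnames, CPU counts and paths here are placeholders:

    # Register each scavenged worker node with the PBS/Torque server (run on the CE/PBS server host)
    qmgr -c "create node wn01.phy.bris.ac.uk np=2"
    qmgr -c "create node wn02.phy.bris.ac.uk np=2"

    # Or the equivalent static node list (path varies by PBS/Torque build):
    #   /var/spool/pbs/server_priv/nodes
    #       wn01.phy.bris.ac.uk np=2
    #       wn02.phy.bris.ac.uk np=2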

10 UKI-SOUTHGRID-BRIS-HEP (pix)

11 Upcoming
Major Physics renovation => the PP floor must move
– Staff move to 1 if not 2 remote building(s)
5th-floor water tank: to be dismantled for the new IS HPC machine room; the Physics (incl. Astro, etc.) machine room(s) will move into it
All servers rackmount – consolidate the heap of white-box PP servers onto fewer rackmount servers (++work, but will simplify)

12 IS HPC
Just completed Phase 1 of the tender evaluation; selection by – mid May?
1024 CPU, garden-variety cluster (masters, storage subsystem = heads & servers)
Shared by ALL University research groups (eeek)
Particle Physics funding => expect an ongoing set of LCG WN to be allocated, plus LCG storage; IS being evasive about agreeing/committing

13 CE => IS.HPC Cluster
Existing WN may stay, & want to point the LCG CE at the IS.HPC cluster / queue (queues?)
The LCG/PP WN count will probably fluctuate – have to define when/how the IS.HPC cluster adds & deletes PP LCG WN (one possible mechanism is sketched below)
Must know exactly what config is needed on the WN <=> PP LCG CE/SE/MON
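
Purely a hypothetical sketch (the HPC tender was not even decided yet): if the shared cluster ran Torque, one way to let the PP LCG share grow and shrink would be a dedicated queue tied to a node property, so nodes can join or leave the LCG pool without touching the rest of the cluster. The queue name, group and hostnames below are invented for illustration:

    # One-off: a dedicated execution queue for LCG work, restricted to a placeholder group
    qmgr -c "create queue lcg queue_type=execution"
    qmgr -c "set queue lcg enabled=true"
    qmgr -c "set queue lcg started=true"
    qmgr -c "set queue lcg acl_group_enable=true"
    qmgr -c "set queue lcg acl_groups=gridpp"
    qmgr -c "set queue lcg resources_default.neednodes=lcgwn"

    # Ongoing: add or remove a node from the LCG pool by (un)setting the property
    qmgr -c "set node hpc-node-042 properties += lcgwn"
    qmgr -c "set node hpc-node-042 properties -= lcgwn"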

14 SE => IS.HPC Storage
Likewise PP expects the LCG SE to have access to some (much?) IS.HPC storage
Really need to know how much, and what kind of, control the LCG SE needs over remote cluster storage
IS expecting at best NFS access... maybe not good enough for the LCG SE (see the sketch below)
Need to find out what other sites have done
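
If plain NFS is indeed what IS offers, the mechanics would be roughly the sketch below; whether an NFS-mounted filesystem can adequately back the SE's storage is exactly the open question. Hostnames and paths are placeholders:

    # On the IS.HPC storage server: export a slice of storage to the PP SE
    # (append to /etc/exports, then run: exportfs -ra)
    /hpc/storage/gridpp    se.phy.bris.ac.uk(rw,sync,no_root_squash)

    # On the SE: mount it where the storage element expects its pool filesystem
    mount -t nfs hpc-storage.bris.ac.uk:/hpc/storage/gridpp /storage/gridpp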

15 Current Issues
Useful to learn how other sites architect/maintain/manage their security & configuration
Scavenge good ideas; SIMPLE = GOOD
Server consolidation will be +work, but good
Fewer daily interrupts => more time to work on longer-term projects & to study/grok LCG software structure & monitoring