UCL Site Report
Ben Waugh
HepSysMan, 22 May 2007

Desktops
About 50, up to 5 years old
Scientific Linux (CERN)
– most still SLC3, moving to SLC4
– not all 64-bit capable
NFS home directories and /usr/local
– moving to rsync for /usr/local (sketch below)
Also Windows Terminal Service for Office etc.
– problems with Samba, print spooler
– moving to use UCL central service
– issues: printing, PDF creation
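
The move away from an NFS-mounted /usr/local amounts to a periodic pull of a central copy onto each desktop. Below is a minimal sketch of that pull, written in Python around rsync; the server name and module path are hypothetical placeholders, not the actual UCL setup.

    #!/usr/bin/env python
    """Pull a read-only copy of /usr/local from a central server with rsync.

    The rsync server and module name below are placeholders for illustration.
    """
    import subprocess
    import sys

    SOURCE = "rsync://swserver.example.org/usrlocal/"  # hypothetical server/module
    DEST = "/usr/local/"

    def sync():
        # -a preserves permissions and symlinks, --delete removes files dropped
        # upstream, --timeout stops the job hanging if the server is unreachable.
        return subprocess.call(["rsync", "-a", "--delete", "--timeout=300",
                                SOURCE, DEST])

    if __name__ == "__main__":
        sys.exit(sync())

Run from cron on each desktop, something along these lines keeps the local software area consistent without the NFS dependency.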

Laptops
Increasingly used for travel and as desktop replacements
Range of models and operating systems
Limited support, but aim to install SLC where required
Private VLAN, firewalled from the main subnet

Batch farm
17 dual-CPU 2.4 GHz Dell PowerEdge servers
Torque/Maui (job submission sketch below)
SLC3
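
For context on how the farm is used: Torque jobs are shell scripts with #PBS directives handed to qsub, with Maui doing the scheduling. The sketch below submits a trivial test job from Python; the queue name and resource request are assumptions rather than the farm's real settings.

    #!/usr/bin/env python
    """Submit a trivial test job to a Torque/Maui farm via qsub.

    The queue name and resource request are hypothetical.
    """
    import subprocess
    import tempfile

    JOB_LINES = [
        "#!/bin/sh",
        "#PBS -N hello-test",
        "#PBS -q short",                            # hypothetical queue name
        "#PBS -l nodes=1:ppn=1,walltime=00:10:00",
        'echo "Running on `hostname`"',
    ]
    JOB_SCRIPT = "\n".join(JOB_LINES) + "\n"

    def submit():
        # Write the job script to a temporary file and hand it to qsub,
        # which prints the new job identifier on success.
        script = tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False)
        script.write(JOB_SCRIPT)
        script.close()
        return subprocess.check_output(["qsub", script.name]).decode().strip()

    if __name__ == "__main__":
        print(submit())

Jobs arriving via the LCG service nodes (next slide) land in the same Torque queues, behind local users in priority.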

LCG
LCG service nodes submit jobs to the batch farm
Secondary to local users
Useful for development of Grid-enabled software
1 TB for ATLAS served by a DPM pool node

UCL Research Computing
Central Computing Cluster, part of London Tier-2
New SRIF3 procurement in progress
– expect over 2500 cores
– storage sufficient to meet GridPP plans
– also available for local job submission

Storage
Home directories on 300 GB RAID
– moving soon to larger RAID
Two 5 TB RAIDs (EonStor)
Two smaller RAIDs for backup (RLBackup)
Various RAIDs bought by CDF and MINOS
No tape!
Mounted via NFS on desktops and farm
Group quotas are a nuisance (usage-report sketch below)
– possibly move to Logical Volume Management (LVM)
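
Because per-group quotas are awkward on the current filesystems, one stop-gap is a periodic scan that totals usage per Unix group. The sketch below does this in Python; the mount point is an assumed placeholder and this is not a tool actually in use at UCL.

    #!/usr/bin/env python
    """Report disk usage per Unix group under one storage mount point.

    The mount point is a placeholder; real quotas or an LVM-based layout
    would replace this kind of ad hoc accounting.
    """
    import grp
    import os
    from collections import defaultdict

    MOUNT_POINT = "/data/raid1"  # hypothetical mount point

    def usage_by_group(top):
        totals = defaultdict(int)
        for dirpath, dirnames, filenames in os.walk(top):
            for name in filenames:
                try:
                    st = os.lstat(os.path.join(dirpath, name))
                except OSError:
                    continue  # file vanished or is unreadable
                totals[st.st_gid] += st.st_size
        return totals

    def group_name(gid):
        try:
            return grp.getgrgid(gid).gr_name
        except KeyError:
            return str(gid)

    if __name__ == "__main__":
        totals = usage_by_group(MOUNT_POINT)
        for gid, size in sorted(totals.items(), key=lambda item: -item[1]):
            print("%-12s %8.1f GB" % (group_name(gid), size / 1e9))

A nightly report of this sort is no substitute for real quotas, which is part of the motivation for looking at LVM and a cleaner division of the space.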

Mail
Recently installed new server – quite smooth!
More powerful machine
– old one was overwhelmed by virus scanning etc.
Dovecot and Exim
Introduced quotas and a maximum attachment size (size-check sketch below)
Spam filtering
– SpamAssassin on UCL mail hub
– still a problem
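
One way to sanity-check the new attachment limit is to push test messages of increasing size through the server and see where rejection starts. The sketch below does that with Python's smtplib; the host name, addresses and sizes are placeholders, not the real configuration.

    #!/usr/bin/env python
    """Probe the mail server's message size limit with test messages.

    Host name, addresses and test sizes are placeholders for this sketch.
    """
    import smtplib
    from email.mime.application import MIMEApplication
    from email.mime.multipart import MIMEMultipart

    SERVER = "mailhost.example.org"       # hypothetical mail server
    SENDER = "postmaster@example.org"     # hypothetical addresses
    RECIPIENT = "postmaster@example.org"

    def try_send(size_bytes):
        msg = MIMEMultipart()
        msg["Subject"] = "size-limit test (%d bytes)" % size_bytes
        msg["From"] = SENDER
        msg["To"] = RECIPIENT
        msg.attach(MIMEApplication(b"\0" * size_bytes, Name="padding.bin"))
        try:
            smtp = smtplib.SMTP(SERVER)
            smtp.sendmail(SENDER, [RECIPIENT], msg.as_string())
            smtp.quit()
            return True   # accepted
        except smtplib.SMTPException:
            return False  # rejected, e.g. message too large

    if __name__ == "__main__":
        for megabytes in (1, 5, 20):
            accepted = try_send(megabytes * 1024 * 1024)
            print("%2d MB: %s" % (megabytes, "accepted" if accepted else "rejected"))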

Web server
Group and user home pages
eLog
TWiki
Room booking, library database…

Collaborative tools
Video conferencing
– hardware H.323 clients (Aethra Vega X3)
– various software tools: iChat etc.
– VRVS
– EVO?
Voice over IP
– various tools available and used: Skype, Gizmo, iChat
– UCL producing guidance on Skype
– no clear best solution

Other services
SSH gateways
Code development server
– CVS
– Subversion
– Trac
Various databases
– mostly mirroring experiment databases
– each insists on its own preferred MySQL version (version-check sketch below)
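
Since each mirrored experiment database expects a particular MySQL release, a quick check of what each instance actually runs is useful. The sketch below uses the MySQLdb (MySQL-python) module; the host names, port and credentials are placeholders.

    #!/usr/bin/env python
    """Report the MySQL server version for each local database instance.

    Host names, port and credentials are placeholders; assumes the MySQLdb
    (MySQL-python) module is installed.
    """
    import MySQLdb

    INSTANCES = [                     # hypothetical instances, one per mirror
        ("atlas-db.example.org", 3306),
        ("minos-db.example.org", 3306),
    ]

    def server_version(host, port):
        conn = MySQLdb.connect(host=host, port=port, user="reader", passwd="secret")
        try:
            cur = conn.cursor()
            cur.execute("SELECT VERSION()")
            return cur.fetchone()[0]
        finally:
            conn.close()

    if __name__ == "__main__":
        for host, port in INSTANCES:
            print("%-30s %s" % (host, server_version(host, port)))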

Monitoring etc.
Nagios (example check sketch below)
Investigating remote management tools
– currently just SSH
– could use serial over IP
– remote power control
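
Nagios checks are just small programs that print one line of status and exit 0 (OK), 1 (WARNING), 2 (CRITICAL) or 3 (UNKNOWN), so site-specific checks are cheap to add. The sketch below is a hypothetical free-space check for one of the RAID mounts, not a check actually deployed.

    #!/usr/bin/env python
    """Nagios-style plugin: warn when a filesystem is nearly full.

    The mount point and thresholds are illustrative assumptions.
    """
    import os
    import sys

    MOUNT = "/data/raid1"        # hypothetical RAID mount point
    WARN, CRIT = 85.0, 95.0      # percent used

    def percent_used(path):
        st = os.statvfs(path)
        total = st.f_blocks * st.f_frsize
        avail = st.f_bavail * st.f_frsize
        return 100.0 * (total - avail) / total

    if __name__ == "__main__":
        try:
            used = percent_used(MOUNT)
        except OSError as err:
            print("UNKNOWN - cannot stat %s: %s" % (MOUNT, err))
            sys.exit(3)
        if used >= CRIT:
            status, code = "CRITICAL", 2
        elif used >= WARN:
            status, code = "WARNING", 1
        else:
            status, code = "OK", 0
        print("%s - %s is %.1f%% full" % (status, MOUNT, used))
        sys.exit(code)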

The future
Moving to new machine room
– latest estimate is November 2007
Move to SL(C)4
– mixture of 32- and 64-bit for a while
Would like to improve:
– hardware and configuration management
– collaborative tools: VC, VoIP
Would like to see more sharing of “best practice”
– LCG system management working group
– Wiki:

Issues
Providing consistent support from fractional FTEs
Storage
– choice of file system
– finding bottlenecks (sketch below)
– space management
Management of configuration, user accounts, hardware…
– the group has been growing
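
On the "finding bottlenecks" point, even a crude timed sequential write to local disk versus the NFS-mounted areas narrows the search down. The sketch below is that kind of throw-away comparison; the test directories and file size are assumptions.

    #!/usr/bin/env python
    """Crude sequential-write comparison between local and NFS-mounted areas.

    Test directories and size are placeholders; results give only a first
    rough idea of where a bottleneck sits.
    """
    import os
    import time

    TEST_DIRS = ["/tmp", os.path.expanduser("~")]  # local scratch vs. NFS home (assumed)
    SIZE_MB = 256
    CHUNK = b"\0" * (1024 * 1024)

    def write_speed(directory):
        path = os.path.join(directory, "iobench.tmp")
        start = time.time()
        f = open(path, "wb")
        for _ in range(SIZE_MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())   # make sure the data really reaches the server/disk
        f.close()
        elapsed = time.time() - start
        os.remove(path)
        return SIZE_MB / elapsed

    if __name__ == "__main__":
        for directory in TEST_DIRS:
            print("%-20s %6.1f MB/s" % (directory, write_speed(directory)))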