Oxford University Particle Physics Unix Overview
Sean Brisbane, Particle Physics Systems Administrator, Room 661, Tel.
Graduate Lectures, 14th October 2014

- Strategy
- Local Cluster Overview
- Connecting to it
- Grid Cluster
- Computer Rooms
- How to get help

Particle Physics Strategy: The Server / Desktop Divide
[Diagram] Desktops (Win 7 PCs, Ubuntu PCs, Linux desktops) on one side; servers on the other (general purpose Unix server, group DAQ systems, Linux worker nodes, web server, Linux file servers, virtual machine host, NIS server, torque server).
Approx. 200 desktop PCs, with Exceed, PuTTY or ssh/X windows used to access the PP Linux systems.

Physics fileservers and clients
Storage system      | Windows client              | Central Ubuntu client   | PP Linux client
Recommended storage | H:\ drive                   | /home folder            | /home and /data folders
Windows storage     | "H:\" drive or "Y:\home"    | /physics/home           |
PP storage          | Y:/LinuxUsers/pplinux       | /data/home              | /data/home, /data/experiment
Central Linux       | Y:/LinuxUsers/home/particle | /network/home/particle  |

Particle Physics Linux
- Unix Team (Room 661):
  - Pete Gronbech – Senior Systems Manager and GridPP Project Manager
  - Ewan MacMahon – Grid Systems Administrator
  - Kashif Mohammad – Grid and Local Support
  - Sean Brisbane – Local Server and User Support
- General purpose interactive Linux-based systems for code development, short tests and access to Linux-based office applications. These are accessed remotely.
- Batch queues are provided for longer and more intensive jobs. They are provisioned to meet peak demand and give a fast turnaround for final analysis.
- Systems run Scientific Linux (SL), a free Red Hat Enterprise Linux-based distribution.
- The Grid & CERN have migrated to SL6. The majority of the local cluster is also on SL6, but some legacy SL5 systems are provided for those that need them.
- We will be able to offer you the most help running your code on the newer SL6; some experimental software frameworks still require SL5.

Current Clusters
- Particle Physics local batch cluster
- Oxford's Tier 2 Grid cluster

PP Linux Batch Farm (Scientific Linux 6)
[Diagram] Interactive login nodes pplxint8 and pplxint9; worker nodes pplxwn15–pplxwn60 (a mixture of 8-core Intel 5420, 12-core Intel 5650 and 16-core Intel 2650 servers) plus jailxwn01 and jailxwn02 (64 AMD cores each).
Users log in to the interactive nodes pplxint8 & 9; the home directories and all the data disks (the /home area and /data/group areas) are shared across the cluster and are visible on the interactive machines and on all the batch-system worker nodes. Approximately 300 cores (430 including JAI/LWFA), each with 4GB of RAM.
The /home area is where you should keep your important text files such as source code, papers and your thesis. The /data/ area is where you should put your big, reproducible input and output data.
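The local batch system behind these worker nodes is Torque (the torque server appears in the strategy diagram earlier). The following is a minimal, hedged sketch of what a job submission from pplxint8 or 9 might look like on a Torque/PBS cluster; the script name, resource requests and the analysis executable are illustrative assumptions rather than site-confirmed settings, so check the local documentation for the real queue names and limits.

#!/bin/bash
#PBS -N my_analysis            # job name (illustrative)
#PBS -l nodes=1:ppn=1          # request one core on one worker node
#PBS -l walltime=02:00:00      # requested wall-clock time (assumed limit)
#PBS -j oe                     # merge stdout and stderr into one log file
cd $PBS_O_WORKDIR              # run from the directory the job was submitted in
./run_my_analysis              # hypothetical executable kept under /home, writing output to /data

Submit with "qsub myjob.sh" from pplxint8 or 9 and check progress with "qstat -u $USER".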

PP Linux Batch Farm (Scientific Linux 5)
[Diagram] Interactive login nodes pplxint5 and pplxint6; worker nodes pplxwn23–pplxwn30, each with 16 AMD 6128 cores.
Legacy SL5 jobs are supported by a smaller selection of worker nodes: currently eight servers with 16 cores each and 4GB of RAM per core. All of your files are available from both SL5 and SL6, but the software environment will be different, so your code may not run if it was compiled for the other operating system.

PP Linux Batch Farm Data Storage
[Diagram] NFS servers: home areas (pplxfsn, 19TB) and data areas (pplxfsn, 30TB and 2 x 40TB).
NFS is used to export data to the smaller experimental groups, where the partition size is less than the total size of a server.
The data areas are too big to be backed up. The servers have dual redundant PSUs and RAID 6 and run on uninterruptible power supplies. This safeguards against hardware failures, but does not help if you delete files.
The home areas are backed up nightly by two different systems: the Oxford ITS HFS service and a local backup system. If you delete a file, tell us as soon as you can, with the time you deleted it and its full name. The latest nightly backup of any lost or deleted files from your home directory is available at the read-only location /data/homebackup/{username}.
The home areas are quota'd, but if you require more space ask us. Store your thesis on /home, NOT /data.
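As a hedged illustration of recovering a deleted file from that read-only backup area: the directory and file names below are placeholders, and {username} stands for your own account name.

ls /data/homebackup/{username}/thesis/                         # check the lost file is in last night's backup (hypothetical path)
cp /data/homebackup/{username}/thesis/chapter1.tex ~/thesis/   # copy it back into your live home area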

Particle Physics Computing: Lustre
[Diagram] One Lustre MDS and four OSS servers (Lustre OSS01–OSS04, 18TB and 44TB units), mounted on the SL5 and SL6 nodes via pplxint5 and pplxint8.
The Lustre file system is used to group multiple file servers together to provide extremely large continuous file spaces. This is used for the Atlas and LHCb groups.

df -h /data/atlas
Filesystem             Size  Used  Avail  Use%  Mounted on
/lustre/atlas25/atlas  366T  199T  150T   58%   /data/atlas

df -h /data/lhcb
Filesystem  Size  Used  Avail  Use%  Mounted on
/lhcb25     118T   79T   34T   71%   /data/lhcb25

Strong Passwords etc.
- Use a strong password, not one open to dictionary attack!
  - fred123 – no good
  - Uaspnotda!09 – much better
- It is more convenient* to use ssh with a passphrased key stored on your desktop, once it is set up.
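For a Linux or Mac desktop, setting up such a passphrased key might look like the sketch below. This is a generic OpenSSH recipe, not an Oxford-specific procedure; the user name is a placeholder and pplxint9 is taken from the login-node names above, so confirm the full host name with the Unix team.

ssh-keygen -t rsa -b 4096         # generate a key pair; choose a strong passphrase when prompted
ssh-copy-id yourname@pplxint9     # append the public key to ~/.ssh/authorized_keys on the server
ssh-add ~/.ssh/id_rsa             # cache the unlocked key with your local ssh agent for this session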

Connecting with PuTTY
Question: how many of you are using Windows, and how many Linux, on the desktop?
Demo:
1. Plain ssh terminal connection
   1. From 'outside of physics'
   2. From the office (no password)
2. ssh with X windows tunnelled to passive Exceed
3. ssh, X windows tunnel, passive Exceed, KDE session
4. Password-less access from 'outside physics' (see backup slides)
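For those not on Windows/PuTTY, the rough command-line equivalent of the tunnelled X windows demo from inside the department is shown below; the account name is a placeholder, and the route from outside physics (gateway hosts, full host names) is deliberately not guessed here, so follow the demo or the support pages for that.

ssh -X yourname@pplxint8    # log in to an interactive node with X11 forwarding enabled
xterm &                     # quick test that graphical programs display back on your desktop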

SouthGrid Member Institutions
- Oxford
- RAL PPD
- Cambridge
- Birmingham
- Bristol
- Sussex
- JET at Culham

Current capacity
- Compute servers
  - Twin and twin-squared nodes: 1770 CPU cores
- Storage
  - Total of ~1300TB
  - The servers have between 12 and 36 disks; the more recent ones are 4TB capacity each. These use hardware RAID and UPS to provide resilience.

Get a Grid Certificate
You must remember to use the same PC to request and retrieve the Grid Certificate. The new UKCA page uses a Java-based certificate wizard.
You will then need to contact central Oxford IT. They will need to see you, with your university card, to approve your request:
To:
Dear Stuart Robeson and Jackie Hewitt,
Please let me know a good time to come over to the Banbury Road IT office for you to approve my grid certificate request.
Thanks.

When you have your grid certificate…
Save it to a file in your home directory on the Linux systems, e.g.:
Y:\Linuxusers\particle\home\{username}\mycert.p12
Log in to pplxint9 and run:
mkdir .globus
chmod 700 .globus
cd .globus
openssl pkcs12 -in ../mycert.p12 -clcerts -nokeys -out usercert.pem
openssl pkcs12 -in ../mycert.p12 -nocerts -out userkey.pem
chmod 400 userkey.pem
chmod 444 usercert.pem

Now Join a VO
- This is the Virtual Organisation, such as "Atlas", so that:
  - you are allowed to submit jobs using the infrastructure of the experiment
  - you can access data for the experiment
- Speak to your colleagues on the experiment about this. It is a different process for every experiment!

Joining a VO
- Your grid certificate identifies you to the grid as an individual user, but it's not enough on its own to allow you to run jobs; you also need to join a Virtual Organisation (VO).
- These are essentially just user groups, typically one per experiment, and individual grid sites can choose to support (or not) work by users of a particular VO.
- Most sites support the four LHC VOs; fewer support the smaller experiments.
- The sign-up procedures vary from VO to VO. UK ones typically require a manual approval step; LHC ones require an active CERN account.
- For anyone who is interested in using the grid but is not working on an experiment with an existing VO, we have a local VO we can use to get you started.

When that’s done
Test your grid certificate:
> voms-proxy-init --voms lhcb.cern.ch
Enter GRID pass phrase:
Your identity: /C=UK/O=eScience/OU=Oxford/L=OeSC/CN=j bloggs
Creating temporary proxy... Done
Consult the documentation provided by your experiment for ‘their’ way to submit and manage grid jobs.
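A quick follow-up check with the standard VOMS client tools (not an Oxford-specific command) shows what the proxy you just created contains and how long it has left:

voms-proxy-info --all    # print the proxy's identity, VO attributes and remaining lifetime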

Two computer rooms provide excellent infrastructure for the future
The new computer room built at Begbroke Science Park, jointly for the Oxford Super Computer and the Physics department, provides space for 55 (11kW) computer racks, 22 of which will be for Physics; up to a third of these can be used for the Tier 2 centre. This £1.5M project was funded by SRIF and a contribution of ~£200K from Oxford Physics. The room was ready in December, and the Oxford Tier 2 Grid cluster was moved there during spring. All new Physics high-performance clusters will be installed here.

Local Oxford DWB Physics Infrastructure Computer Room
Completely separate from the Begbroke Science Park, a computer room with 100kW of cooling and >200kW of power has been built for ~£150K of Oxford Physics money. This local Physics department infrastructure computer room was completed in September. It allowed local computer rooms to be refurbished as offices again, and racks that were in unsuitable locations to be re-housed.

Cold aisle containment

Other resources (for free)
- Oxford Advanced Research Computing (ARC)
  - A shared cluster of CPU nodes, "just" like the local cluster here
  - GPU nodes: faster for 'fitting', toy studies and MC generation, *IFF* your code is written in a way that supports them
  - Moderate disk space allowance per experiment (<5TB)
- Emerald
  - A huge farm of GPUs
- Both need a separate account and project
  - Come and talk to us in room 661.

The end of the overview
- Now more details of the use of the clusters
- Help pages:
  - physics/particle-physics-computer-support
- ARC

BACKUP

PuTTYgen: create an ssh key on Windows (previous slide, point #4)
- Enter a strong passphrase.
- Save the private part of the key to a subdirectory of your local drive.
- Paste the public part of the key into ~/.ssh/authorized_keys on pplxint.
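On the pplxint side, the "paste" step might look like the sketch below; the key text is a truncated placeholder, not a real key, and the permissions shown are the usual OpenSSH requirements rather than a site-specific rule.

mkdir -p ~/.ssh && chmod 700 ~/.ssh                # make sure the .ssh directory exists and is private
cat >> ~/.ssh/authorized_keys <<'EOF'
ssh-rsa AAAAB3...rest-of-public-key... you@your-desktop
EOF
chmod 600 ~/.ssh/authorized_keys                   # keep the file private so sshd will accept it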

Pageant
- Run Pageant once after login.
- Right-click on the Pageant icon in the system tray and select "Add Key" to load your private (Windows) ssh key.

Network
- Gigabit JANET connection to campus, July.
- Second JANET gigabit connection, Sept.
- JANET campus connection upgraded to dual 10 gigabit links, August 2009.
- A gigabit Juniper firewall manages the internal and external Physics networks.
- 10Gb/s network links installed between the Tier-2 and Tier-3 clusters.
- Physics-wide wireless network, installed in DWB public rooms, Martin Wood, AOPP and Theory. A new firewall provides routing and security for this network.

Network Access
[Diagram] Campus network access: Super Janet 4 with 2 x 10Gb/s links to Janet 6, feeding the campus backbone router and backbone edge routers (departments at 100Mb/s and 1Gb/s); Physics is reached via the OUCS firewall, the Physics firewall and the Physics backbone router over 1Gb/s and 10Gb/s links.

Physics Backbone
[Diagram] The Physics backbone is built on Dell 8024F switches: 10Gb/s links connect the Physics firewall, the Physics backbone switch, the departmental switches (Particle Physics, Clarendon Lab, Astro, Theory, Atmos) and the server switches; Linux servers and the Super FRODO / Frodo systems attach at 10Gb/s, desktops and a Win 2k server at 1Gb/s.