1 Oxford University Particle Physics Unix Overview
Sean Brisbane, Particle Physics Systems Administrator
Room 661, Tel 73389, s.brisbane1@physics.ox.ac.uk
Graduate Lectures, 14th October 2015

2
- Strategy
- Local Cluster Overview
- Connecting to it
- Grid Cluster
- Computer Rooms
- How to get help

3 Particle Physics Strategy: The Server / Desktop Divide
[Diagram: desktop and laptop clients (Win 7 PCs, Ubuntu PCs, Linux desktop clients, laptops) on one side; servers on the other (general purpose Unix server, group DAQ systems, Linux worker nodes, web server, Linux file servers, virtual machine host, NIS server, torque server).]

4 Distributed Model
Files are not stored on the machine you are using, but in remote locations. You can also run a different operating system by making a remote connection to it.

5 Recommended working strategy
- Computing work splits broadly into: office work (local/remote), writing code (remote), and computation (remote).
- Use your favourite desktop/laptop for office files.
- Make a remote desktop connection to pplxint8/9 to do computing work on the batch farm.
- It is better to write code on the remote Linux desktop.

6 Physics Remote desktops
- We use RDP for remote desktop (example client commands are sketched below).
- This means that everyone in particle physics has access to multiple desktop environments from anywhere:
  - Windows (termserv.physics.ox.ac.uk)
  - Scientific Linux (pplxint8/9)
  - Ubuntu Linux (ubuntu-trusty-ts)
  - Mac OS X (osxts, via VNC)
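A minimal sketch of connecting to these remote desktops with standard clients; the exact hostnames and options are best taken from the Physics IT pages, and the short Linux hostname is assumed to resolve from inside the Physics network:

    # From Windows, the built-in Remote Desktop client:
    #   mstsc /v:termserv.physics.ox.ac.uk
    # From a Linux machine with FreeRDP installed:
    xfreerdp /v:pplxint8 /u:yourname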

7 Physics storage
File servers: Windows server, central Linux file server, PP file server.
Recommended storage per client: the H:\ drive on Windows, the /home folder on central Ubuntu, and the /home and /data folders on PP Linux.
Where each storage system is seen:
- Windows storage: "H:\" drive or "Y:\home" (Windows); /physics/home (Linux)
- PP storage: Y:/LinuxUsers/pplinux/data/home (Windows); /data/home and /data/experiment (PP Linux)
- Central Linux storage: Y:/LinuxUsers/home/particle (Windows); /network/home/particle (Linux)

8 To store files on servers using your laptop
- Windows: map your H:\ drive by typing
  net use H: https://winfe.physics.ox.ac.uk/home/yourname /USER:yourname
  where yourname is your physics user name.
- OS X: http://www2.physics.ox.ac.uk/it-services/connecting-to-physics-file-servers-from-os-x
- Linux: http://www2.physics.ox.ac.uk/it-services/access-windows-shares-from-linux (an illustrative mount command is sketched below)
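For Linux laptops the page above has the definitive instructions; purely as an illustration, a CIFS mount along the following lines is typical. The share path, and whether the server exports it over SMB at all, are assumptions here:

    sudo mkdir -p /mnt/physics-home
    sudo mount -t cifs //winfe.physics.ox.ac.uk/home/yourname /mnt/physics-home \
         -o username=yourname,uid=$(id -u),gid=$(id -g)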

9 RDP and storage demo
- H:\ drive on Windows
- Connecting to Linux on pplxint8 from Windows
- /home and /data on Linux

10 Particle Physics Linux
- Unix Team (Room 661):
  - Sean Brisbane – Local Server and User Support
  - Kashif Mohammad – Grid and Local Support
  - Ewan MacMahon – Grid Systems Administrator
  - Pete Gronbech – Senior Systems Manager and GridPP Project Manager
- General purpose interactive Linux-based systems for code development, short tests and access to Linux-based office applications. These are accessed remotely.
- Batch queues are provided for longer and intensive jobs. They are provisioned to meet peak demand and give a fast turnaround for final analysis.
- Our local systems run Scientific Linux (SL), a free Red Hat Enterprise Linux based distribution (the same as the Grid and CERN).
- We will be able to offer you the most help running your code on SL6.

11 Current Clusters
- Particle Physics local batch cluster
- Oxford's Tier 2 Grid cluster

12 PP Linux Batch Farm (Scientific Linux 6)
- Users log in to the interactive nodes pplxint8 and pplxint9; the home directories and all the data disks (the /home area and /data/group) are shared across the cluster and visible on the interactive machines and all the batch-system worker nodes (a sample job script is sketched below).
- Approximately 600 cores (430 incl. JAI/LWFA), each with 4 GB of RAM.
- The /home area is where you should keep your important text files such as source code, papers and your thesis.
- The /data area is where you should put your big, reproducible input and output data.
[Cluster layout: interactive login nodes pplxint8/9; grid data transfer nodes pplxdatatrans; batch worker nodes pplxwn31–pplxwn68 (mixtures of 12-core Intel 5650, 16-core Intel 2650 and other 16-core Intel machines) plus jailxwn01 and jailxwn02 (64 AMD cores each).]
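The farm's batch server is torque (see the strategy slide), so submitting a job from pplxint8/9 looks broadly like the sketch below. The resource requests, queue defaults and the analysis command itself are illustrative assumptions; check the local help pages for the recommended settings:

    #!/bin/bash
    # myjob.sh -- minimal torque/PBS job script (illustrative only)
    #PBS -l nodes=1:ppn=1
    #PBS -l walltime=04:00:00
    cd $PBS_O_WORKDIR                                      # run from the directory the job was submitted from
    ./my_analysis --input /data/myexperiment/input.root    # hypothetical executable and data path

    # Submit and monitor from an interactive node:
    qsub myjob.sh
    qstat -u $USER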

13 PP Linux Batch Farm: Data Storage
- Storage is served from NFS and Lustre servers (pplxfsn): home areas on a 19 TB server and data areas on 30–40 TB servers.
- The /data areas are big, fast disks. They are too big to be backed up, but they have some redundancy features and are safer than laptop storage. This does not help if you delete files.
- The /home areas are backed up nightly by two different systems. The latest nightly backup of any lost or deleted files from your home directory is available at the read-only location /data/homebackup/{username} (example below).
- If you need older files, tell us.
- If you need more space on /home, tell us.
- Store your thesis on /home, NOT /data.
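A hypothetical example of using that backup area (the file name is made up, and $USER is assumed to match your Physics username):

    # recover last night's copy of a file deleted from /home
    cp /data/homebackup/$USER/thesis/chapter1.tex ~/thesis/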

14 Two computer rooms provide excellent infrastructure for the future
The new computer room at Begbroke Science Park, built jointly for the Oxford Supercomputer and the Physics department, provides space for 55 computer racks (11 kW each), 22 of which will be for Physics. Up to a third of these can be used for the Tier 2 centre. This £1.5M project was funded by SRIF with a contribution of ~£200K from Oxford Physics. The room was ready in December 2007, and the Oxford Tier 2 Grid cluster was moved there during spring 2008. All new Physics high-performance clusters will be installed here.

15 (no text content on this slide)

16 Local Oxford DWB Physics Infrastructure Computer Room
Completely separate from the Begbroke Science Park room, a local Physics department infrastructure computer room with 100 kW of cooling and >200 kW of power has been built, using ~£150K of Oxford Physics money. It was completed in September 2007. This allowed local computer rooms to be refurbished as offices again, and racks that were in unsuitable locations to be re-housed.

17 (no text content on this slide)

18 Strong passwords etc.
- Use a strong password, not one open to dictionary attack!
  - fred123 – no good
  - Uaspnotda!09 – much better
- Once set up, it is more convenient to use ssh with a passphrased key stored on your desktop (sketch below).
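For an OpenSSH client on your own Linux or Mac machine, the key setup is roughly the sketch below (PuTTY users: see the Puttygen and Pageant slides in the backup section). The full hostname form is an assumption:

    ssh-keygen -t ed25519                             # choose a strong passphrase when prompted
    ssh-copy-id yourname@pplxint8.physics.ox.ac.uk    # install the public key on the server
    ssh yourname@pplxint8.physics.ox.ac.uk            # subsequent logins use the key + passphrase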

19 Connecting with PuTTY to Linux
Demo:
1. Plain ssh terminal connection
   a. from 'outside of physics'
   b. from the office (no password)
2. ssh with X windows tunnelled to passive Exceed, for single apps (an OpenSSH equivalent is sketched below)
3. Password-less access from 'outside physics'
http://www2.physics.ox.ac.uk/it-services/ppunix/ppunix-cluster
http://www.howtoforge.com/ssh_key_based_logins_putty
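On Linux or Mac, the equivalent of item 2 is plain ssh with X11 forwarding; a one-line sketch, again assuming the hostname form:

    ssh -X yourname@pplxint8.physics.ox.ac.uk    # graphical apps started on pplxint8 display locally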

20 (no text content on this slide)

21 Other resources (for free)
- Oxford Advanced Research Computing (ARC)
  - A shared cluster of CPU nodes, "just" like the local cluster here
  - GPU nodes: faster for 'fitting', toy studies and MC generation, *iff* the code is written in a way that supports them
  - Moderate disk space allowance per experiment (<5 TB)
  - http://www.arc.ox.ac.uk/content/getting-started
- The Grid
  - A massive, globally connected computer farm for big computing projects
  - ATLAS, LHCb, T2K and SNO: please stay at the end!
- Come and talk to us in room 661 about these.

22 The end of the overview
- Now more details on the use of the clusters.
- Help pages:
  - http://www.physics.ox.ac.uk/it/unix/default.htm
  - http://www2.physics.ox.ac.uk/research/particle-physics/particle-physics-computer-support
- ARC:
  - http://www.arc.ox.ac.uk/content/getting-started
- Email: pp_unix_admin@physics.ox.ac.uk
- GRID talk at the end.

23 GRID certificates
ATLAS, SNO, LHCb and T2K: please read this.

24 SouthGrid Member Institutions
- Oxford
- RAL PPD
- Cambridge
- Birmingham
- Bristol
- Sussex
- JET at Culham

25 Current capacity
- Compute servers
  - Twin and twin-squared nodes
  - 1770 CPU cores
- Storage
  - Total of ~1300 TB
  - The servers have between 12 and 36 disks each; the more recent ones use 4 TB disks. They use hardware RAID and UPS to provide resilience.

26 Get a Grid Certificate
You must remember to use the same PC to request and retrieve the Grid certificate. The new UKCA page http://www.ngs.ac.uk/ukca uses a Java-based certificate wizard.
You will then need to contact central Oxford IT. They will need to see you, with your university card, to approve your request:
To: help@it.ox.ac.uk
Dear Stuart Robeson and Jackie Hewitt,
Please let me know a good time to come over to the Banbury Road IT office for you to approve my grid certificate request.
Thanks.

27 When you have your grid certificate…
Save it to a file in your home directory on the Linux systems, e.g.:
Y:\Linuxusers\particle\home\{username}\mycert.p12
Then log in to pplxint9 and run:
mkdir .globus
chmod 700 .globus
cd .globus
openssl pkcs12 -in ../mycert.p12 -clcerts -nokeys -out usercert.pem
openssl pkcs12 -in ../mycert.p12 -nocerts -out userkey.pem
chmod 400 userkey.pem
chmod 444 usercert.pem

28 Now join a VO
- A VO is a Virtual Organisation, such as "Atlas". Joining one means:
  - you are allowed to submit jobs using the infrastructure of the experiment
  - you can access data for the experiment
- Speak to your colleagues on the experiment about this; it is a different process for every experiment!

29 Joining a VO
- Your grid certificate identifies you to the grid as an individual user, but it's not enough on its own to allow you to run jobs; you also need to join a Virtual Organisation (VO).
- These are essentially just user groups, typically one per experiment, and individual grid sites can choose to support (or not) work by users of a particular VO.
- Most sites support the four LHC VOs; fewer support the smaller experiments.
- The sign-up procedures vary from VO to VO: UK ones typically require a manual approval step, LHC ones require an active CERN account.
- For anyone who is interested in using the grid but is not working on an experiment with an existing VO, we have a local VO we can use to get you started.

30 When that's done
- Test your grid certificate:
  > voms-proxy-init --voms lhcb.cern.ch
  Enter GRID pass phrase:
  Your identity: /C=UK/O=eScience/OU=Oxford/L=OeSC/CN=j bloggs
  Creating temporary proxy ..................... Done
- Consult the documentation provided by your experiment for 'their' way to submit and manage grid jobs.
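Not shown on the slide, but the standard VOMS client also lets you inspect the proxy you have just created:

    voms-proxy-info --all    # prints identity, VO attributes and remaining proxy lifetime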

31 BACKUP

32 Puttygen: create an ssh key on Windows (previous slide point #4)
- Enter a strong passphrase.
- Save the private part of the key to a subdirectory of your local drive.
- Paste the public part into ~/.ssh/authorized_keys on pplxint (see the sketch below).
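A sketch of the paste step on pplxint8/9, assuming the public-key text from Puttygen is on your clipboard; the permissions need to be this strict or sshd will ignore the key:

    mkdir -p ~/.ssh && chmod 700 ~/.ssh
    cat >> ~/.ssh/authorized_keys     # paste the single public-key line, press Enter, then Ctrl-D
    chmod 600 ~/.ssh/authorized_keys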

33 Pageant
- Run Pageant once after login.
- Right-click on the Pageant icon in the system tray and choose "Add Key" to add your private (Windows) ssh key.

34 Network
- Gigabit JANET connection to campus, July 2005.
- Second JANET gigabit connection, September 2007.
- JANET campus connection upgraded to dual 10-gigabit links, August 2009.
- Gigabit Juniper firewall manages the internal and external Physics networks.
- 10 Gb/s network links installed between the Tier-2 and Tier-3 clusters in 2012.
- Physics-wide wireless network, installed in DWB public rooms, Martin Wood, AOPP and Theory. A new firewall provides routing and security for this network.

35 Network Access
[Diagram: Super Janet 4 (2 x 10 Gb/s with Janet 6) connects to the campus backbone router; traffic passes through the OUCS firewall and backbone edge routers (10 Gb/s, with departments at 100 Mb/s–1 Gb/s) to the Physics firewall and the Physics backbone router (1 Gb/s and 10 Gb/s links).]

36 Physics Backbone
[Diagram: the Physics firewall and Physics backbone switch (Dell 8024F) connect at 10 Gb/s to per-department Dell 8024F switches (Particle Physics, Clarendon Lab, Astro, Theory, Atmos), with desktops at 1 Gb/s; an S4810 server switch carries Linux servers, a Win 2k server and Frodo/Super FRODO at 10 Gb/s.]

