1
Oxford University Particle Physics Unix Overview
Pete Gronbech, Particle Physics Senior Systems Manager & GridPP Project Manager
Graduate Lectures, 10th October 2016
2
Strategy
Local Cluster Overview
Connecting to it
Grid Cluster
Computer Rooms
How to get help
3
Particle Physics Strategy The Server / Desktop Divide
Diagram: servers (virtual machine host servers, general-purpose Unix server, Linux worker nodes, group DAQ systems, Linux file servers, web server, NIS server, torque server) on one side; desktop clients (Windows 7 PCs, Ubuntu PCs, laptops, Linux desktops) on the other.
Users should use Exceed from their Windows desktop PCs to connect to the Unix servers. Do not store files on the desktop; they are not backed up. The servers' home areas are backed up.
4
Distributed Model
Files are not stored on the machine you are using, but in remote locations. You can run a different operating system by making a remote connection to it.
5
Recommended working strategy
Computing work splits broadly into:
Office work (local or remote)
Writing code (remote)
Computation (remote)
Use your favourite desktop or laptop for office files. Make a remote desktop connection to pplxint8/9 to do computing work on the batch farm. It is better to write code on the remote Linux desktop.
6
Physics Remote desktops
We use RDP for remote desktops. This means that everyone in particle physics has access to multiple desktop environments from anywhere:
Windows (termserv.physics.ox.ac.uk)
Scientific Linux (pplxint8/9)
Ubuntu Linux (ubuntu-trusty-ts)
Mac OS X (osxts, via VNC)
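As an illustration, from a Linux machine one way to open such a remote desktop is with FreeRDP, assuming it is installed; the username is a placeholder, and RDP access to the Linux hosts is assumed to work the same way as to the Windows terminal server:
xfreerdp /u:yourname /v:termserv.physics.ox.ac.uk
xfreerdp /u:yourname /v:pplxint8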
7
Storage
Storage system            | Windows client                             | Central Ubuntu client   | PP Linux client
Recommended storage       | H:\ drive                                  | /home folder            | /home and /data folders
Physics Windows server    | "H:\" drive or "Y:\home"                   | /physics/home           |
PP file-server            | Y:/LinuxUsers/pplinux/ (data dirs or home) |                         | /data/home, /data/experiment
Central Linux file-server | Y:/LinuxUsers/home/particle                | /network/home/particle  |
8
To store files on servers using your laptop
Windows: map your H:\ drive by typing net use H: ... /USER:yourname, where yourname is your Physics user name.
OS X and Linux: see the sketch below.
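A minimal command-line sketch; the Windows file-server and share names are placeholders (ask the Unix team for the real paths), and the sshfs example assumes your Linux home area is reachable over ssh via pplxint8:
REM Windows (cmd.exe); <fileserver> and <share> are placeholders
net use H: \\<fileserver>\<share> /USER:yourname
# Linux / OS X: mount your remote Linux home area over ssh
mkdir -p ~/pp-home
sshfs yourname@pplxint8: ~/pp-home
# unmount when finished (Linux; on OS X use: umount ~/pp-home)
fusermount -u ~/pp-home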
9
RDP and storage demo
H:\ drive on Windows
Connecting to Linux on pplxint8 from Windows
/home and /data on Linux
10
Particle Physics Linux
Unix Team (Room 661):
Vipul Davda – Grid and Local Support
Kashif Mohammad – Grid and Local Support
Pete Gronbech – Senior Systems Manager and GridPP Project Manager
General-purpose interactive Linux systems are provided for code development, short tests and access to Linux-based office applications; these are accessed remotely. Batch queues are provided for longer and more intensive jobs, provisioned to meet peak demand and give a fast turnaround for final analysis. Our local systems run Scientific Linux (SL), a free Red Hat Enterprise based distribution, the same as the Grid and CERN, so we will be able to offer you the most help running your code on SL6.
11
Current Clusters
Particle Physics local batch cluster
Oxford's Tier 2 Grid cluster
12
PP Linux Batch Farm Scientific Linux 6
Users log in to the interactive nodes pplxint8 and pplxint9. The home directories and all the data disks (the /home area and /data/group) are shared across the cluster and are visible on the interactive machines and all the batch-system worker nodes. There are approximately 600 cores (including 128 cores for JAI/LWFA), each with 4GB of RAM. The /home area is where you should keep your important text files such as source code, papers and your thesis; the /data area is where you should put your big, reproducible input and output data.
Nodes: jailxwn01 and jailxwn02 (64 AMD cores each); pplxwn67 and pplxwn68 (16 Intel cores each); pplxwn41 through pplxwnnn (16 Intel 2650 cores each); pplxwn31 and pplxwn32 (12 Intel 5650 cores each); pplxdatatrans (Grid data transfer nodes); pplxint8 and pplxint9 (interactive login nodes).
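The farm is driven by a torque batch server (see the strategy slide), so submitting work from pplxint8/9 might look roughly like the sketch below; the script name, resource requests and program are illustrative assumptions, not local policy:
#!/bin/bash
# myjob.sh - illustrative batch script
#PBS -N myanalysis
#PBS -l nodes=1:ppn=1
#PBS -l walltime=01:00:00
# run from the directory the job was submitted from
cd $PBS_O_WORKDIR
# hypothetical program; keep large output under /data, not /home
./my_analysis > output.log
Submit and monitor it from an interactive node:
qsub myjob.sh
qstat -u $USER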
13
PP Linux Batch Farm Data Storage
NFS and Lustre storage servers (pplxfsn): data areas of 40TB, 40TB and 30TB, plus 19TB of home areas.
The /data areas are big, fast disks. They are too big to be backed up, but they have some redundancy features and are safer than laptop storage; this does not help if you delete files.
The /home areas are backed up nightly by two different systems. The latest nightly backup of any lost or deleted files from your home directory is available at the read-only location /data/homebackup/{username}. If you need older files, tell us. If you need more space on /home, tell us. Store your thesis on /home, NOT /data.
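For example, recovering a file accidentally deleted from /home could look like this (the file path is just an illustration):
cp /data/homebackup/$USER/thesis/chapter1.tex ~/thesis/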
14
Local Oxford DWB Physics Infrastructure Computer Room
Completely separate from the Begbroke Science Park, a local Physics department infrastructure computer room with 100kW of cooling and over 200kW of power was built with ~£150K of Oxford Physics money and completed in September 2007. This allowed local computer rooms to be refurbished as offices again, and racks that were in unsuitable locations to be re-housed.
15
Two Computer Rooms provide excellent infrastructure for the future
The new computer room at Begbroke Science Park, built jointly for the Oxford Supercomputer and the Physics department, provides space for 55 computer racks (11kW each), 22 of which will be for Physics; up to a third of these can be used for the Tier 2 centre. This £1.5M project was funded by SRIF and a contribution of ~£200K from Oxford Physics. The room was ready in December, and the Oxford Tier 2 Grid cluster was moved there during the spring. All new Physics high-performance clusters will be installed here.
16
17
Strong Passwords etc.
Use a strong password not open to dictionary attack!
fred123 – no good
Uaspnotda!09 – much better
Once set up, it is more convenient to use ssh with a passphrased key stored on your desktop.
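On Linux or OS X the key setup is roughly the sketch below; the full hostname is an assumption, and Windows users can do the equivalent with PuTTYgen and Pageant (see the backup slides):
ssh-keygen -t rsa -b 4096                          # choose a strong passphrase when prompted
ssh-copy-id yourname@pplxint8.physics.ox.ac.uk     # appends your public key to ~/.ssh/authorized_keys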
18
Connecting with PuTTY to Linux
Demo:
Plain ssh terminal connection, from outside of Physics and from the office (no password)
ssh with X Windows tunnelled to passive Exceed (single apps)
Password-less access from outside Physics
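From a Linux or OS X terminal, the rough equivalent of the PuTTY + Exceed setup is a single command (the full hostname is assumed):
ssh -X yourname@pplxint8.physics.ox.ac.uk     # -X tunnels X11 so single graphical apps display on your desktop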
19
20
Other resources (for free)
Oxford Advanced Research Computing (ARC):
A shared cluster of CPU nodes, "just" like the local cluster here
GPU nodes: faster for fitting, toy studies and MC generation, but only if the code is written in a way that supports them
Moderate disk space allowance per experiment (<5TB)
The Grid:
A massive, globally connected computer farm for big computing projects
ATLAS, LHCb, T2K and SNO: please stay at the end!
Come and talk to us in room 661 about these.
21
The end of the overview. Now, more details on the use of the clusters:
Help pages
ARC
Grid talk at the end
22
Grid certificates
ATLAS, SNO, LHCb and T2K: please read this
23
SouthGrid Member Institutions
Oxford
RAL PPD
Cambridge
Birmingham
Bristol
Sussex
JET at Culham
24
Current capacity
Compute servers: twin and twin-squared nodes, ~2500 CPU cores
Storage: a total of ~1000TB. The servers have between 12 and 36 disks; the more recent ones are 4TB each. These use hardware RAID and UPS to provide resilience.
25
Get a Grid Certificate http://www.ngs.ac.uk/ukca
You must remember to use the same PC to request and retrieve the grid certificate; the new UKCA page uses a Java-based certificate wizard. You will then need to contact central Oxford IT. They will need to see you, with your university card, to approve your request. For example:
To:
Dear Stuart Robeson and Jackie Hewitt,
Please let me know a good time to come over to the Banbury Road IT office for you to approve my grid certificate request.
Thanks.
26
When you have your grid certificate…
Save the certificate to a file in your home directory on the Linux systems, e.g. Y:\Linuxusers\particle\home\{username}\mycert.p12, then log in to pplxint9 and run:
mkdir .globus
chmod 700 .globus
cd .globus
# extract the certificate
openssl pkcs12 -in ../mycert.p12 -clcerts -nokeys -out usercert.pem
# extract the private key
openssl pkcs12 -in ../mycert.p12 -nocerts -out userkey.pem
chmod 400 userkey.pem
chmod 444 usercert.pem
27
Now join a VO
This is the Virtual Organisation, such as "ATLAS", so that:
you are allowed to submit jobs using the infrastructure of the experiment
you can access data for the experiment
Speak to your colleagues on the experiment about this; it is a different process for every experiment!
28
Joining a VO
Your grid certificate identifies you to the grid as an individual user, but it is not enough on its own to allow you to run jobs; you also need to join a Virtual Organisation (VO). These are essentially just user groups, typically one per experiment, and individual grid sites can choose whether or not to support work by users of a particular VO. Most sites support the four LHC VOs; fewer support the smaller experiments. The sign-up procedures vary from VO to VO: UK ones typically require a manual approval step, and LHC ones require an active CERN account. For anyone who is interested in using the grid but is not working on an experiment with an existing VO, we have a local VO we can use to get you started.
29
When that's done, test your grid certificate:
> voms-proxy-init --voms lhcb.cern.ch
Enter GRID pass phrase:
Your identity: /C=UK/O=eScience/OU=Oxford/L=OeSC/CN=j bloggs
Creating temporary proxy ... Done
Consult the documentation provided by your experiment for 'their' way to submit and manage grid jobs.
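You can also inspect the resulting proxy with the standard VOMS client tool:
voms-proxy-info --all      # shows your identity, VO attributes and the remaining proxy lifetime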
30
BACKUP
31
Puttygen to create an ssh key on Windows (previous slide point #4)
Paste the public key into ~/.ssh/authorized_keys on pplxint. Then:
Enter a strong passphrase
Save the private part of the key to a subdirectory of your local drive
If you are likely to then hop to other nodes, add "ForwardAgent yes" to a file called config in the .ssh directory on pplxint (see the sketch below).
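A minimal sketch of that config file; the host pattern is illustrative:
# ~/.ssh/config on pplxint
Host pplxwn*
    ForwardAgent yes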
32
Pageant
Run Pageant once after login. Right-click on the Pageant icon and choose "Add key" to load your private (Windows) ssh key.
33
Network
Gigabit JANET connection to campus, July 2005
Second JANET gigabit connection, September 2007
JANET campus connection upgraded to dual 10 gigabit links, August 2009
A gigabit Juniper firewall manages the internal and external Physics networks
10Gb/s network links installed between the Tier-2 and Tier-3 clusters in 2012
Campus connection is now 2 x 20Gb/s in an active/passive configuration; each link is made up of two 10Gb/s aggregated links
Physics-wide wireless network installed in DWB public rooms, Martin Wood, AOPP and Theory; a new firewall provides routing and security for this network
34
Network Access
Diagram: Janet connection (20Gb/s, active/passive with Janet 6) into the OUCS firewall and 10Gb/s backbone edge routers, through the campus backbone router to the Physics firewall (1Gb/s) and the Physics backbone router, with other departments attached at 100Mb/s to 10Gb/s.
35
Physics Backbone
Diagram: the Physics backbone switch (Dell 8024F) connects the Physics firewall, the Particle Physics switch (Dell 8024F), a server switch (S4810) carrying Linux servers, Frodo and Super FRODO, a Windows server, and the Clarendon Lab, Astro, Atmos and Theory switches (Dell 8024F), over a mix of 1Gb/s and 10Gb/s links.