Israel Cluster Structure
Outline
The local cluster
Local analysis on the cluster
– Program location
– Storage
– Interactive analysis & batch analysis
– PBS
– Grid-UI
The Local Cluster
Program Location
All the software is installed in a local software directory, which can be accessed through $ATLAS_INST_PATH. The directory structure of $ATLAS_INST_PATH:
– setup environment – general scripts that define all the necessary environment variables for each installed program. A master script, setupEnv.sh, sources all the other scripts.
– initialization – general scripts that build up the user environment. For example, 'setupAthena' builds the Athena environment, creates a default requirements file and creates an init.sh script (see previous tutorial).
– swInstallation – install scripts, relevant for the software administrator only.
– athena – the different Athena kits:
  releases – the installed release versions.
  nightly – nightlies that are downloaded regularly during the night (not in use for now).
  groupArea – group areas for certain Athena projects. For now only the tutorial groupArea is installed.
– atlantis – one locally installed version of Atlantis. Should be updated to the latest version from time to time.
– gridTools:
  dq2 – the latest version of dq2 only. Should be updated on a regular basis.
  ganga – a version of Ganga (see the Grid tutorial). Should be updated from time to time.
– generators – the different generators in use.
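As a sketch of how a user would pick up this environment, assuming the layout above (the exact directory names, the release number, and the location of init.sh may differ on your cluster):

  # Source the master environment script (path assumed from the layout above)
  source $ATLAS_INST_PATH/setupEnvironment/setupEnv.sh

  # Build the Athena user environment for a given release
  # ('setupAthena' usage is illustrative; see the previous tutorial)
  setupAthena 14.2.10

  # The initialization step leaves an init.sh in your work area;
  # source it in every new shell to restore the same environment
  source init.sh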
Storage
Home directories are backed up. Backup is expensive and is charged per MB, and each backup is kept for 6 months, so even if you delete data the next day we keep paying for it for 6 months. Keep your data on the large disks (Panasas or Thumper) and only your analysis code in your home directory.
Delete old data: it is very easy to overload the disks with old MC samples. But be careful, after deletion there is no turning back. Data management and data control guidelines will be issued at a later time.
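A quick way to see what is occupying your home directory before moving data to the large disks (the mount points below are placeholders; use your cluster's actual paths):

  # Summarize home-directory usage, largest entries first
  du -sh $HOME/* | sort -rh | head

  # Check free space on the large data disks (mount points are assumed)
  df -h /panasas /thumper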
Interactive Analysis & Batch Analysis
The local cluster holds:
– ~140 CPUs (TAU ~56 CPUs)
– ~70 TB of disk
– 1-2 interactive workstations
Interactive work:
– First look at the data – most of the time with ARA/pyAthena/MATLAB/ROOT
– Code development
– Testing jobs before submission in batch mode (see the sketch after this list)
Batch mode:
– Anything that takes more than ~1 hr is probably best sent as a batch job. Batch jobs can run either on the Grid or on the local cluster.
– Grid – all jobs that need datasets stored on the Grid must run on the Grid. Do not copy data to the local cluster! The Grid is also suited for long jobs that can be fragmented into several small jobs.
– Local – short jobs, or jobs on locally stored data.
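As an illustration of testing a job interactively before sending it to batch (MyJobOptions.py is a placeholder for your own job options; overriding EvtMax is one common way to shorten an Athena run):

  # Run only a few events interactively to validate the job options
  athena.py -c "theApp.EvtMax = 10" MyJobOptions.py

  # If it runs cleanly, submit the full job to the batch system (see the PBS slide)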
PBS
PBS (Portable Batch System) is the batch system installed on the local cluster. Jobs are submitted to queues, and the queue is chosen according to the job length. On lxplus there is a similar system, LSF (Load Sharing Facility). A minimal submission sketch is shown below.
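A minimal PBS job script, assuming a queue named 'short' exists (queue names, walltime limits, and the job's contents are site-specific assumptions):

  #!/bin/bash
  # myjob.sh - minimal PBS job script (queue name and walltime are assumptions)
  #PBS -q short
  #PBS -l walltime=01:00:00
  #PBS -N myAnalysis

  cd $PBS_O_WORKDIR          # start in the directory the job was submitted from
  source init.sh             # restore the Athena environment (see Program Location)
  athena.py MyJobOptions.py  # run the analysis job

Submit the script with 'qsub myjob.sh' and monitor it with 'qstat -u $USER'.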
Grid-UI
Grid-UI software is installed on the workstations. After initializing a Grid proxy it is possible to submit jobs to the Grid and to retrieve their output and datasets.
– ganga – software developed at CERN that provides a Python-based environment for sending jobs to the different Grids and to the local job manager (PBS/LSF).
– dq2 – software for dataset manipulation on the Grid.
– pathena – an Athena plug-in that sends Athena jobs to the Grid. Unlike Ganga, it can send jobs only to the PanDA Grid (US).
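A sketch of a typical Grid session using these tools (the dataset and output names are placeholders, not real datasets):

  # Create a Grid proxy with the ATLAS VO
  voms-proxy-init -voms atlas

  # Look up and fetch a dataset with the dq2 tools (dataset name is a placeholder)
  dq2-ls mc08.SomeSample.AOD.*
  dq2-get mc08.SomeSample.AOD.v1

  # Send an Athena job to the PanDA Grid with pathena
  # (--inDS/--outDS values are placeholders; outDS should start with user.<nickname>)
  pathena MyJobOptions.py --inDS mc08.SomeSample.AOD.v1 --outDS user.myname.test1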