
Software framework and batch computing Jochen Markert.


1 Software framework and batch computing Jochen Markert

2 Shutdown of /lustre + LSF batch farm
The shutdown of the old batch farm is scheduled for the 14th of December! Please back up important stuff to tape.
Data still in use should be copied to /hera/hades/user. Clean up beforehand and do not copy at full blast.
Take into account that this procedure takes time! Starting on the 12th of December is not a good strategy.
Users performing analysis of old beam times should be aware that from this date on the old software packages are no longer available (/misc/hadessoftware is not visible from ikarus and prometheus running GridEngine).
If really strongly required, we would have to back-port the old hydra-8-21 to 64 bit and squeeze64. This requires a bit of effort.
11/22/12

3 Diskspace /lustre
Still 118 TB in use!

4 Diskspace /hera
420 TB in total:
hld : 137 TB
dst : 184 TB
sim : 70 TB
user: 31 TB

5 Resources
All new software (hydra2 + hgeant2 + more) is installed in /cvmfs/hades.gsi.de/install
All official parameter files are located in /cvmfs/hades.gsi.de/param
All packages are installed against the corresponding ROOT version (currently 5.34.01)
Each package has its own environment script, e.g. /cvmfs/hades.gsi.de/install/5.34.01/hydra2-2.8/defall.sh
The software installations are visible from all squeeze64 desktops, the pro.hpc.gsi.de cluster, and the new batch farm prometheus/hera
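A typical session would source the package's environment script before compiling or running. This is only a sketch: the variables the real defall.sh exports are not shown on the slide, so the stand-in script below is invented for illustration.

```shell
# Emulate what sourcing a defall.sh does: it exports the environment for one
# package/ROOT combination. The real file lives at
# /cvmfs/hades.gsi.de/install/5.34.01/hydra2-2.8/defall.sh on GSI hosts; the
# stand-in below is invented, since /cvmfs is only visible there.
cat > /tmp/defall.sh <<'EOF'
export ROOTSYS=/cvmfs/hades.gsi.de/install/5.34.01/root
export PATH=$ROOTSYS/bin:$PATH
EOF
. /tmp/defall.sh    # on GSI: . /cvmfs/hades.gsi.de/install/5.34.01/hydra2-2.8/defall.sh
echo "ROOTSYS=$ROOTSYS"
```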

6 New batch farm
Installation of the HADES software:
The software is built on lxbuild02.gsi.de and stored locally in /cvmfs/hades.gsi.de
After installation the software has to be published to enable users to access it
The publish command basically runs an rsync to the cvmfs server
From the server the software is distributed to all hosts and seen as /cvmfs/hades.gsi.de
[Diagram: lxbuild02.gsi.de (/cvmfs/hades.gsi.de) publishes to the cvmfs server, which distributes to the batch hosts lxb320.gsi.de to lxb324.gsi.de]
11/22/12
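The publish step can be pictured as a one-way sync of the built tree to the server. The slide says the real publish is essentially an rsync from lxbuild02.gsi.de to the cvmfs server; the local emulation below uses cp and invented paths, purely for illustration.

```shell
# Local emulation of "publish": copy the freshly built tree to a stand-in for
# the cvmfs server directory. On the real system this is an rsync to the
# cvmfs server, which then distributes /cvmfs/hades.gsi.de to all hosts.
mkdir -p /tmp/build/hades.gsi.de /tmp/cvmfs-server
echo "stand-in library" > /tmp/build/hades.gsi.de/libHydra.so
cp -a /tmp/build/hades.gsi.de /tmp/cvmfs-server/    # "publish"
ls /tmp/cvmfs-server/hades.gsi.de
```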

7 How to compute
Each user performing analysis should have their own account at GSI (please fill in the prepared form)
User data should be stored in /hera/hades/user/username (please use the Linux username)
Log in to the pro.hpc.gsi.de cluster:
This cluster has /lustre + /hera mounted (do your data transfers here)
It is supposed to be used for daily work and submission of batch jobs
This cluster is not directly reachable from outside GSI

8 How to run on the batch farm
Submit jobs from pro.hpc.gsi.de
Jobs running on the batch farm should only use software from /cvmfs/hades.gsi.de and data from /hera/hades. User home dirs, /misc/hadessoftware etc. are not supported and will crash your jobs
Start batch computing from the script examples in svn: svn checkout https://subversion.gsi.de/hades/hydra2/trunk/batch/GE GE
Scripts for PLUTO, UrQMD, HGeant, dsts and sim dsts are provided
Compile and run local tests on pro.hpc.gsi.de; send massively parallel jobs to the farm after testing
Standard users can run up to 400 jobs in parallel
Merge your histogram files using hadd or hadd.pl (parallel hadd by Jan)
Avoid tons of small files on /hera; they will slow down the performance (merge them or zip ROOT files using hzip)
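The GE example scripts consume file lists. One common pattern, sketched here with invented paths and dummy files, is to split a long input list into fixed-size chunks, one chunk per batch job:

```shell
# Build a list of input files and split it into chunks of 20 lines, one chunk
# per job. /tmp/dst and the file names are dummies standing in for DST files
# on /hera/hades.
mkdir -p /tmp/dst
for i in $(seq 1 45); do touch /tmp/dst/dst_$i.root; done
ls /tmp/dst/*.root > /tmp/all_files.list
split -l 20 -d /tmp/all_files.list /tmp/job_    # -> /tmp/job_00 job_01 job_02
wc -l /tmp/job_00 /tmp/job_01 /tmp/job_02
```

With 45 inputs this yields two full chunks of 20 and one remainder chunk of 5, so 3 jobs instead of 45, which is the "combine input files into one job" idea from the batch-script slide.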

9 Example loop

10 Example loop batch script
The user is supposed to work in his home dir
The current working dir is synchronized to the submission dir on /hera/hades/user...
Works with file lists → flexible
Allows combining input files into one job on the fly → better efficiency on the batch farm
Enhanced batch farm debugging output provided by log files
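The synchronization step can be sketched locally like this (all paths invented; on the real system the batch script copies the user's working dir from the home dir to the submission dir on /hera/hades/user):

```shell
# Emulate the sync of the working dir to the submission dir before a job runs.
mkdir -p /tmp/home/exampleLoop /tmp/hera/user/sub
echo "stand-in for the compiled loop binary" > /tmp/home/exampleLoop/analysisLoop
cp -a /tmp/home/exampleLoop /tmp/hera/user/sub/    # done by the batch script
ls /tmp/hera/user/sub/exampleLoop
```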

11 Documentation
http://www-hades.gsi.de/?q=computing
hydra2 online documentation (ROOT + doxygen)
Batch farm
Data storage
Monitoring
Software

12 Installation procedure
The installation procedure installs on 32 or 64 bit systems
One tar.gz file (150 MB of source code... needs some time to compile; located at /misc/hadessoftware/etch32)
From tarball: gsl, ROOT, Cernlib, Garfield, all admin scripts, all environment scripts, UrQMD, UrQMD converter
From SVN: Hydra2, HGeant2, hzip, hadd.pl, Pluto
The ORACLE client has to be installed separately (from tar file or full installer)
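The tarball step can be sketched as follows. The archive name, layout, and install script below are invented stand-ins; the real tarball lives under /misc/hadessoftware/etch32.

```shell
# Emulate unpacking the source tarball and running one of the admin scripts.
mkdir -p /tmp/src/hadessoftware
printf 'echo installed\n' > /tmp/src/hadessoftware/install.sh
tar -czf /tmp/hadessoftware.tar.gz -C /tmp/src hadessoftware
mkdir -p /tmp/unpack
tar -xzf /tmp/hadessoftware.tar.gz -C /tmp/unpack
sh /tmp/unpack/hadessoftware/install.sh    # prints "installed"
```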

13 New stuff to come
On analysis macro level:
Event mixing framework by Szymon
Multiple scattering minimization for leptons by Wolfgang (matching on the RICH mirror + global vertex use)
Add util functions for vertex + secondary vertex calculations + transformations to libParticle.so (stuff contained in Alex's macros, for example)
On DST level:
Second iteration of cluster finder + kickplane corrections
Enable full switch to Kalman filter
Additional data objects for close pair rejection (to be developed)
Backtracking MDC → RICH for ring finding

