CERN Group Tutorials: CAT Tier-3 Tutorial
M. Schott (CERN), October 2009
CAT Tier-3 Computing Resources

Interactive nodes:
- 5 machines with 8 CPU cores and 16 GB total memory, with access to AFS and Castor, for interactive analysis work.
- These are accessed via LSF using the atlasinter queue.

Batch queues:
- Two dedicated batch queues, atlascatshort (1 hour) and atlascatlong (10 hours), with a certain number of dedicated LSF batch job slots.

Castor disk pool:
- A 40 TB disk pool (atlt3) for storing DPDs, ntuples etc. used in CAT analysis. No tape backup.

AFS:
- Scratch disk space allocated to CAT team members.
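As a quick first check of the AFS scratch allocation mentioned above, the standard OpenAFS quota command can be used (a minimal sketch; the ~/scratch0 path is an assumption, substitute your own scratch directory):

fs listquota ~/scratch0   # show quota and usage of the AFS volume behind the scratch area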
Data Organization

CAT Tier-3 resources:
- Only for CERN people.
- Access via rfio.
- Two locations:
  - Group space (5 TB per group): /castor/cern.ch/grid/atlas/atlt3/ with one subdirectory per group: compperf, higgs, simulation, sm, susy, top.
  - Scratch space: /castor/cern.ch/grid/atlas/atlt3/scratch/<username>, created with:
    nsmkdir /castor/cern.ch/grid/atlas/atlt3/scratch/<username>
    nschmod 750 /castor/cern.ch/grid/atlas/atlt3/scratch/<username>
- Setting the environment (note: slides 4 and 5 show these values in use; this pool is atlt3):
  export RFIO_USE_CASTOR_V2=YES
  export STAGE_HOST=castoratlas
  export STAGE_SVCCLASS=atlt3

CERN user disk:
- For all users.
- Access via rfio and xrootd.
- Setting the environment (this defines the Castor disk pool; the directory itself is only a "fake" namespace entry):
  export RFIO_USE_CASTOR_V2=YES
  export STAGE_HOST=castoratlast3
  export STAGE_SVCCLASS=atlascernuserdisk
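As a concrete example, a new CAT user could create and verify a personal scratch directory as follows (a sketch; the username jdoe is purely illustrative):

nsmkdir /castor/cern.ch/grid/atlas/atlt3/scratch/jdoe
nschmod 750 /castor/cern.ch/grid/atlas/atlt3/scratch/jdoe
nsls -l /castor/cern.ch/grid/atlas/atlt3/scratch/   # the new directory should appear with mode 750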
Tutorial – Putting and Retrieving Files

Log in to lxplus, set up Athena, then check out and build the example package:

cmt co PhysicsAnalysis/AnalysisCommon/PerformanceAnalysis
cd PhysicsAnalysis/AnalysisCommon/PerformanceAnalysis/PerformanceAnalysis-r198000/cmt
cmt config
source setup.sh
gmake
cd /tmp/

Now we have just our analysis algorithm. Next we set the variables for accessing the usual CERN user disk:

export RFIO_USE_CASTOR_V2=YES
export STAGE_HOST=castoratlast3
export STAGE_SVCCLASS=atlascernuserdisk

And copy a file to the local directory and then up to our Castor user disk (where <u> is the first letter of your username):

rfcp /castor/cern.ch/user/d/ddmusr03/STEP09/mc08.106054.PythiaZee_Mll20to60_1Lepton.merge.AOD.e379_s462_r635_t53_tid059207/AOD.059207._00001.pool.root.1 ./
rfcp AOD.059207._00001.pool.root.1 /castor/cern.ch/user/<u>/<username>/Tutorial
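To check that the upload worked, we can list the target directory in the Castor name server and query the stager for the file status (a sketch assuming the hypothetical username jdoe; stager_qry is part of the Castor client tools):

nsls -l /castor/cern.ch/user/j/jdoe/Tutorial
stager_qry -M /castor/cern.ch/user/j/jdoe/Tutorial/AOD.059207._00001.pool.root.1   # shows whether the file is staged on disk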
Tutorial – Putting and Retrieving Files

Now we copy the same file to the CAT scratch disk. First we have to set the environment variables:

export RFIO_USE_CASTOR_V2=YES
export STAGE_HOST=castoratlas
export STAGE_SVCCLASS=atlt3

The first two lines are actually no longer needed at this point, but we keep them for completeness.

rfcp AOD.059207._00001.pool.root.1 /castor/cern.ch/grid/atlas/atlt3/scratch/<username>/Tutorial/

Checking the content can be done with the usual rfdir command, e.g.

rfdir /castor/cern.ch/grid/atlas/atlt3/scratch/<username>/Tutorial/
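If several files need to go to the scratch area, the same rfcp call can be wrapped in a small shell loop (a sketch; the file pattern and the username jdoe are assumptions):

for f in AOD.059207._*.pool.root.*; do
  rfcp "$f" /castor/cern.ch/grid/atlas/atlt3/scratch/jdoe/Tutorial/
done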
Submitting Jobs (1)

Our queues are atlascatshort and atlascatlong and can be seen via:

bqueues

To access the interactive machines, just type:

bsub -Is -q atlasinter zsh

Exit the interactive session with exit.

Now we want to submit a job to our Tier-3 queues. We go to our example code, e.g.

cd PhysicsAnalysis/AnalysisCommon/PerformanceAnalysis/PerformanceAnalysis-r198000/cat

Here we edit the file runAthena.sh, which should automatically set up the Athena environment and then start an Athena job. Remember that when sending the job to a queue, the job will be started in a scratch directory which is deleted after the job finishes.
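The slides do not show the contents of runAthena.sh, so here is a minimal sketch of what such a wrapper could look like. The release setup line and the output destination are assumptions to adapt to your own environment; only the athena.py call and the need to rescue results from the transient scratch directory follow from the slide above.

# Hypothetical sketch of runAthena.sh; it is sourced by bsub with the
# jobOptions path as its first argument. Adapt the paths to your own setup.
source ~/cmthome/setup.sh -tag=15.5.1     # Athena release setup (site/user specific, an assumption)
athena.py "$1"                            # run the jobOptions file passed as the first argument
# The batch scratch directory is deleted after the job, so copy anything
# you want to keep somewhere persistent:
cp PerformanceResults.log ~/scratch0/     # destination is an assumption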
Submitting Jobs (2)

Having changed runAthena.sh, we can submit the job via:

bsub -q atlascatlong source runAthena.sh ~/scratch0/Athena/15.5.1/PhysicsAnalysis/AnalysisCommon/PerformanceAnalysis/PerformanceAnalysis-r198000/cat/runPerformanceAnalysis.py

To see the status of our job, we simply type:

bjobs

Now we can play around with different access modes, i.e. accessing a file via rfio or xrootd. For that we simply change the prefix of the file in the InputCollection of runPerformanceAnalysis.py:
- xrootd is accessed via root://castoratlast3/
- rfio is accessed via rfio://

Keep in mind that you might have to initialize the environment variables in the batch job! The performance difference between rfio and xrootd can be checked by looking at the PerformanceResults.log file, which is produced when the job finishes.
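For example, the prefix swap in the InputCollection can be done with sed (a sketch; it assumes the input files are listed with full rfio:// URLs in runPerformanceAnalysis.py):

sed -i 's|rfio://|root://castoratlast3/|g' runPerformanceAnalysis.py   # switch rfio -> xrootd
sed -i 's|root://castoratlast3/|rfio://|g' runPerformanceAnalysis.py   # switch back to rfio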
Submitting Jobs (3)

You should observe that xrootd is much faster than rfio, but we cannot use xrootd on our Tier-3 scratch disks... which brings us to Max Baak's famous FileStager.