Group Meeting: Submit Jobs to the Grid
Grass
Outline
  Why we need Grid CPUs
  Example: how to run TPCana
  Before running TPCana
Why we need Grid CPUs
IPAS Condor
LHC Taiwan Tier-1
Example: how to run TPCana
Example: setting up the grid environment

STEP 1: copy the proxy file
  > chmod 600 x509up_grass
  ~]$ ls -l /usrX/grass/.globus/x509up_grass
  -rw grass spring Jul 2 16:26 /usrX/grass/.globus/x509up_grass

STEP 2: set up the grid environment
  ## grid environment
  > source /scratch/grid/glite/etc/profile.d/grid_env.csh
  > setenv X509_USER_PROXY /usrX/grass/.globus/x509up_grass
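If the Globus client tools come with this grid environment, the remaining proxy lifetime can be checked before submitting; this is an optional sanity check, assuming grid-proxy-info is installed:

  > grid-proxy-info -file /usrX/grass/.globus/x509up_grass -timeleft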
The data we need
The data we need is too big.
Generate the drift table online
Drift table production time
Generating the drift table 5 times gives the same drift table every time: the same size, no difference.
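A simple way to confirm that the repeated productions really give an identical table is to compare checksums; the file names below are placeholders for the five outputs:

  > md5sum drift_table.try1 drift_table.try2 drift_table.try3 drift_table.try4 drift_table.try5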
Upload data
The size of the other data is small.
Raw data size
Manual of "globus_submit"

usage: globus_submit --tarFile=file --outputFile=file --start=num --end=num ...
  --tarFile=file       path of the tar file to be submitted, e.g. ./submitme.tgz
  --outputFile=file    path where the output file is stored, e.g. ./cronus.$.tgz
  --cmd=command        command to be executed, normally a shell script, e.g. ./cronus.sh $
  --start=num          beginning segment number, e.g. 1
  --end=num            ending segment number
  --segments=num       segment number list, e.g. 2-5,7,10
  --VO=vo              VO name you belong to, e.g. twgrid
  --verbose            verbose output
  --cronus             submit to Cronus; necessary on ipas003
  --test               do not submit, show the JDL on standard output
  --                   address for summary output
  --globusscheduler    gatekeeper you want to submit to, e.g. lcg00125.grid.sinica.edu.tw:2119/jobmanager-lcgpbs
  --globusrsl          additional options for the gatekeeper
The special character $ is replaced with the segment number in --outputFile and --cmd.
Example:

globus_submit --tarFile=../submitme.tgz --outputFile='tpcana.$.tgz' \
    --cmd='./runTPC.sh /hepdata4/jychen/TPC/data/2004Oct17 $' \
    --segments= \
    --globusscheduler=lcg00125.grid.sinica.edu.tw:2119/jobmanager-lcgpbs \
    --verbose --globusrsl='(queue=twgrid)'
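Based on the option list above, the same submission can first be dry-run with --test, which only prints the generated JDL instead of submitting; a single segment is assumed here via --start/--end:

globus_submit --test --tarFile=../submitme.tgz --outputFile='tpcana.$.tgz' \
    --cmd='./runTPC.sh /hepdata4/jychen/TPC/data/2004Oct17 $' \
    --start=1 --end=1 \
    --globusscheduler=lcg00125.grid.sinica.edu.tw:2119/jobmanager-lcgpbs \
    --globusrsl='(queue=twgrid)'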
Before running TPCana
Grid copy rate: uploading a 3.1 GB file takes 85 seconds (roughly 37 MB/s); most of the files are smaller than 3 GB.
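A small check like the one below can list which files reach 3 GB before the upload starts; this is only a sketch, and the data path is an assumption borrowed from the example command earlier:

#!/bin/sh
# Report files of 3 GB or more under the raw-data directory (path assumed).
LIMIT_MB=3072                               # 3 GB expressed in MB
for f in /hepdata4/jychen/TPC/data/2004Oct17/*; do
    [ -f "$f" ] || continue
    size_mb=$(du -m "$f" | cut -f1)         # file size in MB
    if [ "$size_mb" -ge "$LIMIT_MB" ]; then
        echo "3 GB or larger: $f (${size_mb} MB)"
    fi
done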
Submit 27 runs at 23:00
RAID: /hepdata4
File server: asscratch2
What we need to do later
1) Test the best number of files to upload at the same time from our file server (Tsanlung's suggestion is 16 runs at a time).
2) Write a Perl script that splits the runs and submits the jobs; a shell sketch of the idea follows below.
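The real script is planned in Perl; the shell sketch below only illustrates the batching idea, submitting the 27 runs 16 at a time (the suggested batch size), with the globus_submit arguments taken as assumptions from the example earlier:

#!/bin/sh
# Sketch: submit runs in batches so the file server only serves a limited
# number of uploads at the same time (batch size 16 per the suggestion above).
BATCH=16
count=0
for run in $(seq 1 27); do
    globus_submit --tarFile=../submitme.tgz \
        --outputFile='tpcana.$.tgz' \
        --cmd='./runTPC.sh /hepdata4/jychen/TPC/data/2004Oct17 $' \
        --start="$run" --end="$run" \
        --globusscheduler=lcg00125.grid.sinica.edu.tw:2119/jobmanager-lcgpbs \
        --globusrsl='(queue=twgrid)' &
    count=$((count + 1))
    if [ $((count % BATCH)) -eq 0 ]; then
        wait        # let the current batch finish before starting the next
    fi
done
wait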
backup
Grid copy rate: uploading a 1.2 GB file takes 40 seconds (roughly 30 MB/s); most of the files are smaller than 3 GB.