CompHEP development for distributed calculation of particle processes at LHC
A. Kryukov (kryukov@theory.sinp.msu.ru), L. Shamardin (shamardin@theory.sinp.msu.ru)
Skobeltsyn Institute of Nuclear Physics, Moscow State University
ACAT03, Tokyo, 12/4/03
Outline
- Motivation
- Modification of CompHEP for distributed computation
- Conclusions and future development
Motivation
A specific feature of hadron collider physics is the huge number of diagrams and the hundreds of subprocesses.
SM (Feynman gauge): p,p -> W+,W-, 2*jets
  p   = {u,U,d,D,c,C,s,S,b,B,G}
  jet = {u,U,d,D,c,C,s,S,b,B,G}
QCD background (diagrams with virtual A, Z, W+, W- excluded):
  number of subprocesses: 775
  total number of diagrams: 16461
Motivation (cont.)
A possible solution is to simplify the combinatorics of the model [Boos, Ilyin].
u#-d# model (Feynman gauge): p,p -> W+,W-, 2*jets
  p   = {u#,U#,d#,D#,b,B,G}
  jet = {u#,U#,d#,D#,b,B,G}
QCD background (diagrams with virtual A, Z, W+, W- excluded):
  number of subprocesses: 69
  total number of diagrams: 845
u#,U# -> d#,D#,W+,W-
[Slide shows the Feynman diagrams for this subprocess.]
CompHEP calculation scheme
Diagram generator -> Symbolic calculation -> C-code generator -> Compilation -> Subprocess selection -> MC calculation -> next subprocess
- A single executable file is generated for all subprocesses.
- There are no utilities to ease the tuning of the MC integration for the different subprocesses.
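As a rough illustration of this bottleneck, here is a minimal Python sketch of the standard chain; the function names (compile_single_executable, run_mc) and the file and subprocess labels are hypothetical stand-ins, not CompHEP's actual interface (the real steps are interactive):

    # Hypothetical stand-ins for the CompHEP stages.
    def compile_single_executable(sources):
        # one binary is built from all generated C sources
        print(f"compiling {len(sources)} source files into one executable")

    def run_mc(sub):
        # subprocess selection plus hand-tuned MC integration
        print(f"tuning and integrating {sub}")

    def standard_scheme(sources, subprocesses):
        compile_single_executable(sources)
        for sub in subprocesses:   # strictly sequential: the next subprocess
            run_mc(sub)            # starts only after the previous one finishes

    standard_scheme(["F1.c", "F2.c"], ["Sub1", "Sub2"])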
Modified scheme of calculation
Diagram generator -> Symbolic calculation -> C-code generator ->
  Compilation (subprocess 1) -> MC calculation
  Compilation (subprocess 2) -> MC calculation
  ...
  Compilation (subprocess N) -> MC calculation
- A separate executable file is built for each subprocess.
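A matching sketch of the modified scheme, under the same hypothetical stand-ins: every subprocess gets its own compile-and-run task, so the MC calculations can proceed independently of each other. Python's multiprocessing pool here is only a stand-in for submitting separate cluster jobs:

    from multiprocessing import Pool

    def build_and_run(sub):
        # compile this subprocess's own executable, then run its MC calculation;
        # print() stands in for the real compile and integrate steps
        print(f"compiling and integrating {sub}")
        return sub

    def modified_scheme(subprocesses):
        # the pool is a stand-in for PBS/GRID submission: the point is that
        # the per-subprocess tasks are independent of each other
        with Pool() as pool:
            pool.map(build_and_run, subprocesses)

    if __name__ == "__main__":
        modified_scheme([f"Sub{i}" for i in range(1, 5)])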
File structure
Old file structure:
  Working Dir.
    Results
      F1.c
      F2.c
      ...
Modified file structure:
  Working Dir.
    Results
      Sub1
        F1.c
        F2.c
        ...
      Sub2
        F43.c
        F44.c
        ...
Results
Hardware: P4 (1.3 GHz), RAM = 256M
Process: p,p -> W+,W-, 2*jets (u#-d# model), 69 subprocesses, 845 diagrams

                             Standard          Distributed     Distributed
                             (per subproc.)    (mean value)    (total)
  Size of executable file    71M (1.02M)       2.9M            200.1M
  Compilation time           176m (153s)       133s            53m
Results (cont.)

                             Standard          Distributed     Stnd/Dist
                             (per subproc.)    (mean value)
  Cross section calc.        22m (19s)         15s             1.3
    Memory (Virt./RAM)       46.5M/1.8M        7.0M/2.0M       6.7/0.9
  Maximum search             60m (52s)         50s             1.1
    Memory (Virt./RAM)       46.5M/1.8M        6.8M/1.8M       6.7/1.0
  Generation of 1k events    106m (92s)        60s             1.5
    Memory (Virt./RAM)       46.7M/2.1M        7.0M/2.0M       6.7/1.1
Specific features to support distributed calculation
- Copy the session data of the selected subprocess to all other subprocesses (a sketch follows below).
- Copy an individual parameter of the selected subprocess to all other subprocesses.
- Copy session data from a specific subprocess to the selected subprocess.
- Copy a specific parameter from the selected subprocess.
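A minimal sketch of the first operation, assuming the per-subprocess layout of the "File structure" slide (Results/Sub1, Results/Sub2, ...) and a hypothetical session file name session.dat; the real CompHEP file names may differ:

    import shutil
    from pathlib import Path

    def propagate_session(results_dir, selected, filename="session.dat"):
        # copy the session file of the tuned subprocess into every other SubN dir
        results = Path(results_dir)
        source = results / selected / filename
        for subdir in sorted(results.glob("Sub*")):
            if subdir.is_dir() and subdir.name != selected:
                shutil.copy2(source, subdir / filename)

    # Example: tune the MC integration in Sub1, then share its session settings.
    # propagate_session("Results", "Sub1")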
Specific features to support distributed calculation (cont.)
- Utility for job submission under PBS and GRID (LCG-1).
- Modified utility that collects the event streams generated by the separate subprocess executables into a single event sample.

  d_comphep [-cC][P|G] path_to_results_dir
    -c  compilation only
    -C  collect data into a single sample
    -P  PBS (default)
    -G  GRID
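For example, with an illustrative results path (the flags are those listed above; the exact way flags combine follows the synopsis):

    d_comphep -c ./Results    (build the per-subprocess executables only)
    d_comphep -C ./Results    (collect the per-subprocess event streams into one sample)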
Conclusions and...
- Hadron collider physics requires computer-cluster and/or GRID solutions for calculations with more than 2 hard-interacting particles in the final state.
- It is necessary to develop special tools to support such calculations.
- Even a rather simple 2->4 process profits from distributed computation.
- We do not discuss here the question of convenience for the user.
Future Development
In this work we realized a rather straightforward approach to distributed computation in CompHEP. In the future we are going to consider a more sophisticated method that treats the phase space as a set of non-overlapping pieces. This approach permits dividing any task into as many independent (from the MC point of view) subtasks as necessary; the sketch below illustrates the idea.
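A minimal sketch of the idea (an illustration only, not the planned CompHEP implementation): split a one-dimensional integration region into disjoint pieces, estimate each piece with an independent MC sample, and sum the estimates:

    import random

    def mc_piece(f, a, b, n, seed):
        # plain MC estimate of the integral of f over [a, b) with its own RNG stream
        rng = random.Random(seed)
        total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
        return (b - a) * total / n

    def mc_distributed(f, a, b, pieces, n_per_piece):
        # split [a, b) into non-overlapping pieces; each piece is an independent
        # MC subtask that could run on a separate cluster or GRID node
        width = (b - a) / pieces
        return sum(mc_piece(f, a + i * width, a + (i + 1) * width, n_per_piece, seed=i)
                   for i in range(pieces))

    # Example: integrate x**2 over [0, 1) in 8 independent pieces (exact value: 1/3).
    print(mc_distributed(lambda x: x * x, 0.0, 1.0, pieces=8, n_per_piece=10_000))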