CompHEP development for distributed calculation of particle processes at LHC

Presentation transcript:

CompHEP development for distributed calculation of particle processes at LHC
A.Kryukov, L.Shamardin
Skobeltsyn Institute of Nuclear Physics, Moscow State University
ACAT03, Tokyo, 12/4/03

Outline
- Motivation
- Modification of CompHEP for distributed computation
- Conclusions and ...
- ... future development

Motivation
A specific feature of hadron collider physics is the huge number of diagrams and the hundreds of subprocesses.
SM (Feynman gauge): p,p -> W+,W-, 2*jets
  p = {u,U,d,D,c,C,s,S,b,B,G}
  jet = {u,U,d,D,c,C,s,S,b,B,G}
  QCD background (diagrams with virtual A,Z,W+,W- excluded)
  Number of subprocesses is 775
  Total number of diagrams is

Motivation (cont.)
A possible solution is to simplify the model combinatorics [Boos, Ilyin].
u#-d# model (Feynman gauge): p,p -> W+,W-, 2*jets
  p = {u#,U#,d#,D#,b,B,G}
  jet = {u#,U#,d#,D#,b,B,G}
  QCD background (diagrams with virtual A,Z,W+,W- excluded)
  Number of subprocesses is 69
  Total number of diagrams is 845

u#,U# -> d#,D#,W+,W-

CompHEP calculation scheme
Diagram generator -> Symbolic calculation -> C-code generator -> Compilation -> Subprocess selection -> MC calculation -> (next subprocess)
- A single executable file is generated for all subprocesses.
- No utilities to make tuning of the MC integration easy for the different subprocesses.
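For illustration, a minimal sketch of this single-executable scheme is given below (the executable name, option, and driver loop are hypothetical, not CompHEP's actual interface): all subprocesses are handled one after another by the same binary.

    import subprocess

    # Minimal sketch of the standard scheme (hypothetical names): one executable
    # serves all subprocesses, which are processed strictly one after another.
    SINGLE_EXECUTABLE = "./n_comphep"     # assumed name of the single numerical binary
    N_SUBPROCESSES = 69                   # e.g. the u#-d# model of the previous slide

    for sub in range(1, N_SUBPROCESSES + 1):
        # Select the subprocess, tune its MC integration, integrate and generate
        # events -- all inside one process, so nothing runs in parallel.
        subprocess.run([SINGLE_EXECUTABLE, "--subprocess", str(sub)], check=True)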

Modified scheme of calculation
Diagram generator -> Symbolic calculation -> C-code generator, then for each subprocess in parallel:
  Compilation (subprocess 1) -> MC calculation
  Compilation (subprocess 2) -> MC calculation
  ...
  Compilation (subprocess N) -> MC calculation
- A separate executable file for each subprocess.
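For comparison, a minimal sketch of the modified scheme (all names hypothetical; in the real setup the per-subprocess jobs are submitted to PBS or the Grid rather than run locally):

    import subprocess
    from concurrent.futures import ProcessPoolExecutor
    from pathlib import Path

    # Minimal sketch of the modified scheme (hypothetical names): each subprocess
    # has its own source directory and its own executable, so the compilation
    # and MC jobs are independent of each other.
    SUB_DIRS = [Path("Results") / f"Sub{i}" for i in range(1, 70)]

    def build_and_run(sub_dir: Path) -> None:
        # Compile the C code of this subprocess into its own executable ...
        subprocess.run(["make", "-C", str(sub_dir)], check=True)
        # ... and run its MC integration / event generation independently.
        subprocess.run([str(sub_dir / "n_sub.exe")], check=True)

    if __name__ == "__main__":
        # Locally the independent tasks can simply run in a process pool; on a
        # cluster each call would instead be submitted as a PBS or Grid job.
        with ProcessPoolExecutor() as pool:
            list(pool.map(build_and_run, SUB_DIRS))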

File structure
Old file structure:
  Working Dir.
    Results
      F1.c
      F2.c
      ...
Modified file structure:
  Working Dir.
    Results
      Sub1: F1.c, F2.c, ...
      Sub2: F43.c, F44.c, ...
      ...

Results
Hardware: P4 (1.3 GHz), RAM = 256 MB
Process: p,p -> W+,W-, 2*jets (u#-d# model), 69 subprocesses, 845 diagrams

                            Standard            Distributed,    Distributed,
                            (per subprocess)    mean value      total
  Size of executable file   71M (1.02M)         2.9M            200.1M
  Compilation time          176m (153s)         133s            53m

Results (cont.)

                            Standard            Distributed     Standard/
                            (per subprocess)    (mean value)    Distributed
  Cross section calc.       22m (19s)           15s             1.3
    Memory (Virt./RAM)      46.5M/1.8M          7.0M/2.0M       6.7/0.9
  Maximum search            60m (52s)           50s             1.1
    Memory (Virt./RAM)      46.5M/1.8M          6.8M/1.8M       6.7/1.0
  Generation of 1k events   106m (92s)          60s             1.5
    Memory (Virt./RAM)      46.7M/2.1M          7.0M/2.0M       6.7/1.1

Specific features to support distributed calculation
- Copy the session data of the selected subprocess to all other subprocesses.
- Copy an individual parameter of the selected subprocess to all other subprocesses.
- Copy the session data from a specific subprocess to the selected subprocess.
- Copy a specific parameter from the selected subprocess.
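As an illustration only (the directory and file names here are assumptions, not CompHEP's actual layout), the first of these operations amounts to replicating the session data of one subprocess directory into all the others:

    import shutil
    from pathlib import Path

    # Hypothetical sketch: replicate the MC session settings of the selected
    # subprocess into every other subprocess directory under Results/.
    results = Path("Results")
    selected = results / "Sub1"           # the subprocess chosen by the user
    for sub_dir in sorted(results.glob("Sub*")):
        if sub_dir != selected:
            # "session.dat" is an assumed name for the stored session data.
            shutil.copy(selected / "session.dat", sub_dir / "session.dat")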

Specific features to support distributed calculation (cont.)
- Utility for job submission under PBS and GRID (LCG-1).
- Modified utility that collects the event streams generated by the separate subprocess executables into a single event sample.

Usage:
  d_comphep [-cC][P|G] path_to_results_dir
    -c  compilation only
    -C  collect data into a single sample
    -P  PBS (default)
    -G  GRID
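For example, an illustrative invocation (with a placeholder path) could first submit the per-subprocess jobs under PBS and then merge their output into a single sample:

    d_comphep -P /path/to/results
    d_comphep -C /path/to/results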

Conclusions and ...
- Hadron collider physics requires computer clusters and/or GRID solutions for calculations with more than 2 hard-interacting particles in the final state.
- It is necessary to develop special tools to support such calculations.
- Even a rather simple 2->4 process benefits from distributed computation.
- The question of user convenience is not discussed here.

Future development
In this work we implemented a rather straightforward approach to distributed computation in CompHEP. In the future we are going to consider a more sophisticated method that treats the phase space as a set of non-overlapping pieces. This approach permits dividing any task into as many independent (from the MC point of view) subtasks as necessary.
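As a toy sketch of this idea (not CompHEP code; a one-dimensional integral stands in for the multi-dimensional phase space), each non-overlapping piece is integrated by an independent MC subtask and the partial results are summed:

    import random

    # Toy illustration: split the integration region into non-overlapping pieces,
    # integrate each piece with an independent MC sample, and sum the results.
    def mc_integrate(f, a, b, n=100_000, seed=0):
        rng = random.Random(seed)
        total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
        return (b - a) * total / n

    def integrand(x):
        return x * x                      # stand-in for the squared matrix element

    pieces = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]   # non-overlapping
    # Each piece is an independent subtask (it could run on a different node);
    # the full result is simply the sum of the partial integrals.
    partials = [mc_integrate(integrand, a, b, seed=i) for i, (a, b) in enumerate(pieces)]
    print(sum(partials))                  # close to the exact value 1/3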