ACFA meeting, Beijing, Feb. 4-7, 2007. M. Werlen, D. Perret-Gallix (FJPPL, IN2P3-CNRS/KEK), Minami-Tateya Group

What is Grace?
Cross-section automatic computation system for:
– Tree level
– One-loop level
– SM and MSSM
Generator of "event generators"
– Bases/Spring framework
Used at LEP I and II; targeting LHC and ILC physics and astro-particle calculations
Developed by the Minami-Tateya group (based at KEK)

[GRACE system flow diagram © Minami-Tateya (KEK): user input and theory model file (.fin, .mdl, .rin) → diagram generator and amplitude generator → diagram description, drawn diagrams (PS file), and symbolic code (REDUCE, FORM, etc.) → generated FORTRAN code (TREE and LOOP) using the CHANEL/loop libraries and the kinematics library/kinematics code, plus Makefile etc. → BASES (MC integration; convergence information) and SPRING (event-generator manager) with a parameter file → cross sections, distributions, and events.]

Systematic radiative corrections to Higgs production at the ILC with GRACE. Tree level: e+e− → ννHH. © Minami-Tateya (KEK)

GRACE/SUSY-loop project: systematic calculation of radiative corrections to the two-body decays of charginos. Checked with non-linear gauge invariance. © Minami-Tateya (KEK), hep-ph/…
Parameter set A (~ SPA1a' point): tanβ = 10, μ = 400.

Radiative corrections to the three-body decays of charginos (GRACE/SUSY-loop). © Minami-Tateya (KEK)
Parameter set B (~ SPA1a' point): tanβ = 10, μ = 400.

Chargino cross-sections

Main issues → 3 projects:
– Complex procedures, many interfaces to external programs → "GraceFUL" project (Grace For U to Love)
– CPU/memory performance → GRID, clusters, supercomputers
– High arithmetic accuracy (beyond double precision) → HAPPY (High Arithmetic Precision Processing Yoke)

GraceFUL: front-end package to Grace for:
– Simple individual use:
No GRACE code knowledge required to build the integ/spring code
Covers all actions from process selection to parameter-dependent cross sections and event generation
Interface to beamstrahlung and parton shower/hadronization
Gathers all information on a single spreadsheet
– System-wide massive production:
Local or distributed, private or public computing systems: supercomputer, cluster, GRID
For all Grace packages: SM and MSSM; tree level and 1-loop; generic processes (e.g. e+e− → lepton lepton H, or pp → 4 jets)

Perl driven:
– Builds and manages configuration and parameter files: direct editing of XML files through an XML editor/viewer (Amaya)
– Builds and manages a directory tree for codes, configurations (sets of parameters) and results
– Builds the parameter-dependent GRACE code, i.e. the "integ/spring" binaries: no input file, parameters are hardwired (GRACE policy)
– Automates the interface to Pythia, PDFs, Circe (beamstrahlung): output Spring → Pythia records; prepares the supporting Pythia code (upinit and upevent)
– Runs and manages the jobs: local run on the user's PC, or submits jobs to a supercomputer, the GRID, …
– Analyzes/displays the output and keeps track of the results: summary through a spreadsheet; interface to Root (plots and ntuples for analysis)
– Stores all codes and results in a directory tree → a database
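As an illustration of the directory-tree bookkeeping described above, here is a minimal, hypothetical Perl sketch (the function name grc_tree, the base directory and the process/configuration labels are invented; this is not GraceFUL code) using only core modules:

#!/usr/bin/env perl
# Hypothetical sketch (not GraceFUL code): build a per-process directory tree
# holding the code, the configurations (parameter sets) and the results,
# using only core Perl modules.
use strict;
use warnings;
use File::Path qw(make_path);
use File::Spec;

sub grc_tree {
    my ($base, $process, @configs) = @_;
    my @leaves = ('code', 'results', map { File::Spec->catdir('config', $_) } @configs);
    make_path(File::Spec->catdir($base, $process, $_)) for @leaves;
}

# e.g. one process with a default configuration and one scan configuration
grc_tree('grace_work', 'eemm_tree', 'default', 'W500_scan');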

Perl-driven code modification: templates and XML.
Templates such as kinit.tmpl contain placeholders, e.g. "w = [% w %]"; there is one template per parameter-dependent source file (kinit.f, setmas.f, gfinit.f, …).
Data come either from the defaults (e.g. 250.D0) or from a user-selected value (e.g. 500.D0) stored in the configuration file.
Merging the data into the templates (instantiation) produces the new Fortran files, e.g. "w = 250.D0" in kinit.f, from which integ, spring and the Makefile are built.
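The "[% w %]" placeholder is the syntax of the Perl Template Toolkit; assuming that module is what drives the instantiation (the transcript does not name it), the merge step could be sketched as follows, with the file names taken from the slide and the value purely illustrative:

#!/usr/bin/env perl
# Sketch of the template instantiation step, assuming the CPAN Template Toolkit
# (suggested by the "[% w %]" syntax above); not the actual GraceFUL code.
# Requires a kinit.tmpl file containing a line such as:  w = [% w %]
use strict;
use warnings;
use Template;

my %params = ( w => '500.D0' );   # user-selected value from the configuration file

my $tt = Template->new();
$tt->process('kinit.tmpl', \%params, 'kinit.f')
    or die $tt->error();          # writes kinit.f with "w = 500.D0" substituted in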

Directory Tree for Code

Directory Tree for Running sets

Parameters in XML format:
– Single value: x, where x can be a number or a Fortran expression using Grace variables (experts only)
– Range: xmin : xmax : step ; order, where order = 0…4 (0 = inner loop)
– List: x1, x2, …, xn ; order
Currently at most 5 (range + list) parameters.
Examples:
250.d0 (single value)
250.:400.:10.;0 (from 250. to 400. in steps of 10., order 0)
…,300.,1000.,2000.;1 (list of values)
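To make the three forms concrete, here is a stand-alone Perl sketch written for this note (the real GraceFUL parser is not shown in the transcript) that expands a parameter specification into its list of values and loop order:

#!/usr/bin/env perl
# Stand-alone illustration (not the GraceFUL parser): expand the three
# parameter forms above: single value, "min:max:step;order", "v1,v2,...;order".
use strict;
use warnings;

sub expand_param {
    my ($spec) = @_;
    if ($spec =~ /^(.+):(.+):(.+);(\d)$/) {                    # range form
        my ($min, $max, $step, $order) = ($1, $2, $3, $4);
        my @values;
        for (my $v = $min; $v <= $max + 1e-9; $v += $step) { push @values, $v }
        return { order => $order, values => \@values };
    }
    if ($spec =~ /^(.+);(\d)$/) {                               # list form
        return { order => $2, values => [ split /\s*,\s*/, $1 ] };
    }
    return { order => undef, values => [$spec] };               # single value
}

my $scan = expand_param('250.:400.:10.;0');
print "order $scan->{order}: @{ $scan->{values} }\n";           # 250 260 ... 400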

e+e− → jet, jet

Sample XML default parameter file built by GraceFUL. [The XML tags did not survive this transcript; the remaining content shows numeric values (e.g. 2.49…D0 for AGW/AGZ, 0.12d…, 0.0D0), flags (lextrn, lepexa), and Fortran expressions for constants such as acos(-1.0d0) for pi, pi*pi, pi/180.0d0, and 1.0d0/128.07d0.]
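Since the XML markup itself did not survive, here is only a hypothetical sketch of how such a parameter file might be read from Perl, using the CPAN module XML::Simple; the file name and the element names (param, name, value) are invented, as the real schema is not shown:

#!/usr/bin/env perl
# Hypothetical sketch: read a GraceFUL-style XML parameter file with XML::Simple.
# Element names ("param", "name", "value") are assumptions, not the real schema.
use strict;
use warnings;
use XML::Simple qw(XMLin);

my $cfg = XMLin('default.xml', ForceArray => ['param'], KeyAttr => []);
for my $p (@{ $cfg->{param} }) {
    printf "%-10s = %s\n", $p->{name}, $p->{value};    # e.g. w          = 250.D0
}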

W: 500 → 1000 GeV in steps of 100 GeV, and cos(θ): −1 → −0.1 in steps of 0.1

e+e− → jet, jet

In progress:
– Interface to the "Les Houches Accords"
– Extension to other packages: Grcft, a new fast EW tree-level Grace system; Grace 1-loop
– Objective-Perl? (already more than 5000 Perl lines)

Volunteer Computing for Particle Physics: BOINC (Berkeley Open Infrastructure for Network Computing), distributed public computing. Follow-up of …

1 credit = 1/100 CPU PC hour. 222 M CPU hours; 478,000 CPU hours/day ≈ … CPUs full time. Jan. 2007, … projects.

Last 2 years (to Jan. 2007): 222 M hours, 914 K users. Large CPU power: 20,000 CPUs and growing. BUT: low reliability (redundant computations needed); not for time-critical applications. Complementary to the GRID.

Exploratory stage. Target and goal:
– Public and/or organization (KEK, IHEP, … companies …) deployment
– Cross sections first, then event generation
Two applications:
– Small executables, i.e. 2→2,3,4 (… diagrams), … MB: one set of processes with many different parameters, i.e. multi-dimensional parameter-space exploration (MSSM)
– Huge executables, i.e. 2→5…8 at 1-loop (5,000–…,000 diagrams), … GB: split the binaries into ~100 small subsets of … MB each; each subset runs in parallel on client PCs; the server runs the integration algorithm and at each iteration generates a new set of phase-space points
Hybrid system: BOINC + cluster/GRID, load balancing with a private cluster or the GRID (see the toy sketch below).
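The toy sketch below (plain Perl, everything simulated locally; none of it is GRACE or BOINC code) only illustrates the control flow of the hybrid scheme: the server drives the integration iterations, while the evaluation of the integrand subsets would be farmed out to volunteer PCs:

#!/usr/bin/env perl
# Toy illustration of the hybrid scheme: a "server" loop that, at each
# iteration, generates phase-space points and sums the contributions of the
# integrand subsets (which on BOINC would run as work units on client PCs).
use strict;
use warnings;

my $n_subsets = 100;     # the large binary split into 100 client-sized pieces
my $n_points  = 1_000;   # phase-space points generated per iteration

# Placeholder integrand: contribution of one subset at point $x.
sub subset_contribution {
    my ($subset, $x) = @_;
    return sin($subset * $x) ** 2 / $n_subsets;
}

for my $iteration (1 .. 5) {
    my @points = map { rand() } 1 .. $n_points;    # server-side point generation
    my $sum = 0;
    for my $subset (1 .. $n_subsets) {             # one "work unit" per subset
        $sum += subset_contribution($subset, $_) for @points;
    }
    printf "iteration %d: estimate = %.6f\n", $iteration, $sum / $n_points;
    # A real server would refine the BASES integration grid here before iterating.
}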

International collaboration: France, KEK, CERN, …
– Server operational at KEK (KEK intranet); no HEP application running yet
Important outreach for promoting the LC and particle physics.

BDP (Beyond Double Precision): quadruple/octuple precision is needed for correct results and faster algorithms, but software implementations are too slow; new hardware/software development is needed.

Simple example by J. Fujimoto (KEK), after C. Hu, S. Xu and X. Yang:
f = 333.75 b^6 + a^2 (11 a^2 b^2 − b^6 − 121 b^4 − 2) + 5.5 b^8 + a/(2b), with a = 77617, b = 33096.
Double precision: f = …
Quadruple precision: f = …
Analytical result: f = −54767/66192 ≈ −0.827396…
New octuple-precision library, H3Lib: f = …
Lost bits = 121.
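The effect is easy to reproduce; the following plain-Perl sketch (not GRACE code) evaluates the same expression with native double precision and, via the core Math::BigFloat module, with about 50 significant digits:

#!/usr/bin/env perl
# Evaluate the expression above in native double precision and at ~50 digits
# with the core Math::BigFloat module (illustration only, not GRACE code).
use strict;
use warnings;
use Math::BigFloat;

Math::BigFloat->accuracy(50);    # ~50 significant digits for BigFloat operations

sub f {
    my ($a, $b) = @_;
    return 333.75 * $b**6
         + $a**2 * (11 * $a**2 * $b**2 - $b**6 - 121 * $b**4 - 2)
         + 5.5 * $b**8
         + $a / (2 * $b);
}

printf "double precision : %.15g\n", f(77617, 33096);                 # ruined by cancellation
print  "50-digit result  : ",
       f(map { Math::BigFloat->new($_) } 77617, 33096), "\n";         # ~ -0.8273960599...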

Actual application, by J. Fujimoto (KEK): quadruple precision is required at some phase-space points due to the Gram determinant appearing in the reduction algorithm.

[Table (J. Fujimoto): real parts of the reduction integrals ReJ[1], ReJ[x], ReJ[y], ReJ[w], ReJ[w**2], ReJ[w*x], ReJ[x*y], …, ReJ[w**3] evaluated in double precision and in quadruple precision; only the labels and the annotation "Blow up!!" survive in this transcript, not the numerical values.]

[Plot: minimization algorithms, … (Gambolatin et al.), comparing single, double and quadruple precision.]

– CPU bandwidth (GHz)
– Memory size (GB)
– Interconnection bandwidth (GHz)
– Floating-point precision (4–32 bytes)
– Instruction size (64 bits)

High-precision libraries:
– quadruple/octuple precision (Hitachi)
– double-double, quad-double (ARPREC)
– multi-precision libraries (1000 digits and more)
– interval arithmetic
– exact arithmetic (XR, iRRAM)
Linpack double vs. quadruple: about 30 times slower.
HAPPY (High-Precision Arithmetic Parallel Processor Yoke, "pulling heavy computations!"), based on the CELL processor (IBM, Sony, Toshiba); complex programming; investigating other possibilities.
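For reference, the double-double / quad-double approach (as in ARPREC) is built on error-free transformations of ordinary doubles; a minimal sketch of the basic TwoSum building block, in plain Perl (illustration only, not library code):

#!/usr/bin/env perl
# TwoSum (Knuth): error-free addition of two doubles, s + err == a + b exactly.
# Pairs like (s, err) are the building blocks of double-double / quad-double types.
use strict;
use warnings;

sub two_sum {
    my ($a, $b) = @_;
    my $s   = $a + $b;
    my $bb  = $s - $a;                         # part of b actually absorbed into s
    my $err = ($a - ($s - $bb)) + ($b - $bb);  # rounding error, recovered exactly
    return ($s, $err);
}

my ($s, $err) = two_sum(1.0, 1e-17);           # 1e-17 is lost in a plain double sum
printf "s = %g, err = %g\n", $s, $err;         # err keeps the lost low-order part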

Grace Simulation Summary:
Grace is producing tools for tree-level and one-loop SM and MSSM cross-section calculations and event generation (e.g. hep-ph/…).
Three projects to overcome the computational and management difficulties of complex process calculations:
– GraceFUL: Grace user interface
– World-wide public distributed computing for Feynman diagram calculations (BOINC)
– HAPPY: high arithmetic precision, beyond double precision
New collaborators welcome; perfect topics for international cooperation.