Muon Collider Physics Workshop, 10-12 November 2009, Fermilab. P. Mato / CERN.

 About myself
 Introduction
◦ Software components
◦ Programming languages
◦ Software frameworks
 Software interoperability levels
◦ Level 1 to Level 3
 Leveraging the LHC common software
 Conclusions

 Former LHCb core software coordinator
◦ Architect of the GAUDI framework
 Applications Area manager of the Worldwide LHC Computing Grid (WLCG)
◦ Develops and maintains the part of the physics applications software, and the associated infrastructure, that is common among the LHC experiments
 Leader of the Software Development for Experiments (SFT) group in the Physics (PH) Department at CERN
◦ Projects: Geant4, ROOT, GENSER, SPI, CernVM, etc.
◦ Supporting the LHC experiments (plus LC detector studies)

 The emphasis in the requirements for the software system of an [almost] running experiment is different from that of conceptual detector design studies
◦ Flexibility vs. performance, ideal detector descriptions vs. realistic ones, etc.
◦ Naturally the attention shifts as the project evolves, and the software system should evolve with it
 The emphasis on interoperability, reuse and enabling independent developments is common to both
◦ Level 1 - Application interfaces/formats
◦ Level 2 - Software integrating elements
◦ Level 3 - Common frameworks

[Slide diagram: a layered software stack with non-HEP-specific packages at the base, then core libraries, then specialized domains (simulation, data management, distributed analysis), then the experiment framework (event model, detector description, calibration), and the applications on top.]
 Every experiment has a framework for basic services and various specialized frameworks: event model, detector description, visualization, persistency, interactivity, simulation, calibration
 Many non-HEP libraries are widely used (e.g. Xerces, GSL, Boost)
 Applications are built on top of the frameworks and implement the required algorithms (e.g. simulation, reconstruction, analysis, trigger)
 Core libraries and services provide widely used basic functionality (e.g. ROOT, HepMC, …)
 Specialized domains are common among the experiments (e.g. Geant4, COOL, generators)

 Foundation Libraries
◦ Basic types
◦ Utility libraries
◦ System isolation libraries
 Mathematical Libraries
◦ Special functions
◦ Minimization, random numbers
 Data Organization
◦ Event data
◦ Event metadata (event collections)
◦ Detector description
◦ Detector conditions data
 Data Management Tools
◦ Object persistency
◦ Data distribution and replication
 Simulation Toolkits
◦ Event generators
◦ Detector simulation
 Statistical Analysis Tools
◦ Histograms, N-tuples
◦ Fitting
 Interactivity and User Interfaces
◦ GUI
◦ Scripting
◦ Interactive analysis
 Data Visualization and Graphics
◦ Event and geometry displays
 Distributed Applications
◦ Parallel processing
◦ Grid computing
One or more implementations of each component exist for LHC.

 C++ is used almost exclusively by all LHC experiments
◦ The LHC experiments with an initial FORTRAN code base completed the migration to C++ a long time ago
 Large common software projects in C++ have been in production for many years
◦ ROOT, Geant4, …
 FORTRAN is still in use, mainly by the MC generators
◦ Large development efforts are going into the migration to C++ (Pythia8, Herwig++, Sherpa, …)
 Java is almost nonexistent at the LHC
◦ The exception is the ATLAS event display ATLANTIS

 Scripting has been an essential component of HEP analysis software for decades
◦ PAW macros (kumac) in the FORTRAN era
◦ The C++ interpreter (CINT) in the C++ era
◦ Python is widely used by 3 out of 4 LHC experiments
 Most of the statistical data analysis and final presentation is done with scripts
◦ Interactive analysis
◦ Rapid prototyping to test new ideas
 Scripts are also used to “configure” the complex C++ programs developed and used by the experiments (see the sketch below)
◦ “Simulation” and “reconstruction” programs with hundreds or thousands of configuration options
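As an illustration, here is a minimal sketch in the style of Gaudi Python job options; ApplicationMgr and MessageSvc are actual Gaudi configurables, while the algorithms and their properties (TrackFitter, VertexFinder, MaxIterations, …) are hypothetical names invented for this example.

```python
# Sketch of a Gaudi-style Python job-options file. The algorithm names and
# their properties are hypothetical; ApplicationMgr and MessageSvc are real
# Gaudi configurables.
from Gaudi.Configuration import ApplicationMgr, MessageSvc
from Configurables import TrackFitter, VertexFinder   # hypothetical algorithms

# Each C++ component exposes its properties to Python as a "configurable"
fitter = TrackFitter("Fitter", MaxIterations=10, Tolerance=1e-4)
finder = VertexFinder("Finder", MinTracks=2)

# Compose the application from the configured components
ApplicationMgr(TopAlg=[fitter, finder], EvtMax=1000)
MessageSvc(OutputLevel=3)   # 3 = INFO
```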

 The Python language is interesting for two main reasons:
◦ High-level programming language
 Simple, elegant, easy-to-learn language
 Ideal for rapid prototyping
 Used for scientific programming
◦ Framework to “glue” different functionalities
 Any two pieces of software can be glued at runtime if they offer a “Python interface”
 In principle, with PyROOT any C++ class can easily be used from Python
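A minimal PyROOT example of this gluing at work, using only standard ROOT classes (TH1F, TRandom3), so no extra bindings are needed:

```python
import ROOT

# The C++ classes TH1F and TRandom3 are used directly from Python
h = ROOT.TH1F("h", "Gaussian sample;x;entries", 100, -5.0, 5.0)
rng = ROOT.TRandom3(42)
for _ in range(10000):
    h.Fill(rng.Gaus(0.0, 1.0))

h.Fit("gaus")                     # invoke the C++ fitting machinery
print(h.GetMean(), h.GetRMS())    # inspect the result from the script
```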

 A software framework is an abstraction in which common code providing generic functionality can be selectively overridden or specialized by user code providing specific functionality
 A software framework is similar to a software library in that both are reusable abstractions of code wrapped in a well-defined API
◦ Typically, though, the framework “calls” the user-provided adaptations for specific functionality
 A framework is the realization of a software architecture and facilitates software reuse
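The essence of this inversion of control fits in a few lines. This is a toy sketch, not any particular framework's API:

```python
# Toy sketch of a framework: the framework owns the event loop and calls
# user code back through a fixed interface (inversion of control).
class Algorithm:
    """Base class known to the framework; users override execute()."""
    def initialize(self): pass
    def execute(self, event): raise NotImplementedError
    def finalize(self): pass

def run(algorithms, events):
    # The framework, not the user code, drives the flow of control
    for alg in algorithms:
        alg.initialize()
    for event in events:
        for alg in algorithms:
            alg.execute(event)
    for alg in algorithms:
        alg.finalize()
```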

 The experiments have developed software frameworks
◦ General architecture of any event-processing application (simulation, trigger, reconstruction, analysis, etc.)
◦ To achieve coherency and to facilitate software reuse
◦ Hide technical details from the end-user physicists
◦ Help the physicists focus on their physics algorithms
 Applications are developed by customizing the framework
◦ By the “composition” of elemental algorithms to form complete applications
◦ Using third-party components wherever possible and configuring them
 ALICE: AliROOT; ATLAS+LHCb: Athena/Gaudi; CMS: CMSSW

 Predefined component “vocabulary”
◦ E.g. “Algorithm”, “Tool”, “Service”, “Auditor”, “DataObject”, “Property”, “DetectorCondition”, etc.
 Separation of interfaces and implementations
◦ Allowing implementations to evolve
 Plug-in based (dynamic loading)
 Homogeneous configuration, logging and error reporting
 Built-in profiling, monitoring, utilities, etc.
 Interoperable with other languages (e.g. Java, Python, etc.)

 GAUDI is a mature software framework for event data processing, used by several HEP experiments
◦ ATLAS, LHCb, HARP, GLAST, Daya Bay, Minerva, BES III, …
 The same framework is used for all applications
◦ All applications behave the same way (configuration, logging, control, etc.)
◦ Re-use of “Services” (e.g. detector description)
◦ Re-use of “Algorithms” (e.g. reconstruction -> HLT)
 Equivalent to MARLIN for the LDC

 Separation between “data” and “algorithms”
 Three basic categories of “data”
◦ Event data, detector data, statistical data
 Separation between “transient” and “persistent” representations of the data
 Data-store-centred (“blackboard”) architectural style
 “User code” encapsulated in a few specific places
 Well-defined component “interfaces” with plug-in capabilities
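A toy sketch of the data-store-centred style (again not any specific framework's API; the paths, the algorithm and the selection cut are illustrative):

```python
# Toy sketch of the "blackboard" style: algorithms never call each other,
# they exchange data only through a transient event store addressed by path.
class TransientStore(dict):
    def put(self, path, obj):
        assert path not in self            # event data are write-once
        self[path] = obj

class TrackingAlg:                          # hypothetical algorithm
    def execute(self, store):
        hits = store["/Event/Raw/Hits"]     # input registered by an upstream producer
        tracks = [h for h in hits if h.quality > 0.5]   # stand-in for pattern recognition
        store.put("/Event/Rec/Tracks", tracks)          # published for downstream algorithms
```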

[Slide diagram: Algorithms A, B and C communicate through the transient event data store, each reading its input data objects (T1 … T5) from the store and writing its outputs back; the real dataflow goes through the store, while the apparent dataflow runs from algorithm to algorithm.]

 Concept of sequences of algorithms, to allow processing based on physics signatures
◦ Avoid re-calling the same algorithm on the same event
◦ Different instances of the same algorithm are possible
 Event filtering
◦ Avoid passing every event through the whole processing chain
[Slide diagram: events flowing from input to output through single algorithm instances, with filter decisions steering the sequences.]
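A toy sketch of how a sequence can implement such filtering (illustrative only, building on the Algorithm sketch above):

```python
# Toy sketch: a sequence stops processing an event as soon as one of its
# member algorithms rejects it, so later members never see that event.
class Sequence:
    def __init__(self, members):
        self.members = members                # algorithms and/or nested sequences

    def execute(self, store):
        for alg in self.members:
            if alg.execute(store) is False:   # a False return acts as a veto
                return False                  # event filtered out of this branch
        return True
```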

 Agreement on “standard formats” for input and output data is essential for the independent development of different detector concepts
◦ Detector description, configuration data, MC generator data, raw data, reconstructed data, statistical data, etc.
 This has been the direction for the ILC software so far (e.g. lcdd, lcio, stdhep, gear, …)
◦ Very successful
◦ Interoperability between different languages (C++, Java)
 Unfortunately, the LHC has been standardizing on other formats/interfaces (e.g. HepMC, GDML, ROOT files, etc.)
◦ An effort has started to increase convergence in this area

 It is not sufficient to say that something is in “ROOT” format or “ASCII” format
◦ You need to agree on the actual data content
◦ This implies common or standard “event data models”
 Common event data models
◦ Obviously, within an experiment the event model is common
 Algorithms can easily be re-used between reconstruction, high-level trigger, etc.
◦ Sharing of event data models at the LHC has not happened
◦ By contrast, LCIO is a very successful event model (and I/O system) for the ILC community
 It allows the detector concepts to interoperate

 Event record written in C++ for HEP MC generators
◦ Many extensions beyond HEPEVT (the FORTRAN HEP standard common block)
 Agreed interface between MC authors and clients (LHC experiments, Geant4, …)
 I/O support
◦ Read and write ASCII files; handy but not very efficient
◦ HepMC events can also be stored with ROOT I/O

 Geometry Description Markup Language (XML)
 Low level (materials, shapes, volumes and placements)
◦ Quite verbose to edit directly
 Directly understood by Geant4 and ROOT
◦ Long-term commitment
 Has been extended by lcdd with sensitive volumes, regions, visualization attributes, etc.
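For instance, the same GDML file can be loaded into ROOT's geometry package; a minimal PyROOT sketch (the file name is hypothetical):

```python
import ROOT

# Import a GDML geometry into ROOT's geometry package (TGeo)
geom = ROOT.TGeoManager.Import("detector.gdml")   # hypothetical file name
print(geom.GetTopVolume().GetName())              # browse the volume tree
geom.GetTopVolume().Draw()                        # visualize the same geometry
```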

[Slide diagram: a GDML (XML) geometry, maintainable with a plain text editor, is consumed both by the simulation chain (Geant4, and Fluka via Flugg) and by ROOT TGeom; MC generators such as Pythia produce MCTruth in HepMC, stored in ROOT files and catalogued in MCDB; a high-level description (XML) complements the geometry (ROOT).]

 To facilitate the integration of independently developed components to build a coherent [single] program
 Dictionaries
◦ Dictionaries provide metadata (reflection) to allow introspection of, and interaction with, objects in a generic manner
◦ The LHC strategy has been a single reflection system (Reflex)
 Scripting languages
◦ Interpreted languages are ideal for rapid prototyping
◦ They allow the integration of independently developed software modules (software bus)
◦ The LHC standardized on the CINT and Python scripting languages
 Component model and plug-in management
◦ Modeling the application as a set of components with well-defined interfaces
◦ Loading the required functionality at runtime

[Slide diagram: a C++ header X.h is processed by genreflex (Reflex) or rootcint (CINT) into a dictionary library XDict.so; through the ROOT meta layer, the dictionaries serve object I/O, scripting from CINT and Python (PyROOT), plug-in management, etc.]
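A sketch of what the dictionaries buy you in practice: once a dictionary library for a C++ class has been generated (e.g. with genreflex or rootcint) and loaded, the class is usable from the interpreter without any hand-written binding. The library and class names here are hypothetical:

```python
import ROOT

# Load a generated dictionary library (hypothetical name)
ROOT.gSystem.Load("libXDict")

# The C++ class described in X.h is now reachable through reflection
obj = ROOT.MyClass()        # hypothetical class declared in X.h
obj.setValue(42)            # methods are discovered via the dictionary
print(obj.value())
```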

 Highly optimized (speed and size), platform-independent I/O system, developed over more than 10 years
◦ Able to write/read any C++ object (event-model independent)
◦ Almost no restrictions (a default constructor is needed)
 Makes use of the “dictionaries”
 Self-describing files
◦ Support for automatic and complex “schema evolution”
◦ Usable without the “user libraries”
 All the LHC experiments will rely on ROOT I/O for many years to come
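A minimal PyROOT round trip through ROOT I/O (file and object names are illustrative):

```python
import ROOT

# Write: any dictionary-known object can be streamed to a ROOT file
f = ROOT.TFile("demo.root", "RECREATE")
h = ROOT.TH1F("h", "demo", 100, 0.0, 1.0)
h.Write()
f.Close()

# Read it back: the file is self-describing, so browsing its contents
# does not require the user libraries that produced it
f = ROOT.TFile("demo.root")
h2 = f.Get("h")
print(h2.GetEntries())
```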

 Common interfaces/formats between detector concepts are good, but adopting a “common framework” is even better
◦ It would enable one more level of re-use (at the algorithm/tool level)
 What are the alternatives?
◦ Evolve MARLIN so that it is universally adopted?
◦ Use an existing LHC framework (e.g. GAUDI, AliROOT)?
◦ Do something else?

 Python can also be seen as a framework into which you can easily plug “extension modules” in binary form, implemented in other languages
◦ Very easy and natural to interface to C++ classes
◦ Python should only be the “glue” between modules developed in C++ or other languages
◦ The interface (API) for Python extension modules is quite simple and at the same time very flexible (generic functions)
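A minimal sketch of this "software bus" idea using only the Python standard library; the compiled library and its function are hypothetical:

```python
import ctypes

# Load a compiled C/C++ module at runtime (hypothetical library)
lib = ctypes.CDLL("libfastreco.so")
lib.process_event.argtypes = [ctypes.c_int]
lib.process_event.restype = ctypes.c_double

# Python glues the pieces together; the number-crunching stays in compiled code
chi2 = lib.process_event(42)
```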

[Slide diagram: Python as the hub of a software bus, connecting the very rich set of standard and specialized modules (math, shell, several GUI toolkits, XML, databases), the LHC modules (GaudiPython for the Gaudi framework, PyROOT for the ROOT C++ classes, Ganga), and gateways to other frameworks (e.g. PVSS, and Java classes via JPype).]

 The high-level steering of applications could be done in Python
◦ Configuration, plug-in (module) loading, event loop, algorithm scheduling, etc.
◦ Python’s low execution performance shouldn’t be a problem at this level
 The real number-crunching could be done in a proper compiled language (e.g. C or C++)
 It is essential to define “interfaces” for the framework components
◦ As is the case for a C++ framework
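A sketch of what such Python-level steering could look like; the plug-in names and the event source are hypothetical:

```python
import importlib

# Configuration: which algorithms to run, and for how many events
config = {
    "algorithms": ["reco.TrackFinder", "reco.VertexFitter"],  # hypothetical plug-ins
    "max_events": 1000,
}

# Plug-in loading: resolve each "module.Class" name at runtime
algorithms = []
for name in config["algorithms"]:
    module_name, class_name = name.rsplit(".", 1)
    cls = getattr(importlib.import_module(module_name), class_name)
    algorithms.append(cls())

# Event loop: steering in Python, number-crunching in the compiled algorithms
for i, event in enumerate(event_source()):    # event_source() is hypothetical
    if i >= config["max_events"]:
        break
    for alg in algorithms:
        alg.execute(event)
```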

 Based on the existing LHC common software, we plan to provide a coherent and fairly complete set of software packages ready to be used by any existing or new HEP experiment
◦ Utility libraries, analysis tools, MC generators, full and fast simulation, reconstruction algorithms, etc.
◦ A reference place to look for needed functionality
◦ Made from internally and externally developed products
 [Fairly] independent packages, including some integrating elements such as “dictionaries”, “standard interfaces”, etc.
◦ Support for I/O, interactivity, scripting, etc.
 Easy to configure, deploy and use by end-users
◦ Good packaging and deployment tools will be essential

 Unfortunately, very little has been done in common between the LHC experiments
◦ Different event models
◦ Different data-processing frameworks
 The equivalent of the LCIO event data model did not exist
 Within a given experiment, alternative sets of algorithms have been developed and could be evaluated thanks to the common event model and framework
 Work in this direction will be part of an EU-FP7 proposal (AIDA)

 ROOT is the de facto standard and repository for all this common functionality
◦ Histogramming, random numbers, linear algebra, numerical algorithms, fitting, multivariate analysis, statistical classes, etc.
 The aim is to continue to provide and support a coherent set of mathematical and statistical functions needed for analysis, reconstruction and simulation

 Re-using existing software packages saves development effort but complicates the “software configuration”
◦ This complexity needs to be hidden
 A configuration is a combination of packages and versions that are coherent and compatible
 E.g. the LHC experiments build their application software on a given “LCG/AA configuration”
◦ Interfaced to the experiments’ configuration systems (SCRAM, CMT)

 CernVM is a virtual appliance being developed to provide a portable and complete software environment for all LHC experiments
◦ Decouples the experiment software from the system software and hardware
 It comes with a read-only file system optimized for software distribution
◦ Operational in offline mode for as long as you stay within the cache
◦ Reduces the effort to install, maintain and keep the experiment software up to date

 I have been focusing on the reasons why a framework is needed, rather than recommending a concrete one
 When deciding on the framework (and languages), not only the technical advantages should be considered
◦ Interoperability with existing codes from the community (e.g. ROOT, Geant4, etc.)
◦ People’s training is also an important factor
 Common interfaces/formats are good, but adopting a common framework is even better
◦ It would enable one more level of re-use
◦ A single framework covering all applications is desirable
 Muon Collider detector studies could profit from the existing structures and support for the common LHC software