Proposal for the MEG Offline System
Assisi, 9/21/2004 - Corrado Gatto
General Architecture, Computing Model, Organization & Responsibilities, Milestones


Dataflow and Reconstruction Requirements
– L3 trigger rate: 100 Hz
– Raw event size: 1.2 MB
– Raw data throughput: (10+10) Hz × 1.2 MB/phys evt + … Hz × 0.01 MB/bkg evt = 3.5 MB/s (i.e. 35 kB per event at 100 Hz)
– Total raw data storage: 3.5 MB/s × 10^7 s = 35 TB/yr
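A quick consistency check of these numbers (my own back-of-the-envelope arithmetic, using the 10^7 s of effective running time per year quoted on the slide):

$$
\frac{3.5\ \mathrm{MB/s}}{100\ \mathrm{Hz}} = 35\ \mathrm{kB/event},
\qquad
3.5\ \mathrm{MB/s} \times 10^{7}\ \mathrm{s/yr} = 3.5\times 10^{7}\ \mathrm{MB} = 35\ \mathrm{TB/yr}.
$$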

Monarc Analysis Model Baseline: Event Sizes and Storage Sizes
Event sizes:
– Raw data: 1.2 MB/event
– ESD: 10 kB/event
– AOD: 2 kB/event
– TAG or DPD: 1 kB/event
Storage:
– Raw data: 35 TB
– Reconstructed data: 13 TB/reprocessing
– MC generated events: 40 TB
– MC reconstructed events: 25 TB/reprocessing
Assuming 10^9 generated MC events. Very rough estimate.

Comparison with other experiments

Requirements for the software architecture / framework
– Geant3 compatible (at least at the beginning)
– Easy interface with existing packages: Geant3, Geant4, external (Fortran) event generators
– Scalability
– Simple structure, usable by non-computing experts
– Written and maintained by a few people
– Portability
– Use a world-wide accepted framework
=> Use ROOT + an existing offline package as starting point

The project has started
– 5 existing offline packages under evaluation
– AliRoot is the front-runner: all aspects of the offline are implemented
– Support offered by Carminati + Brun

Raw Performances
– Pure Linux setup
– 20 data sources
– Fast Ethernet local connection

MEG Computing Model: Work in Progress
– Analysis = reconstruction -> need O(1) farm per analysis
– Analysis policy not yet established (one centralized analysis or several competing analyses)
– Very CPU-demanding processing: MC generation 0.5 Hz, reconstruction 1 Hz
– Final design within a few months
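A rough order-of-magnitude reading of these numbers (my own estimate, assuming the full 100 Hz L3 stream is reconstructed at 1 Hz per CPU):

$$
N_{\mathrm{CPU}} \approx \frac{100\ \mathrm{Hz}\ (\text{L3 rate})}{1\ \mathrm{Hz\ per\ CPU}} = O(100)\ \text{CPUs per reconstruction pass},
$$

which is why each independent analysis effectively requires a farm of its own.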

Manpower Estimate (framework only) Available at Lecce (Estimate)

Responsibilities & Tasks (all software)
Detector experts:
– LXe: Signorelli, Yamada, Sawada
– DC: Schneebeli (hit), Hajime (pattern), Lecce
– TC: Pavia / Genova
– Trigger: Nicolo' (Pisa)

Starting-up: Tasks and Objectives
– Set up a system to test the offline code: DAQ, prompt calibration, reco farm, online disk server, staging to tape, RDBMS
– Prototype the main program: container classes, steering program, FastMC
– Test functionalities & performance
– Core offline system development

Milestones
– Start-up: September
– Choice of the prototype offline system: end of October
– Organize the reconstruction code: December
  1. per subdetector (simulation, part of reconstruction)
  2. central tasks (framework, global reconstruction, visualisation, geometry database, …)
– Start the development system HW: January
– Write down the offline structure (container classes, event class, etc.): February
– MDC: 4th quarter
– Keep the existing MC in the Geant3 framework; form a panel to decide if and how to migrate to ROOT: 4th quarter 2005

Conclusions
– MEG's offline project approved by the collaboration
– Offline group is consolidating (mostly in Lecce)
– Work is starting
– Software framework and architecture have been frozen
– Computing model will be chosen soon
– Merging with existing software within 1 year

Proposed Architecture
– Fully OO. Each detector executes a list of detector actions/tasks and produces/posts its own data.
– All the functionality is implemented in the Detector class:
  – both sensitive modules (detectors) and non-sensitive ones are described by this base class;
  – it supports the hit and digit trees produced by the simulation;
  – it supports the objects produced by the reconstruction;
  – it is also responsible for building the geometry of the detectors.
– The Run Manager coordinates the detector classes:
  – it executes the detector objects in the order of the list;
  – global trigger, simulation and reconstruction are special services controlled by the Run Manager class.
– Ensure a high level of modularity (for ease of maintenance):
  – the structure of every detector package is designed so that static parameters (such as geometry and detector response parameters) are stored in distinct objects.
– The data structure is built up as ROOT TTree objects.
– Offline services (geometry browser, event display, RDBMS interface, tape interface, etc.) are based on ROOT built-in services.
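A minimal sketch of how this scheme could look in ROOT-style C++ (the class and method names MEGDetector, MEGRunManager, etc. are hypothetical and simply mirror the AliRoot-like design described above, not the actual MEG code):

```cpp
#include <TNamed.h>
#include <TObjArray.h>
#include <TTree.h>

// Base class for every detector module, sensitive or not: it owns its
// geometry, its hit/digit branches and its part of the reconstruction.
class MEGDetector : public TNamed {
public:
   MEGDetector(const char* name, const char* title) : TNamed(name, title) {}
   virtual void CreateGeometry() = 0;          // build this detector's geometry
   virtual void MakeBranches(TTree& tree) = 0; // attach hit/digit branches to the event tree
   virtual void Digitize() = 0;                // simulation output -> digits
   virtual void Reconstruct() = 0;             // digits -> reconstructed objects
};

// The Run Manager executes the registered detectors in the order of the list;
// global trigger, simulation and reconstruction would be steered from here too.
class MEGRunManager {
public:
   void AddDetector(MEGDetector* det) { fDetectors.Add(det); }
   void ProcessEvent(TTree& eventTree) {
      for (Int_t i = 0; i < fDetectors.GetEntriesFast(); ++i) {
         auto det = static_cast<MEGDetector*>(fDetectors.At(i));
         det->Digitize();
         det->Reconstruct();
      }
      eventTree.Fill();  // event data persisted as a ROOT TTree
   }
private:
   TObjArray fDetectors; // detectors executed in registration order
};
```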

Computing Model

What Does MEG Need?
– WAN file access
– Parallel / remote processing
– Robotic tape support
– RDBMS connectivity
– Event display
– GEANT3 interface
– Geometric modeller
– UI
– DQM
– Histogramming

What Does ROOT Offer?
– Extensive CERN support (a bonus for small collaborations)
– Unprecedentedly large contributing HEP community (open-source project)
– Multi-platform
– Supports multi-threading and asynchronous I/O (vital for a reconstruction farm)
– Optimised for different access granularities: raw data, DSTs, ntuple analysis
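A small illustration of two of these points, remote file access and selective, compressed TTree reading (a sketch only: the server URL, tree name and branch names are hypothetical):

```cpp
#include <TFile.h>
#include <TTree.h>
#include <memory>

void read_remote_dst() {
   // TFile::Open dispatches on the protocol: a local path, root://, http://, ...
   std::unique_ptr<TFile> f(TFile::Open("root://some.server.example//meg/run0001_dst.root"));
   if (!f || f->IsZombie()) return;

   auto tree = static_cast<TTree*>(f->Get("DST"));
   if (!tree) return;

   // Only enabled branches are read and decompressed, so a DST/AOD-level
   // analysis does not pay for the raw-data payload.
   tree->SetBranchStatus("*", 0);
   tree->SetBranchStatus("nHitsLXe", 1);

   Int_t nHitsLXe = 0;
   tree->SetBranchAddress("nHitsLXe", &nHitsLXe);
   for (Long64_t i = 0; i < tree->GetEntries(); ++i) {
      tree->GetEntry(i);
      // ... analyse nHitsLXe ...
   }
}
```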

Experiments Using ROOT for the Offline

Experiment | Max evt size  | Evt rate | DAQ out  | Tape storage | Subdetectors | Collaborators
STAR       | 20 MB         | 1 Hz     | 20 MB/s  | 200 TB/yr    | 3            | >400
Phobos     | 300 kB        | 100 Hz   | 30 MB/s  | 400 TB/yr    | 3            | >100
Phenix     | 116 kB        | 200 Hz   | 17 MB/s  | 200 TB/yr    | 12           | 600
Hades      | 9 kB (compr.) | 33 Hz    | 300 kB/s | 1 TB/yr      | 5            | 17
Blast      | 0.5 kB        | 500 Hz   | 250 kB/s |              | 5            | 55
MEG        | 1.2 MB        | 100 Hz   | 3.5 MB/s | 70 TB/yr     | 3            |

From GEANT3 to ROOT
Migration schema #1: convert the geometry
– A conversion program from Geant3 to ROOT (g2root) exists to convert a GEANT RZ file into a ROOT C++ program.
– All the components are translated: geometry, materials, kinematics, etc.
– However, one needs to write the output in the format required by the reconstruction code.
– The new code is integrated with the VMC schema and can be run with any desired MC.
Migration schema #2: call the Fortran code from a C++ program
– The calls to Geant3 are intercepted and the ROOT components are fully usable (geometry browser, geometric modeller, etc.).
– However, one needs to interface the output in ZEBRA format with the TTree format required by the ROOT-based reconstruction code.
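A sketch of what migration schema #1 looks like in practice (file and macro names here are hypothetical, and I assume the converted macro builds a TGeoManager geometry, as recent g2root versions do):

```cpp
// Step 1, on the shell: convert the GEANT3 RZ geometry into a ROOT macro:
//    g2root megsim.rz meg_geometry.C
// Step 2, in a ROOT session: execute the generated macro and browse the result.
#include <TROOT.h>
#include <TGeoManager.h>
#include <TGeoVolume.h>

void load_meg_geometry() {
   gROOT->ProcessLine(".x meg_geometry.C");  // run the macro produced by g2root
   if (gGeoManager)                          // the converted geometry is now in memory...
      gGeoManager->GetTopVolume()->Draw();   // ...and usable with the standard ROOT tools
}
```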

ROOT vs Fortran

ROOT:
– Several months of start-up
– An investment for the future
– All HEP components built into a unique framework
– All libraries still maintained
– Plenty of compilers
– Built-in full 3D event display with navigator
– Interactive remote analysis
– Fully platform independent (can even mix Unix and Windows)
– 3D histograms
– Automatic file compression (a ROOT file is 2-3 times smaller than ZEBRA)

Fortran:
– Immediate start-up
– Hard to attract younger people
– Need to merge many libraries
– CERNLIB no longer maintained (last release is for the Itanium)
– Hard to find free compilers
– No event display available in Fortran (naive in G3)
– No interactive remote analysis
– No compression

Conclusions
– ROOT already has all the features MEG needs built in
– Migration from Geant3 to ROOT is a non-issue
– The real issue is the reconstruction routines: they have to be written in C++
– The decision between an offline in Fortran or in C++ should be taken on the basis of how much reconstruction code, rather than Monte Carlo code, has already been written