COMPASS off-line computing


COMPASS off-line computing (CHEP2000)
- the COMPASS experiment
- the analysis model
- the off-line system: hardware, software

The COMPASS Experiment (Common Muon and Proton Apparatus for Structure and Spectroscopy)
- Fixed-target experiment at the CERN SPS, approved in February 1997; commissioning from May 2000, data taking for at least 5 years
- Collaboration: about 200 physicists from Europe and Japan
- Diversified physics programme:
  - muon beam: gluon contribution to the nucleon spin, quark spin distribution functions
  - hadron beam: glueballs, charmed baryons, Primakoff reactions
- All measurements require high statistics

Experimental Apparatus
- Two-stage spectrometer (LAS, SAS)
- Several new detectors: GEMs, Micromegas, straw trackers, scintillating fibers, RICH, silicon detectors, calorimeters, drift and multi-wire proportional chambers (440 k electronic channels)
- Not an easy geometry: highly inhomogeneous magnetic field (SM1, PTM)

Expected Rates
- beam intensity: 10^8 muons/s with a duty cycle of 2.4 s / 14 s
- RAW event size: ~20-30 kB
- trigger rate: 10^4 events/spill; DAQ designed for 10^5 events/spill (hadron programme)
- on-line filtering; continuous data acquisition; flux: 35 MB/s
- data taking period of ~100 days/year: ~10^10 events/year, ~300 TB/year of RAW data

COMPASS analysis model
- The RAW data will be stored at CERN (no copy foreseen) and have to be accessible during the whole experiment lifetime
- The RAW data will be processed at CERN, in parallel to and at the same speed as data acquisition, assuming: no pre-processing for calibrations; ~1 reprocessing of the full data set; processing time 2 SPECint95-sec/event; calibrations "on-line"; powerful on- and off-line monitoring; small data-subset reprocessing
- If fast raw-data access is available, the needed CPU power is 2000 SPECint95 (~20 000 CU)
- Physics analysis will be performed at the home institutes, as well as specific studies and MC production: the relevant data sets must have a much smaller size; remote and concurrent access to raw data is important ("PANDA" model…)
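For a rough cross-check of the quoted figure (this arithmetic is not on the slides themselves): processing in parallel with data acquisition means sustaining on the order of 1000 events/s, the round number used later for the farm, and 1000 ev/s × 2 SPECint95-sec/event = 2000 SPECint95.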

General choices
- In 1997 COMPASS decided to build a completely new software system: use OO techniques, C++ as programming language, an object database (ODB) to store the data
- Given the short time scale, the 'small' collaboration, the novelty, and the well-known difficulty of the tasks, it was mandatory to: collaborate with the IT division; foresee the use of LHC++ and commercial products (HepODBMS, Objectivity/DB); look at other developments (ROOT)

Off-line system
- Hardware: central data recording; COMPASS Computing Farm (CCF) (see M. Lamanna presentation, Feb. 7, session E)
- Software: data structures and access; CORAL (COmpass Reconstruction and AnaLysis) program

Central data recording (CDR)
- Updated version of the CERN Central Data Recording (CDR) scheme
- The on-line system (ALICE DATE system) performs the event building (and filtering) and writes RAW data on local disks as files in byte-stream format (10-20 parallel streams), keeping a "run" structure (typical size 50 GB)
- The Central Data Recording system transfers the files to the COMPASS Computing Farm at the computer centre (rate of 35 MB/s)
- The COMPASS Computing Farm (CCF): formats the data into a federated database (Objectivity/DB), converting the RAW events into simple persistent objects; performs fast event tagging or clusterisation (if necessary); sends the DB to the HSM for storage
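A minimal sketch of the formatting step described above. The record layout assumed here (a 32-bit length followed by the payload) and all class and function names are illustrative stand-ins; the real CCF code parses DATE buffers and creates Objectivity/DB persistent objects through HepODBMS rather than the plain C++ structures used below.

```cpp
// Illustration only: wrap each RAW event from a byte-stream run file into a
// raw-event record, mimicking the "convert RAW events into simple persistent
// objects" step.  The record layout (length + payload) is a stand-in for the
// real DATE event format, and DatabaseSink stands in for the federated DB.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <fstream>
#include <vector>

struct RawEventRecord {
    std::uint32_t     size = 0;   // payload size in bytes
    std::vector<char> buffer;     // untouched byte-stream buffer
};

struct DatabaseSink {             // placeholder for the Objectivity federation
    std::size_t events = 0;
    std::size_t bytes  = 0;
    void store(const RawEventRecord& ev) { ++events; bytes += ev.buffer.size(); }
};

// Read one length-prefixed record; return false at end of file.
bool readNextEvent(std::ifstream& in, RawEventRecord& ev) {
    if (!in.read(reinterpret_cast<char*>(&ev.size), sizeof(ev.size))) return false;
    ev.buffer.resize(ev.size);
    return static_cast<bool>(in.read(ev.buffer.data(), ev.size));
}

// "Format" one run file: loop over its events and hand them to the sink.
void formatRunFile(const char* path, DatabaseSink& db) {
    std::ifstream in(path, std::ios::binary);
    RawEventRecord ev;
    while (readNextEvent(in, ev)) db.store(ev);
}

int main(int argc, char** argv) {
    DatabaseSink db;
    for (int i = 1; i < argc; ++i) formatRunFile(argv[i], db);
    std::printf("formatted %zu events (%zu bytes)\n", db.events, db.bytes);
    return 0;
}
```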

COMPASS Computing Farm (CCF)
- Beginning of 1998: IT/PDP Task Force on computing farms for high-rate experiments (NA48, NA45, and COMPASS)
- Proposed model for the CCF: hybrid farm with about 10 proprietary Unix servers ("data servers") and about 200 PCs ("CPU clients"), 2000 SPECint95 (0.2 s/ev), 3 to 10 TB of disk space
- Present model: farm with PCs as both "data servers" and "CPU clients"; order of 100 dual PIII machines; standard PCs running CERN-certified Linux (now: RH 5.1 with kernel 2.2.10/12)

CCF

COMPASS Computing Farm (cont.)
- The data servers will: handle the network traffic from the CDR, format the RAW events into a federated DB, and send them to the HSM; receive the data to be processed from the HSM, if needed; distribute the RAW events to the PCs for reconstruction; receive back the output (persistent objects) and send it to the HSM
- The CPU clients will process the RAW events (reconstruction of different runs/files has to run in parallel)
- A real challenge: 1000 ev/sec to be stored and processed by 100 dual PCs
- Tests with prototypes have been going on for two years, with good results
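As a back-of-envelope consistency check (not stated on the slide): 1000 ev/s spread over ~100 dual-CPU PCs is about 5 ev/s per CPU, i.e. the 0.2 s/event budget quoted earlier for the 2000 SPECint95 farm.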

Off-line system: Software
- Data structures: event DB; experimental conditions DB; reconstruction quality control monitoring DB; MC data
- CORAL: COmpass Reconstruction and AnaLysis program

Data structures: Event DB
- Event-header containers: small size (on disk), basic information like tag, time, ...
- RAW event containers: just one object with the event (DATE) buffer
- Reconstructed data containers: objects for physics analysis
- Direct access to objects; run (file) structure not visible
- Associations to avoid duplication: direct: raw - reco. data; via "time": raw - monitoring events
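A schematic C++ view of the layout described above; the types are illustrative stand-ins for the persistent classes, and Ref<T> plays the role of the object database's smart references.

```cpp
// Illustrative sketch of the event-DB layout: a small event header that lives
// in its own container and holds associations to the raw and reconstructed
// objects, so that analysis jobs can read headers (and reconstructed data)
// without touching the raw buffers.
#include <cstdint>
#include <vector>

template <typename T>
struct Ref {                    // placeholder for a persistent reference
    T* object = nullptr;        // a real ODBMS reference is resolved lazily
};

struct RawEvent {               // "just one object with the event (DATE) buffer"
    std::vector<char> buffer;
};

struct RecoEvent {              // objects for physics analysis
    // ... reconstructed tracks, vertices, RICH rings, calorimeter clusters
};

struct EventHeader {            // small object: tag, time, and associations
    std::uint32_t run;
    std::uint32_t event;
    std::uint32_t time;         // also the key into the conditions DB
    std::uint32_t tag;          // fast pre-selection bits
    Ref<RawEvent>  raw;         // direct association: header -> raw data
    Ref<RecoEvent> reco;        // direct association: header -> reco data
};
```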

Data structures (cont.)
- Experimental conditions DB: includes all information for processing and physics analysis (on-line calibrations, geometrical description of the apparatus, ...); based on the CERN porting of the BaBar Condition Database package (included in HepODBMS); versioning of objects; access to valid information using the event time
- Reconstruction quality control monitoring data: includes all quantities needed for monitoring the stability of the reconstruction and of the apparatus performance; stored in Objectivity/DB
- Monte Carlo data: we are using Geant3 (Geant4: under investigation, not in the short term); ntuples, Zebra banks
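The "access to valid information using the event time" idea can be sketched as below. This is an in-memory illustration only, with invented class names; the actual implementation is the BaBar Condition Database package as ported to HepODBMS.

```cpp
// Each stored conditions object carries an interval of validity and a version
// number; a lookup for a given event time returns the newest version whose
// interval contains that time.
#include <cstdint>
#include <optional>
#include <vector>

struct CalibrationSet { /* detector calibrations, alignment, ... */ };

struct ConditionsEntry {
    std::uint32_t   validFrom;   // event-time interval of validity
    std::uint32_t   validTo;
    int             version;     // later insertions supersede earlier ones
    CalibrationSet  payload;
};

class ConditionsFolder {
public:
    void insert(const ConditionsEntry& e) { entries_.push_back(e); }

    // Return the highest-version entry valid at the given event time.
    std::optional<CalibrationSet> find(std::uint32_t eventTime) const {
        const ConditionsEntry* best = nullptr;
        for (const auto& e : entries_) {
            if (eventTime >= e.validFrom && eventTime < e.validTo &&
                (!best || e.version > best->version)) {
                best = &e;
            }
        }
        if (!best) return std::nullopt;
        return best->payload;
    }

private:
    std::vector<ConditionsEntry> entries_;
};
```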

Status
- Event DB: version 1 ready
- Experimental conditions DB: in progress, implementation started
- Reconstruction quality control monitoring data: starting
- Monte Carlo data: ready

CORAL: COmpass Reconstruction and AnaLysis program
- Fully written in C++, using OO techniques
- Modular architecture, with a framework providing all basic functionality
- Well-defined interfaces for all components needed for event reconstruction
- Insulation layers for all "external" packages, e.g. the access to the experimental conditions and event DB (reading and writing persistent objects) via HepODBMS, to assure flexibility in changing both reconstruction components and external packages
- Components for event reconstruction developed in parallel: detector decoding, pattern recognition in geometrical regions, track fit, RICH and calorimeter reconstruction, …
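The insulation-layer idea can be sketched as a pair of abstract interfaces that the framework's event loop is written against; the interface names below are hypothetical, not CORAL's actual ones.

```cpp
// The framework and the reconstruction components only see abstract
// interfaces; the concrete implementation that talks to Objectivity/HepODBMS
// (or to any replacement) is hidden behind them.
#include <memory>
#include <vector>

struct RawEventBuffer { std::vector<char> bytes; };  // DATE buffer
struct RecoOutput     { /* tracks, vertices, ... */ };

// Insulation layer for the event store: reading/writing persistent objects.
class IEventStore {
public:
    virtual ~IEventStore() = default;
    virtual bool nextRawEvent(RawEventBuffer& buf) = 0;          // read side
    virtual void writeReconstructed(const RecoOutput& out) = 0;  // write side
};

// Well-defined interface for one reconstruction component.
class IReconstructionComponent {
public:
    virtual ~IReconstructionComponent() = default;
    virtual void process(const RawEventBuffer& in, RecoOutput& out) = 0;
};

// The framework drives the event loop using only the interfaces above.
void runEventLoop(IEventStore& store,
                  std::vector<std::unique_ptr<IReconstructionComponent>>& chain) {
    RawEventBuffer raw;
    while (store.nextRawEvent(raw)) {
        RecoOutput out;
        for (auto& component : chain) component->process(raw, out);
        store.writeReconstructed(out);
    }
}
```

Swapping the object database, or replacing one reconstruction component, then only requires providing a new implementation of the corresponding interface.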

CORAL

CORAL status
- Development and tests on Linux; we try to keep portability to other platforms (Solaris)
- Framework: almost ready; work going on to interface the new reconstruction components and the access to the experimental conditions DB
- Reconstruction components:
  - integrated inside CORAL and tested: MC event reading and decoding, track pattern recognition, track fit, …
  - integration foreseen soon: RICH pattern recognition, calorimeter reconstruction, vertex fit, ...
  - under development: detector (DATE buffer) decoding, in parallel with on-line, ...
- Goal: version 1 ready at the end of April 2000, with all basic functionality even if not optimised, as for all other off-line system components

General comments
- Most of the problems we had are related to the fact that we are still in a transition period: no stable environment, both for the available software (LHC++) and for the OS (Linux); lack of standard "HEP-made" tools and packages; commercial products do not always seem to be a solution; too few examples of HEP software systems using the new techniques
- Expertise and resources: having a large number of physicists who know the new programming language (and techniques) requires time; all the work has been done by a very small, enthusiastic team (3 to 10 FTE in 2 years)
- Still, we think we made the right choice

From the minutes of the 16th meeting of FOCUS held on December 2, 1999: "FOCUS … recognises the role that the experiment has as a 'test-bed' for the LHC experiments."
