Status Report on Data Reconstruction, May 2002 (C. Bloise)

Outline:
– Results of the study of the reconstructed events in year 2001
– Data reprocessing in Y2002
– DST production
– Monte Carlo samples
– Data taking in Y2002
– Summary

Y2001 Data
Study of the data samples taken at different background levels
– Reconstruction improved: more robust selection criteria implemented

Data Reconstruction Updates
Filtering procedure
– Pre-filtering of cosmic muons removed
– Rejection of events with a large number of calorimeter cells removed
Selection criteria
– Large-angle Bhabha and … selection improved
– Tags devoted to KS → …, KL → …, KL → … added
– Charged kaon selection criteria completely revised
Calorimeter calibration
– Refined time calibration introduced on October 17
– Procedure created to correct the rest of the data

Reconstruction – Y2001
[Flow diagram: raw data → EmC reconstruction → DC reconstruction → event classification (streams: bha, kpm, ksl, rpi, rad, clb; flt, afl; cos); prescaled cosmic, MB cosmic, and Bhabha samples (÷10, ÷100).]

Reconstruction – Y2002
[Flow diagram: L2 triggers → raw data → L3 filter (cosmic) → EmC reconstruction → DC reconstruction → event classification (bha, kpm, ksl, rpi, rad, clb; flt, afl); MB cosmic prescaled ÷100.]

Data Reprocessing
Goals:
– generate a data set that is homogeneous with respect to filtering criteria and calibration quality
– include the new selection criteria developed for charged and neutral kaons and improve the luminosity measurement
Reprocessing ran from January to May, with … triggers analyzed.

Computing & Data Storage 2002
[Diagram of the computing infrastructure:]
– Online farm: krunc (Run Control & DFC); fibm… and fibm08 nodes
– Offline farm: 22 IBM B80 (88 CPUs), 2.7 kHz total farm power
– Monte Carlo and DSTs: 10 Enterprise servers (… CPU)
– Users: 2 IBM + 2 Sun
– Disk: reconstruction output 560 GB; job control areas 200 GB; data recalled from tape 2200 GB; 2 TB and 1.4 TB volumes on the servers
– Servers: fibm01 DB2 server; 2 IBM H80 TSM/NFS servers; 2 IBM H70 AFS servers
– Tape library: Magstar, 5,500 cassettes, 12 tape units, 220 TB
– Network: Catalyst 6000 switch

Day-by-day Data Reprocessing
[Plot, February 17 to May 7: pb⁻¹/day, pb⁻¹ per 100 M triggers, and trigger rate (kHz).]

Data Taking in Y2001
180 pb⁻¹ collected in different run periods, at different luminosities and background levels.
Y2000 and Y2001 data set: CPU power cost and data volumes stored for different data-taking conditions.
[Table, per run period (Nov–Dec, Sep–Oct, Jun–Jul, Apr): maximum luminosity (cm⁻² s⁻¹), sample luminosity (pb⁻¹), maximum reconstructable luminosity (pb⁻¹/day), data volume (GB/pb⁻¹).]
Different run conditions imply different CPU power needed per pb⁻¹ and different data volumes stored per pb⁻¹.
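One plausible reading of the "maximum reconstructable luminosity" column is the farm trigger throughput divided by the number of triggers collected per pb⁻¹ in each run period (the quantity plotted elsewhere as pb⁻¹/100 Mtri). A minimal sketch under that assumption; the 100 M triggers/pb⁻¹ input is a placeholder, not a measured value from this report:

farm_capacity_hz = 2700      # offline farm processing power (from the Y2002 reconstruction slide)
triggers_per_pb = 100e6      # placeholder: triggers collected per pb-1 in a given run period
seconds_per_day = 86400
max_pb_per_day = farm_capacity_hz * seconds_per_day / triggers_per_pb
print(f"maximum reconstructable luminosity: {max_pb_per_day:.1f} pb-1/day")   # ~2.3 with these inputs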

DSTs for the analyses
Besides the reconstructed data sample, DSTs are produced, with a selected information content that reduces the data volume for the analyses by a factor of 10. These files are created immediately after the completion of data reconstruction, to take advantage of the availability of the files on disk.
Excluding the charged kaon DSTs, which have not been created so far, the total DST volume is 6 GB/pb⁻¹. The charged kaon DSTs will have both the biggest event size and the largest event sample: 10⁶ events/pb⁻¹ × 5 kB/event, i.e. 5 GB/pb⁻¹.
Disk space on the new servers is now devoted to DSTs. To allow efficient multi-user access to the entire DST sample (6 TB by the end of data taking), we need to increase the disk space to 4 TB by the end of the year.
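As a cross-check of the charged kaon DST estimate above, a minimal sketch of the arithmetic; the final integrated luminosity used to project the total sample size is an illustrative assumption, not a number from this report:

events_per_pb = 1e6           # charged kaon DST events per pb-1 (from the slide)
event_size_kb = 5.0           # kB per event (from the slide)
gb_per_pb = events_per_pb * event_size_kb * 1e3 / 1e9
print(f"charged kaon DSTs: {gb_per_pb:.0f} GB/pb-1")        # -> 5 GB/pb-1

other_dsts_gb_per_pb = 6.0    # all other DST streams combined (from the slide)
assumed_total_lumi_pb = 550   # assumed final integrated luminosity, for illustration only
total_tb = (gb_per_pb + other_dsts_gb_per_pb) * assumed_total_lumi_pb / 1e3
print(f"projected full DST sample: ~{total_tb:.1f} TB")     # of order the 6 TB quoted on the slide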

Monte Carlo
8 Sun Enterprise 450 machines are dedicated to Monte Carlo generation and reconstruction.
Event production is driven by procedures based on the information (random seeds, input cards, job status, …) stored in the DB.
The available computing power corresponds to 2.5 … events completed per day.
The most demanding task is the study of the background topologies.
Work in progress:
– … of KS → … decays (60 pb⁻¹): ~25% available
– … of … decays (5 pb⁻¹): ~50% available
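The bookkeeping described above (random seeds, input cards, job status kept in the DB) can be pictured with a purely illustrative sketch; the record fields and names below are hypothetical and do not reflect the actual KLOE database schema:

from dataclasses import dataclass

# Hypothetical record of one MC production job; the DB holds what the slide lists:
# random seeds, input cards, and job status.
@dataclass
class McJob:
    job_id: int
    input_card: str    # generator / decay-channel configuration
    random_seed: int   # recorded so the sample can be reproduced
    status: str        # "pending" | "running" | "done" | "failed"

def next_pending(jobs):
    """Return the next job still waiting to be submitted, if any."""
    return next((j for j in jobs if j.status == "pending"), None)

jobs = [McJob(1, "ks_decays.card", 12345, "done"),
        McJob(2, "ks_decays.card", 12346, "pending")]
print(next_pending(jobs))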

Y2002 Data Reconstruction
The offline farm can process 2,700 triggers/s.
Y2002 data, May 3 to May 17:
– Trigger rate: 1.5 kHz (Level 3)
– Luminosity: 0.04 nb⁻¹ s⁻¹; … Bhabha/s; … background/s
– 40% of the farm available for users and DST production
Enough power is left to handle further improvements in the luminosity.
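A quick arithmetic check of the headroom quoted above, using only the numbers on this slide:

farm_capacity_hz = 2700    # offline farm processing power (triggers/s)
trigger_rate_hz = 1500     # Level-3 trigger rate, May 3 to May 17
used = trigger_rate_hz / farm_capacity_hz
print(f"farm fraction used by online reconstruction: {used:.0%}")   # ~56%
print(f"headroom for users and DST production: {1 - used:.0%}")     # ~44%, consistent with the ~40% quoted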

Day-by-day 2002 Data Reconstruction
[Plot, May 3 to May 17: pb⁻¹/day, pb⁻¹ per 100 M triggers, and trigger rate (kHz).]

Data Taking Conditions, May 2002
[Table, from 8 pb⁻¹ of integrated luminosity: DAΦNE luminosity (cm⁻² s⁻¹), maximum reconstructable luminosity (pb⁻¹/day), data volume (GB/pb⁻¹) split into raw, filtered, reconstructed, and Bhabha volumes (GB/pb⁻¹).]
The total capacity of the library, inclusive of the Y2001 data set and assuming the present DAΦNE running conditions, is 400 pb⁻¹, corresponding to 100 full days of data taking; if we scratch the filtered sample, it is 570 pb⁻¹, corresponding to 177 full days of data taking.
The Y2000–Y2001 data fill about 105 TB, i.e. … cassettes, corresponding to 54% of the tape library storage space.
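The day counts above are consistent with subtracting the Y2001 sample already on tape from the quoted capacities; a minimal sketch of that reading (treating ~2.2 pb⁻¹/day as the implied daily rate is my interpretation, not a figure stated in the report):

already_on_tape_pb = 180           # Y2001 sample, from the "Data Taking in Y2001" slide
capacity_keep_filtered_pb = 400    # library capacity incl. Y2001, keeping the filtered sample
capacity_drop_filtered_pb = 570    # library capacity if the filtered sample is scratched

daily_pb = (capacity_keep_filtered_pb - already_on_tape_pb) / 100   # 100 days quoted above
print(f"implied daily integrated luminosity: {daily_pb:.1f} pb-1/day")              # ~2.2
days_no_filtered = (capacity_drop_filtered_pb - already_on_tape_pb) / daily_pb
print(f"days of data taking without the filtered sample: {days_no_filtered:.0f}")   # ~177, matching the slide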

Summary
Disk space: multi-user access to the DSTs (6 TB expected by the end of the year) requires an additional 3 TB by the end of the year.
Library: it will be full by the end of the year. As soon as they become available on the market (Summer 2002), and depending on the data-taking status, we expect to upgrade the tape drives to increase the total capacity by 50%. New storage solutions are under study for the year 2003 data taking.
Computing power: less than 50% of the approved total computing power has been acquired so far. It is enough for online reconstruction, DST production, Monte Carlo generation, and the ongoing analyses. It is marginal for any reprocessing campaign, as the four full months needed to reconstruct 180 pb⁻¹ demonstrate.
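A rough figure behind the "marginal" judgement, approximating four full months as 120 days (my approximation, not a number from the report):

reprocessed_pb = 180    # Y2001 sample reprocessed during the campaign
days = 4 * 30           # four full months, approximated as 120 days
rate = reprocessed_pb / days
print(f"average reprocessing throughput: {rate:.1f} pb-1/day")   # ~1.5 pb-1/day,
# i.e. a full reprocessing pass ties up the farm for months while new data keep arriving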