Status report of the KLOE offline G. Venanzoni – LNF LNF Scientific Committee Frascati, 9 November 2004.

Reconstruction
Runs from (9 May) to (8 Nov, 6:00):
- 503 pb⁻¹ to disk with tag OK
- 484 pb⁻¹ with tag = 100 (no problems)
- 476 pb⁻¹ with full calibrations
- 455 pb⁻¹ reconstructed (96%)
Reconstruction follows the data acquisition (each reconstruction job lasts at most 2 h).
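A quick arithmetic cross-check of the quoted fractions (a minimal Python sketch; the pb⁻¹ figures are the ones on this slide, and reading the 96% as relative to the fully calibrated sample is my interpretation):

  # Quick cross-check of the reconstruction fractions quoted above (pb^-1).
  to_disk = 503.0       # written to disk with tag OK
  calibrated = 476.0    # with full calibrations
  reconstructed = 455.0
  print(f"reconstructed / calibrated: {reconstructed / calibrated:.0%}")  # ~96%
  print(f"reconstructed / to disk:    {reconstructed / to_disk:.0%}")     # ~90%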

MC production in 2004
[Table: Program | Events (10⁶) | LSF time (B80 days) | Size (TB), for e⁺e⁻ → e⁺e⁻γ (ISR only), radiative φ decays, ee → eeee, one further ee channel, φ → all, φ → all (21 pb⁻¹ scan), φ → K_S K_L, φ → K⁺K⁻, φ → K_S K_L rare, and the total; most numeric entries did not survive the transcript, the last row reading "6220*320 est. 1.7 est."]

Tape Storage (Nov 04)
Present tape library: 312 TB
Data 2001/02 (TB): RAW 101, Recon 51.2, DST 4.2, Total 156.4
Data 2004 (TB, 450 pb⁻¹): RAW 42.7, Recon 28.4, DST 4.5, Total 75.6
[Chart: tape usage split among data 2001/02 (156.4 TB), MC, and data 2004 (75.6 TB)]
Residual space: 47.5 TB, mostly in use for backup and other storage services.
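The totals in the table are simple sums, and they also fix how much of the 312 TB library is left for everything else; a minimal sketch (all figures from this slide):

  # Consistency check of the tape-storage table (volumes in TB).
  data_2001_02 = {"RAW": 101.0, "Recon": 51.2, "DST": 4.2}
  data_2004 = {"RAW": 42.7, "Recon": 28.4, "DST": 4.5}
  library = 312.0

  tot_old = round(sum(data_2001_02.values()), 1)   # 156.4, as quoted
  tot_new = round(sum(data_2004.values()), 1)      # 75.6, as quoted
  print(tot_old, tot_new)
  # Space left for MC 2004, backup and the other storage services:
  print(round(library - tot_old - tot_new, 1))     # 80.0 TB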

Present tape storage
IBM 3494 tape library:
- 12 Magstar 3590 drives, 14 MB/s read/write
- 60 GB/cartridge, 5200 cartridges (5400 slots)
- Dual active accessors
- Managed by Tivoli Storage Manager
Maximum capacity: 312 TB (5200 cartridges)

New tape storage
Additional IBM 3494 tape library:
- 6 Magstar 3592 drives: 300 GB/cartridge, 40 MB/s
- Initially 1000 cartridges (300 TB)
- Slots for 3600 cartridges (1080 TB)
- Remotely accessed via FC/SAN interface
Mechanical installation completed; now under test.
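The quoted capacities follow directly from the cartridge counts; a minimal sketch (treating 1 TB as 1000 GB, which is how the slide numbers work out):

  GB_PER_TB = 1000  # the slides use decimal TB

  # Present library: Magstar 3590, 60 GB/cartridge
  print(5200 * 60 / GB_PER_TB)    # 312.0 TB maximum capacity
  # New library: Magstar 3592, 300 GB/cartridge
  print(1000 * 300 / GB_PER_TB)   # 300.0 TB with the initial 1000 cartridges
  print(3600 * 300 / GB_PER_TB)   # 1080.0 TB if all 3600 slots are filled
  # Combined capacity quoted in the outlook
  print(312 + 300)                # 612 TB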

Specific data volume
[Plot: stored volume by type (GB/pb⁻¹), and its variation from the average, vs. average luminosity (μb⁻¹/s)]
The fight against machine background has allowed us to stay within the expected data-flow budget.
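The 2004 specific data volume can be recovered from the Nov 04 tape numbers; a rough sketch (it assumes the 450 pb⁻¹ on tape is the right normalization and ignores anything not yet written to tape):

  # Specific data volume (GB per pb^-1) for 2004 data.
  lumi_pb = 450.0
  volumes_tb = {"raw": 42.7, "recon": 28.4, "DST": 4.5}
  for kind, tb in volumes_tb.items():
      print(f"{kind:5s}: {tb * 1000 / lumi_pb:5.1f} GB/pb^-1")
  print(f"total: {sum(volumes_tb.values()) * 1000 / lumi_pb:5.1f} GB/pb^-1")  # ~168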

Estimate of tape library usage (TB)
[Plot: projected tape usage (raw, recon, DST, MC, on top of the 2001/02 data) vs. integrated luminosity, with marks at "Today", 1.5 fb⁻¹, 2 fb⁻¹, and 2.5 fb⁻¹, and the present and new library capacities indicated]
Additional tapes (~100 TB) are needed to reach 2.5 fb⁻¹.
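A back-of-envelope version of this projection, roughly consistent with the plot's message (a sketch only: the constant ~168 GB/pb⁻¹ comes from the 2004 numbers above, the 2001/02 data are held fixed at 156.4 TB, and the 25% MC-to-data volume ratio is purely an illustrative assumption):

  # Very rough projection of total tape usage vs. integrated luminosity (TB).
  SPECIFIC_GB_PER_PB = 168.0   # raw + recon + DST, 2004-like data
  DATA_2001_02_TB = 156.4      # already on tape
  MC_FRACTION = 0.25           # assumed MC/data volume ratio (illustrative only)
  CAPACITY_TB = 312 + 300      # present + new library

  for lumi_fb in (1.5, 2.0, 2.5):
      data_tb = lumi_fb * 1000 * SPECIFIC_GB_PER_PB / 1000
      total = DATA_2001_02_TB + data_tb * (1 + MC_FRACTION)
      print(f"{lumi_fb} fb^-1: ~{total:.0f} TB of a {CAPACITY_TB} TB capacity")

With these assumptions 2 fb⁻¹ still fits within the 612 TB, while 2.5 fb⁻¹ overshoots by roughly the amount quoted above.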

CPU time
The 2004 estimate assumed the same background level as in 2002; this has been confirmed by data taking.
[Plot: B80 CPUs needed to follow the acquisition, split into reco, DST, and MC; 196 CPUs installed]

CPU resources:
- 10 IBM p630 servers: 10×4 POWER CPUs (equivalent to 100 (40×2.5) B80 CPUs)
- 23 IBM B80 servers: 92 CPUs
- 10 Sun E450 servers: 18 B80-CPU equivalents
Uses:
- "Online" data reconstruction and calibration
- DST and MC production
- Reprocessing (8 dismissed on Oct 11)
CPU allocation is flexible: queues are simply redefined with LoadLeveler.
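The B80-CPU bookkeeping can be reproduced from these numbers (a sketch; treating the "8 dismissed on Oct 11" as eight of the ten Sun E450 servers is my reading, chosen because it lands close to the 196 CPUs quoted on the CPU-time slide):

  # B80-CPU equivalents of the installed servers.
  p630 = 10 * 4 * 2.5   # 10 servers x 4 POWER CPUs, each worth ~2.5 B80 CPUs
  b80 = 23 * 4          # 23 B80 servers with 4 CPUs each
  e450 = 18.0           # 10 Sun E450 servers ~ 18 B80-CPU equivalents

  print(p630 + b80 + e450)              # 210 with everything in service
  print(p630 + b80 + e450 * 2 / 10)     # ~196 if only 2 of the 10 E450s remain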

CPU required for offline tasks
[Table (other tasks): reprocessing of the 2001/02 data and MC 04 (1 fb⁻¹), with rows for ms/event, number of events (10⁹), time (B80 days/CPU), and calendar days assuming 80 CPUs; the numeric entries did not survive the transcript]
About 9 months are needed for the reprocessing plus MC production of 1 fb⁻¹ with 80 CPUs.
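Since the table's numeric entries did not survive, here is just the arithmetic that links its rows, with a hypothetical workload (only the 80-CPU assumption comes from the slide; the 50 ms/event and 3×10⁹ events are made-up example values):

  # Calendar time = (CPU time per event x number of events) / available CPUs.
  def calendar_days(ms_per_event, n_events_e9, n_cpus=80):
      b80_cpu_days = ms_per_event * n_events_e9 * 1e9 / 1000.0 / 86400.0
      return b80_cpu_days / n_cpus

  # Hypothetical task: 50 ms/event over 3e9 events on 80 B80 CPUs
  print(f"{calendar_days(50, 3):.0f} calendar days")   # ~22 days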

Disk resources
Current recalled areas:
- Production: 0.7 TB
- User recalls: 2.1 TB
- DST cache: 12.9 TB (10.2 TB added in April)
2001–2002: total DSTs 7.4 TB, total MC DSTs 7.0 TB
2004: DST volume scales with L
3.2 TB added to the AFS cell
2.0 TB available but not yet installed, reserved for testing new network-accessible storage solutions

Conclusions & Outlook
- Data reconstruction: we keep up with data taking (70 B80 CPUs).
- MC for 2004 data: minor modifications: add the new interaction region in GEANFI, complete the machinery for inserting the machine-background events produced with the random trigger, and refine the detector response. MC production of additional φ decays (1 fb⁻¹).
- Data reprocessing: we want to reprocess the 2001/02 data into a more compact sample, with refined selection/reconstruction routines.

- Tape storage: an additional 300 TB with the new library (612 TB in total), sufficient to store up to 2 fb⁻¹ of collected data & MC.