The KLOE computing environment
Nuclear Science Symposium, Portland, Oregon, USA, 20 October 2003
M. Moulson – INFN/Frascati, for the KLOE Collaboration


The KLOE experiment at DAΦNE

Physics objectives:
- CP/CPT symmetries in the K_S K_L system
- K±, radiative φ decays
- σ(e+e− → hadrons)
- All φ events are interesting

DAΦNE design luminosity: 5×10³² cm⁻²s⁻¹, e+e− → φ
KLOE detector: EM calorimeter and drift chamber
[Figure: detector layout (3.3 m, 4 m dimensions indicated) and plot of integrated luminosity ∫L dt (pb⁻¹) vs. days of running]

KLOE data flow

[Diagram: DAQ → raw → reconstruction → datarec → data reduction → dst → users;
Monte Carlo → mco → reconstruction → mcr → data reduction → dst;
calibration feeds the reconstruction.
Per-event sizes quoted for the various formats: 2.5, 3, 4, 15, 20, 30 KB/evt]
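As a quick orientation to the numbers in the diagram, the sketch below converts a per-event size into a total volume; the assignment of the quoted sizes to specific formats in the code is only an assumption for illustration, since the diagram lists the numbers but not an unambiguous mapping.

```python
# Minimal sketch (not KLOE code): convert a per-event size into a total
# volume.  The size-to-format mapping below is an assumption for
# illustration only.

def volume_tb(n_events: float, kb_per_event: float) -> float:
    """Total volume in TB for n_events of kb_per_event KB each (decimal units)."""
    return n_events * kb_per_event * 1e3 / 1e12

assumed_sizes_kb = {"raw": 2.5, "datarec": 15.0, "mco": 20.0, "mcr": 30.0, "dst": 4.0}

n_events = 1e9  # a round, illustrative number of events
for fmt, size in assumed_sizes_kb.items():
    print(f"{fmt:8s} {size:5.1f} KB/evt -> {volume_tb(n_events, size):5.1f} TB per 1e9 events")
```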

Data handling

[Diagram: data-handling architecture]
- Online (DAQ): 1.4 TB disk; data archived to the tape library via NFS
- Tape library: 324 TB
- Offline production: 0.8 TB disk for I/O staging; data access via KID/NFS; output archived to tape via NFS
- Recall: 5.7 TB disk cache for data recalled from tape, accessed via KID
- User analysis: 2.3 TB AFS cell

Data handling

File database: IBM DB2
- Tracks location and status of several million files

Data-handling interface: KLOE Integrated Dataflow (KID)
- Access to data regardless of file location via URLs
- KID URLs can incorporate an SQL query for the file database:
    dbraw:run_nr=23015
    dbdatarec:run_nr=23015 and stream_code='ksl'
    dbraw:run_nr between … and … and analyzed is not null
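The sketch below only illustrates how such URLs are composed as strings; the `kid_url` helper is hypothetical and is not part of the real KID interface.

```python
# Hypothetical helper (not the real KID API): build KID-style URLs of the
# form shown above, i.e. a dataset prefix followed by an SQL condition on
# the file database.

def kid_url(dataset: str, *conditions: str) -> str:
    """Compose a KID URL such as dbraw:run_nr=23015."""
    return f"{dataset}:{' and '.join(conditions)}"

# Reproduce the first two examples quoted on the slide:
print(kid_url("dbraw", "run_nr=23015"))
print(kid_url("dbdatarec", "run_nr=23015", "stream_code='ksl'"))
```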

Online computing

DAQ designed for 50 MB/s from detector to tape
[Diagram: FDDI Gigaswitch, event builder, T3 recorder with 270 GB buffer; spy, monitoring, calibration and archiver processes; CR (cosmic-ray), Bhabha, etc. streams]

Online farm: 7 IBM H50 SMP servers, 4×332 MHz PPC 604e each
- Configuration: 3 DAQ, 1 calibration, 1 monitoring, 2 reserved
- Maximum rate: 2.4 kHz per DAQ server in the current configuration
- 2002 averages: L = 4×10³¹ cm⁻²s⁻¹, T2 rate 3.5 kHz over 3 DAQ servers
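A quick check of the headroom implied by these numbers, using only the figures quoted above:

```python
# Headroom check: each DAQ server handles at most ~2.4 kHz, and the 2002
# average T2 rate of 3.5 kHz was shared among 3 DAQ servers.

max_rate_per_server_khz = 2.4
t2_rate_khz = 3.5
n_daq_servers = 3

load_per_server_khz = t2_rate_khz / n_daq_servers         # ~1.17 kHz/server
headroom = max_rate_per_server_khz / load_per_server_khz  # ~2.1x

print(f"{load_per_server_khz:.2f} kHz/server, headroom factor {headroom:.1f}")
```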

Offline computing

Hardware environment:
- 16 to 19 IBM B80 servers: 4×375 MHz POWER3
- 8 Sun E450 servers: 4×400 MHz UltraSPARC-II
- 800 GB NFS-mounted disk for I/O staging

Software environment:
- ANALYSIS_CONTROL with KID interface

Offline computing

Reconstruction and streaming:
[Diagram: raw → MB/CR (machine background / cosmic ray) filter → EmC reconstruction → DC reconstruction → event classification → stream-specific algorithms, event selection and compression; streams include K_S K_L, K+K−, radiative, Bhabha. Rejection fractions of 40% and 25%, per-event times of 4 ms and 40 ms, and output volumes of 180 GB/pb⁻¹ and 100 GB/pb⁻¹ are quoted.]

DST production, examples:
[K_S K_L and K+K− DSTs; per-event times of 12 ms and 115 ms and output volumes of 3, 4.5, 12 and 20 GB/pb⁻¹ are quoted.]
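To relate per-event processing times to farm size (see also the CPU-requirements slide below), a simple rule of thumb is that the number of CPUs needed to keep up with data taking is the event rate times the CPU time per event. The numbers in this sketch are illustrative assumptions, loosely based on the times quoted above.

```python
# Rule of thumb: CPUs needed to follow acquisition = event rate x CPU time
# per event (assuming perfect parallelism).  The example numbers are
# illustrative assumptions, not figures taken verbatim from the slide.

def cpus_to_follow(rate_khz: float, ms_per_event: float) -> float:
    """CPUs needed so that CPU time consumed matches wall-clock time."""
    return (rate_khz * 1e3) * (ms_per_event * 1e-3)

# Example: ~2 kHz of events to reconstruct at ~40 ms/event
print(cpus_to_follow(rate_khz=2.0, ms_per_event=40.0))   # -> 80.0
```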

Monte Carlo production

Simulated event samples statistically comparable to data:
- φ → all: 450 pb⁻¹ at 1:5 scale, 260M events
- φ → K_S K_L: 450 pb⁻¹ at 1:1 scale, 430M events

GEANT 3.21-based simulation
Each run in the data set individually simulated:
- √s, p_φ, x_φ, machine background, dead wires, trigger thresholds, …

Inclusion of accidental activity from machine background:
- Extracted from e+e− → γγ events in the data set
- Inserted run-by-run to match the temporal profile of the data

MC DSTs provide a convenient user interface
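A minimal sketch of the run-by-run scaling logic described above: for each run, the number of MC events to generate is proportional to that run's integrated luminosity at the chosen data:MC scale. The cross-section value, helper name and run list below are placeholders for illustration, not numbers or code from this talk.

```python
# Illustrative sketch of run-by-run MC scaling (not KLOE production code).
# PHI_XSEC_NB is an assumed visible phi production cross section, used only
# to make the example concrete.

PHI_XSEC_NB = 3.1e3  # ~3.1 microbarn, in nb (assumed, for illustration)

def mc_events_for_run(lumi_nb_inv: float, scale: float) -> int:
    """Events to simulate for one run: expected data events times the scale."""
    return int(lumi_nb_inv * PHI_XSEC_NB * scale)

# Hypothetical runs: (run number, integrated luminosity in nb^-1)
runs = [(23015, 180.0), (23016, 210.0)]
for run_nr, lumi in runs:
    print(run_nr, mc_events_for_run(lumi, scale=0.2))  # 1:5 scale -> 0.2
```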

Analysis environment

Production of histograms/Ntuples on analysis farm:
- 4 to 7 IBM B80 servers + 2 Sun E450 servers
- DSTs latent on 5.7 TB recall disk cache, accessed by KID
- Batch processes managed by IBM LoadLeveler
- Output to 2.3 TB AFS cell accessed by user PCs
Final-stage analysis on user PC/Linux systems

Analysis example: 700M K_S K_L events, 1.4 TB of DSTs
- 6 days elapsed for 6 simultaneous batch processes
- Output on the order of GB
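The throughput implied by this example, from straightforward arithmetic on the quoted numbers:

```python
# Throughput implied by the analysis example: 700M K_S K_L events (1.4 TB
# of DSTs) processed in 6 elapsed days by 6 simultaneous batch jobs.

n_events = 700e6
dst_tb = 1.4
elapsed_days = 6
n_jobs = 6

seconds = elapsed_days * 86400
evt_per_s_total = n_events / seconds          # ~1350 evt/s overall
evt_per_s_job = evt_per_s_total / n_jobs      # ~225 evt/s per job
mb_per_s = dst_tb * 1e6 / seconds             # ~2.7 MB/s average DST read rate

print(f"{evt_per_s_total:.0f} evt/s total, {evt_per_s_job:.0f} evt/s per job, "
      f"{mb_per_s:.1f} MB/s DST read rate")
```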

CPU power requirements

[Plot: number of B80 CPUs needed to follow acquisition (reconstruction, DST and MC production) vs. average luminosity (10³⁰ cm⁻²s⁻¹) and input rate (kHz), compared with the 76-CPU offline farm]

Mass storage requirements

Installed hardware:
- IBM 3494 tape library: 12 Magstar 3590 drives (14 MB/s), 60 GB/cartridge, 5400 slots, 2 accessors
- Tivoli Storage Manager
- 5.7 TB recall disk cache (DST)
- Maximum capacity: 324 TB; in use: 185 TB

[Chart: tape usage broken down into raw, recon, MC, DST and free space; total 205 GB/pb⁻¹]

Predicted needs for 2004, 2 fb⁻¹ at L = 1×10³² cm⁻²s⁻¹:
- … TB tape storage
- 16 TB DST disk cache (50%)
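Scaling the quoted 205 GB/pb⁻¹ to the 2 fb⁻¹ foreseen for 2004 gives a rough tape estimate; this is a derived cross-check, not a figure quoted above.

```python
# Rough tape estimate for 2004, derived from the 205 GB/pb^-1 total quoted
# on the slide and the foreseen 2 fb^-1 of integrated luminosity.

gb_per_pb_inv = 205.0   # GB per pb^-1 (raw + recon + MC + DST), from the slide
lumi_pb_inv = 2000.0    # 2 fb^-1

tape_tb = gb_per_pb_inv * lumi_pb_inv / 1000.0
print(f"~{tape_tb:.0f} TB of tape for {lumi_pb_inv:.0f} pb^-1")   # ~410 TB
```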

Upgrades for 2004

Additional servers for offline farm: ~80 B80 equivalents
- 10 IBM p630 servers: 4×1.45 GHz POWER4+

Additional disk space: ~20 TB for DST cache and AFS cell

Additional IBM 3494 tape library: 300 TB
- Magstar 3592 drives: 300 GB/cartridge, 40 MB/s
- Initially 1000 cartridges, with space for 3600 (1080 TB)
- 2 accessors, 6 drives, remotely accessed via FC/SAN interface

The KLOE computing environment is flexible and scalable
Will be capable of handling 10 pb⁻¹/day in 2004

Additional information

KLOE computing resources

tape library:    IBM 3494, 324 TB: 60 GB cartridges, 5400 slots, 2 robots, TSM;
                 12 Magstar E1A drives, 14 MB/s each
managed disk:    6.5 TB total: 0.8 TB SSA (offline staging); 2.2 TB SSA + … TB FC (latent disk cache)
offline farm:    19 IBM B80 (4×POWER3 375) + 8 Sun E450 (4×UltraSPARC-II 400)
analysis farm:   4 IBM B80 (4×POWER3 375) + 2 Sun E450 (4×UltraSPARC-II 400)
online farm:     7 IBM H50 (4×PPC 604e 332), … TB SSA disk
AFS cell:        2 IBM H70 (4×RS64-III), … TB SSA + … TB FC disk
file servers:    2 IBM H80 (6×RS64-III 500)
DB2 server:      IBM F50 (4×PPC 604e 166)
network:         Cisco Catalyst 6000; NFS and AFS served over 100 Mbps and 1 Gbps links