1 The Computing System for the Belle Experiment
Ichiro Adachi (KEK), representing the Belle DST/MC production group
CHEP03, La Jolla, California, USA, March 24, 2003
Outline: Introduction: Belle – Belle software tools – Belle computing system & PC farm – DST/MC production – Summary

2 Introduction
Belle experiment
– B-factory experiment at KEK
– studies CP violation in the B meson system; running since 1999
– ~120M B meson pairs (120 fb⁻¹) recorded so far
– the KEKB accelerator is still improving its performance
The largest B meson data sample at the Υ(4S) region in the world.

3 Belle detector — example of event reconstruction (fully reconstructed event display)

4 Belle software tools
Home-made kits
– "B.A.S.F." (Belle AnalySis Framework): the single framework for every step of event processing; event-by-event parallel processing on SMP machines; modules are loaded dynamically as shared objects
– "Panther" for the I/O package: the single data format from DAQ to user analysis; a bank system with zlib compression
– reconstruction & simulation library written in C++
Other utilities
– CERNLIB/CLHEP, ...
– Postgres for the database
Event flow: input with Panther → unpacking → calibration → tracking → vertexing → clustering → particle ID → diagnosis → output with Panther, each step running as a B.A.S.F. module
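The slide itself shows no code; the following is only a minimal sketch of what a dynamically loaded B.A.S.F.-style module might look like, to illustrate the framework/module split described above. The class and method names (Module, event(), PantherBank) are hypothetical stand-ins, not the actual Belle interfaces.

```cpp
// Hypothetical sketch of a B.A.S.F.-style module: the framework would load
// this as a shared object and call event() once per event. Names below
// (Module, PantherBank, HadronicSkim) are illustrative, not the real API.
#include <cstdio>
#include <vector>

struct PantherBank {                 // stand-in for a Panther bank (table of rows)
  std::vector<double> track_pt;      // e.g. transverse momenta of fitted tracks
};

class Module {                       // framework base class (assumed interface)
public:
  virtual ~Module() {}
  virtual void begin_run() {}
  virtual void event(const PantherBank& tracks) = 0;
  virtual void end_run() {}
};

class HadronicSkim : public Module { // selects hadronic-like events
  long n_seen = 0, n_kept = 0;
public:
  void event(const PantherBank& tracks) override {
    ++n_seen;
    if (tracks.track_pt.size() >= 3)  // toy selection: at least 3 charged tracks
      ++n_kept;
  }
  void end_run() override {
    std::printf("kept %ld of %ld events\n", n_kept, n_seen);
  }
};

// In the real framework a factory symbol in the shared object would hand the
// module to B.A.S.F.; here we just exercise it directly on a few fake events.
int main() {
  HadronicSkim skim;
  skim.begin_run();
  for (int i = 0; i < 5; ++i) {
    PantherBank b;
    b.track_pt.assign(i, 0.5);       // i fake tracks
    skim.event(b);
  }
  skim.end_run();
}
```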

5 Belle computing system (layout diagram)
– Computing network for batch jobs and DST/MC production: Sun computing servers (500 MHz × 4 CPU, 38 hosts), Compaq PC farms, an online tape server and a 500 TB tape library, linked by GbE switches
– User analysis & storage system: work group servers (500 MHz × 4, 9 hosts), ~100 user PCs (1 GHz), an 8 TB file server, and a Fujitsu HSM server with a 120 TB HSM library and 4 TB of disk
– University resources at Tokyo, Nagoya and Tohoku, connected over Super-SINET at 1 Gbps

6 Computing requirements
– Reprocess the entire beam data sample within 3 months
– MC sample size is at least 3 times larger than the real data
– Once reconstruction code is updated or constants are improved, fast turn-around is essential to perform physics analyses in a timely manner
– Analyses are maturing, and understanding systematic effects in detail needs a correspondingly large MC sample
→ added more PC farms and disks (a rough rate estimate follows)
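As a sanity check on these requirements, the short sketch below turns them into a daily rate, combining the ~120 fb⁻¹ recorded so far (slide 2) with an assumed 90-day window for "3 months"; the derived numbers are not quoted on the slide.

```cpp
// Back-of-envelope check of the reprocessing requirement (not from the slide):
// reprocess the full ~120 fb^-1 sample (slide 2) in ~3 months, plus an MC
// sample at least 3x the data. The 90-day figure is an assumption.
#include <cstdio>

int main() {
  const double data_fb   = 120.0;   // integrated luminosity recorded so far
  const double days      = 90.0;    // "3 months" taken as ~90 days
  const double mc_factor = 3.0;     // MC >= 3x real data

  double data_per_day = data_fb / days;              // required DST rate
  double mc_per_day   = data_per_day * mc_factor;    // matching MC rate
  std::printf("reprocessing: %.2f fb^-1/day\n", data_per_day);   // ~1.3
  std::printf("MC to match : %.2f fb^-1/day\n", mc_per_day);     // ~4.0
  return 0;
}
```

These figures are in the same range as the ~1 to 2.5 fb⁻¹ per day quoted later for production (slides 11 and 13).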

7 PC farm upgrade
– CPU power boosted for DST & MC productions: total CPU is now ~1500 GHz, 3 times more than two years ago
– the latest batch was delivered in Dec. 2002; one more will come soon
– 60 TB (total) of disk has also been purchased for storage
– bookkeeping: total CPU = CPU processor speed (GHz) × number of CPUs per node × number of nodes (see the sketch below)
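The sketch below simply evaluates this bookkeeping formula for the one farm segment whose parameters are fully spelled out in the talk (the Compaq Xeon nodes, slides 8 and 20); per-node CPU counts for the other vendors are not given here.

```cpp
// Illustration of the "total CPU" bookkeeping used on this slide:
// total GHz = clock speed x CPUs per node x number of nodes.
// The Compaq numbers (60 quad-CPU 0.7 GHz Xeon nodes) come from slides 8/20.
#include <cstdio>

double total_ghz(double clock_ghz, int cpus_per_node, int nodes) {
  return clock_ghz * cpus_per_node * nodes;
}

int main() {
  double compaq = total_ghz(0.7, 4, 60);   // = 168 GHz, as quoted on slide 8
  std::printf("Compaq farm segment: %.0f GHz\n", compaq);
  return 0;
}
```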

8 Belle PC farm CPUs
– a heterogeneous system from various vendors, with Intel Xeon / Pentium III / Pentium 4 / Athlon processors
– Compaq: 60 PCs (Intel Xeon 0.7 GHz), 168 GHz
– Fujitsu: 127 PCs (Pentium III 1.26 GHz), 320 GHz
– Appro: 113 PCs (Athlon 2000+), 380 GHz (setting up done)
– Dell: 36 PCs (Pentium III ~0.5 GHz)
– NEC: 84 PCs (Pentium 4 2.8 GHz), 470 GHz (will come soon)

9 DST production & skimming scheme
1. Production (reproduction): raw data is read from the Sun tape server and processed on the PC farm; DST data, histograms and log files are written to disk
2. Skimming: the DST data is skimmed on the Sun servers into samples such as the hadronic data sample, which are written to disk or HSM together with histograms and log files and handed to user analysis
A minimal sketch of the skimming step follows.
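This is a sketch of the skimming idea only, assuming a single loop over DST events with one predicate per skim stream; the stream names and cuts are invented for illustration and are not the real Belle skim definitions.

```cpp
// Minimal sketch of skimming: one pass over the DST, each event is tested
// against several skim definitions and counted for every skim it satisfies.
// A real job would write a mini-DST record to the matching output stream.
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct Event { int n_tracks; double visible_energy; bool has_jpsi; };

struct Skim {
  std::string name;
  std::function<bool(const Event&)> select;
  long n_written;
};

int main() {
  std::vector<Skim> skims = {
    {"hadronic",       [](const Event& e){ return e.n_tracks >= 3 && e.visible_energy > 1.0; }, 0},
    {"jpsi_inclusive", [](const Event& e){ return e.has_jpsi; }, 0},
  };
  std::vector<Event> dst = { {5, 4.2, false}, {2, 0.4, false}, {6, 5.1, true} };

  for (const Event& e : dst)
    for (Skim& s : skims)
      if (s.select(e)) ++s.n_written;   // stand-in for writing the skim output

  for (const Skim& s : skims)
    std::printf("%-14s %ld events\n", s.name.c_str(), s.n_written);
}
```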

10 Output skims
Physics skims from reprocessing
– "mini-DST" (4-vector) format
– the hadronic sample is created as well as typical physics channels (up to ~20 skims, e.g. hadronic mini-DST, J/ψ inclusive, b→s, D*, full-reconstruction mini-DST), so many users do not have to go through the whole hadronic sample
– reprocessing output is written directly onto disk at Nagoya (~350 km from KEK) using NFS, thanks to the 1 Gbps Super-SINET link

11 Processing power & failure rate
Processing power
– ~1 fb⁻¹ processed per day with 180 GHz; 40 PC hosts (0.7 GHz × 4 CPU) are allocated for daily production to keep up with DAQ
– up to 2.5 fb⁻¹ per day is possible
– processing speed per event (MC case, one 1 GHz CPU): reconstruction 3.4 s, Geant simulation 2.3 s (a throughput estimate is sketched below)
Failure rate per B meson pair:
– module crash: < 0.01%
– tape I/O error: 1%
– process communication error: 3%
– network trouble / system error: negligible
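To see how the per-event times translate into daily throughput, the sketch below scales them by the 180 GHz allocation; the conversion to fb⁻¹ assumes ~1M B-pair events per fb⁻¹ (slide 2) and ignores non-BB̄ triggers, so it is only indicative and not a number from the talk.

```cpp
// Rough throughput check from the quoted per-event times: at 1 GHz one CPU
// spends 3.4 s on reconstruction and 2.3 s on Geant simulation per event.
// Scaling by the aggregate 180 GHz gives an upper bound on events/day.
#include <cstdio>

int main() {
  const double sec_per_day   = 86400.0;
  const double aggregate_ghz = 180.0;
  const double t_reco_1ghz   = 3.4;               // s/event, reconstruction
  const double t_sim_1ghz    = 2.3;               // s/event, Geant simulation

  double mc_events  = aggregate_ghz * sec_per_day / (t_reco_1ghz + t_sim_1ghz);
  double dst_events = aggregate_ghz * sec_per_day / t_reco_1ghz;
  std::printf("MC  (sim+reco) : %.1fM events/day (~%.1f fb^-1/day at 1M/fb^-1)\n",
              mc_events / 1e6, mc_events / 1e6);
  std::printf("DST (reco only): %.1fM events/day\n", dst_events / 1e6);
}
```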

12 Reprocessing in 2001 & 2002
– reprocessing follows the major library & constants update in April; sometimes we have to wait for constants
– the final bit of beam data taken before the summer shutdown has always been reprocessed in time
– for summer 2001: 30 fb⁻¹ in 2.5 months; for summer 2002: 78 fb⁻¹ in 3 months

13 MC production
– produce ~2.5 fb⁻¹ per day with 400 GHz of Pentium III; resources at remote sites are also used
– size is 15~20 GB per 1M events (4-vector mini-DST only)
– production is run dependent: for each beam-data run (run # xxx), a minimal set of generic MC (B0, B+B-, charm and light-quark samples) is generated with that run's IP profile and run-dependent background
A rough storage estimate follows.
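The sketch below turns the 15~20 GB per 1M events figure into campaign-level storage, using the ~1100M-event total from the next slide and the 44% remote share from slide 15; these derived totals are estimates, not numbers quoted in the talk.

```cpp
// Rough storage estimate from the quoted 15-20 GB per 1M mini-DST events:
// what the full generic-MC campaign of ~1100M events (slide 14) occupies,
// and the share produced at remote sites (~44%, slide 15) that has to be
// shipped back to KEK over the network.
#include <cstdio>

int main() {
  const double gb_per_million_lo = 15.0, gb_per_million_hi = 20.0;
  const double total_million_events = 1100.0;   // campaign size, slide 14
  const double remote_fraction      = 0.44;     // produced off-site, slide 15

  double total_lo = gb_per_million_lo * total_million_events / 1024.0;  // TB
  double total_hi = gb_per_million_hi * total_million_events / 1024.0;
  std::printf("campaign size: %.0f-%.0f TB\n", total_lo, total_hi);     // ~16-21 TB
  std::printf("remote share : %.0f-%.0f TB\n",
              total_lo * remote_fraction, total_hi * remote_fraction);  // ~7-9 TB
}
```

The remote share comes out close to the 6~8 TB transferred in 6 months that slide 15 reports.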

14 MC production in 2002
– generic MC samples are produced continuously; the PC farm is shared with DST production, and switching from DST to MC production can be made easily
– reached 1100M events in March 2003: samples 3 times larger than the 78 fb⁻¹ data set are completed
(the production-rate plot marks library changes as "major update" / "minor change")

15 MC production at remote sites
– the total CPU resource at remote sites (~300 GHz available) is similar to that at KEK
– 44% of the MC samples has been produced at remote sites
– all data is transferred to KEK via network: 6~8 TB in 6 months

16 Future prospects
Short term
– software: standardize utilities
– purchase more CPUs and/or disks, if the budget permits
– efficient use of resources at remote sites: move from centralized at KEK to distributed Belle-wide
– Grid computing technology: survey & application just started, for data file management and CPU usage
SuperKEKB project
– aims at 10³⁵ (or more) cm⁻²s⁻¹ luminosity from 2006
– physics rate of ~100 Hz for B-meson pairs; ~1 PB/year expected
– a new computing system like those of the LHC experiments can be a candidate

17 Summary
– The Belle computing system has been working fine: more than 250 fb⁻¹ of real beam data has been successfully (re)processed.
– MC samples 3 times larger than the beam data have been produced so far.
– More CPU will be added in the near future for quick turn-around as we accumulate more data.
– Grid computing technology would be a good friend of ours; we have started considering its application in our system.
– For SuperKEKB we need far more resources, which may have a rather big impact on our system.

18 Backup

19 dbasf data flow
– Sun machines serve as tape servers for I/O, running input/output daemons (inputd/outputd) and stdout/histogram daemons; I/O speed of 25 MB/s
– Linux cluster: RedHat 6/7, 15 PCs with 4 Intel Xeon CPUs of 0.7 GHz each; a master PC communicates with the basf processes on the cluster and with the tape server via network shared memory (NSM)
– 200 pb⁻¹ per day for one cluster; processing is limited by CPU: it is possible to add ~30 PCs, but optimization is needed
A rough consistency check of the CPU-vs-I/O balance follows.
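The sketch below checks the "limited by CPU" statement by comparing the event rate the cluster CPUs can sustain against what the 25 MB/s tape input could deliver; it combines numbers from slides 11, 19 and 21, and the comparison itself is not part of the original material.

```cpp
// Rough check of the "processing limited by CPU" statement.
#include <cstdio>

int main() {
  const double sec_per_day = 86400.0;
  // CPU side: 15 PCs x 4 CPUs x 0.7 GHz, reconstruction 3.4 s/event at 1 GHz
  double cluster_ghz = 15 * 4 * 0.7;
  double cpu_events  = cluster_ghz * sec_per_day / 3.4;
  // Tape side: 25 MB/s input at ~35 KB/event of raw data (slide 21)
  double io_events   = 25.0e6 * sec_per_day / 35.0e3;

  std::printf("CPU-limited : %.1fM events/day\n", cpu_events / 1e6);  // ~1.1M
  std::printf("I/O-limited : %.0fM events/day\n", io_events / 1e6);   // ~60M
  // The CPU figure is far below the tape-I/O capacity, so adding more PCs
  // (the slide mentions ~30) raises throughput until I/O finally dominates.
}
```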

20 CPU & disk servers
Sun CPU
– 9 servers (0.5 GHz × 4 CPU) and 38 computing servers (ibid.), operated under the LSF batch system; tape drives (2 each for 20 hosts)
Linux CPU
– 60 computing servers (Intel Xeon, 0.7 GHz × 4 CPU): the central CPU engines for DST/MC productions
Disk servers & storage
– tape library: DTF2 tapes (200 GB), 24 MB/s I/O, 500 TB total, 40 tape drives
– 8 TB NFS file servers
– 120 TB HSM servers with a 4 TB staging disk

21 Data size
– raw data: 35 KB/event
– DST data: 58 KB/event
– mini-DST data: 12 KB/event
Total raw data is ~120 TB for 120 fb⁻¹; 1000 DTF2 (200 GB) tapes for the hadronic event sample.
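Two small derived quantities implied by this table (not quoted on the slide): the number of raw events that 120 TB corresponds to, and how much smaller a mini-DST event is than a full DST event.

```cpp
// Quick arithmetic implied by the data-size table.
#include <cstdio>

int main() {
  const double raw_total_tb   = 120.0;
  const double raw_kb_per_evt = 35.0, dst_kb = 58.0, mini_kb = 12.0;

  double n_events = raw_total_tb * 1e12 / (raw_kb_per_evt * 1e3);   // ~3.4e9
  std::printf("raw events on tape: ~%.1f billion\n", n_events / 1e9);
  std::printf("mini-DST vs DST   : %.0f%% of the size (~%.1fx smaller)\n",
              100.0 * mini_kb / dst_kb, dst_kb / mini_kb);
}
```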


Download ppt "The Computing System for the Belle Experiment Ichiro Adachi KEK representing the Belle DST/MC production group CHEP03, La Jolla, California, USA March."

Similar presentations


Ads by Google