CBM Computing Model: First Thoughts
CBM Collaboration Meeting, Trogir, 9 October 2009
Volker Friese
Key figures

Max. event rate (mbias Au+Au)    : 10^7 /s
Estimated raw data size per event: 100 kB
Raw data rate to FLES            : 1 TB/s (FLES on-site doable)
Archival rate                    : 1 GB/s
Run scenario                     : 3 months/year at 2/3 duty cycle (why? ALICE: 8 months, PANDA: 6 months)
Raw data volume per run year     : 5 PB
ESD data size per event          : 100 kB (to be evaluated; ESD is much smaller than RAW for ALICE)
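These figures are mutually consistent; the following back-of-envelope check (a sketch in Python, with the input values taken from the slide and a 30-day month assumed) reproduces the 1 TB/s and 5 PB numbers:

```python
# Back-of-envelope check of the key figures above. The input values are
# taken from the slide; the 30-day month and the arithmetic are assumptions.

event_rate      = 1e7        # min-bias Au+Au events per second
raw_event_size  = 100e3      # bytes per event (100 kB)
archival_rate   = 1e9        # bytes per second written to the archive (1 GB/s)
months_per_year = 3          # run scenario
duty_cycle      = 2.0 / 3.0

raw_rate_to_fles  = event_rate * raw_event_size                # -> 1e12 B/s = 1 TB/s
beam_seconds      = months_per_year * 30 * 24 * 3600 * duty_cycle
archived_per_year = archival_rate * beam_seconds               # volume actually stored

print(f"raw data rate to FLES : {raw_rate_to_fles / 1e12:.1f} TB/s")
print(f"effective beam time   : {beam_seconds / 86400:.0f} days/year")
print(f"archived volume/year  : {archived_per_year / 1e15:.1f} PB")
```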
On-line farm (FLES)

Data reduction by a factor of 1,000 required
Complicated trigger patterns: require (partial) event reconstruction
Estimated size of on-line farm: 60,000 cores
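To make the two numbers above concrete, here is a short sketch of what they imply per core; the reduction factor and the core count come from the slide, while the per-event time budget is an inference, not a stated requirement:

```python
# What the slide's two numbers imply per core. The reduction factor and the
# 60,000-core estimate are from the slide; the per-event budget is inferred.

input_rate    = 1e12      # B/s arriving at the FLES
archival_rate = 1e9       # B/s that can be archived
event_rate    = 1e7       # events per second
fles_cores    = 60_000

reduction_factor = input_rate / archival_rate   # required factor ~1,000
events_per_core  = event_rate / fles_cores      # events each core must digest per second
budget_ms        = 1e3 / events_per_core        # on-line time budget per event and core

print(f"required data reduction: factor {reduction_factor:.0f}")
print(f"events per core per sec: {events_per_core:.0f}")
print(f"time budget per event  : {budget_ms:.1f} ms on one core")
```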
Off-line reconstruction

"Conventional" scenario: off-line reconstruction RAW -> ESD
Today's performance for event reconstruction: ≈ 10 s per event (to be reduced significantly)
Typically several reconstruction runs per data run
Target: 100 days per reconstruction run
Requires 60,000 cores
Could be executed between runs on the on-line farm
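The 60,000-core figure follows directly from the inputs listed above; a minimal sketch of that arithmetic (the inputs are from the slides, the calculation itself is only illustrative):

```python
# How the 60,000-core figure follows from the inputs on this slide.
# The inputs are from the slides; the arithmetic is an illustration.

raw_volume_per_year = 5e15    # bytes (5 PB of raw data per run year)
raw_event_size      = 100e3   # bytes per event
reco_time_per_event = 10.0    # seconds per event on one core (today's performance)
target_days         = 100     # allowed duration of one reconstruction run

events_per_year = raw_volume_per_year / raw_event_size    # ~5e10 events
cpu_seconds     = events_per_year * reco_time_per_event   # total CPU work
cores_needed    = cpu_seconds / (target_days * 24 * 3600)

print(f"events per run year: {events_per_year:.1e}")
print(f"cores needed       : {cores_needed:,.0f}")   # ~58,000, i.e. roughly 60,000
```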
Reconstruction: Offline = Online?

Partial reconstruction (STS, fast primaries) required on-line
Performance for global tracking (MUCH) drastically improved (A. Lebedev); to be expected also for RICH+TRD
Bottleneck: hit finder
Complete on-line reconstruction not unthinkable
Storage of ESD dispensable?
Trade-off: speed vs. accuracy / efficiency
Calibration?
CBM computing resource estimates for AR report

                    CBM        PANDA
Cores               60,000     66,000
On-line storage     15 PB      12 PB
Tape archive        11 PB/a    12 PB/a

On-line storage: raw data of 2 years + ESD + analysis
Tape archive: 2 copies of raw data + ε

Target date: 2016
Ramping: annual increase by 100 %, starting 2010
Gives CBM resources for 2010:
940 cores
0.2 PB on-line storage
0.2 PB tape archive
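The 2010 figures follow from the ramping rule; a small sketch of the doubling schedule, using the 2016 targets from the table above (the loop is illustrative, not an official planning tool):

```python
# Sketch of the ramping rule: annual doubling from 2010 up to the 2016 targets.
# The 2016 targets are from the table above; the loop is only illustrative.

targets_2016 = {
    "cores": 60_000,
    "on-line storage (PB)": 15,
    "tape archive (PB/a)": 11,
}
doublings = 2016 - 2010   # six annual 100 % increases between 2010 and 2016

for name, target in targets_2016.items():
    start_2010 = target / 2 ** doublings
    print(f"2010 {name:20s}: {start_2010:,.1f}")   # ~940 cores, ~0.2 PB each
```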
Mass disc storage at GSI

Lustre filesystem in production since 2008:
– distributed storage
– high bandwidth
– fully scalable
– dynamically upgradable
– connected to long-term storage (tape archive)

Current installation at GSI:
– 100 servers, 1,600 disks
– 1.1 PB capacity (1.5 PB by end of 2009)
– 120 Gb/s throughput (200 Gb/s by end of 2009)
– serves 3,000 cores (4,000 by end of 2009)

Largest non-military Lustre installation worldwide
FAIR HPC System (vision)

Connect the computing centres of "local" universities (Frankfurt, Darmstadt, Mainz) with GSI through high-speed links (1 Tb/s)
Serves as core (Tier 0) for FAIR computing
Lossless and performant access to experiment data stored at GSI; no replicas needed
10 Gb/s connection to remote institutions using GRID technology
Computing initiatives: GSI/FAIR HPC, CSC-Scout, HIC4FAIR, ...
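To get a feeling for the bandwidth difference between the two quoted link speeds, here is an illustrative comparison of how long one run year of raw data (5 PB) would take to move over each; the arithmetic is mine and not part of the slide:

```python
# Illustrative comparison of the two link speeds quoted above: time to move
# one run year of raw data (5 PB). My arithmetic, not part of the slide.

raw_bits = 5e15 * 8   # one run year of raw data, in bits

for name, rate_bps in [("1 Tb/s campus link", 1e12), ("10 Gb/s GRID link", 1e10)]:
    hours = raw_bits / rate_bps / 3600
    print(f"{name}: {hours:,.0f} h to transfer 5 PB")
```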
Open questions

Integration of CBM needs into the FAIR computing model
– On-line farm close to the experiment indispensable
– Exclusive use at least during runtimes

CBM-GRID
– For user analysis, simulations
– Data model (replicas etc.)
– Context of FAIR-GRID

Resources for simulations not yet included
– Paradigm: one simulated event for each real event?