
Computing for ILC experiments, Akiya Miyamoto (KEK), AWLC14, 14 May 2014

1 Computing for ILC experiments. Akiya Miyamoto, KEK. 14 May 2014, AWLC14. Any comments are welcome.

2 Introduction
Computing design and cost are not included in the ILC TDR, because:
 it is difficult to make a reliable estimate now;
 the development of computing technology over the next >10 years will be enormous.
But there are requests to evaluate the cost and the human power needed for ILC computing:
 "HL-LHC needs huge computing resources. How about ILC?"
 Funding agencies would like to know the total cost.

3 [Slide image: S. Komamiya, AWLC14; no transcribable text]

4 Introduction (continued)
 An LCC Physics & Detectors WG for "Software and Computing" (N. Graf, F. Gaede, A. Sailer, A. Miyamoto) is trying to develop a preliminary plan as a starting point for discussion in the community. Any comment is welcome.

5 Bases of estimation: ILD raw data size in TDR (@500 GeV)
Raw data size per train, estimated at 500 GeV:
 VXD: ~100 MB
 BeamCal: 126 MB, reduced to 5% = 6 MB
 Others: < 40 MB
Total data size: < 150 MB/train = 750 MB/sec ~ 6 Gbps (bits per sec)
 ~7.5 PB per year (10^7 sec) for ILD
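As a sanity check, the ILD numbers above can be reproduced with a few lines of arithmetic. This is only a sketch; the constants come from the slide, and the 5 trains/sec repetition rate at 500 GeV is the ILC design value implied by 150 MB/train giving 750 MB/sec:

```python
MB = 1e6                       # bytes per megabyte (decimal, as in the slide)
train_size = 150 * MB          # < 150 MB per bunch train (slide value)
trains_per_sec = 5             # ILC repetition rate at 500 GeV, 5 Hz (assumed)
seconds_per_year = 1e7         # canonical accelerator year of 10^7 sec

rate_bytes = train_size * trains_per_sec           # 750 MB/sec
rate_gbps = rate_bytes * 8 / 1e9                   # 6.0 Gbps
yearly_pb = rate_bytes * seconds_per_year / 1e15   # 7.5 PB/year

print(rate_gbps, yearly_pb)
```

The 6 Gbps and 7.5 PB/year figures on the slide follow directly.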

6 Bases of estimation: SiD raw data size in TDR
SiD data size per train (1 TeV):
 VXD, Tracker: mainly coherent pair background; VXD ~ 200 MB/train
 BCAL ~ 430 MB/sec, LCAL ~ 340 MB/sec; assuming a 1/20 reduction, ~40 MB/train
 Others: 90 MB/train
 Safety factors of 5 and 2 for pair and γγ→hadrons backgrounds included
In total: 330 MB/train, i.e. 1320 MB/sec ~ 10 Gbps, 13.2 PB/year
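The same check works for SiD. Note the slide's 330 MB/train to 1320 MB/sec conversion implies a 4 Hz train rate, consistent with the lower ILC repetition rate at 1 TeV (an inference, not stated on the slide):

```python
train_size_mb = 330            # MB per train at 1 TeV, incl. safety factors (slide value)
trains_per_sec = 4             # implied by 330 MB/train -> 1320 MB/sec (assumed 1 TeV rate)

rate_mb_s = train_size_mb * trains_per_sec   # 1320 MB/sec
rate_gbps = rate_mb_s * 8 / 1000             # ~10.6 Gbps
yearly_pb = rate_mb_s * 1e7 / 1e9            # 13.2 PB over a 10^7 sec year

print(rate_gbps, yearly_pb)
```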

7 Basic Assumptions
 Worldwide resource requirements are estimated.
 The computing facility is constructed at the ILC site.
 Raw data are stored at the site, with replicas in the USA for SiD and in the EU for ILD.
 Raw data reconstruction is done at the ILC site.
 The most CPU-consuming part is MC production; it is shared among resources at the ILC site, in Asia, Europe, and the USA.
Timeline overview:
 Now to Year 0: preparation for the ILC lab
 Year 0 to 9: ILC construction
 Year 9 to 19: ILC data taking at 250-500 GeV
Resources needed for Year 0 to 9 would be about 1/10 of those for Year 9 to 19.

8 Storage space
Assumptions and tape space:
 ILD/SiD push-pull operation, running time = 0.5x10^7 sec each: assume 10 PB/year. For 10 years of operation, 100 PB of tape.
 ILD & SiD raw data at Kitakami, replicas in the USA (SiD) and EU (ILD): 200 PB in total worldwide for raw data.
 ~1/10 of this storage in the construction phase.
Disk space:
 Temporary raw data storage: 1 year of raw data, 10 PB at the site, plus 5 PB each in the USA and EU: 20 PB in total.
 Disk for analysis: negligible.
 Disk for MC: in the BELLE2 case, the ratio of MC disk to tape is ~1. In ILC, 2/3 of the raw data is pair background in the VXD, BCAL, and FCAL, so assume 200 x 1/3 = 66 PB worldwide; 1/3 (22 PB) each at the site, Asia, EU, USA.
 Total: 42 PB at the site and 44 PB in the USA+EU at Year 20. At the site: ~3.2 PB by Y9, ~13 PB by Y10, ~35 PB by Y20. In the EU & USA: ~2 PB by Y9, ~54 PB by Y20.
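The tape and MC-disk roll-up above can be sketched as follows (all inputs are slide values; the split across regions is the slide's own assumption):

```python
raw_pb_per_year = 10        # raw data per year with ILD/SiD push-pull (slide assumption)
years = 10                  # Year 9 to Year 19 of data taking

tape_at_site = raw_pb_per_year * years   # 100 PB of tape at the ILC site
tape_worldwide = tape_at_site * 2        # 200 PB incl. replicas in USA (SiD) and EU (ILD)

# MC disk: 2/3 of raw data is pair background that MC need not reproduce in full,
# so the slide scales the 200 PB raw total by 1/3.
mc_disk_worldwide = tape_worldwide / 3   # ~66 PB worldwide

print(tape_worldwide, mc_disk_worldwide)
```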

9 CPU resource
DBD exercise:
 The main consumer was detector simulation.
 For the ILD 1 TeV, 1 ab-1 samples, O(2k) cores took 3 months; 1 core ~ 10 HepSpec.
For the ILC experiments, assuming:
 x10 MC samples for real data analysis,
 x4 margin for unknown factors (digitization and calibration not considered),
 x1/2 for 500 GeV data,
 400k HepSpec to produce the samples in 3 months, or 200k HepSpec to produce the MC samples in 6 months.
 For raw data processing, assuming 1/2 of the MC simulation: 20k HepSpec.
In total: 220k HepSpec, shared among three regions: 80k HepSpec at the ILC site, 70k each in the USA and EU.
Note: KEKCC is ~40k HepSpec.
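The scaling from the DBD benchmark to the 220k HepSpec total can be written out explicitly (all factors are taken from the slide):

```python
dbd_hepspec = 2000 * 10    # O(2k) cores x ~10 HepSpec/core = 20k HepSpec for the DBD samples
scale = 10 * 4 * 0.5       # x10 MC statistics, x4 margin, x1/2 for 500 GeV data

mc_3months = dbd_hepspec * scale   # 400k HepSpec to produce the MC in 3 months
mc_6months = mc_3months / 2        # or 200k HepSpec over 6 months
raw_reco = 20_000                  # raw-data processing, ~1/2 of the simulation load

total = mc_6months + raw_reco      # 220k HepSpec worldwide
print(total)
```

For scale, the note on the slide puts this at roughly five times the KEKCC capacity of ~40k HepSpec.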

10 Academic Network Infrastructure in Japan
Present: SINET4, supported by NII (National Institute of Informatics); 40 Gbps backbone with 2.4 Gbps links.
NII is proposing the upgrade to SINET5: larger bandwidth within Japan and to overseas.
A possible network for ILC: ILC site -> Morioka -> Sendai -> Tokyo -> KEK, universities, overseas. Needs support by SINET.
Bottleneck: Morioka -> Sendai? It is expensive to cross a prefecture boundary with dark fiber.

11 Japan to overseas network
Belle2 plan: Japan-US (8 Gbps @ 2018, 19 Gbps @ 2022); Japan-US-Germany (1.6 Gbps @ 2018, 2.6 Gbps @ 2022).
NII is considering a direct Japan-EU connection at ~10 Gbps.

12 Summary
As a first step, the total resources necessary were estimated based on the DBD.
 The raw data rate of the ILC experiments is ~10 Gbps.
 Tape storage for raw data: 200 PB by Year 20; ILC site (100 PB), USA (50 PB), EU (50 PB). (~3x ATLAS(2013), ~BELLE2(2022), ~12x KEKCC(2013))
 Disk storage for MC and analysis: 86 PB worldwide by Y20; ILC site (35 PB), USA (27 PB), EU (27 PB). (~ATLAS(2013))
 In Year 0 to Year 9, about 1/10 of these resources is required.
CPU resources:
 220k HepSpec worldwide required. (~1/4 of ATLAS(2013))
Network:
 The Japan to/from overseas bandwidth should be increased significantly with the help of NII (National Institute of Informatics).

13 Summary 2
Still many unknowns:
 What is the role of the computing staff at the lab? Hardware operation, software maintenance (Grid middleware), management?
 SiD and ILD computing resources at the site: do the experiments prepare and operate them with their own resources, or do they need a common infrastructure?
 Campus design will start soon. The software group should give proper input to the CFS group, if any.

