Computing for ILC experiments
Akiya Miyamoto, KEK, 14 May 2014, AWLC14
Any comments are welcome.


[Slide: figure reproduced from S. Komamiya's AWLC14 presentation.]

Introduction
Computing design and cost are not included in the ILC TDR, because:
- it is difficult to make a reliable estimate now
- the development of computing technology over the next 10+ years will be enormous
But there are requests to evaluate the cost and the manpower needed for ILC computing:
- "HL-LHC needs a huge computing resource. How about ILC?"
- Funding agencies would like to know the total cost.
- LCC PD WG for "Software and Computing": N. Graf, F. Gaede, A. Sailer, A. Miyamoto
We are trying to develop a preliminary plan as a starting point for discussion in the community. Any comment is welcome.

Bases of estimation: ILD raw data size in the TDR (500 GeV)
Raw data size per train:
- VXD: ~100 MB
- BeamCal: 126 MB, reduced to 5% = 6 MB
- Others: < 40 MB
Total data size: < 150 MB/train = 750 MB/sec ~ 6 Gbps
~ 7.5 PB per year (10^7 sec) for ILD
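A quick cross-check of these numbers (a minimal sketch in Python; the 5 Hz train repetition rate at 500 GeV and the 10^7 s of data taking per year are the assumptions behind the slide's figures):

    # Back-of-envelope check of the ILD raw-data estimate.
    TRAIN_RATE_HZ = 5              # ILC bunch-train repetition rate at 500 GeV (assumed)
    LIVE_SECONDS_PER_YEAR = 1e7    # data-taking time per year, as used on the slide

    # Per-train contributions quoted on the slide (MB)
    ild_per_train_mb = {
        "VXD": 100,
        "BeamCal (after 5% reduction)": 6,
        "Others": 40,
    }

    train_mb = sum(ild_per_train_mb.values())   # ~146 MB, quoted as < 150 MB/train
    rate_mb_s = 150 * TRAIN_RATE_HZ             # use the quoted 150 MB/train bound
    rate_gbps = rate_mb_s * 8 / 1000            # MB/s -> Gbit/s
    pb_per_year = rate_mb_s * 1e6 * LIVE_SECONDS_PER_YEAR / 1e15

    print(f"ILD: {train_mb} MB/train, {rate_mb_s} MB/s, "
          f"{rate_gbps:.0f} Gbps, {pb_per_year:.1f} PB/year")
    # -> ILD: 146 MB/train, 750 MB/s, 6 Gbps, 7.5 PB/year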

Bases of estimation: SiD raw data size in the TDR
SiD data size per train (1 TeV):
- VXD, tracker: mainly coherent pair background, ~200 MB/train
- BCAL ~430 MB/sec, LCAL ~340 MB/sec; assuming a 1/20 reduction, ~40 MB/train
- Others: ~90 MB/train
- Safety factors of 5 and 2 for pair and γγ → hadrons backgrounds included
In total: 330 MB/train → 1320 MB/sec ~ 10 Gbps, 13.2 PB/year
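The same arithmetic for SiD (a sketch; the 4 Hz train rate at 1 TeV is my assumption, inferred from the slide's 330 MB/train and 1320 MB/sec):

    # Back-of-envelope check of the SiD raw-data estimate at 1 TeV.
    TRAIN_RATE_HZ_1TEV = 4         # inferred from 1320 MB/s / 330 MB/train (assumption)
    LIVE_SECONDS_PER_YEAR = 1e7

    sid_per_train_mb = {
        "VXD + tracker": 200,
        "BCAL + LCAL (1/20 reduction)": 40,
        "Others": 90,
    }  # safety factors already included, as on the slide

    train_mb = sum(sid_per_train_mb.values())        # 330 MB/train
    rate_mb_s = train_mb * TRAIN_RATE_HZ_1TEV        # 1320 MB/s
    pb_per_year = rate_mb_s * 1e6 * LIVE_SECONDS_PER_YEAR / 1e15
    print(f"SiD: {rate_mb_s} MB/s ~ {rate_mb_s * 8 / 1000:.1f} Gbps, {pb_per_year:.1f} PB/year")
    # -> SiD: 1320 MB/s ~ 10.6 Gbps, 13.2 PB/year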

Basic Assumptions
- The world-wide resource requirement is estimated.
- A computing facility is constructed at the ILC site.
- Raw data is stored at the site, with a replica in the USA for SiD and in the EU for ILD.
- Raw data reconstruction is done at the ILC site.
- The most CPU-consuming part is MC production; it is shared among resources at the ILC site, in Asia, Europe and the USA.
Timeline overview:
- now to Year 0: preparation for the ILC laboratory
- Year 0 to 9: ILC construction
- Year 9 to 19: ILC data taking at 250-500 GeV
The resources needed for Year 0 to 9 would be about 1/10 of those for Year 9 to 19.

Storage space
Assumptions and tape space:
- ILD/SiD push-pull operation, running time = 0.5x10^7 sec per experiment → assume 10 PB/year; for 10 years of operation, 100 PB of tape
- ILD & SiD raw data stored at Kitakami, replicas in the USA (SiD) and the EU (ILD) → 200 PB in total world wide for raw data
- ~1/10 of this storage in the construction phase
Disk space:
- Temporary raw data storage: 1 year of raw data (10 PB) at the site, plus 5 PB each in the USA and the EU: 20 PB in total
- Disk for analysis: negligible
- Disk for MC: in the Belle II case the ratio of MC disk to tape is ~1; for ILC, 2/3 of the raw data is pair background in the VXD, BCAL and FCAL → assume 200 x 1/3 = 66 PB world wide, 1/3 (22 PB) each at the site, Asia, the EU and the USA
- Total: 42 PB at the site and 44 PB at USA+EU at Year 20
- At the site: ~3.2 PB by Y9, ~13 PB by Y10, ~35 PB by Y20; in the EU & USA: ~2 PB by Y9, ~54 PB by Y20
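The world-wide totals can be reproduced with a few lines (a sketch of the slide's own bookkeeping; the 10 PB/year, the two full replicas, and the 1/3 MC-disk fraction are the slide's assumptions, and the per-region split is not re-derived here):

    # World-wide storage bookkeeping by Year 20.
    YEARS_OF_RUNNING = 10
    RAW_TAPE_PB_PER_YEAR = 10                 # per push-pull year of ILD+SiD running

    raw_at_site_pb = RAW_TAPE_PB_PER_YEAR * YEARS_OF_RUNNING   # 100 PB of tape at the ILC site
    raw_replica_pb = raw_at_site_pb                            # SiD copy in the USA + ILD copy in the EU
    tape_total_pb = raw_at_site_pb + raw_replica_pb            # 200 PB world wide

    disk_raw_buffer_pb = 10 + 5 + 5           # 1 year of raw data at the site, 5 PB each in USA and EU
    disk_mc_pb = tape_total_pb // 3           # 66 PB: ~2/3 of raw data is pair background, so MC disk ~ 1/3
    disk_total_pb = disk_raw_buffer_pb + disk_mc_pb            # 86 PB world wide

    print(f"Tape: {tape_total_pb} PB, disk: {disk_total_pb} PB world wide by Year 20")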

CPU resources
DBD exercise:
- The main consumer was detector simulation.
- For the ILD 1 TeV, 1 ab^-1 samples, O(2k) cores took 3 months; 1 core ~ 10 HepSpec.
For the ILC experiments, assuming:
- x10 MC samples for real-data analysis
- x4 margin for unknown factors (digitization and calibration are not considered)
- x1/2 for 500 GeV data
→ 400k HepSpec to produce the samples in 3 months, or 200k HepSpec to produce the MC samples in 6 months.
For raw data processing, assuming 1/2 of the MC simulation → 20k HepSpec.
In total, 220k HepSpec, shared among the 3 regions: 80k HepSpec at the ILC site, 70k each in the US and the EU.
Note: KEKCC is ~40k HepSpec.
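The CPU numbers follow from scaling the DBD production (a sketch of the slide's arithmetic; the scaling factors are the slide's own assumptions, and the 20k HepSpec for raw-data processing is taken directly from the slide):

    # Scaling the DBD MC production experience to the ILC running phase.
    HEPSPEC_PER_CORE = 10
    dbd_hepspec = 2_000 * HEPSPEC_PER_CORE        # O(2k) cores for 3 months made the 1 TeV, 1 ab^-1 ILD samples

    mc_hepspec_3m = dbd_hepspec * 10 * 4 * 0.5    # x10 statistics, x4 margin, x1/2 for 500 GeV -> 400k
    mc_hepspec_6m = mc_hepspec_3m / 2             # 200k HepSpec if production is spread over 6 months
    raw_hepspec = 20_000                          # raw-data processing, as quoted on the slide
    total_hepspec = mc_hepspec_6m + raw_hepspec   # 220k HepSpec, shared among the three regions

    print(f"MC: {mc_hepspec_6m/1e3:.0f}k, raw: {raw_hepspec/1e3:.0f}k, "
          f"total: {total_hepspec/1e3:.0f}k HepSpec (KEKCC ~ 40k for scale)")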

Academic Network Infrastructure in Japan
Present: SINET4, supported by NII (National Institute of Informatics).
NII is proposing an upgrade to SINET5, with larger bandwidth within Japan and to overseas.
A possible network for the ILC: ILC site → Morioka, then Morioka → Sendai → Tokyo → KEK, universities and overseas. Support by SINET is needed.
Possible bottleneck: Morioka → Sendai? It is expensive to cross a prefecture boundary with dark fiber.
[Figure: SINET network map with 40 Gbps and 2.4 Gbps links, showing the ILC site, Tokyo and KEK.]

Japan to overseas network
- Belle II plan: Japan-US-Germany network
- NII is considering a direct Japan-EU link of ~10 Gbps

Summary
As a first step, the total resources necessary have been estimated based on the DBD.
- The raw data rate of an ILC experiment is ~10 Gbps.
- Computing resources necessary for the ILD and SiD experiments:
  - Tape storage for raw data: 200 PB by Year 20: ILC site (100 PB), USA (50 PB), EU (50 PB). (3x ATLAS (2013), Belle II (2022), 12x KEKCC (2013))
  - Disk storage for MC and analysis: 86 PB world wide by Year 20: ILC site (35 PB), USA (27 PB), EU (27 PB). (~ATLAS (2013))
  - In Year 0 to Year 9, about 1/10 of these resources is required.
- CPU resources: 220k HepSpec world wide (~1/4 of ATLAS (2013)).
- Network: the bandwidth between Japan and overseas should be increased significantly with the help of NII (National Institute of Informatics).

Summary 2
Still many unknowns:
- What is the role of the computing staff at the laboratory? Hardware operation, software maintenance (GRID middleware), management?
- SiD and ILD computing resources at the site: do the experiments prepare and operate them with their own resources, or do they need a common infrastructure?
- The campus design will start soon. The software group should give proper input to the CFS group, if any.