ILD Ichinoseki Meeting


ILD Computing Needs
Akiya Miyamoto
ILD Ichinoseki Meeting, 21 February 2018

Introduction

- The ILD LOI and DBD did not quote a computing cost, because:
  - hardware performance was expected to improve, and
  - software evolution would change the resource requirements.
- TDR: a building for data analysis was included, based on the LOI-era estimate with the FNAL LHC facility as a basis. The computing hardware itself, the human resources for operation, and the network were not included.
- After the DBD, the LCC "Yamada" committee made a request.
  - ILD study: presented at the ILD Workshop 2014 (H20 scenario, DBD experience).
  - LCC Software and Computing WG report: http://www.linearcollider.org/P-D/Working-groups
- Last fall, a request by LCC and KEK: prepare for a query about the cost and infrastructure required, with 250 GeV staging in mind.

2018/02/21 ILD Computing needs

Computing concept

Role of each computing facility (Detector → IP Campus → Main Campus → GRID world):
- IP Campus: event building, fast data monitoring
- Main Campus: data storage, event (BX) selection, quick data analysis
- GRID computing: secondary data analysis, user analysis, simulation

Computing at the IP Campus belongs to the DAQ of the experimental group.

Role of the computing in the ILC Lab. Main Campus:
- Trigger-less readout; background data are removed at an early stage of the analysis.
- Shared between ILD and SiD.
- Lab-wide uniform support of mail, security, etc.
- Following past tradition, the basic resource is supported by the lab as part of the running cost.

Basis of estimation: ILD raw data size in the TDR (@500 GeV)

Raw data size per train, estimated in 2014 at 500 GeV:
- VXD: ~100 MB
- BeamCal: 126 MB, reduced to 5% = 6 MB
- Others: < 40 MB
- Dominated by low-energy e+/e- background due to beamstrahlung

Range: 130–277 MB/train, up to ~1.4 GB/sec, ~11.1 PB/year.
Total data size: ~180 MB/train, ~0.9 GB/sec, ~7.1 PB/year (0.8x10^7 sec).
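The per-train sizes, data rates, and yearly volumes quoted above can be cross-checked with a short script. The 0.8x10^7 s/year live time is from the slide; the 5 Hz train repetition rate is an assumption of this sketch (the nominal ILC value).

```python
# Back-of-envelope check of the raw-data numbers above.
TRAIN_RATE_HZ = 5      # assumed ILC train repetition rate
LIVE_SECONDS = 0.8e7   # running time per year, as quoted in the slide

def yearly_volume_pb(mb_per_train: float) -> float:
    """Convert a per-train data size (MB) into PB recorded per year."""
    gb_per_sec = mb_per_train * TRAIN_RATE_HZ / 1000.0
    return gb_per_sec * LIVE_SECONDS / 1.0e6  # GB -> PB

print(f"{yearly_volume_pb(180):.1f} PB/year")  # total estimate, ~180 MB/train
print(f"{yearly_volume_pb(277):.1f} PB/year")  # upper estimate, ~277 MB/train
```

With these inputs, 180 MB/train gives ~7.2 PB/year and 277 MB/train gives ~11.1 PB/year, consistent with the slide's ~7.1 and ~11.1 PB/year.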

Storage estimation (ILD)

- Running scenario: 1st stage at 250 GeV; total integrated luminosity per H20.
- Raw data size:
  - TDR 500 GeV with AHCAL correction: ~11 PB/year.
  - 250 GeV nominal: same as 500 GeV (the background would be similar).
  - Run with x2 luminosity: x2 of nominal.
  - Two raw data sets: one at the lab, another somewhere in the world.
- Filtered/analyzed data:
  - Fraction of signal BXs (DBD signal samples + ...): ~1%. Assume 3% of BXs remain after filtering.
  - Event size per BX would be x2 after filtering and initial analysis (REC/SIM ratio of the DBD samples).
  - After reanalysis on the GRID, the event size would be x3 of the raw event.
  - DST files would be replicated to 10 sites worldwide.
- Simulation data:
  - Produce x10 the luminosity of the real data on the GRID.
  - Event data size: adopt the DBD data size.
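The storage assumptions above can be tallied into a rough per-year figure. The factors (2 raw copies, 3% BX retention, x2/x3 event-size growth) come from the slide; treating them as simple multipliers of the raw volume, and omitting the DST replicas and the simulation sample, are this sketch's own simplifications.

```python
# Rough per-year storage tally under the assumptions above.
RAW_PB_PER_YEAR = 11.0                     # raw data at nominal luminosity

raw_total  = 2 * RAW_PB_PER_YEAR           # one copy at the lab, one elsewhere
filtered   = RAW_PB_PER_YEAR * 0.03 * 2    # 3% of BXs kept, event size x2
reanalysed = RAW_PB_PER_YEAR * 0.03 * 3    # GRID reanalysis grows events to x3 of raw

# DST replication (10 sites) and the x10-luminosity simulation sample
# would add further terms; their sizes depend on the DBD event sizes
# and are left out of this sketch.
total = raw_total + filtered + reanalysed
print(f"~{total:.1f} PB/year before DSTs and simulation")
```

The raw-data copies dominate; the filtered and reanalysed samples together add well under 2 PB/year at nominal luminosity.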

CPU needs

- MC simulation (on the GRID): x10 the real-data statistics.
  - CPU time: DBD signal + bhabha etc. + reconstruction.
  - Assume bhabha etc. = DBD signal; reconstruction = 0.5 x DBD signal simulation.
- Real data processing:
  - Data filtering: all BXs, same CPU time as data reconstruction → the major part of the CPU demand.
  - Reconstruction: filtered events (3% of all BXs), same CPU time as simulation.
  - CPU capacity sufficient to analyze 1 year of data in 240 days.
  - Another reconstruction after re-calibration, on the GRID.
- User analysis and detector calibration are not counted.
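The "1 year of data in 240 days" requirement translates directly into a core count once a per-BX CPU time is assumed. The bunch structure (1312 bunches/train, 5 Hz) and the 0.1 CPU-seconds/BX used in the example are illustrative assumptions, not values from the slide.

```python
# Hypothetical sizing exercise: cores needed to filter one year of data
# within a 240-day budget.  Bunch structure and per-BX time are assumed.
BUNCHES_PER_TRAIN = 1312   # assumed ILC bunch count per train
TRAIN_RATE_HZ = 5          # assumed train repetition rate
LIVE_SECONDS = 0.8e7       # running time per year, from the slide
BUDGET_SECONDS = 240 * 86400

def cores_needed(cpu_sec_per_bx: float) -> float:
    """Cores required to process every BX of a year within the budget."""
    bx_per_year = BUNCHES_PER_TRAIN * TRAIN_RATE_HZ * LIVE_SECONDS
    return bx_per_year * cpu_sec_per_bx / BUDGET_SECONDS

# e.g. an assumed 0.1 CPU-seconds of filtering per BX:
print(f"{cores_needed(0.1):.0f} cores")
```

Because filtering runs over all BXs while reconstruction sees only the 3% that survive, the result scales linearly with the per-BX filtering time, which is why filtering dominates the CPU demand.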

Year-by-year evolution

- Assume SiD = ILD.
- The luminosity ramp-up scenario is included.
- Only the on-site estimate is shown below.

[Plot: annual and integrated luminosity (fb-1 / ab-1) per year for the 250 GeV, 250 GeV 2x-luminosity, 350 GeV, 500 GeV, and 500 GeV 2x-luminosity stages.]

Computing cost at the lab

Assumptions:
- Rental: 4-year service + 0.5 year for replacement.
- 10% of the year-0 system before year 0.
- The additional cost to be added to the TDR value is the hardware supporting ILD and SiD needs at the lab.
  - Includes CPU, tape robot, disk, software, UPS, and cooling. Tape media are not included.
  - Based on KEKCC 2017; assume cost reductions of 2%/year for CPU and 10%/year for storage.
- Network and human resources for operating the scientific system: supported by the running cost.
- The building space for the computing system is about the space in the TDR.
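The slide's assumed annual price reductions (CPU 2%/year, storage 10%/year) compound over the rental cycle. A minimal sketch, using a normalized baseline of 1.0 rather than actual KEKCC 2017 prices (which are not given here):

```python
# Illustrative unit-cost projection under the slide's assumed annual
# price reductions.  Baseline cost is normalized to 1.0 (placeholder).
def unit_cost(base_cost: float, reduction_per_year: float, years: int) -> float:
    """Cost of the same capacity bought `years` after the baseline."""
    return base_cost * (1.0 - reduction_per_year) ** years

print(f"storage after 5 y: {unit_cost(1.0, 0.10, 5):.2f}")  # ~0.59 of baseline
print(f"CPU after 5 y:     {unit_cost(1.0, 0.02, 5):.2f}")  # ~0.90 of baseline
```

Over a 4-to-5-year rental cycle, storage re-procurement is roughly 40% cheaper while CPU is only about 10% cheaper, so the cost balance shifts toward CPU in later years under these assumptions.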

Summary

- The computing resources for ILD data analysis have been revised.
- The estimated resources at the lab were used to estimate the building space and the operational cost for the lab.
- The estimate rests on many assumptions:
  - the raw data size with the latest beam parameters?
  - the efficiency of background removal?
  - the CPU time for background removal?
- A timely update of these estimates is desirable.

BACKUP

A model of ILD data processing

Online computer in the control room (DAQ/online, ~1 GB/sec to the offline main computer on the main campus):
- builds a train of data
- sends the data to the main computer and to the monitoring processes
- samples and reconstructs data for monitoring
- provides temporary data storage for emergencies

Main computer on the main campus (receives ~1 GB/sec from DAQ/online; a raw-data copy is also written):
- writes the data
- online reconstruction chain: sub-detector-based preliminary reconstruction, identification of bunches of interest, calibration and alignment, background hit rejection, full event reconstruction, event classification
- data products: Raw Data (RD), Fast Physics Data (FPD), Online Processed Data (OPD), Calibration Data (CD)

GRID-based offline computing:
- JOB-A: re-processing with better constants → Offline Reconstructed Data (ORD)
- JOB-B: produce condensed data samples → DST
- JOB-C: MC production → MC data