
Thomas Jefferson National Accelerator Facility Page 1 CLAS12 Computing Requirements G.P. Gilfoyle, University of Richmond

Thomas Jefferson National Accelerator Facility Page 2 CLAS12 Computing Requirements
Assume an October 2014 start date. Present major assumptions and results for:
- Data acquisition
- Calibration
- Simulation
- Reconstruction
- Reconstruction studies
- Physics analysis

Thomas Jefferson National Accelerator Facility Page 3 CLAS12 Computing Requirements
Data Acquisition
Assumptions:
  Event rate = 10 kHz
  Event size = 10 kBytes
  Weeks running = 35
  24-hour duty factor = 60%
Data Rate = Event Rate x Event Size = 100 MByte/s
Average 24-hour rate = Data Rate x 24-hour duty factor = 60 MByte/s
Events/year = Event Rate x Weeks Running x seconds/week x 24-hour duty factor = 1.3x10^11 events/year
Data Volume/year = Events/year x Event Size = 1270 TByte/year
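A minimal sketch of the arithmetic on this slide (not part of the original talk), assuming a 7-day running week of 604,800 s and decimal units (1 TByte = 10^12 bytes); the variable names are illustrative.

```python
# DAQ estimate (Page 3); assumes 604,800 s per running week and 1 TByte = 1e12 bytes
event_rate = 10_000            # Hz
event_size = 10_000            # bytes (10 kBytes)
weeks_running = 35
duty_factor = 0.60             # 24-hour duty factor

data_rate = event_rate * event_size                        # 1e8 B/s = 100 MByte/s
avg_rate = data_rate * duty_factor                         # 60 MByte/s
events_per_year = event_rate * weeks_running * 604_800 * duty_factor   # ~1.3e11
data_volume_tb = events_per_year * event_size / 1e12       # ~1270 TByte/year

print(f"{data_rate/1e6:.0f} MByte/s peak, {avg_rate/1e6:.0f} MByte/s average, "
      f"{events_per_year:.1e} events/yr, {data_volume_tb:.0f} TByte/yr")
```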

Thomas Jefferson National Accelerator Facility Page 4 CLAS12 Computing Requirements
Calibration
Assumptions:
  CPU-time/event = 155 ms
  Data fraction = 5%
  Data passes = 5
  Core efficiency = 90%
CPU-time/year = Events/year x CPU-time/event x Data fraction x Data passes = 4.9x10^9 s
Calibration Cores = (CPU-time/year)/(year in seconds x Core efficiency) = 173 cores
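The calibration estimate can be reproduced the same way; the 365-day year (~3.15x10^7 s) is an assumption, chosen because it reproduces the quoted 173 cores.

```python
# Calibration cores (Page 4)
events_per_year = 1.27e11        # from the DAQ slide
cpu_time_per_event = 0.155       # s
data_fraction = 0.05
data_passes = 5
core_efficiency = 0.90
year_seconds = 365 * 24 * 3600   # assumed year length, ~3.15e7 s

cpu_time = events_per_year * cpu_time_per_event * data_fraction * data_passes  # ~4.9e9 s
cores = cpu_time / (year_seconds * core_efficiency)                            # ~173
print(f"{cpu_time:.1e} CPU-s/yr -> {cores:.0f} cores")
```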

Thomas Jefferson National Accelerator Facility Page 5 CLAS12 Computing Requirements
Reconstruction - 1
Assumptions:
  CPU-data-time/event = 155 ms
  Output/input size = 2
  Data passes = 2
  Fraction to disk = 10%
  Event size = 10 kBytes
  Events/year = 1.3x10^11
  Data volume/year = 1.3 PBytes/year
  Core efficiency = 90%
CPU-time/year = Data-events/year x CPU-data-time/event x Data passes = 3.9x10^10 CPU-s/year
Reconstruction Cores = (CPU-time/year)/(year in seconds x Core efficiency) = 1387 cores
Cooked data to tape = Data volume/year x Data passes x Output/input size = 5080 TByte/year
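A sketch of the reconstruction CPU, core and cooked-data numbers under the same assumptions as above (illustrative variable names, assumed 365-day year):

```python
# Reconstruction CPU, cores and cooked-data volume (Page 5)
events_per_year = 1.27e11
data_volume_tb = 1270            # TByte/year, from the DAQ slide
cpu_time_per_event = 0.155       # s
data_passes = 2
output_over_input = 2
core_efficiency = 0.90
year_seconds = 365 * 24 * 3600   # assumed

cpu_time = events_per_year * cpu_time_per_event * data_passes         # ~3.9e10 CPU-s/yr
cores = cpu_time / (year_seconds * core_efficiency)                   # ~1,387
cooked_to_tape_tb = data_volume_tb * data_passes * output_over_input  # 5,080 TByte/yr
print(f"{cores:.0f} cores, {cooked_to_tape_tb} TByte/yr of cooked data to tape")
```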

Thomas Jefferson National Accelerator Facility Page 6 CLAS12 Computing Requirements
Reconstruction - 2
Assumptions and Results:
  CPU-data-time/event = 155 ms
  Output/input size = 2
  Data passes = 2
  Fraction to disk = 10%
  Event size = 10 kBytes
  Events/year = 1.3x10^11
  Data volume/year = 1.3 PBytes/year
  Core efficiency = 90%
Disk Storage = Cooked data to tape x Fraction to disk = 508 TByte
Average bandwidth = Event size x (1 + Output/input size) x Cores/(CPU-data-time/event) = 268 MByte/s
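The disk and bandwidth figures follow from the Page 5 results; a hedged sketch with illustrative names:

```python
# Reconstruction disk and bandwidth (Page 6)
event_size = 10_000              # bytes
output_over_input = 2
cpu_time_per_event = 0.155       # s
cores = 1387                     # from Page 5
cooked_to_tape_tb = 5080         # TByte/yr, from Page 5
fraction_to_disk = 0.10

disk_tb = cooked_to_tape_tb * fraction_to_disk                             # 508 TByte
bandwidth = event_size * (1 + output_over_input) * cores / cpu_time_per_event
print(f"{disk_tb:.0f} TByte on disk, {bandwidth/1e6:.0f} MByte/s")         # ~268 MByte/s
```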

Thomas Jefferson National Accelerator Facility Page 7 CLAS12 Computing Requirements
Simulation - 1
Assumptions:
  CPU-sim-time/event = 485 ms
  Events/year = 1.3x10^11
  Electron fraction = 50%
  Simulated/data events = 10
  Analyzed fraction = 50%
  Multiplicity = 1.5
  Core efficiency = 90%
Sim-events/year = Events/year x Electron fraction x Analyzed fraction x Simulated/data events = 3.2x10^11
CPU-sim-time/year = CPU-sim-time/event x Sim-events/year x Multiplicity = 2.3x10^11 CPU-s/year
Simulation Cores = (CPU-sim-time/year)/(year in seconds x Core efficiency) = 8,139 cores
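A sketch of the simulated-event count and core estimate, with the same assumed year length as above:

```python
# Simulated events, CPU and cores (Page 7)
events_per_year = 1.27e11
electron_fraction = 0.50
analyzed_fraction = 0.50
sim_over_data = 10               # simulated/data events
multiplicity = 1.5
cpu_sim_time_per_event = 0.485   # s
core_efficiency = 0.90
year_seconds = 365 * 24 * 3600   # assumed

sim_events = events_per_year * electron_fraction * analyzed_fraction * sim_over_data  # ~3.2e11
cpu_time = cpu_sim_time_per_event * sim_events * multiplicity             # ~2.3e11 CPU-s/yr
cores = cpu_time / (year_seconds * core_efficiency)                       # ~8,140 (slide: 8,139)
print(f"{sim_events:.1e} simulated events/yr -> {cores:.0f} cores")
```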

Thomas Jefferson National Accelerator Facility Page 8 CLAS12 Computing Requirements
Simulation - 2
Assumptions and Results:
  CPU-sim-time/event = 485 ms
  Sim-events/year = 3.2x10^11
  Simulation Cores = 8,139
  Output event size = 50 kBytes
  Fraction to disk = 2%
  Fraction to tape = 10%
  Multiplicity = 1.5
Work Disk = Sim-events/year x Output event size x Fraction to Disk = 318 TBytes
Tape storage = Sim-events/year x Output event size x Fraction to Tape = 1,588 TBytes/year
Average Bandwidth = (Output event size x Simulation Cores)/(CPU-sim-time/event) = 839 MBytes/s
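The storage and bandwidth figures follow directly; in this sketch the small differences from the slide's 318 and 1,588 TBytes come only from rounding the simulated-event count to 3.2x10^11:

```python
# Simulation storage and bandwidth (Page 8)
sim_events = 3.2e11              # rounded; the slide uses the unrounded count
output_event_size = 50_000       # bytes (50 kBytes)
fraction_to_disk = 0.02
fraction_to_tape = 0.10
sim_cores = 8139
cpu_sim_time_per_event = 0.485   # s

work_disk_tb = sim_events * output_event_size * fraction_to_disk / 1e12   # ~320 TByte (slide: 318)
tape_tb = sim_events * output_event_size * fraction_to_tape / 1e12        # ~1,600 TByte/yr (slide: 1,588)
bandwidth = output_event_size * sim_cores / cpu_sim_time_per_event        # ~8.4e8 B/s
print(f"{work_disk_tb:.0f} TByte disk, {tape_tb:.0f} TByte/yr tape, {bandwidth/1e6:.0f} MByte/s")
```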

Thomas Jefferson National Accelerator Facility Page 9 CLAS12 Computing Requirements
Reconstruction Studies
Assumptions:
  CPU-data-time/event = 155 ms
  Fraction to disk = 5%
  Data passes = 10
  Core efficiency = 90%
CPU-time/year = Fraction to disk x Events/year x Data passes x CPU-data-time/event = 3.4x10^10 s
Cores = (CPU-time/year)/(year in seconds x Core efficiency) = 1,214 cores
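The slide's formula alone does not pin down which sample "Events/year" refers to; the quoted 3.4x10^10 CPU-s and 1,214 cores come out if it counts real plus simulated events, which is the assumption in this sketch:

```python
# Reconstruction-studies cores (Page 9)
# Assumption: "Events/year" here counts data + simulated events (~4.4e11),
# which gives ~3.4e10 CPU-s/yr and ~1,215 cores (slide quotes 1,214).
data_events = 1.27e11
sim_events = 3.18e11
cpu_time_per_event = 0.155       # s
fraction_to_disk = 0.05
data_passes = 10
core_efficiency = 0.90
year_seconds = 365 * 24 * 3600   # assumed

cpu_time = fraction_to_disk * (data_events + sim_events) * data_passes * cpu_time_per_event
cores = cpu_time / (year_seconds * core_efficiency)
print(f"{cpu_time:.1e} CPU-s/yr -> {cores:.0f} cores")
```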

Thomas Jefferson National Accelerator Facility Page 10 CLAS12 Computing Requirements
Physics Analysis
Assumptions:
  CPU-data-time/event = 8 ms
  Fraction of events = 50%
  Data passes = 10
  Core efficiency = 90%
CPU-time/year = Fraction of events x Events/year x Data passes x CPU-data-time/event = 1.7x10^10 CPU-s/year
Cores = (CPU-time/year)/(year in seconds x Core efficiency) = 607 cores
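As on Page 9, the event sample entering this estimate is not fully specified; taking real plus simulated events (an assumption) lands within a few percent of the quoted 1.7x10^10 CPU-s and 607 cores:

```python
# Physics-analysis cores (Page 10)
# Assumption: the analyzed sample is data + simulated events, as on Page 9;
# this gives ~1.8e10 CPU-s/yr and ~630 cores, close to the quoted 1.7e10 and 607.
events = 1.27e11 + 3.18e11
cpu_time_per_event = 0.008       # s
fraction_of_events = 0.50
data_passes = 10
core_efficiency = 0.90
year_seconds = 365 * 24 * 3600   # assumed

cpu_time = fraction_of_events * events * data_passes * cpu_time_per_event
cores = cpu_time / (year_seconds * core_efficiency)
print(f"{cpu_time:.1e} CPU-s/yr -> {cores:.0f} cores")
```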

Thomas Jefferson National Accelerator Facility Page 11 CLAS12 Computing Requirements
Summary
                           Cores   Disk (TByte)   Tape (TByte/yr)
  DAQ                                                       1,270
  Calibration                173
  Reconstruction           1,387            508             5,080
  Simulation               8,139            318             1,588
  Reconstruction Studies   1,214
  Physics Analysis           607
  Sum                     11,520          2,223             7,938
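A quick cross-check (not from the talk) that the Cores column of the summary matches the per-task numbers quoted on the preceding pages:

```python
# Cross-check of the summary's Cores column (Page 11)
cores = {"Calibration": 173, "Reconstruction": 1387, "Simulation": 8139,
         "Reconstruction Studies": 1214, "Physics Analysis": 607}
print(sum(cores.values()))   # 11520, matching the Sum row
```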