Offline resource allocation
M. Moulson, 11 February 2003
Discussion

Outline:
- Currently mounted disk space
- Allocation of new disk space
- CPU resources

DST space requirements

Columns: DST type; luminosity processed (pb⁻¹); current output size (GB); event size (KB); events per pb⁻¹ (K); output size for 460 pb⁻¹ (GB); output size for 1 fb⁻¹ (GB)
Rows: dkc (34*), dk, d3p (170†), drc, drn, Total

Notes:
* Includes some duplicate DSTs at different DBV levels
† Assuming DSTs produced for 20% of runs
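The projections in the last two columns follow from a simple scaling: output size (GB) ≈ event size (KB) × events per pb⁻¹ × integrated luminosity / 10⁶. A minimal sketch of that arithmetic in Python, with purely illustrative inputs since most of the table's entries are not given above:

    # DST volume projection: size (GB) = event size (KB) * events/pb^-1 * luminosity (pb^-1) / 1e6.
    # The inputs below are illustrative placeholders, not the values from the slide.
    def dst_volume_gb(event_size_kb, events_per_pb_k, luminosity_pb):
        events = events_per_pb_k * 1e3 * luminosity_pb   # total number of events
        return events * event_size_kb / 1e6              # KB -> GB

    for lum in (460, 1000):   # projections for 460 pb^-1 and 1 fb^-1, as in the table
        print(f"{lum} pb^-1: {dst_volume_gb(3.0, 2000, lum):.0f} GB")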

MC DST space requirements

Active production requests:
Type                 | Events requested  | Event size (KB) | Total size (GB)
K_S → all, K_L → all | 400 M (400 pb⁻¹)  |                 |
φ → all              | 300 M (100 pb⁻¹)  | 3.4*            | 1020

Notes:
* Scaled from K_S K_L assuming .dst size proportional to .mcr size

Need for space could substantially increase:
- MC needs not satisfied by active requests
- Impossible to estimate future needs (2003 and beyond)
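The total-size column is just the product of the number of events and the event size; for the φ → all request the table's own numbers (300 M events at 3.4 KB/event) give the quoted 1020 GB. A quick check of that arithmetic:

    # MC DST volume check: total (GB) = events * event size (KB) / 1e6.
    # Values for phi -> all are taken from the table above; the K_S/K_L
    # event size is not given above, so that row is omitted.
    events, event_size_kb = 300e6, 3.4
    print(f"phi -> all: {events * event_size_kb / 1e6:.0f} GB")   # -> 1020 GB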

Currently mounted disk space

Label       | Capacity (GB) | Group    | Description
/recalled   | 540           | DST      | DST latency
/recalled1  | 540           | PROD     | Input staging for datarec
/recalled2  | 180           | DST PROD | Input staging for DST production
/recalled3  | 540           | DST      | DST latency
/recalled4  | 540           | USER     | General user recall area
/datarec    | 540           |          | Output staging for datarec/DST production
/analysis   | 180           |          | Log files, histograms for datarec
/data/farm1 | 340           |          | Data acquisition

Recover 540 GB for DSTs by reallocating /datarec1, but only temporarily
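Summing the recall areas by group shows where the 1.1 TB of DST latency space quoted on the "Allocation of new disk space" slide comes from, and what reallocating the 540 GB PROD staging disk would add. A small sketch using the capacities listed above:

    # DST latency space from the recall areas above, before and after
    # reallocating the 540 GB PROD input-staging disk.
    recall_areas = {            # label: (capacity in GB, group)
        "/recalled":  (540, "DST"),
        "/recalled1": (540, "PROD"),
        "/recalled2": (180, "DST PROD"),
        "/recalled3": (540, "DST"),
        "/recalled4": (540, "USER"),
    }
    dst_latency = sum(cap for cap, grp in recall_areas.values() if grp == "DST")
    print(dst_latency, "GB for DST latency")              # 1080 GB ~ 1.1 TB
    print(dst_latency + 540, "GB if PROD reallocated")    # 1620 GB ~ 1.6 TB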

Currently available AFS disk space

Total AFS capacity: 1745 GB (kloeafs1: 3 × GB, kloeafs2: 3 × GB)
Free space: 146 GB (8%)
Redistribution: not much to be gained

Usage patterns:
- Files tend to accumulate
- Capacity of users' PCs saturated?

AFS suitable for analysis of Ntuples?
- PAW/HBOOK very slow if Ntuple not entirely cached
- Mainly a problem for Ntuple-to-Ntuple analysis
- Need some NFS user/group disk?

Area   | Capacity (GB) | Used (%)
user   | 180           |
cpwrk  | 195           | 87
ecl    | 50            | 49
emc    | 130           | 85
kaon   | 170           | 89
kwrk   | 150           | 42
mc     | 90            | 6
phidec | 300           | 77
recwrk | 30            | 71
trg    | 100           | 71
trk    | 90            | 61
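The quoted free-space fraction is simply 146 GB out of 1745 GB, about 8%, and the used percentages in the area table convert to GB the same way. A short check (the "user" area's used fraction is not given above):

    # AFS free space and a couple of per-area usage figures, from the numbers above.
    total_gb, free_gb = 1745, 146
    print(f"free: {100 * free_gb / total_gb:.0f}%")                 # ~8%
    for area, (cap_gb, used_pct) in {"phidec": (300, 77), "cpwrk": (195, 87)}.items():
        print(f"{area}: {cap_gb * used_pct / 100:.0f} of {cap_gb} GB used")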

Allocation of new disk space

In a nutshell:
- Immediate needs for DSTs: 3.9 TB (460 pb⁻¹)
- Immediate needs for MC DSTs: 2.7 TB (700 M events currently planned)
- Not necessary to have 100% in cache, but having DSTs in cache greatly accelerates analysis work
- Current space for DST latency: 1.1 TB → 1.6 TB if PROD reallocated
- 4 TB of disk space newly acquired, mounted on fibm0a/0b (SAN/FC)

Options for use:
- If all allocated to DSTs: 75% of DSTs in disk cache
- Expand AFS user/group storage space
  - Move current recalled disks to AFS in 240 GB chunks
- Create some NFS user/group storage space
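As a rough consistency check on the "75% of DSTs in disk cache" figure: adding the new 4 TB to the roughly 1.1 TB now used for DST latency covers about 77% of the projected 6.6 TB of data and MC DSTs, in line with the slide. Which existing disks are counted toward the cache is an assumption here:

    # Fraction of the projected DST volume that would fit in the disk cache if the
    # new 4 TB were all given to DSTs. Counting the ~1.1 TB currently used for
    # DST latency as part of the cache is an assumption, not stated on the slide.
    needed_tb = 3.9 + 2.7        # data DSTs (460 pb^-1) + MC DSTs (700 M events)
    cache_tb = 1.1 + 4.0         # current DST latency space + new SAN/FC disks
    print(f"{100 * cache_tb / needed_tb:.0f}% of DSTs in cache")   # ~77%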

Reconstruction and CPU consumption

Columns: year; trigger rate (Hz); luminosity (μb⁻¹/s); integrated luminosity (pb⁻¹); φ + Bhabha rate (Hz); processing time (CPU hours/pb⁻¹); processing time on 76 CPUs (days)
Rows: 2002, 2003, 200x; (62.5 d) extrapolated assuming 2002 background and trigger conditions

Notes:
- Nominal processing power for concurrent reconstruction (in units of B80 CPUs): 34, 70, and 300 CPU units for 2002, 2003, and 200x respectively
- These numbers do not include sources of inefficiency, MC production, or concurrent reprocessing
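The last two columns of the table are related by a simple conversion: days on the farm = CPU hours per pb⁻¹ × integrated luminosity / (number of CPUs × 24), assuming the CPUs are fully and continuously loaded (the slide itself notes that inefficiencies, MC production and concurrent reprocessing are not included). A minimal sketch with illustrative placeholder inputs, since the table's entries are not given above:

    # Wall-clock days needed to reconstruct a data set on a fixed-size farm,
    # assuming 100% CPU availability. Inputs are illustrative placeholders.
    def processing_days(cpu_hours_per_pb, integrated_lumi_pb, n_cpus=76):
        return cpu_hours_per_pb * integrated_lumi_pb / (n_cpus * 24.0)

    print(f"{processing_days(40, 2000):.1f} days")   # e.g. 40 CPU h/pb^-1 over 2 fb^-1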