ZEUS Computing Board Report, ZEUS Executive Meeting, March 1, 2006. Wesley H. Smith, University of Wisconsin.

Presentation transcript:

ZEUS Computing Board Report
ZEUS Executive Meeting, March 1, 2006
Wesley H. Smith, University of Wisconsin, on behalf of the ZEUS Computing Board.
This talk is available on:
Membership:
The offline coordination team: Analysis software & MC: A. Geiser; Reconstruction, code management & database: U. Stoesslein; User support and infrastructure: K. Wrona.
Five external members: J. Ferrando, T. Haas, M. Kuze, R. Mankel, W. Smith (Chair).
First meeting: February 27, 2006.

ZCB Mandate
The ZCB will normally meet three times a year, to coincide with the ZEUS collaboration meetings. The ZCB is charged with the following:
Review the current status and short-term planning of ZEUS offline; recommend changes and new initiatives where necessary.
Review the progress of the medium-term planning as outlined in the ZEUS-note (ZEUS Computing Strategy: an update for years ); recommend changes and new initiatives where necessary.
Towards the end of 2006, formulate a plan for computing beyond the current planning period.
The ZCB Chair reports to the ZEUS executive session at the ZEUS collaboration meetings.

ZCB Planning
Decide that the first step is to establish the context in which present activities and future plans can be evaluated.
Take a view of the long-range plans and establish what steps need to be taken now to prepare ourselves for the rest of ZEUS physics.
Review the salient issues to understand what we need to look at in our next meetings in order to make recommendations to the ZEUS executive:
Present status of computing systems & resources;
Data & MC processing;
Long-term physics analysis & file storage.

Computing Hardware Evaluation (info from K. Wrona)
Present computing is in good shape.
Mass storage costs are manageable: maintenance (silos, drives, servers provided by DESY-IT; ZEUS share) costs 70 K€, and cartridges cost 60 K€ for 850 cartridges for new ZEUS data. This assumes long-term storage of only one MDST version, restricting the amount of MC (see later), and storing reprocessed data on recovered tapes (old MDSTs). A small arithmetic sketch of these numbers follows below.
CPU capacity is adequate for analysis & reconstruction: the ZARAH farm has 184 CPUs in 92 dual-CPU nodes, including the CPU upgrades planned this year to replace the oldest nodes.
The central file server has enough capacity: remove Objectivity, use zeslite; maintenance is 6.6 K€/yr.
Workgroup servers are presently sufficient, provided institutes continue to provide disk space and servers for their members. Central WG servers are being reduced in number, but with higher power per server.
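For orientation, a minimal back-of-envelope sketch (Python) of the tape figures quoted above. The per-cartridge capacity is not stated in the talk; 200 GB is an assumed value for illustration only, so the total-capacity line is an assumption, not a quoted number.

```python
# Back-of-envelope check of the tape-storage numbers quoted above.
# The per-cartridge capacity is NOT given in the talk; 200 GB is an
# assumed figure for illustration only.

cartridge_budget_eur = 60_000      # quoted: 60 K EUR for new ZEUS data
n_cartridges         = 850         # quoted
maintenance_eur      = 70_000      # quoted: silos, drives, servers (ZEUS share)
assumed_gb_per_cart  = 200         # assumption, not from the talk

cost_per_cartridge = cartridge_budget_eur / n_cartridges
total_capacity_tb  = n_cartridges * assumed_gb_per_cart / 1000

print(f"cost per cartridge : {cost_per_cartridge:.0f} EUR")        # ~71 EUR
print(f"capacity (assumed) : {total_capacity_tb:.0f} TB")          # 170 TB under the assumption
print(f"media + maintenance: {(cartridge_budget_eur + maintenance_eur)/1000:.0f} K EUR")
```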

Evaluate: Software, Cache, GRID (info from K. Wrona)
Present software evolution needs to continue:
PC farms & WG servers migrate DL5 → SL3, with widespread use & support through 2008.
Analysis software migrates PAW → ROOT5 (ZEUS default): good for training students and postdocs, and compatible with the latest tools; keep "legacy" PAW capability.
We need to invest in dCache: replace old dCache servers & add more; add 36 TB using better, more cost-effective hardware supported by DESY-IT.
Increase use of GRID resources for MC & analysis:
There are plenty of GRID resources for ZEUS MC generation.
If there is a shortage of analysis computing, move more analysis to the GRID. The first prototype analysis is working on the GRID -- some analyses are appropriate, but the GRID is not appropriate for reconstruction, reprocessing, etc.
We need a computing reserve beyond ZEUS resources alone for analysis.
We need to maintain/enhance ZEUS operation on the GRID -- a dynamic target! May need "deals" with ZEUS groups that have GRID resources.

Computing Finances in the Future
The 2006 ZARAH budget was reduced from 160 K€ to 140 K€ by VAT. The assumed 2007 budget is the same.
We have no information about the 2008 budget and beyond; it may be significantly reduced. We need to start the discussion now; it is critical to planning.
In what follows, assume as little additional funding as possible: basically just support for what we have, with no enhancements.

Data Processing (info from U. Stoesslein)
Estimate the ultimate ZEUS data size (a back-of-envelope check of these numbers follows below):
04/05 e⁻ data: TLT σ = 1 μb, 41% efficiency, 346 days of HERA running at 1 pb⁻¹/day → RAW 15 TB & MDST 7 TB.
Estimate for 06 data: TLT σ = 0.75 μb, 50% efficiency, 310 days of HERA running at 1 pb⁻¹/day → RAW 13 TB & MDST 7 TB.
Estimate for 07 data: TLT σ = 0.75 μb, 60% efficiency, 180 days of HERA running at 1 pb⁻¹/day → RAW 10 TB & MDST 4.5 TB.
Reprocessing:
We need resources if we plan to start reprocessing data as soon as possible.
Suppose we start reprocessing of the 06 data in summer, keeping at most a 6-month interval; this allows completion of all reprocessing by the end of 2007, and implies that reprocessing of the 04/05 data also starts in summer 2006.
This implies a greater demand on computing, since reconstruction, reprocessing and analyses run in parallel; seek additional resources for analysis from the GRID.
What about a "grand reprocessing"? If we follow the previous ZEUS timeline it would be ready in the middle of the year -- is that realistic?
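The data-volume estimates above follow from a simple event-count calculation: events = TLT cross section × delivered luminosity × efficiency. A minimal sketch in Python, using only the quoted cross sections, efficiencies, running days and 1 pb⁻¹/day; the implied average per-event sizes (roughly 100 kB RAW and 50-60 kB MDST) are derived here for illustration and are not quoted in the talk.

```python
# Back-of-envelope check of the ZEUS data-size estimates above:
# events = sigma_TLT x (days x 1 pb^-1/day) x efficiency,
# then the average event size implied by the quoted RAW/MDST totals.

PB_PER_MICROBARN = 1.0e6  # 1 ub = 10^6 pb

datasets = {
    # label: (sigma_TLT [ub], efficiency, HERA days at 1 pb^-1/day, RAW [TB], MDST [TB])
    "04/05 e-": (1.00, 0.41, 346, 15.0, 7.0),
    "2006":     (0.75, 0.50, 310, 13.0, 7.0),
    "2007":     (0.75, 0.60, 180, 10.0, 4.5),
}

for label, (sigma_ub, eff, days, raw_tb, mdst_tb) in datasets.items():
    events = sigma_ub * PB_PER_MICROBARN * days * 1.0 * eff   # 1 pb^-1 per day
    raw_kb_per_event  = raw_tb  * 1e9 / events                # 1 TB = 10^9 kB
    mdst_kb_per_event = mdst_tb * 1e9 / events
    print(f"{label:8s}: {events/1e6:4.0f} M events, "
          f"~{raw_kb_per_event:.0f} kB/event RAW, ~{mdst_kb_per_event:.0f} kB/event MDST")
# Expected output: ~142 M (04/05), ~116 M (2006), ~81 M (2007) events,
# with ~105-125 kB/event RAW and ~50-60 kB/event MDST.
```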

Analysis & MC (info from A. Geiser)
Analysis plan assumptions:
MC = 5 × data; MC event records are ~30% longer than data records.
Move to common ntuples, based on "certified & stable" ORANGE routines: this makes analyses more efficient (common tools, systematics, etc.); it is not realistic to have each analysis keep its own MC or its own ntuples.
MC sample size:
Based on available storage, limit the 2006 sample to ~75 TB, corresponding to ~625 M events at 120 kB/event; this requires generating ~12.5 M events/week -- OK on the GRID (see the sketch below).
Use of common ntuples to save resources:
Physics groups maintain them "in common"; move resources from individual ntuple sets to common ntuple sets. Some groups (HFL) have done this, others (QCD) have not.
Eventually, remove the need for MDST? (What is the size of the ntuple?)
As a test, temporarily allocate 1 TB of ZEUS central resources for the common ntuple.
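The MC sample numbers above are simple arithmetic. A minimal Python sketch, assuming ~50 production weeks per year (the talk quotes only the weekly rate, so the week count is an assumption); the cross-check against the "MC = 5 × data" assumption uses the ~116 M events estimated for the 2006 data in the previous sketch.

```python
# Sketch checking the MC-sample arithmetic above; assumptions are labelled.

STORAGE_CAP_TB      = 75     # quoted 2006 MC storage limit
MC_EVENT_SIZE_KB    = 120    # quoted average MC event size
WEEKS_OF_PRODUCTION = 50     # assumption: ~50 production weeks in the year

events = STORAGE_CAP_TB * 1e9 / MC_EVENT_SIZE_KB        # 1 TB = 10^9 kB
rate_per_week = events / WEEKS_OF_PRODUCTION

print(f"MC sample     : ~{events/1e6:.0f} M events")              # ~625 M
print(f"required rate : ~{rate_per_week/1e6:.1f} M events/week")  # ~12.5 M/week

# Cross-check against "MC = 5 x data", using the ~116 M events estimated
# for the 2006 data in the previous sketch:
print(f"5 x 2006 data : ~{5 * 116e6 / 1e6:.0f} M events")          # ~580 M, below the 625 M cap
```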