MC Production in Canada
Pierre Savard, University of Toronto and TRIUMF
IFC Meeting, October 2003

Overview
The collaboration needs large, common Monte Carlo simulation datasets:
– Frank Wuerthwein originally set up the MC production group to organize production of "official" MC datasets
– Production of these datasets was done in large part on CDF's reconstruction and analysis clusters at FNAL
CDF's Canadian institutions have large Beowulf clusters with excellent connectivity to FNAL. New investigators (Pinfold, Savard, Warburton) proposed to exploit these resources to produce most of CDF's official MC. We were granted operating funds by the Canadian funding agency and signed an MOU with CDF management.

Resources in Canada
Toronto: acquired a large Linux cluster with ~450 CPUs last year

Resources in Canada
Toronto connectivity: directly connected to Canada's research network (Gigabit across the country and to STARLIGHT)
Alberta: Thor multi-processor facility
– 170 nodes of dual 2 GHz CPUs
– 4 TB of disk
– network fabric is GigE
McGill: a machine comparable to the Toronto cluster; it has not been used yet

MC Production MOU
– Dedicate a MINIMUM of 250 GHz of equivalent CPUs to produce official CDF MC datasets for 2 years (a capacity sketch follows below)
– Assume coordination of MC production
– Assume responsibility for transfer of data to FNAL and to tape (all official MC datasets go into the DFC)
– Dedicate the necessary human resources to coordinate and manage production at the computing facilities (1.7 FTE)
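For concreteness, a back-of-the-envelope sketch of how the pledged minimum compares with the hardware listed on the resource slides. "GHz of equivalent CPUs" is read here as a simple sum of clock speeds; Alberta's 170 dual-2 GHz nodes are from the slides, while the per-CPU clock speed for Toronto's ~450 CPUs is an assumption for illustration:

    # Rough check of the pledged minimum of 250 GHz of equivalent CPUs
    # against the clusters on the "Resources in Canada" slides.
    # Toronto's 2 GHz per CPU is an assumed figure (slides give only a count).
    PLEDGED_GHZ = 250.0

    clusters_ghz = {
        "Alberta Thor": 170 * 2 * 2.0,  # nodes x CPUs/node x GHz (from slides)
        "Toronto":      450 * 2.0,      # ~450 CPUs, assumed 2 GHz each
    }

    total_ghz = sum(clusters_ghz.values())
    print(f"Aggregate: {total_ghz:.0f} GHz, "
          f"{total_ghz / PLEDGED_GHZ:.1f}x the pledged minimum")

Even with conservative clock assumptions, the combined farms sit well above the 250 GHz floor, which is consistent with the "operated beyond the minimum" claim later in the talk.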

MC Production in Canada
The proposal is a good match to our resources: MC requires a lot of CPU (generation + simulation + reconstruction) and good bandwidth (to push the data back).
The Toronto farm is set up as a CDF FNAL machine: Fermi Linux, Fermi batch system.
The main other user of the Toronto and Alberta clusters is ATLAS, for data/GRID challenges.
Note that the human-resource component should not be underestimated: one needs to prepare and submit jobs, handle the data, keep track of logs, transfer data, etc. (a sketch of this bookkeeping follows below). This does not include machine administration or maintenance.
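To make that bookkeeping concrete, here is a minimal, hypothetical sketch of the shape of such a production wrapper: split a dataset request into fixed-size batch jobs, submit each one, and keep per-segment logs so failed segments can be resubmitted. The submit command, script name, and segment size are placeholders, not the actual Toronto/Fermi-batch tooling:

    # Hypothetical production wrapper (not the real Toronto scripts):
    # split an official-MC request into batch jobs and track the logs.
    import subprocess
    from pathlib import Path

    EVENTS_PER_JOB = 25_000  # assumed segment size

    def submit_dataset(dataset_id: str, total_events: int, log_dir: Path) -> None:
        log_dir.mkdir(parents=True, exist_ok=True)
        n_jobs = -(-total_events // EVENTS_PER_JOB)  # ceiling division
        for seg in range(n_jobs):
            log = log_dir / f"{dataset_id}_seg{seg:04d}.log"
            with log.open("w") as lf:
                # Placeholder command; a real farm would invoke the
                # Fermi batch system (or condor_submit, qsub, ...).
                subprocess.run(
                    ["batch_submit", "run_cdf_mc.sh", dataset_id, str(seg)],
                    stdout=lf, stderr=subprocess.STDOUT, check=True)
        print(f"{dataset_id}: {n_jobs} jobs submitted; logs in {log_dir}")

Multiplied across hundreds of datasets, this job preparation, log tracking, and resubmission is where the 1.7 FTE in the MOU goes.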

Delivered MC Production
Last year we produced more than half of the official MC for CDF. For the Summer conferences alone, we:
– produced 43 of 55 million events
– transferred ~10 TB at Mbytes/s (average rate; see the back-of-the-envelope below)
– wrote 260 datasets to tape
We operated beyond the proposed "minimum": during periods of high demand, we typically allocated more than 300 CPUs in Toronto and 50 CPUs in Alberta. We also regularly produce large samples to test new simulation releases. In total, ~80 million events were produced last year.
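The numeric average rate is missing from the slide, so as a hedged illustration, here is what sustaining ~10 TB of transfers implies at a few assumed rates:

    # The average-rate figure is missing from the slide, so show what
    # moving ~10 TB back to FNAL implies at a few assumed sustained rates.
    TOTAL_MB = 10e6  # ~10 TB expressed in MB
    for rate_mb_s in (5, 10, 20):  # assumed rates, MB/s
        days = TOTAL_MB / rate_mb_s / 86_400
        print(f"{rate_mb_s:>2} MB/s -> ~{days:.0f} days for ~10 TB")

At single-digit MB/s the transfer occupies weeks of sustained running, which is why the slides stress the Gigabit research-network connectivity to FNAL.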

Conclusions
Large-scale Monte Carlo dataset production at remote institutions has been a success, and it reduces the load on FNAL's reconstruction and analysis clusters.
We have followed a detector-hardware model:
– we provide the hardware component and the human resources necessary to operate it
– the computers are funded by the Canadian government, which also provides operating funds