The following is a collection of slides from a few recent talks on computing for ATLAS in Canada, plus a few new ones. I might refer to all of them, or I might not, depending on time and the scope Les wants covered.

M.C. Vetterli; SFU/TRIUMF The Canadian Model
- Establish a large computing centre at TRIUMF that will be on the LCG and will participate in the common tasks associated with Tier-1 and Tier-2 centres.
- Canadian groups will use existing CFI facilities (or what they will become) to do physics analysis. They will access data and the LCG through TRIUMF.
- The jobs are smaller at this level and can be more easily integrated into shared facilities. We can also be independent of LCG middleware.
In this model, the TRIUMF centre acts as the hub of the Canadian computing network, and as an LCG node.

The ATLAS-Canada Computing Model (diagram): a TRIUMF gateway (cpu/storage, experts) links the Canadian Grid (UVic, SFU, UofA, UofT, Carleton, UdeM; CFI funded) over CA*Net4 to CERN and the ATLAS Grid (USA, Germany, France, UK, Italy, …). Labelled flows include MC data, ESD', calibration, algorithms, MC production, AOD, DPD, ESD, access to RAW & ESD, access to the CDN and ATLAS Grids, and technical expertise.

M.C. Vetterli; SFU/TRIUMF What Will We Need at TRIUMF?
- Total computing power needed: 1.8 MSI2k (roughly 250 dual 10 GHz nodes, or 5000 x 1 GHz CPUs)
- Total storage required: 340 TB of disk and 1.2 PB of tape
- We assume that the network will be 10 GbitE for both the LAN and WAN
- These numbers have been supported by an expert advisory committee
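As a sanity check on those equivalences: the three numbers are mutually consistent if one assumes performance scales linearly with clock speed at roughly 360 SI2k per GHz, which also matches the "1 kSI2k ≈ one 2.8 GHz Xeon" rule of thumb on the DC2 resources slide further down. A minimal sketch of that arithmetic, nothing more:

```python
# Sanity check: 1.8 MSI2k vs. "250 dual 10 GHz nodes" vs. "5000 x 1 GHz CPUs",
# assuming performance scales linearly with clock speed.
total_si2k = 1.8e6

si2k_per_ghz = total_si2k / (5000 * 1.0)          # from 5000 x 1 GHz CPUs -> 360 SI2k/GHz
dual_10ghz_nodes = total_si2k / (2 * 10 * si2k_per_ghz)
xeon_2_8_ghz_ksi2k = 2.8 * si2k_per_ghz / 1000.0  # ~1 kSI2k per 2.8 GHz Xeon

print(si2k_per_ghz, dual_10ghz_nodes, xeon_2_8_ghz_ksi2k)  # 360.0 250.0 ~1.0
```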

M.C. Vetterli; SFU/TRIUMF Acquisition Profile (table): for each year of the ramp-up, the fraction of the final capacity to be in place: 10%, 33%, 67%, 100%, 133%; with corresponding rows for Disk (TB), Tape (TB), and CPU (MSI2k).
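The per-year values did not survive the transcript; the sketch below only illustrates how such a ramp-up profile translates into yearly targets, under the assumption (mine, not the official profile) that the fractions apply to the final capacities quoted on the previous slide.

```python
# Illustrative ramp-up arithmetic: yearly targets as a fraction of the final
# capacity (assumed here to be the totals from the previous slide; the real
# acquisition profile may differ).
fractions = [0.10, 0.33, 0.67, 1.00, 1.33]
finals = {"CPU (MSI2k)": 1.8, "Disk (TB)": 340, "Tape (TB)": 1200}

for resource, total in finals.items():
    print(resource, [round(f * total, 2) for f in fractions])
```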

M.C. Vetterli; SFU/TRIUMF The TRIUMF Centre - II
- 8 NEW people to run the centre are included in the budget.
- 4 for system support; 4 for software/user support.
- Also one dedicated technician.
- Personnel in the university centres will be mostly for system support.
- More software support will be available from ATLAS postdocs.

M.C. Vetterli; SFU/TRIUMF Status of Funding
- The TRIUMF centre will be funded through the next TRIUMF 5-year plan (starts Apr. 1). A decision on this is expected around the end of this year.
- University centres are funded through the Canada Foundation for Innovation and the provincial governments. These centres exist and should continue to be funded (shared facilities).
- Ask CFI for funds for a second large centre? This is driven by new requirements for T1 centres; we have just started discussing this.

M.C. Vetterli; SFU/TRIUMF The TRIUMF Prototype Centre
- Hardware:
  - 5 dual 2.8 GHz Xeon nodes
  - 6 white boxes (2 CE, LCFGng, UI, LCG-GIIS, spare)
  - 1 SE (770 Gbytes usable disk space)
- Functionality:
  - LCG core node (CE #1)
  - Gateway to Grid-Canada & WestGrid (CE #2)
  - Canadian regional centre: coordinates & pre-certifies Canadian LCG centres; primary contact with LCG
- Middleware:
  - Grid inter-operability: integrate non-LCG sites; there is a lot of interest in this (UK, US). Rod Walker (SFU research associate) has been invaluable!

M.C. Vetterli; SFU/TRIUMF The Other Canadian Sites
- Victoria: Grid-Canada Production Grid (PG-1); Grid inter-operability (Dan Vanderster et al.)
- SFU/WestGrid: Non-LCG test site (incorporate into LCG through TRIUMF)
- Alberta: Grid-Canada Production Grid (PG-1); LCG node; Coordination of DC2 for Canada (Bryan Caron)
- Toronto: LCG node; ATLAS software mirror
- Montreal: LCG node
- Carleton: LCG node

M.C. Vetterli; SFU/TRIUMF Canadian DC2 Computing Resources (Note: 1 kSI2k ≈ one 2.8 GHz Xeon)
- 400 x 2.8 GHz CPUs
- 23 TBytes of disk
- 50 TBytes of tape
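Using the slide's own conversion (1 kSI2k ≈ one 2.8 GHz Xeon), the DC2 resources can be put next to the planned TRIUMF Tier-1 capacity from the earlier slide. This is illustrative arithmetic only; the comparison is mine, not a statement from the talk.

```python
# Rough, illustrative comparison of Canadian DC2 resources with the planned
# TRIUMF Tier-1 capacity (numbers taken from the slides; framing is mine).
dc2   = {"cpu_msi2k": 400 * 1.0 / 1000.0, "disk_tb": 23,  "tape_tb": 50}
tier1 = {"cpu_msi2k": 1.8,                "disk_tb": 340, "tape_tb": 1200}

for key in dc2:
    print(f"{key}: {100 * dc2[key] / tier1[key]:.0f}% of the planned Tier-1 capacity")
```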

M.C. Vetterli; SFU/TRIUMF Federated Grids for ATLAS DC2 (diagram): Grid-Canada PG-1 and WestGrid are federated with LCG through the SFU/TRIUMF gateways (LCG/Grid-Can and LCG/WestGrid), in addition to the LCG resources in Canada.

M.C. Vetterli; SFU/TRIUMF Linking HEPGrid to LCG (diagram; components: GC resources 1..n, the Grid-Canada negotiator/scheduler, WestGrid at UBC/TRIUMF, TRIUMF cpu & storage, the TRIUMF negotiator/scheduler and RB/scheduler, and the LCG BDII/RB/scheduler)
Resource publishing (class ads / MDS):
1) Each GC resource publishes a class ad to the GC collector.
2) The GC CE aggregates this info and publishes it to TRIUMF as a single resource.
3) The same is done for WG.
4) TRIUMF aggregates GC & WG and publishes this to LCG (via MDS) as one resource.
5) TRIUMF also publishes its own resources separately.
Job flow:
1) The LCG RB decides where to send the job (GC/WG or the TRIUMF farm).
2) The job goes to the TRIUMF farm, or TRIUMF decides to send it to GC or WG.
3) The CondorG job manager at TRIUMF builds a submission script for the TRIUMF Grid.
4) The TRIUMF negotiator matches the job to GC or WG.
5) The job is submitted to the proper resource.
6) The process is repeated on GC if necessary.
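The flow above is essentially Condor-style class-ad matchmaking applied hierarchically. The toy sketch below only illustrates the core idea (resources publish class ads, an aggregator republishes them as a single resource, and a negotiator matches a job against the aggregated ads); it is a conceptual illustration under my own simplifications, not the actual Condor-G or LCG middleware, and the attribute names in it are invented.

```python
# Toy sketch of hierarchical class-ad aggregation and matchmaking
# (conceptual only; not real Condor-G / LCG code; attribute names invented).

def aggregate(site_name, resource_ads):
    """Republish a set of resource class ads as one aggregated ad (publishing steps 2-4)."""
    return {
        "name": site_name,
        "free_cpus": sum(ad["free_cpus"] for ad in resource_ads),
        "software": set.union(*(ad["software"] for ad in resource_ads)),
    }

def match(job_ad, resource_ads):
    """Negotiator: pick a resource that satisfies the job's requirements (job-flow step 4)."""
    for ad in resource_ads:
        if ad["free_cpus"] >= job_ad["cpus"] and job_ad["software"] <= ad["software"]:
            return ad["name"]
    return None

# Publishing steps 1-3: GC resources publish class ads, the GC CE aggregates
# them into one ad; the same is done for WG.
gc = aggregate("Grid-Canada", [
    {"free_cpus": 40, "software": {"atlas-8.0.5"}},
    {"free_cpus": 12, "software": {"atlas-8.0.5"}},
])
wg = aggregate("WestGrid", [{"free_cpus": 200, "software": {"atlas-8.0.5"}}])

# Job-flow step 4: the TRIUMF negotiator matches an incoming job to GC or WG.
job = {"cpus": 100, "software": {"atlas-8.0.5"}}
print(match(job, [gc, wg]))   # -> "WestGrid" (GC has only 52 free CPUs)
```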