LHC Computing Plans
 – Scale of the challenge
 – Computing model
 – Resource estimates
 – Financial implications
 – Plans in Canada

Introduction
The LHC is a leap forward in beam energy, density and collision frequency
Analysis of the data will be a computing challenge
 – unprecedented complexity and rates of the data
 – length of the experimental programme
 – large geographically distributed collaborations
CERN has undergone a review of LHC computing
 – material and human resources at CERN are insufficient
 – the shortfall, in both MCH/yr and FTE/yr, is quantified on the following slides

Scale of the challenge
30 million events per second
Online selection:
 – reduces the rate to 100 events/s
 – and the size to 1-2 MB per event
Data rate:
 – 100 MB/s, or 10 TB/day
 – 100 days/year = 1 PB/year
1 interesting event is buried in a background of 1E13 events
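As a back-of-the-envelope check of the numbers on this slide, the sketch below works through the data-rate arithmetic in Python (the rounding to 10 TB/day and 1 PB/year follows the slide; the lower bound of the event size is used):

```python
# Back-of-the-envelope check of the LHC data-rate figures quoted above.
event_rate_hz = 100        # events/s after online selection
event_size_mb = 1.0        # 1-2 MB per event; take the lower bound

rate_mb_per_s = event_rate_hz * event_size_mb        # 100 MB/s
daily_tb = rate_mb_per_s * 86_400 / 1_000_000        # ~8.6 TB/day, quoted as ~10 TB/day
yearly_pb = daily_tb * 100 / 1_000                   # ~0.9 PB for 100 running days/year, ~1 PB

print(f"{rate_mb_per_s:.0f} MB/s, {daily_tb:.1f} TB/day, {yearly_pb:.2f} PB/year")
```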

Typical computing parameters for a single LHC experiment:
 – processing power: > 1,000,000 SpecInt95
 – storage accumulation rate: > 2 PB/year
 – data throughput: > 1 Terabit/s
 – simultaneous users: ~1000
Length of the program is expected to be > 10 years
 – represents ~10 evolutionary market cycles and one “revolutionary” change in each category (hardware, system architecture and operating system)
Large geographically distributed community
 – ~2000 physicists (4 times the LEP experiments)
 – “globalization” of the community, who all want transparent access
Planetary scale, the length of the program, and the large data volumes and rates make up the unique challenge of LHC computing

LHC Computing Model
The computing requirements cannot be satisfied at a single site
 – need to exploit geographically distributed resources
 – utilize established computing expertise and infrastructure
 – tap into funding sources not usually available to particle physics
LHC Computing Model:
 – hierarchical structure with CERN at the top tier
 – large national or intra-national centres
 – national or institutional centres
 – desktop
Close integration of computing resources at external institutions with those at the accelerator laboratory has not been attempted

LHC Computing Model
One Tier 0 site at CERN for data taking
 – ATLAS (Tier 0+1) in 2007: 400 TB disk, 9 PB tape, 700,000 SpecInt95
Multiple Tier 1 sites for reconstruction and simulation
 – 400 TB disk, 2 PB tape, 200,000 SpecInt95
Tier 2 sites for user analysis
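The hierarchy above can be pictured as a simple tree of sites carrying the capacities quoted on this slide. The sketch below is illustrative only; the class and field names are my own and not part of any ATLAS software:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Site:
    """One node in the hierarchical LHC computing model."""
    name: str
    tier: int
    disk_tb: float = 0.0      # disk storage in TB
    tape_pb: float = 0.0      # tape storage in PB
    cpu_si95: int = 0         # processing power in SpecInt95
    children: list[Site] = field(default_factory=list)

# Capacities taken from the slide (ATLAS, around 2007).
cern = Site("CERN Tier 0+1", tier=0, disk_tb=400, tape_pb=9, cpu_si95=700_000)
tier1 = Site("Regional Tier 1", tier=1, disk_tb=400, tape_pb=2, cpu_si95=200_000)
tier2 = Site("Institutional Tier 2", tier=2)   # user analysis; capacity not quoted

cern.children.append(tier1)
tier1.children.append(tier2)

def total_cpu(site: Site) -> int:
    """Sum SpecInt95 over a site and everything below it."""
    return site.cpu_si95 + sum(total_cpu(c) for c in site.children)

print(total_cpu(cern))   # 900000 for this single-branch example
```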

Grid Activities at CERN
The goal is to make many activities Grid-aware
 – data transfers between the regional centres
 – production of simulated data
 – reconstruction of real/simulated data
 – data analysis
Tools (middleware) for the Grid are being developed
 – management tools for the jobs, data, computing farms and storage
 – monitoring tools for the network and computing facilities
 – security and authentication
CERN is leading the DataGrid project, which will address these issues for particle physics, genome research and space science
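To give a feel for what the workload-management part of such middleware does, here is a toy matchmaking function in Python: a job description is compared against the advertised resources of candidate sites and a suitable site is chosen. This is purely illustrative; it is not the DataGrid API, and all names and fields are made up:

```python
# Toy illustration of Grid workload management: match a job's requirements
# against the advertised resources of candidate sites. Not real middleware.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class SiteInfo:
    name: str
    free_cpus: int
    free_disk_gb: float
    has_dataset: bool      # whether the input data is already staged locally

@dataclass
class JobRequest:
    cpus: int
    disk_gb: float
    needs_dataset: bool

def match_site(job: JobRequest, sites: list[SiteInfo]) -> SiteInfo | None:
    """Return a site that satisfies the job's requirements,
    preferring sites that already hold the input data."""
    candidates = [s for s in sites
                  if s.free_cpus >= job.cpus and s.free_disk_gb >= job.disk_gb
                  and (s.has_dataset or not job.needs_dataset)]
    candidates.sort(key=lambda s: not s.has_dataset)   # data-local sites first
    return candidates[0] if candidates else None

sites = [SiteInfo("site-a", free_cpus=8, free_disk_gb=50, has_dataset=False),
         SiteInfo("site-b", free_cpus=32, free_disk_gb=500, has_dataset=True)]
print(match_site(JobRequest(cpus=16, disk_gb=100, needs_dataset=True)))  # site-b
```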

Deployment at CERN
Prototype stage:
 – goal is to have a facility with 50% of the scale required by a single LHC experiment
Pilot stage: 2005
 – services set up for each experiment
Production:
 – expansion and conversion of the pilot service into production mode

Human Resources
Manpower estimates at CERN for all LHC experiments
 – do not include online or core software, which are the responsibility of the experiments
 – the LHC Review identified core software as critically undermanned
Prototype phase: ~75 FTEs per year
Production phase: ~60 FTEs per year
CERN IT can provide 20 FTEs per year; the remaining ~55 (prototype) and ~40 (production) FTEs per year must be picked up by the external institutions.

Financial implications
Prototype phase:
 – 10 MCH/yr hardware + 13 MCH/yr manpower
Pilot and production phases:
 – 35 MCH/yr hardware + 10 MCH/yr manpower
Missing funding:
 – prototype: 13 MCH/yr
 – production (> 2005): 40 MCH/yr
ATLAS institutions will need to contribute 3 MCH/year, rising to 10 MCH/year, for CERN-based resources
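A quick sum of the components quoted above (a minimal check in Python; the per-phase totals are simply the hardware plus manpower figures):

```python
# Sum the per-phase hardware and manpower components quoted on the slide (MCH/yr).
prototype = {"hardware": 10, "manpower": 13}
production = {"hardware": 35, "manpower": 10}

print(sum(prototype.values()))    # 23 MCH/yr during the prototype phase
print(sum(production.values()))   # 45 MCH/yr during pilot/production
```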

ATLAS-Canada
Victoria, UBC, TRIUMF, Alberta, Montreal, Toronto, York, Carleton
 – 30 faculty at 8 institutions, 100 Canadians
TRIUMF has contributed $30 million to the LHC accelerator through in-kind contributions of items such as magnets
NSERC MIG of $15 million for detector components that are currently under construction

ATLAS-Canada Computing
We want to be leaders in the analysis of ATLAS data. To do this we require:
 – a large Tier 1-2 type facility in Canada
   we want all the data during the first few years of ATLAS
 – Tier 2-3 sites at each institution or groups of institutions
   1 or 2 may have additional resources dedicated to MC production

ATLAS Computing in Canada
(diagram: CERN feeding a Canadian Tier 1, with Tier 2 sites below it)
Tier 1: 400 TB disk, 2 PB tape, 200,000 SpecInt95
 – capacity ramped up in stages, starting at 10% and completing in 2006

Current plans
Project Outlines to the 2 CFI International Funds due July 3
Joint Venture - $100M for 4 projects in Canada
 – Canadian Tier 1 facility at the University of Toronto with mass storage and processing
 – Monte Carlo production sites (UofA and UdeM)
 – Tier 2 sites at other institutions (Carleton, TRIUMF, Victoria)
 – ballpark estimate is $20M for hardware and manpower

Current plans
Access Venture - $100M for projects outside Canada
 – Establish a team of 5 HQ Canadians at CERN to contribute to central activities
   the team would focus on a single project
   team members would be resident at CERN for 2-3 years
   strongly supported by CERN and ATLAS management
   $4M requested
 – Contribute to ATLAS computing hardware at CERN
   ATLAS hardware costs are estimated to be $9M (2001-4) and $25M (2005-7)
   the ATLAS-Canada share is 3%: $250K (2001-4) and $750K (2005-7)
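The 3% share works out as follows (a minimal check in Python; the slide rounds the first figure to roughly $250K):

```python
# ATLAS-Canada's 3% share of the estimated ATLAS computing hardware costs at CERN.
share = 0.03
print(share * 9_000_000)    # 270000.0 -> quoted as ~$250K for 2001-4
print(share * 25_000_000)   # 750000.0 -> $750K for 2005-7
```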

Other activities
In parallel, a number of requests have been submitted to CFI for shared computing resources
 – Toronto: Linux cluster
   supports CDF and ATLAS-dev
 – WestGrid (UA, Calgary, SFU, UBC, TRIUMF): shared-memory machines, Linux cluster and storage
   supports TRIUMF experiments and ATLAS-dev
 – Victoria (UVic, DAO, PFC, PGC, as well as UBC, TRIUMF): large storage facility and shared-memory machine
   supports BaBar and ATLAS-dev

TRIUMF’s Role
Our community needs a centre for subatomic computing
 – Canada has few physicists with computing expertise
   particularly evident with the new OO software; the US leads ATLAS software development
 – ATLAS is not the only experiment with large storage requirements
 – we should be active in Grid research and e-science
TRIUMF is the natural location for such a facility
 – TRIUMF cannot apply to CFI
 – a possible option for the next 5-year plan

R&D Activities
Commodity Linux farms:
 – Alberta (Thor) for the ATLAS 3rd-level trigger
 – Victoria for BaBar physics analysis
Storage management:
 – use of a Hierarchical Storage Manager (HSM) is underway at Victoria
 – plan to connect the 20 TB tape library to the BaBar Linux farm
Grid:
 – R. Kowalewski and R.S. funded by C3.ca for Grid research
 – established a small testbed between Victoria and TRIUMF
 – collaborating with the NRC HPC group in Ottawa
 – goal is to run BaBar MC production over a Grid

Summary
An exciting time in computing for particle physics
 – LHC and ATLAS will push the limits of computing technology
Canada will need to contribute to central resources and create a regional centre
 – we are applying to the CFI International Funds
 – other options include existing facilities funded by the CFI Innovation Fund
 – also NSERC and the TRIUMF 5-year plan
Canadians are actively involved in computing issues
 – playing a leading role in ATLAS software development
 – pursuing initiatives to build up local resources (CFI) as well as development projects (Linux clusters, storage, Grid)