U.S. ATLAS Computing Facilities, DOE/NSF Review of US LHC Software & Computing Projects, Bruce G. Gibbard, BNL, 18-20 January 2000.

Presentation transcript:

Slide 1: U.S. ATLAS Computing Facilities
DOE/NSF Review of US LHC Software & Computing Projects
Bruce G. Gibbard, BNL
January 2000

Slide 2: US ATLAS Computing Facilities
Facilities procured, installed and operated...
– ...to meet U.S. 'MOU' obligations
  Direct IT responsibility (Monte Carlo, for example)
  Support for detector construction, testing, & calibration
  Support for software development and testing
– ...to enable effective participation by US physicists in the ATLAS physics program
  Direct access to and analysis of physics data sets
  Support for simulation, re-reconstruction, and reorganization of data associated with that analysis

Slide 3: Setting the Scale
Uncertainties in defining the facilities scale:
– Five years of detector, algorithm & software development
– Five years of computer technology evolution
Start from the ATLAS estimate & Regional Center guidelines
Adjust for the US ATLAS perspective (experience, priorities and facilities model)

Slide 4: ATLAS Estimate & Guidelines
A Tier 1 Center in '05 should include...
– 30,000 SPECint95 for Analysis
– 20,000 SPECint95 for Simulation
– 100 TBytes/year of On-line (Disk) Storage
– 200 TBytes/year of Near-line (Robotic Tape) Storage
– 100 Mbit/sec connectivity to CERN
Assume no major raw data processing or handling outside of CERN

Slide 5: US ATLAS Perspective
US ATLAS facilities must be adequate to meet any reasonable U.S. ATLAS computing need (the U.S. role in ATLAS should not be constrained by a computing shortfall; rather, it should be enhanced by computing strength)
There must be significant capacity beyond that formally committed to International ATLAS which can be allocated at the discretion of U.S. ATLAS

Slide 6: Facilities Architecture
Consists of transparent, hierarchically distributed computing resources connected into a GRID:
– Primary ATLAS Computing Centre at CERN
– US ATLAS Tier 1 Computing Center at BNL: national in scope, at ~20% of CERN
– US ATLAS Tier 2 Computing Centers: six, each regional in scope at ~20% of Tier 1; likely one of them at CERN
– US ATLAS Institutional Computing Facilities: institutional in scope, not project supported
– US ATLAS Individual Desk Top Systems
(A rough scaling sketch of these fractions follows below.)
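For illustration only (not part of the original slide): the nominal fractions above already fix the relative scale of the US facilities. A minimal Python sketch, assuming Tier 1 at ~20% of CERN and each of the six Tier 2 centers at ~20% of Tier 1, as stated on the slide:

```python
# Back-of-envelope tier scaling using only the fractions quoted on the slide.
CERN_CAPACITY = 1.0    # normalize CERN's capacity to 1.0 (arbitrary units)
TIER1_FRACTION = 0.20  # "national in scope at ~20% of CERN"
TIER2_FRACTION = 0.20  # "each regional in scope at ~20% of Tier 1"
N_TIER2 = 6            # six Tier 2 centers

tier1 = TIER1_FRACTION * CERN_CAPACITY
tier2_each = TIER2_FRACTION * tier1
tier2_total = N_TIER2 * tier2_each

print(f"Tier 1          : {tier1:.2f} x CERN")
print(f"Each Tier 2     : {tier2_each:.3f} x CERN")
print(f"All six Tier 2s : {tier2_total:.2f} x CERN ({tier2_total / tier1:.1f} x Tier 1)")
```

Under these nominal fractions the six Tier 2 centers together slightly exceed the Tier 1 center (about 1.2x), consistent with the distributed model's emphasis on regional capacity.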

Slide 7: Schematic of Model (figure)

Slide 8: Distributed Model
Rationale (benefits):
– Improved user access to computing resources
  Higher performance regional networks
  Local geographic travel
– Enable local autonomy
  Less widely shared resources
  More locally managed
– Increased capacities
  Encourage integration of other equipment & expertise (institutional, base program)
  Additional funding options (Com Sci, NSF)

Slide 9: Distributed Model (2)
But increased vulnerability (risk):
– Increased dependence on the network
– Increased dependence on GRID infrastructure software and hence on R&D efforts
– Increased dependence on facility modeling tools
– More complex management
The risk/benefit analysis must yield a positive result

Slide 10: Adjusted for the U.S. ATLAS Perspective
Total US ATLAS facilities in '05 should include...
– 10,000 SPECint95 for Re-reconstruction
– 85,000 SPECint95 for Analysis
– 35,000 SPECint95 for Simulation
– 190 TBytes/year of On-line (Disk) Storage
– 300 TBytes/year of Near-line (Robotic Tape) Storage
– Dedicated OC Mbit/sec Tier 1 connectivity to each Tier 2
– Dedicated OC Mbit/sec to CERN
(A comparison sketch against the slide 4 guideline follows below.)
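For illustration only (these ratios are computed from the numbers on slides 4 and 10, not stated anywhere on the slides, and the comparison is approximate because slide 4 described the Tier 1 center alone while this slide gives total US ATLAS facilities), a minimal Python sketch of how the US-adjusted targets compare to the ATLAS guideline:

```python
# ATLAS Tier 1 guideline (slide 4) vs. US ATLAS-adjusted totals (slide 10).
# Units: SPECint95 for CPU, TBytes/year for storage.
guideline = {"analysis CPU": 30_000, "simulation CPU": 20_000,
             "disk": 100, "tape": 200}
adjusted = {"analysis CPU": 85_000, "simulation CPU": 35_000,
            "disk": 190, "tape": 300}

for item in guideline:
    ratio = adjusted[item] / guideline[item]
    print(f"{item:15s}: {guideline[item]:>7,} -> {adjusted[item]:>7,}  (x{ratio:.2f})")

# Re-reconstruction (10,000 SPECint95) appears only in the adjusted list;
# the guideline assumed no major raw data processing outside of CERN.
```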

Slide 11: GRID Infrastructure
GRID infrastructure software must supply:
– Efficiency (optimizing hardware use)
– Transparency (optimizing user effectiveness)
Projects:
– PPDG: distributed data services (Common Day talk by D. Malon)
– APOGEE: complete GRID infrastructure, including distributed resource management, modeling, instrumentation, etc.
– GriPhyN: staged development toward delivery of a production system
The alternative to success with these projects is an overall set of facilities that is cumbersome to use and/or reduced in efficiency
U.S. ATLAS involvement includes ANL, BNL, LBNL

Slide 12: Facility Modeling
The performance of a complex distributed system is difficult, but necessary, to predict
MONARC, an LHC-centered project:
– Provides a toolset for modeling such systems
– Develops guidelines for designing such systems
– Is currently capable of relevant analyses
– Common Day talk by K. Sliwa

Slide 13: Technology Trends
CPU
– Range: commodity processors -> SMP servers
– Factor 2 decrease in price/performance in 1.5 years
Disk
– Range: commodity disk -> RAID disk
– Factor 2 decrease in price/performance in 1.5 years
Tape Storage
– Range: desktop storage -> high-end storage
– Factor 2 decrease in price/performance in years

Slide 14: Technology Trends & Choices
For costing purposes:
– Start with familiar, established technologies
– Project by observed exponential slopes
A conservative approach:
– There are no known near-term show stoppers to the evolution of these established technologies
– A new technology would have to be more cost effective to supplant the projection of an established technology
(A projection sketch follows below.)
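For illustration only (not from the slide): "project by observed exponential slopes" amounts to assuming that price/performance halves every fixed period, 1.5 years for CPU and disk on the previous slide. A minimal Python sketch, with a hypothetical reference cost and lead time:

```python
def projected_unit_cost(cost_now_kusd: float, years_ahead: float,
                        halving_period_years: float = 1.5) -> float:
    """Project a unit cost forward, assuming price/performance halves
    every halving_period_years (the factor-2-in-1.5-years slope quoted
    for CPU and disk)."""
    return cost_now_kusd * 0.5 ** (years_ahead / halving_period_years)

# Hypothetical example: capacity costing 100 k$ today, bought 5 years out
# (e.g. planning in 2000 for a 2005 purchase).
print(f"{projected_unit_cost(100.0, 5.0):.1f} k$")  # ~9.9 k$, roughly a factor of 10 cheaper
```

The halving period quoted for tape on the previous slide would drop in through the third argument.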

Slide 15: Technology Choices
CPU-intensive processing
– Farms of commodity processors (Intel/Linux)
I/O-intensive processing and serving
– Mid-scale SMPs (SUN, IBM, etc.)
Online storage (disk)
– Fibre Channel connected RAID
Nearline storage (robotic tape system)
– STK / 9840 / HPSS
LAN
– Gigabit Ethernet

Slide 16: Requirements Profile
Facilities ramp-up driven by...
– Core software needs: ODBMS scalability tests in the '01-'02 time frame
– Subdetector needs: modest for the next few years
– Mock Data Exercises: not officially scheduled, so assume MDC I at 10% scale in 2003 and MDC II at 30% scale in 2004
– Facilities model validation
(A scale sketch follows below.)
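As an illustration only (not from the slide), the assumed MDC scales can be read against the 2005 US ATLAS totals from slide 10, assuming the percentages are taken relative to those 2005 targets. A minimal Python sketch:

```python
# 2005 US ATLAS facility targets from slide 10.
targets_2005 = {
    "re-reconstruction CPU (SPECint95)": 10_000,
    "analysis CPU (SPECint95)": 85_000,
    "simulation CPU (SPECint95)": 35_000,
    "disk (TB/yr)": 190,
    "tape (TB/yr)": 300,
}

# Assumed Mock Data Challenge scales from this slide.
mdc_scales = {"MDC I (2003)": 0.10, "MDC II (2004)": 0.30}

for mdc, frac in mdc_scales.items():
    print(mdc)
    for name, full in targets_2005.items():
        print(f"  {name:35s} ~ {frac * full:,.0f}")
```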

Slide 17: Tier 1: Full Function Facility
Including...
– Dedicated connectivity to CERN
– Primary site for storage/serving: cache/replicate CERN & other data needed by US ATLAS
– Computation: primary site for re-reconstruction (perhaps the only site); major site for simulation & analysis (~2 x a Tier 2)
– Regional support, plus catchall for those without a region
– Repository of technical expertise and support: hardware, OS's, utilities, other standard elements of U.S. ATLAS; network, AFS, GRID, & other infrastructure elements of the WAN model

Slide 18: Tier 1 (2)
Commodity processor farms (Intel/Linux)
Mid-scale SMP servers (SUN)
Fibre Channel connected RAID disk
Robotic tape / HSM system (STK / HPSS)

Slide 19: Current Tier 1 Status
The U.S. ATLAS Tier 1 facility is now operating as a ~5% adjunct to the RHIC Computing Facility, including:
– Intel/Linux farms (28 CPUs)
– Sun E450 server (2 CPUs)
– 200 GBytes of Fibre Channel RAID disk
– Intel/Linux web server
– Archiving via a low-priority HPSS class of service
– Shared use of an AFS server (10 GBytes)

Slide 20: (photograph/figure, no caption in transcript)

Slide 21: RAID Disk Subsystem (photograph/figure)

Slide 22: Intel/Linux Processor Farm (photograph/figure)

Slide 23: Intel/Linux Nodes (photograph/figure)

Slide 24: Tier 1 Staffing Estimate (table not reproduced in transcript)

Slide 25: Tier 2 Ramp-up
Assume 2 years for a Tier 2 center to become fully established
– Initiate the first Tier 2 in 2001: a true Tier 2 prototype, demonstrating Tier 1 - Tier 2 interaction
– Second Tier 2 initiated in 2002 (CERN?)
– Four remaining initiated in 2003
All fully operational by 2005
The six are to be identical (CERN exception?)

Slide 26: Tier 2
Limit personnel and maintenance support costs
Focused function facility:
– Excellent connectivity to Tier 1 (network + GRID)
– Tertiary storage via the network at Tier 1 (none local)
– Primary analysis site for its region
– Major simulation capabilities
– Major online storage cache for its region
Leverage local expertise and other resources:
– Part of the site selection criteria; for example, ~1 FTE contributed

Slide 27: Tier 2 (2)
Commodity processor farms (Intel/Linux)
Mid-scale SMP servers
Fibre Channel connected RAID disk

Slide 28: Tier 1 / Tier 2 Staffing (In Pseudo Detail) (table not reproduced in transcript)

Slide 29: Staff Evolution (chart not reproduced in transcript)

Slide 30: Network
Tier 1 connectivity to CERN and to the Tier 2 centers is critical to the facilities model:
– Must be adequate
– Must be guaranteed and allocable (dedicated and differentiated)
– Should grow with need; OC12 should be practical by 2005
– While the estimate is highly uncertain, this cost must be covered in a distributed facilities plan
(A back-of-envelope throughput sketch follows below.)
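For rough orientation only (not from the slide): OC12 is a nominal SONET rate of about 622 Mbit/s, so even a fully dedicated and saturated OC12 link moves only a few TBytes per day. A minimal Python sketch under that idealized assumption:

```python
OC12_MBIT_PER_S = 622    # nominal OC-12 line rate, ~622 Mbit/s
SECONDS_PER_DAY = 86_400

# Ideal-case throughput for a dedicated, fully utilized link; real
# sustained rates would be lower after protocol and operational overheads.
tb_per_day = OC12_MBIT_PER_S * SECONDS_PER_DAY / 8 / 1e6  # Mbit -> MByte -> TByte
print(f"~{tb_per_day:.1f} TB/day at full OC12 utilization")  # ~6.7 TB/day
```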

Slide 31: WAN Configurations and Cost (FY 2000 k$) (table not reproduced in transcript)

Slide 32: Capacities by Year (table not reproduced in transcript)

Slide 33: Annual Equipment Costs at Tier 1 Center (FY 2000 k$) (table not reproduced in transcript)

Slide 34: Annual Equipment Costs at Tier 2 Center (FY 2000 k$) (table not reproduced in transcript)

Slide 35: US ATLAS Facilities Annual Costs (FY 2000 k$) (table not reproduced in transcript)

Slide 36: Major Milestones (not reproduced in transcript)