U.S. ATLAS Computing Facilities U.S. ATLAS Physics & Computing Review Bruce G. Gibbard, BNL 10-11 January 2000.

Presentation transcript:

Slide 1: U.S. ATLAS Computing Facilities
U.S. ATLAS Physics & Computing Review
Bruce G. Gibbard, BNL
10-11 January 2000

Slide 2: US ATLAS Computing Facilities
Facilities procured, installed, and operated...
– ...to meet US 'MOU' obligations
    Direct IT responsibility (Monte Carlo, for example)
    Support for detector construction, testing, & calibration
    Support for software development and testing
– ...to enable effective participation by US physicists in the ATLAS physics program
    Direct access to and analysis of physics data sets
    Support for simulation, re-reconstruction, and reorganization of data associated with that analysis

Slide 3: Setting the Scale
Uncertainties in defining requirements:
– Five years of detector, algorithm & software development
– Five years of computer technology evolution
Start from the ATLAS estimate & rules of thumb
Adjust for the US ATLAS perspective (experience and priorities)
Adjust for details of the architectural model of US ATLAS facilities

Slide 4: ATLAS Estimate & Rules of Thumb
A Tier 1 center in '05 should include...
– 30,000 SPECint95 for analysis
– 10-20,000 SPECint95 for simulation
– TBytes/year of on-line (disk) storage
– 200 TBytes/year of near-line (robotic tape) storage
– 100 Mbit/sec connectivity to CERN
Assume no major raw data processing or handling outside of CERN.

Slide 5: US ATLAS Perspective
US ATLAS facilities must be adequate to meet any reasonable U.S. ATLAS computing needs (the U.S. role in ATLAS should not be constrained by a computing shortfall; rather, it should be enhanced by computing strength):
– Store & re-reconstruct 10-30% of events
– Take the high end of the simulation capacity range
– Take the high end of the disk capacity range
– Augment analysis capacity
– Augment CERN link bandwidth

Slide 6: Adjusted for US ATLAS Perspective
The US ATLAS Tier 1 Center in '05 should include...
– 10,000 SPECint95 for re-reconstruction
– 50,000 SPECint95 for analysis
– 20,000 SPECint95 for simulation
– 100 TBytes/year of on-line (disk) storage
– 300 TBytes/year of near-line (robotic tape) storage
– Dedicated OC12 (622 Mbit/sec) link to CERN

Slide 7: Architectural Model
Consists of transparent, hierarchically distributed, Grid-connected computing resources:
– Primary ATLAS Computing Centre at CERN
– US ATLAS Tier 1 Computing Center at BNL: national in scope, at ~20% of CERN
– US ATLAS Tier 2 Computing Centers: six, each regional in scope at ~20% of Tier 1; likely one of them at CERN
– US ATLAS institutional computing facilities: local (LAN) in scope, not project supported
– US ATLAS individual desktop systems

Slide 8: Schematic of Model (figure)

Slide 9: Distributed Model
Rationale (benefits):
– Improved user access to computing resources
    Local geographic travel
    Higher-performance regional networks
– Enable local autonomy
    Less widely shared
    More locally managed resources
– Increased capacities
    Encourage integration of other equipment & expertise (institutional, base program)
    Additional funding options (Com Sci, NSF)

Slide 10: Distributed Model (continued)
But increased vulnerability (risk):
– Increased dependence on the network
– Increased dependence on GRID infrastructure R&D
– Increased dependence on facility modeling tools
– More complex management
Risk/benefit analysis must yield a positive result.

Slide 11: Adjusted for Architectural Model
US ATLAS facilities in '05 should include...
– 10,000 SPECint95 for re-reconstruction
– 85,000 SPECint95 for analysis
– 35,000 SPECint95 for simulation
– 190 TBytes/year of on-line (disk) storage
– 300 TBytes/year of near-line (robotic tape) storage
– Dedicated OC-level Tier 1 connectivity to each Tier 2
– Dedicated OC12 (622 Mbit/sec) link to CERN
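Comparing these facility-wide figures with the Tier 1-only figures on slide 6 gives a rough sense of what each Tier 2 contributes, under the assumption that the entire increment is supplied by the six Tier 2 centers. A minimal worked sketch:

```python
# Rough sketch: difference the facility-wide targets (slide 11) against the
# Tier 1-only targets (slide 6) and split the increment across six Tier 2s.
# Assumes the whole increment is supplied by the Tier 2 centers.

tier1 = {"analysis_SI95": 50_000, "simulation_SI95": 20_000, "disk_TB_per_yr": 100}
total = {"analysis_SI95": 85_000, "simulation_SI95": 35_000, "disk_TB_per_yr": 190}
n_tier2 = 6

for key in total:
    per_tier2 = (total[key] - tier1[key]) / n_tier2
    print(f"{key:16s}: ~{per_tier2:,.0f} per Tier 2")
```

On these figures each Tier 2 works out to roughly 8,000-8,500 SPECint95 of CPU (analysis plus simulation) and ~15 TBytes/year of disk.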

Slide 12: GRID Infrastructure
GRID infrastructure software must supply:
– Efficiency (optimizing hardware use)
– Transparency (optimizing user effectiveness)
Projects:
– PPDG: distributed data services (later talk by D. Malon)
– APOGEE: complete GRID infrastructure, including distributed resource management, modeling, instrumentation, etc.
– GriPhyN: staged development toward delivery of a production system
The alternative to success with these projects is a difficult-to-use and/or inefficient overall system.
U.S. ATLAS involvement includes ANL, BNL, and LBNL.

Slide 13: Facility Modeling
The performance of a complex distributed system is difficult, but necessary, to predict.
MONARC, an LHC-centered project:
– Provides a toolset for modeling such systems
– Develops guidelines for designing such systems
– Currently capable of relevant analyses
U.S. ATLAS involvement (later talk by K. Sliwa)
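The kind of trade-off such a modeling toolset is used to study can be illustrated with a back-of-the-envelope sketch. This is not MONARC itself; the link speed, sample size, capacities, and per-GB CPU cost below are purely illustrative assumptions.

```python
# Illustrative-only sketch of a question facility modeling addresses:
# analyze a sample at the Tier 1, or stage it to a Tier 2 over the WAN first?
# All parameter values are assumptions for the sketch, not project figures.

def transfer_hours(data_tb, link_mbps, utilization=0.5):
    """Hours to move data_tb TB over a link of link_mbps at an assumed utilization."""
    return data_tb * 1e12 * 8 / (link_mbps * 1e6 * utilization) / 3600

def cpu_hours(data_tb, capacity_si95, si95_sec_per_gb=25_000.0):
    """Hours of wall time, assuming si95_sec_per_gb of CPU work per GB analyzed."""
    return data_tb * 1000 * si95_sec_per_gb / capacity_si95 / 3600

sample_tb = 10.0                          # assumed analysis sample size
tier2_link_mbps = 155.0                   # assumed OC3-class Tier 1 - Tier 2 link
tier1_si95, tier2_si95 = 50_000, 8_000    # assumed available analysis capacity

at_tier1 = cpu_hours(sample_tb, tier1_si95)
at_tier2 = transfer_hours(sample_tb, tier2_link_mbps) + cpu_hours(sample_tb, tier2_si95)
print(f"Analyze at Tier 1 (data local): {at_tier1:6.1f} h")
print(f"Stage to Tier 2, then analyze:  {at_tier2:6.1f} h")
```

Even a toy model like this shows how quickly WAN staging can dominate turnaround, which is why bandwidth and data placement are central questions for the modeling effort.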

Slide 14: Components of Model: Tier 1
Full-function facility:
– Dedicated connectivity to CERN
– Primary site for storage/serving
    Cache/replicate CERN data needed by US ATLAS
    Archive and serve over the WAN all data of interest to US ATLAS
– Computation
    Primary site for re-reconstruction (perhaps the only site)
    Major site for simulation & analysis (~2 x a Tier 2)
– Repository of technical expertise and support
    Hardware, OSs, utilities, and other standard elements of U.S. ATLAS
    Network, AFS, GRID, & other infrastructure elements of the WAN model

Slide 15: Components of Model: Tier 2
Limit personnel and maintenance support costs.
Focused-function facility:
– Excellent connectivity to Tier 1 (network + GRID)
– Tertiary storage via the network at Tier 1 (none local)
– Primary analysis site for its region
– Major simulation capabilities
– Major on-line storage cache for its region
Leverage local expertise and other resources:
– Part of site selection criteria (~1 FTE contributed, for example)

Slide 16: Technology Trends & Choices
CPU
– Range: commodity processors -> SMP servers
– Factor 2 decrease in price/performance in 1.5 years
Disk
– Range: commodity disk -> RAID disk
– Factor 2 decrease in price/performance in 1.5 years
Tape storage
– Range: desktop storage -> high-end storage
– Factor 2 decrease in price/performance in … years

Slide 17: Price/Performance Evolution (figure)
From a Harvey Newman presentation, Third LCB Workshop, Marseilles, Sept.; chart data as of Dec 1996.

Slide 18: Technology Trends & Choices (continued)
For costing purposes:
– Start with familiar, established technologies
– Project by observed exponential slopes
This is a conservative approach:
– There are no known near-term show stoppers for these established technologies
– A new technology would have to be more cost effective to supplant the projection of an established technology
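As a concrete illustration of projecting by observed exponential slopes, a unit cost observed today can be scaled forward by the halving time quoted on slide 16. The FY2000 unit costs below are placeholders for the sketch, not the project's actual cost basis.

```python
# Minimal sketch of projecting unit costs along an observed exponential slope,
# assuming price/performance halves every 1.5 years (CPU and disk, per slide 16).
# The FY2000 unit costs are illustrative placeholders.

def projected_unit_cost(cost_fy2000, year, halving_years=1.5):
    """Projected cost of one capacity unit purchased in the given fiscal year."""
    return cost_fy2000 * 0.5 ** ((year - 2000) / halving_years)

cpu_fy2000_per_si95 = 100.0   # assumed $/SPECint95 in FY2000
disk_fy2000_per_gb = 50.0     # assumed $/GByte of RAID disk in FY2000

for year in (2001, 2003, 2005):
    print(f"FY{year}: ~${projected_unit_cost(cpu_fy2000_per_si95, year):6.1f}/SPECint95, "
          f"~${projected_unit_cost(disk_fy2000_per_gb, year):5.1f}/GB")
```

Pushing major purchases later in the ramp-up therefore buys substantially more capacity per dollar, which is the rationale for costing against the observed slope rather than today's prices.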

Slide 19: Technology Trends & Choices (continued)
CPU-intensive processing
– Farms of commodity processors (Intel/Linux)
I/O-intensive processing and serving
– Mid-scale SMPs (SUN, IBM, etc.)
Online storage (disk)
– Fibre Channel connected RAID
Nearline storage (robotic tape system)
– STK / 9840 / HPSS
LAN
– Gigabit Ethernet

Slide 20: Composition of Tier 1
– Commodity processor farms (Intel/Linux)
– Mid-scale SMP servers (SUN)
– Fibre Channel connected RAID disk
– Robotic tape / HSM system (STK / HPSS)

Slide 21: Current Tier 1 Status
The U.S. ATLAS Tier 1 facility is currently operating as a small (~5%) adjunct to the RHIC Computing Facility (RCF).
Deployment includes:
– Intel/Linux farms (28 CPUs)
– Sun E450 server (2 CPUs)
– 200 GBytes of Fibre Channel RAID disk
– Intel/Linux web server
– Archiving via a low-priority HPSS class of service
– Shared use of an AFS server (10 GBytes)

Slide 22: Current Tier 1 Status (continued)
These RCF-chosen platforms/technologies are common to ATLAS:
– Allows a wide range of services with only 1 FTE of system administration contributed (plus the US ATLAS librarian)
– Significant divergence of direction between US ATLAS and RHIC has been allowed for
– Complete divergence, though extremely unlikely, would exceed current staffing estimates

Slide 24: RAID Disk Subsystem (figure)

Slide 25: Intel/Linux Processor Farm (figure)

Slide 26: Intel/Linux Nodes (figure)

Slide 27: Composition of Tier 2 (Initial One)
– Commodity processor farms (Intel/Linux)
– Mid-scale SMP servers
– Fibre Channel connected RAID disk

Slide 28: Staff Estimate (In Pseudo Detail)

Slide 29: Time Evolution of Facilities
Tier 1 functioning as an early prototype:
– Ramp up to meet needs and validate the design
Assume 2 years for a Tier 2 to become fully established:
– Initiate the first Tier 2 in 2001
    True Tier 2 prototype
    Demonstrate Tier 1 - Tier 2 interaction
– Second Tier 2 initiated in 2002 (CERN?)
– Four remaining initiated in 2003
Fully operational by 2005.
The six are to be identical (CERN exception?).

Slide 30: Staff Evolution

Slide 31: Network
Tier 1 connectivity to CERN and to the Tier 2s is critical:
– Must be guaranteed and allocable (dedicated and differentiated)
– Must be adequate (triage of functions is disruptive)
– Should grow with need; OC12 should be practical by 2005, when serious data will flow
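For a rough sense of what OC-class links deliver, the sketch below converts line rate into sustainable data volume; the assumed 50% average utilization is an illustrative figure, not a project estimate.

```python
# Back-of-the-envelope conversion from WAN line rate to deliverable data
# volume.  The 50% average utilization is an illustrative assumption.

def tb_per_day(line_rate_mbps, utilization=0.5):
    """Terabytes deliverable per day at the given average utilization."""
    bytes_per_sec = line_rate_mbps * 1e6 / 8 * utilization
    return bytes_per_sec * 86_400 / 1e12

for name, mbps in (("OC3 (155 Mbit/s)", 155), ("OC12 (622 Mbit/s)", 622)):
    daily = tb_per_day(mbps)
    print(f"{name}: ~{daily:4.1f} TB/day, ~{daily * 365:6.0f} TB/year")
```

At that utilization an OC12 moves on the order of 3 TB/day, comfortably more than the ~300 TBytes/year of near-line storage growth projected for 2005, leaving headroom for analysis traffic and replication.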

Slide 32: WAN Configurations and Cost (FY2000 k$)

Slide 33: Annual Equipment Costs for Tier 1 Center (FY2000 k$)

Slide 34: Annual Equipment Costs for Tier 2 Center (FY2000 k$)

Slide 35: Integrated Facility Capacities by Year

Slide 36: US ATLAS Facilities Annual Costs (FY2000 k$)

Slide 37: Major Milestones