Western Analysis Facility

Western Analysis Facility
Richard P. Mount
US-ATLAS Tier2/Tier3 Workshop, University of Chicago, Aug 20, 2009

Western Analysis Facility: Outline
SLAC HEP Computing Facilities
SLAC Power and Cooling Infrastructure
Hosting University Equipment
SLAC Strengths
Longer-Term Goals

SLAC HEP Computing Facilities
SLAC HEP Batch CPU: 8600 cores
72 cores: SMP (SGI Altix)
360 cores: Infiniband cluster
128 cores: Myrinet cluster
8040 cores: HEP data processing
Plus 106 interactive/build cores

SLAC HEP Computing Facilities: Disk Storage
34 TB: AFS
47 TB: NFS (Network Appliance)
700 TB: NFS
1200 TB: xrootd
74 TB: Lustre, Objectivity, ...

SLAC HEP Computing Facilities: Mass Storage
Managed by HPSS
Upgrade in progress: 6 x tape silos (30,000 slots total) → 2 x new libraries (13,000 slots total)
Transferring ~3 petabytes

Power and Cooling for Computing at SLAC (figure)

Western Analysis Facility Infrastructure: 2013 on
Proposed (Stanford) Scientific Research Computing Facility
Modular – up to 8 modules
Up to 3 MW payload per module
Ambient air cooled
Cheaper than Sun BlackBoxes, but not free! (~$10 per W capital cost)
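As a rough illustration of what these figures imply, here is a minimal back-of-the-envelope sketch in Python; only the module count, per-module payload, and ~$10/W capital cost come from the slide, and the calculation itself is added here for illustration.

    # Rough capital-cost estimate for the proposed modular facility,
    # using only the figures quoted on the slide above.
    modules = 8                # up to 8 modules
    payload_watts = 3_000_000  # up to 3 MW payload per module
    cost_per_watt = 10.0       # ~$10 per W capital cost

    per_module = payload_watts * cost_per_watt
    full_buildout = modules * per_module

    print(f"Capital cost per module:          ~${per_module / 1e6:.0f}M")    # ~$30M
    print(f"Capital cost, full 8-module site: ~${full_buildout / 1e6:.0f}M") # ~$240M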

Concept for a Stanford Research Computing Facility at SLAC (~2013) (figure)

Western Analysis Facility: First Two Modules (figure)

Western Analysis Facility: Module Detail (figure)

Western Analysis Facility: Green Savings (figure)

Ideas on Hosting Costs (1)
Model the acquisition costs, power bill, infrastructure maintenance, and support labor for CPU boxes:
Acquisition costs:
  Raw box (Dell R410, 8 x 2.93 GHz, 24 GB, 500 GB): $3,776
  Network (data + management), rack, cables: $981
Annual costs:
  Power bill (including cooling): $475
  Power/cooling infrastructure maintenance: $176
  Labor (sysadmin, ATLAS environment): $440
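To see what these figures mean per year of operation, here is a minimal sketch; the dollar amounts and the 8-core box come from the slide, while the 4-year amortization period is an assumption (loosely based on the 3-4 year support horizon mentioned on the hosting-terms slide below).

    # Sketch: approximate annual cost of one hosted CPU box.
    # Dollar figures are taken from the slide; the 4-year amortization
    # period is an assumption, not from the slide.
    acquisition = 3776 + 981            # raw box + network/rack/cables
    annual_operating = 475 + 176 + 440  # power + infrastructure maintenance + labor
    amortization_years = 4
    cores = 8                           # Dell R410: 8 x 2.93 GHz

    annual_total = acquisition / amortization_years + annual_operating
    print(f"Annual cost per box:  ${annual_total:,.0f}")          # about $2,280
    print(f"Annual cost per core: ${annual_total / cores:,.0f}")  # about $285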

Ideas on Hosting Costs (2)
Model the acquisition costs, power bill, infrastructure maintenance, and support labor for disk space:
Acquisition costs:
  Raw box (Sun “Thor”, 2 x 6-core, 48 x 1 TB = 33 TB usable): $25,400
  Network (data + management), rack, cables: $3,500
Annual costs:
  Power bill (including cooling): $1,381
  Power/cooling infrastructure maintenance: $512
  Labor (sysadmin, ATLAS environment): $4,000
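The same kind of sketch for disk, expressed per usable terabyte; again, the dollar amounts and the 33 TB usable capacity come from the slide, and the 4-year amortization period is an assumption.

    # Sketch: approximate annual cost per usable TB of hosted disk.
    # Dollar figures and the 33 TB usable capacity are from the slide;
    # the 4-year amortization period is an assumption.
    acquisition = 25_400 + 3_500            # Sun "Thor" + network/rack/cables
    annual_operating = 1_381 + 512 + 4_000  # power + infrastructure maintenance + labor
    amortization_years = 4
    usable_tb = 33                          # 48 x 1 TB raw -> 33 TB usable

    annual_total = acquisition / amortization_years + annual_operating
    print(f"Annual cost per server:    ${annual_total:,.0f}")              # about $13,118
    print(f"Annual cost per usable TB: ${annual_total / usable_tb:,.0f}")  # about $398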

Western Analysis Facility: Hosting Terms
The historical approach:
  Rich Uncle: University buys or pays for some equipment; SLAC provides space/power/cooling/network/support for free.
Possible future scenarios:
  Cash economy: University buys or pays for some equipment and pays for the rack/network/cable acquisition costs; University pays the annual operating costs to SLAC.
  Barter economy: SLAC assigns some fraction of this equipment to meet SLAC HEP program needs; in exchange, SLAC installs, powers, cools and supports the equipment for 3 or 4 years.

Western Analysis Facility: SLAC Strengths
Pushing the envelope of Data Intensive Computing, e.g. Scalla/xrootd (in use at the SLAC T2)
Design and implementation of efficient and scalable computing systems (1000s of boxes)
Strongly supportive interactions with the university community (and 10 Gbits/s to Internet2)
Plus a successful ongoing computing operation:
  Multi-tiered multi-petabyte storage
  ~10,000 cores of CPU
  Space/power/cooling continuously evolving

Western Analysis Facility: SLAC Laboratory Goals
Maintain, strengthen and exploit the Core Competency in Data Intensive Computing;
Collaborate with universities, exploiting the complementary strengths of universities and SLAC.

ATLAS Western Analysis Facility: Concept
Focus on data-intensive analysis on a “major HEP computing center” scale;
Flexible and nimble to meet the challenge of rapidly evolving analysis needs;
Flexible and nimble to meet the challenge of evolving technologies:
  Particular focus on the most effective role for solid-state storage (together with enhancements to data-access software);
Close collaboration with US ATLAS university groups:
  Make the best possible use of SLAC-based and university-based facilities;
  Coordinate with ATLAS Analysis Support Centers.

ATLAS Western Analysis Facility: Possible Timeline
Today:
  Interactive access to SLAC computing available to US ATLAS;
  Jobs may be submitted to (most of) the HEP-funded CPUs;
  Hosting possibilities for Tier 3 equipment.
Today through 2010:
  Tests of various types of solid-state storage in various ATLAS roles (conditions DB, TAG access, PROOF-based analysis, xrootd access to AOD, ...); collaborate with BNL and ANL.
2010:
  Re-evaluation of ATLAS analysis needs after experience with real data.
2011 on:
  WAF implementation as part of the overall US ATLAS strategy.

Western Analysis Facility: Longer-Term Goals
There is a likelihood that US ATLAS will need additional data-intensive analysis capability;
There is an even higher likelihood that there will be major software and architectural challenges in ATLAS data analysis;
SLAC’s goal is to address both issues as a member of US ATLAS.