US ATLAS Tier 1 Facility
Rich Baker, Deputy Director, US ATLAS Computing Facilities
October 26, 2000

Existing Facility
[slide content not captured in the transcript]

Full Scale Facility (1)
Based on NCB Review Numbers
– Focus on Analysis (200k of 209k SI95)
– Probably Insufficient for Simulation
CPU: 209,000 SpecInt95
– Commodity Pentium/Linux
– Estimated 640 Dual Processor Nodes
Online Storage: 365 TB Disk
– High Performance Storage Area Network
– Baseline: Fibre Channel RAID Array
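
A rough sizing check, as a minimal sketch: it assumes the quoted 209,000 SI95 is divided evenly across the estimated 640 dual-processor nodes, since the slide does not give a per-processor rating.

```python
# Back-of-envelope check of the CPU sizing quoted above.
# Assumption: the 209k SI95 is spread evenly over the estimated node
# count; real procurement would use measured SpecInt95 per processor.

total_si95 = 209_000          # total CPU requirement (SpecInt95)
analysis_si95 = 200_000       # portion allocated to analysis
nodes = 640                   # estimated dual-processor Pentium/Linux nodes

si95_per_node = total_si95 / nodes               # ~327 SI95 per node
si95_per_cpu = si95_per_node / 2                 # ~163 SI95 per processor
analysis_fraction = analysis_si95 / total_si95   # ~96% of CPU for analysis

print(f"{si95_per_node:.0f} SI95/node, {si95_per_cpu:.0f} SI95/CPU, "
      f"{analysis_fraction:.0%} of CPU for analysis")
```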

Full Scale Facility (2)
Tertiary Storage: 2 PB Tape Library
– Baseline: HPSS, STK Media & Tape Drives
– 75% Event Summary Data
– 25% Simulation, Analysis Objects, Local Data
– "Raw" I/O Rate: 400 MB/second, 12.5 PB/year
Exploit Use Patterns to Maximize Efficiency
– Random Access to AOD - Always on Disk
– Managed Access to ESD - Grid SW? Custom SW?
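
A minimal consistency check of the tape numbers, assuming the quoted rate runs continuously over a calendar year and that the 75/25 split applies to the full 2 PB library (neither assumption is stated explicitly on the slide):

```python
# Consistency check on the tertiary-storage figures quoted above.
# Assumptions: the 400 MB/s "raw" rate is sustained year-round and the
# 75%/25% split applies to the full 2 PB library.

tape_pb = 2.0                 # total tape library capacity (PB)
esd_fraction = 0.75           # Event Summary Data share of the library
io_rate_mb_s = 400            # aggregate "raw" I/O rate (MB/s)

seconds_per_year = 365 * 24 * 3600
yearly_pb = io_rate_mb_s * seconds_per_year / 1e9   # MB -> PB (decimal units)

print(f"ESD on tape:   {tape_pb * esd_fraction:.2f} PB")        # 1.50 PB
print(f"Other data:    {tape_pb * (1 - esd_fraction):.2f} PB")  # 0.50 PB
print(f"Annual volume: {yearly_pb:.1f} PB/year at 400 MB/s")    # ~12.6 PB/year
```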

Timeline Overview
Prototype – FY '01 & FY '02
– Initial Development & Test, 1% to 2% scale
– Establish Facility Independent from RCF
– Lessons Learned from RCF Experience
System Tests – FY '03 & FY '04
– Large Scale System Tests, 5% to 10% scale
– Support Growing Tier 2 Network
Operation – FY '05, FY '06 & Beyond
– Full Scale System Operation, 20% ('05) to 100% ('06)

Tier 1 Facility Capacity
[capacity table/chart not captured in the transcript]

Estimation Methods - Hardware (1)
Use Recent RCF Purchases as Cost Baseline
Moore's Law Scaling for Commodity Components (CPU, Disk, Tape)
STK Tape Drives: Constant Cost per Drive, Double I/O Capacity Every 2 Years
Similar Constant Cost Projections for High Performance Data Mover Nodes
– $40K per HPSS Mover Node
– $30K per SAN Control Node
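
A hedged sketch of the kind of cost projection implied here, assuming an 18-month halving time for commodity unit costs; the baseline price below is a placeholder, since the actual RCF purchase prices are not given on the slide.

```python
# Illustrative Moore's-law projection of commodity unit costs.
# Assumptions: unit cost halves every 18 months, and the FY'00 baseline
# price below is a placeholder, not an actual RCF purchase figure.

def projected_unit_cost(baseline_cost, years_ahead, halving_years=1.5):
    """Projected cost of one unit (e.g. $/SI95 or $/GB) after years_ahead years."""
    return baseline_cost * 0.5 ** (years_ahead / halving_years)

baseline_dollars_per_si95 = 40.0    # hypothetical FY'00 commodity CPU price point
for year in range(7):               # FY2000 through FY2006
    cost = projected_unit_cost(baseline_dollars_per_si95, year)
    print(f"FY{2000 + year}: ${cost:,.2f} per SI95")
```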

Estimation Methods - Hardware (2)
Local Area Network: 8% of Disk+CPU Cost plus $20K per HPSS Mover
Firewall/WAN Hardware: 25% of LAN Cost
Interactive Nodes
– 2 Linux Nodes Purchased per Year
– Maintain One Sun/Solaris Node
"General Purpose" Nodes
– 21 Currently for RCF - Estimate 25 for ATLAS
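
The LAN and WAN rules above reduce to two simple formulas; here is a small sketch with hypothetical inputs (the real disk+CPU spend and mover-node count come from the hardware profile, not this slide):

```python
# Network cost rules from the slide, applied to placeholder inputs.

def lan_cost(disk_plus_cpu_cost, hpss_mover_nodes):
    """LAN hardware: 8% of disk+CPU cost plus $20K per HPSS mover node."""
    return 0.08 * disk_plus_cpu_cost + 20_000 * hpss_mover_nodes

def firewall_wan_cost(lan):
    """Firewall/WAN hardware: 25% of the LAN cost."""
    return 0.25 * lan

disk_plus_cpu = 2_000_000   # hypothetical combined disk+CPU spend ($)
movers = 4                  # hypothetical number of HPSS mover nodes

lan = lan_cost(disk_plus_cpu, movers)
print(f"LAN: ${lan:,.0f}   Firewall/WAN: ${firewall_wan_cost(lan):,.0f}")
```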

Estimation Methods - Software
Share Site License Costs with RCF
– HPSS: 50% of $200K by 2005
– LSF: 50% of $65K Starting 2002
Veritas: $5K per SAN Control Node
– Or Other SW Choice
Most Other SW License Costs Can Only Be Estimated - Total $97K in 2005
– Good Estimate Based on Actual RCF Costs to Support Operational Facility & Development
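
Worked numbers for the 50/50 license sharing above; the SAN control node count is an assumption used only for illustration.

```python
# ATLAS share of licenses co-funded with RCF, per the figures above.
# The SAN control node count is hypothetical, used only to illustrate
# the per-node Veritas cost.

hpss_site_license = 200_000   # full HPSS site license by 2005 ($)
lsf_site_license = 65_000     # full LSF license, starting 2002 ($)
veritas_per_node = 5_000      # volume manager, per SAN control node ($)
san_control_nodes = 3         # hypothetical count for illustration

atlas_hpss = 0.5 * hpss_site_license             # $100.0K
atlas_lsf = 0.5 * lsf_site_license               # $32.5K
veritas = veritas_per_node * san_control_nodes   # $15K

print(f"HPSS: ${atlas_hpss:,.0f}  LSF: ${atlas_lsf:,.0f}  Veritas: ${veritas:,.0f}")
```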

Tier 1 Budget Numbers (k$)
[budget table not captured in the transcript]
Note: HPSS License Double Counted

Tier 1 Material Numbers (k$)
[material cost table not captured in the transcript]
– HPSS License Included in HPSS Column
– Volume Manager SW (Veritas) Included in Disk Column
– "Other" Includes Power, Videoconference, Supplies

Tier 1 Facility Beyond 2006
Staffing: Constant at 25.5 FTE
Major HW Components
– Constant $ at 33% of 2006 Full Facility Cost
– Allows for Continual Upgrade
All Other Costs Level at 2005/2006 Levels
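
A one-line check of what the post-2006 hardware rule implies, assuming the 33% figure is simply an amortization of the full facility cost over a rolling replacement cycle.

```python
# Steady-state hardware budget rule: constant spending at 33% of the
# 2006 full-facility hardware cost, which implies replacing the bulk of
# the hardware on roughly a three-year rolling cycle.

annual_fraction = 0.33
replacement_cycle_years = 1.0 / annual_fraction
print(f"Implied replacement cycle: {replacement_cycle_years:.1f} years")  # ~3.0
```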

Comments
Scalable Design - Recent 2.5X Expansion
Very Late Bulk Procurement
– Working System Earlier - Minimize Design Risk
– Maximize Moore's Law Advantage
– Retain Flexibility as Long as Possible
Leverage RCF Knowledge
– Lessons Learned
– Improved Estimation

Summary
Facility Already Running
Near Term Prototype Planning in Progress
Budget Exceeds Agency Guideline by 33%
– Despite Recent 2.5X Scale Expansion!
Estimates Are Realistic
– Detailed Cost Basis From Recent Purchases
– Moore's Law Uncertainty
Build to Cost Contingency Feasible
– As Long as Tier 2 Facilities Are Funded