ATLAS Tier 1 at BNL Overview
Bruce G. Gibbard
Grid Deployment Board
BNL, 5-6 September 2006

B. Gibbard Grid Deployment Board 2 General
• ATLAS Tier 1 at BNL is co-located and co-operated with the RHIC Computing Facility (RCF)
• Long term (2008+), the 2 facilities will be of comparable scale
• Currently:
  o ATLAS Tier 1 capacities are ~25% of RCF
  o ATLAS Tier 1 staff level is ~70% of RCF
• Organizationally located in the Physics Dept.
• Equipment located in the raised-floor computer area of the IT Division Bldg.
• Facility functions within the context of the Open Science Grid (OSG) and supports 5 US Tier 2 Centers/Federations (2 recently designated *):
  o Boston Univ. & Harvard Univ.
  o Midwest (Univ. of Chicago & Indiana Univ.)
  o Southwest (Univ. of Texas at Arlington, Oklahoma Univ., Univ. of New Mexico, Langston Univ.)
  o Stanford Linear Accelerator Center *
  o Great Lakes (Univ. of Michigan & Michigan State Univ.) *
• Production/analysis operations via the US ATLAS-specific PanDA job management system

5-6 September 2006 B. Gibbard Grid Deployment Board 3 Storage Service - Disk
• NFS
  o ~30 TB of RAID 5 from MTI and IBM
  o Served by SUN and IBM servers
  o For context: 250 TB for RHIC
• AFS
  o ~5 TB of RAID 5/6 from Aberdeen
  o Served by Linux servers
• dCache
  o Served by processor farm nodes
  o ~200 TB in service (for more than a year)
  o ~300 TB additional on site but not yet commissioned
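
For orientation, the per-service disk figures above can be rolled up into an ATLAS-visible total. The sketch below is illustrative arithmetic only; every number in it is taken from this slide, and the split between commissioned and pending dCache capacity follows the slide's own wording.

```python
# Minimal sketch: roll up the per-service disk figures quoted on this slide (TB).
disk_tb = {
    "NFS (MTI/IBM RAID 5)": 30,
    "AFS (Aberdeen RAID 5/6)": 5,
    "dCache, in service": 200,
    "dCache, on site awaiting commissioning": 300,
}

in_service = sum(v for k, v in disk_tb.items() if "awaiting" not in k)
total_on_site = sum(disk_tb.values())

print(f"Disk in service: ~{in_service} TB")      # ~235 TB
print(f"Disk on site:    ~{total_on_site} TB")   # ~535 TB once the pending dCache is commissioned
```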

5-6 September 2006 B. Gibbard Grid Deployment Board 4 Storage Service - Tape
• SUN/StorageTek SL8500 Automated Tape Library
  o 6500-tape capacity => 2.6 PB with current tape technology
  o Current ATLAS data volume is 0.3 PB
  o Compared to RHIC (4.5 PB of data in 7.4 PB of capacity)
• 10 LTO Gen 3 tape drives
  o Theoretical native (uncompressed) streaming rate of 80 MB/sec per drive, with 400 GB per tape
  o Compared to RHIC (20 LTO Gen 3 and B drives)
• IBM RAID 5 disk cache
  o ~8 TB with MB/sec throughput
• Hierarchical Storage Manager is HPSS from IBM, version 5.1
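
The 2.6 PB figure follows directly from the slot count and native cartridge capacity quoted above. The sketch below reproduces that arithmetic and adds one purely illustrative number: an idealized time to stream the current 0.3 PB ATLAS holdings, assuming every drive runs at its theoretical native rate with no overhead (an assumption, not a measured figure).

```python
# Sketch of the library/drive arithmetic behind the figures on this slide.
TAPE_SLOTS = 6500          # SL8500 slots
GB_PER_TAPE = 400          # LTO Gen 3, native
DRIVES = 10
MB_PER_SEC_PER_DRIVE = 80  # theoretical native streaming rate

library_capacity_pb = TAPE_SLOTS * GB_PER_TAPE / 1e6        # 2.6 PB
aggregate_mb_per_sec = DRIVES * MB_PER_SEC_PER_DRIVE        # 800 MB/s

atlas_data_pb = 0.3
seconds = atlas_data_pb * 1e9 / aggregate_mb_per_sec        # PB -> MB, divided by MB/s
print(f"Library capacity: {library_capacity_pb:.1f} PB")
print(f"Aggregate drive bandwidth: {aggregate_mb_per_sec} MB/s")
print(f"Idealized time to stream {atlas_data_pb} PB: {seconds / 86400:.1f} days")
```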

5-6 September 2006 B. Gibbard Grid Deployment Board 5 Compute Service - cpu
• Rack-mounted, Condor-managed, Linux nodes
• ATLAS ~( kSI2K)
  o ~300 dual Intel processor nodes in operation
  o 160 dual-processor, dual-core Opterons awaiting commissioning
• Compared to:
  o RHIC & Physics Dept ~( kSI2K)
    - ~1450 dual Intel processor nodes in operation
    - 186 dual-processor, dual-core Opterons awaiting commissioning
• Primary Grid interface for production & distributed analysis: OSG / PanDA
• Utilization over the last year (plot)
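
The kSI2K figures did not survive in the transcript, but the node counts did, so a rough core-count comparison can still be sketched. One assumption is made below that is not stated on the slide: the "dual Intel processor" nodes are treated as dual-socket, single-core boxes (2 cores each), while the dual-processor dual-core Opterons contribute 4 cores each.

```python
# Illustrative core-count arithmetic from the node counts on this slide.
def cores(dual_cpu_nodes, dual_dualcore_nodes):
    # Assumption: 2 cores per dual-CPU node, 4 cores per dual-socket dual-core node.
    return dual_cpu_nodes * 2, dual_dualcore_nodes * 4

atlas_in_service, atlas_pending = cores(300, 160)
rhic_in_service, rhic_pending = cores(1450, 186)

print(f"ATLAS: ~{atlas_in_service} cores in operation, ~{atlas_pending} awaiting commissioning")
print(f"RHIC & Physics Dept: ~{rhic_in_service} cores in operation, ~{rhic_pending} awaiting commissioning")
```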

5-6 September 2006 B. Gibbard Grid Deployment Board 6 BNL Tier 1 WAN Storage Interfaces and Logical View
[Diagram: logical connections between the WAN (2 x 10 Gb/s, LHC OPN VLAN) and the Tier 1 VLANs (20 Gb/s), showing the HPSS Mass Storage System with its RAID 5 disk cache (20 TB), GridFTP servers (2 nodes / 0.8 TB local) with HRM SRM, dCache SRM and dCache doors (M nodes), a dCache write pool (~10 nodes, RAID 5), and a dCache read pool (~300 nodes / 150 TB).]
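
The diagram's headline number is the 2 x 10 Gb/s LHC OPN connection. As a back-of-envelope illustration of what that implies for bulk data movement, the sketch below estimates how long the current 0.3 PB ATLAS dataset (from the tape slide) would take to transfer at line rate and at an assumed 50% efficiency; the efficiency factor is hypothetical, not a measured value from the presentation.

```python
# Back-of-envelope WAN transfer estimate for the interfaces shown above.
WAN_GBPS = 2 * 10            # two 10 Gb/s LHC OPN links (from the diagram)
DATA_TB = 300                # ~0.3 PB of current ATLAS data (from the tape slide)

line_rate_gb_per_s = WAN_GBPS / 8                    # Gb/s -> GB/s = 2.5 GB/s
for efficiency in (1.0, 0.5):                        # 0.5 is an illustrative assumption
    seconds = DATA_TB * 1000 / (line_rate_gb_per_s * efficiency)
    print(f"{efficiency:>4.0%} of line rate: {seconds / 86400:.1f} days to move {DATA_TB} TB")
```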

5-6 September 2006 B. Gibbard Grid Deployment Board 7 WAN/MAN Connectivity
[Diagram: BNL internal network and its WAN/MAN connectivity, showing links to MAN LAN, CERN (?), NLR, ESnet, GEANT, etc., plus other connections.]

5-6 September 2006 B. Gibbard Grid Deployment Board 8 Physical Infrastructure
• Last year the capacity limits of the existing floor space were reached for chilled water, cooled air, UPS power, and power distribution
• Therefore, this year, for the first time, major physical infrastructure improvements were needed
• New chilled water feed
• Local, rather than building-wide, augmentation of services in the form of:
  o 250 kW of local UPS / PDU systems in three local units (see the power sketch below)
  o Local rack-top cooling
• Approaching the limit of the available floor space itself
• Raised floor with fire detection & suppression, and physical security
• Current space will allow 2007 and 2008 (perhaps even 2009) expansion
  o Additional power & cooling will be needed each year
• Brookhaven Lab has committed to supply new computing space for 2009/2010 and beyond
• Optimization of planning goes beyond ATLAS needs
• No firm plan in place yet
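
The 250 kW of new local UPS/PDU capacity is the only hard number on this slide. Purely for illustration, the sketch below shows how such a budget constrains node and rack counts; the per-node draw and nodes-per-rack figures are hypothetical assumptions and do not come from the presentation.

```python
# Hypothetical power-budget arithmetic against the 250 kW of local UPS/PDU capacity.
UPS_KW = 250                 # new local UPS/PDU capacity (from the slide)
WATTS_PER_NODE = 350         # assumed average draw of a dual-CPU 1U node (illustrative)
NODES_PER_RACK = 36          # assumed 1U nodes per rack (illustrative)

max_nodes = UPS_KW * 1000 // WATTS_PER_NODE
max_racks = max_nodes // NODES_PER_RACK
print(f"~{max_nodes} nodes (~{max_racks} racks) could be powered from {UPS_KW} kW "
      f"at an assumed {WATTS_PER_NODE} W/node")
```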

5-6 September 2006 B. Gibbard Grid Deployment Board 9