
1 ATLAS Tier 1 at BNL Overview
Bruce G. Gibbard
Grid Deployment Board
BNL, 5-6 September 2006

2 General
- The ATLAS Tier 1 at BNL is co-located and co-operated with the RHIC Computing Facility (RCF)
- Long term (2008+), the two facilities will be of comparable scale
- Currently:
  - ATLAS Tier 1 capacities are ~25% of RCF
  - ATLAS Tier 1 staff level is ~70% of RCF
- Organizationally located in the Physics Dept.
- Equipment is located in the raised-floor computer area of the IT Division building
- The facility functions within the context of the Open Science Grid (OSG) and supports 5 US Tier 2 Centers/Federations (2 recently designated *):
  - Boston Univ. & Harvard Univ.
  - Midwest (Univ. of Chicago & Indiana Univ.)
  - Southwest (Univ. of Texas at Arlington, Oklahoma Univ., Univ. of New Mexico, Langston Univ.)
  - Stanford Linear Accelerator Center *
  - Great Lakes (Univ. of Michigan & Michigan State Univ.) *
- Production/analysis operations run via the US ATLAS-specific PanDA job management system

3 Storage Service - Disk
- NFS
  - ~30 TB of RAID 5 from MTI and IBM
  - Served by Sun and IBM servers
  - In the context of (250 TB for RHIC)
- AFS
  - ~5 TB of RAID 5/6 from Aberdeen
  - Served by Linux servers
- dCache
  - Served by processor farm nodes
  - ~200 TB in service (for more than a year)
  - ~300 TB additional on site but not yet commissioned
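The disk figures above can be tallied in a short Python sketch; the grouping and labels below are illustrative only, and the numbers are the approximate values quoted on this slide:

```python
# Disk capacities quoted on this slide, in TB (approximate figures);
# the grouping/labels are illustrative, not an official inventory.
disk_tb = {
    "NFS (MTI/IBM RAID 5)": 30,
    "AFS (Aberdeen RAID 5/6)": 5,
    "dCache, in service": 200,
    "dCache, not yet commissioned": 300,
}

in_service = sum(v for k, v in disk_tb.items() if "not yet commissioned" not in k)
on_site_total = sum(disk_tb.values())

print(f"Disk in service: ~{in_service} TB")      # ~235 TB
print(f"Disk on site:    ~{on_site_total} TB")   # ~535 TB
```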

4 Storage Service - Tape
- Sun/StorageTek SL8500 automated tape library
  - 6500-tape capacity => 2.6 PB for current tape technology
  - Current ATLAS data volume is 0.3 PB
  - Compared to RHIC (4.5 PB of data in 7.4 PB of capacity)
- 10 LTO Gen 3 tape drives
  - Theoretical native, uncompressed streaming rate of 80 MB/sec per drive, with 400 GB per tape
  - Compared to RHIC (20 LTO Gen 3 and 35 9940B drives)
- IBM RAID 5 disk cache
  - ~8 TB with 300-400 MB/sec throughput
- Hierarchical Storage Manager is HPSS from IBM
  - Version 5.1
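A quick arithmetic check of the tape figures above (a minimal sketch; the variable names are mine, the numbers are from the slide):

```python
# Back-of-the-envelope check of the SL8500 figures quoted above.
n_tapes = 6500             # library slot capacity
tb_per_tape = 0.4          # 400 GB per LTO Gen 3 cartridge, native
n_drives = 10              # LTO Gen 3 drives installed
mb_per_s_per_drive = 80    # theoretical native, uncompressed streaming rate

library_capacity_pb = n_tapes * tb_per_tape / 1000      # 2.6 PB, as quoted
aggregate_streaming = n_drives * mb_per_s_per_drive     # 800 MB/s

print(f"Library capacity:     {library_capacity_pb:.1f} PB")
print(f"Aggregate drive rate: {aggregate_streaming} MB/s (theoretical ceiling)")
```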

5 Compute Service - CPU
- Rack-mounted, Condor-managed Linux nodes
- ATLAS: ~(600 + 700) kSI2K
  - ~300 dual Intel processor nodes in operation
  - 160 dual-processor, dual-core Opteron nodes awaiting commissioning
- Compared to RHIC & Physics Dept: ~(2600 + 900) kSI2K
  - ~1450 dual Intel processor nodes in operation
  - 186 dual-processor, dual-core Opteron nodes awaiting commissioning
- Primary Grid interface for Production & Distributed Analysis: OSG / PanDA
- Utilization over the last year (chart)
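The kSI2K figures above can be summed and compared with RHIC. Note that reading "(600 + 700)" as "in operation + awaiting commissioning" is an assumption suggested by the node counts, not stated explicitly on the slide:

```python
# CPU capacity figures from this slide, in kSI2K.
# Assumption (not stated explicitly): "(600 + 700)" means
# "in operation + awaiting commissioning".
atlas_in_op, atlas_pending = 600, 700
rhic_in_op, rhic_pending = 2600, 900

print(f"ATLAS once commissioned:     {atlas_in_op + atlas_pending} kSI2K")  # 1300
print(f"RHIC & Physics Dept total:   {rhic_in_op + rhic_pending} kSI2K")    # 3500
# In-operation ratio, ~23%, consistent with the ~25% quoted on slide 2:
print(f"ATLAS / RHIC (in operation): {atlas_in_op / rhic_in_op:.0%}")
```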

6 BNL Tier 1 WAN Storage Interfaces and Logical View
[Diagram: logical connections among the WAN (2 x 10 Gb/s, LHC OPN VLAN), GridFTP doors (2 nodes / 0.8 TB local), HRM SRM and dCache SRM interfaces, dCache doors (M nodes), dCache write pool (~10 nodes / 2 TB RAID 5), dCache read pool (~300 nodes / 150 TB), NFS RAID 5 (20 TB), the HPSS Mass Storage System, and the Tier 1 VLANs; link speeds range from 1 Gb/s per node to 20 Gb/s aggregate]
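As a rough illustration of what the diagrammed WAN link implies, the 2 x 10 Gb/s figure converts as follows; this is purely a theoretical ceiling and ignores protocol overhead:

```python
# Illustrative conversion of the 2 x 10 Gb/s WAN figure from the diagram.
wan_gbit_per_s = 2 * 10                          # gigabits per second
wan_gbyte_per_s = wan_gbit_per_s / 8             # 2.5 GB/s
wan_tb_per_day = wan_gbyte_per_s * 86400 / 1000  # seconds per day, GB -> TB

print(f"WAN: {wan_gbit_per_s} Gb/s = {wan_gbyte_per_s} GB/s")
print(f"Theoretical ceiling: ~{wan_tb_per_day:.0f} TB/day")   # ~216 TB/day
```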

7 WAN/MAN Connectivity
[Diagram: BNL internal network and its external connections: MAN LAN, CERN (?), NLR, ESnet, GEANT, etc., plus other connections]

8 Physical Infrastructure
- Last year the limits of capacity on the existing floor space were reached for chilled water, cooled air, UPS power, and power distribution
- Therefore, this year, for the first time, major physical infrastructure improvements were needed:
  - New chilled water feed
  - Local rather than building-wide augmentation of services, in the form of:
    - 250 kW of local UPS / PDU systems in three local units
    - Local rack-top cooling
- Approaching the limit of available floor space itself
  - Raised floor with fire detection & suppression, physical security
  - Current space will allow 2007 and 2008 (perhaps even 2009) expansion
    - Additional power & cooling will be needed each year
- Brookhaven Lab has committed to supply new computing space for 2009/2010 and beyond
  - Optimization of planning goes beyond ATLAS needs
  - No firm plan is in place yet


