Database Hardware Resources at Tier-1 Sites
Gordon D. Brown, Rutherford Appleton Laboratory
3D/WLCG Workshop, CERN, Geneva, 11th-14th November 2008
Overview
– Database Machines
– Storage
– Plans
– Discussion
Nodes in Production Cluster (3D ATLAS, 3D LHCb, FTS, LFC, CASTOR, SRM)
– CERN: 3, 4, 3, 2
– CA-TRIUMF: 2, 1
– DE-GridKa: 2
– ES-PIC: 3, 2
– FR-IN2P3: –
– IT-CNAF: 3, 2L, 2L, 3A, 3A, 2 L / 3 A, 4
– NDGF: 1
– NL-SARA: 2
– TW-ASGC: 3, 2, 3, 2
– UK-RAL: 3, 2, 3, 10
– US-BNL: 4
Hardware Manufacturer
– CERN: Dell PowerEdge 2950
– CA-TRIUMF: HP ProLiant DL380 G5, Dell PowerEdge 1950
– DE-GridKa: IBM x336
– ES-PIC: Fujitsu PRIMERGY BX600 S3
– FR-IN2P3: –
– IT-CNAF: Dell PowerEdge 1950/2950
– NDGF: Dell PowerEdge 1950
– NL-SARA: Dell 1850
– TW-ASGC: Quanta Blade Server, QB600 Xeon E7520 Server Blade
– UK-RAL: Supermicro
– US-BNL: –
DB Machines (processors x cores, hyperthreading, clock speed, memory, local disk; redundancy)
– CERN: 2 x 4, no HT, 2.33GHz, 16GB, 476GB x 2; dual power, mirrored disks, 4 NIC (2 private/2 public), dual HBA
– CA-TRIUMF: 2 x 2, no HT, 3.00GHz, 10GB, 73GB; dual power, RAID, switches
– DE-GridKa: 2 x 2, no HT, 3.2GHz, 4GB, 73GB; none
– ES-PIC: 2 x 2, HT ?, 1.6GHz, 8GB, 75GB; dual power, three Ethernet
– FR-IN2P3: –
– IT-CNAF: 2 x 2, HT, 3.2GHz, 4GB, 100GB; dual power
– NDGF: 2 x 2, no HT, 3.00GHz, 4GB, 72GB; dual power
– NL-SARA: 2 x 2, HT, 3.2GHz, 4GB, 140GB; dual power
– TW-ASGC: 2 x 1, no HT, 3.0GHz, 8GB, 80GB; N+1 power supplies
– UK-RAL: 2 x 2, no HT, 2.4GHz, 4GB, 250GB; dual power
– US-BNL: 2 x 2, 3.0GHz, 16GB
Storage (type, model, RAID, raw capacity, capacity after RAID; redundancy)
– CERN: SAN, RAID 1+0 (ASM), 44.8TB raw, ~20TB after; dual channel
– CA-TRIUMF: SAN, HP MSA20, ASM, 4.5TB raw, 2.1TB after; FC switches, network switches, dual-port FC cards
– DE-GridKa: SAN, Condor, RAID 6, 2TB; no redundancy
– ES-PIC: SAN, NetApp FAS3040, double parity, 18TB raw, ~6TB after; dual channel, dual controllers
– FR-IN2P3: –
– IT-CNAF: SAN, EMC CX3-80, RAID 1/5, 4TB raw, 2.3TB after; full redundancy, dual-port FC cards
– NDGF: SAN, EMC CX700, RAID 1/5, 3.2TB raw, 2TB after; 2 networks and controllers
– NL-SARA: SAN, SGI TP 9100, RAID 5, raw n/a, 700GB after; dual channel
– TW-ASGC: SAN, Infortrend A24F-G2224-16, 18TB raw, 15TB after; single controller
– UK-RAL: SAN, Infortrend A16F-G2221, RAID 1+0, 4TB raw, 1.75TB after
– US-BNL: SAN, IBM DS3400, 6TB raw
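The raw and after-RAID columns differ mainly by the RAID overhead. Below is a minimal sketch (my own arithmetic, not from the slides; the helper name and the disk-group size are assumptions) of how the common levels in the table map raw capacity to usable capacity:

```python
# Rough sketch of how raw capacity relates to after-RAID capacity for the
# RAID levels in the table. Disk-group sizes are assumed; hot spares, ASM
# redundancy and filesystem overhead are ignored, so real figures are lower.

def usable_tb(raw_tb: float, level: str, disks_per_group: int = 8) -> float:
    """Approximate usable capacity in TB after RAID overhead."""
    n = disks_per_group
    if level == "1+0":                 # mirrored stripes: half the raw capacity
        return raw_tb / 2
    if level == "5":                   # one parity disk per group
        return raw_tb * (n - 1) / n
    if level in ("6", "DP"):           # two parity disks per group (RAID 6 / RAID-DP)
        return raw_tb * (n - 2) / n
    raise ValueError(f"unknown RAID level: {level}")

# CERN 3D: 44.8 TB raw in RAID 1+0 -> ~22.4 TB, in the same range as the
# ~20 TB quoted once spares and ASM metadata are subtracted.
print(usable_tb(44.8, "1+0"))

# UK-RAL: 4 TB raw in RAID 1+0 -> 2 TB, close to the 1.75 TB quoted.
print(usable_tb(4, "1+0"))
```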
CERN CASTOR Hardware
Setup
– 2 nodes CASTOR name server
– 6 nodes CASTOR+SRM ATLAS (2 stager, 2 DLF, 2 SRM)
– 6 nodes CASTOR+SRM ALICE (2 stager, 2 DLF, 2 SRM)
– 6 nodes CASTOR+SRM CMS (2 stager, 2 DLF, 2 SRM)
– 6 nodes CASTOR+SRM LHCb (2 stager, 2 DLF, 2 SRM)
– 6 nodes CASTOR+SRM "Public" (2 stager, 2 DLF, 2 SRM)
– a few others for the "ITDC" environment (testing) / dev / test
Hardware manufacturer
– HP and Dell
CERN CASTOR Hardware
Machine model
– HP DL380 G5 and Dell 1950
Number of processors per machine
– 2
Number of cores per processor
– 2 for the HPs and 4 for the Dells
Processor clock speed
– 2.33GHz
Hyperthreading
– No
CERN CASTOR Hardware
Memory size
– 8 or 16GB
Local disk size
– 73GB
Redundancy
– dual power supplies
– mirrored local disks
– 5 NIC (2 storage, 2 private, 1 public)
Clustered storage type
– NAS
CERN CASTOR Hardware
Storage manufacturer
– NetApp 3020 and 3040
Storage RAID (0, 1, 5, 1+0 etc.)
– RAID-DP (double parity)
Storage space (raw)
– Fibre Channel disks (10k rpm) and SATA disks
– 41.1TB
Storage space (after RAID)
– 32.3TB
Redundancy (dual channel etc.)
– two paths to storage
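As a quick consistency check on these figures (my arithmetic, not from the slide): RAID-DP keeps two parity disks per RAID group, so usable capacity is roughly raw × (n − 2)/n for groups of n disks. The implied group size can be solved for as sketched below:

```python
# 41.1 TB raw -> 32.3 TB usable under RAID-DP (double parity).
raw, usable = 41.1, 32.3
ratio = usable / raw          # ~0.79 of raw capacity survives
n = 2 / (1 - ratio)           # solve (n - 2) / n = ratio for the group size
print(round(n, 1))            # ~9.3, i.e. RAID groups of roughly 9-10 disks
```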
Hardware Plans
GridKa
– new server in Spring 2009
PIC
– none
TRIUMF
– 3D, FTS: add a new 2-node cluster and storage by April 2009
– LFC: monitor performance and review the situation at least once a year, before renewing the hardware support contract
NDGF
– We have been planning a 3-node cluster for some time. The storage system will likely be a SAN with 4Gbit/s FC and SAS disks; the nodes will be something adequate, perhaps dual quad-core Xeons with 8 or 16GB of RAM.
Hardware Plans
CNAF
– Just installed: 10TB of SATA storage (for the flash recovery area) and 10TB of Fibre Channel storage (4Gbit). The storage device is the same CX3-80 mentioned before. This new storage will mainly be used for INFN services, but some of it can be allocated to the 3D or LHC experiment clusters if needed.
CERN (CASTOR)
– renewal of the HPs; not decided yet, but blades are likely
SARA
– We are in the process of purchasing new hardware. This should be completed within a few months.
Hardware Plans
ASGC
– As outlined in the server specification, we plan to add new instances to the same RAC group, but based on a different hardware profile whose remote management is more stable and robust than the current solution.
CERN (3D)
– probably a move to Dell blade systems; storage not yet decided
RAL
– SAN redundancy
– failover on CASTOR
Group Questions
– When do you replace hardware? Is the 3D kit old now?
– Priorities for redundancy? Switch redundancy?
– How much is the DBA involved in procurement?
– How much does the sysadmin know about databases?
– Is warranty important?
Questions and (hopefully) Answers
databaseservices@stfc.ac.uk