1
Hardware Reliability at the RAL Tier1
Gareth Smith
16th September 2011
2
Staffing
The following staff have left since GridPP26:
– James Thorne (Fabric - disk servers)
– Matt Hodges (Grid Services team leader)
– Derek Ross (Grid Services team)
– Richard Hellier (Grid Services & Castor teams)
We thank them for their work whilst with the Tier1 team.
3
Some Changes
CVMFS in use for Atlas & LHCb:
– The Atlas (NFS) software server used to give significant problems.
– Some CVMFS teething issues, but overall much better.
Virtualisation:
– Starting to bear fruit. Uses Hyper-V.
– Numerous test systems.
– Production systems that do not require particular resilience.
Quattor:
– Large gains already made.
4
CernVM-FS Service Architecture (6th May 2011)
[Diagram: Stratum 0 web server in PH, fed by the Atlas and LHCb install boxes in PH; Stratum 1 replicas cvmfs-ral@cern, cvmfs-bnl@cern and cvmfs-public@cern; squids in front of batch nodes at each site.]
– Replication to Stratum 1 by hourly cron (for now).
– Stratum 0 moving to IT by end of year.
– BNL almost in production.
5
CernVM-FS Service at RAL (6th May 2011)
[Diagram: webfs.gridpp.rl.ac.uk, a web server on iSCSI storage holding the Stratum 0 content replicated from CERN; cvmfs.gridpp.rl.ac.uk, the replica at RAL, presented as a virtual host on two reverse-proxy squids (lcgsquid05, lcgsquid06) whose squid caches accelerate webfs in the background; batch nodes at RAL and at other sites access it through their local squids.]
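As a rough illustration of how such a replica is consumed through the reverse-proxy squids, here is a minimal sketch that fetches a repository manifest over HTTP. The /cvmfs/&lt;repo&gt;/.cvmfspublished path layout and the example repository name atlas.cern.ch are assumptions on my part, not details taken from the slide.

```python
# Minimal probe of a CVMFS Stratum 1 replica behind reverse-proxy squids.
# Assumptions (not from the slide): the standard /cvmfs/<repo>/.cvmfspublished
# manifest path, and "atlas.cern.ch" as an example repository name.
import urllib.request

REPLICA = "http://cvmfs.gridpp.rl.ac.uk"   # virtual host on the reverse-proxy squids
REPO = "atlas.cern.ch"                     # hypothetical example repository

def fetch_manifest(replica: str, repo: str) -> bytes:
    """Fetch the small manifest file that a CVMFS replica publishes for a repository."""
    url = f"{replica}/cvmfs/{repo}/.cvmfspublished"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read()

if __name__ == "__main__":
    manifest = fetch_manifest(REPLICA, REPO)
    # The manifest is a short key-value text block; its timestamp line is useful
    # for checking how fresh the replication is.
    print(manifest.decode("ascii", errors="replace"))
```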
6
Database Infrastructure
We are making significant changes to the Oracle database infrastructure. Why?
– Old servers are out of maintenance.
– Move from 32-bit to 64-bit databases.
– Performance improvements.
– Standby systems.
– Simplified architecture.
7
Database Disk Arrays - Now
[Diagram: Oracle RAC nodes connected to disk arrays over a Fibre Channel SAN; power supplies on UPS.]
8
Database Disk Arrays - Future
[Diagram: Oracle RAC nodes and disk arrays on the Fibre Channel SAN as now, with a standby copy maintained via Oracle Data Guard; power supplies on UPS.]
9
Castor
Changes since the last GridPP meeting:
– Castor upgrade to 2.1.10 (March).
– Castor version 2.1.10-1 (July), needed for the higher-capacity "T10KC" tapes.
– Updated garbage-collection algorithm to LRU, rather than the default which is based on file size (July); see the sketch below.
– (Moved logrotate to 1pm rather than 4am.)
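To illustrate the policy change mentioned above, here is a minimal sketch (in Python, not Castor's actual code) contrasting LRU victim selection with a size-based one; the file-system calls and the idea of freeing a byte target are purely illustrative.

```python
# Illustrative sketch (not Castor's implementation) of two garbage-collection
# policies on a disk-server cache: LRU (least recently used first) versus the
# size-based default described on the slide (largest files first).
import os

def _take_until(ordered_paths, bytes_needed):
    """Accumulate files from the given order until enough space would be freed."""
    victims, freed = [], 0
    for path in ordered_paths:
        if freed >= bytes_needed:
            break
        victims.append(path)
        freed += os.path.getsize(path)
    return victims

def gc_candidates_lru(paths, bytes_needed):
    """LRU policy: evict the least recently accessed files first."""
    return _take_until(sorted(paths, key=os.path.getatime), bytes_needed)

def gc_candidates_by_size(paths, bytes_needed):
    """Size-based policy: evict the largest files first."""
    return _take_until(sorted(paths, key=os.path.getsize, reverse=True), bytes_needed)
```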
10
Castor Issues
Load-related issues on small/full service classes (e.g. AtlasScratchDisk, LHCbRawRDst):
– Load can become concentrated on one or two disk servers.
– Exacerbated by an uneven distribution of disk server sizes.
Solutions:
– Add more capacity; clean up.
– Changes to tape migration policies.
– Re-organisation of service classes.
11
Procurement
– All existing bulk capacity orders in production or being deployed.
– Problems with the SL08 generation overcome.
– Tenders under way for disk and tape.
Disk: anticipate 2.66 PB usable space.
– Vendor 24-day proving test using our tests, then
– re-install and 7-day acceptance tests by us.
CPU: anticipate 12k HEPSpec06.
– 14-day proving tests by vendor, then
– 14-day acceptance tests by us.
Evaluation based on 5-year Total Cost of Ownership.
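As a sketch of how a 5-year Total Cost of Ownership comparison works, the snippet below combines purchase price, power and maintenance into a cost per usable TB. Every number in it is hypothetical; none comes from the actual tender.

```python
# Hypothetical 5-year TCO comparison for a disk tender; all figures are made up
# for illustration and do not come from the RAL procurement.
def five_year_tco(purchase_price, usable_tb, power_kw, maintenance_per_year,
                  electricity_per_kwh=0.10, years=5):
    """Return total cost and cost per usable TB over the evaluation period."""
    energy_cost = power_kw * 24 * 365 * years * electricity_per_kwh
    total = purchase_price + energy_cost + maintenance_per_year * years
    return total, total / usable_tb

if __name__ == "__main__":
    bids = {
        "vendor_a": dict(purchase_price=250_000, usable_tb=900, power_kw=12, maintenance_per_year=5_000),
        "vendor_b": dict(purchase_price=220_000, usable_tb=850, power_kw=16, maintenance_per_year=4_000),
    }
    for name, bid in bids.items():
        total, per_tb = five_year_tco(**bid)
        print(f"{name}: total {total:,.0f} units, {per_tb:,.0f} per usable TB")
```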
12
Disk Server Outages by Cause (2011)
[Chart]
13
Disk Server Outages by Service Class (2011)
[Chart]
14
Disk Drive Failure – Year 2011
15
Double Disk Failures (2011)
In the process of updating the firmware on the particular batch of disk controllers.
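For context on why a flaky batch of disk controllers matters, the sketch below estimates the chance of a second drive failing during a RAID rebuild. The array size, annual failure rate and rebuild time are assumptions for illustration, not figures from the slides.

```python
# Rough estimate of double-disk-failure risk during a RAID rebuild.
# All parameters are assumptions for illustration; none come from the slides.
import math

def p_second_failure(n_remaining_drives, annual_failure_rate, rebuild_hours):
    """Probability that at least one surviving drive fails during the rebuild,
    treating failures as independent with a constant (exponential) rate."""
    per_drive_rate = annual_failure_rate / (365 * 24)      # failures per drive-hour
    p_single_survives = math.exp(-per_drive_rate * rebuild_hours)
    return 1 - p_single_survives ** n_remaining_drives

if __name__ == "__main__":
    # Example: a 16-drive array that has lost one drive, 3% AFR, 24-hour rebuild.
    print(f"{p_second_failure(15, 0.03, 24):.2%}")
```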
16
Disk Server Issues - Responses
New possibilities with Castor 2.1.9 or later:
– Draining (passive and active).
– Read-only server.
– Checksumming (2.1.9): easier to validate files, plus regular checks of files written.
All of these are used regularly when responding to disk server problems.
17
Data Loss Incidents
Summary of losses since GridPP26. Total of 12 incidents logged:
– 1 due to a disk server failure (loss of 8 files for CMS).
– 1 due to a bad tape (loss of 3 files for LHCb).
– 1 where files were in the Castor Nameserver but had no location (9 LHCb files).
– 9 cases of corrupt files; in most cases the files were old (and pre-date Castor checksumming).
Checksumming is in place for tape and disk files. Daily and random checks are made on disk files (see the sketch below).
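The sketch below shows the kind of daily/random checksum check described above. It assumes Adler-32 checksums (as Castor uses) and a simple catalogue mapping path to expected checksum; it is not the Tier1's actual verification tooling.

```python
# Sketch of a daily/random disk-file checksum verification, not the Tier1's
# actual scripts. Assumes Adler-32 checksums and a catalogue dict mapping
# file path -> expected checksum (hex string).
import random
import zlib

def adler32_of_file(path, chunk_size=4 * 1024 * 1024):
    """Compute the Adler-32 checksum of a file, reading it in chunks."""
    value = 1  # Adler-32 starting value
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            value = zlib.adler32(chunk, value)
    return f"{value & 0xFFFFFFFF:08x}"

def verify_sample(catalogue, sample_size=100):
    """Check a random sample of files against their catalogued checksums."""
    bad = []
    for path in random.sample(list(catalogue), min(sample_size, len(catalogue))):
        if adler32_of_file(path) != catalogue[path].lower():
            bad.append(path)
    return bad
```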
18
T10000 Tapes
Type | Capacity | In use | Total capacity
A    | 0.5 TB   | 5570   | 2.2 PB
B    | 1 TB     | 2170   | 1.9 PB (CMS)
C    | 5 TB     |        |
We have 320 T10KC tapes (capacity ~1.5 PB) already purchased. The plan is to move data (VOs) using the A tapes onto the C tapes, leapfrogging the Bs.
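A quick arithmetic check of the quoted figure, treating 5 TB as the nominal T10KC capacity (the slide's ~1.5 PB presumably allows some headroom; that interpretation is mine):

```python
# Quick sanity check on the purchased T10KC capacity (illustrative only).
tapes = 320
tb_per_tape = 5                      # nominal T10KC capacity in TB
raw_pb = tapes * tb_per_tape / 1000
print(f"{raw_pb:.2f} PB nominal")    # 1.60 PB, consistent with the quoted ~1.5 PB
```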
19
T10000C Issues
Failure of 6 out of 10 tapes.
– Current A/B failure rate is roughly 1 in 1000.
– After writing part of a tape, an error was reported.
Concerns are threefold:
– A high rate of write errors causes disruption.
– If tapes could not be filled, our capacity would be reduced.
– We were not 100% confident that data would be secure.
Firmware updated in the drives.
– 100 tapes now successfully written without problem.
In contact with Oracle. (See the sketch below for how unlikely 6 failures in 10 would be at the A/B rate.)
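To put the 6-out-of-10 failures in perspective against the roughly 1-in-1000 A/B rate, a simple binomial tail calculation (illustrative only):

```python
# How unlikely would 6 failures out of 10 tapes be if T10KC tapes failed at the
# ~1-in-1000 rate seen for A/B media? Binomial tail probability, for illustration.
from math import comb

def binomial_tail(n, k, p):
    """P(at least k failures out of n trials, each failing with probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"{binomial_tail(10, 6, 1/1000):.3g}")  # ~2e-16: effectively impossible by chance
```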
20
Long-Term Operational Issues
Building R89 (noisy power vs. EMC units); electrical discharge in the 11kV supply.
– Noisy electrical current: fixed by use of isolating transformers in appropriate places.
– Await final information on resolution of the 11kV discharge.
Asymmetric data transfer rates in/out of the RAL Tier1.
– Many possible causes: load, FTS settings, disk server settings, TCP/IP tuning, network (LAN & WAN performance); see the sketch below.
– Have modified FTS settings with some success.
– Looking at Tier1 to UK Tier2 transfers, as these are within GridPP.
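One reason TCP/IP tuning appears in the list above: a single TCP stream cannot move more than its window size per round trip. The sketch below makes that concrete; the window sizes and round-trip times are assumed values, not measurements from RAL.

```python
# Why TCP buffer tuning matters for WAN transfers: one TCP stream is limited to
# roughly window_size / RTT. The window sizes and RTTs below are assumptions.
def max_single_stream_gbps(window_bytes, rtt_ms):
    """Upper bound on one TCP stream's throughput in Gbit/s for a given window and RTT."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e9

for rtt in (1, 10, 50):   # e.g. LAN, UK Tier2, transatlantic (assumed RTTs)
    print(f"RTT {rtt:3d} ms: "
          f"64 KiB window -> {max_single_stream_gbps(64 * 1024, rtt):6.3f} Gbit/s, "
          f"8 MiB window -> {max_single_stream_gbps(8 * 1024 * 1024, rtt):6.2f} Gbit/s")
```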
21
Long-Term Operational Issues (continued)
BDII issues (the site BDII stops updating).
– Monitoring & re-starters in place; a watchdog sketch follows.
Packet loss on the RAL network link.
– Particular effect on LFC updates (reported by Atlas).
– Some evidence of being load related.
Some hangs within the Castor JobManager.
– Difficult to trace; very intermittent.
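A minimal sketch of the kind of BDII re-starter mentioned above. The LDAP freshness probe, service name and thresholds are assumptions, not the Tier1's actual monitoring.

```python
# Minimal watchdog sketch of a BDII "re-starter". The probe command, service
# name ("bdii") and thresholds are assumptions, not the Tier1's real tooling.
import subprocess
import time

CHECK_CMD = ["ldapsearch", "-x", "-LLL", "-h", "localhost", "-p", "2170",
             "-b", "o=grid", "GlueSiteUniqueID"]   # hypothetical freshness probe
RESTART_CMD = ["service", "bdii", "restart"]       # assumed service name

def bdii_is_healthy(timeout=30):
    """Treat the BDII as healthy if the LDAP query returns promptly with output."""
    try:
        out = subprocess.run(CHECK_CMD, capture_output=True, timeout=timeout)
        return out.returncode == 0 and b"GlueSiteUniqueID" in out.stdout
    except subprocess.TimeoutExpired:
        return False

def watchdog(interval=300, failures_before_restart=3):
    """Restart the service after several consecutive failed health checks."""
    failures = 0
    while True:
        failures = 0 if bdii_is_healthy() else failures + 1
        if failures >= failures_before_restart:
            subprocess.run(RESTART_CMD)
            failures = 0
        time.sleep(interval)
```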
22
Other Hardware Issues
The following were reported via the Tier1-Experiments Liaison Meeting:
– Isolating transformers in disk array power feeds (final resolution of a long-standing problem).
– Network switch stack failure.
– Network (transceiver) failure in a switch, in the link to the tape system.
– The CMS LSF machine was also turned off by mistake, resulting in a short outage for srm-cms.
– T10KC tapes.
– 11kV feed into the building: electrical discharge.
– Failure of the Site Access Router.
24
A couple of final comments
Disk server issues are the main area of effort for hardware reliability / stability... but do not forget the network.
Hardware that has performed reliably in the past may throw up a systematic problem.
25
Additional Slide: Procedures
Post Mortems: one Post Mortem during the 6 months since GridPP26.
– 10th May 2011: LFC outage after database update. A 1-hour outage following an At Risk, caused by a configuration error in Oracle ACL lists.
Disaster Management: triggered once (for the T10KC tape problem).