CERN IT Department, CH-1211 Genève 23, Switzerland
CCRC'08 Review from a DM perspective
Alberto Pace (with slides from T. Bell, F. Donno, D. Duellmann, M. Kasemann, J. Shiers, …)
Before the main topic
Safety reminder
–The computer center has different safety requirements than normal offices
–This is why authorization is needed to enter!
–This is why there are safety courses!
–Noise above the level acceptable for long-term work
–Wind above the level acceptable for long-term work
–False floor, 1 meter deep!
–No differential power switch!!
In case of accident, call the fire brigade
CCRC'08 Wiki site
–https://twiki.cern.ch/twiki/bin/view/LCG/WLCGCommonComputingReadinessChallenges
Ongoing challenge with all 4 experiments
Online and offline databases

CPU Usage ATLAS/CMS DBs

Physical Reads

Network traffic
DB service – some observations
In general, DB load was still dominated by activities that did not scale up significantly during CCRC:
–load changes caused by CCRC on monitoring, work-flow and production systems were smaller than e.g. fluctuations between software releases
–the major contribution scaling with reconstruction jobs was not yet visible at CERN and Tier-1 sites
Exception: ATLAS reprocessing at BNL, TRIUMF and NDGF
–increased dCache load on calibration files (POOL) introduced a bottleneck
–consequence: extremely long (idle) database connections on the conditions database
–CORAL failover between Tier-1 sites worked
–DB session limits increased, session sniping added, dCache pool for calibration files added
The DB service ran smoothly and without major disruptions:
–as usual, several node reboots had minor impact thanks to the cluster architecture
–a 2h Streams intervention (downstream capture) was scheduled during CCRC in agreement with the experiments and service coordination
Castor and Grid Data Management

Tier-0 to Tier-1 Exports

February Summary

Not limited by Castor

Successful Stage-in test

SRM-2... Working

TAPE issues
Total performance to tape
–ALICE and LHCb are running Castor without policies, so around 100% improvement in write performance is expected once policies are applied
–With simulated file sizes, ATLAS data rates have improved to 30 MB/s writing
–Focus on file size and policies has shown some improvements in write performance
–Read efficiency remains low and dominates drive utilisation, due to the low number of files read per mount and to non-production users
Tape usage is read-dominated
–Random reads dominate drive time (90% reading)
–Writing is under the control of Castor policies
–Reading is much more difficult to improve from the Castor side
Production vs users
–Data retrieved for the CCRC period for CMS
–CMS production runs under the cmsprod and phedex accounts (25% of total)
–Requests for tape recalls are dominated by non-production users
–Equivalent data for ATLAS shows production requests < 5%
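The production share quoted above can be estimated by aggregating tape-recall accounting records per requesting account. A minimal sketch; the account names cmsprod and phedex come from the slide, while the record format, function name, and sample numbers are hypothetical illustrations:

```python
from collections import defaultdict

# Accounts named on the slide as CMS production
PRODUCTION_ACCOUNTS = {"cmsprod", "phedex"}

def production_share(records):
    """Return the fraction of recalled bytes requested by production accounts.

    records: iterable of (account, bytes_recalled) pairs (hypothetical format).
    """
    totals = defaultdict(int)
    for account, nbytes in records:
        totals[account] += nbytes
    total = sum(totals.values())
    prod = sum(v for a, v in totals.items() if a in PRODUCTION_ACCOUNTS)
    return prod / total if total else 0.0

# Illustrative numbers only: 25 of 100 recalled units are production
records = [("cmsprod", 10), ("phedex", 15), ("user_a", 40), ("user_b", 35)]
print(production_share(records))  # 0.25
```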
Options
–Do nothing: hope things work out OK
–Tape prioritization in Castor: complete the minimum implementation of VDQM2 and tape-queue prioritization; a new long-term strategy may be necessary
–Dedicate resources: fragmentation risks
–Hardware investment: purchase 50 tape drives and servers; cost is 15 kCHF/drive and 6 kCHF/tape server, total 1050 kCHF
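The 1050 kCHF figure follows from the unit costs on the slide, assuming one tape server per drive (an assumption: the slide says "50 tape drives and servers" without giving the server count explicitly):

```python
# Hardware-investment estimate from the slide's unit costs.
N_DRIVES = 50
N_SERVERS = 50          # one tape server per drive (assumption)
COST_DRIVE_KCHF = 15
COST_SERVER_KCHF = 6

# 50 * 15 + 50 * 6 = 750 + 300
total_kchf = N_DRIVES * COST_DRIVE_KCHF + N_SERVERS * COST_SERVER_KCHF
print(total_kchf)  # 1050
```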
Problems reported
Castor
–Invalid checksum value returned by the CASTOR gridftp2 server (reported by CMS on 05/02). FIXED on 07/02
–Gsiftp TURLs returned by CASTOR are relative (reported by S2 and CMS on 06/02). FIXED on 07/02
–Unable to map request to space for policy TRANSFER_WAN (reported by CMS on 07/02). FIXED on 08/02
–The srmDaemon attempts to free an unallocated pointer and crashes (reported by CNAF). FIXED on 14/02
–Some of the databases at CERN have shown an index to be missing (found by S2). FIXED on 15/02
–Insufficient user privileges to make a request of type StagePutDoneRequest in service class 'atldata' (reported by S2 and ATLAS on 19/02) ☺ PutDone executed by and allowed for (root,root). To be fixed; workaround provided on 23/02
Castor
–Missing access control on spaces based on VOMS groups and roles (reported by ATLAS/LHCb on 19/02). Followed up by the Storage Solution WG
–Could not get user information: VOMS credential ops does not match grid mapping dteam (reported by S2 and CNAF on 21/02) ☹ Not yet understood
–Error creating statement, Oracle code: ORA-12154: TNS: could not resolve the connect identifier specified (reported by S2 and CNAF on 12/02). Not yet understood ☞ It happens at service startup; a restart cures the problem
–Server unresponsive at RAL? Space token ATLASDATADISK does not exist (reported by S2 and ATLAS on 28/02). Number of threads increased from 100 to 150 (28/02)
Castor summary
–10 software problems reported, no major problems
–6 problems fixed (in 2–3 days on average)
–Developers and operations people were very responsive
DPM
–Default ACLs on directories do not work (reported by ATLAS on 13/02). FIXED (certified)
–Slow file removal (reported by ATLAS on 22/02): ext3 filesystems are much slower than xfs for delete operations (2048 files of 1.5 GB removed in 90 minutes on ext3 against 5 seconds on xfs; tests performed on 25/02)
–DPM is being certified and will be the release available for CCRC'08 in May
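The ext3-vs-xfs delete test above can be reproduced with a small timing script. A minimal sketch, with file count and size scaled down from the slide's 2048 × 1.5 GB; the function name is illustrative, and the directory is left to the caller so the script can be pointed at mount points on different filesystems:

```python
import os
import tempfile
import time

def time_removal(n_files=64, size_bytes=4096, dir_path=None):
    """Create n_files files of size_bytes each, then time their removal.

    Run with dir_path on an ext3 mount and on an xfs mount to compare
    delete performance (a toy version of the test on the slide).
    """
    with tempfile.TemporaryDirectory(dir=dir_path) as d:
        paths = []
        for i in range(n_files):
            p = os.path.join(d, f"f{i}")
            with open(p, "wb") as f:
                f.write(b"\0" * size_bytes)
            paths.append(p)
        t0 = time.perf_counter()   # monotonic clock for the delete phase only
        for p in paths:
            os.remove(p)
        return time.perf_counter() - t0

print(time_removal(8, 1024))  # elapsed seconds for the delete phase
```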
Conclusion
CCRC'08 is a success so far
–All DM software and tools have been able to scale to the challenge and beyond
–All is well under control in both the database and data management areas
Strategic directions remain where investigations and major improvements or simplifications need discussion:
–Improve efficiency for analysis
–The tape area in general
–Service for the online database, piquet service for support
–Synergies between DM tools and Castor
–Job scheduling in Castor, improved/common database schema for Grid DM tools and Castor
–...