RDMS CMS Computing Activities: current status & participation in ARDA
O.Kodolova, E.Tikhonenko
Meeting of Russia-CERN JWG on LHC computing, CERN, March 06, 2006
CMS software installed at RuTier2 LCG-2 sites:
IHEP: VO-cms-slc3_ia32_gcc323
INR: VO-cms-OSCAR_3_6_5_SLC3_dar; VO-cms-ORCA_8_7_1_SLC3_dar; VO-cms-slc3_ia32_gcc323; VO-cms-ORCA_8_10_1; VO-cms-CMKIN_4_4_0_dar
ITEP: VO-cms-CMKIN_4_1_0_dar; VO-cms-CMKIN_4_2_0_dar; VO-cms-CMKIN_4_4_0_dar; VO-cms-PU-mu_Hit3653_g133; VO-cms-OSCAR_3_6_5_SLC3_dar; VO-cms-ORCA_8_7_1_SLC3_dar; VO-cms-slc3_ia32_gcc323; VO-cms-ORCA_8_7_5; VO-cms-COBRA_8_5_0
JINR: VO-cms-CMKIN_4_1_0_dar; VO-cms-CMKIN_4_2_0_dar; VO-cms-CMKIN_4_4_0_dar; VO-cms-OSCAR_3_6_5_SLC3_dar; VO-cms-ORCA_8_7_1_SLC3_dar; VO-cms-ORCA_8_4_0; VO-cms-COBRA_8_5_0; VO-cms-ORCA_8_7_5; VO-cms-slc3_ia32_gcc323
RRC KI: VO-cms-CMKIN_4_2_0_dar; VO-cms-OSCAR_3_6_5_SLC3_dar; VO-cms-ORCA_8_7_1_SLC3_dar; VO-cms-slc3_ia32_gcc323; VO-cms-ORCA_8_7_4
SINP MSU: VO-cms-CMKIN_4_4_0_dar; VO-cms-OSCAR_3_6_5_SLC3_dar; VO-cms-ORCA_8_7_1_SLC3_dar; VO-cms-PU-mu_Hit3653_g133; VO-cms-ORCA_8_7_5; VO-cms-slc3_ia32_gcc323; VO-cms-COBRA_8_5_0
CMS jobs at Russian Tier2 sites (October 2005 – March 2006)
Usage of CPU resources at the Russian Tier2 sites over this period, CMS share by site: PNPI – 30%, ITEP – 27%, JINR – 15%, SINP MSU – 13%, INR – 9%, IHEP – 5%, RRC KI – 1%
RDMS CMS Data Bases
ME1/1 Database environment: database server – Oracle (provided by CERN IT and JINR LIT); web server – Internet Information Server (provided by CERN IT)
ME1/1 Database user interfaces: a web interface provides the initial filling of the database, different access levels for users, information search by various criteria, and adding and updating of data
HE Database: tube length measurements; radioactive source calibration
HF Database: wedge calibration; beam wedge calibration; channel calibration
Diagram: databases & storage system structure
RDMS CMS Data Bases activities were reported at:
10th RDMS CMS Conference, St. Petersburg, September 10-17, 2005:
A.Petrosyan, "Database subsystem for information support of HE calibration process"
D.Oleynik, "Design and realization of CMS subdetector database systems"
See http://agenda.cern.ch/fullAgenda.php?ida=a052044
20th Int. Symposium on Nuclear Electronics and Computing (NEC'2005), Varna, September 12-18, 2005:
I.Filozova, "RDMS CMS Data Bases: Current Status, Development and Plans"
See http://rdms-cms.jinr.ru/NEC-2005/NEC2005/irina.ppt
CMS HCAL Software Preparedness Review, CERN, February 23, 2006:
A.Petrosyan, "HE data base: current status of JINR work"
See http://agenda.cern.ch/fullAgenda.php?ida=a061147
HE Calibration Data Base
The HE Calibration Data Base is part of the HE calibration tools, a fully interactive package based on C++, ROOT and an Oracle DB. The package allows analysis of radioactive source, LED, laser and pedestal data. The results are stored in the calibration database (located at CERN), with remote access provided by means of an application program interface.
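As a rough illustration only (the class, method and connect-string names below are invented for this sketch and are not the real RDMS API, which would talk to Oracle through its own C++/ROOT interface), storing a set of calibration constants through such an application program interface could look like this:

// Hedged sketch: CalibResult, HECalibWriter and the connect string are
// illustrative placeholders, not the actual HE calibration package code.
#include <iostream>
#include <string>
#include <vector>

struct CalibResult {          // one calibration constant per HE channel
    int    eta, phi, depth;   // channel coordinates
    double gain;              // e.g. source/LED/laser response
    double error;             // statistical uncertainty
};

class HECalibWriter {
public:
    explicit HECalibWriter(const std::string& connect) : connect_(connect) {}
    // In the real package this would open an Oracle session and insert one
    // row per channel inside a single transaction; here we only print.
    void store(const std::string& runTag, const std::vector<CalibResult>& results) {
        std::cout << "would write " << results.size() << " rows for run "
                  << runTag << " to " << connect_ << std::endl;
    }
private:
    std::string connect_;
};

int main() {
    HECalibWriter db("oracle://cms_he_calib@cern");   // calibration DB at CERN (placeholder)
    CalibResult r;
    r.eta = 1; r.phi = 1; r.depth = 1; r.gain = 1.02; r.error = 0.01;  // dummy constant
    std::vector<CalibResult> results;
    results.push_back(r);
    db.store("HE_source_scan_2006_03", results);      // hypothetical run tag
    return 0;
}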
Calibration DB & API status
Calibration dataflow: currently supported – storing calibration results for HE- (laser->HPD, LED->HPD, RBX, radioactive source->tile, signal leveling)
The DB is hosted on the CERN Oracle server: ~12000 records, ~250 MB
Raw data are automatically uploaded to JINR storage (~200 GB)
The software is ready for HE calibration
Screenshots: GUIs for the Laser->HPD, LED->HPD, Radioactive Source and Pedestal DBs
Nearest plans for HE Databases (before the magnet test):
1. Unification of all HE databases (Equipment, Construction, Calibration) into one schema: provide the equipment mapping and a full chain of relations between the equipment description and the calibration results. Contact persons: Danila Oleynik, Artem Petrosyan, Alexandre Vichnevski
2. Binding of the HE equipment/construction DB to the CMS DDD. Contact persons: Frank Glege (CERN); Danila Oleynik, Artem Petrosyan (JINR)
3. API modification for the calibration software; building additional interfaces for the reconstruction software. Contact persons: Zhen Xie (CERN); Roman Semenov, Peter Moissenz (JINR)
4. Evolution of the web interface for the equipment/construction DB; building a new web interface for the calibration DB. Contact persons: Danila Oleynik, Vitaliy Smirnov
Participation in ARDA tasks in 2005: work done and visits
Work done:
- distribution of CMS software DAR files over storage elements
- production job monitoring based on MonALISA
- use of PROOF for CMS users
Visits in 2005: 2 persons (3.5 months) – O.Kodolova, S.Berezhnoj
Dashboard: CMS Job Monitoring
Architecture diagram: submission tools and jobs on the RB/CE/WN report through R-GMA and MonALISA; dedicated collectors (R-GMA client API, MonALISA web-service interface) constantly retrieve job information into the ASAP/Dashboard database (Postgres, SQLite); a PHP web UI serves snapshots, statistics, job information and plots.
RDMS CMS staff have started participating in the ARDA activities on monitoring for CMS.
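A minimal sketch of the collector idea shown in the diagram, written in C++ only for consistency with the rest of this document (the real Dashboard collectors are not C++, and every name below is invented): periodically pull new job-status records from a monitoring source and keep the latest state per job in the dashboard store.

// Hedged sketch: fetchNewRecords() stands in for the R-GMA/MonALISA client,
// the std::map stands in for the Postgres/SQLite dashboard database.
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct JobRecord {
    std::string gridJobId;  // Grid job identifier
    std::string task;       // task the job belongs to
    std::string status;     // submitted / running / done / failed
};

// Stand-in for one poll of the monitoring source.
std::vector<JobRecord> fetchNewRecords() {
    std::vector<JobRecord> out;
    JobRecord r; r.gridJobId = "job-001"; r.task = "prod-task-A"; r.status = "running";
    out.push_back(r);       // dummy record for illustration
    return out;
}

// One polling cycle: retrieve new job information and update the snapshot
// that the web UI would later turn into statistics and plots.
void collectOnce(std::map<std::string, JobRecord>& store) {
    std::vector<JobRecord> recs = fetchNewRecords();
    for (size_t i = 0; i < recs.size(); ++i)
        store[recs[i].gridJobId] = recs[i];
}

int main() {
    std::map<std::string, JobRecord> dashboard;
    collectOnce(dashboard);   // in the real service this runs periodically
    std::cout << dashboard.size() << " job(s) in the dashboard snapshot\n";
    return 0;
}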
Participation in ARDA tasks in 2005 (continued)
For the first prototype of task monitoring for the production jobs:
- The production templates to be loaded into RefDB were instrumented to report the job exit status, the failure reasons and the exit status of the processing steps of the production jobs (staging in, execution, staging out).
- The McRunjob submission part (without BOSS) was modified to send meta-information about the task and the Grid job id to the Dashboard at submission time.
- These modifications should be included in the official McRunjob release and in the production templates on the RefDB side.
Participation in ARDA tasks: plans for 2006 (approved by CMS management)
- Participation in SC4, the general integration and stress test of Tier-2s, and in the CMS integrated Computing and Analysis Challenge (CSA06): running the complete CMS workflows on the LCG Grid at 1/4 of the nominal DAQ rate
- Monitoring of the production and analysis jobs based on MonALISA (to be transferred into the new production system as soon as it is ready):
a) monitoring of errors (a unified common system for LCG and local farms)
b) monitoring of the WNs, SEs, network and installed software
c) participation in the design of the monitoring tables in the Dashboard
- Use of PROOF for CMS users (see the sketch after this list):
a) understanding of the use cases
b) integration into the CMS computing model
c) participation in a PROOF testbed as part of SC4 (to be approved by CMS management)
- Portability of the Monitor system
- Participation in the CRAB-Task Manager integration
- Participation in the CMS data transfer test (PhEDEx + Dashboard)
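A minimal ROOT/PROOF sketch of the intended user workflow, assuming a ROOT installation with PROOF enabled; the master host, input file pattern, tree name and selector are placeholders, not part of the plan above:

// Hedged sketch: connect to a PROOF cluster and process a chain of files
// with a user-supplied TSelector.
#include "TChain.h"
#include "TProof.h"

void proof_sketch() {
    TProof::Open("master.example.cern.ch");   // connect to a PROOF cluster (placeholder host)
    TChain chain("Events");                   // tree name used by the analysis (placeholder)
    chain.Add("dst/*.root");                  // input data files (placeholder)
    chain.SetProof();                         // route Process() through PROOF
    chain.Process("MySelector.C+");           // user's TSelector-based analysis (placeholder)
}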
Participation in ARDA tasks: visit planning for 2006 (draft version), 4.5 months in total:
- Alexandre Berejnoi, 1 month in March: portability of the Monitor system; participation in the CRAB-Task Manager integration; preparation for and/or participation in SC4
- Olga Kodolova, 1.5 months in May-July: use of PROOF for CMS users; monitoring of the production and analysis jobs based on MonALISA
- Olga Kodolova, 1 month in October-November: monitoring of the production and analysis jobs based on MonALISA; participation in SC4
- Alexandre Berejnoi, 1 month in November-December (?)
This planning is preliminary; tasks and visit durations may be changed in accordance with current needs.
Participation in data transfer tests
An aggregate 20 MB/s transfer load to all participating CMS sites; for this it is necessary:
- to set up 200-500 GB of storage at every RDMS site to host the transfer-load files, plus storage for the incoming files from neighbours
- to subscribe every site in PhEDEx to the transfer loads of its neighbours
Fully ready – end of March