RDMS CMS Computing Activities: current status & participation in ARDA


RDMS CMS Computing Activities: current status & participation in ARDA
O.Kodolova, E.Tikhonenko
Meeting of Russia-CERN JWG on LHC computing
CERN, March 6, 2006

CMS software installed at RuTier2 LCG-2 sites:
- IHEP: VO-cms-slc3_ia32_gcc323
- INR: VO-cms-OSCAR_3_6_5_SLC3_dar; VO-cms-ORCA_8_7_1_SLC3_dar; VO-cms-slc3_ia32_gcc323; VO-cms-ORCA_8_10_1; VO-cms-CMKIN_4_4_0_dar
- ITEP: VO-cms-CMKIN_4_1_0_dar; VO-cms-CMKIN_4_2_0_dar; VO-cms-CMKIN_4_4_0_dar; VO-cms-PU-mu_Hit3653_g133; VO-cms-OSCAR_3_6_5_SLC3_dar; VO-cms-ORCA_8_7_1_SLC3_dar; VO-cms-slc3_ia32_gcc323; VO-cms-ORCA_8_7_5; VO-cms-COBRA_8_5_0
- JINR: VO-cms-CMKIN_4_1_0_dar; VO-cms-CMKIN_4_2_0_dar; VO-cms-CMKIN_4_4_0_dar; VO-cms-OSCAR_3_6_5_SLC3_dar; VO-cms-ORCA_8_7_1_SLC3_dar; VO-cms-ORCA_8_4_0; VO-cms-COBRA_8_5_0; VO-cms-ORCA_8_7_5; VO-cms-slc3_ia32_gcc323
- RRC KI: VO-cms-CMKIN_4_2_0_dar; VO-cms-OSCAR_3_6_5_SLC3_dar; VO-cms-ORCA_8_7_1_SLC3_dar; VO-cms-slc3_ia32_gcc323; VO-cms-ORCA_8_7_4
- SINP MSU: VO-cms-CMKIN_4_4_0_dar; VO-cms-OSCAR_3_6_5_SLC3_dar; VO-cms-ORCA_8_7_1_SLC3_dar; VO-cms-PU-mu_Hit3653_g133; VO-cms-ORCA_8_7_5; VO-cms-slc3_ia32_gcc323; VO-cms-COBRA_8_5_0

Usage of CPU resources at the Russian Tier2 sites, October 2005 – March 2006.
CMS jobs by site: PNPI – 30%, ITEP – 27%, JINR – 15%, SINP MSU – 13%, INR – 9%, IHEP – 5%, RRC KI – 1%

RDMS CMS Data Bases
ME1/1 Database environment:
- Database server: Oracle (provided by CERN IT and JINR LIT)
- Web server: Internet Information Server (provided by CERN IT)
ME1/1 Database user interfaces: the web interface provides the initial filling of the database, different access levels for users, information search on various criteria, and the adding and updating of data.
HE Database: tube length measurements; radioactive source calibration.
HF Database: wedge calibration; beam wedge calibration; channel calibration.
[Diagram: databases & storage system structure]

RDMS CMS Data Bases activities were reported at:
- 10th RDMS CMS Conference, St. Petersburg, September 10-17, 2005:
  A.Petrosyan, "Database subsystem for information support of HE calibration process";
  D.Oleynik, "Design and realization of CMS subdetector database systems".
  See http://agenda.cern.ch/fullAgenda.php?ida=a052044
- 20th Int. Symposium on Nuclear Electronics and Computing (NEC'2005), Varna, September 12-18, 2005:
  I.Filozova, "RDMS CMS Data Bases: Current Status, Development and Plans".
  See http://rdms-cms.jinr.ru/NEC-2005/NEC2005/irina.ppt
- CMS HCAL Software Preparedness Review, CERN, February 23, 2006:
  A.Petrosyan, "HE data base: current status of JINR work".
  See http://agenda.cern.ch/fullAgenda.php?ida=a061147

HE Calibration Data Base
The HE Calibration Data Base is part of the HE calibration tools, a fully interactive package based on C++, ROOT and an Oracle DB. The package makes it possible to analyse radioactive source, LED, laser and pedestal data. The results are stored in the calibration data base (located at CERN) with remote access by means of an application program interface.
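The slides describe the API only at a high level. As a rough illustration of what a remote write into the calibration DB amounts to, here is a minimal Python sketch using cx_Oracle; the table, columns, connection string and values are assumptions for illustration, not the actual schema or the C++ API mentioned above.

```python
# Minimal sketch only: the he_calib_result table, its columns and the
# connection alias are hypothetical; the real package uses a C++/ROOT API.
import datetime
import cx_Oracle  # Oracle client bindings for Python

def store_calibration(conn, channel_id, source, gain, meas_time):
    """Insert one calibration result; the schema is illustrative."""
    cur = conn.cursor()
    cur.execute(
        """INSERT INTO he_calib_result (channel_id, source, gain, meas_time)
           VALUES (:1, :2, :3, :4)""",
        (channel_id, source, gain, meas_time),
    )
    conn.commit()

# Connection string assumed; the DB is hosted on a CERN Oracle server
conn = cx_Oracle.connect("he_user/secret@cern_oracle")
store_calibration(conn, channel_id=1024, source="laser->HPD", gain=0.93,
                  meas_time=datetime.datetime(2006, 3, 1))
```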

Calibration DB & API status
[Diagram: calibration dataflow]
- Current support: storing calibration results for HE- (laser->HPD, LED->HPD, RBX, radioactive source->tile, signal leveling)
- The DB resides on a CERN Oracle server: ~12000 records, ~250 MB
- Raw data are automatically uploaded to JINR storage (~200 GB)
- The software is ready for HE calibration

[Screenshots: GUIs for the Laser->HPD, Radioactive Source, Pedestal and LED->HPD DBs]

Nearest plans for the HE databases (before the magnet test):
1. Unification of all HE databases (Equipment, Construction, Calibration) in one schema: provide equipment mapping and a full chain of relations between the equipment description and the calibration results. Contact persons: Danila Oleynik, Artem Petrosyan, Alexandre Vichnevski.
2. Binding of the HE equipment/construction DB to the CMS DDD. Contact persons: Frank Glege (CERN); Danila Oleynik, Artem Petrosyan (JINR).
3. API modification for the calibration software; building of additional interfaces for the reconstruction software. Contact persons: Zhen Xie (CERN); Roman Semenov, Peter Moissenz (JINR).
4. Evolution of the web interface for the equipment/construction DB; building of a new web interface for the calibration DB. Contact persons: Danila Oleynik, Vitaliy Smirnov.

Participation in ARDA tasks in 2005: work done and visits
- Distribution of CMS software DAR files over storage elements
- Production job monitoring based on MonALISA
- Use of PROOF for CMS users
Visits in 2005: 2 persons (3.5 months) – O.Kodolova, S.Berezhnoj

Dashboard: CMS Job Monitoring
[Architecture diagram: collectors for R-GMA and MonALISA constantly retrieve job information coming from the grid components (RB, CE, WN, via the R-GMA client API) and from the submission tools; the data are stored in the ASAP database (PostgreSQL/SQLite) and exposed through a web service interface and a PHP web UI as snapshots, statistics, job information and plots.]
RDMS CMS staff have started participating in the ARDA activities on monitoring for CMS.
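As a rough sketch of what such a collector does (the fetch function, schema and poll interval below are assumptions, not the actual Dashboard code), the loop periodically pulls job status records from a monitoring source and writes them into a local store:

```python
# Illustrative collector loop; fetch_job_records() stands in for a query
# to R-GMA or MonALISA and is purely hypothetical.
import sqlite3
import time

def fetch_job_records():
    """Placeholder for a monitoring-source query; returns
    (grid_job_id, status, site) tuples."""
    return [("job-001", "Running", "JINR-LCG2")]

db = sqlite3.connect("dashboard.db")
db.execute("""CREATE TABLE IF NOT EXISTS jobs
              (grid_job_id TEXT PRIMARY KEY, status TEXT, site TEXT)""")

while True:
    for job_id, status, site in fetch_job_records():
        # Keep only the latest known state of each job
        db.execute("INSERT OR REPLACE INTO jobs VALUES (?, ?, ?)",
                   (job_id, status, site))
    db.commit()
    time.sleep(60)  # poll interval chosen arbitrarily for the sketch
```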

Participation in ARDA tasks in 2005
For the first prototype of task monitoring for the production jobs:
- The production templates to be loaded into RefDB were instrumented to report the job exit status, the failure reasons and the exit status of the processing steps of a production job (staging in, execution, staging out).
- The McRunjob submission part (without BOSS) was modified to send meta-information about the task and the Grid job id to the Dashboard at submission time.
These modifications should be included in the official McRunjob release and in the production templates on the RefDB side.
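Such reporting was typically done by sending key-value pairs to a MonALISA collector over UDP with the ApMon library. Below is a minimal Python sketch; the destination host, port and all parameter names are assumptions for illustration, not the actual Dashboard configuration:

```python
# Minimal sketch of reporting job meta-information via ApMon (MonALISA).
# Destination and parameter names are assumed, not the real configuration.
from apmon import ApMon

apm = ApMon(['dashboard-collector.cern.ch:8884'])  # hypothetical collector
apm.sendParameters(
    'cms-production',   # cluster name: here, the production task
    'job-12345',        # node name: here, the job identifier
    {
        'GridJobID': 'https://rb.example.org:9000/abc123',  # assumed format
        'TaskName': 'mc-run-2006-03',
        'ExitStatus': 0,
        'Step': 'staging-out',
    },
)
apm.free()  # shut down ApMon's background threads
```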

Participation in ARDA tasks: plans for 2006 (approved by CMS management)
- Participation in SC4: the general integration and stress test of the Tier-2s, and participation in the CMS integrated Computing and Analysis Challenge (CSA06): running the complete CMS workflows on the LCG grid at 1/4 of the nominal DAQ rate
- Monitoring of the production and analysis jobs based on MonALISA (to be transferred into the new production system as soon as it is ready):
  a) monitoring of errors (a unified common system for LCG and local farms)
  b) monitoring of the WNs, SEs, network and installed software
  c) participation in the design of the monitoring tables in the Dashboard
- Use of PROOF for CMS users (a usage sketch follows this list):
  a) understanding of the use cases
  b) integration into the CMS computing model
  c) participation in a PROOF testbed as part of SC4 (to be approved by CMS management)
- Portability of the Monitor system
- Participation in the CRAB-Task Manager integration
- Participation in the CMS data transfer test (PhEDEx + Dashboard)
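For the PROOF item above, here is a minimal PyROOT sketch of what interactive parallel analysis looks like from the user side; the master host, file locations and selector name are assumptions for illustration:

```python
# Minimal PyROOT sketch of a PROOF session; host, dataset and selector
# names are hypothetical.
import ROOT

# Connect to a (hypothetical) PROOF master; workers are enrolled by the master
proof = ROOT.TProof.Open("proof-master.example.org")

# Build a chain of event trees and let PROOF process it in parallel
chain = ROOT.TChain("Events")
chain.Add("root://se.example.org//store/user/demo/events_*.root")
chain.SetProof()                 # route processing through the PROOF session
chain.Process("MySelector.C+")   # a user-supplied TSelector, compiled with ACLiC

proof.Close()
```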

Participation in ARDA tasks: visit planning for 2006 (draft version), 4.5 months in total:
- Alexandre Berejnoi, 1 month in March: portability of the Monitor system; participation in the CRAB-Task Manager integration; preparation for and/or participation in SC4
- Olga Kodolova, 1.5 months in May-July: use of PROOF for CMS users; monitoring of the production and analysis jobs based on MonALISA
- Olga Kodolova, 1 month in October-November: monitoring of the production and analysis jobs based on MonALISA; participation in SC4
- Alexandre Berejnoi, 1 month in November-December (?)
This planning is preliminary; the tasks and visit durations can be changed in accordance with current needs.

Participation in data transfer tests
An aggregate 20 MB/s transfer load to all participating CMS sites; for this it is necessary:
- to set up 200-500 GB of storage at every RDMS site to host the transfer-load files, plus storage for incoming files from neighbours;
- to subscribe every site in PhEDEx to the transfer loads of its neighbours.
Fully ready: end of March.
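For a sense of scale (a back-of-the-envelope estimate, not a figure from the slides; the even per-site share is an assumption), the buffer sizing implies roughly the following turnover:

```python
# Back-of-the-envelope turnover estimate for the transfer-load buffer.
rate_mb_s = 20.0   # aggregate transfer load, MB/s
buffer_gb = 350.0  # mid-range of the 200-500 GB per-site buffer
n_sites = 7        # RDMS Tier-2 sites listed earlier; even share assumed

per_site_rate = rate_mb_s / n_sites                       # MB/s per site
hours_to_cycle = buffer_gb * 1024 / per_site_rate / 3600  # hours to refill

print(f"Per-site rate: {per_site_rate:.1f} MB/s")          # ~2.9 MB/s
print(f"Time to cycle a {buffer_gb:.0f} GB buffer: "
      f"{hours_to_cycle:.0f} hours")                       # ~35 hours
```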