RDMS CMS Computing
V.Ilyin, V.Gavrilov, O.Kodolova, V.Korenkov, E.Tikhonenko
Meeting of Russia-CERN JWG on LHC computing, CERN, September 27, 2006

Outline
- RDMS CMS Computing Model
- RDMS CMS DBs
- LCG MCDB (usage in CMS)
- participation in ARDA - CMS Dashboard (job monitoring)
- participation in SC4
- participation in MTCC 2006
- plans for CSA06

The proposal that CERN act as the T1 for RDMS is now under active discussion.

RDMS CMS Advanced Tier2 Cluster as a part of RuTier2
Conception:
- a Tier2 cluster of institutional computing centers with partial T1 functionality
- summary resources at 1.5 times the level of the canonical Tier2 center for the CMS experiment, plus a Mass Storage System
- ~5-10% of RAW data, ESD/DST for the design and checking of AOD selections, and AOD for analysis (depending on the concrete task)
Basic functions: analysis; simulation; user data support; calibration; reconstruction algorithm development …
Host Tier1 in the LCG infrastructure: CERN (basically for storage of MC data); access to real data through the T1-T2 "mesh" model for Europe.
Participating institutes:
- Moscow: ITEP, SINP MSU, LPI RAS
- Moscow region: JINR, IHEP, INR RAS
- St. Petersburg: PNPI RAS
- Minsk (Belarus): NCPHEP
- Erevan (Armenia): ErPhI
- Kharkov (Ukraine): KIPT
- Tbilisi (Georgia): HEPI

Tier1 for the RDMS CMS T2 Cluster
Discussions are now in progress between the CMS, RDMS and LCG management. The basic point under discussion is that CERN will serve as the T1 center for RDMS. We believe this solution provides a strong basis for meeting the RDMS CMS computing model requirements.

Tier1 for the RDMS CMS T2 Cluster (cont.)
A short summary of the technical conclusions of the meeting of September 19 (present: T. Virdee, S. Belforte, D. Newbold, S. Ilyin, I. Golutvin, L. Robertson, C. Eck):
- CERN agreed to act as a CMS T1 centre for the purposes of receiving Monte Carlo data from Russian and Ukrainian T2 centres (RU-T2). CERN will take custodial responsibility for data produced at these centres, and for performing any necessary reprocessing, as described in the CMS CTDR.
- CERN will also act as the distribution point for RU-T2 access to CMS general AOD and RECO data. Disk and tape resources additional to those previously requested by CMS will be provided at CERN for the storage and serving of RU-T2 simulated data.
- CERN will act as a special-purpose T1 centre for CMS, taking a share of the general distribution of AOD and RECO data as required. This implies the establishment of data transfer routes to, in principle, any CMS T2. CMS intends to use the second copy of AOD and RECO at CERN to improve the flexibility and robustness of the computing system, and not as a general-purpose T1.
- CMS will work with CERN to understand the constraints on CERN external dataflows. It was noted that the network provision from RU-T2 sites to CERN was sufficient under the current understanding of the required dataflows.

Databases Status (data / interface / tested / production)
- Condition data: calibration (pedestals, rad. source, LED->HPD, Laser->HPD, leveling coeff. ADC/GeV - in progress) and operating parameters - C++ API, WEB interface, PL/SQL - tested: yes
- Construction data: ME 1/1, HE, HF - WEB interface - tested: yes
- Equipment data: ME 1/1, HE, HF - WEB interface - tested: yes
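To make the conditions-data access concrete, here is a minimal sketch of the kind of IOV-based lookup such a calibration database supports. It uses SQLite as a stand-in for the real Oracle back end, and the table and column names (pedestals, iov_since, etc.) are illustrative assumptions, not the actual HCAL conditions schema.

```python
import sqlite3

# SQLite stands in for the real Oracle back end; table and column names
# are illustrative assumptions, not the actual HCAL conditions schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pedestals (
    channel_id INTEGER,
    iov_since  INTEGER,   -- first run for which these values are valid
    mean       REAL,
    width      REAL
);
INSERT INTO pedestals VALUES (1001, 1, 2.41, 0.35), (1001, 120, 2.44, 0.34);
""")

def pedestal_for_run(channel_id, run):
    """Return the (mean, width) pair whose interval of validity covers the run."""
    return conn.execute(
        "SELECT mean, width FROM pedestals "
        "WHERE channel_id = ? AND iov_since <= ? "
        "ORDER BY iov_since DESC LIMIT 1",
        (channel_id, run),
    ).fetchone()

print(pedestal_for_run(1001, 150))   # -> (2.44, 0.34)
```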

HE Calibration Data Flow
[diagram: calibration data from LED, laser and radioactive-source runs, used for monitoring HE stability, flow from OMDS through ORCON to ORCOF]
- OMDS - Online Master Data Storage
- ORCON - Offline Reconstruction Conditions DB, ONline subset
- ORCOF - Offline Reconstruction Conditions DB, OFfline subset

HE Calibration and Raw Data Flow
[diagram: calibration and raw data flow through the same OMDS -> ORCON -> ORCOF chain]
- OMDS - Online Master Data Storage
- ORCON - Offline Reconstruction Conditions DB, ONline subset
- ORCOF - Offline Reconstruction Conditions DB, OFfline subset
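The OMDS -> ORCON -> ORCOF chain is essentially a staged copy of conditions payloads keyed by their interval of validity. The sketch below illustrates the idea of such an online-to-offline (O2O) transfer under simplified assumptions: SQLite stands in for the Oracle databases, and the he_calib table and its columns are hypothetical, not the real production schema or O2O tooling.

```python
import sqlite3

# SQLite stands in for the Oracle online (OMDS-like) and offline (ORCON-like)
# databases; the he_calib table and its columns are hypothetical.
online, offline = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for db in (online, offline):
    db.execute("CREATE TABLE he_calib (iov_since INTEGER, channel_id INTEGER, coeff REAL)")

online.executemany("INSERT INTO he_calib VALUES (?, ?, ?)",
                   [(1, 1001, 0.98), (1, 1002, 1.02), (50, 1001, 0.99)])

def o2o_sync():
    """Copy calibration payloads with IOVs not yet present in the offline DB."""
    last = offline.execute("SELECT COALESCE(MAX(iov_since), -1) FROM he_calib").fetchone()[0]
    rows = online.execute(
        "SELECT iov_since, channel_id, coeff FROM he_calib WHERE iov_since > ?",
        (last,)).fetchall()
    offline.executemany("INSERT INTO he_calib VALUES (?, ?, ?)", rows)
    offline.commit()
    return len(rows)

print(o2o_sync())   # first call copies all 3 rows; a second call copies 0
```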

HE Calibration DB Status
- System is online
- Full calibration cycle support
- Integrated into the CMS computing environment
- ~30,000 records, ~500 MB
- ~600 GB of raw data transferred to JINR

HE Calibration DB Links
- Database browser:
- Database scheme:

The major tasks of LCG MCDB
- Share sophisticated MC samples between different groups:
  - samples that require expert knowledge to simulate
  - samples whose simulation requires a lot of CPU resources
- Provide infrastructure to keep verified MC samples and sample documentation
- Facilitate communication between MC generator experts and users in the LHC collaborations
- Provide a programming interface to collaboration software (work in progress)

MCDB usage in CMS
- CMS MCDB has been used by the PRS Higgs group for many years; the system originated in this CMS group.
- It was used for all MC samples except those generated with shower generators (Pythia, Herwig).
- LCG MCDB is now planned to be used as the storage for all CMS groups that need verified MC samples for hard processes (see next slide).

Requests from experiments to LCG MCDB
- Grid access to MC event files [done]
- File replication [in progress]
- Upload from remote locations [WEB, Castor, AFS - done; GridFTP - in progress]
- Multiple file upload (from Castor and AFS) [done]
- Web interface improvements [continuous]
- API [in progress]
- HEPML libraries and tools [in progress] (see the sketch after this list)
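Since the HEPML libraries and tools are listed as work in progress, the following sketch only illustrates how a HEPML-style XML sample description might be read with the Python standard library. The element and attribute names, and the file URLs, are invented for the example and do not reflect the real HEPML schema.

```python
import xml.etree.ElementTree as ET

# The element names, attributes and file URLs below are invented for
# illustration; they are not the real HEPML schema.
sample_xml = """
<sample id="1234">
  <process>pp -&gt; ttbar + jets</process>
  <generator name="ALPGEN" version="2.05"/>
  <file url="castor:/castor/cern.ch/grid/lcg/mcdb/ttbar_0jets.lhe" size="1200000000"/>
  <file url="castor:/castor/cern.ch/grid/lcg/mcdb/ttbar_1jet.lhe"  size="950000000"/>
</sample>
"""

root = ET.fromstring(sample_xml)
print("process  :", root.findtext("process"))
print("generator:", root.find("generator").get("name"))
total_bytes = sum(int(f.get("size")) for f in root.findall("file"))
print("total size: %.2f GB" % (total_bytes / 1e9))
```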

LCG MCDB in CMS
From the report of Albert De Roeck (CERN) at the last MC4LHC workshop, "CMS: MC Experience and Needs".
[diagram: matrix-element codes (ALPGEN, MadGraph, CompHEP, etc.) produce hard-process event files which are stored, together with provenance and tuned parameters, in the event database (MCDB, backed by CASTOR); CMSSW then reads them for parton showering and matching with custom Pythia/Herwig and G4 simulation]
The plan: CMS MCDB for the PTDR; LCG MCDB in the future (see LCG generator meeting).

Participation in ARDA
1. Participation in SC4: Russian sites follow the SC4-CSA06 chain - load test transfers, LCG installation of CMSSW, DBS/DLS, SC4 production, CSA06.
2. Monitoring of T1-T2 transfers through the Dashboard (observation of the load test transfers). We collaborate with the Dashboard team to prepare the page, provide week-by-week observation and report at the weekly Integration meetings.
3. Monitoring of production jobs with MonALISA:
   a) monitoring of errors (a unified system for LCG and local farms);
   b) monitoring of the WN, SE, network and installed software;
   c) participation in the design of the monitoring tables in the Dashboard (see the sketch after this list).
4. Use of PROOF for CMS users: a) understanding of use cases; b) integration into the CMS computing model.
5. Portability of the monitoring system.
6. Participation in the CRAB-Task Manager integration.
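As a rough illustration of the per-site statistics behind the monitoring tables mentioned in item 3, the sketch below aggregates toy job records by site and state into a snapshot; the site names and job states are made-up examples, not actual Dashboard data or its schema.

```python
from collections import Counter, defaultdict

# Toy job records of the kind the collectors feed into the dashboard
# database; the site names and job states are made-up examples.
jobs = [
    {"site": "ITEP",     "state": "Done"},
    {"site": "ITEP",     "state": "Aborted"},
    {"site": "SINP MSU", "state": "Done"},
    {"site": "SINP MSU", "state": "Done"},
    {"site": "JINR",     "state": "Running"},
]

# Build a per-site snapshot: how many jobs are in each state right now.
snapshot = defaultdict(Counter)
for job in jobs:
    snapshot[job["site"]][job["state"]] += 1

for site, states in sorted(snapshot.items()):
    total = sum(states.values())
    detail = " ".join(f"{state}={n}" for state, n in sorted(states.items()))
    print(f"{site:10s} total={total}  {detail}")
```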

Dashboard: CMS Job Monitoring
[architecture diagram: submission tools, RB, CE and WN report job information via R-GMA and MonALISA; collectors (R-GMA client API, MonALISA, possibly others) constantly retrieve it into the ASAP database (PostgreSQL/SQLite), which feeds a PHP Dashboard web UI providing snapshots, statistics, job information and plots]
RDMS CMS staff participate in the ARDA activities on CMS monitoring development:
- monitoring of errors due to the grid environment, with automatic job restart in these cases (a minimal sketch of such restart logic follows below);
- monitoring of the number of events processed;
- further expansion to private simulation.
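The "automatic job restart" idea above can be summarised as: resubmit only jobs whose failure is attributable to the grid environment, up to a retry limit. The sketch below is a minimal illustration with stubbed-out status and resubmission functions; the error codes and function names are assumptions, not the actual ASAP/Dashboard interfaces.

```python
# Stubbed-out status and resubmission functions: in the real system the
# status would come from the monitoring collectors and resubmission would
# go through the actual submission tools; names and error codes here are
# assumptions.
GRID_ERRORS = {"ABORTED", "PROXY_EXPIRED", "CANNOT_READ_INPUT"}

def get_job_status(job_id):
    return "ABORTED"                     # stub for the example

def resubmit(job_id):
    print("resubmitting", job_id)        # stub for the example

def restart_failed_jobs(job_ids, max_retries=3):
    """Resubmit only jobs that failed for grid-environment reasons."""
    retries = {job: 0 for job in job_ids}
    for job in job_ids:
        status = get_job_status(job)
        # Application-level failures are left alone; only grid-side
        # failures are retried, up to max_retries per job.
        if status in GRID_ERRORS and retries[job] < max_retries:
            resubmit(job)
            retries[job] += 1

restart_failed_jobs(["job-001", "job-002"])
```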

Examples of ASAP Job Monitoring ( ): 300 jobs submitted to ITEP; 450 jobs submitted to SINP MSU.

Participation in SC4
- participation in CMS event simulation
- participation in PhEDEx data transfers (load test transfers and heartbeat transfers) - continuously from May until September; volume of data transferred per month ~10 TB

Usage of CPU resources at the Russian Tier2 during January-August 2006
[chart: per-site CPU usage for IHEP, INR, ITEP, JINR, LPI, PNPI, RRC KI, SINP MSU]
- this is 3.2% of the CPU resources used for all CMS jobs in the LCG-2 infrastructure
- 33 members of the CMS VO at RDIG ( % of the total number of CMS VO members)
- CMS software is installed at the RuTier2 LCG-2 sites
- 20.7% of the CPU resources at RuTier2 have been used for CMS jobs

Statistics on PhEDEx SC4 Data Transfers ( )
[plot: per-site transfer volumes for ITEP, JINR, SINP]
3.8 TB have been transferred

PhEDEx SC4 Data Transfers ( )

RDMS CMS Participation in CMS SC4 Load Tests ( - )
[plot: transfer rate (MB/s) for IHEP, ITEP, JINR, SINP]

CMS Magnet Test and Cosmic Challenge (MTCC) 2006
The magnet test now going on at CERN (April-July) is a milestone in the CMS construction: it completes the commissioning of the magnet system (coil & yoke) before its eventual lowering into the cavern. MTCC06 has four components: installation validation, magnet commissioning, cosmic challenge and field mapping.
Concentrate on the core software functionality required for raw data storage, quality monitoring and further processing (a must for all three MTCC): RAW data, databases, event data files.
The cosmic challenge includes DAQ and network tests on cosmic data (850 GB/day):
- build an intelligible event from several subdetectors
- validate the hardware chain
- validate the integration of subdetectors into the DAQ framework, Run Control and DCS
- use of databases and network scaled down from the final architectures
- testing of the Offline Condition Service and Online-to-Offline (O2O) transfer
The possibility to transfer data from the DAQ to RuTier-2 will be tested (a back-of-the-envelope rate estimate is sketched below).
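As a rough feasibility check for shipping the cosmic-challenge data to RuTier-2, the snippet below converts the quoted 850 GB/day into an average sustained rate, assuming decimal gigabytes and continuous transfer over the day; the real requirement would also depend on burstiness and protocol overheads.

```python
# Assumes 'GB' means 10**9 bytes and that the data are shipped continuously
# over the whole day; real transfers would need headroom for bursts.
volume_per_day_gb = 850
seconds_per_day = 24 * 3600
rate_mb_per_s = volume_per_day_gb * 1000 / seconds_per_day
print(f"average rate needed: {rate_mb_per_s:.1f} MB/s")   # ~9.8 MB/s
```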

CERN - JINR Data management for MTCC: common schema of realization

Plans for CSA06 (Computing, Software and Analysis Challenge)
RDMS sites are planning to participate in CSA06, starting on 1 October and running until 15 November. The RDMS resources to be provided for CSA06 are now being determined.

SUMMARY
- The RDMS CMS computing model has been defined in principle as a Tier2 cluster of institutional computing centers with partial T1 functionality.
- The data processing and analysis scenario has been developed in the context of estimating resources on the basis of the selected physics channels in which RDMS CMS plans to participate.
- The solution in which CERN serves as the T1 center for RDMS would be a strong basis for meeting the RDMS CMS computing model requirements.
- The proper RDMS CMS Grid infrastructure has been constructed at RuTier2.
- The main task now is to test the RDMS CMS Grid infrastructure at full scale to be ready for the LHC start!