
1 V.Ilyin, V.Gavrilov, O.Kodolova, V.Korenkov, E.Tikhonenko
Meeting of the Russia-CERN JWG on LHC computing, CERN, September 27, 2006
RDMS CMS Computing

2 Outline
- RDMS CMS Computing Model
- RDMS CMS DBs
- LCG MCDB (usage in CMS)
- participation in ARDA: CMS Dashboard (jobs monitoring)
- participation in SC4
- participation in MTCC 2006
- plans for CSA06

3 Whether CERN is to be the T1 for RDMS is now under active discussion

4 RDMS CMS Advanced Tier2 Cluster as a part of RuTier2
Conception:
- a Tier2 cluster of institutional computing centers with partial T1 functionality
- summary resources at 1.5 times the level of the canonical Tier2 center for the CMS experiment, plus a Mass Storage System holding ~5-10% of the RAW data, ESD/DST for AOD selection design and checking, and AOD for analysis (depending on the concrete task)
Basic functions: analysis; simulation; user data support; calibration; reconstruction algorithm development; ...
Host Tier1 in the LCG infrastructure: CERN (basically for storage of MC data); access to real data through the T1-T2 "mesh" model for Europe.
Participating institutes:
- Moscow: ITEP, SINP MSU, LPI RAS
- Moscow region: JINR, IHEP, INR RAS
- St. Petersburg: PNPI RAS
- Minsk (Belarus): NCPHEP
- Erevan (Armenia): ErPhI
- Kharkov (Ukraine): KIPT
- Tbilisi (Georgia): HEPI

5 Tier1 for RDMS CMS T2 Cluster
Discussions are now in progress between the CMS, RDMS and LCG management. The basic point under discussion is that CERN will serve as the T1 center for RDMS. We believe this solution provides a strong basis for meeting the RDMS CMS computing model requirements.

6 Tier1 for RDMS CMS T2 Cluster (cont.)
A short summary of the technical conclusions of the meeting of September 19 (present: T. Virdee, S. Belforte, D. Newbold, S. Ilyin, I. Golutvin, L. Robertson, C. Eck):
- CERN agreed to act as a CMS T1 centre for the purposes of receiving Monte Carlo data from Russian and Ukrainian T2 centres (RU-T2). CERN will take custodial responsibility for data produced at these centres, and for performing any necessary reprocessing, as described in the CMS CTDR. CERN will also act as the distribution point for RU-T2 access to CMS general AOD and RECO data.
- Additional disk and tape resources to those previously requested by CMS will be provided at CERN for the storage and serving of RU-T2 simulated data.
- CERN will act as a special-purpose T1 centre for CMS, taking a share of the general distribution of AOD and RECO data as required. This implies the establishment of data transfer routes to, in principle, any CMS T2. CMS intends to use the second copy of AOD and RECO at CERN to improve the flexibility and robustness of the computing system, and not as a general-purpose T1.
- CMS will work with CERN to understand the constraints on CERN external dataflows.
- It was noted that the network provision from RU-T2 sites to CERN is sufficient under the current understanding of the required dataflows.

8 Data Bases Status
Columns: Data | Interface | Tested/Production
- Condition data (Calibration: Pedestals, Rad. Source, LED->HPD, Laser->HPD, Leveling Coeff. ADC/GeV (in progress); Operating parameters) | C++ API, WEB interface, PL/SQL | Yes
- Construction (ME 1/1, HE, HF) | WEB interface | Yes
- Equipment (ME 1/1, HE, HF) | WEB interface | Yes

9 HE Calibration Data Flow
- OMDS: Online Master Data Storage
- ORCON: Offline Reconstruction Conditions DB, ONline subset
- ORCOF: Offline Reconstruction Conditions DB, OFfline subset
LED, Laser and Source data are used for monitoring HE stability.

10 HE Calibration and Raw Data Flow
- OMDS: Online Master Data Storage
- ORCON: Offline Reconstruction Conditions DB, ONline subset
- ORCOF: Offline Reconstruction Conditions DB, OFfline subset
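To make the OMDS -> ORCON -> ORCOF chain on slides 9-10 concrete, here is a minimal toy sketch of that flow. It is not CMS code; all class, field and function names are illustrative assumptions.

```python
# Toy model of the conditions data flow: calibration records are written to the
# online master storage (OMDS), copied to the online reconstruction conditions
# DB (ORCON), and then synchronised to the offline copy (ORCOF).
from dataclasses import dataclass, field
from typing import List

@dataclass
class CalibRecord:
    channel_id: int
    source: str          # e.g. "LED", "Laser", "Source"
    pedestal: float
    gain_adc_per_gev: float

@dataclass
class ConditionsDB:
    name: str
    records: List[CalibRecord] = field(default_factory=list)

    def insert(self, recs):
        self.records.extend(recs)

def o2o_transfer(src: ConditionsDB, dst: ConditionsDB):
    """Copy every record from the source DB to the destination DB."""
    dst.insert(src.records)

omds = ConditionsDB("OMDS")    # online master data storage
orcon = ConditionsDB("ORCON")  # online subset used by reconstruction
orcof = ConditionsDB("ORCOF")  # offline subset distributed to Tier sites

# New HE calibration measurements land in OMDS first...
omds.insert([CalibRecord(1001, "LED", 2.1, 0.92),
             CalibRecord(1002, "Laser", 1.9, 0.95)])
# ...then propagate along the chain OMDS -> ORCON -> ORCOF.
o2o_transfer(omds, orcon)
o2o_transfer(orcon, orcof)
print(orcof.name, len(orcof.records), "records")
```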

11 HE Calibration DB Status
- The system is online
- Full calibration cycle support
- Integrated into the CMS computing environment
- ~30000 records, ~500 MB
- ~600 GB of raw data transferred to JINR

12 HE Calibration DB Links
Database browser: http://cms-he.web.cern.ch/
Database scheme: http://iis.jinr.ru/virthead/cern/uniequipment/untitled.htm

13 The major tasks of LCG MCDB
- Share sophisticated MC samples between different groups:
  - samples which require expert knowledge to simulate
  - samples whose simulation requires a lot of CPU resources
- Provide infrastructure to keep verified MC samples and sample documentation
- Facilitate communication between MC generator experts and users in the LHC collaborations
- Provide a program interface to the collaboration software (work now in progress)

14 MCDB usage in CMS
- CMS MCDB has been used by the PRS Higgs group for many years; the system originated in this CMS group.
- It was used for all MC samples except those generated with shower generators (Pythia, Herwig).
- LCG MCDB is now planned to be used as the storage for all CMS groups that need verified MC samples for hard processes (see next slide).

15 Requests from experiments to LCG MCDB
- Grid access to MC event files [done]
- File replication [in progress]
- Upload from remote locations [WEB, Castor, AFS: done; GridFTP: in progress]
- Multiple file upload (from Castor and AFS) [done]
- Web interface improvements [continuous]
- API [in progress]
- HEPML libraries and tools [in progress]
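The API item above is still listed as in progress; purely as an illustration of the kind of workflow these requests describe, here is a hypothetical client-side sketch. Every module, class and function name below is invented for this example and is not the real LCG MCDB interface.

```python
# Hypothetical sketch only: NOT the real LCG MCDB API. It illustrates the idea
# of looking up a verified sample by its metadata and then fetching the
# associated event files through a grid-aware transfer step.
from dataclasses import dataclass
from typing import List

@dataclass
class SampleRecord:                 # invented structure for this example
    sample_id: int
    process: str                    # e.g. "pp -> ttbar + jets"
    generator: str                  # e.g. "ALPGEN", "CompHEP"
    event_files: List[str]          # logical file names / grid URLs

# A fake catalogue standing in for the MCDB server.
CATALOGUE = [
    SampleRecord(42, "pp -> ttbar + jets", "ALPGEN",
                 ["lfn:/mcdb/sample42/events_000.lhe"]),
]

def find_samples(process: str) -> List[SampleRecord]:
    """Return all catalogued samples whose process description matches."""
    return [s for s in CATALOGUE if process in s.process]

def fetch(lfn: str) -> None:
    """Placeholder for a grid copy (e.g. via Castor or GridFTP in the real system)."""
    print("would fetch", lfn)

for sample in find_samples("ttbar"):
    for lfn in sample.event_files:
        fetch(lfn)
```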

16 LCG MCDB in CMS
From the report of Albert De Roeck (CERN) at the last MC4LHC workshop, "CMS: MC Experience and Needs".
[Diagram: matrix-element (ME) codes such as ALPGEN, MadGraph, CompHEP, MC@NLO, etc. generate the hard process; event files with provenance and tuned parameters are stored in an event database (CASTOR / MCDB); CMSSW then runs the parton shower and matching with custom Pythia or Herwig, followed by G4 simulation.]
The plan: the event database is CMS MCDB for the PTDR, and LCG MCDB in the future (see the LCG generator meeting).

17 Participation in ARDA
1. Participation in SC4: Russian sites follow the SC4-CSA06 chain: load test transfers, LCG installation of CMSSW, DBS/DLS, SC4 production, CSA06.
2. Monitoring of T1-T2 transfers through the Dashboard (observation of the load test transfers). We collaborate with the Dashboard team to prepare the page and provide week-by-week observation, reporting at the weekly Integration meetings: http://pcardabg.cern.ch:8080/dashboard/phedex/
3. Monitoring of production jobs with MonALISA:
   a) monitoring of errors (a unified system for LCG and local farms)
   b) monitoring of the WN, SE, network and installed software
   c) participation in the design of the monitoring tables in the Dashboard
4. Use of PROOF for CMS users:
   a) understanding of use cases
   b) integration into the CMS computing model
5. Portability of the monitoring system
6. Participation in the CRAB-Task Manager integration

18 Dashboard: CMS Job Monitoring
[Diagram: submission tools send jobs via the RB to CEs and WNs; collectors for R-GMA (via the R-GMA client API / web service interface) and MonALISA, and possibly other sources, constantly retrieve job information into the ASAP database (Postgres / sqLite); the Dashboard web UI (PHP) serves snapshots, statistics, job info and plots.]
RDMS CMS staff participate in the ARDA activities on monitoring development for CMS: monitoring of errors due to the grid environment, with automatic job restart in these cases; monitoring of the number of events processed; further extension to private simulation.
http://www-asap.cern.ch/dashboard/
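A toy sketch of the collector pattern shown on this slide follows; it is not the actual ASAP/Dashboard code. A collector periodically pulls job status records from a monitoring source (R-GMA or MonALISA in the real system) and stores them in a database that the web UI can query for snapshots and statistics. The monitoring source is faked and all names are illustrative.

```python
import sqlite3

def fake_monitoring_source():
    """Stand-in for an R-GMA/MonALISA query returning (job_id, site, status)."""
    return [("job-001", "ITEP", "Running"),
            ("job-002", "JINR", "Done"),
            ("job-003", "SINP", "Aborted")]

def collect(conn):
    """One collection cycle: upsert the latest status of every job."""
    with conn:
        conn.executemany(
            "INSERT OR REPLACE INTO jobs(job_id, site, status) VALUES (?, ?, ?)",
            fake_monitoring_source())

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs(job_id TEXT PRIMARY KEY, site TEXT, status TEXT)")
collect(conn)

# What the web UI would show as a per-site snapshot.
for site, n_done, n_aborted in conn.execute(
        "SELECT site, SUM(status='Done'), SUM(status='Aborted') "
        "FROM jobs GROUP BY site"):
    print(site, "done:", n_done, "aborted:", n_aborted)
```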

19 Examples of ASAP Job Monitoring (12.06.06 - 14.06.06): 300 jobs submitted to ITEP, 450 jobs submitted to SINP MSU

20 Participation in SC4
- participation in CMS event simulation
- participation in Phedex data transfers (load test transfers and heartbeat transfers), running continuously from May till September
- volume of data transferred per month: about 10 TB (a rough rate estimate is sketched below)
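A back-of-the-envelope check of the average rate implied by the monthly volume quoted above, assuming "about 10 TB per month" means transfers sustained over a 30-day month:

```python
# Average rate implied by the SC4 load test volume of ~10 TB per month.
monthly_volume_tb = 10
seconds_per_month = 30 * 24 * 3600
avg_rate_mb_s = monthly_volume_tb * 1e6 / seconds_per_month  # 1 TB = 1e6 MB
print(f"{avg_rate_mb_s:.1f} MB/s sustained")  # roughly 4 MB/s on average
```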

21 Usage of CPU resources at the Russian Tier2 during January 2006 - August 2006
Sites: IHEP, INR, ITEP, JINR, LPI, PNPI, RRC KI, SINP MSU
- This is 3.2% of the CPU resources used for all CMS jobs in the LCG-2 infrastructure.
- 33 members of the CMS VO are at RDIG, i.e. 5.5% of the total number of CMS VO members (a consistency check is sketched below).
- CMS software is installed at the RuTier2 LCG-2 sites.
- 20.7% of the CPU resources at RuTier2 have been used for CMS jobs.
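A small consistency check implied by the numbers above: if 33 RDIG members make up 5.5% of the CMS VO, the total VO membership at the time was roughly 600.

```python
# Total CMS VO size implied by the RDIG share quoted on this slide.
rdig_members = 33
rdig_fraction = 0.055
total_cms_vo = rdig_members / rdig_fraction
print(round(total_cms_vo))  # ~600 CMS VO members in total
```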

22 Statistics on Phedex SC4 Data Transfers (23.05.2006 - 30.06.2006)
Sites: ITEP, JINR, SINP. In total, 3.8 TB have been transferred.

23 Phedex SC4 Data Transfers (23.05.2006 - 30.06.2006)

25 RDMS CMS Participation in CMS SC4 Load tests (14.05.2006 - 13.06.2006)
[Plot: transfer rate (MB/s) for IHEP, ITEP, JINR, SINP]

26 CMS Magnet Test and Cosmic Challenge 2006
Concentrate on the core SW functionality required for raw data storage, quality monitoring and further processing (a must for all three sub-systems at the MTCC): RAW data, databases, event data files.
The magnet test going on now at CERN (April-July) is a milestone in the CMS construction. It completes the commissioning of the magnet system (coil and yoke) before its eventual lowering into the cavern. MTCC06 has four components: installation validation, magnet commissioning, cosmic challenge, and field mapping.
The cosmic challenge includes DAQ and network tests on cosmic data (850 GB/day; the implied average rate is estimated in the sketch below):
- build an intelligible event from several subdetectors
- validate the hardware chain
- validate the integration of subdetectors into the DAQ framework, Run Control and DCS
- use databases and network scaled down from the final architectures
- test the Offline Condition Service and Online-to-Offline (O2O) transfer
The possibility to transfer data from the DAQ to RuTier-2 will be tested.
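A rough estimate of the sustained rate implied by the 850 GB/day cosmic data volume quoted above, assuming the data are taken continuously over 24 hours:

```python
# Average rate implied by 850 GB of cosmic data per day.
daily_volume_gb = 850
avg_rate_mb_s = daily_volume_gb * 1000 / (24 * 3600)  # 1 GB = 1000 MB
print(f"{avg_rate_mb_s:.1f} MB/s")  # roughly 10 MB/s sustained
```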

27 CERN - JINR data management for MTCC: common implementation schema

28 Plans for CSA06 (Computing and Software Analysis Challenge)
- RDMS sites are planning to participate in CSA06, starting on 1 October and running until 15 November.
- The RDMS resources to be provided for CSA06 are now being determined.

29 SUMMARY
- The RDMS CMS Computing Model has, in principle, been defined as a Tier2 cluster of institutional computing centers with partial T1 functionality.
- A data processing and analysis scenario has been developed in the context of estimating resources on the basis of the selected physics channels in which RDMS CMS plans to participate.
- The solution in which CERN serves as the T1 center for RDMS would provide a strong basis for meeting the RDMS CMS computing model requirements.
- The proper RDMS CMS Grid infrastructure has been constructed at RuTier2.
- The main task now is to test the RDMS CMS Grid infrastructure at full scale, to be ready for the LHC start!

