1 Russian Regional Center for LHC Data Analysis
by V.A. Ilyin
Based on the Concept approved on 24 August 2001 by the Scientific Coordination Board of the Russian Regional Center for LHC Computing.
6th Annual RDMS CMS Collaboration Meeting, December 20, 2001

2 Russian Regional Center for LHC (RRC-LHC)
Russia will participate in LHC computing through the creation of the Russian Regional Center for LHC (RRC-LHC).
RRC-LHC should be a functional part of the worldwide distributed LHC Computing GRID infrastructure, and it is being created within a common project for all four LHC experiments: ALICE, ATLAS, CMS, LHCb.
Participating institutes:
Moscow – INR RAS, ITEP, KI, LPI, MEPhI, MSU …
Dubna – JINR
Protvino – IHEP
St. Petersburg – PNPI RAS, StP U., …
Novosibirsk – INP SD RAS

3 Goals
- to provide full-scale participation of Russian scientists in the analysis; only in this case will Russian investments in LHC lead to the final goal of obtaining new fundamental knowledge about the structure of matter
- to open wide possibilities for participation of students and young scientists in research at LHC
- to support and strengthen the high level of the scientific schools in Russia
- participation in the creation of the international LHC Computing GRID will give Russia access to new advanced computing techniques

4 Functions
- physics analysis of AOD (Analysis Object Data);
- access to (external) ESD/RAW and SIM databases for preparing the necessary (local) AOD sets;
- replication of AOD sets from the Tier1 grid;
- event simulation at the level of 5-10% of the whole SIM databases for each experiment;
- replication and storage of 5-10% of the ESD required for testing the procedures of AOD creation;
- storage of data produced by users.

5 Architecture
RRC-LHC will be a cluster of institutional centers with Tier2 functionality: a distributed system, a DataGrid cloud of Tier2(/Tier3) centers.
- a coherent interaction of the computing centers of the participating institutes: each institute knows its own resources but can get significantly more if the others agree;
- for each Collaboration the summary resources (of about 4-5 basic institutional centers) will reach 50-70% of a canonical Tier1 center: each Collaboration knows its summary resources but can get significantly more if the other Collaborations agree;
- RRC-LHC will be connected to the Tier1 at CERN and/or to other Tier1 centers in the context of a global grid for data storage and access: each institute and each Collaboration can get significantly more if other regional centers agree.

6 LHC Computing Model 2001 - evolving
[Diagram: "The opportunity of Grid technology" – the evolving LHC computing model (MONARC project): CERN Tier1 at the center; regional Tier1 centers/groups (RDMS, Germany, UK, France, Italy, USA); Tier2 centers at universities and laboratories; Tier3 physics-department centers; desktops.]

7 Russian Regional Center DataGrid cloud
[Diagram: "The opportunity of Grid technology" – the Russian Regional Center DataGrid cloud. The RDMS CMS layer of RRC-LHC (PNPI, IHEP, INR RAS, ITEP, JINR, MSU) is connected to CERN and the CMS Tier1 cloud by a Gbit/s link (2002: Mbit/s), plus layers for the other Collaborations. Regional connectivity: cloud backbone – Gbit/s; links to the labs – 100s of Mbit/s.]

8 Resources required by 2007
Summary resources per experiment (ALICE, ATLAS, RDMS CMS, LHCb):
CPU (KSI95): 100  120  70
Disk (TB): 200  250  150
Tape (TB): 300  400
We suppose that each active user will create local AOD sets ~10 times per year and keep these sets on disk during the year, and that the general AOD sets will be replicated from the Tier1 cloud ~10 times per year, with previous sets stored on tape.
The disk space usage will be partitioned as: 15% to store the general AOD+TAG sets; 15% to store local sets of AOD+TAG; 15% to store user data; 15% to store current sets of simulated data (SIM-AOD, partially SIM-ESD); 30-35% to store the 10% portion of ESD; 5-10% cache.
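As a rough illustration of this disk partitioning, a minimal Python sketch applying the quoted percentages to a hypothetical 200 TB disk pool (one of the values from the table above; the 30-35% and 5-10% ranges are taken at their lower bounds):

    # Illustrative only: the RRC-LHC disk partitioning scheme applied to an
    # assumed 200 TB disk pool (one of the values quoted in the table above).
    DISK_TB = 200

    partition = {
        "general AOD+TAG sets": 0.15,
        "local AOD+TAG sets": 0.15,
        "user data": 0.15,
        "current simulated data (SIM-AOD, partially SIM-ESD)": 0.15,
        "10% portion of ESD": 0.30,  # slide quotes 30-35%; lower bound used
        "cache": 0.05,               # slide quotes 5-10%; lower bound used
    }

    for name, frac in partition.items():
        print(f"{name:55s} {frac * DISK_TB:6.1f} TB")
    print(f"{'total allocated':55s} {sum(partition.values()) * DISK_TB:6.1f} TB")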

9 Construction timeline
Timeline for the RRC-LHC resources during the construction phase:
2005: 15%   2006: 30%   2007: 55%
After 2007, investments will be needed to support the computing and storage facilities and to increase the CPU power and storage space: in 2008, about 30% of the 2007 expenses; in every subsequent year, renewal of 1/3 of the CPUs, a 50% increase of the disk space, and a 100% increase of the tape storage space.
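A small Python sketch of these ramp-up and growth rules; the 2007 target values below are placeholders for illustration, only the percentages come from the slide, and the yearly percentages are assumed to be the share of the 2007 target installed in that year:

    # Illustrative only: construction-phase ramp (15/30/55%) and post-2007 growth
    # (disk +50% and tape +100% per year; 1/3 of the CPUs renewed each year).
    cpu_2007_ksi95, disk_2007_tb, tape_2007_tb = 100.0, 200.0, 300.0  # placeholders

    for year, share in [(2005, 0.15), (2006, 0.30), (2007, 0.55)]:
        print(f"{year}: install {share * cpu_2007_ksi95:.0f} KSI95, "
              f"{share * disk_2007_tb:.0f} TB disk, {share * tape_2007_tb:.0f} TB tape")

    disk_tb, tape_tb = disk_2007_tb, tape_2007_tb
    for year in range(2008, 2011):
        disk_tb *= 1.5   # disk space +50% per year
        tape_tb *= 2.0   # tape space +100% per year
        print(f"{year}: disk {disk_tb:.0f} TB, tape {tape_tb:.0f} TB "
              f"(plus renewal of 1/3 of the CPUs)")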

10 Prototyping (Phase1) & DataChallenges
2004: minimal acceptable level of complexity – 10% of the 2007 summary resources (of the 4 Collaborations):
CPU (KSI95): 20   Disk (TB): 50   Tape (TB):
The time profile for the prototype: 2001: 5%, 2002: 10%, 2003: 30%, 2004: 55%.
This should allow the participation of Russian institutes in the CMS Data Challenge program at the level of 5-10% (talk by E. Tikhonenko).
The budget has been received (Rus. Min. Ind. Sci. & Tech. + Institutes).
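A short Python sketch, assuming the time-profile percentages are yearly shares of the 2004 prototype target (20 KSI95 CPU and 50 TB disk, from the slide), so that the cumulative capacity in place can be read off per year:

    # Illustrative only: prototype time profile applied to the 2004 target.
    cpu_target_ksi95, disk_target_tb = 20, 50  # from the slide

    profile = {2001: 0.05, 2002: 0.10, 2003: 0.30, 2004: 0.55}  # assumed yearly shares

    deployed = 0.0
    for year, share in profile.items():
        deployed += share
        print(f"{year}: ~{deployed * cpu_target_ksi95:4.1f} KSI95, "
              f"~{deployed * disk_target_tb:4.1f} TB disk in place")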

11 EU-DataGrid
RRC-LHC participates in the EU-DataGrid activity:
- WP6 (Testbed and Demonstration) – collaboration with CERN IT; WP8 (HEP Applications) – through the Collaborations.
- In 2001 the DataGrid information service (GRIS-GIIS) was created in the Moscow region, as well as the DataGrid Certificate Authority and user Registration services.
- WP6 Testbed0 (Spring-Summer 2001) – 4 sites, 8 FTE. Testbed1 – 4 sites (MSU, ITEP, JINR, IHEP), 12 FTE, significant resources (160 CPUs, 7.5 TB disk), active sites in CMS Testbed1 (talk by L. Shamardin).
- CERN-INTAS grant ( ): CERN IT, CNRS, MSU, ITEP, JINR, IHEP.
- Collaboration with: SCC MSU; Keldysh Inst. for Appl. Math. RAS (metacomputing, GRID); Telecommunication Center “Sci & Society” (GigaEthernet).

12 Financial aspects
Phase1 (2001-2004): 2 MCHF equipment, 3.5 MCHF network, plus initial investments in regional networks.
Construction phase ( ): 10 MCHF equipment, 3 MCHF network.
In total ( ) MCHF.
2008 – 200x: MCHF/year.

13 Networking
In 2002-2004, data exchange with CERN will require:
2002: 30 Mbit/sec   2003: 70 Mbit/sec   2004: 150 Mbit/sec
For 2005-2007 (construction phase and 1st physics run):
2005: 0.3 Gbit/sec   2006: 0.6 Gbit/sec   2007: 1.5 Gbit/sec
Regional links between the basic RRC-LHC institutes should be comparable to (or even better than) the link to CERN.
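The yearly transfer capacity implied by these link speeds can be estimated; a minimal Python sketch, assuming an average link utilization of 50% (an assumed figure, not from the slide):

    # Illustrative only: yearly transfer capacity of the CERN link at each bandwidth.
    SECONDS_PER_YEAR = 365 * 24 * 3600
    UTILIZATION = 0.5  # assumed average utilization, not a figure from the slide

    link_mbit_per_s = {2002: 30, 2003: 70, 2004: 150, 2005: 300, 2006: 600, 2007: 1500}

    for year, mbit in link_mbit_per_s.items():
        tbyte_per_year = mbit * 1e6 / 8 * SECONDS_PER_YEAR * UTILIZATION / 1e12
        print(f"{year}: {mbit:5d} Mbit/s -> ~{tbyte_per_year:5.0f} TByte/year")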

14 Dedicated channel to CERN in 2002
LHC GRID dedicated channel (start in Jan.-Feb. 2002): 30 Mbit/sec (optimal), 20 Mbit/sec (minimal)
- based on the FASTnet project (Rus. Min. Ind. Sci. & Tech. + NSF, 155 Mbit/sec Moscow-Frankfurt-USA)
- technical solution – a subchannel upgrade of the FASTnet link Moscow-Frankfurt, then through DFN/DESY to CERN
- by-product – improved connectivity with DESY, INFN, CNRS and other centers in Europe (small traffic in comparison with LHC needs)
- the budget is assumed to be multi-source: Russian HEP institutes, DESY, CERN; then, in Dec. 2001 - Jan. 2002, an application to INTAS (Infrastructure Action) to double the amount collected.

