Russian Regional Center for LHC Data Analysis


Russian Regional Center for LHC Data Analysis
V.A. Ilyin
Based on the Conception approved on 24 August 2001 by the Scientific Coordination Board of the Russian Regional Center for LHC Computing
6th Annual RDMS CMS Collaboration Meeting, December 20, 2001

Russian Regional Center for LHC (RRC-LHC)
Russia will participate in LHC computing by creating the Russian Regional Center for LHC (RRC-LHC). RRC-LHC should be a functional part of the worldwide distributed LHC Computing GRID infrastructure, and it is created in the framework of a common project for all four LHC experiments: ALICE, ATLAS, CMS, LHCb.
All institutes will participate:
- Moscow – INR RAS, ITEP, KI, LPI, MEPhI, MSU …
- Dubna – JINR
- Protvino – IHEP
- St. Petersburg – PNPI RAS, StP U., …
- Novosibirsk – INP SD RAS …

Goals
- to provide full-scale participation of Russian scientists in the analysis: only in this case will Russian investments in LHC lead to the final goal of obtaining new fundamental knowledge on the structure of matter;
- to open wide possibilities for the participation of students and young scientists in research at LHC;
- to support and improve the high level of scientific schools in Russia;
- participation in the creation of the international LHC Computing GRID will give Russia access to new advanced computing techniques.

Functions
- physics analysis of AOD (Analysis Object Data);
- access to (external) ESD/RAW and SIM databases for preparing the necessary (local) AOD sets;
- replication of AOD sets from the Tier1 grid;
- event simulation at the level of 5-10% of the whole SIM databases for each experiment;
- replication and storage of 5-10% of the ESD required for testing the procedures of AOD creation;
- storage of data produced by users.

Architecture
RRC-LHC will be a cluster of institutional centers with Tier2 functionality: a distributed system, a DataGrid cloud of Tier2(/Tier3) centers.
- a coherent interaction of the computing centers of the participating institutes: each institute knows its resources but can get significantly more if others agree;
- for each Collaboration the summary resources (of about 4-5 basic institutional centers) will reach the level of 50-70% of a canonical Tier1 center: each Collaboration knows its summary resources but can get significantly more if other Collaborations agree;
- RRC-LHC will be connected to the Tier1 at CERN and/or to other Tier1(s) in the context of a global grid for data storage and access: each institute and each Collaboration can get significantly more if other regional centers agree.
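The two-level pooling described on this slide (institutes contribute to a common cloud, each Collaboration sees the summed resources) can be illustrated with a small sketch. The institute names and CPU shares below are hypothetical placeholders chosen only to show the aggregation; they are not taken from the slides.

```python
# Illustrative sketch of the two-level resource pooling (hypothetical numbers).
from collections import defaultdict

# institute -> {collaboration: contributed CPU share in KSI95}
INSTITUTE_SHARES = {
    "JINR":     {"ALICE": 10, "ATLAS": 15, "CMS": 15, "LHCb": 5},
    "ITEP":     {"ALICE": 5,  "ATLAS": 5,  "CMS": 10, "LHCb": 10},
    "SINP MSU": {"ATLAS": 10, "CMS": 15},
    "IHEP":     {"ATLAS": 10, "CMS": 10, "LHCb": 5},
}

def summary_resources() -> dict:
    """Sum each Collaboration's resources over all institutional centers."""
    totals = defaultdict(float)
    for shares in INSTITUTE_SHARES.values():
        for collaboration, cpu in shares.items():
            totals[collaboration] += cpu
    return dict(totals)

if __name__ == "__main__":
    for collaboration, cpu in summary_resources().items():
        print(f"{collaboration}: {cpu:.0f} KSI95 summary CPU")
```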

LHC Computing Model 2001 (evolving) – the opportunity of Grid technology
[Diagram based on the MONARC project: CERN Tier1 at the center, regional Tier1 centers (USA, UK, France, Italy, Germany, RDMS regional group), Tier2 centers at universities and labs, and Tier3 physics-department desktops.]

Russian Regional Center DataGrid cloud – the opportunity of Grid technology
[Diagram: the RRC-LHC cloud of institutes (PNPI, IHEP, INR RAS, ITEP, JINR, MSU, …), organized into an RDMS CMS layer and layers for the other Collaborations, connected to the CMS Tier1 cloud and CERN over a 1-1.5-2 Gbit/s link (in 2002: 20-30 Mbit/s). Regional connectivity: cloud backbone at Gbit/s, links to labs at 100's of Mbit/s.]

Resources required by 2007

              ALICE   ATLAS   RDMS CMS   LHCb
CPU (KSI95)     100     120         70      …
Disk (TB)       200     250        150      …
Tape (TB)       300     400          …      …

We suppose:
- each active user will create local AOD sets ~10 times per year and keep these sets on disk during the year;
- the general AOD sets will be replicated from the Tier1 cloud ~10 times per year, storing the previous sets on tape.
The disk space usage will be partitioned as:
- 15% to store general AOD+TAG sets;
- 15% to store local sets of AOD+TAG;
- 15% to store users' data;
- 15% to store current sets of sim. data (SIM-AOD, partially SIM-ESD);
- 30-35% to store the 10% portion of ESD;
- 5-10% cache.
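The disk partitioning above can be turned into absolute numbers with a minimal sketch. It assumes the midpoints of the quoted 30-35% and 5-10% ranges and uses the 150 TB disk figure from the table as an example input, so the concrete values are illustrative only.

```python
# Minimal sketch: apply the stated disk partitioning to one experiment's disk budget.

DISK_PARTITION = {                 # fractions quoted on the slide
    "general AOD+TAG sets": 0.15,
    "local AOD+TAG sets": 0.15,
    "user data": 0.15,
    "current sim. data (SIM-AOD, partial SIM-ESD)": 0.15,
    "10% portion of ESD": 0.325,   # midpoint of the quoted 30-35%
    "cache": 0.075,                # midpoint of the quoted 5-10%
}                                  # fractions sum to 1.0

def disk_breakdown(total_tb: float) -> dict:
    """Return the disk allocation in TB for each storage class."""
    return {name: total_tb * frac for name, frac in DISK_PARTITION.items()}

if __name__ == "__main__":
    for name, tb in disk_breakdown(150.0).items():   # e.g. the 150 TB disk figure
        print(f"{name:46s} {tb:6.1f} TB")
```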

Construction timeline
Timeline for the RRC-LHC resources at the construction phase:
- 2005: 15%
- 2006: 30%
- 2007: 55%
After 2007, investments will be necessary to support the computing and storage facilities and to increase the CPU power and storage space: in 2008 about 30% of the 2007 expenses; every subsequent year, renewal of 1/3 of the CPU, a 50% increase of the disk space, and a 100% increase of the tape storage space.
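A rough sketch of the post-2007 rule (renew 1/3 of the CPU, grow disk by 50% and tape by 100% each year). The starting values are hypothetical placeholders, and CPU capacity is simply kept constant here, since the slide specifies renewal rather than growth.

```python
# Rough projection of the post-2007 growth rule described above (hypothetical start values).

def project(cpu_ksi95: float, disk_tb: float, tape_tb: float, years: int = 3):
    """Yield (year offset, cpu, disk, tape) after applying the yearly rule."""
    for year in range(1, years + 1):
        # 1/3 of the CPU boxes are renewed each year; capacity is kept flat in this simple model.
        disk_tb *= 1.5    # +50% disk per year
        tape_tb *= 2.0    # +100% tape per year
        yield year, cpu_ksi95, disk_tb, tape_tb

if __name__ == "__main__":
    for year, cpu, disk, tape in project(cpu_ksi95=100, disk_tb=200, tape_tb=300):
        print(f"2007+{year}: CPU {cpu:.0f} KSI95, disk {disk:.0f} TB, tape {tape:.0f} TB")
```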

Prototyping (Phase1) & Data Challenges
2004: minimal acceptable level of complexity – 10% of the 2007 summary resources (of the 4 Collaborations):
- CPU (KSI95): 20
- Disk (TB): 50
- Tape (TB): …
The time profile for the prototype:
- 2001: 5%
- 2002: 10%
- 2003: 30%
- 2004: 55%
This should allow the participation of Russian institutes in the CMS Data Challenge program at the level of 5-10% (talk by E. Tikhonenko).
The budget has been received (Rus. Min. Ind. Sci. & Tech. + Institutes).

EU-DataGrid
RRC-LHC participates in the EU-DataGrid activity:
- WP6 (Testbed and Demonstration) – collaboration with CERN IT
- WP8 (HEP Applications) – through the Collaborations
In 2001 the DataGrid information service (GRIS-GIIS) was created in the Moscow region, as well as the DataGrid Certificate Authority and user Registration services.
WP6 Testbed0 (Spring-Summer 2001) – 4 sites, 8 FTE.
Testbed1 – 4 sites (MSU, ITEP, JINR, IHEP), 12 FTE, significant resources (160 CPUs, 7.5 TB disk), active sites in CMS Testbed1 (talk by L. Shamardin).
CERN-INTAS grant 00-0440 (2001-2002): CERN IT, CNRS, MSU, ITEP, JINR, IHEP.
Collaboration with:
- SCC MSU
- Keldysh Inst. for Appl. Math. RAS (metacomputing, GRID)
- Telecommunication Center "Sci & Society" (GigaEthernet)

Financial aspects
Phase1 (2001-2004): 2 MCHF equipment, 3.5 MCHF network + initial investments in regional networks
Construction phase (2005-2007): 10 MCHF equipment, 3 MCHF network
In total (2001-2007): 18.5 MCHF
2008 – 200x: 2 MCHF/year

Networking
In 2002-2004, data exchange with CERN: 50-120-250 TByte in 2002-2003-2004.
- 2002: 30 Mbit/s
- 2003: 70 Mbit/s
- 2004: 150 Mbit/s
For 2005-2007 (construction phase and 1st physics run):
- 2005: 0.3 Gbit/s
- 2006: 0.6 Gbit/s
- 2007: 1.5 Gbit/s
Regional links between the basic RRC-LHC institutes should be comparable to (or even better than) the link to CERN.
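A back-of-the-envelope check of how the planned yearly data volumes compare with the planned link bandwidth. It assumes the link could in principle be used around the clock and that TByte means decimal terabytes; neither assumption is stated on the slide.

```python
# Sanity check: fraction of a year each planned link must run at full rate
# to move the planned yearly data volume between RRC-LHC and CERN.

SECONDS_PER_YEAR = 365 * 24 * 3600

def link_utilization(volume_tbyte: float, bandwidth_mbit_s: float) -> float:
    """Fraction of a year the link must run at full rate to move the volume."""
    bits_to_move = volume_tbyte * 1e12 * 8                  # TByte -> bits
    seconds_needed = bits_to_move / (bandwidth_mbit_s * 1e6)
    return seconds_needed / SECONDS_PER_YEAR

if __name__ == "__main__":
    plan = {2002: (50, 30), 2003: (120, 70), 2004: (250, 150)}  # (TByte, Mbit/s)
    for year, (volume, bw) in plan.items():
        print(f"{year}: {volume} TByte over {bw} Mbit/s -> "
              f"{link_utilization(volume, bw):.0%} average link utilization")
```

With these assumptions each year's volume needs roughly 40-45% of the nominal link capacity, which is consistent with the planned bandwidth steps tracking the planned data volumes.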

Dedicated channel to CERN in 2002
LHC GRID dedicated channel (start in Jan.-Feb. 2002): 30 Mbit/s (optimal), 20 Mbit/s (minimal)
- based on the FASTnet project (Rus. Min. Ind. Sci. & Tech. + NSF, 155 Mbit/s Moscow-Frankfurt-USA)
- technical solution – subchannel upgrade of the FASTnet link Moscow-Frankfurt, then through DFN/DESY to CERN
- by-product – improved connectivity with DESY, INFN, CNRS and other centers in Europe (small traffic in comparison with LHC needs)
The budget is assumed to be multi-source:
- Russian HEP institutes
- DESY
- CERN
Then, in Dec. 2001 - Jan. 2002, to apply to INTAS (Infrastructure Action) to double the amount collected.