V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 Networking and Grid in Russia V.A. Ilyin DDW’06, Cracow 11 October 2006

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 International Connectivity
International connectivity for Russian science is based today on the 2.5 Gbit/s Moscow - St-Petersburg - Stockholm link, the GEANT2 PoP in Moscow (622 Mbps link), and GLORIAD. The Moscow Gigabit Network Access Point (G-NAP) for R&E networks, where all these links are present, was put into operation in mid-2005. Plans – to have Gbps connectivity in 2007 with GEANT2, AMS-IX and CERN via two independent links. Russian LHC users are the major customers for these links.

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 REGIONAL CONNECTIVITY
- Moscow – 1 Gbps (ITEP, RRC KI, SINP MSU, … LPI, MEPhI); plans to go to 10 Gbps in 2007
- IHEP (Protvino) – 100 Mbps fiber-optic (to have 1 Gbps in 2007)
- JINR (Dubna) – 1 Gbps f/o (from December 2005); plans to go to 10 Gbps in mid 2007
- BINP (Novosibirsk) – … Mbps (GLORIAD++)
- INR RAS (Troitsk) – 10 Mbps commodity Internet; a new f/o project is about to start
- PNPI (Gatchina) – 2 Mbps commodity Internet; a new 1 Gbps f/o link to St-Petersburg is under testing
- SPbSU (St-Petersburg) – 1 Gbps (potentially available now, but the last mile …)
Our pragmatic goal for 2007:
- all RuTier2 sites to have at least 100 Mbps f/o dedicated to network provision for RDIG users,
- 1 Gbps dedicated connectivity between the basic RDIG sites and 1 Gbps connectivity to EGEE via GEANT2/GLORIAD.
Dedicated means a reliable level for FTS flows: … MByte/s for Gbps links, a few MByte/s for 100 Mbps links!
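To make the "dedicated" figures concrete, a minimal back-of-the-envelope sketch (not from the talk; the 40% efficiency factor is an assumed value) relating nominal link speed to the sustained FTS rate one can plan for:

```python
# Minimal sketch, not from the talk: estimate the sustained FTS rate a
# dedicated link can be planned for. The 40% efficiency factor is an
# assumption covering protocol overhead and competing traffic.

def sustained_mbyte_per_s(link_mbps: float, efficiency: float = 0.4) -> float:
    """Convert a nominal link speed in Mbit/s into an expected
    sustained transfer rate in MByte/s."""
    return link_mbps * efficiency / 8.0  # 8 bits per byte

for name, mbps in [("100 Mbps site link", 100), ("1 Gbps backbone", 1000)]:
    print(f"{name}: ~{sustained_mbyte_per_s(mbps):.0f} MByte/s sustained")
# -> a few MByte/s for a 100 Mbps link, tens of MByte/s for a 1 Gbps link,
#    in line with the "few MByte/s for 100 Mbps links" figure above.
```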

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 GLORIAD: 10 Gbps Optical Ring Around the Globe by 2007
GLORIAD Circuits:
- 10 Gbps Korea (Busan)-Hong Kong-Daejon-Seattle
- 10 Gbps Seattle-Chicago-NYC (CANARIE contribution to GLORIAD)
- 2.5 Gbps Moscow-AMS
- 2.5 Gbps Beijing-Hong Kong
- 622 Mbps Moscow-AMS-NYC
- 155 Mbps Beijing-Khabarovsk-Moscow
- 1 GbE NYC-Chicago (CANARIE)
Partnership: China, Russia, Korea, Japan, US, Netherlands. US: NSF IRNC Program. (G. Cole)

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 Backbone links for the Russian NRENs – … Mbps or less. The main problem is the monopoly in the internal telecom market in Russia. Examples: connectivity Ekaterinburg-NY costs half as much as Ekaterinburg-Moscow of the same bandwidth; a 120 km 1 Gbps Dubna-Moscow link cost 5 times more in 2006 than the same link in Slovakia. We recognize another serious problem: officials do not understand that QoS costs a lot…

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 April 2006: the Working Group on Networking and Distributed Computing (WG-NetG) was created by the Federal Agency for Science and Innovations. Its main task is coordination of the development toward a national e-infrastructure for science and education in Russia, including a new generation of network and grid (NGI), with integration into world-wide e-infrastructures (in particular, participation in the EGI discussions). Also – recommendations on budget proposals in the networking and grid fields.

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 Since ~2000 the dominating factor for R&E networking and grid development in Russia has been the LHC … in the near future (most probably) this dominating role will pass to ITER.

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 LHC Computing Grid (LCG) – started in Sept 2003

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 Russia Tier2 facilities in the Worldwide LHC Computing Grid:
- a cluster of institutional computing centers;
- the major centers (currently six) are of Tier2 level (RRC KI, JINR, IHEP, ITEP, PNPI, SINP MSU), the others are (or will be) Tier3s; this mapping may change depending on the development efforts of the institutes;
- each of the T2 sites operates for all four experiments – ALICE, ATLAS, CMS and LHCb; this model assumes partitioning/sharing of the facilities (disk/tape and CPU) between the experiments, and the approach has to be developed further to reach a workable production level;
- the basic functions determine the main data flows: analysis of real data; MC generation; user data support; plus analysis of some portion of RAW/ESD data for tuning/developing reconstruction algorithms and the corresponding programming.

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 Thus:
- Real AOD – full sets for all four experiments (all passes) → RuTier2 (plus local AOD sets)
- Real RAW/ESD, ~10% → RuTier2
- Sim RAW/ESD/AOD generated in Russia → Tier1, while other Sim data needed for the channels analyzed in Russia → RuTier2
For RuTier2 this assumes an approximately equal partitioning of the storage: Real AOD ~ Real RAW/ESD ~ SimData.
Note that the main T1 services to be supported for the T2s are:
- access (or a distribution point for access) to the real AOD sets
- storage and serving of RuTier2 MC data
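As an illustration of the equal-partitioning rule above, a minimal sketch (the total disk capacity and the four-way experiment sharing are hypothetical numbers, not figures from the talk):

```python
# Illustrative sketch only: the slide states that RuTier2 storage is split
# roughly equally between real AOD, the ~10% real RAW/ESD sample and
# simulated data, shared by the four experiments. The 600 TB total is a
# hypothetical capacity, not a number from the talk.

TOTAL_DISK_TB = 600
EXPERIMENTS = ["ALICE", "ATLAS", "CMS", "LHCb"]
DATA_CLASSES = ["Real AOD", "Real RAW/ESD (~10%)", "Sim RAW/ESD/AOD"]

per_class = TOTAL_DISK_TB / len(DATA_CLASSES)   # equal split by data class
per_experiment = per_class / len(EXPERIMENTS)   # shared by the four experiments

for cls in DATA_CLASSES:
    print(f"{cls}: {per_class:.0f} TB total, ~{per_experiment:.0f} TB per experiment")
```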

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 T1s for RuTier2 (as a result of the regular T1-T2 planning procedure):
ALICE – FZK (Karlsruhe) has agreed to serve as a canonical T1 center for the Russian T2 sites.
ATLAS – SARA (Amsterdam) has agreed to serve as a canonical T1 center for the Russian T2 sites.
LHCb – CERN facilities will serve as a canonical T1 center for the Russian T2 sites.
CMS – in May 2006 FZK decided that the FZK CMS T1 will not serve Russia, because the German CMS T2s already fully load the FZK CMS T1. In this urgent situation a common solution was found by the RDMS, CMS and LCG management: CERN agreed to act as a CMS T1 centre for the purposes of receiving Monte Carlo data from the Russian and Ukrainian T2 centres. CERN will also act as the distribution point for access to the CMS general AOD and RECO data. Moreover, CERN will act as a special-purpose T1 centre for CMS, taking a share of the general distribution of AOD and RECO data as required. In particular, CMS intends to use the second copy of AOD and RECO at CERN to improve the flexibility and robustness of the computing system, and not as a general-purpose T1.

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 WLCG MoU – resource tables for some T1s (improved figures) and for the RuTier2 Cluster; 2006 corrections due to the budget reduction.

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 EGEE: > 180 sites, 40 countries, > … CPUs, ~ 5 PB data

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006

Tier-2s: ~100 identified – number still growing (J. Knobloch). The proliferation of Tier2s → LHC computing will be more dynamic & network-oriented.

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 ATLAS: statistics for production jobs (Dashboard), last few months. Production mode of grid usage: millions of jobs by a few users on hundreds of sites → a unique advantage. (Plots: jobs per user, jobs per site.)

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 CMS: statistics for analysis jobs – last few months (ASAP Dashboard). Still no chaotic usage of the grid … the T2 level is not working yet! (Plot: jobs per user.) Physicists are sleeping … please wake up! Data will come soon …

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 RuTier2 in the World-Wide Grid
- RuTier2 Computing Facilities are operated by the Russian Data-Intensive Grid (RDIG)
- We are creating the RDIG infrastructure as the Russian segment of the European grid infrastructure EGEE
- RuTier2 sites (institutes) are RDIG-EGEE Resource Centers
- Basic grid services (including VO management, RB/WLM etc.) are provided by SINP MSU, RRC KI and JINR
- Operational functions are provided by IHEP, ITEP, PNPI and JINR
- The Regional Certificate Authority and security are supported by RRC KI
- User support (Call Center, link to GGUS at FZK) – ITEP

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 June 2006, ~ 18:00

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 RDIG Resource Broker monitoring

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006

ALICE resources statistics. Resources contribution (normalized SI2K units): 50% from T1s, 50% from T2s – the role of the T2s remains very high!

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 ATLAS: ~40-50 TByte per year … before LHC start …

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 Statistics on CMS PhEDEx SC4 data transfers (ITEP, JINR, SINP): 3.8 TB have been transferred

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 CMS Magnet Test and Cosmic Challenge 2006
Concentrate on the core SW functionality required for raw data storage, quality monitoring and further processing (a must for all three MTCC): RAW data, databases, event data files.
The magnet test going on now at CERN (April-July) is a milestone in the CMS construction. It completes the commissioning of the magnet system (coil & yoke) before its eventual lowering into the cavern. The MTCC06 has 4 components: installation validation, magnet commissioning, cosmic challenge, field mapping.
The cosmic challenge includes DAQ and network tests on cosmic data (850 GB/day):
- build an intelligible event from several subdetectors
- validate the hardware chain
- validate integration of the subdetectors into the DAQ framework, Run Control, DCS
- use of databases and network scaled down from the final architectures
- testing the Offline Condition Service and Online-to-Offline (O2O) transfer
The possibility to transfer data from the DAQ to RuTier-2 will be tested.
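To gauge what the 850 GB/day cosmic-data stream implies for the DAQ-to-RuTier-2 transfer test, a small back-of-the-envelope calculation (an illustration, not a figure from the talk):

```python
# Illustration, not a figure from the talk: average network rate implied by
# shipping 850 GB of cosmic data per day from the DAQ to a remote Tier-2.

gb_per_day = 850
seconds_per_day = 24 * 3600

avg_mbit_s = gb_per_day * 8 * 1000 / seconds_per_day   # decimal GB -> Mbit/s
avg_mbyte_s = gb_per_day * 1000 / seconds_per_day      # decimal GB -> MByte/s

print(f"~{avg_mbit_s:.0f} Mbit/s average")    # roughly 80 Mbit/s
print(f"~{avg_mbyte_s:.1f} MByte/s average")  # roughly 10 MByte/s
```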

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 Plans for CSA06 (Computing and Software Analysis Challenge): RDMS sites are planning to participate in CSA06, starting on 1 October and running until 15 November. The RDMS resources to be provided for CSA06 are now being determined.

V.A. Ilyin, ICFA DDW’06, Cracow, 11 October 2006 History of LHCb DCs in Russia
- … K events, 1% contribution
- … M events, 3% contribution
- … M events, 5% contribution ☻
- … M events, 3% contribution
- … M events, 2% contribution
Since 2004 we have not increased our resources. Expecting considerable improvements in a few months...