Operations in Russian Data Intensive Grid. Andrey Zarochentsev, SPbSU, St. Petersburg; Gleb Stiforov, JINR, Dubna. ALICE T1/T2 workshop in Tsukuba, Japan, 3-7 March 2014.

Operations in Russian Data Intensive Grid. Andrey Zarochentsev, SPbSU, St. Petersburg; Gleb Stiforov, JINR, Dubna. ALICE T1/T2 workshop in Tsukuba, Japan, 3-7 March 2014.

Structure of Russian sites. Tier-2 sites are run by 8 RDIG institutes involved in ALICE activity: IHEP, JINR, RRC-KI, ITEP, Troitsk, PNPI, SPbSU + MEPHI. Services are provided for all Russian sites.

Network Structure 2013. All sites (except SPbSU) reached GEANT through a single provider, E-ARENA.

Results of Russian sites on 1 June 2013 (workshop in Lyon): resources usage 3,91%, done jobs 6,5%.

Changes in 2013!!! In the autumn of 2013, E-ARENA terminated the connection of the Russian institutes to GEANT. Sites were forced to build a new network infrastructure, and some production was lost as a result (usage from 01/06/2013 to January 2014: 2,86%).

New structure of Russian T2 sites 2014.
4 sites are unified under the Research Center Kurchatov Institute (KI): RRC-KI, IHEP, ITEP, PNPI.
3 independent Russian sites: SPbSU, Troitsk, MEPHI.
International site: JINR.
Services for all Russian sites: OK.

Network structure in 2014. Russian sites have different paths to CERN: (1) some via GEANT; (2) some without GEANT. All sites have local connections with each other.

First results in the new structure (over the last month): resources usage 3,94%, done jobs 4,94%.

Resources of Russian sites 2013 (CPU)

Site name   CPU (slots)   HEP-SPEC2006/slot   HEP-SPEC2006/VO   SE (TB)
SPbSU       128           10,13               648,32            43
PNPI        224           10,8                604,8             40
JINR        ...           ...                 ...               ...
RRC-KI      ...           ...                 ...               ...
Troitsk     114           9,59                273,...           ...
ITEP        180           10,26               461,7             140
IHEP        ...           ...                 ...               ...
SUM         ...           ...                 ...,03 (106,8%)   1238 (95,2%)

Percentages are relative to the amounts needed per REBUS.

New resources of Russian sites 2014

Requested by REBUS (T2 only): DISK = 1645 TB, CE = ... HEPSPEC2006.
Information as of 28 February 2014:
SPbSU: +20 TB (old server) + 66 TB (new), +96 cores (new) (11.53 HS2006)
MEPHI: +40 TB, +... cores (10.41 HS2006)
JINR: +0
Troitsk: +0
RRC-KIAE (ITEP only): +0
Sum: DISK = 1364 TB (82%), CE ~ 21775 HEPSPEC2006 (106%)
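The disk fulfilment figure above can be checked with a few lines of Python. This is a minimal sketch using only numbers that appear on this slide; the requested CE total is not reproduced because it was lost in transcription, so only the disk ratio is computed.

```python
# Delivered-vs-requested disk capacity for the Russian T2 sites, 2014.
# All figures are taken from the slide above.
requested_disk_tb = 1645   # REBUS request, T2 only

additions_tb = {           # disk added by 28 February 2014
    "SPbSU": 20 + 66,      # old server + new
    "MEPHI": 40,
    "JINR": 0,
    "Troitsk": 0,
    "RRC-KIAE (ITEP only)": 0,
}

delivered_disk_tb = 1364   # total from the slide
fulfilment = delivered_disk_tb / requested_disk_tb * 100
print(f"Disk: {delivered_disk_tb} TB of {requested_disk_tb} TB = {int(fulfilment)}%")
# prints: Disk: 1364 TB of 1645 TB = 82%
```

Truncating rather than rounding reproduces the 82% quoted on the slide.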

ALICE requirements to Russian sites on middleware: all OK.

Site name   cvmfs version   OS version   EMI (UMD) version   xrootd version
SPbSU       ...             ...          ...                 v3.2.6
PNPI        ...             ...          ...                 v3.3.4 (rpm)
JINR        ...             ...          ...                 v3.3.4
RRC-KI      ...             ...          ...                 ..._dbg
Troitsk     ...             ...          ...                 ..._dbg
ITEP        ...             ...          ...                 v3.2.6
IHEP        ...             ...          ...                 v3.2.4
MEPHI       ...             ...          ...                 ...

Questions from site admins:
1) Xrootd from RPM: is there an instruction? We have experience installing xrootd from RPM (at PNPI) from the EOS repository, but eos-apmon does not work with plain xrootd. We created a new package ourselves, but maybe somebody has a proper one?
2) Migration from xrootd to EOS (with data preserved)? Some sites are considering migration to EOS, but we have no instructions on how to keep the existing data.
3) Future ALICE computing (network) model: what is more important, a fast connection to CERN (or GEANT) for each individual site, or fast interconnections between the sites in one group (maybe in one cloud)?

Additional computing activity for ALICE:
1) PROOF cluster at JINR (JRAF): master PC with 8 cores and 32 GB RAM; slave PCs with 48 cores in total; 14.13 TB of disk space for data.
2) CloudStack setup for mCVM at SPbSU: patching CloudStack to run the EC2 API over SSL, running test tasks, etc.

Conclusions
1) The structure of the Russian sites and of the network has changed.
2) The ALICE middleware requirements to RDIG for 2013 are fulfilled.
3) The Russian pledges to ALICE for 2013 are satisfied: 107% for CPU and 95% for SE.
4) We expect to be OK in 2014 for SE...
5) It would be nice to discuss the site admins' questions here...

Thanks for their support to the administrators of the Russian Tier-2 sites: IHEP V. Kotlyar, JINR V. Mitsyn, RRC-KI E. Ryabinkin, ITEP Y. Lyublev, PNPI A. Kiryanov, MEPHI S. Smirnov, Troitsk L. Stepanova.

Thank you!