RDIG (Russian Data Intensive Grid) e-Infrastructure: Status and Plans

Vyacheslav Ilyin (SINP MSU), Vladimir Korenkov (JINR, Dubna), Aleksey Soldatov (RRC “Kurchatov Institute”)

NEC’2009, Varna, 10 September 2009
Some history

- 1999 – MONARC project – early discussions on how to organise distributed computing for LHC
- 2001–2003 – EU DataGrid project – middleware and testbed for an operational grid
- 2002–2005 – LHC Computing Grid (LCG) – deploying the results of DataGrid to provide a production facility for the LHC experiments
- 2004–2006 – EU EGEE project phase 1 – starts from the LCG grid, a shared production infrastructure expanding to other communities and sciences
- 2006–2008 – EU EGEE-II – building on phase 1, expanding applications and communities
- 2008–2010 – EU EGEE-III
EGEE (Enabling Grids for E-sciencE)

The aim of the project is to create a global pan-European Grid computing infrastructure:
- integrate regional Grid efforts;
- represent the leading Grid activities in Europe.

10 federations, 27 countries, 70 organizations
- 350 sites
- 55 countries
- 120,000 CPUs
- 26 PetaBytes
- >15,000 users
- >300 VOs
- >370,000 jobs/day

Application domains: archeology, astronomy, astrophysics, civil protection, computational chemistry, earth sciences, finance, fusion, geophysics, high energy physics, life sciences, multimedia, material sciences, …
The Russian consortium RDIG

The Russian consortium RDIG (Russian Data Intensive Grid) was set up in September 2003 as a national federation in the EGEE project. Its members:
- IHEP – Institute of High Energy Physics (Protvino)
- IMPB RAS – Institute of Mathematical Problems in Biology (Pushchino)
- ITEP – Institute of Theoretical and Experimental Physics
- JINR – Joint Institute for Nuclear Research (Dubna)
- KIAM RAS – Keldysh Institute of Applied Mathematics
- PNPI – Petersburg Nuclear Physics Institute (Gatchina)
- RRC KI – Russian Research Center “Kurchatov Institute”
- SINP MSU – Skobeltsyn Institute of Nuclear Physics, MSU
The RDIG infrastructure

The RDIG infrastructure now comprises 15 Resource Centres with more than 7000 kSI2K of CPU and more than 1850 TB of disk storage.

RDIG Resource Centres:
- ITEP
- JINR-LCG2
- Kharkov-KIPT
- RRC-KI
- RU-Moscow-KIAM
- RU-Phys-SPbSU
- RU-Protvino-IHEP
- RU-SPbSU
- Ru-Troitsk-INR
- ru-IMPB-LCG2
- ru-Moscow-FIAN
- ru-Moscow-GCRAS
- ru-Moscow-MEPHI
- ru-PNPI-LCG2
- ru-Moscow-SINP
Development and maintenance of RDIG e-infrastructure

The main directions in the development and maintenance of the RDIG e-infrastructure are the following:
- support of basic grid services;
- support of the Regional Operations Centre (ROC);
- support of Resource Centres (RCs) in Russia;
- RDIG Certification Authority;
- RDIG monitoring and accounting;
- participation in integration, testing and certification of grid software;
- support of users, Virtual Organizations (VOs) and applications;
- user and administrator training and education;
- dissemination, outreach and communication activities.
www.egee-rdig.ru
VOs

Infrastructure VOs (all RCs):
- dteam
- ops

Most RCs support the WLCG/EGEE VOs:
- ALICE
- ATLAS
- CMS
- LHCb

Supported by some RCs:
- gear
- Biomed
- Fusion

Regional VOs: Ams, eearth, photon, rdteam, rgstest

Flagship applications: LHC, fusion (towards ITER), nanotechnology. Current interest from: medicine, engineering.
RDIG – training, induction courses

PNPI, RRC KI, JINR, IHEP, ITEP, SINP MSU:
- Induction courses: ~600
- Courses for application developers: ~60
- Site administrator training: ~100

Today more than 200 physicists from different LHC centres in Russia have received this training.
RDIG monitoring & accounting: http://rocmon.jinr.ru:8080
Architecture
Russia in the Worldwide LHC Computing Grid

- A cluster of institutional computing centres; the major centres (RRC KI, JINR, IHEP, ITEP, PNPI, SINP MSU) are of Tier2 level, the others are (or will be) Tier3s.
- Each of the T2 sites operates for all four experiments: ALICE, ATLAS, CMS and LHCb.
- This model assumes partitioning/sharing of the facilities (disk/tape and CPU) between the experiments.
- Basic functions: analysis of real data; MC generation; user data support; plus analysis of some portion of the RAW/ESD data for tuning and developing reconstruction algorithms and the corresponding programming.
- This implies approximately equal partitioning of the storage: Real AOD ~ Real RAW/ESD ~ SimData.
Grid – solution for LHC experiments

LHC experiments support:
- networking
- computing power
- data storage
- software installation and maintenance
- mathematical support

Tier1s for JINR: ATLAS, CMS, ALICE
Russia in the Worldwide LHC Computing Grid

T1 centres for Russia:
- ALICE: FZK (Karlsruhe) serves as the canonical T1 centre for the Russian T2 sites.
- ATLAS: SARA (Amsterdam) serves as the canonical T1 centre for the Russian T2 sites.
- LHCb: the CERN facilities serve as the canonical T1 centre for the Russian T2 sites.
- CMS: CERN acts as a CMS T1 centre for the purposes of receiving Monte Carlo data from the Russian T2 centres, and also as a special-purpose T1 centre for CMS, taking a share of the general distribution of AOD and RECO data as required.
Russia in the Worldwide LHC Computing Grid

RDIG resource planning for LHC (ALICE, ATLAS, CMS, LHCb):

                  2008   2009   2010   2011   2012   2013
CPU (MSI2K)        3.0    6.2    8.5   12.5   16.0   24.0   (26% 29% 19%)
Disk (PByte)       0.6    1.8    3.0    4.0    6.0    8.0
WAN (Gbit/sec)     2.5           5.0          10.0
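The plan above amounts to an eightfold CPU increase over the period. A minimal sketch of the implied growth factors, using only the CPU figures from the planning table (the dictionary name is illustrative):

```python
# Planned RDIG CPU capacity (MSI2K) per year, from the planning table above.
cpu_plan = {2008: 3.0, 2009: 6.2, 2010: 8.5, 2011: 12.5, 2012: 16.0, 2013: 24.0}

# Year-over-year growth factor for each planned year.
years = sorted(cpu_plan)
growth = {y: round(cpu_plan[y] / cpu_plan[y - 1], 2) for y in years[1:]}

print(growth)                              # per-year growth factors
print(cpu_plan[2013] / cpu_plan[2008])     # overall growth 2008 -> 2013
```

The same computation applied to the disk row gives a comparable (roughly 13x) growth over the five years.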
LHC Computing Grid Project (LCG)

The protocol between CERN, Russia and JINR on participation in the LCG project was approved in 2003. The tasks of the Russian institutes in the LCG were defined as:
- LCG software testing;
- evaluation of new Grid technologies (e.g. Globus Toolkit 3) in the context of their use in the LCG;
- event generator repository and database of physical events: support and development.
LHC Computing Grid Project (LCG)

The tasks of the Russian institutes and JINR in the LCG (2009):
- Task 1: MW (gLite) test suite (supervisor O. Keeble)
- Task 2: LCG vs Experiments (supervisor I. Bird)
- Task 3: LCG monitoring (supervisor J. Andreeva)
- Task 4/5: Genser/MCDB (supervisor A. Ribon)
Russia in the Worldwide LHC Computing Grid

International connectivity for RDIG/LCG:
- connection with the LCG centres in Europe: GEANT2 PoP – two 2.5 Gbps links (10 Gbps in progress);
- dedicated connectivity to CERN: 1 Gbps Moscow–Amsterdam–CERN connection.
Russia in the Worldwide LHC Computing Grid

- Moscow: 10 Gbps – RRC KI, SINP MSU (in progress); 1 Gbps – ITEP, LPI, MEPhI
- JINR (Dubna): 20 Gbps
- IHEP (Protvino): 1 Gbps
- INR RAS (Troitsk): 1 Gbps
- PNPI (Gatchina): 1 Gbps
- SPbSU (St. Petersburg): 1 Gbps
- BINP (Novosibirsk): 100 Mbps
JINR – Moscow telecommunication channel

Collaboration with RSCC, RIPN, MSK-IX, JET Infosystems, Nortel.
JINR Central Information and Computing Complex (CICC)
In 2008, the total CICC performance was 1400 kSI2K and the disk storage capacity 100 TB. At present, the CICC performance equals 2300 kSI2K (960 cores) and the disk storage capacity is 500 TB.

              2008   2009   2010
CPU (kSI2K)   1250   1750   2500
Disk (TB)      400    800   1200
Tapes (TB)       –      –    500

SuperBlade – 5 boxes, 80 CPU Xeon 5430 2.66 GHz Quad Core (two with InfiniBand)
Production

Normalised CPU time per EGEE site for the LHC VOs (June – September 2009):

Grid site         CPU time    Num CPU
FZK-LCG          8,095,787       8620
CERN-PROD        4,552,891       6812
INFN-T1          4,334,940       2862
GRIF             4,089,269       3454
JINR             3,957,790        960
CYFRONET-LCG     3,948,857       2384
PIC              3,921,569       1337
UKI-GLASGOW      3,860,298       1912
RAL-LCG2         3,793,504       2532
UKI-LT2-IC-HEP   3,752,747        960
IN2P3-CC         3,630,425       4544
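Raw totals favour the largest sites, so it is instructive to normalise by core count. A short sketch computing delivered CPU time per core from the figures in the table above (site names and numbers exactly as reported; nothing else is assumed):

```python
# (Normalised CPU time, number of CPUs) per EGEE site for the LHC VOs,
# June - September 2009, as listed in the table above.
sites = {
    "FZK-LCG":        (8_095_787, 8620),
    "CERN-PROD":      (4_552_891, 6812),
    "INFN-T1":        (4_334_940, 2862),
    "GRIF":           (4_089_269, 3454),
    "JINR":           (3_957_790,  960),
    "CYFRONET-LCG":   (3_948_857, 2384),
    "PIC":            (3_921_569, 1337),
    "UKI-GLASGOW":    (3_860_298, 1912),
    "RAL-LCG2":       (3_793_504, 2532),
    "UKI-LT2-IC-HEP": (3_752_747,  960),
    "IN2P3-CC":       (3_630_425, 4544),
}

# CPU time delivered per core, highest first.
per_core = sorted(
    ((cpu / ncpu, name) for name, (cpu, ncpu) in sites.items()),
    reverse=True,
)
for value, name in per_core[:3]:
    print(f"{name:16s} {value:8.0f}")
```

On this per-core measure the 960-core JINR site comes out on top of the listed sites, which is the point the slide is making.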
Frames for Grid cooperation of JINR in 2009:
- Worldwide LHC Computing Grid (WLCG);
- Enabling Grids for E-sciencE (EGEE);
- RDIG development (FASI project);
- CERN–RFBR project “GRID Monitoring from VO perspective”;
- BMBF grant “Development of the GRID-infrastructure and tools to provide joint investigations performed with participation of JINR and German research centers”;
- “Development of Grid segment for the LHC experiments”, supported within the framework of the JINR – South Africa cooperation agreement;
- NATO project “DREAMS-ASIA” (Development of gRid EnAbling technology in Medicine & Science for Central ASIA);
- JINR–Romania cooperation: Hulubei–Meshcheryakov programme;
- Project “SKIF-GRID” (programme of the Belarusian–Russian Union State);
- Project GridNNN (National Nanotechnological Net).

We work in close cooperation with, and provide support to, our partners in Armenia, Belarus, Bulgaria, the Czech Republic, Georgia, Germany, Poland, Romania, South Africa, Ukraine and Uzbekistan.