Status of the JINR GRID-infrastructure and participation in the WLCG project
Vladimir Korenkov (JINR, Dubna)
Physics and Computing at ATLAS, Dubna, 21 January 2008

LHC experiments support
Tier-1s for JINR: ATLAS, CMS, ALICE.
Networking; computer power; data storage; software installation and maintenance; mathematical support.
Grid - the solution for the LHC experiments.
The JINR grid infrastructure operates as part of the global grid system EGEE (Enabling Grids for E-sciencE) and of the WLCG (Worldwide LHC Computing Grid) project, whose main task is the creation of an infrastructure of regional centres for the processing, storage and analysis of the data of the LHC physics experiments. JINR has participated in this project since 2003, when an agreement on participation in the LCG project (subsequently WLCG) was signed between CERN, Russia and JINR. In 2007, dedicated communication channels between JINR and the Tier-1 centres for the ATLAS, CMS and ALICE experiments were tested.

LHC experiments support at JINR
A necessary level of all the elements of the JINR telecommunication, network and information infrastructure must be provided:
- high-throughput telecommunication data links;
- the JINR local area network (LAN) backbone;
- the central computer complex and Grid segment;
- software support of the LHC experiments.

Current Status of External Network Communications at JINR

Telecommunication channels: construction of N*10 Gbps links
The key issue is further development of the external telecommunication links. Work is under way to expand the Dubna - Moscow channel to 10 Gbps (the current bandwidth is 1 Gbps). The Dubna - Moscow data link is to be upgraded to 10 Gbps in 2007 and to 40 Gbps by 2010.

Network monitoring: incoming and outgoing traffic distribution
Total in 2007: incoming 241.9 TB, outgoing 227.8 TB.
Most of the traffic is exchanged with CERN (88.8%), DESY, INFN, SARA and IN2P3.
47 local sub-networks; local traffic: 77.6 TB.
Created in 2007: a direct point-to-point data channel between the JINR LAN and CERN, as part of the JINR programme of participation in LCG at CERN, and the Dubna-City Internet eXchange.
[charts: incoming and outgoing traffic distribution]

JINR Local Area Network (LAN) backbone
Comprises 5880 computers and nodes; users: 3322; modem pool users: 689; remote VPN users (Lanpolis, Contact, TelecomMPK): 500.
High-speed transport (1 Gbps; at least 100 Mbps to each PC).
Controlled access (Cisco PIX-525 firewall) at the network entrance.
Partially isolated local traffic (8 divisions have their own subnetworks with Cisco Catalyst 3550 switches as gateways).
The general network authorization system covers many services (AFS, batch systems, Grid, JINR LAN remote access, etc.).
Plans: step-by-step modernization of the JINR backbone with transfer to 10 Gbps; development and modernization of the control system of the JINR backbone network.

JINR Central Information and Computing Complex (CICC): 670 kSi2K, 100 TB disk
Wide use of innovative grid technologies is the modern solution for various scientific and applied studies that require enormous computing resources. The JINR networked information and computing infrastructure is a distributed software and hardware complex that uses specialized software and multifunctional equipment. The core of this infrastructure is the JINR Central Information and Computing Complex (CICC); its basis is the JINR local area network, which unites the information and computing resources of the Institute into a single information and computing environment. The resources of this environment are available to all JINR users, including via grid technologies. Through the telecommunication links, it provides access to Russian and foreign scientific networks and remote access to the resources of the Institute.
In 2007 the performance of the JINR CICC was increased to 650 kSI2K (kiloSpecInt2000), with disk arrays of 56 TB. The CICC resources are used by participants of the E391A (KEK), COMPASS, D0, DIRAC, HARP, CMS, ALICE, ATLAS, HERA-B, H1, NEMO, OPERA, HERMES, CBM, PANDA and other experiments for the simulation of physics processes and the analysis of experimental data. Further growth of the CICC performance and storage capacity is foreseen to meet the data-processing needs of the LHC experiments and of the other experiments with JINR participation, together with the development and maintenance of the basic software.
Contract prepared in December 2007: SuperBlade, 2 boxes, 40 Xeon 5430 2.66 GHz quad-core CPUs, ~400 kSi2K. Total expected in March 2008: 1070 kSi2K.
The JINR CICC is the element of the Russian GRID segment used for LHC computing and for the other applications.

JINR WLCG infrastructure
The CICC comprises: 53 servers; 7 interactive nodes; 60 4-core computing nodes (Xeon 5150, 8 GB RAM); 6 2-core computing nodes (Athlon, 2 GB RAM, Myrinet).
Site name: JINR-LCG2. Internal CICC network: 1 Gbit/s. Operating systems: Scientific Linux 4.4 and Scientific Linux CERN 4.5. Middleware version: gLite 3.1.
File systems: AFS (the Andrew File System), a world-wide distributed file system, is used for user software and home directories; it makes it easy to share files in a heterogeneous distributed environment (UNIX flavours, NT) with a unique authentication scheme (Kerberos). dCache is used for data.
User registration system: Kerberos 5 (AFS uses Kerberos 5 for authentication).
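As an illustration of the authentication scheme just described (a minimal sketch added here, not from the original slides), the standard klist and tokens utilities can be used to check for a valid Kerberos 5 ticket and an AFS token on an interactive node:

    import subprocess

    def have_kerberos_ticket():
        # 'klist -s' exits non-zero when no valid ticket cache is present
        return subprocess.run(["klist", "-s"]).returncode == 0

    def have_afs_token():
        # the OpenAFS 'tokens' command lists current AFS tokens;
        # a crude check is to look for an 'Expires' line in its output
        out = subprocess.run(["tokens"], capture_output=True, text=True).stdout
        return "Expires" in out

    if __name__ == "__main__":
        if not have_kerberos_ticket():
            print("No Kerberos ticket: run kinit first")
        elif not have_afs_token():
            print("Ticket present but no AFS token: run aklog")
        else:
            print("Kerberos and AFS authentication OK")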

JINR Central Information and Computing Complex (CICC)
In June 2007 the CICC resources and services were integrated into a unified information and computing structure:
- SL3/32 Int/UI: interactive nodes / User Interface, 32-bit architecture, SL3;
- SL4/32 Int/UI: interactive nodes / User Interface, 32-bit architecture, SL4;
- SL4/64 Int/UI: interactive nodes / User Interface, 64-bit architecture, SL4;
- LCG-RB: LCG Resource Broker;
- LCG-CE: LCG Computing Elements;
- WN: Worker Nodes;
- X509 PX: proxy server;
- VObox: a special node where experiments (ALICE, CMS, etc.) or virtual organizations (VOs) can run specific agents and services, providing a reliable mechanism for accomplishing various VO-specific tasks;
- AFS: AFS servers;
- dCache: dCache servers (82 TB).

LHC software
In 2007 the migration to the 64-bit architecture under the Scientific Linux 4 operating system was accomplished at the CICC. The following current versions of the specialized software are installed at the JINR-LCG2 site:
- for ALICE: AliEn (v2-13.141), VO ALICE.AliRoot.v4-06-Rev-04, VO ALICE.APISCONFIG.V2.2, VO ALICE.GEANT3.v1-8-1, VO ALICE.loadgenerator.v-1.0, VO ALICE.ROOT.v5-16-00;
- for ATLAS: VO-atlas-cloud-NL, VO-atlas-production (12.0.31, 12.0.5, 12.0.6, 13.0.20, 13.0.30 and 13.0.30.1), VO-atlas-release (11.0.42 and 11.0.5), VO-atlas-tier-T3;
- for CMS: VO-cms-CMSSW (1_6_0, 1_6_1 and 1_6_3 - 1_6_7);
- for LHCb: VO-lhcb-Gauss (v25r9 - v25r12), VO-lhcb-XmlDDDB (v22r2 and v30r14), VO-lhcb-Boole-v12r10, VO-lhcb-DaVinci (v17r6 - v17r8, v18r0 and v19r0 - v19r5), VO-lhcb-Brunel (v30r15 and v30r17), VO-lhcb-DecFiles (v13r9, v13r10 and v13r12), VO-lhcb-ParamFiles (v5r0).
Several versions of the ALICE, ATLAS and CMS software are also installed at the CICC locally in the AFS system.

JINR WLCG infrastructure
JINR provides the following services in the WLCG environment:
- basic services: Berkeley DB Information Index (top-level BDII); site BDII; Computing Element (CE); Proxy server (PX); Resource Broker (RB); Workload Management System with Logging & Bookkeeping Service (WMS+LB); R-GMA-based monitoring system collector server (MON box); LCG File Catalog (LFC); Storage Element (SE, dCache, 82 TB);
- special services: VO boxes for ALICE and for CMS; ROCMON;
- PPS and testing infrastructure: pre-production gLite version;
- software for VOs: dCache xrootd door, AliRoot, ROOT and GEANT packages for ALICE; ATLAS packages; CMSSW packages for CMS; DaVinci and Gauss packages for LHCb.
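The top-level and site BDIIs listed above are LDAP servers (conventionally on port 2170, base "o=grid", publishing the GLUE schema). As a hedged illustration only (the endpoint and filter below are examples, not values from the slides), the computing elements a site publishes can be listed with a small script:

    import subprocess

    # Example: ask a top-level BDII for the CEs whose identifiers mention JINR.
    # The endpoint is an assumption; any top-level BDII of the period would do.
    BDII = "ldap://lcg-bdii.cern.ch:2170"
    FILTER = "(&(objectClass=GlueCE)(GlueCEUniqueID=*jinr*))"

    result = subprocess.run(
        ["ldapsearch", "-x", "-LLL", "-H", BDII, "-b", "o=grid",
         FILTER, "GlueCEUniqueID"],
        capture_output=True, text=True,
    )
    for line in result.stdout.splitlines():
        if line.startswith("GlueCEUniqueID:"):
            print(line.split(":", 1)[1].strip())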

Grid Virtual Organizations at the JINR CICC: June - December 2007

Grid VO                     Jobs number   CPU time (kSi2k*hours)
ALICE                            90 441           1 370 820.40
ATLAS                            15 643              48 980.43
CMS                              52 249              51 883.18
LHCb                             10 484               6 604.50
BIOMED                           25 103             164 102.07
FUSION                            9 208             145 053.80
Others (ops, dteam, hone)        17 665              47 022.10
TOTAL                           220 793           1 834 466.49
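A small worked example derived directly from the table above: dividing the CPU time by the number of jobs gives the average CPU cost per job and shows how differently the VOs load the site:

    # Average CPU time per job (kSi2k*hours), figures from the table above.
    vo_stats = {                # VO: (jobs, CPU time in kSi2k*hours)
        "ALICE":  (90441, 1370820.40),
        "ATLAS":  (15643, 48980.43),
        "CMS":    (52249, 51883.18),
        "LHCb":   (10484, 6604.50),
        "BIOMED": (25103, 164102.07),
        "FUSION": (9208, 145053.80),
    }

    for vo, (jobs, cpu) in vo_stats.items():
        print(f"{vo:7s} {cpu / jobs:7.2f} kSi2k*hours per job")
    # ALICE and FUSION jobs are the heaviest (~15-16 kSi2k*hours each),
    # while CMS and LHCb jobs average about one kSi2k*hour or less.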

dCache in JINR
[diagram: dCache architecture. Door nodes (lxfs07 ... lxfs71, rda02) serve the gFTP, SRM and XROOT protocols towards the Internet/GRID and the JINR backbone, and the local DCAP protocol to the worker nodes (WNs) and interactive machines (lxpub01); an admin node and the PNFS namespace manage the pool nodes holding 82 TB of RAID storage.]
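For illustration (a sketch, not from the slides; the door host and PNFS path below are invented placeholders), reading a file through a local DCAP door of such a setup typically amounts to a dccp copy:

    import subprocess

    # Hypothetical read from the JINR dCache via a DCAP door;
    # 22125 is the conventional dcap port, the path is a placeholder.
    door = "dcap://lxfs07.jinr.ru:22125"
    pnfs_path = "/pnfs/jinr.ru/data/cms/example.root"

    subprocess.run(["dccp", door + pnfs_path, "/tmp/example.root"], check=True)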

dCache: files, VOs, disks
[chart: disk space occupancy by files of CMS, ATLAS and other VOs, plus free space; 82 TB in total]

JINR in the RDIG infrastructure
The RDIG infrastructure now comprises 15 resource centres with more than 1500 CPUs and more than 650 TB of disk storage. RDIG resource centres:
– ITEP
– JINR-LCG2
– Kharkov-KIPT
– RRC-KI
– RU-Moscow-KIAM
– RU-Phys-SPbSU
– RU-Protvino-IHEP
– RU-SPbSU
– Ru-Troitsk-INR
– ru-IMPB-LCG2
– ru-Moscow-FIAN
– ru-Moscow-GCRAS
– ru-Moscow-MEPHI
– ru-PNPI-LCG2
– ru-Moscow-SINP

RDIG monitoring & accounting: http://rocmon.jinr.ru:8080
Monitored values:
- CPUs: total / working / down / free / busy;
- jobs: running / waiting;
- storage space: used / available;
- network: available bandwidth.
Accounting values:
- number of submitted jobs;
- used CPU time: total in seconds, and normalized by worker-node productivity;
- average time per job;
- waiting time: average ratio of waiting to used CPU time per job;
- physical memory: average per job.
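"Normalized" CPU time here means scaling raw CPU seconds by the benchmark rating of the worker node that ran the job, so that time spent on fast and slow nodes becomes comparable. A minimal sketch of such a normalization, assuming per-node kSI2K ratings (the node names and ratings are invented):

    # Normalize raw CPU seconds by worker-node performance (kSI2K rating),
    # so accounting figures from heterogeneous nodes are comparable.
    NODE_RATING_KSI2K = {"wn01": 1.4, "wn02": 0.8}   # illustrative values

    def normalized_cpu_hours(node, cpu_seconds):
        # normalized time = raw time x node rating, in kSI2K*hours
        return cpu_seconds / 3600.0 * NODE_RATING_KSI2K[node]

    jobs = [("wn01", 7200), ("wn02", 7200)]          # (node, raw CPU seconds)
    total = sum(normalized_cpu_hours(n, s) for n, s in jobs)
    print(f"total normalized CPU time: {total:.2f} kSI2K*hours")   # 4.40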

Russia and JINR: normalized CPU time per site (June 2007 - December 2007)

Site    Jun 07    Jul 07    Aug 07    Sep 07    Oct 07    Nov 07   Dec 07       Total    Share
JINR   103,238   244,393   136,615   320,041   365,456   341,876   11,258   1,522,877   47.26%
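A quick worked check, added for illustration: the monthly figures sum exactly to the quoted total, and, assuming the 47.26% is JINR's share of the all-Russia sum, they imply an all-Russia total of about 3.2 million normalized CPU-time units:

    # Consistency check of the JINR row above (June - December 2007).
    monthly = [103238, 244393, 136615, 320041, 365456, 341876, 11258]
    total = sum(monthly)
    print(total)                  # 1522877, matching the quoted total
    print(round(total / 0.4726))  # ~3.22 million: implied all-Russia total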

RDIG LCG2 sites: statistics on CPU usage, data transfers and site reliability

Site          CPU usage (kSI2K,      Reliability          Data transfers from CERN
              Oct 2007 - Jan 2008)   (Oct/Nov/Dec 2007)   (TB, Oct - Nov 2007)
FIAN                   336           51 / 94 / 43                  -
IHEP                 75203           71 / 72 / 84                 25
INR                  40620           79 / 93 / 35
ITEP                380164           63                           11
JINR               1008105                                        37
MEPHI                  479           92 / 95 / 44
Phys-SPbSU                           96 / 97
PNPI                366723           85                            5
RRC-KI                 120           86                            6
SINP                 92068           49                            3
SPbSU                 3556           83 / 88
Totals             1967374                                        87
Average                              73 / 56

Network bandwidth and reliability of data transfers
The following LHC computing centres serve as Tier-1 centres for RDIG: FZK (Karlsruhe) for ALICE; SARA (Amsterdam) for ATLAS; CERN (CERN-PROD) for CMS and LHCb. The quality of the connectivity between JINR and the Tier-1s is closely monitored.

FTS monitoring: CERN-JINR transfers
Best transfer-test results for CERN - JINR, 01.08.2007 - 04.08.2007: an average throughput of 20 MB/s was sustained during the whole of 04.08.
[chart: average data movement from CERN to JINR, 01.2007 - 11.2007]

CERN-PROD as T1 for RDMS, 4 August 2007: transfer rate 12 - 22 MB/s; 98% of transfers successful; 1.64 TB transferred. For comparison, in October - November 2006 (CNAF as FTS T1 for JINR): transfer rates below 2 MB/s and a ratio of successful transfers of only ~20-30%. http://rocmon.jinr.ru/scripts/phedex
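A quick worked check (added for illustration) shows the quoted figures are mutually consistent: 20 MB/s sustained for the whole of 4 August gives almost exactly the reported 1.64 TB, if MB and TB are read as binary units, as transfer tools of the time commonly reported them:

    # Does 20 MB/s sustained over one day match 1.64 TB?
    rate_mib_per_s = 20
    seconds_per_day = 24 * 3600
    total_mib = rate_mib_per_s * seconds_per_day   # 1,728,000 MiB
    print(f"{total_mib / 1024 / 1024:.2f} TiB")    # 1.65, vs. 1.64 TB reported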

Testing of the JINR-LCG2 site by the CMS JobRobot
Jobs from October 2007 till now: the JINR-LCG2 site demonstrates a high level of job-processing reliability. The CMS JobRobot is a program, currently operated from a machine at CERN, that creates typical CMS user analysis jobs, submits them to specific sites, and collects them while keeping track of the corresponding information. Its main objective is to test how sites respond to job processing, in order to detect possible problems and correct them as soon as possible (a sketch of such a testing loop follows below). As an example, see the CMS JobRobot summary table for 17.11.2007.
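The following minimal Python sketch is entirely hypothetical (it is not CMS JobRobot code) and merely illustrates the cycle the slide describes: create a standard test job per site, submit it, then poll and tally the outcomes:

    import time

    SITES = ["JINR-LCG2", "ITEP", "RRC-KI"]        # example site names

    def submit_test_job(site):
        """Stand-in for real submission (e.g. via a WMS); returns a job id."""
        return f"{site}-{int(time.time())}"

    def poll_status(job_id):
        """Stand-in for a real status query; here every job 'succeeds'."""
        return "Done"

    results = {}
    for site in SITES:
        job_id = submit_test_job(site)
        results[site] = results.get(site, 0) + (poll_status(job_id) == "Done")

    for site, ok in results.items():
        print(f"{site}: {ok} successful test job(s)")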

EGEE sites: normalised CPU time by site for the LHC VOs (ALICE, ATLAS, CMS and LHCb), June 2007 - December 2007

IN2P3-CC                 4 731 732
CERN-PROD                4 393 875
FZK-LCG2                 3 432 919
TRIUMF-LCG2              3 358 117
INFN-T1                  2 244 936
IN2P3-LPC                1 705 242
INFN-PISA                1 438 029
UKI-NORTHGRID-MAN-HEP    1 369 207
GRIF                     1 368 942
RAL-LCG2                 1 306 579
JINR-LCG2                1 217 267

Statistics obtained from the EGEE Accounting Portal: http://www3.egee.cesga.es/gridsite/accounting/CESGA/egee_view.html

EGEE sites: normalised CPU time by site for the LHC VOs (ALICE, ATLAS, CMS and LHCb), September 2007 - January 2008

SITE           Sep 07     Oct 07     Nov 07     Dec 07    Jan 08      Total
TRIUMF-LCG2   624,019  1,258,052  1,229,198    729,033   377,204  4,217,506
IN2P3-CC    1,386,891  1,121,041    466,991    436,611   122,955  3,534,489
FZK-LCG2      643,124    612,975    598,618    645,248   384,200  2,884,165
CERN-PROD     483,517    869,397    504,751    598,155   302,715  2,758,535
INFN-T1       217,179    387,501    358,604    910,196   365,699  2,239,179
NDGF-T1       354,620    705,382    478,403    405,428            1,943,833
IN2P3-CC-T2   445,528    696,033    638,411    160,448            1,940,420
GRIF          266,099    342,961    248,369    271,297   194,211  1,322,937
IN2P3-LPC     263,765    394,922    312,387    241,328    79,836  1,292,238
JINR-LCG2     228,648    280,674    278,157    268,371   171,503  1,227,353

Worldwide LHC Computing Grid Project (WLCG)
The protocol between CERN, Russia and JINR on participation in the LCG project was approved in 2003. The tasks of the Russian institutes in the LCG were defined as: LCG software testing; evaluation of new grid technologies (e.g. Globus Toolkit 3) in the context of their use in the LCG; support and development of an event generators repository and a database of physical events.

JINR in the WLCG:
- support and development of the WLCG infrastructure;
- participation in WLCG middleware testing/evaluation;
- participation in Data and Service Challenges;
- grid monitoring and accounting system development;
- FTS monitoring and testing;
- MCDB development;
- participation in ARDA activities in coordination with the experiments;
- HEP applications;
- user & administrator training and education;
- support of the JINR Member States in WLCG activities.

User training and induction
User support to stimulate active usage of LCG resources (courses, lectures, trainings, publication of user guides in Russian):
Courses "CMS user analysis using the EGEE/LCG infrastructure", January 19, 2007. Lectures: CMS computing support at JINR: current status and plans; a short introduction to LCG/EGEE; CMS user job submission using ASAP. Practical part: usage of ASAP ("private" user jobs and jobs needing access to the CMS databases of simulated events). http://rdms-cms.jinr.ru/docs/rdms_1/cours1.htm
Tutorial on distributed analysis of ATLAS data, April 19, 2007. Lectures: main LCG commands and operations with files; data analysis with GANGA, and a practical part (a minimal GANGA example follows below). http://atlasinfo.jinr.ru/computing/tutorial_190407.html
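For a flavour of what the GANGA part of such a tutorial covers, here is a minimal job definition of the kind used in distributed analysis. It is an illustrative sketch only: GANGA scripts are Python executed inside the GANGA shell (where Job, Executable and LCG are predefined), and the executable shown is just an example:

    # Run inside the GANGA shell: define a trivial job, send it to the
    # LCG backend, and let the 'jobs' registry track its progress.
    j = Job()
    j.name = "hello-lcg"
    j.application = Executable(exe="/bin/echo", args=["Hello from the Grid"])
    j.backend = LCG()          # submit via the LCG/gLite workload management
    j.submit()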

User training and induction: courses, lectures, practical training
[photos] Russian and JINR physicists participating in the ATLAS experiment train and practise with the Grid and GANGA; courses for CMS users on submitting jobs to the LCG infrastructure.

The XXI International Symposium on Nuclear Electronics and Computing (NEC'2007), Varna, Bulgaria, 10-17 September 2007. The main topics of the symposium:
- detector & nuclear electronics;
- computer applications for measurement and control in scientific research;
- triggering and data acquisition;
- accelerator and experiment automation control systems;
- methods of experimental data analysis;
- information & database systems;
- computer networks for scientific research;
- data & storage management;
- GRID computing.

2nd International Conference "Distributed Computing and Grid-technologies in Science and Education", Laboratory of Information Technologies, 26-30 June 2006.
The first conference, organized two years earlier by LIT, was the first forum in Russia in this field. The second conference was attended by more than 200 specialists from 17 countries, representing 46 universities and research centres. The scientific programme included 96 reports covering 8 topics: 1) creation and operating experience of grid infrastructures in science and education; 2) methods and techniques of distributed computing; 3) distributed processing and data storage; 4) organization of the network infrastructure for distributed data processing; 5) algorithms and methods for solving applied problems in distributed computing environments; 6) theory, models and methods of distributed data processing; 7) distributed computing within the LHC projects; 8) design techniques and experience of using distributed information grid systems. In the framework of the conference, two tutorials on the grid systems gLite and NorduGrid were held. In the general opinion of the attendees, such conferences should be continued, as they extend the dialogue between leading experts from Europe, the USA and Russia.
The 3rd International Conference "Distributed Computing and Grid-technologies in Science and Education" will be held on 30 June - 4 July 2008.

JINR CICC

                     2007         2008   2009   2010
CPU (kSI2K)          670 (1070)   1250   1750   2500
Disk (TB)            100           400    800   1200
Active tapes (TB)                                200

JINR cooperation in Grid:
- Worldwide LHC Computing Grid (WLCG);
- Enabling Grids for E-sciencE (EGEE);
- CERN-INTAS projects;
- BMBF grant "Development of the GRID-infrastructure and tools to provide joint investigations performed with participation of JINR and German research centers";
- the project "Development of Grid segment for the LHC experiments", supported in the framework of the JINR - South Africa cooperation agreement in 2006-2007;
- NATO project "DREAMS-ASIA" (Development of gRid EnAbling technology in Medicine & Science for Central ASIA);
- JINR-Romania cooperation within the Hulubei-Meshcheryakov programme;
- the LIT team participates in the project "SKIF-GRID", a programme of the Belarusian-Russian Union State.
We work in close cooperation with, and provide support to, our partners in Ukraine, Belarus, the Czech Republic, Romania, Poland, Germany, South Africa, Bulgaria, Armenia, Uzbekistan and Georgia; there are protocols of cooperation with INRNE (Bulgaria), ArmeSFo (Armenia), FZK Karlsruhe GmbH (Germany), Wroclaw University (Poland), IHEPI TSU (Georgia), NC PHEP BSU (Belarus), KFTI NASU (Ukraine), etc.

Conclusions
As a result of JINR's participation in the WLCG and EGEE projects, the JINR LCG/EGEE site is fully integrated into the worldwide LCG/EGEE grid infrastructure, providing all the necessary resources, services and software for the participation of JINR specialists in the ALICE, ATLAS and CMS experiments after the LHC start expected in 2008. We shall continue the required computing support for ALICE, ATLAS and CMS at the JINR CICC. We plan to continue our participation in the WLCG project to support and develop the JINR LCG/EGEE site during the running phase of the LHC experiments. The further JINR activities in the WLCG project are based on the Memorandum of Understanding signed in September 2007 by Russia, JINR and CERN; this agreement gives a legal and financial basis for the participation of Russia and JINR in the WLCG project after the LHC start. We shall also continue our activities at the next stage of the EGEE project, bearing in mind that these two global grid projects are developing in close cooperation.