Grid computing for CBM at JINR/Dubna
Ivanov V.V., Laboratory of Information Technologies, Joint Institute for Nuclear Research, Dubna, Russia
CBM Collaboration meeting, GSI, Darmstadt, 9-12 March 2005

Main directions of this activity include:
– Integration and shared use of informational and computational resources, distributed databases and electronic libraries.
– Realisation of the Dubna-Grid project.
– Use of modern systems for storing and processing large-scale arrays of model and experimental data.
– Possibility of remote participation of experts from the JINR Member States in work at the basic facilities of JINR.
– Joint work on managing corporate networks, including the problems of control, analysis and protection of networks, servers and information.
– Joint mastering and application of Grid technologies for physics experiments and participation in the creation of national Grid segments.
– Joint work on the creation of distributed supercomputer applications.

JINR telecommunication links

JINR Gigabit Ethernet infrastructure ( )

Star-like logical topology of the JINR Gigabit Ethernet backbone: Cisco Catalyst 6509 and Cisco Catalyst 3550 switches at the core, Cisco Catalyst 3550 switches in 7 JINR divisions (6 Laboratories and the JINR Administration), and a Cisco Catalyst 3750 switch in LIT.

In 2004, the network of the Laboratory of Information Technologies remained part of the JINR backbone, while the other 7 JINR divisions were isolated from the backbone behind their Catalyst 3550 switches. Controlled access (Cisco PIX-525 firewall) was introduced at the entrance of the network.

Characteristics of the network:
– High-speed transport structure (1000 Mbit/s);
– Security: controlled access (Cisco PIX-525 firewall) at the entrance of the network;
– Partially isolated local traffic (6 divisions have their own subnetworks with a Cisco Catalyst 3550 as a gateway).

Network monitoring: distribution of incoming and outgoing traffic (totals per year, in TB, for incoming and for outgoing traffic).

CCIC JINR: 130 CPUs, 17 TB RAID-5.
Allocation of nodes: 10 – Interactive & UI; 32 – Common PC-farm; 30 – LHC; 14 – MYRINET cluster (parallel computing); 20 – LCG; 24 – servers.

JINR Central Information and Computing Complex: total CPU (kSI2000), disk space (TB), mass storage (TB).

Russian regional centre: the DataGrid cloud (RRC-LHC). Collaborative centres – PNPI, IHEP, RRC KI, ITEP, JINR, SINP MSU – each with a Tier2 cluster and Grid access, connected to the LCG Tier1/Tier2 cloud (CERN, FZK, …). Regional connectivity: cloud backbone – Gbit/s; links to labs – 100–1000 Mbit/s.

LCG Grid Operations Centre: LCG-2 job submission monitoring map.

LHC Computing Grid Project (LCG):
– LCG deployment and operation;
– LCG test suite;
– CASTOR;
– LCG AA – GENSER & MCDB;
– ARDA.

Main results of the LCG project:
– Development of the G2G (GoToGrid) system to support installation and debugging of an LCG site.
– Participation in the development of the CASTOR system: elaboration of an auxiliary module that serves as a garbage collector.
– Development of the database structure, creation of a set of base modules, and development of a web interface for creating/adding articles to the database (descriptions of event files and related objects).
– Testing the reliability of data transfer over the GridFTP protocol implemented in the Globus Toolkit 3.0 package (a hedged sketch of such a check is given below).
– Testing the EGEE middleware components (gLite): Metadata and Fireman catalogs.
– Development of code for constant WMS (Workload Management System) monitoring of the INFN site gundam.chaf.infn.it in the testbed of the new EGEE middleware gLite.
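As a hedged illustration of such a reliability check, the minimal sketch below copies a file to a GridFTP server and back and compares checksums; the endpoint, paths and helper names are hypothetical, and only the globus-url-copy client from the Globus Toolkit (with a valid grid proxy) is assumed.

```python
# Minimal sketch of a GridFTP transfer reliability check.
# Hypothetical endpoint and paths; assumes the globus-url-copy client
# from the Globus Toolkit is installed and a grid proxy is initialised.
import hashlib
import subprocess

def md5sum(path):
    """Return the MD5 checksum of a local file."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def transfer_and_verify(local_path, remote_url, back_path):
    """Copy a file to the server, copy it back, and compare checksums."""
    subprocess.run(["globus-url-copy", "file://" + local_path, remote_url], check=True)
    subprocess.run(["globus-url-copy", remote_url, "file://" + back_path], check=True)
    return md5sum(local_path) == md5sum(back_path)

if __name__ == "__main__":
    ok = transfer_and_verify(
        "/tmp/testfile.dat",                                  # hypothetical local test file
        "gsiftp://gridftp.example.org/scratch/testfile.dat",  # hypothetical GridFTP endpoint
        "/tmp/testfile_back.dat",
    )
    print("transfer verified" if ok else "checksum mismatch")
```

Repeating such a loop over many files and file sizes gives a simple measure of transfer reliability.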

LCG AA – GENSER & MCDB:
– Correct Monte Carlo simulation of complicated processes requires rather sophisticated expertise.
– Different physics groups often need the same MC samples.
– Public availability of the event files speeds up their validation.
– A central and public location where well-documented event files can be found.
– The goal of MCDB is to improve communication between Monte Carlo experts and end users.

Main features of LCG MCDB (the most important reason to develop LCG MCDB is to remove the restrictions of the CMS MCDB):
– An SQL-based database with wide search abilities (an illustrative query sketch follows below);
– Possibility to keep events at the particle level as well as at the partonic level;
– Support for large event files – storage: CASTOR at CERN;
– Direct programming interface from LCG collaboration software;
– Inheritance of all the advantages of its predecessor, the CMS MCDB.
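The query sketch below only illustrates what an SQL-based catalogue with wide search abilities could look like: the schema, field names and sample record are assumptions, not the actual LCG MCDB schema, and sqlite3 stands in for the real database backend.

```python
# Illustrative event-file metadata catalogue (assumed schema, not the real LCG MCDB).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE event_files (
        id        INTEGER PRIMARY KEY,
        process   TEXT,   -- physics process, e.g. 'pp -> ttbar'
        generator TEXT,   -- MC generator used
        energy    REAL,   -- centre-of-mass energy in TeV
        level     TEXT,   -- 'particle' or 'parton'
        location  TEXT    -- e.g. a CASTOR path at CERN
    )
""")
conn.execute(
    "INSERT INTO event_files (process, generator, energy, level, location) "
    "VALUES (?, ?, ?, ?, ?)",
    ("pp -> ttbar", "PYTHIA", 14.0, "particle", "/castor/cern.ch/mcdb/ttbar_001.ev"),
)

# 'Wide search ability': select samples by process, generator and energy range.
rows = conn.execute(
    "SELECT location FROM event_files "
    "WHERE process LIKE ? AND generator = ? AND energy BETWEEN ? AND ?",
    ("%ttbar%", "PYTHIA", 10.0, 14.0),
).fetchall()
print(rows)
```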

MCDB web interface: only the Mozilla browser is supported (for the time being).

High Energy Physics Web at LIT.
Idea: create a server with web access to the computing resources of LIT for Monte Carlo simulations, mathematical support, etc.
Goals:
– Provide physicists with informational and mathematical support;
– Run Monte Carlo simulations on the server;
– Provide physicists with new calculation/simulation tools;
– Create a copy of GENSER of the LHC Computing Grid project;
– Introduce young physicists to the HEP world.
HepWeb.jinr.ru will include FRITIOF, HIJING, the Glauber approximation, the Reggeon approximation, … (HIJING web interface; a minimal sketch of such a web front-end follows below).
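A minimal sketch of such a server-side simulation front-end, assuming a stub run_simulation helper and illustrative parameter names (this is not the actual HepWeb.jinr.ru interface): the server accepts generator, energy and event-count parameters over HTTP and would hand them to HIJING, FRITIOF, etc. on the server side.

```python
# Minimal sketch of a web front-end for server-side MC simulations
# (parameter names and the run_simulation stub are hypothetical,
#  not the actual HepWeb.jinr.ru interface).
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def run_simulation(generator, energy, nevents):
    """Stub: here the server would launch HIJING/FRITIOF etc. and return a result page."""
    return f"Queued {nevents} {generator} events at sqrt(s) = {energy} GeV"

class HepWebHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        generator = query.get("generator", ["HIJING"])[0]
        energy = float(query.get("energy", ["200"])[0])
        nevents = int(query.get("nevents", ["1000"])[0])
        body = run_simulation(generator, energy, nevents).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # e.g. http://localhost:8080/?generator=HIJING&energy=200&nevents=5000
    HTTPServer(("", 8080), HepWebHandler).serve_forever()
```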

A fixed bug in the HIJING Monte Carlo model secures energy conservation (V.V. Uzhinsky, LIT).

G2G is a web-based tool to support the generic installation and configuration of (LCG) grid middleware:
– The server runs at CERN;
– Relevant site-dependent configuration information is stored in a database;
– It provides added-value tools, configuration files and documentation to install a site manually (or by a third-party fabric management tool).

G2G features are thought to be useful for all sites…
– First-level assistance and hints (Grid Assistant);
– Site profile editing tool.
…for small sites…
– Customized tools to make manual installation easier.
…for large sites…
– Documentation to configure fabric management tools.
…and for us (support sites):
– Centralized repository to query for site configuration (see the sketch below).
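As an illustration of querying the centralized repository for site configuration, the sketch below pulls a site's stored configuration record and renders it as key=value lines; the URL, JSON format, key names and site name are assumptions, not the actual G2G interface.

```python
# Sketch of pulling site-dependent configuration from a central repository
# and rendering a local config file (the URL, keys and site name are
# hypothetical; this is not the actual G2G interface).
import json
from urllib.request import urlopen

G2G_URL = "https://g2g.example.cern.ch/sites/{site}/config"  # hypothetical endpoint

def fetch_site_config(site_name):
    """Download the stored configuration record for one site as JSON."""
    with urlopen(G2G_URL.format(site=site_name)) as resp:
        return json.load(resp)

def write_node_config(cfg, path="site-info.def"):
    """Render key=value pairs in the style of an LCG site-info file."""
    with open(path, "w") as f:
        for key, value in sorted(cfg.items()):
            f.write(f"{key.upper()}={value}\n")

if __name__ == "__main__":
    cfg = fetch_site_config("JINR-LCG2")   # hypothetical site name
    write_node_config(cfg)
```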

Deployment strategy (MIG/G2G): Worker Node, User Interface, Computing Element, Classical Storage Element, Resource Broker, LCG-BDII, Proxy, Mon Box – for the current LCG release (LCG-2_2_0) and the next LCG release.

EGEE (Enabling Grids for E-sciencE). Participation in the EGEE project together with 7 Russian scientific centres: creation of an infrastructure for the application of Grid technologies on a petabyte scale. The JINR group activity includes the following main directions:
– SA1 – European Grid Operations, Support and Management;
– NA2 – Dissemination and Outreach;
– NA3 – User Training and Induction;
– NA4 – Application Identification and Support.

Russian Data Intensive GRID (RDIG) Consortium – EGEE Federation. Eight institutes made up the RDIG consortium as a national federation in the EGEE project:
– IHEP – Institute of High Energy Physics (Protvino);
– IMPB RAS – Institute of Mathematical Problems in Biology (Pushchino);
– ITEP – Institute of Theoretical and Experimental Physics (Moscow);
– JINR – Joint Institute for Nuclear Research (Dubna);
– KIAM RAS – Keldysh Institute of Applied Mathematics (Moscow);
– PNPI – Petersburg Nuclear Physics Institute (Gatchina);
– RRC KI – Russian Research Center “Kurchatov Institute” (Moscow);
– SINP MSU – Skobeltsyn Institute of Nuclear Physics (MSU, Moscow).

LCG/EGEE infrastructure. The LCG/EGEE infrastructure has been created, comprising managing servers and 10 two-processor computing nodes. Software for the CMS, ATLAS, ALICE and LHCb experiments has been installed and tested, with participation in mass simulation sessions for these experiments. A server based on the MonALISA system has been installed for monitoring Russian LCG sites; the possibilities of other systems (GridICE, MapCenter) are being investigated.

Participation in DC04. Production within the framework of the Data Challenges was accomplished at the local JINR LHC and LCG-2 farms:
– CMS: events (350 GB); 0.5 TB of B-physics data was downloaded to the CCIC for analysis; the JINR contribution to CMS DC04 was at a level of 0.3%.
– ALICE: the JINR contribution to ALICE DC04 was at a level of 1.4% of the total number of successfully completed AliEn jobs.
– LHCb: the JINR contribution to LHCb DC %.

Dubna educational and scientific network – Dubna-Grid Project (2004), more than 1000 CPUs.
Participants: Laboratory of Information Technologies, JINR; University "Dubna"; Directorate of the programme for development of the science city Dubna; University of Chicago, USA; University of Lund, Sweden.
Goal: creation of a Grid testbed on the basis of the resources of Dubna scientific and educational establishments, in particular the JINR Laboratories, the International University "Dubna", secondary schools and other organizations.

City high-speed network. The 1 Gbps city high-speed network was built on single-mode fiber optic cable with a total length of almost 50 km. The educational organizations operate more than 500 easily administered networked computers.

Network of the University "Dubna". A backbone fiber optic highway links the computer networks of the buildings housing the university complex. Three server centres maintain applications and services for computer classes, departments and university subdivisions, as well as for computer classes of secondary schools. The total number of PCs exceeds 500.

Concluding remarks:
– The JINR/Dubna Grid segment and personnel are well prepared to be effectively involved in the MC simulation and data analysis activity of the CBM experiment.
– Working group: prepare a proposal on a common JINR-GSI-Bergen Grid activity for the CBM experiment.
– Proposal: to be presented at the CBM Collaboration meeting in September.