Presentation on theme: "Ivanov V.V., Laboratory of Information Technologies, Joint Institute for Nuclear Research, Dubna, Russia. CBM Collaboration meeting, GSI, Darmstadt."— Presentation transcript:

1 Grid computing for CBM at JINR/Dubna. Ivanov V.V., Laboratory of Information Technologies, Joint Institute for Nuclear Research, Dubna, Russia. CBM Collaboration meeting, GSI, Darmstadt, 9-12 March 2005

2 Main directions of this activity include:
– Integration and shared use of informational and computational resources, distributed databases, and electronic libraries.
– Realisation of the Dubna-Grid project.
– Use of modern systems for storing and processing large-scale arrays of model and experimental data.
– Remote participation of experts from the JINR Member States at the basic facilities of JINR.
– Joint work on managing corporate networks, including control, analysis and protection of networks, servers and information.
– Joint mastering and application of Grid technologies for physics experiments, and participation in the creation of national Grid segments.
– Joint work on the creation of distributed supercomputer applications.

3 JINR telecommunication links

4 JINR Gigabit Ethernet infrastructure (2003-2004)

5

6 Star-like logical topology of the JINR Gigabit Ethernet backbone: Cisco Catalyst 6509 and Cisco Catalyst 3550 switches at the core, Cisco Catalyst 3550 switches in 7 JINR divisions (6 Laboratories and the JINR Administration), and a Cisco Catalyst 3750 switch in LIT.

7 In 2004: the network of the Laboratory of Information Technologies remained part of the JINR backbone, while the other 7 JINR divisions were isolated from the backbone behind their Catalyst 3550 switches. Controlled access (Cisco PIX-525 firewall) at the entrance of the network.

8 Characteristics of the network:
– High-speed transport structure (1000 Mbit/s);
– Security: controlled access (Cisco PIX-525 firewall) at the entrance of the network;
– Partially isolated local traffic (6 divisions have their own subnetworks with a Cisco Catalyst 3550 as a gateway).

9 Network Monitoring. Incoming and outgoing traffic distribution for the year 2004: 36.1 TB incoming, 43.64 TB outgoing.
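These yearly totals correspond to rather modest average rates compared with the 1000 Mbit/s backbone. A back-of-envelope check (a sketch, assuming decimal terabytes and a full 365-day year):

```python
# Average sustained rate implied by a yearly traffic volume.
SECONDS_PER_YEAR = 365 * 24 * 3600

def avg_mbit_per_s(terabytes: float) -> float:
    """Average rate in Mbit/s for a yearly volume given in TB (decimal)."""
    return terabytes * 1e12 * 8 / SECONDS_PER_YEAR / 1e6

incoming = avg_mbit_per_s(36.1)   # ~9.2 Mbit/s average
outgoing = avg_mbit_per_s(43.64)  # ~11.1 Mbit/s average
```

The averages of course hide the bursty nature of real traffic; peak demand is what motivates the gigabit links.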

10 CCIC JINR (MYRINET cluster, Common PC-farm, Interactive PC-farm): 130 CPU, 17 TB RAID-5.
– 10 – Interactive & UI
– 32 – Common PC-farm
– 30 – LHC
– 14 – MYRINET (parallel)
– 20 – LCG
– 24 – servers

11 JINR Central Information and Computing Complex

  TOTAL               2005   2006   2007
  CPU (kSI2000)        100    660   1000
  Disk Space (TB)       50    200    400
  Mass Storage (TB)     1.5    50    450

12 Russian regional centre: the DataGrid cloud. LCG Tier1/Tier2 cloud (RRC-LHC): PNPI, IHEP, RRC KI, ITEP, JINR, SINP MSU; links to CERN and FZK at … Gbit/s. Regional connectivity: cloud backbone – Gbit/s; to labs – 100–1000 Mbit/s. Collaborative centers: Tier2 cluster, Grid access.

13 LCG Grid Operations Centre LCG-2 Job Submission Monitoring Map

14 LHC Computing Grid Project (LCG): LCG Deployment and Operation; LCG Testsuite; Castor; LCG AA – Genser & MCDB; ARDA.

15 Main results of the LCG project:
– Development of the G2G (GoToGrid) system to support installation and debugging of LCG sites.
– Participation in the development of the CASTOR system: elaboration of an auxiliary module serving as a garbage collector.
– Development of the database structure, creation of a set of base modules, and development of a web interface for creating/adding articles to the database (descriptions of files with events and related objects): http://mcdb.cern.ch
– Testing the reliability of data transfer over the GridFTP protocol implemented in the Globus Toolkit 3.0 package.
– Testing the EGEE middleware (gLite) components: the Metadata and Fireman catalogs.
– Development of code for continuous WMS (Workload Management System) monitoring of the INFN site gundam.chaf.infn.it in the testbed of the new EGEE middleware gLite.

16 LCG AA – Genser & MCDB
– Correct Monte Carlo simulation of complicated processes requires rather sophisticated expertise.
– Different physics groups often need the same MC samples.
– Public availability of the event files speeds up their validation.
– A central and public location where well-documented event files can be found.
– The goal of MCDB is to improve the communication between Monte Carlo experts and end-users.

17 Main Features of LCG MCDB
– The most important reason to develop LCG MCDB is to overcome the restrictions of the CMS MCDB.
– An SQL-based database with wide search abilities.
– Ability to keep events at the particle level as well as at the partonic level.
– Support for large event files (storage: Castor at CERN).
– Direct programming interface from the LCG collaboration software.
– Inheritance of all the advantages of its predecessor, the CMS MCDB.
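The "SQL-based database with wide search abilities" can be pictured with a toy sketch. Everything below — the table, column names and sample values — is invented for illustration and is not the actual MCDB schema:

```python
import sqlite3

# Hypothetical MCDB-style metadata table (schema and names invented).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE event_files (
        id INTEGER PRIMARY KEY,
        process TEXT,        -- physics process, e.g. 'ttbar'
        generator TEXT,      -- MC generator used
        level TEXT,          -- 'particle' or 'partonic'
        n_events INTEGER,
        castor_path TEXT     -- large files would live in Castor at CERN
    )
""")
conn.execute(
    "INSERT INTO event_files (process, generator, level, n_events, castor_path) "
    "VALUES ('ttbar', 'PYTHIA', 'partonic', 100000, '/castor/cern.ch/...')"
)
# A "wide search": filter by process and by event level.
rows = conn.execute(
    "SELECT generator, n_events FROM event_files "
    "WHERE process = ? AND level = ?", ("ttbar", "partonic")
).fetchall()
```

The point of such a schema is that end-users query metadata (process, generator, level) while the bulky event files themselves stay in mass storage.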

18 MCDB Web Interface: http://mcdb.cern.ch (only the Mozilla browser is supported for the time being).

19 High Energy Physics WEB at LIT
Idea: create a server with web access to the computing resources of LIT for Monte Carlo simulations, mathematical support, etc.
Goals:
– Provide physicists with informational and mathematical support;
– Run Monte Carlo simulations on the server;
– Provide physicists with new calculation/simulation tools;
– Create a copy of GENSER of the LHC Computing Grid project;
– Introduce young physicists to the HEP world.
HepWeb.jinr.ru will include FRITIOF, HIJING, the Glauber approximation, the Reggeon approximation, … (HIJING Web Interface)

20 A fixed bug in the HIJING Monte Carlo model secures energy conservation. V.V. Uzhinsky (LIT)

21 G2G is a web-based tool to support the generic installation and configuration of (LCG) grid middleware –The server runs at CERN –Relevant site-dependent configuration information is stored in a database –It provides added-value tools, configuration files and documentation to install a site manually (or by a third-party fabric management tool)

22 G2G features are thought to be useful for ALL sites … –First level assistance and hints (Grid Assistant) –Site profile editing tool … for small sites … –Customized tools to make manual installation easier … for large sites … –Documentation to configure fabric management tools … and for us (support sites) –Centralized repository to query for site configuration

23 Deployment Strategy (MIG, G2G). Node types: Worker Node, User Interface, Computing Element, Classical Storage Element, Resource Broker, LCG-BDII, Proxy, Mon Box. Releases: Current LCG Release (LCG-2_2_0); Next LCG Release.

24 EGEE (Enabling Grids for E-sciencE). Participation in the EGEE project together with 7 Russian scientific centres: creation of an infrastructure for the application of Grid technologies on a petabyte scale. The JINR group activity includes the following main directions:
– SA1 – European Grid Operations, Support and Management
– NA2 – Dissemination and Outreach
– NA3 – User Training and Induction
– NA4 – Application Identification and Support

25 Russian Data Intensive GRID (RDIG) Consortium – EGEE Federation. Eight institutes made up the RDIG (Russian Data Intensive GRID) consortium as a national federation in the EGEE project:
– IHEP – Institute of High Energy Physics (Protvino)
– IMPB RAS – Institute of Mathematical Problems in Biology (Pushchino)
– ITEP – Institute of Theoretical and Experimental Physics (Moscow)
– JINR – Joint Institute for Nuclear Research (Dubna)
– KIAM RAS – Keldysh Institute of Applied Mathematics (Moscow)
– PNPI – Petersburg Nuclear Physics Institute (Gatchina)
– RRC KI – Russian Research Center "Kurchatov Institute" (Moscow)
– SINP-MSU – Skobeltsyn Institute of Nuclear Physics (MSU, Moscow)

26 LCG/EGEE Infrastructure. An LCG/EGEE infrastructure comprising managing servers and 10 two-processor computing nodes has been created. Software for the CMS, ATLAS, ALICE and LHCb experiments has been installed and tested, with participation in mass simulation sessions for these experiments. A server for monitoring Russian LCG sites, based on the MonALISA system, has been installed; the possibilities of other systems (GridICE, MapCenter) are under study.

27 Participation in DC04. Production within the framework of the Data Challenges (DCs) was accomplished at the local JINR LHC and LCG-2 farms:
– CMS: 150 000 events (350 GB); 0.5 TB of data on B-physics was downloaded to the CCIC for analysis; the JINR contribution to CMS DC04 was at the level of 0.3%.
– ALICE: the JINR contribution to ALICE DC04 was at the level of 1.4% of the total number of successfully completed AliEn jobs.
– LHCb: the JINR contribution to LHCb DC04 was 0.5%.
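As a sanity check of the CMS figures above, the 350 GB / 150 000 events quoted for DC04 imply an average event size of roughly 2.3 MB (a sketch, assuming decimal gigabytes):

```python
# Average event size implied by the CMS DC04 numbers quoted above
# (150 000 events totalling 350 GB; decimal units assumed).
def avg_event_size_mb(total_gb: float, n_events: int) -> float:
    """Average event size in MB for a sample of n_events totalling total_gb."""
    return total_gb * 1e3 / n_events

cms_event_mb = avg_event_size_mb(350, 150_000)  # ~2.3 MB per event
```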

28

29 Dubna educational and scientific network – Dubna-Grid Project (2004). More than 1000 CPUs. Creation of a Grid testbed on the basis of the resources of Dubna scientific and educational establishments, in particular JINR Laboratories, the International University "Dubna", secondary schools and other organizations. Participants: Laboratory of Information Technologies, JINR; University "Dubna"; Directorate of the programme for development of the science city Dubna; University of Chicago, USA; University of Lund, Sweden.

30 City high-speed network. The 1 Gbps city high-speed network was built on the basis of a single-mode fiber-optic cable with a total length of almost 50 km. The educational organizations contribute more than 500 easily administered network computers.

31 Network of the University "Dubna". A backbone fiber-optic highway joins the computer networks of the buildings housing the university complex into the computer network of the University "Dubna". Three server centres maintain the applications and services of computer classes, departments and university subdivisions, as well as the computer classes of secondary schools. The total number of PCs exceeds 500.

32 Concluding remarks
– The JINR/Dubna Grid segment and personnel are well prepared to be effectively involved in the CBM experiment MC simulation and data analysis activity.
– Working group: prepare a proposal on a common JINR-GSI-Bergen Grid activity for the CBM experiment.
– Proposal: to be presented at the CBM Collaboration meeting in September.

