Information, Computer and Network Support of the JINR's Activity
Laboratory of Information Technologies
Gheorghe Adam, LIT Deputy Director
Programme Advisory Committee for Nuclear Physics, 26th meeting, 12-13 April 2007
Direction: Networking, Computing, Computational Physics
The main purpose of the Laboratory of Information Technologies (LIT) at the Joint Institute for Nuclear Research (JINR) is to provide the basic research in frontier particle, nuclear and condensed matter physics conducted with direct JINR participation with the most advanced working tools of the contemporary computing environment. The specific research problems in the JINR laboratories and in institutes of the JINR Member States that call for LIT support are highly diverse; as a consequence, the LIT activity is interdisciplinary in character.
Specific tasks of theme 1048 (05-6-1048-2003/2007):
1. Provision of JINR and its Member States with high-speed telecommunication data links (task coordinators: Korenkov V.V., Dolbilov A.G.)
2. Creation of a high-speed, reliable and protected local area network (LAN) of JINR (task coordinators: Ivanov V.V., Popov L.A.)
3. Creation and maintenance of the distributed high-performance computing infrastructure and mass storage resources (task coordinators: Ivanov V.V., Korenkov V.V.)
4. Provision of information, algorithmic and software support of the JINR research-and-production activity (task coordinators: Zrelov P.V., Korenkov V.V.)
5. Elaboration of the JINR Grid segment and its inclusion in European and global Grid structures (task coordinators: Ivanov V.V., Korenkov V.V., Strizh T.A.)
1. High-speed telecommunication links
[Diagram: the JINR Local Area Network (1 Gbps) connects through RSCC «Dubna» and the city of Dubna (the Dubna local area network, "Dubna" University, BSINP MSU) to the Russian Satellite Communications Company in Moscow, to the Russian networks RBNet and RUNNet, and to the international networks GEANT, GLORIAD, NORDUNet, StarLight, NetherLight and CSTNet, at 1-2.5 Gbps.]
Development of external JINR computer communications:
a) a high-speed 1 Gbps JINR-Moscow data link;
b) JINR participation in the new-generation research computer network with Russian and international (GLORIAD, GEANT) segments for provision of JINR's activities;
c) integration with the educational network of Dubna.
Network Monitoring
Incoming and outgoing traffic distribution:
- Incoming: 82.71 Tb total in 2006 (45.86 Tb in 2005)
- Outgoing: 78.01 Tb total in 2006 (41.53 Tb in 2005)
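The roughly 80-90% year-over-year growth implied by these totals can be checked with a short calculation (a sketch in Python; only the figures from the slide are used):

```python
# Year-over-year traffic growth implied by the monitoring totals above
# (figures in terabits, taken directly from the slide).
traffic_tb = {
    "incoming": {"2005": 45.86, "2006": 82.71},
    "outgoing": {"2005": 41.53, "2006": 78.01},
}

for direction, years in traffic_tb.items():
    growth_pct = (years["2006"] / years["2005"] - 1) * 100
    print(f"{direction}: {growth_pct:.0f}% growth in 2006")
# prints:
# incoming: 80% growth in 2006
# outgoing: 88% growth in 2006
```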
Future plans for telecommunication channels:
- Participation in the development of the Russian research network of a new generation.
- Expansion of the Dubna-Moscow data link to the level of the international segment by use of gray optical fiber (10 Gbps in 2007, 40 Gbps in 2010, 100 Gbps by 2015).
- Development of the international segment in the framework of GEANT2 and GLORIAD; increase of the bandwidth up to 10 Gbps in 2007, 40 Gbps in 2010, and 100 Gbps by 2015.
- Integration with the city educational network and its development with transition to flagship technologies (10 Gbps Ethernet).
- Development of the corporate network of JINR and its Member States.
2. Local Area Network Backbone
Main features:
- comprises 5681 computers and nodes;
- high-speed transport (1 Gbps backbone, min. 100 Mbps to each PC);
- controlled access (Cisco PIX-525 firewall) at the network entrance;
- partially isolated local traffic (8 divisions have their own subnetworks with Cisco Catalyst 3550 switches as gateways).
Modernization of the Central Communication Node in 2006
Goals:
- to build a fail-proof core of the JINR LAN communication structure;
- to achieve an appropriate level of network security;
- to have good data-rate parameters;
- to have tools for controlling maintainability, accessibility and reliability.
New switching and routing equipment:
- Internet router Cisco 7606: Supervisor Engine 720 processor, MSFC3, PFC3B, 1 GB memory, 48 ports 10/100/1000, firewall security system;
- March 2006: central switch Cisco Catalyst 6509E;
- April 2006: VPN router Cisco 7513 (FZK Karlsruhe donation).
Future Development: 10GbE
High-performance computing datacenter, Grid computing and SONET replacement:
- high-performance clusters of hundreds of servers per Catalyst 6500;
- clusters of clusters ("Grids") interconnected by wire-rate 10GbE;
- long-haul 10GbE DWDM with Cisco ONS products for collaborative environments;
- 256K-route support for iBGP inter-cluster connections;
- Supervisor 720 with 2x10GbE / 4x10GbE, 16xGbE and 48x10/100/1000 modules.
Network security is a crucial and permanent problem, asking for steady research. The JINR LAN must be kept as a full-time working structure under constant control.
A) LAN protection is the crucial, vital thing to have.
B) Network security is a process, not a final product: we have to continue the permanent evaluation of all possible mechanisms to increase the level of security in every element of the JINR computer and network infrastructure.
C) With the extension of network services (secured remote access to JINR resources from home PCs, Internet access from Dubna hotels), security becomes a parameter of the greatest importance.
To be able to deal with security, we need an adequate network monitoring tool for a look-ahead assessment of the entire network environment.
3. JINR Central Computing and Informational Complex
From its beginnings in 1958 to today: 100 kSI2K and 57 TB of storage (2006), to be upgraded to 1000 kSI2K and 150-200 TB (2007).
Statistics by laboratories (12 months of 2006), total 507 users: LIT 182, DLNP 120, LPP 60, VBLHE 48, NOJINR 33, FLNR 28, BLTP 15, FLNP 12, UPR 9.
Group statistics (12 months of 2006), total 18 experiments: ATLAS 55, CMS 28, ALICE 24, HARP 9, COMPASS 7, DIRAC 6, NEMO 6, OPERA 4, D0 3, FOTON-2 3, STAR 1.
Special groups exist for ATLAS, CMS, ALICE, LHCb, HARP, COMPASS, DIRAC, D0, NEMO, OPERA, HERMES, H1, NA48, HERA-B, IREN, STAR, KLOD and FOTON-2.
Development of the JINR Central Information and Computing Complex (CICC) as a core of the distributed Grid infrastructure
The future plans include the development of the CICC to meet the needs of the collaborations, JINR users and the JINR Member States, according to the following updated roadmap table (under adequate financing):
CPU (kSI2K): 1000 (2007), 4000 (2010), 10000 (2015)
Disk space (TB): 150-200 (2007), 1500 (2010), 4000 (2015)
Tape, active (TB): 17 (2007), 30 (2010), 6000 (2015)
Resources requested by the LHC experiments for production in 2007: 600 kSI2K and 150-200 TB.
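The scale of the planned expansion can be read off the roadmap as growth factors (a sketch; the 2007 disk figure "150-200 TB" is taken at its upper bound of 200 TB, and only the CPU and disk rows are used):

```python
# Planned CICC growth factors, 2007 -> 2015, from the roadmap figures above.
# Assumption: the 2007 disk range "150-200 TB" is represented by 200 TB.
plan = {
    "CPU (kSI2K)": {"2007": 1000.0, "2015": 10000.0},
    "Disk (TB)":   {"2007": 200.0,  "2015": 4000.0},
}

for resource, by_year in plan.items():
    factor = by_year["2015"] / by_year["2007"]
    print(f"{resource}: x{factor:.0f} over 2007-2015")
# prints:
# CPU (kSI2K): x10 over 2007-2015
# Disk (TB): x20 over 2007-2015
```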
4. Current activities in provision of information and software support of the JINR activity
- Development and maintenance of the main information WWW/FTP servers and retrieval systems;
- creation and storage of electronic documents related to the scientific and administrative activity of LIT and JINR;
- development and support of information sites of conferences, workshops and symposia;
- development and maintenance of general- and special-purpose program libraries for various platforms;
- participation in the development and support of specialized software complexes for physical-process simulation and experimental data analysis;
- maintenance, modernization and support of computer systems for JINR administrative databases (in cooperation with STD AMS JINR);
- testing and modernization of system and applied software packages within the EGEE and ARDA projects;
- provision of access to the electronic archives of JINR and its Member States and to global electronic information resources.
Development and maintenance of general and special-purpose program libraries
5. Elaboration of the JINR Grid segment and its inclusion in European and global Grid structures
Directions of activity:
- participation in the LHC Computing Grid Project (LCG);
- development of the LCG/EGEE infrastructure;
- participation in the development of the Russian Tier2 Cluster;
- Grid middleware evaluations;
- Grid extensions for parallel computing (during 2009 and afterwards).
LHC Computing Grid Project (LCG)
The protocol between CERN, Russia and JINR on participation in the LCG Project was approved in 2003. JINR-specific general tasks: LCG software testing; evaluation of new Grid technologies (e.g. Globus Toolkit 3) in the context of their use within the LCG; support and development of the event-generator repository and databases of physical events.
JINR in LCG
- LCG infrastructure support and development at JINR;
- participation in LCG middleware testing/evaluation;
- participation in Service Challenges;
- CASTOR usage/development;
- Grid monitoring tools development;
- JINR LCG portal support and development;
- MCDB development;
- support of the JINR Member States in the LCG activities.
JINR in the LCG: work done and current status
At present the JINR LCG infrastructure comprises: User Interface (UI), Computing Element (CE), Storage Element (SE) and Worker Nodes (WN); basic services: Berkeley DB Information Index (BDII), Proxy Server (PX), Resource Broker (RB), 2 VOBoxes (for ALICE and CMS), ROCMON and a MON-box. Installed are: the current gLite version (3.0.2); an SE using dCache (31 TB), with an xrootd door in dCache for ALICE; XROOTD for ALICE; the latest versions of the atlas-release, atlas-offline and atlas-production packages for ATLAS; the CMKIN, CMSSW, OSCAR, ORCA and COBRA packages for CMS; and the DaVinci and Gauss packages for LHCb.
RDIG monitoring & accounting: http://rocmon.jinr.ru:8080
Further participation in the LCG project
- Support and development of the JINR LCG segment within the global LCG infrastructure;
- user support to stimulate active usage of LCG resources (courses, lectures, trainings, publication of user guides in Russian);
- participation in Service Challenges in coordination with the LHC experiments;
- LCG middleware testing/evaluation;
- Grid monitoring of the LCG infrastructure at JINR and other sites of the Russian Tier2 cluster;
- evaluation of new Grid technologies in the context of their usage in the LCG;
- JINR LCG web portal improvement;
- MCDB development (database structure, basic modules, interfaces);
- installation and testing of the AMPT model;
- participation in the development of the distributed ATLAS Data Quality Monitoring system within the ATLAS Data Preparation activity;
- participation in ARDA activities in coordination with the experiments;
- participation in CASTOR-2 development;
- support of the JINR Member States in LCG activities.
CERN becoming the T1 centre for RDMS is now under active discussion.
CMS Computing Support at LIT JINR
CMS jobs at the LCG-2 cluster at LIT accounted for 20% of cluster loading from October 2005 to October 2006. The current status of the RDMS CMS computing activities was reported at the meetings of the Russia-CERN Joint Working Group on Computing for LHC in March and September 2006 at CERN, at the GRID'2006 conference in Dubna (June 2006) and at the RDMS CMS annual conference (Varna, September 2006).
CMS software at the LHC cluster: CMSSW_0_7_0, CMSSW_0_8_0. CMS software at the LCG-2 cluster: VO-cms-CMSSW_1_0_4, VO-cms-CMSSW_1_0_5, VO-cms-CMSSW_1_0_6. There are 9 CMS VO members at JINR. JINR currently participates in CMS SC4 (load-test transfers and heart-beat transfers); the volume of monthly transferred data reached up to ~2 TB from May to October.
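The SC4 figures above imply a rough upper bound on the total volume moved in that period (a sketch; the ~2 TB/month cap and the May-October window are the only inputs, both taken from the slide):

```python
# Rough upper bound on CMS SC4 data moved to JINR, May-October 2006,
# assuming the stated cap of ~2 TB transferred per month.
months = ["May", "Jun", "Jul", "Aug", "Sep", "Oct"]
monthly_cap_tb = 2.0
total_cap_tb = monthly_cap_tb * len(months)
print(f"<= {total_cap_tb:.0f} TB over {len(months)} months")
# prints: <= 12 TB over 6 months
```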
RDMS CMS Databases
Raw data are transferred to JINR by Grid protocols; ~1 TB of raw data has already been transferred.
Analysis Monitoring System in the TDAQ of the ATLAS Experiment
[Diagram: monitoring services (Error Reporting / Message Reporting, Information Service, Online Histogramming, Event Monitoring, Gatherer) feed histograms, events, messages and counters to frameworks (GNAM, Athena Monitoring), displays (Event Dump, Information Monitors, Histogram Presenter), Data Quality Monitoring, web monitoring, and an archiving chain (Monitoring Data Archive with an Archive Browser).]
Smart Monitoring Framework
RDIG ALICE sites
Participation to date in SC4
DUBNA-GRID: Current Status
JINR network resources of the project (city of Dubna): a 100 Mbps JINR-University channel, 100 Mbps University-schools channels, and bridges and gateways in the University and city network structure.
- 212 configured nodes in: 1) computer classes of the "Dubna" university; 2) the LIT JINR Grid Laboratory; 3) MIREA computer classes; 4) Dubna secondary schools.
- Technologies for mass installation and for spreading software to all accessible nodes of the meta-cluster have been developed.
- A monitoring system of the meta-cluster has been developed.
- The meta-cluster has been integrated with the JINR batch system.
- The first real tasks have been performed (including real tasks for the ATLAS experiment).
- Integration of the HEPWEB server into the DUBNA-GRID environment has been realized (next slide).
[Slide figures: logical scheme of the Dubna-Grid meta-cluster; schematic view of loading a computing node; monitoring system of the Dubna-Grid meta-cluster; website of the project.]
LIT GridLab (part of the Dubna-Grid Project)
Within the Dubna-Grid project, a gLite site was created on the basis of virtual machines, consisting of a Computing Element and a Storage Element (as separate PCs) and Worker Nodes under VMware Player, running OS Scientific Linux CERN 3.0.8. This site is part of the worldwide Grid infrastructure. It will be used for training young specialists and students from JINR and the Member States, as well as for testing purposes. The installation and tuning of the gLite middleware has been organized as practical work for gLite administrators.
LIT GridLab consists of:
1. A Linux server (lgrsrv.jinr.ru): boot server for the virtual machines (Warewulf, scripts), Linux router (NAT, DHCP), monitoring of the meta-cluster.
2. LCG servers: Computing Element (lgrce.jinr.ru) and Storage Element (lgrse.jinr.ru), which make the virtual machines part of the worldwide Grid segment.
3. A Windows server (mirea.jinr.ru): Windows router (NAT, DHCP, DNS, proxy).
4. Worker nodes: rooms 539 and 563, 18 nodes in total (CL51-59, CL61-69); room 569, 4 nodes (LGR11-14).
2nd International Conference "Distributed Computing and Grid-technologies in Science and Education", Laboratory of Information Technologies, 26-30 June 2006
The first conference, organized two years earlier by LIT, became the first forum in Russia in this field. The second conference was attended by more than 200 specialists from 17 countries and from 46 universities and research centres. The scientific programme included 96 reports covering 8 topics: 1) creation and operating experience of Grid infrastructures in science and education; 2) methods and techniques of distributed computing; 3) distributed processing and data storage; 4) organization of the network infrastructure for distributed data processing; 5) algorithms and methods of solving applied problems in distributed computing environments; 6) theory, models and methods of distributed data processing; 7) distributed computing within the LHC projects; and 8) design techniques and experience of using distributed information Grid systems. In the framework of the conference, two tutorials on the Grid systems gLite and NorduGrid were organized. In the general opinion of the attendees, the conference series should be continued, as it extends the dialogue of leading experts from Europe, the USA and Russia.
Informational and Technical Support of the XXXIII International Conference on High Energy Physics (ICHEP'06), RAS, Moscow, July 26 - August 2, 2006
[Slide figures: functional scheme of the ICHEP'06 local area network; scheme of the equipment in the Big Hall of the Russian Academy of Sciences for plenary sessions; the "Conference ICHEP'06" information system and the conference web site; scheme of conference links, including IP telephony, Internet broadcasting, an Internet hall and a wireless Wi-Fi network.]
The XXI International Symposium on Nuclear Electronics and Computing (NEC'2007), Varna, Bulgaria, 10-17 September 2007
The main topics of the symposium are:
- detector and nuclear electronics;
- computer applications for measurement and control in scientific research;
- triggering and data acquisition;
- accelerator and experiment automation control systems;
- methods of experimental data analysis;
- information and database systems;
- computer networks for scientific research;
- data and storage management;
- Grid computing.
Conclusion
We ask for PAC agreement to proceed to the extension until 2010 of the theme "Information, Computer and Network Support of the JINR's Activity".
Field of research: 05 - Networking, computing and computational physics.
Thank you for your attention!