Participation of JINR in CERN-INTAS project (03-52-4297). Korenkov V., Mitcin V., Nikonov E., Oleynik D., Pose V., Tikhonenko E. 19 March 2004.

T3. Optimization of data access and transfer in LCG
Objectives: A cluster of Tier2 centers will be created in Russia as an operational part of the LCG infrastructure. The distributed facilities will consist of several computing farms located in different places, with different access speeds to Tier1. The data storage facilities will differ from site to site. Moreover, the overall facilities will not fully meet the requirements of Russian physicists. These conditions require the development of algorithms and software to optimize the data migration paths between end users and the Russian Tier2 centers, and between those centers and Tier1 sites.
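
As a rough illustration of what such a path-optimization algorithm has to trade off, the Python sketch below picks a source replica from measured channel rates while respecting free space at the destination. The site names, rates and the simple cost model are hypothetical and not part of the project plan.

    # Illustrative sketch (not the project's actual algorithm): choose the
    # replica site with the shortest expected transfer time, provided the
    # destination has room for the dataset. Site names and figures are made up.
    def choose_transfer(replica_sites, dataset_gb, dest_free_gb):
        """Return the replica site expected to deliver the dataset fastest."""
        if dest_free_gb < dataset_gb:
            raise RuntimeError("not enough free space at the destination")
        # expected time (s) = size (MB) / measured rate (MB/s)
        return min(replica_sites,
                   key=lambda s: dataset_gb * 1024 / s["rate_mb_s"])

    replicas = [
        {"site": "CERN Tier1", "rate_mb_s": 1.5},
        {"site": "FZK Tier1",  "rate_mb_s": 2.5},
        {"site": "SINP Tier2", "rate_mb_s": 6.0},
    ]
    print(choose_transfer(replicas, dataset_gb=50, dest_free_gb=400)["site"])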

T3. Optimization of data access and transfer in LCG
The main goals are:
1) optimization of the use of the Russia-LCG and regional communication links;
2) effective use of the data storage and CPU resources of the Russian Tier2 cluster;
3) stable and effective access of end users to data stored in the Tier2-Tier1 system.

T3. Optimization of data access and transfer in LCG
Methodology: Analysis of current data transfer rates and prediction of the free space available for data storage. An accounting system for the real data transfer rates over the different communication channels. A toolkit will be developed to optimize data migration paths and to provide quick access to data, with possible prefetching of data to the required place both on user demand and in a semi-automatic way. During development, the existing GRID/LCG software facilities will be used, and it is proposed to develop additional LCG tools acting as low-level sensors for monitoring the data channels and the free space available for data storage.
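
The sketch below gives a minimal reading of the "low-level sensors" idea in Python: one sensor estimates the effective rate of a channel by timing a probe transfer, the other reports the free space available for data keeping. The probe URL and storage path are hypothetical, and a real implementation would use the GRID transfer tools rather than plain HTTP.

    # Minimal sketch of the two kinds of low-level sensors mentioned above,
    # assuming a small probe file is published at each remote site (URL is
    # hypothetical); the real toolkit would presumably rely on GridFTP/LCG
    # transfer tools rather than HTTP.
    import shutil
    import time
    import urllib.request

    def channel_rate_mb_s(probe_url, timeout=30):
        """Estimate effective channel throughput by timing a probe download."""
        start = time.time()
        data = urllib.request.urlopen(probe_url, timeout=timeout).read()
        elapsed = max(time.time() - start, 1e-6)
        return len(data) / (1024 * 1024) / elapsed

    def free_space_gb(path):
        """Report free space on a local storage area used for data keeping."""
        return shutil.disk_usage(path).free / 1024 ** 3

    # an accounting system would record such measurements periodically, e.g.:
    # channel_rate_mb_s("http://tier1.example.org/probe/10MB.bin")
    # free_space_gb("/data")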

T3. Optimization of data access and transfer in LCG
MILESTONES:
1) development of algorithms for monitoring (communication channels and free space), followed by algorithms for optimizing data migration paths and the use of the available space for data storage, Feb. – July 2004;
2) development of the monitoring software, Aug. – Dec. 2004;
3) development of software for handling data migration, Jan. – Dec.

T3. Optimization of data access and transfer in LCG
RESULTS EXPECTED: The main results of this task are expected to be:
1) optimization of the use of the communication channels between the Russian LCG segment and the Tier1 centers at CERN and FZK;
2) optimization of the use of the available space for data storage;
3) reduction of data access delays inside the Russian Tier2.

T5. The use of the Windows platform for LCG tasks
Objectives: Study of VMware as software for emulating multi-computer clusters for LCG purposes. Study of the performance degradation of such clusters. Study of the ability of VMware to carry out distributed computations in batch mode. Porting some of the LCG software to the MS Windows .NET platform, including the development of a set of ActiveX components. Providing the basic set of operations such as queuing of jobs, tracing the job execution process, issuing requests to the GRID information systems and responding to external information requests, and encapsulating terminal access to the different GRID components under MS Windows.
Methodology: Installation of VMware with different operating systems. Installation of the software required for distributed computations on computing clusters under VMware. Software development for Windows users, taking into account the peculiarities of the software for GRID systems.

T5.1. Use of VMware for construction of VOs on different platforms and OS versions
Objectives: Study of the possibility of using VMware, in the context of software testing and development, to emulate computer clusters for LCG purposes. Study of the ability of VMware to provide cross-platform distributed computations. Study of the performance decrease of computing clusters running under VMware.
Methodology: Installation of VMware with different operating systems. Installation of the software required for distributed computations on computing clusters under VMware.
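
One simple way to quantify the performance decrease would be to time an identical CPU-bound workload natively and inside a VMware guest. The Python sketch below is such a hypothetical micro-benchmark, not a measurement procedure defined by the project.

    # Illustrative sketch for the performance-decrease study: run the same
    # CPU-bound kernel on a native worker node and inside a VMware guest,
    # then compare wall-clock times. The kernel stands in for a real job.
    import time

    def cpu_kernel(n=2_000_000):
        """Simple CPU-bound loop used as a synthetic workload."""
        s = 0.0
        for i in range(1, n):
            s += 1.0 / (i * i)
        return s

    def timed_run():
        start = time.time()
        cpu_kernel()
        return time.time() - start

    if __name__ == "__main__":
        # run once natively and once inside the guest;
        # slowdown = t_guest / t_native
        print("elapsed: %.3f s" % timed_run())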

T5.1. Use of VMware for construction of VOs on different platforms and OS versions
MILESTONES:
1) installation of the required software, Feb. – March 2004;
2) development and testing of software to emulate multi-computer clusters, Apr. – Sept. 2004;
3) use and analysis of the efficiency of clusters under VMware for distributed computations, Sept. – Oct. 2005.

T5.1. Use of VMware for construction of VOs on different platforms and OS versions
RESULTS EXPECTED: Building Linux computing clusters on top of Windows computers (under VMware) would be of great benefit for the Russian Tier2, providing additional capacity for distributed computations in batch mode.

T5.2. Development of interfaces from the MS Windows platform to the LCG middleware
Objectives: Develop interfaces from the MS Windows platform to the LCG middleware, including the development of a set of ActiveX components. The basic set of operations should be provided, such as queuing of jobs, tracing the job execution process, issuing requests to the GRID information systems and responding to external information requests, encapsulation of terminal access to the different GRID components under MS Windows, etc. Study of Microsoft .NET Web Services as a tool for implementing the Open Grid Services Infrastructure.
Methodology: Software development for Windows users, taking into account the peculiarities of the software for GRID systems.
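
For orientation, the sketch below shows in Python (rather than ActiveX/.NET) what the "basic set of operations" might look like if the Windows client simply forwarded commands to an LCG User Interface node. The host name, the JDL path and the use of ssh are assumptions, and the edg-job-* commands are the LCG-2 era tools rather than anything specified by this task.

    # Minimal Python sketch of the basic operations such an interface must
    # expose (queue a job, trace its status). It assumes the Windows client
    # reaches an LCG User Interface node over ssh and that the LCG-2 era
    # commands edg-job-submit / edg-job-status are available there; host name
    # and JDL path are hypothetical. The project itself foresees ActiveX/.NET
    # components rather than a script.
    import subprocess

    UI_HOST = "ui.example.org"  # hypothetical LCG User Interface node

    def run_on_ui(command):
        """Execute a command on the UI node and return its standard output."""
        result = subprocess.run(["ssh", UI_HOST, command],
                                capture_output=True, text=True, check=True)
        return result.stdout

    def submit_job(jdl_path):
        """Queue a job described by a JDL file already present on the UI node."""
        return run_on_ui("edg-job-submit " + jdl_path)

    def job_status(job_id):
        """Trace the execution state of a previously submitted job."""
        return run_on_ui("edg-job-status " + job_id)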

Russian LCG Portal (

Monitoring Facilities

JINR Resources Now: CPU (kSPI95), Disk Space (TB), Tape (TB)