St. Petersburg State University Computing Centre and the first results in the DC for ALICE. V. Bychkov, G. Feofilov, Yu. Galyuck, A. Zarochensev, V. I. Zolotarev.


St. Petersburg State University Computing Centre and the first results in the DC for ALICE
V. Bychkov, G. Feofilov, Yu. Galyuck, A. Zarochensev, V. I. Zolotarev
St. Petersburg State University, Russia

Contents:
- SPbSU computing and communication centre in Petrodvorets (structure, capabilities, communications, general activity)
- The progress obtained in summer 2004 in St. Petersburg in the DC for ALICE
- Future plans

ALICE week, off-line day

SPbSU Informational-Computing Centre: history
For historical reasons, Saint Petersburg State University consists of two geographically separated parts. One is located in the central part of St. Petersburg, the other in Petrodvorets, about 40 km away. For this reason, and because many other educational centres of St. Petersburg are located in the city centre, an optical channel from Petrodvorets to the central part of St. Petersburg was created (during …).

SPbSU Informational-Computing Centre: external network channels

SPbSU Informational-Computing Centre, statistics by year:
- Computers (total), of them servers
- Local networks (total), of them virtual
- Backbone (magistral) networks, of them optical
- Cisco network equipment, of them routers and switches

Dynamics of performance of SPbSU computational center (MFlops)

SPbSU Informational-Computing Centre: network structure (2003)

SPbSU Informational-Computing Centre: cluster photos

SPbSU Informational-Computing Centre: software evolution
- 1999 – OS FreeBSD
- OpenPBS as the user job scheduling system (a submission sketch follows this list)
- OS RedHat 6.2; quantum-chemistry calculation packages: CRYSTAL 95 and GAMESS
- Design and development of the Portal of High Performance Computing (WEBWS; by our local legend it is called so from the words "web work space")
- 2002 – first cluster for studying Grid technologies and Grid applications
- 2003 – participation in the AliEn project (site …); participation in the Data Challenge
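As an illustration of the OpenPBS job handling mentioned in the list above, the Python sketch below wraps a command in a job script and submits it with qsub. It is a minimal sketch under assumed settings: the queue name, resource request and example job id are placeholders, not the centre's actual configuration.

# Minimal sketch of submitting a batch job to OpenPBS; the queue name and
# resource request are assumptions, not the actual SPbSU configuration.
import subprocess
import tempfile

def submit_pbs_job(command, queue="workq", nodes=1, walltime="01:00:00"):
    """Write a PBS job script to a temporary file and submit it with qsub."""
    script = "\n".join([
        "#!/bin/sh",
        f"#PBS -q {queue}",
        f"#PBS -l nodes={nodes},walltime={walltime}",
        "cd $PBS_O_WORKDIR",
        command,
        "",
    ])
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script)
        path = f.name
    # qsub prints the new job identifier (e.g. "123.server.example") on stdout
    out = subprocess.run(["qsub", path], capture_output=True, text=True, check=True)
    return out.stdout.strip()

if __name__ == "__main__":
    print(submit_pbs_job("echo running on $(hostname)"))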

SPbSU Informational-Computing Centre: software evolution 2003-…
2003 – collaboration with IBM started. Owing to this collaboration we changed many parts of the informational-computing centre:
- a new network-monitoring system
- a new storage system with a SAN (storage area network)
- new portal development technologies: portlets and WebSphere

SPbSU Informational-Computing Centre: network monitoring status
Components: Tivoli NetView, Tivoli Enterprise Console, Tivoli Data Warehouse, Tivoli Decision Support, DB2.
Outputs: reports and statistics; structure and monitoring visualization; events.

SPbSU Informational-Computing Centre: storage system status
- HACMP (High Availability Cluster Multi-Processing)
- Monitoring and management of the network: Tivoli SAN Manager
- Management of the storage elements: IBM TotalStorage Manager, IBM TotalStorage Specialist, Brocade Advanced Web Tools
- Archiving, backup and restore system: TSM (Tivoli Storage Manager)
- RDBMS: DB2 UDB (8.1)
- Content management system: CM 8.1

SPbSU Informational-Computing Centre: storage photos

Portal of High Performance Computing (WEBWS)

SPbSU Informational-Computing Centre: Portal of High Performance Computing (WEBWS)
WEBWS consists of three main parts:
1. Informational part – monitoring of the computational resources, based on Ganglia (an open-source product), and monitoring of user task queues (a Ganglia query sketch follows this list)
2. Work space – the users' work space for developing and launching tasks
3. Administrative part
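To make the Ganglia-based informational part more concrete, here is a minimal Python sketch of the kind of query such a monitoring page can perform: it reads the XML that a gmond daemon publishes on its default TCP port 8649 and extracts one metric per host. The host name, port and metric name are illustrative assumptions, not a description of the actual WEBWS code.

# Minimal sketch: poll a Ganglia gmond daemon for its cluster-state XML and
# report one metric per host. Host, port and metric name are assumptions.
import socket
import xml.etree.ElementTree as ET

def read_gmond_xml(host="localhost", port=8649):
    """Fetch the cluster-state XML document that gmond serves on connect."""
    chunks = []
    with socket.create_connection((host, port), timeout=5) as sock:
        while True:
            data = sock.recv(65536)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()

def host_metric(xml_text, metric="load_one"):
    """Return a {host name: metric value} map for every reported host."""
    root = ET.fromstring(xml_text)
    values = {}
    for host in root.iter("HOST"):
        for m in host.iter("METRIC"):
            if m.get("NAME") == metric:
                values[host.get("NAME")] = m.get("VAL")
    return values

if __name__ == "__main__":
    print(host_metric(read_gmond_xml()))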

WEBWS logical structure (diagram): informational part, administrative part, WEBWS work space.

WEBWS logical structure (diagram): Internet, ADM DB, WEBWS DB, clusters.

WEBWS logical structure (diagram): authorization system, interface, WEBWS DB, PBS server, WEBWS server.
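To illustrate the flow implied by this structure, the sketch below shows one plausible way a portal server could tie these pieces together: check authorization, hand the job to the PBS server, and record the resulting job id in the portal database. Everything here is an assumption for illustration; sqlite3 merely stands in for the real WEBWS DB, and the table layout and authorization rule are invented.

# Illustrative sketch only: authorize a user, submit a job script through the
# PBS server, and record the PBS job id in a portal database. sqlite3 stands
# in for the real WEBWS DB; the schema and authorization rule are invented.
import sqlite3
import subprocess

ALLOWED_USERS = {"alice_user", "chem_user"}   # placeholder authorization list

def submit_and_record(user, script_path, db_path="webws_demo.db"):
    if user not in ALLOWED_USERS:
        raise PermissionError(f"user {user!r} is not allowed to submit jobs")
    out = subprocess.run(["qsub", script_path],
                         capture_output=True, text=True, check=True)
    job_id = out.stdout.strip()
    with sqlite3.connect(db_path) as db:
        db.execute("CREATE TABLE IF NOT EXISTS jobs ("
                   "user TEXT, pbs_job_id TEXT, "
                   "submitted TIMESTAMP DEFAULT CURRENT_TIMESTAMP)")
        db.execute("INSERT INTO jobs (user, pbs_job_id) VALUES (?, ?)",
                   (user, job_id))
    return job_id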

WEBWS modules (diagram): user info, session info, WEBWS server info, user projects info, WS module, Crystal module, …

WEBWS monitoring part: Ganglia

SPbSU Informational-Computing Centre: some plans for the future
- Continue the collaboration with IBM
- Continue the development of WEBWS …
- Continue the ALICE Data Challenge
- Parton String Model in parallel mode and physics performance analysis for ALICE
- Participation in MammoGrid

SPbSU in the Data Challenge:
- Globus Toolkit 2.4 was installed, tests started (a smoke-test sketch follows this list)
- July 2003: AliEn was installed (P. Saiz)
- July 2004: start of test jobs on the Grid cluster "alice"
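As a hedged illustration of what such early Grid tests typically look like with Globus Toolkit 2.x, the Python sketch below creates a proxy with grid-proxy-init and runs a trivial command through globus-job-run. The gatekeeper contact string is an assumed placeholder, not a statement about how the SPbSU tests were actually run.

# Sketch of a Globus Toolkit 2.x smoke test: create a proxy certificate and
# run a trivial command on a remote gatekeeper. The contact string below is
# a placeholder assumption.
import subprocess

GATEKEEPER = "ce.example.org"   # assumed GRAM gatekeeper contact string

def globus_smoke_test():
    # grid-proxy-init prompts interactively for the certificate passphrase
    subprocess.run(["grid-proxy-init"], check=True)
    # run /bin/hostname remotely through the GT2 GRAM gatekeeper
    out = subprocess.run(["globus-job-run", GATEKEEPER, "/bin/hostname"],
                         capture_output=True, text=True, check=True)
    print("remote host reports:", out.stdout.strip())

if __name__ == "__main__":
    globus_smoke_test()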

Cluster "alice" (diagram):
- alice.spbu.ru – AliEn services
- alice02.spbu.ru … alice08.spbu.ru – worker nodes
- alice09.spbu.ru – SE, CE (PBS server)

Configuration of the cluster in July 2004:
- alice: 512 MB RAM, PIII 1x733 MHz CPU
- alice09: 256 MB RAM, Celeron 1x1200 MHz CPU
- alice02-08: 512 MB RAM (512 MB swap), PIII 2x600 MHz CPU, 2x4.5 GB SCSI HDD

Configuration of the cluster in September 2004 (upgraded):
- alice: 512 MB RAM, PIII 1x733 MHz CPU
- alice09: 256 MB RAM, 40 GB HDD, Celeron 1x1200 MHz CPU
- alice02-08: 1 GB RAM (4 GB swap), PIII 2x600 MHz CPU (only one CPU is used), 40 GB IDE HDD

Available disk space

Running jobs on the SPbSU CE from … to … (minimum 1 job, maximum 7 jobs)

Started jobs on the SPbSU CE from …

Ganglia monitoring of the alice cluster.

Problems and questions
- The information on started jobs and on running jobs is not correlated. Why?
- There is no correlation with our own scripts for monitoring of started and running jobs either. Why?
- We will study these problems later (a possible cross-check is sketched below).
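One possible local cross-check, sketched below under assumptions (the record file name and the handling of the qstat output layout are placeholders): compare the job ids recorded by our own submission-time script with the ids the PBS server currently reports, and see where the two views diverge.

# Sketch of a local cross-check between two monitoring views: job ids recorded
# at submission time by our own script (file name is an assumption) versus job
# ids that the PBS server currently reports via qstat.
import subprocess

def running_job_ids():
    """Job ids currently known to the PBS server, parsed from plain qstat output."""
    out = subprocess.run(["qstat"], capture_output=True, text=True, check=True)
    lines = out.stdout.splitlines()[2:]      # skip the two header lines
    return {line.split()[0] for line in lines if line.strip()}

def started_job_ids(path="started_jobs.txt"):
    """Job ids written, one per line, by our submission-time monitoring script."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

if __name__ == "__main__":
    started, running = started_job_ids(), running_job_ids()
    print("started but not (or no longer) seen by PBS:", started - running)
    print("seen by PBS but missing from our records:  ", running - started)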

Plans for the ALICE DC
SPbSU is planning to continue its DC-2004 participation with the following resources:
- alice: 512 MB RAM, 40 GB HDD, PIII 1x733 MHz CPU
- alice09: 256 MB RAM, 40 GB HDD, Celeron 1x1200 MHz CPU
- alice02-08: 1 GB RAM (4 GB swap), PIII 2x600 MHz CPU (two CPUs are used), 40 GB IDE HDD