Monitoring in DIRAC environment for the BES-III experiment
Presented by Igor Pelevanyuk
Authors: Sergey BELOV, Igor PELEVANYUK, Alexander UZHINSKIY, Alexey ZHEMCHUGOV
JINR, Dubna, 30 January 2015

What is BES-III? The BES-III experiment in Beijing is a world-leading facility for testing the Standard Model and QCD with high precision in the tau-charm domain.
The current maximum data rate is about 40 MB/s.
Total amount of data: ~400 TB
Around jobs executed since

What is DIRAC? DIRAC provides all the necessary components to build ad hoc distributed computing infrastructures interconnecting resources of different types, allowing interoperability and simplifying interfaces.
Languages: Python, JavaScript
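To give a flavour of the interfaces DIRAC provides, here is a minimal sketch of submitting a single job through the DIRAC Python API. It is an illustration, not code from the slides: the Job and Dirac classes and the submitJob call follow the public DIRAC API, while the job name, executable and printout are assumptions.

# Minimal job submission sketch using the public DIRAC Python API.
# The job contents are illustrative; treat exact method names as assumptions.
from DIRAC.Core.Base import Script
Script.parseCommandLine()  # initialize the DIRAC configuration

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("besiii-hello")                                   # hypothetical job name
job.setExecutable("/bin/echo", arguments="hello from DIRAC")

dirac = Dirac()
result = dirac.submitJob(job)
if result["OK"]:
    print("Submitted job with ID %s" % result["Value"])
else:
    print("Submission failed: %s" % result["Message"])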

It supports:
❖ gLite: EGI, GISELA, etc.
❖ VDT: OSG
❖ ARC: NDGF sites, RAL, …
❖ Clouds: OpenStack, OpenNebula, CloudStack, EC2, OCCI
❖ Volunteer resources: BOINC, European Desktop Grid Initiative (EDGI)
Support for other resource types can be requested by users.

DIRAC architecture “bricks”: Service, Client, Agent, DB, Web app (Handler + View.js), Script

DIRAC architecture “bricks” and the role of each: Service (listener), Client (connector), Agent (periodically running code), DB (data access object), Web app with Handler (controller) and View.js (view), Script
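As an illustration of these bricks, a minimal sketch of a service handler (listener) and an agent (periodically running code) written in the DIRAC style follows. The class names are hypothetical; the base classes and the types_/export_ and initialize/execute conventions are those of the public DIRAC framework, but the exact import paths and type declarations should be treated as assumptions.

# Sketch of a Service brick and an Agent brick; class names are hypothetical.
from DIRAC import S_OK, S_ERROR
from DIRAC.Core.DISET.RequestHandler import RequestHandler
from DIRAC.Core.Base.AgentModule import AgentModule


class MonitoringTestHandler(RequestHandler):
    """Service brick: a listener answering requests from Client bricks (connectors)."""

    types_getResourceStatus = [str]          # DIRAC-style argument type checking

    def export_getResourceStatus(self, resourceName):
        # In the real system this would go through the DB brick (data access object).
        if not resourceName:
            return S_ERROR("Empty resource name")
        return S_OK({"Resource": resourceName, "Status": "OK"})


class TestRunnerAgent(AgentModule):
    """Agent brick: periodically running code."""

    def initialize(self):
        self.log.info("TestRunnerAgent initialized")
        return S_OK()

    def execute(self):
        # Called once per agent cycle; this is where test jobs would be
        # submitted and their results collected.
        self.log.info("Running one monitoring cycle")
        return S_OK()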

The DIRAC weaknesses
● It is easy to install, but configuration can be complex
● The default Data Management system is not good enough to cover the BES-III requirements
● The capabilities for monitoring remote sites and grid functionality are scarce

Monitoring system
Sources of information:
● Periodic functional tests and their logs
● Workload Management System database
Both have strengths and weaknesses.
Things to show:
● Status/Availability
● Functional tests
● “General health”
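As a hedged illustration of how the Workload Management System database can feed a “general health” figure, the short sketch below turns per-site job counts into a success fraction. The input numbers and the helper name are hypothetical; in the real system the counts would come from the WMS database.

# Hypothetical sketch: derive a per-site "general health" value from WMS job counts.
def site_health(job_counts):
    """Return the fraction of successfully finished jobs, or None if no jobs ran."""
    done = job_counts.get("Done", 0)
    failed = job_counts.get("Failed", 0)
    total = done + failed
    if total == 0:
        return None
    return float(done) / total

counts = {"Done": 950, "Failed": 50}      # hypothetical numbers from the WMS DB
print("Site health: %.0f%%" % (100 * site_health(counts)))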

Web - ExtJS

Features
The monitoring system is able to:
● Detect new resources and test them
● Stop testing deleted resources
● Show information in a comprehensive form on the web page
● Be visible only to authorized users (admins)
● Be extended and improved
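The first two features amount to keeping the set of monitored resources in step with the set of configured resources. A small hypothetical sketch of that discovery step is given below; the function and resource names are illustrative, not taken from the BES-III installation.

# Hypothetical sketch of the resource discovery step: start testing new
# resources and stop testing the ones removed from the configuration.
def refresh_monitored_resources(configured, monitored):
    """Both arguments are sets of resource names; returns (to_add, to_remove)."""
    to_add = configured - monitored
    to_remove = monitored - configured
    return to_add, to_remove

configured = {"CLUSTER.JINR.ru", "GRID.IHEP.cn", "CLOUD.CNIC.cn"}   # illustrative
monitored = {"CLUSTER.JINR.ru", "GRID.WHU.cn"}                      # illustrative

to_add, to_remove = refresh_monitored_resources(configured, monitored)
print("Start testing: %s" % sorted(to_add))
print("Stop testing:  %s" % sorted(to_remove))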

Conclusions
● The core of the monitoring system was developed from scratch for the BES-III DIRAC installation
● It is not difficult to add new functionality to DIRAC itself
● We are still receiving new requirements from BES-III

Thank you for your attention

Monitoring System for BES-III (architecture diagram). Components: Service (Handler), Client, Agent, DB, Web apps (View.js) and a Test Job, split between the client side and the server side. Actions shown: Submit test job, Check test job, Delete test job, Set result of test, and Update/Select/Delete on the DB.
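The sketch below follows the test-job lifecycle pictured on this slide: submit a test job to a resource, check it later, record the result of the test and finally delete the job. The Dirac API calls (submitJob, getJobStatus, deleteJob) are taken from the public DIRAC interfaces but should be treated as assumptions, and the destination site and the store_test_result() helper are hypothetical.

# Hedged sketch of the test-job lifecycle: submit, check, set result, delete.
from DIRAC.Core.Base import Script
Script.parseCommandLine()

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job


def store_test_result(resource, result):
    # Hypothetical stand-in for the "Set result of test" update of the DB.
    print("Resource %s: %s" % (resource, result))


dirac = Dirac()
site = "GRID.JINR.ru"                      # hypothetical resource under test

# 1. Submit a test job to the resource under test.
job = Job()
job.setName("monitoring-test")
job.setExecutable("/bin/hostname")
job.setDestination(site)
submitted = dirac.submitJob(job)

if not submitted["OK"]:
    store_test_result(site, "submission failed: %s" % submitted["Message"])
else:
    job_id = submitted["Value"]
    # 2. Check the test job (in the real agent this happens on a later cycle).
    status = dirac.getJobStatus(job_id)
    if status["OK"]:
        store_test_result(site, status["Value"])   # maps the job ID to its status
    # 3. Delete the test job once its result has been recorded.
    dirac.deleteJob(job_id)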

How does monitoring work?