E-Infrastructure Hierarchy, Networking and Computational Facilities in Armenia: ASNET-AM Network, Armenian National Grid Initiative, Armenian ATLAS Site (AM-04-YERPHI)

Presentation transcript:

Outline: e-infrastructure hierarchy; networking and computational facilities in Armenia (ASNET-AM network, Armenian National Grid Initiative); Armenian ATLAS site AM-04-YERPHI (history/background, site information, monitoring and job statistics); conclusion and issues.

E-Infrastructure hierarchy: the network layer provides fast interconnection and advanced services; the grid layer provides a distributed environment for sharing computing power, storage, instruments and databases through middleware; on top of these run the experiment VOs (ATLAS, ALICE) and their sites, such as AM-04-YERPHI, INFN-BOLOGNA-T3, INFN-GENOVA and UTD-HEP.

Outline: e-infrastructure hierarchy; networking and computational facilities in Armenia (ASNET-AM network, Armenian National Grid Initiative); Armenian ATLAS site AM-04-YERPHI (history/background, site information, monitoring and job statistics); conclusion and issues.

ASNET-AM Network. Developed and operated since 1994 by the Institute for Informatics and Automation Problems (IIAP NAS RA), ASNET-AM serves as the foundation for advanced computing applications in Armenia. It links academic, scientific, research and educational organizations, providing advanced network services to 60 organizations in the major cities of Armenia (Yerevan, Ashtarak, Byurakan, Abovian, Gyumri). External connectivity is provided by GEANT and by channels leased from local telecom companies (Arminco, ADC).

ASNET AM Topology

Armenian National Grid Initiative. The Agreement on the Establishment of the Armenian Grid Joint Research Unit was signed in September 2007. Main goals:
- to establish an Armenian presence in international Grid infrastructures;
- to provide operations and security services;
- to promote the uptake of Grid technologies in Armenia, the interconnection of existing and future resources, and the deployment of new applications;
- to support research in Grid and global computing.

Armenian National Grid Initiative: computational resources topology

Organization                  | Cores
IIAP NAS RA                   | 176
Yerevan State University      | 176
State Engineering University  | 48
IRPHE NAS RA                  | 48
AANL                          | 48
Total                         | 496

Armenian National Grid Initiative: core services and access point

Core services: wms.grid.am, lfc.grid.am, bdii.grid.am, voms.grid.am
Site CE / SE pairs:
- ce.iiap-cluster.grid.am / se.iiap-cluster.grid.am
- ce.irphe-cluster.grid.am / se.irphe-cluster.grid.am
- ce.yerphi-cluster.grid.am / se.yerphi-cluster.grid.am
- ce.ysu-cluster.grid.am / se.ysu-cluster.grid.am
- ce.ysu-cluster2.grid.am / se.ysu-cluster2.grid.am
- ce.seua-cluster.grid.am / se.ysu2-cluster.grid.am
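These services follow the usual gLite layout (WMS, LFC, top-level BDII, VOMS, plus per-site CE/SE pairs). As an illustration only, below is a minimal sketch of querying the information system for the published computing elements; it assumes bdii.grid.am serves GLUE 1.3 data on the standard BDII port 2170 under the base "o=grid" and that the ldap3 Python package is installed, neither of which is stated on the slide.

```python
# Minimal sketch, not the site's actual tooling (assumptions: bdii.grid.am,
# port 2170, base "o=grid", GLUE 1.3 schema, ldap3 package available).
from ldap3 import Server, Connection, ALL

server = Server("ldap://bdii.grid.am:2170", get_info=ALL)
conn = Connection(server, auto_bind=True)  # BDIIs normally allow anonymous binds

# List the computing elements currently published by the information system.
conn.search(
    search_base="o=grid",
    search_filter="(objectClass=GlueCE)",
    attributes=["GlueCEUniqueID", "GlueCEStateStatus", "GlueCEInfoTotalCPUs"],
)
for entry in conn.entries:
    print(entry.GlueCEUniqueID, entry.GlueCEStateStatus, entry.GlueCEInfoTotalCPUs)
```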

Outline: e-infrastructure hierarchy; networking and computational facilities in Armenia (ASNET-AM network, Armenian National Grid Initiative); Armenian ATLAS site AM-04-YERPHI (history/background, site information, monitoring and job statistics); conclusion and issues.

WLCG ATLAS site deployment at AANL: history (background)

2007 – the AANL site was certified as a WLCG "production site"; because of the low quality of the network connection (small bandwidth and frequent outages) the site was put in suspended mode.
2008 – development of the national Grid infrastructure ArmGrid began; the ArmGrid project is funded by the Armenian government and international funding organizations (ISTC, FP7).
2009 – the "Black Sea Interconnection" was activated, linking the academic and research networks of the South Caucasus countries (Armenia, Georgia and Azerbaijan) to the European GEANT-2 network; this opened up new possibilities for ATLAS collaborators at AANL.
2010 – first ATLAS-South Caucasus Software/Computing Workshop & Tutorial; it helped establish contacts between ATLAS collaborators and computing specialists in the South Caucasus countries and led to a better understanding of ADC requirements and configuration principles.
2011 – September: ATLAS Computing visit to AANL; discussions between ADC and AANL representatives were very useful for making progress on establishing AM-04-YERPHI as an ATLAS grid centre. 20 October: the site's status as an ATLAS Grid site was approved by the ICB.

Site information
o Computational resources
  o Model: Dell PE1950 III, additional quad-core Xeon
  o CPU: 6 nodes x 2 CPUs per node x 4 cores per CPU = 48 cores
  o HDD: 160 GB
  o RAM: 8 GB
o For local analysis
  o CPU: 6 nodes x 2 CPUs per node x 2 cores per CPU = 24 cores
o Storage capacity: 50 TB
o Site core services
  o MAUI/Torque PBS
  o SRM v1, v2
o Supported VOs: ATLAS, ALICE, ArmGrid
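The batch layer named above is Torque with the Maui scheduler. As a sketch only, the following local health check shells out to the standard Torque client tools; it assumes qstat and pbsnodes are in PATH on the node where it runs, which is an assumption about the local setup rather than anything stated on the slide.

```python
# Minimal sketch of a local Torque/Maui health check (assumption: standard
# Torque client tools qstat and pbsnodes are installed and in PATH).
import subprocess

def run(cmd: list[str]) -> None:
    """Run a command and print its standard output."""
    print(f"$ {' '.join(cmd)}")
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)

if __name__ == "__main__":
    run(["qstat", "-Q"])     # queue summary: enabled/started state, running/queued counts
    run(["pbsnodes", "-l"])  # list worker nodes currently marked down or offline
```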

Site information
o Name: AM-04-YERPHI
o Functionality: grid analysis (brokeroff), low-priority production and local analysis; Tier-3gs
o Cloud association: NL cloud
o Regional support: JINR
o VOMS group: atlas/am
o Technical support: 2 sysadmins (shared, 0.3 FTE)

Site information: ATLAS VO support
o DPM, 10 TB (NFS)
  o ATLASSCRATCHDISK: 2 TB
  o ATLASLOCALGROUPDISK: 7.00 TB
  o ATLASPRODDISK: … GB
o Frontier/Squid cluster
o xrootd cluster

Monitoring and job statistics: running jobs (plot). Annotated periods: commissioning; upgrade to SL6 and EMI-2; network and hardware component replacement work.

Monitoring and job statistics: job failures by category and exit code (plot). Observations: Maui and queue configuration should be optimized; communication problems; software application and CVMFS (communication-related) problems.
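The plot groups failed jobs by category and exit code. As an illustration only, here is a minimal sketch of such a breakdown over a hypothetical CSV export of finished jobs with "status", "category" and "exit_code" columns; this is not the actual PanDA/dashboard data format.

```python
# Minimal sketch of a failure breakdown by (category, exit code), assuming a
# hypothetical CSV export of finished jobs -- not the real monitoring schema.
import csv
from collections import Counter

def failure_breakdown(path: str) -> Counter:
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["status"] == "failed":
                counts[(row["category"], row["exit_code"])] += 1
    return counts

if __name__ == "__main__":
    for (category, code), n in failure_breakdown("jobs.csv").most_common():
        print(f"{category:20s} exit={code:>6s} count={n}")
```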

Monitoring and job statistics: efficiency and wall-clock consumption (plots). Good efficiency of testing and MC production jobs.
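The efficiency figure quoted here is conventionally total CPU time divided by total wall-clock time over a set of jobs. A minimal worked sketch with hypothetical numbers (the job list and timings are invented for illustration, not taken from the site's monitoring):

```python
# Minimal sketch: CPU/wall-clock efficiency over a set of jobs
# (hypothetical timings in seconds, for illustration only).
jobs = [
    {"cpu_s": 3500.0, "wall_s": 3600.0},   # MC production job, CPU-bound
    {"cpu_s": 2900.0, "wall_s": 3600.0},   # analysis job
    {"cpu_s": 300.0,  "wall_s": 1800.0},   # job mostly waiting on I/O
]

cpu_total = sum(j["cpu_s"] for j in jobs)
wall_total = sum(j["wall_s"] for j in jobs)
print(f"CPU/wall-clock efficiency: {cpu_total / wall_total:.1%}")
```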

Monitoring and job statistics: data transfers. SRM errors and transfer failures for large files, which succeeded only after a large number of attempts; transfers of 1 GB files finished successfully, while problems remained with larger files.
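Since large transfers succeeded only after repeated attempts, the pattern at work is retry with backoff. Below is a minimal sketch of that pattern; transfer_file() is a hypothetical placeholder, not the actual DDM/FTS machinery used for ATLAS transfers.

```python
# Minimal retry-with-backoff sketch around a transfer call.
# transfer_file() is a hypothetical placeholder for the real copy mechanism.
import time

def transfer_file(src: str, dst: str) -> None:
    """Placeholder: raise on failure, return on success."""
    raise NotImplementedError

def copy_with_retries(src: str, dst: str, attempts: int = 5, backoff_s: float = 30.0) -> bool:
    for attempt in range(1, attempts + 1):
        try:
            transfer_file(src, dst)
            return True
        except Exception as exc:  # e.g. an SRM timeout on a large file
            print(f"attempt {attempt}/{attempts} failed: {exc}")
            time.sleep(backoff_s * attempt)  # back off a little longer each time
    return False
```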

Outline: e-infrastructure hierarchy; networking and computational facilities in Armenia (ASNET-AM network, Armenian National Grid Initiative); Armenian ATLAS site AM-04-YERPHI (history/background, site information, monitoring and job statistics); conclusion and issues.

Conclusion and issues. The AM-04-YERPHI site is now operational. As the site administrators gain experience, problems are resolved faster.

Conclusion and issues
- Continuous monitoring of the infrastructure by the system administrators ensures early error detection; diagnostics help to identify problems.
- Many configuration problems were fixed during commissioning and maintenance, but the job scheduling configuration can still be improved.
- Ensuring a reliable network is critical. Issues that still need addressing include reliable connectivity and rapid transport of the data used in the grid environment; related work is focused on strengthening fault tolerance.