Activities and Perspectives at the Armenian Grid Site
The 6th International Conference "Distributed Computing and Grid-technologies in Science and Education"

- E-Infrastructures in Armenia
  - International projects
- AANL grid site
  - Network topology
  - Bandwidth allocation
  - Site overview
  - ATLAS VO support
  - ALICE VO support
  - Monitoring and job statistics
- Network diagnostics with perfSONAR
  - perfSONAR deployment at AANL
- Issues and Perspectives

E-Infrastructures in Armenia
The Armenian e-infrastructure is a complex national IT infrastructure consisting of both communication and distributed computing infrastructures. It is operated by the Institute for Informatics and Automation Problems (IIAP), the leading ICT research and technology development institute of the National Academy of Sciences of the Republic of Armenia (NAS RA).

E-Infrastructure Components
- Types of infrastructure
- Middleware
- Collaboration
- Software tools

E-Infrastructures: Collaboration
- Réseaux IP Européens Network Coordination Centre (RIPE NCC)
- Trans-European Research and Education Networking Association (TERENA)

International Projects in E-Infrastructures
- Integrating Armenia into ERA: Information and Communication Technologies (AM, FR, HU, BEL)
- European Grid Initiative: Integrated Sustainable Pan-European Infrastructure for Researchers in Europe (EGI-InSPIRE)
- GN3+ (multi-gigabit European research and education network and associated services)
- High-Performance Computing Infrastructure for South East Europe's Research Communities (HP-SEE, 14 countries), FP7
- State Target Project: Deployment of Armenian National Grid Infrastructure
- Deploying ARmenian distributed Processing capacities for Environmental GEOspatial data (SNF)
- Experimental Deployment of an Integrated Grid and Cloud Enabled Environment in BSEC Countries on the Base of g-Eclipse (BSEC)

AANL network topology
The LAN includes 10 buildings at YerPhI and the Aragats and Nor-Amberd stations; the computer network consists of over 400 workstations.

AANL GRID bandwidth allocation
IP group "Grid" members: the GRID cluster, Grid-oriented servers and services, and ALICE/ATLAS collaboration workstations.
Policies:
- access to ASNET and the Educational Network: 1000 Mbps
- access outside Armenia: 6 Mbps, no session limits
- other capacities quoted on the slide: 36 Mbps, 45 Mbps, 100 Mbps
External connectivity is provided by GEANT and by channels rented from local telecom companies (Arminco, ADC).
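As a rough illustration of what these rates mean in practice, the short Python sketch below estimates how long a 10 GB file takes at each rate quoted on this slide. It assumes 1 GB = 8e9 bits and zero protocol overhead, which are simplifications, not measured values.

```python
# Rough transfer-time estimate over the link rates quoted on this slide.
# Assumes 1 GB = 8e9 bits and no protocol overhead (simplifications).
def transfer_time_hours(file_gb: float, link_mbps: float) -> float:
    file_bits = file_gb * 8e9
    return file_bits / (link_mbps * 1e6) / 3600

if __name__ == "__main__":
    for rate_mbps in (6, 36, 45, 100, 1000):   # values taken from the slide
        hours = transfer_time_hours(10, rate_mbps)
        print(f"10 GB at {rate_mbps:4d} Mbps ~ {hours:6.2f} h")
```

At 6 Mbps the ideal time is already close to four hours, which is consistent with the large-file transfer problems reported later in the talk.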

Site overview: services

Site overview: Gstat monitoring

LCG: Armenia T3 contribution
Computational resources:
- CPU: 6 nodes × 2 CPUs per node × 4 cores per CPU = 48 cores
- HDD: 160 GB; RAM: 8 GB
- Storage capacity: 50 TB
Supported VOs: ATLAS, ALICE
In 2007 the AM-04-YERPHI site was certified as a "production site" of the WLCG.

ATLAS VO Support
October 20th, 2011: the site's status as an ATLAS Grid site was approved by the ICB.
Allocated storage: 10 TB (NFS)
 o ATLASSCRATCHDISK: 2 TB
 o ATLASLOCALGROUPDISK: 7.00 TB
 o ATLASPRODDISK: G
Clusters:
 o Frontier/Squid cluster
 o xrootd cluster
 o perfSONAR node
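A minimal sketch, assuming the space tokens map to NFS-backed directories, of how such allocations could be checked against actual usage. The mount points below are hypothetical placeholders, not the real AM-04-YERPHI paths, and ATLASPRODDISK is omitted because its size is not given on the slide.

```python
#!/usr/bin/env python3
# Sketch: compare used space under (assumed) NFS-backed ATLAS space-token
# directories with the allocations quoted on the slide. Paths are placeholders.
import shutil

ALLOCATIONS_TB = {
    "/storage/atlasscratchdisk": 2.0,      # from the slide
    "/storage/atlaslocalgroupdisk": 7.0,   # from the slide
}

for path, alloc_tb in ALLOCATIONS_TB.items():
    usage = shutil.disk_usage(path)        # total / used / free in bytes
    used_tb = usage.used / 1e12
    print(f"{path}: {used_tb:.2f} TB used of {alloc_tb:.1f} TB allocated")
```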

ATLAS VO Support: monitoring and job statistics; security hole issue.

ATLAS VO Support: job failures by category and exit code
- Maui scheduler job problems: optimization needed
- Pilot errors: probably communication and ACL problems
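For illustration only, a tiny Python sketch of the kind of aggregation behind a "failures by category and exit code" breakdown. The job records and codes are made up, not data from the site's dashboard.

```python
# Illustrative only: tally job failures by (category, exit code).
from collections import Counter

jobs = [
    {"category": "pilot", "exit_code": 1},      # placeholder records
    {"category": "scheduler", "exit_code": 2},
    {"category": "pilot", "exit_code": 1},
]

failures = Counter((j["category"], j["exit_code"]) for j in jobs)
for (category, code), count in failures.most_common():
    print(f"{category:10s} exit {code}: {count} job(s)")
```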

ATLAS VO Support: wall clock efficiency
Good efficiency for testing and MC production jobs.
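For reference, one common way monitoring dashboards define this metric (an assumption here; the slide does not give the formula) is the share of wall-clock time consumed by jobs that completed successfully:

\[
\text{Wall clock efficiency} = \frac{\sum_{j \in \text{successful jobs}} t^{\text{wall}}_{j}}{\sum_{j \in \text{all jobs}} t^{\text{wall}}_{j}}
\]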

ALICE VO support: monitoring and job statistics

ALICE VO support: monitoring and job statistics (continued)

Network diagnostics with perfSONAR
perfSONAR stands for PERformance focused Service Oriented Network monitoring ARchitecture.
perfSONAR is designed to be:
- Decentralised and scalable: a large number of networks and services, a large volume of data; each domain can set its own security policy.
- Secure: it will not put participating networks at risk of attack or congest them.
Main purposes:
- aiding network problem diagnosis and speeding up repairs
- identifying structural bottlenecks in need of remediation
- providing standard measurements of various network-performance-related metrics over time, as well as "on-demand" tests.
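As a stand-in for the kind of "on-demand" latency and packet-loss check perfSONAR schedules and archives, a minimal probe might look like the sketch below. This is not the perfSONAR toolkit itself, and the target host is a placeholder, not a real AANL measurement node.

```python
#!/usr/bin/env python3
# Minimal stand-in for an on-demand loss/latency probe (not perfSONAR itself).
import re
import subprocess

def ping_stats(host: str, count: int = 10):
    """Return (packet_loss_percent, avg_rtt_ms) parsed from ping output."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
    rtt = re.search(r"= [\d.]+/([\d.]+)/", out)   # min/avg/max line
    return (float(loss.group(1)) if loss else None,
            float(rtt.group(1)) if rtt else None)

if __name__ == "__main__":
    loss, avg_rtt = ping_stats("perfsonar.example.org")   # placeholder host
    print(f"packet loss: {loss}%  avg RTT: {avg_rtt} ms")
```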

perfSONAR deployment at AANL
Current network problems: issues with transfers of large files (~10 GB), connection timeouts, and connectivity problems (packet loss).
perfSONAR diagnostics will be used to:
- better inform decisions in workflow and data management: tune the timeout value, the number of streams, etc.
- optimize data distribution (DDM) based on a better understanding of the network between sites.
Current status: installed and configured; waiting for a DNS alias for the perfSONAR node.
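A sketch of the kind of tuning decision such measurements could inform. This is an illustrative heuristic, not FTS's actual algorithm; the safety factor, per-stream rate, and input values are placeholders, not AANL measurements.

```python
# Illustrative heuristic only (not the FTS algorithm): derive a transfer
# timeout and a TCP-stream count from perfSONAR-style throughput measurements.
def suggest_transfer_settings(file_gb: float,
                              achievable_mbps: float,
                              safety_factor: float = 3.0,
                              per_stream_mbps: float = 20.0):
    """Return (timeout_seconds, number_of_streams)."""
    ideal_s = file_gb * 8e3 / achievable_mbps        # GB -> Mbit, then / Mbps
    timeout_s = int(ideal_s * safety_factor) + 60    # headroom plus fixed slack
    streams = max(1, min(16, round(achievable_mbps / per_stream_mbps)))
    return timeout_s, streams

print(suggest_transfer_settings(file_gb=10, achievable_mbps=6))
```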

Issues and Perspectives
We are facing problems:
- each problem is particularly tricky and time-consuming
- better communication is needed with VO supporters and with service experts
A reliable network is critical:
- it is necessary for stable operation of the Grid and other network-intensive activities
- perfSONAR deployment could help to monitor network conditions in a full-mesh topology (i.e. between any combination of two sites) and to monitor network performance
- help from CERN experts could make it easier to solve end-to-end problems
- perfSONAR diagnostics will be used to better optimize FTS transfers: the timeout value and the number of streams still need to be revised
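To show why full-mesh monitoring is demanding, a small sketch of how the number of site pairs to measure grows with the number of sites. All site names except AM-04-YERPHI are placeholders, not a real mesh configuration.

```python
# Sketch: a full mesh of n sites needs n*(n-1)/2 pairwise measurements.
from itertools import combinations

sites = ["AM-04-YERPHI", "SITE-B", "SITE-C", "SITE-D"]   # placeholder mesh
pairs = list(combinations(sites, 2))
print(f"{len(sites)} sites -> {len(pairs)} measurement pairs")
for a, b in pairs:
    print(f"  {a} <-> {b}")
```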

AM-04-YERPHI