THE GLUE DOMAIN DEPLOYMENT

A. Ciuffoletti (c), L. Merola (a), F. Palmieri (a), S. Pardi (b), G. Russo (a)
(a) University of Napoli Federico II, Napoli, Italy
(b) INFN, Napoli Section, Italy
(c) University of Pisa, Largo B. Pontecorvo, Pisa, Italy
CHEP, March 2009, Prague, Czech Republic

THE SCoPE PROJECT
The S.Co.P.E. Project [1] aims at the implementation of an open, general-purpose Grid infrastructure linking together the departments of the University Federico II, distributed across Naples on a metropolitan scale. Its goals are scientific and technological research and the implementation of innovative concepts in strategic research fields.

[Figure: map of the SCoPE metropolitan network connecting the Campus Grid, Medicine, CSI, Engineering and Astronomical Observatory sites; fiber-optic links are marked as either already connected or work in progress.]

THE GLUE DOMAIN DEPLOYMENT
The middleware layer supporting the domain-based INFN Grid network monitoring activity is powered by GlueDomains [2]. The GlueDomains service is managed by a dedicated central Server node, located in the main Campus Grid site. The Server periodically checks and configures a number of Theodolites, deployed across five sites of the SCoPE infrastructure. Each Theodolite autonomously runs a number of active network performance probes, as shown in the figure (a minimal sketch of such a probe is given at the end of this transcript).

Theodolite activity is performed by the Storage Elements, and the monitoring topology is a full mesh: Round-Trip Time, Packet Loss Rate and One-Way Jitter are measured periodically for each domain-to-domain path. The data produced by the Theodolites are periodically published through the GridICE web interface, which is also used to check Theodolite operation.

NETWORK-AWARE GRID
Network performance can dramatically affect job computation time, especially when processing remote bulk datasets and during data replication. The infrastructure of the main Grid deployments works below the best-effort threshold: the network is considered a pure facility, and middleware components act without taking network parameters into account. An active network-aware approach requires tight integration between several partners:
– the Grid resource management logic,
– the requesting application,
– the network entities that offer the connectivity services.
The goal is to take job scheduling and resource allocation decisions based on network performance.

THE NETWORK-AWARE SERVICES
The measurements provided by GlueDomains are used by the network-aware services to estimate the closeness between nodes. The closeness criterion uses a simple model that derives a network cost from the measurements, following the DIANA approach [3]: the cost is a function of RTT, MaxBand, Loss and Jitter, which are respectively the Round-Trip Time, the MaxBand estimate, the packet loss rate and the jitter observed during the measurement period.

A replica optimization service has been implemented on top of these network cost estimates. A lightweight Web service interface is used to select the closest replica of a dataset used by jobs in a Computing or Storage Element, addressed using their Site URL (the selection logic is sketched at the end of this transcript). The Web service interface uses the NuSOAP library, which provides a set of PHP classes to create and consume Web services based on SOAP, WSDL and HTTP.

The figure at the top shows the results of tests performed with the replica optimization service. To characterize the network cost trend, we observed the cost between two sites for 12 hours: during normal network activity, the results show an average netcost of 6.3 with a standard deviation of 0.8.
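The netcost values quoted here come from the DIANA-style cost model described under THE NETWORK-AWARE SERVICES. The exact expression is given in [3] and survives in this transcript only as the list of its inputs, so the following is a hedged reconstruction, an assumption on our part rather than the authors' formula; a multiplicative form of this kind is at least consistent with the large dynamic range reported (from about 6.3 to about 10^7):

    \[
    \mathrm{netcost} \;=\; \frac{RTT \,(1 + Loss)\,(1 + Jitter)}{MaxBand}
    \]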
Under heavy network workload, by contrast, we measured a significant growth of the netcost, with a maximum on the order of 10^7. Figure A shows the netcost behaviour over time under normal and heavy load, computed between the theodolites of the UNINA-SCOPE-ASTRO and INFN-NAPOLI-VIRGO sites.

The second experiment consists of downloading, to the storage element of the INFN-NAPOLI-VIRGO site, a set of GB-sized files replicated at the UNINA-SCOPE-ASTRO, UNINA-SCOPE-GSC, UNINA-SCOPE-CEINGE and UNINA-SCOPE-CENTRO sites. The files are registered in the SCoPE logical file catalogue. The test was split into two phases:
– replication of 100 files to the INFN-NAPOLI-VIRGO site using the lcg-utils tools and the logical file name;
– replication of 100 files to the INFN-NAPOLI-VIRGO site aided by the network-aware replica optimization service.
Over the 100 files we measured an average transfer time of 154 seconds per file using lcg-utils, versus 145 seconds using the replica optimization service. Figure B shows the effect of the replica optimization service in reducing the data transfer time.
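As an illustration of the theodolite activity described above, the following is a minimal Python sketch of an active probe that estimates RTT, loss rate and jitter toward a peer site by timing TCP handshakes. It is our own sketch, not GlueDomains code, and the peer host and port are placeholders.

    # Illustrative theodolite-style active probe (not GlueDomains code):
    # estimates RTT, loss rate and jitter toward a peer by timing TCP handshakes.
    import socket
    import statistics
    import time

    def probe(host: str, port: int = 80, samples: int = 10, timeout: float = 2.0):
        """Return (avg_rtt_s, loss_rate, jitter_s) toward host:port."""
        rtts, lost = [], 0
        for _ in range(samples):
            start = time.monotonic()
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    rtts.append(time.monotonic() - start)
            except OSError:
                lost += 1
            time.sleep(0.5)  # pace the probes
        loss = lost / samples
        avg_rtt = statistics.mean(rtts) if rtts else float("inf")
        # Jitter approximated as the mean absolute difference of consecutive
        # RTT samples (an IPDV-style estimate, our simplification).
        diffs = [abs(a - b) for a, b in zip(rtts, rtts[1:])]
        jitter = statistics.mean(diffs) if diffs else 0.0
        return avg_rtt, loss, jitter

    if __name__ == "__main__":
        rtt, loss, jitter = probe("storage.example.infn.it")  # hypothetical peer
        print(f"rtt={rtt:.4f}s loss={loss:.0%} jitter={jitter:.4f}s")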
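The closeness computation behind the replica optimization service can be sketched in the same spirit: compute a netcost for each candidate site from the published measurements and pick the minimum. The cost function below mirrors the hedged reconstruction given earlier, and the measurement snapshot is invented for illustration.

    # Hedged sketch of the closeness / replica-selection logic.
    def netcost(rtt, maxband, loss, jitter):
        """DIANA-style network cost (assumed form, not the poster's exact
        formula): grows with RTT, loss and jitter, shrinks with bandwidth."""
        return rtt * (1.0 + loss) * (1.0 + jitter) / maxband

    def closest_replica(measurements: dict) -> str:
        """measurements maps site URL -> (rtt, maxband, loss, jitter),
        as published by the theodolites (e.g. via GridICE)."""
        return min(measurements, key=lambda site: netcost(*measurements[site]))

    replicas = {  # hypothetical measurement snapshot
        "UNINA-SCOPE-ASTRO":  (0.004, 940.0, 0.00, 0.0002),
        "UNINA-SCOPE-CEINGE": (0.009, 310.0, 0.01, 0.0010),
    }
    print(closest_replica(replicas))  # -> UNINA-SCOPE-ASTRO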
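Finally, the two replication phases of the second experiment could look roughly as follows, assuming the standard lcg-utils command-line tools; the VO name, the logical file name and the way the closest SURL is obtained from the NuSOAP service are placeholders, since the poster does not specify them.

    # Sketch of the two download paths compared in the test (assumed details).
    import subprocess

    LFN = "lfn:/grid/scope/dataset/file001"  # hypothetical logical file name

    def plain_copy(dest: str) -> None:
        # Phase 1: let lcg-utils resolve the replica by logical file name.
        subprocess.run(["lcg-cp", "--vo", "scope", LFN, f"file://{dest}"],
                       check=True)

    def optimized_copy(dest: str, closest_surl: str) -> None:
        # Phase 2: copy the SURL chosen by the network-aware replica
        # optimization service (obtained via its SOAP interface).
        subprocess.run(["lcg-cp", "--vo", "scope", closest_surl,
                        f"file://{dest}"], check=True)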