MDS2 alpha Testing Phase


A. Cavalli (alessandro.cavalli@cnaf.infn.it), INFN Testbed-IS team
DataGrid Workshop, Oxford, 2-5 July 2001

Introduction

The 1st phase:
- test with the 1st alpha release;
- done in collaboration with a WP3 member;
- response times were similar to MDS 2.0, but the topology differed from the MDS 2.0 tests.

The 2nd phase:
- test with the 2nd alpha release;
- also tested scalability (GIIS hierarchy);
- compared MDS 2.0 and alpha-release response times on the same topology.
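The response-time comparison in both phases amounts to timing repeated queries against a GRIS or GIIS. A minimal harness for that kind of measurement could look like the sketch below; the function name and the idea of passing the query as a callable are our own illustration, not part of MDS2.

```python
import time
import statistics

def time_queries(query_fn, n_repeats=10):
    """Run an information-system query n_repeats times and return
    the mean and max response time in seconds.  query_fn would be,
    e.g., an LDAP search against a GRIS or GIIS."""
    samples = []
    for _ in range(n_repeats):
        start = time.perf_counter()
        query_fn()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), max(samples)
```

Running the same harness against the MDS 2.0 and alpha-release servers on the same topology is what makes the second-phase numbers directly comparable.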

Test Topology

FIRST PHASE:
- a GIIS with a local GRIS, plus two other GRISes.

SECOND PHASE (performance reported in the graphs):
- a top GIIS with 2 site GIISes (each with a GRIS), plus 1 additional GRIS.
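The second-phase topology is a two-level aggregation tree: the top GIIS answers searches by walking the site GIISes and GRISes registered below it. The toy model below sketches that structure; the class names, `register`/`search` methods, and entry strings are illustrative, not the MDS2 API.

```python
# Toy model of the second-phase hierarchy: a top GIIS aggregating
# entries from registered site GIISes and GRISes.

class Gris:
    def __init__(self, name, entries):
        self.name = name
        self.entries = entries          # local resource entries

    def search(self):
        return list(self.entries)

class Giis:
    def __init__(self, name):
        self.name = name
        self.children = []              # registered GRISes / lower GIISes

    def register(self, child):
        self.children.append(child)

    def search(self):
        # An aggregate search walks every registered child.
        results = []
        for child in self.children:
            results.extend(child.search())
        return results

# Top GIIS <- 2 site GIISes (each with a GRIS) + 1 additional GRIS.
top = Giis("top-giis")
for site in ("site-a", "site-b"):
    site_giis = Giis(site)
    site_giis.register(Gris(site + "-gris", [site + "-cpu"]))
    top.register(site_giis)
top.register(Gris("extra-gris", ["extra-cpu"]))
```

A search at the top then returns the union of all leaf entries, which is why GIIS response time depends on the slowest registered child unless caching intervenes.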

[Graph: response times with free CPU and expired cache, MDS 2.0 vs MDS 2 alpha]

[Graph: response times with free CPU and cache good, MDS 2.0 vs MDS 2 alpha]

[Graph: response times with loaded CPU and expired cache, MDS 2.0 vs MDS 2 alpha; note: the scale is different from the other slides]

[Graph: response times with loaded CPU and cache good, MDS 2.0 vs MDS 2 alpha]
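The "cache good" vs "expired cache" split in the graphs reflects TTL-based caching in the GIIS: with a valid cache the server answers from memory, and only after the TTL expires does it pay the cost of re-querying its children. The sketch below illustrates that behaviour; the class, the injectable `now` parameter, and the `fetch_fn` callback are our own illustration, not the MDS2 backend.

```python
import time

# Sketch of TTL caching: a valid cache serves the fast path,
# an expired cache forces a slow re-fetch from the children.

class CachingGiis:
    def __init__(self, fetch_fn, ttl_seconds):
        self.fetch_fn = fetch_fn        # slow path: query registered GRISes
        self.ttl = ttl_seconds
        self._cache = None
        self._stamp = 0.0

    def search(self, now=None):
        now = time.monotonic() if now is None else now
        if self._cache is None or (now - self._stamp) > self.ttl:
            self._cache = self.fetch_fn()   # expired cache: re-fetch
            self._stamp = now
        return self._cache                  # cache good: answer from memory
```

This is why the "expired cache" graphs track the children's response times while the "cache good" graphs stay nearly flat.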

Test Conclusions

- The MDS 2 alpha GIIS seems to be a little slower than MDS 2.0.
- The GRIS is faster, even when busy, but still not as fast as we would like.
- As shown, a GIIS hierarchy like the one implemented with MDS 2.0 can be deployed.
- Search timeouts seem to be better managed.
- Basic root objects are always cached (also in MDS 2.0 with the latest backend).

Bug Fixes

- Inconsistency in the GRIS filesystem information provider (attribute "hn" used instead of "hostname").
- Wrong TTL variable used for cache management in the GIIS backend ("get_CACHE_TTL" instead of "TTL").
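The first fix is an attribute-name consistency issue: the filesystem provider emitted "hn" where the schema expects "hostname". A provider-side rename along these lines restores consistency; the dict-of-lists entry shape is our own illustration of an LDAP-style entry, not the provider's actual data model.

```python
# Hedged sketch of the "hn" -> "hostname" fix in a filesystem
# information provider.  Entry format is illustrative.

def fix_hostname_attribute(entry):
    """Return a copy of an LDAP-style entry dict with any 'hn'
    attribute renamed to 'hostname'; other attributes unchanged."""
    fixed = {}
    for attr, values in entry.items():
        fixed["hostname" if attr == "hn" else attr] = values
    return fixed
```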

Open Issues

- SECURITY: still some problems retrieving information with "anonymousbind" disabled in the GIIS configuration.
- BACKEND: the current GIIS backend returns duplicated root objects in some cases.
- VOs: we managed to register an MDS 2.0 GRIS twice on different GIISes; we hope the new release will handle this more gracefully.
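Until the duplicated-root-object issue is fixed in the backend, a client can work around it by dropping repeated results that share the same DN. The sketch below is one such client-side workaround; the DN strings and the (dn, attrs) tuple shape are illustrative.

```python
# Client-side workaround for duplicated root objects: keep only
# the first search result for each DN.

def dedupe_entries(entries):
    """Remove duplicate (dn, attrs) search results sharing a DN,
    preserving the original order."""
    seen = set()
    unique = []
    for dn, attrs in entries:
        if dn not in seen:
            seen.add(dn)
            unique.append((dn, attrs))
    return unique
```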