Workshop CESGA - HPC’2002 - A Coruna, May 30, 2002
Towards the CrossGrid Architecture
Marian Bubak, Maciej Malawski, and Katarzyna Zajac
X# (CrossGrid) TAT, Institute of Computer Science & ACC CYFRONET AGH, Kraków, Poland

Overview
– X# and other # projects
– Collaboration and objectives
– Applications and their requirements
– New Grid services
– Tools for X# applications development
– X# architecture
– Work-packages
– Collaboration with other # projects
– Conclusions

A new IST Grid project space (Kyriakos Baxevanidis)
[Diagram: EU Grid projects – GRIDLAB, GRIA, EGSO, DATATAG, CROSSGRID, DATAGRID, GRIP, EUROGRID, DAMIEN – arranged along the axes Applications / Middleware & Tools / Underlying Infrastructures and Science / Industry-business]
– Links with European national efforts
– Links with US projects (GriPhyN, PPDG, iVDGL, …)

CrossGrid Collaboration
– Poland: Cyfronet & INP Cracow, PSNC Poznan, ICM & IPJ Warsaw
– Portugal: LIP Lisbon
– Spain: CSIC Santander, Valencia & RedIris, UAB Barcelona, USC Santiago & CESGA
– Ireland: TCD Dublin
– Italy: DATAMAT
– Netherlands: UvA Amsterdam
– Germany: FZK Karlsruhe, TUM Munich, USTU Stuttgart
– Slovakia: II SAS Bratislava
– Greece: Algosystems, Demo Athens, AuTh Thessaloniki
– Cyprus: UCY Nikosia
– Austria: U. Linz

Main Objectives
– A new category of Grid-enabled applications: computing- and data-intensive; distributed; near-real-time response (a person in the loop); layered
– New programming tools
– A Grid that is more user friendly, secure and efficient
– Interoperability with other Grids
– Implementation of standards

Layered Structure of X#
– Interactive and Data-Intensive Applications (WP1): interactive simulation and visualization of a biomedical system; flooding crisis team support; distributed data analysis in HEP; weather forecast and air pollution modeling
– Grid Application Programming Environment (WP2): MPI code debugging and verification; metrics and benchmarks; interactive and semiautomatic performance evaluation tools
– New CrossGrid Services (WP3): portals and roaming access; Grid resource management; Grid monitoring; optimization of data access
– Application-support services: Grid Visualization Kernel; data mining; HLA
– Middleware: Globus; DataGrid, GriPhyN, … services
– Fabric infrastructure (testbed, WP4)

Biomedical Application
– Input: 3-D model of arteries
– Simulation: lattice-Boltzmann (LB) simulation of blood flow
– Results: presented in a virtual-reality environment
– User: analyses results in near real time, interacts, changes the structure of the arteries

Interaction in Biomedical Application (diagram)

Biomedical Application Use Case (diagram)

Asynchronous Execution of Biomedical Application (diagram)

Current architecture of the biomedical application (diagram)

Modules of the Biomedical Application
– Medical scanners: data acquisition system
– Segmentation software to obtain 3-D images
– Database with medical images and metadata
– Blood flow simulator with interaction capability
– History database
– Visualization for several interactive 3-D platforms
– Interactive measurement module
– Interaction module
– User interface coupling visualization, simulation and steering (a sketch of this coupling follows below)
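To make the coupling concrete, here is a minimal sketch of a user interface driving the simulation/visualization loop. All names (SimulationProxy, poll_user_events, …) are illustrative assumptions, not CrossGrid APIs.

# Minimal sketch of the coupling between user interface, simulation
# and visualization; every name here is a hypothetical placeholder.
import time

class SimulationProxy:
    """Stands in for a remote blood-flow simulator on the Grid."""
    def __init__(self):
        self.geometry_version = 0

    def advance(self, dt):
        time.sleep(dt)                      # pretend to compute remotely
        return {"flow_field": "...", "geometry": self.geometry_version}

    def update_geometry(self, new_arteries):
        self.geometry_version += 1          # user changed the arteries

def poll_user_events():
    """Would read interaction events from the VR front end."""
    return []                               # e.g. [("edit_geometry", model)]

def main_loop(sim, visualize, steps=10):
    for _ in range(steps):
        result = sim.advance(dt=0.01)       # near-real-time step
        visualize(result)                   # push the frame to the display
        for event, payload in poll_user_events():
            if event == "edit_geometry":
                sim.update_geometry(payload)   # feed the change back in

main_loop(SimulationProxy(), visualize=lambda frame: None)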

Flooding Crisis Team Support
– Data sources: surface automatic meteorological and hydrological stations; systems for acquisition and processing of satellite information; meteorological radars; storage systems and databases
– External sources of information: global and regional centers (GTS); EUMETSAT and NOAA; hydrological services of other countries
– Processing: meteorological, hydrological and hydraulic models on high-performance computers and Grid infrastructure
– Flood crisis teams: meteorologists, hydrologists, hydraulic engineers
– Users: river authorities, energy sector, insurance companies, navigation, media, the public

Flood Simulation Cascade
Data sources → meteorological simulation → hydrological simulation → hydraulic simulation → portal

Basic Characteristics of Flood Simulation
– Meteorological: computationally intensive simulation (1.5 h per simulation), possibly HPC; large input/output data sets (50–150 MB per event); high availability of resources (24/365)
– Hydrological: parametric simulations (HTC); each sub-catchment may require a different model (heterogeneous simulation)
– Hydraulic: many 1-D simulations (HTC); 2-D hydraulic simulations need HPC
(A sketch of the per-catchment parametric stage follows below.)
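The hydrological stage is a natural high-throughput workload: one independent job per sub-catchment, each possibly with a different model. A minimal sketch under that assumption; the catchment table, model names and run_hydro function are invented for illustration, not taken from the project.

# Hypothetical HTC fan-out over sub-catchments; a real system would
# submit Grid jobs instead of running local threads.
from concurrent.futures import ThreadPoolExecutor

CATCHMENTS = {
    "upper":  {"model": "model_A", "rain_mm": 120},
    "middle": {"model": "model_B", "rain_mm": 95},
    "lower":  {"model": "model_A", "rain_mm": 80},
}

def run_hydro(name, spec):
    # Stage inputs, run the catchment-specific model, return discharge.
    discharge = spec["rain_mm"] * 0.7        # fake result
    return name, discharge

with ThreadPoolExecutor() as pool:           # jobs are independent (HTC)
    futures = [pool.submit(run_hydro, n, s) for n, s in CATCHMENTS.items()]
    results = dict(f.result() for f in futures)

print(results)   # discharges feed the downstream hydraulic simulations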

Distributed Data Analysis in HEP: Complementarity with DataGrid
– HEP application package: CrossGrid will develop an interactive end-user application for physics analysis; it will make use of the products of the non-interactive simulation and data-processing stages preceding it in DataGrid
– Apart from the file-level service offered by DataGrid, CrossGrid will offer an object-level service to optimise the use of distributed databases; two possible implementations (to be tested in running experiments): a three-tier model accessing an OODBMS or O/R DBMS, or a more HEP-specific solution such as ROOT
– User friendly thanks to specific portal tools

Distributed Data Analysis in HEP
Several challenging points:
– Access to large distributed databases in the Grid
– Development of distributed data-mining techniques
– Definition of a layered application structure
– Integration of user-friendly interactive access
Focus on the LHC experiments (ALICE, ATLAS, CMS and LHCb). A sketch contrasting file-level and object-level access follows below.
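To illustrate the file-level vs. object-level distinction, a minimal sketch: the event store layout and the function names are assumptions made for this example only.

# A file-level service hands back whole files; an object-level service
# lets the analysis request only the events (objects) it needs.
EVENT_STORE = {                        # stand-in for a distributed DB
    "file_001": [{"id": 1, "pt": 12.0}, {"id": 2, "pt": 48.5}],
    "file_002": [{"id": 3, "pt": 31.2}, {"id": 4, "pt": 7.9}],
}

def file_level_fetch(file_id):
    """DataGrid-style access: transfer the whole file."""
    return EVENT_STORE[file_id]

def object_level_select(predicate):
    """Object-level access: push the cut to the data, move only matches."""
    return [ev for events in EVENT_STORE.values()
            for ev in events if predicate(ev)]

high_pt = object_level_select(lambda ev: ev["pt"] > 30.0)
print(high_pt)     # only the selected objects cross the network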

Weather Forecast and Air Pollution Modeling
– Distributed/parallel codes on the Grid: Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS); STEM-II air pollution code
– Integration of distributed databases
– Data mining applied to downscaling weather forecasts

COAMPS: Coupled Ocean/Atmosphere Mesoscale Prediction System – Atmospheric Components
– Complex data quality control
– Analysis: multivariate optimum interpolation (MVOI) analysis of winds and heights; univariate analyses of temperature and moisture; OI analysis of sea surface temperature
– Initialization: variational hydrostatic constraint on analysis increments; digital filter
– Atmospheric model numerics: nonhydrostatic, scheme C, nested grids, sigma-z, flexible lateral BCs
– Physics: PBL, convection, explicit moist physics, radiation, surface layer
– Features: globally relocatable (5 map projections); user-defined grid resolutions, dimensions and number of nested grids; 6- or 12-hour incremental data assimilation cycle; usable for idealized or real-time applications; a single configuration-managed system for all applications
– Operational at FNMOC: 7 areas, twice daily, using 81/27/9 km or 81/27 km grids; forecasts to 72 hours
– Operational at all Navy regional centers (with GUI interface)

Air Pollution Model – STEM-II
– Species: 56 chemical (16 long-lived, 40 short-lived), 28 radicals (OH, HO2)
– Chemical mechanisms: 176 gas-phase reactions; 31 aqueous-phase reactions; 12 aqueous-phase solution equilibria
– Equations are integrated with a locally 1-D finite element method (LOD-FEM)
– Transport equations are solved with a Petrov–Crank–Nicolson–Galerkin FEM
– Chemistry and mass-transfer terms are integrated with semi-implicit Euler and pseudo-analytic methods (one common form of the semi-implicit step is shown below)
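As a generic illustration of the semi-implicit Euler step mentioned above (not taken from the STEM-II source): for chemistry written in production/loss form, the loss term is treated implicitly, which keeps the update stable for fast-reacting, short-lived species:

\[
\frac{dc_i}{dt} = P_i(\mathbf{c}) - L_i(\mathbf{c})\,c_i
\qquad\Longrightarrow\qquad
c_i^{\,n+1} = \frac{c_i^{\,n} + \Delta t\,P_i(\mathbf{c}^{\,n})}{1 + \Delta t\,L_i(\mathbf{c}^{\,n})},
\]

where \(P_i\) and \(L_i\) are the production and loss rates of species \(i\) and \(\Delta t\) is the chemistry time step.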

Key Features of X# Applications
– Data: data generators and databases geographically distributed; selected on demand
– Processing: needs large processing capacity, both HPC and HTC; interactive
– Presentation: complex data require versatile 3-D visualisation; support for interaction and feedback to other components

Problems to be Solved
– How to build an interactive Grid environment? (Globus is more batch-oriented than interactive-oriented; performance is an issue)
– How to interface with Globus and the DataGrid software; how to define the interfaces?

User Interaction Services
[Diagram: User Interaction Services sit between the applications and the Resource Broker / Scheduler (3.2), GIS / MDS (Globus), Grid Monitoring (3.3), Nimrod and Condor-G]
– Advance reservation
– Start interactive application
– Steer the simulation: cancel, restart
(A sketch of these steering operations follows below.)
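A minimal sketch of the steering operations listed above; the InteractiveSession class and its methods are hypothetical, standing in for whatever interface the scheduler would actually expose.

# Hypothetical steering handle for an interactive Grid job.
class InteractiveSession:
    def __init__(self, application, reservation=None):
        self.application = application
        self.reservation = reservation    # advance-reservation handle
        self.state = "idle"

    def start(self):
        self.state = "running"            # would contact the scheduler

    def cancel(self):
        self.state = "cancelled"          # tear down the running job

    def restart(self):
        self.cancel()
        self.start()                      # resubmit under the same reservation

session = InteractiveSession("blood_flow_sim", reservation="res-42")
session.start()
session.restart()        # steering: cancel + start, as on the slide
print(session.state)     # -> "running"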

Roaming Access
[Diagram: applications and portals (3.1) reach the Scheduler (3.2), GIS / MDS (Globus) and Grid Monitoring (3.3) through the Roaming Access Server (3.1)]
– Roaming Access Server: user profiles, authentication, authorization, job submission
– Migrating Desktop
– Application portal
(A sketch of profile-based session restore follows below.)
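The point of roaming access is that a user can sign in from any machine and get the same working environment back. A minimal sketch of that idea; the in-memory profile store and the authenticate stub are placeholders for the real server and for certificate-based authentication.

# Hypothetical roaming-access flow: authenticate, fetch the profile,
# restore the user's Migrating Desktop state on the current host.
PROFILES = {
    "alice": {"desktop": ["editor", "job_monitor"],
              "jobs": ["flood_sim_0421"]},
}

def authenticate(user, credential):
    return credential == "valid-cert"      # stand-in for certificate check

def restore_session(user, credential):
    if not authenticate(user, credential):
        raise PermissionError("authentication failed")
    profile = PROFILES[user]               # fetched from the server
    return {"open": profile["desktop"],    # windows/tools to reopen
            "attach": profile["jobs"]}     # running jobs to reattach to

print(restore_session("alice", "valid-cert"))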

Grid Monitoring
– OMIS-based application monitoring system
– Jiro-based service for monitoring the Grid infrastructure
– An additional service for non-invasive monitoring

Monitoring of Grid Applications
– To monitor = to obtain information on, or manipulate, the target application: e.g. read the status of the application's processes, suspend the application, read/write memory, etc.
– A monitoring module is needed by tools: debuggers, performance analyzers, visualizers, …

OMIS Approach to Grid Monitoring
– Application-oriented: on-line data, collected and delivered to tools immediately; normally no storing for later processing
– Data collection based on run-time instrumentation: enables dynamic selection of the data to be collected; reduces monitoring overhead
– OMIS: a standardized interface between tools and the monitoring system

Monitoring as an Autonomous System
– A separate monitoring system
– Tool/monitor interface: OMIS

Grid-enabled OMIS-compliant Monitoring System – OCM-G
– Scalable: distributed, decentralized
– Efficient: local buffers
– Three types of components: local monitors (LM), service managers (SM), application monitors (AM)

Service Managers and Local Monitors
– Service Managers: one or more in the system; distribute requests; collect replies
– Local Monitors: one per node; handle local objects; perform the actual execution of requests

Application Monitors
– Embedded in the applications
– Handle some actions locally: buffering of data; filtering of instrumentation; monitoring requests
– Example: REQ: read variable a → REP: value of a, handled asynchronously with no OS mechanisms involved (a sketch of this request/reply flow follows below)
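A minimal sketch of the request/reply pattern just described: a service manager fans a request out to per-node local monitors, and an application monitor answers a read-variable request from inside the application process. The class names are illustrative; OMIS defines the real protocol.

# Illustrative OMIS-style request flow: SM distributes, LM executes,
# AM answers from inside the application process. Names are made up.
class ApplicationMonitor:
    def __init__(self, app_vars):
        self.app_vars = app_vars               # lives inside the process

    def handle(self, request):
        if request["op"] == "read_var":        # REQ: read variable a
            return {"reply": self.app_vars.get(request["var"])}  # REP

class LocalMonitor:
    def __init__(self, monitors):              # one LM per node
        self.monitors = monitors

    def execute(self, request):                # actual request execution
        return [am.handle(request) for am in self.monitors]

class ServiceManager:
    def __init__(self, local_monitors):        # one or more SMs
        self.local_monitors = local_monitors

    def distribute(self, request):             # request distribution
        replies = []
        for lm in self.local_monitors:         # reply collection
            replies.extend(lm.execute(request))
        return replies

am = ApplicationMonitor({"a": 42})
sm = ServiceManager([LocalMonitor([am])])
print(sm.distribute({"op": "read_var", "var": "a"}))   # [{'reply': 42}]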

Optimization of Grid Data Access
– Different storage systems and different application requirements
– Optimization by selection of data handlers
– The service consists of: a component-expert system; a data-access estimator; a GridFTP plugin
(A sketch of cost-based handler selection follows below.)
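A minimal sketch of "optimization by selection of data handlers": a component-expert picks the handler whose estimated access cost is lowest. The handler table and the cost model are invented for illustration only.

# Hypothetical component-expert choosing a data handler per request.
HANDLERS = {
    "gridftp_tape": {"latency_s": 30.0, "bandwidth_mbs": 40.0},
    "gridftp_disk": {"latency_s": 0.5,  "bandwidth_mbs": 25.0},
    "local_cache":  {"latency_s": 0.01, "bandwidth_mbs": 200.0},
}

def estimate_cost(handler, size_mb):
    """Data-access estimator: startup latency + transfer time."""
    h = HANDLERS[handler]
    return h["latency_s"] + size_mb / h["bandwidth_mbs"]

def select_handler(size_mb, available):
    """Component-expert: pick the cheapest applicable handler."""
    return min(available, key=lambda h: estimate_cost(h, size_mb))

print(select_handler(10, ["gridftp_tape", "gridftp_disk"]))  # -> gridftp_disk
print(select_handler(100000, list(HANDLERS)))                # -> local_cache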

Optimization of Grid Data Access
[Diagram: applications and portals (3.1) use the Optimization of Grid Data Access service (3.4), which works with the Scheduling Agents (3.2), the Replica Manager (DataGrid / Globus), Grid Monitoring (3.3) and GridFTP]

Modules of the Tool Environment
[Diagram: the G-PM tool – Performance Measurement Component, High Level Analysis Component, Performance Prediction Component, and User Interface and Visualization Component – receives raw monitoring data (RMD) from Grid Monitoring (Task 3.3) and performance measurement data (PMD) from Benchmarks (Task 2.3), for Applications (WP1) executing on the Grid testbed; the legend distinguishes data flow from manual information transfer]

Tools for Application Development
[Diagram: applications and portals (3.1) use the G-PM performance measurement tools (2.4), MPI debugging and verification (2.2), and metrics and benchmarks (2.3), all built on Grid Monitoring (3.3) (OCM-G, R-GMA)]

Building Blocks of the CrossGrid
Legend for the architecture diagrams that follow:
– To be developed in X# (CrossGrid)
– From DataGrid
– From the Globus Toolkit
– Other (external)

Overview of the CrossGrid Architecture
– Applications: 1.1 BioMed; 1.2 Flooding; 1.3 Distributed Data Analysis in HEP; 1.4 Meteo Pollution
– Supporting tools: 3.1 Portal & Migrating Desktop; applications development support: 2.2 MPI Verification, 2.3 Metrics and Benchmarks, 2.4 Performance Analysis
– Application-specific services: 1.1 User Interaction Services; 1.1 Grid Visualisation Kernel; 1.1, 1.2 HLA and others; 1.3 Data Mining on Grid (NN); 1.3 Interactive Distributed Data Access; 1.3 Interactive Session Services
– Generic services: 3.1 Roaming Access; 3.2 Scheduling Agents; 3.3 Grid Monitoring; 3.4 Optimization of Grid Data Access; DataGrid Replica Manager; Globus Replica Manager; DataGrid Job Submission Service; GRAM; GSI; GIS / MDS; GridFTP; Globus-IO; Replica Catalog; MPICH-G
– Fabric: Resource Manager (CE) / CPU; Resource Manager (SE) / secondary storage; 3.4 Optimization of Local Data Access / tertiary storage; resource managers for instruments (satellites, radars)

Components for Biomedical Application
[The architecture diagram from the overview, with the components used by the biomedical application (1.1) highlighted]

Components for Flooding Crisis Team Support
[The architecture diagram from the overview, with the components used by the flooding application (1.2) highlighted]

Components for Distributed Data Analysis in HEP
[The architecture diagram from the overview, with the components used by the HEP application (1.3) highlighted]

Components for Weather Forecasting / Pollution Modeling
[The architecture diagram from the overview, with the components used by the meteo/pollution application (1.4) highlighted]

Rules for X# SW Development
– Iterative improvement: development, testing on the testbed, evaluation, improvement
– Modularity
– Open-source approach
– Well-documented software
– Collaboration with other # projects

Project Phases
– M 1–3: requirements definition and merging
– M …: first development phase: design, first prototypes, refinement of requirements
– M …: second development phase: integration of components, second prototypes
– M …: third development phase: complete integration, final code versions
– M …: final phase: demonstration and documentation

WP1 – CrossGrid Application Development
– Task 1.0 Co-ordination and management (Peter M.A. Sloot, UvA)
– Task 1.1 Interactive simulation and visualisation of a biomedical system (G. Dick van Albada, UvA)
– Task 1.2 Flooding crisis team support (Ladislav Hluchy, II SAS)
– Task 1.3 Distributed data analysis in HEP (C. Martinez-Rivero, CSIC)
– Task 1.4 Weather forecast and air pollution modelling (Bogumil Jakubiak, ICM)

WP2 – Grid Application Programming Environments
– Task 2.0 Co-ordination and management (Holger Marten, FZK)
– Task 2.1 Tools requirement definition (Roland Wismueller, TUM)
– Task 2.2 MPI code debugging and verification (Matthias Mueller, USTUTT)
– Task 2.3 Metrics and benchmarks (Marios Dikaiakos, UCY)
– Task 2.4 Interactive and semiautomatic performance evaluation tools (Wlodek Funika, Cyfronet)
– Task 2.5 Integration, testing and refinement (Roland Wismueller, TUM)

WP3 – New Grid Services and Tools
– Task 3.0 Co-ordination and management (Norbert Meyer, PSNC)
– Task 3.1 Portals and roaming access (Miroslaw Kupczyk, PSNC)
– Task 3.2 Grid resource management (Miquel A. Senar, UAB)
– Task 3.3 Grid monitoring (Brian Coghlan, TCD)
– Task 3.4 Optimisation of data access (Jacek Kitowski, Cyfronet)
– Task 3.5 Tests and integration (Santiago Gonzalez, CSIC)

WP4 – International Testbed Organization: Partners
WP4 is led by CSIC (Spain). Testbed sites: AuTh Thessaloniki, UvA Amsterdam, FZK Karlsruhe, TCD Dublin, UAB Barcelona, LIP Lisbon, CSIC Valencia, CSIC Madrid, USC Santiago, CSIC Santander, DEMO Athens, UCY Nikosia, CYFRONET Cracow, II SAS Bratislava, PSNC Poznan, ICM & IPJ Warsaw

WP4 – International Testbed Organization: Tasks
– Task 4.0 Coordination and management (Jesus Marco, CSIC, Santander): coordination with WP1, WP2, WP3; collaborative tools; integration team
– Task 4.1 Testbed setup & incremental evolution (Rafael Marco, CSIC, Santander): define installation; deploy testbed releases; trace security issues
Testbed site responsibles:
– CYFRONET (Krakow): A. Ozieblo
– ICM (Warsaw): W. Wislicki
– IPJ (Warsaw): K. Nawrocki
– UvA (Amsterdam): D. van Albada
– FZK (Karlsruhe): M. Kunze
– II SAS (Bratislava): J. Astalos
– PSNC (Poznan): P. Wolniewicz
– UCY (Cyprus): M. Dikaiakos
– TCD (Dublin): B. Coghlan
– CSIC (Santander/Valencia): S. Gonzalez
– UAB (Barcelona): G. Merino
– USC (Santiago): A. Gomez
– UAM (Madrid): J. del Peso
– Demo (Athens): C. Markou
– AuTh (Thessaloniki): D. Sampsonidis
– LIP (Lisbon): J. Martins

WP4 – International Testbed Organization: Tasks (continued)
– Task 4.2 Integration with DataGrid (Marcel Kunze, FZK): coordination of testbed setup; knowledge exchange; participation in WP meetings
– Task 4.3 Infrastructure support (Josep Salt, CSIC, Valencia): fabric management; help desk; installation kit; network support
– Task 4.4 Verification & quality control (Jorge Gomes, LIP): feedback; improving the stability of the testbed

WP5 – Project Management
– Task 5.1 Project coordination and administration (Michal Turala, INP)
– Task 5.2 CrossGrid Architecture Team (Marian Bubak, Cyfronet)
– Task 5.3 Central dissemination (Yannis Perros, ALGO)

Architecture Team – Activity
– Merging of requirements from WP1, WP2, WP3
– Specification of the X# architecture (i.e. new protocols, services, SDKs, APIs)
– Establishing standard operational procedures
– Specification of the structure of deliverables
– Improvement of the X# architecture according to experience from software development and testbed operation

Person-Months
WP     Title                                       PM Funded   PM Total
WP1    CrossGrid Applications Development          …           …
WP2    Grid Application Programming Environment    …           …
WP3    New Grid Services and Tools                 …           …
WP4    International Testbed Organization          …           …
WP5    Project Management                          …           …
       Total                                       …           …

Collaboration with Other # Projects
– Objective: exchange of information and of software components
– Partners: DataGrid, DataTag, GridLab, EUROGRID and GRIP
– GRIDSTART
– Participation in GGF

X# – EDG: Grid Architecture
Similar layered structure, similar functionality of components:
– Interoperability of the Grids
– Reuse of Grid components
– Joint proposals to GGF
– Participation of the chairmen of the EDG ATF and the X# AT in each other's meetings and activities

X# – EDG: Applications
– Interactive applications: methodology; generic structure; Grid services; data security for medical applications
– HEP applications: X# will extend the functionality of EDG software

X# – EDG: Testbed
– Goal: interoperability of the EDG and X# testbeds
– Joint Grid infrastructure for HEP applications
– X# members from Spain, Germany and Portugal already take part in the EDG testbed
– Collaboration of the testbed support teams
– Mutual recognition of Certification Authorities
– Elaboration of a common access/usage policy and procedures
– Common installation/configuration procedures

Summary
– Layered structure of all X# applications
– Reuse of software from DataGrid and other # projects
– Globus as the bottom layer of the middleware
– Heterogeneous computer and storage systems
– Distributed development and testing of software: 12 partners in applications, 14 partners in middleware, 15 partners in testbeds

Thanks to Michal Turala, Peter M.A. Sloot, Roland Wismueller, Wlodek Funika, Marek Garbacz, Ladislav Hluchy, Bartosz Balis, Jacek Kitowski, Norbert Meyer, and Jesus Marco.

Thanks to CESGA for the invitation to HPC’2002. For more about the X# Project see: