
1 Workshop CESGA - HPC’2002 - A Coruna, May 30, 2002 Towards the CrossGrid Architecture Marian Bubak, Maciej Malawski, and Katarzyna Zajac X# TAT Institute of Computer Science & ACC CYFRONET AGH, Kraków, Poland www.eu-crossgrid.org

2 Overview
– X# and other # projects
– Collaboration and objectives
– Applications and their requirements
– New grid services
– Tools for X# application development
– X# architecture
– Work-packages
– Collaboration with other # projects
– Conclusions

3 A new IST Grid project space (Kyriakos Baxevanidis)
[diagram positioning the IST Grid projects – GRIDLAB, GRIA, EGSO, DATATAG, CROSSGRID, DATAGRID, GRIP, EUROGRID, DAMIEN – between Applications, Middleware & Tools and Underlying Infrastructures, and between Science and Industry/business]
– Links with European national efforts
– Links with US projects (GriPhyN, PPDG, iVDGL, …)

4 CrossGrid Collaboration
– Poland: Cyfronet & INP Cracow, PSNC Poznan, ICM & IPJ Warsaw
– Portugal: LIP Lisbon
– Spain: CSIC Santander, Valencia & RedIris, UAB Barcelona, USC Santiago & CESGA
– Ireland: TCD Dublin
– Italy: DATAMAT
– Netherlands: UvA Amsterdam
– Germany: FZK Karlsruhe, TUM Munich, USTU Stuttgart
– Slovakia: II SAS Bratislava
– Greece: Algosystems, Demo Athens, AuTh Thessaloniki
– Cyprus: UCY Nikosia
– Austria: U. Linz

5 Main Objectives
– A new category of Grid-enabled applications: computing- and data-intensive, distributed, near-real-time response (a person in the loop), layered
– New programming tools
– A Grid that is more user friendly, secure and efficient
– Interoperability with other Grids
– Implementation of standards

6 Layered Structure of X#
– Interactive and Data-Intensive Applications (WP1): interactive simulation and visualization of a biomedical system; flooding crisis team support; distributed data analysis in HEP; weather forecast and air pollution modeling
– Grid Application Programming Environment (WP2): MPI code debugging and verification; metrics and benchmarks; interactive and semiautomatic performance evaluation tools
– Application-support components: Grid Visualization Kernel; data mining; HLA
– New CrossGrid Services (WP3): portals and roaming access; Grid resource management; Grid monitoring; optimization of data access
– Middleware: Globus, DataGrid, GriPhyN, … services
– Fabric Infrastructure (Testbed, WP4)

7 Biomedical Application
– Input: 3-D model of arteries
– Simulation: lattice-Boltzmann simulation of blood flow
– Results: presented in virtual reality
– User: analyses results in near real time, interacts, changes the structure of the arteries
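The slide describes a human-in-the-loop simulation: the user watches results in near real time and can intervene in the running computation (the cancel/restart steering commands reappear on slide 23). A minimal Python sketch of such a steer-check loop, with the lattice-Boltzmann solver replaced by a placeholder counter and all names invented:

```python
import queue

def run_steered_simulation(steps, command_queue):
    """Toy time-stepping loop that polls for user steering commands.

    The real application couples a blood-flow solver to a VR front end;
    here the 'state' is just a counter standing in for the flow field.
    """
    state = 0
    for _ in range(steps):
        state += 1  # stand-in for one simulation time step
        try:
            cmd = command_queue.get_nowait()
        except queue.Empty:
            continue  # no user input this step
        if cmd == "cancel":
            return state, "cancelled"
        if cmd == "restart":
            state = 0  # re-run, e.g. with a modified artery geometry

    return state, "finished"

q = queue.Queue()
q.put("restart")
final_state, status = run_steered_simulation(5, q)
```

The point of the sketch is only the control structure: interaction is checked between time steps, so steering latency is bounded by the cost of one step.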

8 Interaction in Biomedical Application

9 Biomedical Application Use Case

10 Asynchronous Execution of Biomedical Application

11 Current Architecture of the Biomedical Application

12 Modules of the Biomedical Application
– Medical scanners – data acquisition system
– Software for segmentation, to obtain 3-D images
– Database with medical images and metadata
– Blood flow simulator with interaction capability
– History database
– Visualization for several interactive 3-D platforms
– Interactive measurement module
– Interaction module
– User interface coupling visualization, simulation and steering

13 Flooding Crisis Team Support
– Data sources: storage systems and databases; surface automatic meteorological and hydrological stations; systems for acquisition and processing of satellite information; meteorological radars
– External sources of information: global and regional GTS centers; EUMETSAT and NOAA; hydrological services of other countries
– Grid infrastructure: high-performance computers running meteorological, hydrological and hydraulic models
– Flood crisis teams: meteorologists, hydrologists, hydraulic engineers
– Users: river authorities, energy, insurance companies, navigation, media, public

14 Flood Simulation Cascade: data sources → meteorological simulation → hydrological simulation → hydraulic simulation → portal

15 Basic Characteristics of Flood Simulation
– Meteorological: intensive simulation (1.5 h/simulation), maybe HPC; large input/output data sets (50–150 MB/event); high availability of resources (24/365)
– Hydrological: parametric simulations – HTC; each sub-catchment may require different models (heterogeneous simulation)
– Hydraulic: many 1-D simulations – HTC; 2-D hydraulic simulations need HPC
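The hydrological stage above consists of many independent (HTC-style) parametric runs, with a possibly different model per sub-catchment. A toy sketch of that heterogeneous dispatch pattern; the model functions and catchment data are invented placeholders, not the project's actual codes:

```python
def run_hydrology(catchments, models):
    """Run a (possibly different) hydrological model on each sub-catchment.

    Each run is independent, so in the real system every call would be
    a separate HTC job; here they execute sequentially for illustration.
    """
    results = {}
    for name, rainfall in catchments.items():
        model = models[name]          # heterogeneous: per-catchment model
        results[name] = model(rainfall)
    return results

# Two toy models standing in for heterogeneous simulation codes.
linear_model = lambda r: 0.5 * r                 # simple runoff coefficient
threshold_model = lambda r: max(0.0, r - 10.0)   # infiltration threshold

catchments = {"upper": 30.0, "lower": 8.0}       # rainfall per catchment (mm)
models = {"upper": linear_model, "lower": threshold_model}
flows = run_hydrology(catchments, models)
```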

16 Distributed Data Analysis in HEP – Complementarity with DataGrid
– The HEP application package: CrossGrid will develop an interactive end-user application for physics analysis; it will make use of the products of the non-interactive simulation and data-processing stages of DataGrid that precede it.
– Apart from the file-level service offered by DataGrid, CrossGrid will offer an object-level service to optimise the use of distributed databases. Two possible implementations (to be tested in running experiments):
– a three-tier model accessing an OODBMS or O/R DBMS
– a more HEP-specific solution such as ROOT
– User friendly thanks to dedicated portal tools
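The object-level service described above would let an analysis job fetch and filter individual physics objects rather than transfer whole files. A toy sketch of the middle tier of such a three-tier setup, with a plain dict standing in for the OODBMS / O-R DBMS back end (all class and field names are hypothetical):

```python
class ObjectLevelService:
    """Middle tier: serves individual event objects from a backing store,
    so the client never has to pull complete files across the network."""

    def __init__(self, store):
        self._store = store  # stand-in for the database back end

    def get_event(self, event_id):
        """Fetch a single event by id."""
        return self._store[event_id]

    def select(self, predicate):
        """Server-side selection: only matching events cross the wire."""
        return [e for e in self._store.values() if predicate(e)]

# Toy event records; 'pt' is an illustrative attribute.
store = {1: {"id": 1, "pt": 42.0}, 2: {"id": 2, "pt": 7.5}}
service = ObjectLevelService(store)
high_pt = service.select(lambda e: e["pt"] > 20.0)
```

The design point is that the selection predicate runs where the data lives, which is what makes an object-level service cheaper than file-level transfer for sparse analyses.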

17 Distributed Data Analysis in HEP
Several challenging points:
– Access to large distributed databases in the Grid
– Development of distributed data-mining techniques
– Definition of a layered application structure
– Integration of user-friendly interactive access
Focus on the LHC experiments (ALICE, ATLAS, CMS and LHCb)

18 Weather Forecast and Air Pollution Modeling
– Distributed/parallel codes on the Grid: Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS); STEM-II air pollution code
– Integration of distributed databases
– Data mining applied to downscaling weather forecasts

19 COAMPS – Coupled Ocean/Atmosphere Mesoscale Prediction System: Atmospheric Components
– Complex data quality control
– Analysis: multivariate optimum interpolation (MVOI) of winds and heights; univariate analyses of temperature and moisture; OI analysis of sea surface temperature
– Initialization: variational hydrostatic constraint on analysis increments; digital filter
– Atmospheric model numerics: nonhydrostatic, scheme C, nested grids, sigma-z, flexible lateral BCs
– Physics: PBL, convection, explicit moist physics, radiation, surface layer
– Features: globally relocatable (5 map projections); user-defined grid resolutions, dimensions and number of nested grids; 6- or 12-hour incremental data assimilation cycle; usable for idealized or real-time applications; single configuration-managed system for all applications
– Operational at FNMOC: 7 areas, twice daily, using 81/27/9 km or 81/27 km grids; forecasts to 72 hours
– Operational at all Navy regional centers (with GUI interface)

20 Air Pollution Model – STEM-II
– Species: 56 chemical species (16 long-lived, 40 short-lived) plus 28 radicals (OH, HO2)
– Chemical mechanisms: 176 gas-phase reactions; 31 aqueous-phase reactions; 12 aqueous-phase solution equilibria
– Equations are integrated with a locally 1-D finite element method (LOD-FEM)
– Transport equations are solved with a Petrov–Crank–Nicolson–Galerkin FEM
– Chemistry and mass-transfer terms are integrated with semi-implicit Euler and pseudo-analytic methods
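The slide states that the chemistry and mass-transfer terms are integrated with semi-implicit Euler, the standard trick for stiff chemical kinetics. For a single production/loss equation dy/dt = P − L·y, the scheme treats the stiff loss term implicitly, giving y_{n+1} = (y_n + Δt·P) / (1 + Δt·L). A small illustration with invented rates (not STEM-II's actual mechanism):

```python
def semi_implicit_euler(y0, production, loss, dt, steps):
    """Semi-implicit Euler for dy/dt = P - L*y.

    The loss term -L*y is evaluated at the new time level, so the
    update stays stable and positive even when dt*L >> 1, which is
    exactly the regime of fast radical chemistry.
    """
    y = y0
    for _ in range(steps):
        y = (y + dt * production) / (1.0 + dt * loss)
    return y

# Steady state is P/L = 2.0; note dt*L = 10, far beyond the explicit
# Euler stability limit, yet the iteration converges smoothly.
y = semi_implicit_euler(y0=10.0, production=4.0, loss=2.0, dt=5.0, steps=50)
```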

21 Key Features of X# Applications
– Data: data generators and databases geographically distributed; selected on demand
– Processing: needs large processing capacity, both HPC and HTC; interactive
– Presentation: complex data require versatile 3-D visualisation; support for interaction and feedback to other components

22 Problems to be Solved
– How to build an interactive Grid environment? (Globus is more batch-oriented than interactive-oriented; a performance issue)
– How to work with Globus and the DataGrid software, and how to define the interfaces?

23 User Interaction Services
[diagram: User Interaction Services working with a Resource Broker, the Scheduler (3.2), GIS/MDS (Globus), Grid Monitoring (3.3), Nimrod and Condor-G; functions shown: advance reservation, start interactive application, steer the simulation (cancel, restart)]

24 Roaming Access
[diagram: applications and portals (3.1) connect through the Roaming Access Server (3.1) to the Scheduler (3.2), GIS/MDS (Globus) and Grid Monitoring (3.3). The Roaming Access Server handles user profiles, authentication, authorization and job submission; client side: Migrating Desktop and application portal]

25 Grid Monitoring
– OMIS-based application monitoring system
– Jiro-based service for monitoring the Grid infrastructure
– An additional service for non-invasive monitoring

26 Monitoring of Grid Applications
– To monitor = to obtain information on or manipulate a target application, e.g. read the status of the application's processes, suspend the application, read/write its memory, etc.
– A monitoring module is needed by tools: debuggers, performance analyzers, visualizers, …

27 OMIS Approach to Grid Monitoring
– Application-oriented: on-line data, collected immediately and delivered to tools; normally no storing for later processing
– Data collection based on run-time instrumentation: enables dynamic selection of the data to be collected; reduced monitoring overhead
– Standardized interface between tools and the monitoring system – OMIS

28 Monitoring – Autonomous System
– Separate monitoring system
– Tool/monitor interface – OMIS

29 Grid-enabled OMIS-compliant Monitoring System – OCM-G
– Scalable: distributed, decentralized
– Efficient: local buffers
– Three types of components: local monitors (LM), service managers (SM), application monitors (AM)

30 Service Managers and Local Monitors
– Service Managers: one or more in the system; request distribution; reply collection
– Local Monitors: one per node; handle local objects; actual execution of requests

31 Application Monitors
– Embedded in applications
– Handle some actions locally: buffering of data; filtering of instrumentation and monitoring requests
– E.g. REQ: read variable a, REP: value of a – asynchronous, no OS mechanisms involved
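The request/reply example above ("REQ: read variable a, REP: value of a") can be illustrated with a toy in-process monitor. The string protocol and class name here are invented for the sketch and are not the actual OMIS wire format:

```python
class ApplicationMonitor:
    """Toy in-process monitor in the spirit of the OCM-G AM: it answers
    read-variable requests against the application's own state directly,
    without involving OS-level debugging mechanisms."""

    def __init__(self, app_state):
        self._state = app_state  # the monitored application's variables

    def handle(self, request):
        """Handle one request locally; unknown requests are rejected."""
        kind, name = request.split()
        if kind == "read":
            return ("reply", name, self._state.get(name))
        return ("error", "unknown request")

am = ApplicationMonitor({"a": 42})
reply = am.handle("read a")
```

Because the monitor lives inside the application's address space, a read is just a dictionary lookup here, which is the efficiency argument the slide makes.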

32 Optimization of Grid Data Access
– Different storage systems and application requirements
– Optimization by selection of data handlers
– The service consists of: a component-expert system; a data-access estimator; a GridFTP plugin
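"Optimization by selection of data handlers" suggests a rule base that matches a request's characteristics against the capabilities each handler declares. A toy sketch of such a component-expert lookup; the handler names, capability fields and fallback are all invented for illustration:

```python
def choose_handler(file_size, latency_sensitive, handlers):
    """Pick the first data handler whose declared capabilities
    satisfy the request; fall back to a generic transfer handler."""
    for name, caps in handlers.items():
        size_ok = file_size <= caps["max_size"]
        latency_ok = caps["low_latency"] or not latency_sensitive
        if size_ok and latency_ok:
            return name
    return "gridftp-default"  # generic fallback handler

# Hypothetical handler registry with declared capabilities.
handlers = {
    "memory-cache": {"max_size": 10**6, "low_latency": True},
    "local-disk": {"max_size": 10**9, "low_latency": False},
}
chosen = choose_handler(file_size=5 * 10**5, latency_sensitive=True,
                        handlers=handlers)
```

In the real service the decision would also consult the data-access estimator's cost predictions rather than static rules alone.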

33 Optimization of Grid Data Access
[diagram: applications and portals (3.1) use Optimization of Grid Data Access (3.4), which works with the Scheduling Agents (3.2), the Replica Manager (DataGrid/Globus), Grid Monitoring (3.3) and GridFTP]

34 Modules of the Tool Environment
[diagram: G-PM – comprising a Performance Measurement Component, a High Level Analysis Component, a Performance Prediction Component, and a User Interface and Visualization Component – draws raw monitoring data (RMD) from Grid Monitoring (Task 3.3) and performance measurement data (PMD) from Benchmarks (Task 2.3); inputs: applications (WP1) executing on the Grid testbed and the application source code; legend distinguishes data flow from manual information transfer]

35 Tools for Application Development
[diagram: applications and portals (3.1) connect to the G-PM performance measurement tools (2.4), MPI debugging and verification (2.2), and metrics and benchmarks (2.3), all built on Grid Monitoring (3.3) (OCM-G, R-GMA)]

36 Building Blocks of the CrossGrid
[legend for the architecture diagrams: components to be developed in X#; components from DataGrid; components from the Globus Toolkit; other external components]

37 Overview of the CrossGrid Architecture [layered diagram]
– Applications: 1.1 BioMed; 1.2 Flooding; 1.3 Interactive Distributed Data Access; 1.4 Meteo Pollution
– Supporting tools: 2.2 MPI Verification; 2.3 Metrics and Benchmarks; 2.4 Performance Analysis; 3.1 Portal & Migrating Desktop
– Application-specific services: 1.1 Grid Visualisation Kernel; 1.1 User Interaction Services; 1.1, 1.2 HLA and others; 1.3 Data Mining on Grid (NN); 1.3 Interactive Session Services
– Generic services: 3.1 Roaming Access; 3.2 Scheduling Agents; 3.3 Grid Monitoring; 3.4 Optimization of Grid Data Access; DataGrid and Globus Replica Managers; Replica Catalog; DataGrid Job Submission Service; GRAM; GSI; GIS/MDS; GridFTP; Globus-IO; MPICH-G
– Fabric: resource managers (CE, SE); CPU; secondary and tertiary storage; 3.4 Optimization of Local Data Access; instruments (satellites, radars)

38 Components for the Biomedical Application
[same layered architecture diagram as slide 37, highlighting the subset of components used by the biomedical application (1.1)]

39 Components for Flooding Crisis Team Support
[same layered architecture diagram as slide 37, highlighting the subset of components used by the flooding application (1.2)]

40 Components for Distributed Data Analysis in HEP
[same layered architecture diagram as slide 37, highlighting the subset of components used by the HEP application (1.3)]

41 Components for Weather Forecasting/Pollution Modeling
[same layered architecture diagram as slide 37, highlighting the subset of components used by the meteo/pollution application (1.4)]

42 Rules for X# SW Development
– Iterative improvement: development, testing on the testbed, evaluation, improvement
– Modularity
– Open-source approach
– Well-documented software
– Collaboration with other # projects

43 Project Phases
– M 1–3: requirements definition and merging
– M 4–12: first development phase: design, first prototypes, refinement of requirements
– M 13–24: second development phase: integration of components, second prototypes
– M 25–32: third development phase: complete integration, final code versions
– M 33–36: final phase: demonstration and documentation

44 WP1 – CrossGrid Application Development
– 1.0 Coordination and management (Peter M. A. Sloot, UvA)
– 1.1 Interactive simulation and visualisation of a biomedical system (G. Dick van Albada, UvA)
– 1.2 Flooding crisis team support (Ladislav Hluchy, II SAS)
– 1.3 Distributed data analysis in HEP (C. Martinez-Rivero, CSIC)
– 1.4 Weather forecast and air pollution modelling (Bogumil Jakubiak, ICM)

45 WP2 – Grid Application Programming Environments
– 2.0 Coordination and management (Holger Marten, FZK)
– 2.1 Tools requirement definition (Roland Wismueller, TUM)
– 2.2 MPI code debugging and verification (Matthias Mueller, USTUTT)
– 2.3 Metrics and benchmarks (Marios Dikaiakos, UCY)
– 2.4 Interactive and semiautomatic performance evaluation tools (Wlodek Funika, Cyfronet)
– 2.5 Integration, testing and refinement (Roland Wismueller, TUM)

46 WP3 – New Grid Services and Tools
– 3.0 Coordination and management (Norbert Meyer, PSNC)
– 3.1 Portals and roaming access (Miroslaw Kupczyk, PSNC)
– 3.2 Grid resource management (Miquel A. Senar, UAB)
– 3.3 Grid monitoring (Brian Coghlan, TCD)
– 3.4 Optimisation of data access (Jacek Kitowski, Cyfronet)
– 3.5 Tests and integration (Santiago Gonzalez, CSIC)

47 WP4 – International Testbed Organization (led by CSIC, Spain)
Partner sites: TCD Dublin; UvA Amsterdam; FZK Karlsruhe; PSNC Poznan; CYFRONET Cracow; ICM & IPJ Warsaw; II SAS Bratislava; UAB Barcelona; LIP Lisbon; USC Santiago; CSIC Santander, Madrid and Valencia; DEMO Athens; AuTh Thessaloniki; UCY Nikosia

48 WP4 – International Testbed Organization
– 4.0 Coordination and management (Jesus Marco, CSIC, Santander): coordination with WP1–3; collaborative tools; integration team
– 4.1 Testbed setup & incremental evolution (Rafael Marco, CSIC, Santander): define installation; deploy testbed releases; trace security issues
Testbed site responsibles:
– CYFRONET (Krakow): A. Ozieblo
– ICM (Warsaw): W. Wislicki
– IPJ (Warsaw): K. Nawrocki
– UvA (Amsterdam): D. van Albada
– FZK (Karlsruhe): M. Kunze
– II SAS (Bratislava): J. Astalos
– PSNC (Poznan): P. Wolniewicz
– UCY (Cyprus): M. Dikaiakos
– TCD (Dublin): B. Coghlan
– CSIC (Santander/Valencia): S. Gonzalez
– UAB (Barcelona): G. Merino
– USC (Santiago): A. Gomez
– UAM (Madrid): J. del Peso
– Demo (Athens): C. Markou
– AuTh (Thessaloniki): D. Sampsonidis
– LIP (Lisbon): J. Martins

49 WP4 – International Testbed Organization
– 4.2 Integration with DataGrid (Marcel Kunze, FZK): coordination of testbed setup; knowledge exchange; participation in WP meetings
– 4.3 Infrastructure support (Josep Salt, CSIC, Valencia): fabric management; help desk; installation kit; network support
– 4.4 Verification & quality control (Jorge Gomes, LIP): feedback; improving the stability of the testbed

50 WP5 – Project Management
– 5.1 Project coordination and administration (Michal Turala, INP)
– 5.2 CrossGrid Architecture Team (Marian Bubak, Cyfronet)
– 5.3 Central dissemination (Yannis Perros, ALGO)

51 Architecture Team – Activity
– Merging of requirements from WP1, WP2, WP3
– Specification of the X# architecture (i.e. new protocols, services, SDKs, APIs)
– Establishing standard operational procedures
– Specification of the structure of deliverables
– Improvement of the X# architecture according to experience from SW development and testbed operation

52 Person-Months

WP     Title                                        PM funded   PM total
WP1    CrossGrid Applications Development           365         537
WP2    Grid Application Programming Environment     156         233
WP3    New Grid Services and Tools                  258         421
WP4    International Testbed Organization           435         567
WP5    Project Management                           102         168
       Total                                        1316        1926

53 Collaboration with other # Projects
– Objective: exchange of information and software components
– Partners: DataGrid, DataTag, GridLab, EUROGRID and GRIP
– GRIDSTART
– Participation in GGF

54 X# – EDG: Grid Architecture
Similar layered structure, similar functionality of components:
– Interoperability of Grids
– Reuse of Grid components
– Joint proposals to GGF
– Participation of the chairmen of the EDG ATF and the X# AT in each other's meetings and other activities

55 X# – EDG: Applications
– Interactive applications: methodology; generic structure; Grid services; data security for medical applications
– HEP applications: X# will extend the functionality of the EDG software

56 X# – EDG: Testbed
– Goal: interoperability of the EDG and X# testbeds
– Joint Grid infrastructure for HEP applications
– X# members from Spain, Germany and Portugal are already taking part in the EDG testbed
– Collaboration of testbed support teams
– Mutual recognition of Certification Authorities
– Elaboration of common access/usage policies and procedures
– Common installation/configuration procedures

57 Summary
– Layered structure of all the X# applications
– Reuse of SW from DataGrid and other # projects
– Globus as the bottom layer of the middleware
– Heterogeneous computer and storage systems
– Distributed development and testing of SW: 12 partners in applications, 14 partners in middleware, 15 partners in testbeds

58 Thanks to: Michal Turala, Peter M. A. Sloot, Roland Wismueller, Wlodek Funika, Marek Garbacz, Ladislav Hluchy, Bartosz Balis, Jacek Kitowski, Norbert Meyer, Jesus Marco

59 Thanks to CESGA for the invitation to HPC'2002. For more about the X# Project see www.eu-crossgrid.org

