CrossGrid: Interactive Applications, Tool Environment, New Grid Services, and Testbed
Marian Bubak
X# TAT, Institute of Computer Science & ACC CYFRONET AGH, Cracow, Poland
GridLab Conference, Zakopane, Poland, September 13, 2002

Overview
– Applications and their requirements
– X# architecture
– Tools for X# applications development
– New grid services
– Structure of the X# Project
– Status and future

CrossGrid in a Nutshell
Interactive and Data-Intensive Applications:
– Interactive simulation and visualization of a biomedical system
– Flooding crisis team support
– Distributed data analysis in HEP
– Weather forecast and air pollution modeling
Grid Application Programming Environment:
– MPI code debugging and verification
– Metrics and benchmarks
– Interactive and semiautomatic performance evaluation tools
New CrossGrid Services:
– Portals and roaming access
– Grid resource management
– Grid monitoring
– Optimization of data access
These sit above the Globus middleware, the DataGrid services, and the fabric, together with the Grid Visualization Kernel, Data Mining, and HLA.

Biomedical Application
– Input: 3-D model of arteries
– Simulation: lattice-Boltzmann (LB) simulation of blood flow
– Results: presented in a virtual-reality environment
– User: analyses the results in near real time, interacts, and changes the structure of the arteries

VR-Interaction [figure]

Steering in the Biomedical Application
[Data-flow diagram: CT/MRI scan → medical DB → segmentation → medical DB → LB flow simulation → visualization and interaction on VE, WD, PC, and PDA clients, with a history database (HDB). Scale: about 10 simulations/day, 60 GB of data, 20 MB/s (see the arithmetic check below).]
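A quick plausibility check on these figures (my own arithmetic, not from the slide): streaming a full 60 GB dataset over the quoted 20 MB/s channel takes

```latex
t = \frac{60\,000\ \mathrm{MB}}{20\ \mathrm{MB/s}} = 3000\ \mathrm{s} \approx 50\ \mathrm{min},
```

so near-real-time steering cannot rely on bulk transfers per session; it has to stream updates from the running simulation.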

Modules of the Biomedical Application
1. Medical scanners – data acquisition system
2. Software for segmentation – to obtain 3-D images
3. Database with medical images and metadata
4. Blood flow simulator with interaction capability
5. History database
6. Visualization for several interactive 3-D platforms
7. Interactive measurement module
8. Interaction module
9. User interface for coupling visualization, simulation, and steering

Interactive Steering in the Biomedical Application
[Same data-flow diagram as above: CT/MRI scan → segmentation → LB flow simulation → visualization and interaction.] The user can adjust simulation parameters while the simulation is running.

Biomedical Application Use Case (1/3)
– Obtaining an MRI scan for the patient
– Image segmentation (a clear picture of the important blood vessels, the location of aneurysms and blockages)
– Generation of a computational mesh for an LB simulation
– Start of a simulation of normal blood flow in the vessels

Biomedical Application Use Case (2/3)
– Generation of alternative computational meshes (several bypass designs) based on the results of the previous step
– Allocation of appropriate Grid resources (one cluster for each computational mesh)
– Initialization of the blood flow simulations for the bypasses
The physician can monitor the progress of the simulations through his portal, with automatic completion notification (e.g. through SMS messages).

Biomedical Application Use Case (3/3)
– Online presentation of simulation results via a 3-D environment
– Adding small modifications to the proposed structure (e.g. changes in angles or positions)
– Immediate initiation of the resulting changes in the blood flow
The progress of the simulation and the estimated time to convergence should be available for inspection.

Asynchronous Execution of the Biomedical Application [figure]

Flooding Crisis Team Support
Data sources:
– storage systems and databases
– surface automatic meteorological and hydrological stations
– systems for acquisition and processing of satellite information
– meteorological radars
External sources of information:
– global and regional GTS centers
– EUMETSAT and NOAA
– hydrological services of other countries
Grid infrastructure:
– meteorological, hydrological, and hydraulic models on high-performance computers
Flood crisis teams:
– meteorologists, hydrologists, hydraulic engineers
Users:
– river authorities, energy, insurance companies, navigation, media, the public

Cascade of Flood Simulations
Data sources → meteorological simulations → hydrological simulations → hydraulic simulations → output visualization → users

Basic Characteristics of Flood Simulation
– Meteorological: intensive simulation (1.5 h/simulation), possibly HPC; large input/output data sets (50–150 MB/event); high availability of resources (24/365)
– Hydrological: parametric simulations (HTC); each sub-catchment may require a different model (heterogeneous simulation)
– Hydraulic: many 1-D simulations (HTC); 2-D hydraulic simulations need HPC

Váh River Pilot Site
– Váh River catchment area: 19,700 km², about 1/3 of Slovakia
– Pilot site between Nosice (inflow point) and Strečno (outflow point)
– Pilot site catchment area: 2,500 km² (above Strečno: 5,500 km²)

Typical Results – Flow and Water Depth [figure]

Distributed Data Analysis in HEP
– Objectives:
   distributed data access
   distributed data mining techniques with neural networks
– Issues:
   typical interactive requests will run on O(TB) of distributed data
   transfer/replication times for the whole data set are of the order of one hour
   data must be transferred once, in advance of the interactive session (see the estimate below)
   the corresponding database servers must be allocated, installed, and set up before the interactive session starts
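Putting the slide's numbers together (my arithmetic, not from the source): replicating on the order of 1 TB within about an hour requires a sustained aggregate rate of roughly

```latex
\frac{10^{12}\ \mathrm{B}}{3600\ \mathrm{s}} \approx 280\ \mathrm{MB/s},
```

which is why the data are moved once, in advance, rather than per interactive request.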

Weather Forecast and Air Pollution Modeling
– Distributed/parallel codes on the Grid:
   COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System)
   STEM-II air pollution code
– Integration of distributed databases
– Data mining applied to downscaling weather forecasts

COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System): Atmospheric Components
– Complex data quality control
– Analysis:
   multivariate optimum interpolation (MVOI) analysis of winds and heights
   univariate analyses of temperature and moisture
   OI analysis of sea surface temperature
– Initialization:
   variational hydrostatic constraint on analysis increments
   digital filter
– Atmospheric model:
   numerics: nonhydrostatic, scheme C, nested grids, sigma-z, flexible lateral BCs
   physics: PBL, convection, explicit moist physics, radiation, surface layer
– Features:
   globally relocatable (5 map projections)
   user-defined grid resolutions, dimensions, and number of nested grids
   6- or 12-hour incremental data assimilation cycle
   can be used for idealized or real-time applications
   single configuration-managed system for all applications
– Operational at FNMOC: 7 areas, twice daily, using 81/27/9 km or 81/27 km grids; forecasts to 72 hours
– Operational at all Navy regional centers (with GUI interface)

Air Pollution Model – STEM-II
– Species: 56 chemical (16 long-lived, 40 short-lived) plus 28 radicals (OH, HO₂)
– Chemical mechanisms: 176 gas-phase reactions, 31 aqueous-phase reactions, 12 aqueous-phase solution equilibria
– Equations are integrated with a locally one-dimensional finite element method (LOD-FEM)
– Transport equations are solved with a Petrov–Crank–Nicolson–Galerkin FEM
– Chemistry and mass-transfer terms are integrated with semi-implicit Euler and pseudo-analytic methods (see the sketch below)
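The semi-implicit Euler treatment of stiff chemistry terms is a standard device; here is a minimal sketch of the scheme the slide names, for a single production/loss equation dc/dt = P − L·c. This is a generic illustration, not STEM-II code, and the rate constants are invented:

```c
#include <stdio.h>

/* Semi-implicit (linearized backward) Euler step for dc/dt = P - L*c,
 * where P is the production rate and L the loss frequency. Treating
 * the loss term implicitly yields
 *   c_{n+1} = (c_n + P*dt) / (1 + L*dt),
 * which stays stable and positive for stiff chemistry. */
static double semi_implicit_euler_step(double c, double P, double L, double dt)
{
    return (c + P * dt) / (1.0 + L * dt);
}

int main(void)
{
    double c = 1.0;        /* initial concentration (arbitrary units) */
    const double P = 0.5;  /* production rate (hypothetical) */
    const double L = 50.0; /* stiff loss frequency, 1/s (hypothetical) */
    const double dt = 0.1; /* time step much larger than 1/L */

    for (int n = 0; n < 10; ++n) {
        c = semi_implicit_euler_step(c, P, L, dt);
        printf("t=%4.1f  c=%.6f\n", (n + 1) * dt, c);
    }
    /* c relaxes monotonically toward the steady state P/L = 0.01. */
    return 0;
}
```

Because the loss term is taken at the new time level, the update remains stable even when dt far exceeds the shortest chemical lifetime 1/L, which is exactly why such schemes are used for fast radicals like OH and HO₂.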

Key Features of X# Applications
– Data: data generators and databases are geographically distributed; data are selected on demand
– Processing: needs large processing capacity, both HPC and HTC; interactive
– Presentation: complex data require versatile 3-D visualisation; must support interaction and feedback to the other components

Overview of the CrossGrid Architecture
– Applications: 1.1 BioMed, 1.2 Flooding, 1.3 Interactive Distributed Data Access, 1.4 Meteo/Pollution
– Supporting tools: 2.2 MPI Verification, 2.3 Metrics and Benchmarks, 2.4 Performance Analysis, 3.1 Portal & Migrating Desktop
– Application-specific services: 1.1 Grid Visualisation Kernel, 1.1 User Interaction Services, 1.3 Data Mining on Grid (NN), HLA and others
– New generic services: 3.1 Roaming Access, 3.2 Scheduling Agents, 3.3 Grid Monitoring, 3.4 Optimization of Grid Data Access
– Reused generic services: DataGrid Job Submission Service and Replica Manager; Globus Replica Manager, Replica Catalog, GRAM, GSI, GIS/MDS, GridFTP, Globus-IO; MPICH-G
– Fabric: resource managers for CPU (CE), secondary storage (SE), tertiary storage, and instruments (satellites, radars); 3.4 Optimization of Local Data Access

Tool Environment
[Diagram: the G-PM tool – with its Performance Measurement Component, High Level Analysis Component, Performance Prediction Component, and User Interface and Visualization Component – sits on top of Grid Monitoring (Task 3.3) and Benchmarks (Task 2.3), which observe applications (WP1) executing on the Grid testbed; the application source code is supplied by manual information transfer. Legend: RMD – raw monitoring data; PMD – performance measurement data.]

MPI Verification
– A tool that verifies the correctness of parallel, distributed Grid applications using the MPI paradigm
– Goal: to make end-user applications portable, reproducible, and reliable on any platform of the Grid
– Technical basis: the MPI profiling interface, which allows a detailed analysis of the MPI application (see the sketch below)
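The profiling interface is part of the MPI standard: every MPI_X entry point has a PMPI_X alias, so a tool can define its own MPI_X wrapper and forward to PMPI_X. A minimal sketch of that mechanism (generic MPI, not the CrossGrid verification tool itself):

```c
#include <mpi.h>
#include <stdio.h>

/* Interposing on MPI_Send via the MPI profiling interface: the tool
 * redefines MPI_Send, records or checks what it needs, then forwards
 * to the real implementation through PMPI_Send. Linking this file
 * ahead of the MPI library intercepts every MPI_Send without touching
 * the application's source. */
int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    int rank;
    PMPI_Comm_rank(comm, &rank);
    /* A verifier would validate arguments here, e.g. count >= 0,
     * datatype validity, or matching of sends and receives. */
    fprintf(stderr, "[verifier] rank %d: MPI_Send %d items to %d (tag %d)\n",
            rank, count, dest, tag);
    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}
```

Because interposition happens at link time, the application needs no recompilation; the verification tool only relinks it.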

Benchmark Categories
– Micro-benchmarks (a minimal example follows this list):
   for identifying basic performance properties of Grid services, sites, and constellations
   test a single performance aspect, through "stress testing" of a simple operation invoked in isolation
   the metrics captured represent computing power (flops), memory capacity and throughput, I/O performance, network performance, ...
– Micro-kernels:
   "stress-test" several performance aspects of a system at once
   generic HPC/HTC kernels, including general and often-used kernels in Grid environments
– Application kernels:
   characteristic of representative CG applications
   capture higher-level metrics, e.g. completion time, throughput, speedup
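The classic ping-pong test is a network micro-benchmark in exactly this sense: it stresses one operation (point-to-point messaging) in isolation and reports latency and bandwidth. A generic sketch, not the CrossGrid GridBench code:

```c
#include <mpi.h>
#include <stdio.h>

#define REPS 1000
#define MSG_BYTES (1 << 20)   /* 1 MiB payload */

int main(int argc, char **argv)
{
    static char buf[MSG_BYTES];
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Bounce a fixed-size message between ranks 0 and 1 and time it. */
    double t0 = MPI_Wtime();
    for (int i = 0; i < REPS; ++i) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double dt = MPI_Wtime() - t0;

    if (rank == 0)
        printf("round trip: %.1f us, bandwidth: %.1f MB/s\n",
               1e6 * dt / REPS,
               2.0 * REPS * MSG_BYTES / dt / 1e6);

    MPI_Finalize();
    return 0;
}
```

Run it with two ranks (e.g. mpirun -np 2 ./pingpong); a real benchmark would sweep the message size and repetition count rather than fix them.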

Performance Measurement Tool G-PM
Components:
– performance measurement component (PMC)
– component for high-level analysis (HLAC)
– component for performance prediction (PPC), based on analytical performance models of application kernels
– user interface and visualization component (UIVC)

For Interactive X# Applications...
– Resource allocation should be done in near-real time (a challenge for the resource broker and scheduling agents)
– Resource reservation (e.g. by prioritizing jobs)
– Network bandwidth reservation (?)
– Near-real-time synchronization between visualization and simulation should be achieved in both directions: user to simulation and simulation to user (rollback, etc.)
– Fault tolerance
– Post-execution cleanup

User Interaction Service
[Diagram: three running simulations, each with its own control module (CM), are connected through User Interaction Services and a Service Factory to the Scheduler (3.2), resource brokers (Condor-G, Nimrod/G), and the visualisation in the VE at the user site. Legend: CM – control module; pure modules vs. UIS services; UIS connections vs. other connections.]

Tools Environment and Grid Monitoring
[Diagram: applications, Portals (3.1), the G-PM performance measurement tools (2.4), MPI debugging and verification (2.2), and metrics and benchmarks (2.3) all interact with Grid Monitoring (3.3) (OCM-G, R-GMA).]
The application programming environment requires information from the Grid about the current status of applications, and it should be able to manipulate them.

Monitoring of Grid Applications
– To monitor = to obtain information on or manipulate the target application
– e.g. read the status of the application's processes, suspend the application, read/write memory, etc.
– A monitoring module is needed by tools:
   debuggers
   performance analyzers
   visualizers
   ...

CrossGrid Monitoring System [architecture figure]

Very Short Overview of OMIS
– Target system view:
   a hierarchical set of objects: nodes, processes, threads
   for the Grid: new objects – sites
   objects identified by tokens, e.g. n_1, p_1, etc.
– Three types of services:
   information services
   manipulation services
   event services

OMIS Services
– Information services: obtain information on the target system, e.g. node_get_info – obtain information on nodes in the target system
– Manipulation services: perform manipulations on the target system, e.g. thread_stop – stop specified threads
– Event services: detect events in the target system, e.g. thread_started_libcall – detect invocations of specified functions
– Information + manipulation services = actions (see the sketch below)
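OMIS requests are textual, and the OMIS literature combines event services with actions in conditional requests of the form event(...): action(...). The toy dispatcher below only illustrates that pattern using the service names from this slide; the request strings and parameters are hypothetical, and this is not the OCM-G API:

```c
#include <stdio.h>
#include <string.h>

/* Distinguish unconditional requests (information/manipulation
 * services invoked directly) from conditional requests, where the
 * actions after the colon run whenever the event before it fires. */
static void handle(const char *request)
{
    const char *colon = strchr(request, ':');
    if (colon) {
        printf("conditional request\n  event : %.*s\n  action: %s\n",
               (int)(colon - request), request, colon + 1);
    } else {
        printf("unconditional request: %s\n", request);
    }
}

int main(void)
{
    /* Information service: query node n_1. */
    handle("node_get_info([n_1])");
    /* Manipulation service: stop thread t_1. */
    handle("thread_stop([t_1])");
    /* Event + action: when p_1 calls MPI_Send, stop it. */
    handle("thread_started_libcall([p_1],\"MPI_Send\"): thread_stop([p_1])");
    return 0;
}
```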

Components of OCM-G
– Service Managers:
   one per site in the system
   permanent
   request distribution and reply collection
– Local Monitors:
   one per [node, user] pair
   transient (created and destroyed when needed)
   handle local objects
   actual execution of requests

Monitoring Environment
– OCM-G components: Service Managers and Local Monitors
– Application processes
– Tool(s)
– External name service, for component discovery

Security Issues
– OCM-G components handle multiple users, tools, and applications:
   possibility to issue a fake request (e.g. posing as a different user)
   authentication and authorization are therefore needed
– Local Monitors are allowed to perform manipulations: an unauthorized user could do anything

Portals and Roaming Access
– Allow users to access their environment from remote computers
– Independent of the system version and hardware
– Run applications, manage data files, store personal settings
Components: the Roaming Access Server (user profiles; authentication and authorization; job submission), the Migrating Desktop, and the application portal. [Diagram: applications and Portals (3.1) connect to the Roaming Access Server (3.1), the Scheduler (3.2), GIS/MDS (Globus), and Grid Monitoring (3.3).]

Optimization of Grid Data Access
– Motivation: different storage systems and different applications' requirements
– Optimization by selection of data handlers (a toy illustration follows)
– The service consists of: a component-expert system, a data-access estimator, and a GridFTP plugin
[Diagram: applications and Portals (3.1) use Optimization of Grid Data Access (3.4), which interacts with the Scheduling Agents (3.2), the Replica Manager (DataGrid/Globus), Grid Monitoring (3.3), and GridFTP.]
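To make "optimization by selection of data handlers" concrete, here is a toy version of what a data-access estimator could do: predict the cost of serving a request through each candidate handler and pick the cheapest. All handlers and numbers are invented for illustration; this is not the CrossGrid component-expert system:

```c
#include <stdio.h>

/* Each candidate handler is characterized by an estimated startup
 * latency and sustained bandwidth; the estimator predicts the total
 * access time for a request of a given size. */
struct handler {
    const char *name;
    double latency_s;     /* estimated startup cost, seconds */
    double bandwidth_mbs; /* estimated sustained rate, MB/s */
};

static double estimate_cost(const struct handler *h, double size_mb)
{
    return h->latency_s + size_mb / h->bandwidth_mbs;
}

int main(void)
{
    struct handler handlers[] = {
        { "local-disk",      0.01, 100.0 },
        { "gridftp-replica", 2.0,   20.0 },
        { "tape-archive",   30.0,   60.0 },
    };
    const int n = sizeof handlers / sizeof handlers[0];
    double size_mb = 150.0;   /* e.g. one flood-forecast event file */
    int best = 0;

    /* Select the handler with the lowest estimated access time. */
    for (int i = 1; i < n; ++i)
        if (estimate_cost(&handlers[i], size_mb) <
            estimate_cost(&handlers[best], size_mb))
            best = i;

    printf("request of %.0f MB -> handler '%s' (est. %.2f s)\n",
           size_mb, handlers[best].name,
           estimate_cost(&handlers[best], size_mb));
    return 0;
}
```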

CrossGrid Collaboration
– Poland: Cyfronet & INP Cracow, PSNC Poznan, ICM & IPJ Warsaw
– Portugal: LIP Lisbon
– Spain: CSIC Santander, Valencia & RedIris, UAB Barcelona, USC Santiago & CESGA
– Ireland: TCD Dublin
– Italy: DATAMAT
– Netherlands: UvA Amsterdam
– Germany: FZK Karlsruhe, TUM Munich, USTU Stuttgart
– Slovakia: II SAS Bratislava
– Greece: Algosystems & Demo Athens, AuTh Thessaloniki
– Cyprus: UCY Nicosia
– Austria: U. Linz

WP1 – CrossGrid Application Development
– 1.0 Co-ordination and management (Peter M.A. Sloot, UvA)
– 1.1 Interactive simulation and visualisation of a biomedical system (G. Dick van Albada, UvA)
– 1.2 Flooding crisis team support (Ladislav Hluchy, II SAS)
– 1.3 Distributed data analysis in HEP (C. Martinez-Rivero, CSIC)
– 1.4 Weather forecast and air pollution modelling (Bogumil Jakubiak, ICM)

WP2 – Grid Application Programming Environments
– 2.0 Co-ordination and management (Holger Marten, FZK)
– 2.1 Tools requirement definition (Roland Wismueller, TUM)
– 2.2 MPI code debugging and verification (Matthias Mueller, USTUTT)
– 2.3 Metrics and benchmarks (Marios Dikaiakos, UCY)
– 2.4 Interactive and semiautomatic performance evaluation tools (Wlodek Funika, Cyfronet)
– 2.5 Integration, testing and refinement (Roland Wismueller, TUM)

WP3 – New Grid Services and Tools
– 3.0 Co-ordination and management (Norbert Meyer, PSNC)
– 3.1 Portals and roaming access (Miroslaw Kupczyk, PSNC)
– 3.2 Grid resource management (Miquel A. Senar, UAB)
– 3.3 Grid monitoring (Brian Coghlan, TCD)
– 3.4 Optimisation of data access (Jacek Kitowski, Cyfronet)
– 3.5 Tests and integration (Santiago Gonzalez, CSIC)

WP4 – International Testbed Organization
– 4.0 Coordination and management (Jesus Marco, CSIC, Santander): coordination with WP1–3, collaborative tools, Integration Team
– 4.1 Testbed setup & incremental evolution (Rafael Marco, CSIC, Santander): define installation, deploy testbed releases, trace security issues
Testbed site responsibles:
– CYFRONET (Krakow): A. Ozieblo
– ICM (Warsaw): W. Wislicki
– IPJ (Warsaw): K. Nawrocki
– UvA (Amsterdam): D. van Albada
– FZK (Karlsruhe): M. Kunze
– II SAS (Bratislava): J. Astalos
– PSNC (Poznan): P. Wolniewicz
– UCY (Cyprus): M. Dikaiakos
– TCD (Dublin): B. Coghlan
– CSIC (Santander/Valencia): S. Gonzalez
– UAB (Barcelona): G. Merino
– USC (Santiago): A. Gomez
– UAM (Madrid): J. del Peso
– Demo (Athens): C. Markou
– AuTh (Thessaloniki): D. Sampsonidis
– LIP (Lisbon): J. Martins

WP4 – International Testbed Organization (cont.)
– 4.2 Integration with DataGrid (Marcel Kunze, FZK): coordination of testbed setup, knowledge exchange, participation in WP meetings
– 4.3 Infrastructure support (Josep Salt, CSIC, Valencia): fabric management, HelpDesk, installation kit, network support
– 4.4 Verification & quality control (Jorge Gomes, LIP): feedback, improving the stability of the testbed

CrossGrid Testbed Map
Sites, interconnected via Géant: TCD Dublin, UvA Amsterdam, FZK Karlsruhe, CYFRONET Cracow, ICM & IPJ Warsaw, PSNC Poznan, II SAS Bratislava, UCY Nicosia, DEMO Athens, AuTh Thessaloniki, CSIC-UC IFCA Santander, CSIC RedIris Madrid, CSIC IFIC Valencia, UAB Barcelona, USC Santiago, LIP Lisbon

WP5 – Project Management
– 5.1 Project coordination and administration (Michal Turala, INP)
– 5.2 CrossGrid Architecture Team (Marian Bubak, Cyfronet)
– 5.3 Central dissemination (Yannis Perros, ALGO)

EU-Funded Grid Project Space (Kyriakos Baxevanidis)
[Diagram: application-oriented projects (DATAGRID, CROSSGRID, DATATAG, GRIDLAB, GRIA, EGSO) and middleware & tools projects (GRIP, EUROGRID, DAMIEN) over the underlying infrastructures, spanning science and industry/business, with links to European national efforts and to US projects (GriPhyN, PPDG, iVDGL, ...).]

Project Phases
– M 1–3: requirements definition and merging
– M …: first development phase: design, 1st prototypes, refinement of requirements
– M …: second development phase: integration of components, 2nd prototypes
– M …: third development phase: complete integration, final code versions
– M …: final phase: demonstration and documentation

Rules for X# SW Development
– Iterative improvement: development, testing on the testbed, evaluation, improvement
– Modularity
– Open-source approach
– Well-documented software
– Collaboration with the other # projects

Collaboration with Other # Projects
– Objective: exchange of information and software components
– Partners: DataGrid, DataTag, others from GRIDSTART (of course, with GridLab)
– Participation in GGF

Status after M6
– Software Requirements Specifications, together with use cases
– CrossGrid Architecture defined
– Detailed design documents for the tools and the new Grid services (OO approach, UML)
– Analysis of security issues and a first proposal of solutions
– Detailed description of the test and integration procedures
– First testbed experience:
   sites: LIP, FZK, CSIC+USC, PSNC, AuTH+Demo
   basis: EDG release 1.2
   applications: EDG HEP simulations (ATLAS, CMS); first distributed prototypes using MPI: distributed NN training, evolutionary algorithms

Near Future
– Participation in the production testbed with DataGrid:
   all sites will be ready to join by the end of September
   common demo at IST 2002, Copenhagen, November 4th–6th
– Collaboration with DataGrid on specific points (e.g. user support and helpdesk software)
– CrossGrid Workshop, Linz (with EuroPVM/MPI 2002), September 28th–29th
– "Across Grids" conference, together with the R&I Forum, Santiago de Compostela, Spain, February 9th–14th, 2003, with proceedings (reviewed papers)

Linz CrossGrid Workshop, September 28th–29th
– Evaluate the current status of all tasks
– Discuss interfaces and functionality
– Understand what we may expect as first prototypes
– Coordinate the operation of the X# testbed
– Agree on common rules for software development (SOP)
– Start to organize the first CrossGrid EU review
– Meet with EU DataGrid representatives
– Discuss the technology for the future (OGSA)
Details at

Summary
– Layered structure of all the X# applications
– Reuse of software from DataGrid and other # projects
– Globus as the bottom layer of the middleware
– Heterogeneous computer and storage systems
– Distributed development and testing of software
– 12 partners in applications, 14 partners in middleware, 15 partners in testbeds; 21 partners in total
– First 6 months: successful

Thanks to
Michal Turala, Kasia Zajac, Maciek Malawski, Marek Garbacz, Peter M.A. Sloot, Roland Wismueller, Wlodek Funika, Ladislav Hluchy, Bartosz Balis, Jacek Kitowski, Norbert Meyer, Jesus Marco, and Marcel Kunze
