
GridLab Conference, Zakopane, Poland, September 13, 2002. CrossGrid: Interactive Applications, Tool Environment, New Grid Services, and Testbed. Marian Bubak.


1 CrossGrid: Interactive Applications, Tool Environment, New Grid Services, and Testbed
Marian Bubak (X# TAT), Institute of Computer Science & ACC CYFRONET AGH, Cracow, Poland
www.eu-crossgrid.org

2 Overview
– Applications and their requirements
– X# architecture
– Tools for X# application development
– New grid services
– Structure of the X# Project
– Status and future

3 CrossGrid in a Nutshell
Interactive and data-intensive applications:
– Interactive simulation and visualization of a biomedical system
– Flooding crisis team support
– Distributed data analysis in HEP
– Weather forecast and air pollution modeling
Grid application programming environment:
– MPI code debugging and verification
– Metrics and benchmarks
– Interactive and semiautomatic performance evaluation tools
New CrossGrid services:
– Portals and roaming access
– Grid resource management
– Grid monitoring
– Optimization of data access
Built above the Grid Visualization Kernel, data mining, HLA, DataGrid services, Globus middleware, and the fabric layer.

4 Biomedical Application
– Input: 3-D model of arteries
– Simulation: lattice-Boltzmann (LB) blood flow
– Results: presented in a virtual reality environment
– User: analyses results in near real time, interacts, changes the structure of the arteries

5 VR Interaction

6 Steering in the Biomedical Application
Pipeline: CT / MRI scan → medical DB → segmentation → medical DB → LB flow simulation → visualization and interaction (VE, workstation, PC, PDA), with a history database (HDB).
Figures from the slide: 10 simulations/day, 60 GB, 20 MB/s.
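A quick sanity check on the data-rate figures above (assuming the 60 GB is one simulation's dataset and the 20 MB/s its streaming rate; the slide lists the numbers without tying them together):

```python
dataset_gb = 60      # data volume per simulation, from the slide
rate_mb_s = 20       # streaming rate, from the slide

seconds = dataset_gb * 1000 / rate_mb_s   # decimal units: 1 GB = 1000 MB
print(seconds / 60)                       # -> 50.0 (minutes to stream one dataset)
```

At ten simulations a day this would keep the 20 MB/s channel busy for roughly eight hours, which is why the pipeline streams results rather than shipping them in batch.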

7 Modules of the Biomedical Application
1. Medical scanners – data acquisition system
2. Segmentation software – to obtain 3-D images
3. Database of medical images and metadata
4. Blood flow simulator with interaction capability
5. History database
6. Visualization for several interactive 3-D platforms
7. Interactive measurement module
8. Interaction module
9. User interface coupling visualization, simulation, and steering

8 Interactive Steering in the Biomedical Application
Same pipeline as slide 6 (CT / MRI scan → medical DB → segmentation → LB flow simulation → visualization and interaction); the user can adjust simulation parameters while the simulation is running.

9 Biomedical Application Use Case (1/3)
– Obtain an MRI scan for the patient
– Image segmentation (clear picture of the important blood vessels; location of aneurysms and blockages)
– Generate a computational mesh for an LB simulation
– Start a simulation of normal blood flow in the vessels

10 Biomedical Application Use Case (2/3)
– Generate alternative computational meshes (several bypass designs) based on results from the previous step
– Allocate appropriate Grid resources (one cluster for each computational mesh)
– Initialize the blood flow simulations for the bypasses
The physician can monitor the progress of the simulations through a portal, with automatic completion notification (e.g. through SMS messages).

11 Biomedical Application Use Case (3/3)
– Online presentation of simulation results in a 3-D environment
– Small modifications to the proposed structure (e.g. changes in angles or positions)
– Immediate propagation of the resulting changes to the blood flow simulation
The progress of the simulation and the estimated time to convergence should be available for inspection.

12 Asynchronous Execution of the Biomedical Application

13 Flooding Crisis Team Support
Data sources: surface automatic meteorological and hydrological stations, systems for acquisition and processing of satellite information, meteorological radars, storage systems and databases.
External sources of information: global and regional GTS centers, EUMETSAT and NOAA, hydrological services of other countries.
Processing: meteorological, hydrological, and hydraulic models on high-performance computers over the Grid infrastructure.
Flood crisis teams: meteorologists, hydrologists, hydraulic engineers.
Users: river authorities, energy and insurance companies, navigation, media, the public.

14 Cascade of Flood Simulations
Data sources → meteorological simulations → hydrological simulations → hydraulic simulations → output visualization → users.

15 Basic Characteristics of Flood Simulation
– Meteorological: intensive simulation (1.5 h/simulation), possibly HPC; large input/output data sets (50–150 MB/event); high availability of resources (24/365)
– Hydrological: parametric simulations – HTC; each sub-catchment may require a different model (heterogeneous simulation)
– Hydraulic: many 1-D simulations – HTC; 2-D hydraulic simulations need HPC

16 Váh River Pilot Site
– Váh River catchment area: 19,700 km², one third of Slovakia
– Inflow point: Nosice; outflow point: Strečno
– Pilot site catchment area: 2,500 km² (above Strečno: 5,500 km²)

17 Typical Results – Flow and Water Depth

18 Distributed Data Analysis in HEP
– Objectives: distributed data access; distributed data mining techniques with neural networks
– Issues:
Typical interactive requests will run on O(TB) of distributed data
Transfer/replication times for the whole data set are of the order of one hour
Data must therefore be transferred once, in advance of the interactive session
The corresponding database servers must be allocated, installed, and set up before the interactive session starts
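The one-hour figure is easy to reproduce with a back-of-the-envelope estimate (the 2.5 Gbit/s link rate below is illustrative, not from the slide):

```python
def transfer_time_hours(data_tb, link_gbps, efficiency=1.0):
    """Naive estimate: data volume divided by the effective link rate."""
    bits = data_tb * 1e12 * 8            # TB -> bits (decimal units)
    rate = link_gbps * 1e9 * efficiency  # effective bits per second
    return bits / rate / 3600

# 1 TB over a fully utilised 2.5 Gbit/s path takes about 0.9 h
print(round(transfer_time_hours(1.0, 2.5), 2))
```

Any real transfer runs below line rate, so `efficiency` well under 1 pushes the estimate past an hour, which is exactly why the data must move before the interactive session starts.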

19 Weather Forecast and Air Pollution Modeling
– Distributed/parallel codes on the Grid: Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS); STEM-II air pollution code
– Integration of distributed databases
– Data mining applied to downscaling weather forecasts

20 COAMPS – Coupled Ocean/Atmosphere Mesoscale Prediction System: Atmospheric Components
– Complex data quality control
– Analysis: multivariate optimum interpolation (MVOI) of winds and heights; univariate analyses of temperature and moisture; OI analysis of sea surface temperature
– Initialization: variational hydrostatic constraint on analysis increments; digital filter
– Atmospheric model numerics: nonhydrostatic, scheme C, nested grids, sigma-z, flexible lateral BCs
– Physics: PBL, convection, explicit moist physics, radiation, surface layer
– Features: globally relocatable (5 map projections); user-defined grid resolutions, dimensions, and number of nested grids; 6- or 12-hour incremental data assimilation cycle; usable for idealized or real-time applications; single configuration-managed system for all applications
– Operational at FNMOC: 7 areas, twice daily, using 81/27/9 km or 81/27 km grids; forecasts to 72 hours
– Operational at all Navy regional centers (with GUI interface)

21 Air Pollution Model – STEM-II
– Species: 56 chemical; 16 long-lived, 40 short-lived, 28 radicals (OH, HO2)
– Chemical mechanisms: 176 gas-phase reactions; 31 aqueous-phase reactions; 12 aqueous-phase solution equilibria
– Equations are integrated with a locally 1-D finite element method (LOD-FEM)
– Transport equations are solved with a Petrov–Crank–Nicolson–Galerkin FEM
– Chemistry and mass-transfer terms are integrated with semi-implicit Euler and pseudo-analytic methods
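For stiff chemistry terms, a semi-implicit (linearized backward) Euler step evaluates the loss term at the new time level, which keeps large steps stable. A minimal sketch of the scheme named on the slide, applied to a toy single-species equation dc/dt = P − L·c (not the actual STEM-II code):

```python
def semi_implicit_euler(c0, production, loss_rate, dt, steps):
    """Integrate dc/dt = P - L*c with c_{n+1} = (c_n + dt*P) / (1 + dt*L).

    Treating the loss term implicitly keeps the step stable even when
    dt*L is large (stiff chemistry), unlike explicit Euler.
    """
    c = c0
    for _ in range(steps):
        c = (c + dt * production) / (1 + dt * loss_rate)
    return c

# Steady state of dc/dt = P - L*c is P/L = 0.5; the scheme converges to it
# even with dt*L = 40, where explicit Euler would blow up.
print(round(semi_implicit_euler(0.0, production=2.0, loss_rate=4.0, dt=10.0, steps=50), 4))
```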

22 Key Features of X# Applications
– Data: data generators and databases are geographically distributed; data selected on demand
– Processing: needs large processing capacity, both HPC and HTC; interactive
– Presentation: complex data require versatile 3-D visualisation; support for interaction and feedback to other components

23 Overview of the CrossGrid Architecture (layer diagram, top to bottom)
– Applications: 1.1 BioMed; 1.2 Flooding; 1.3 Interactive distributed data access; 1.4 Meteo/Pollution
– Supporting tools: 2.2 MPI verification; 2.3 Metrics and benchmarks; 2.4 Performance analysis; 3.1 Portal & migrating desktop
– Application-specific services: 1.1 Grid Visualisation Kernel; 1.1 User interaction services; 1.3 Data mining on Grid (NN); HLA and others
– New generic services: 3.1 Roaming access; 3.2 Scheduling agents; 3.3 Grid monitoring; 3.4 Optimization of Grid data access
– DataGrid / Globus services: replica manager, replica catalog, job submission service, GRAM, GSI, GIS/MDS, GridFTP, Globus-IO, MPICH-G
– Fabric: resource managers (CE, SE), CPU, secondary and tertiary storage, 3.4 optimization of local data access, instruments (satellites, radars)

24 Tool Environment
– G-PM components: performance measurement component, high-level analysis component, performance prediction component, user interface and visualization component
– Grid monitoring (Task 3.3) delivers raw monitoring data (RMD); benchmarks (Task 2.3), the application source code, and applications (WP1) executing on the Grid testbed feed the analysis
– Legend: RMD – raw monitoring data; PMD – performance measurement data; data flow vs. manual information transfer

25 MPI Verification
– A tool that verifies the correctness of parallel, distributed Grid applications using the MPI paradigm
– Goal: to make end-user applications portable, reproducible, and reliable on any platform of the Grid
– Technical basis: the MPI profiling interface, which allows a detailed analysis of the MPI application
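The profiling interface works by interposition: every MPI_X entry point can be overridden by the tool, which records the event and then calls the real implementation through the name-shifted PMPI_X symbol. The same idea in a small Python sketch (the `send` function is a hypothetical stand-in, not an MPI binding):

```python
import functools

call_log = []  # events seen by the "verification layer"

def profiled(fn):
    """Intercept a communication call before delegating to the real one,
    as a PMPI wrapper intercepts MPI_* before calling PMPI_*."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        call_log.append((fn.__name__, args, kwargs))  # record the event
        return fn(*args, **kwargs)                    # delegate to the real call
    return wrapper

@profiled
def send(dest, tag, payload):  # hypothetical stand-in for MPI_Send
    return len(payload)

send(1, 0, b"abc")
print(call_log)  # the checker can now inspect every recorded call
```

A verifier built this way can check, for example, that every send is matched by a receive, without modifying the application source.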

26 Benchmark Categories
– Micro-benchmarks: identify basic performance properties of Grid services, sites, and constellations; test a single performance aspect through "stress testing" of a simple operation invoked in isolation; captured metrics represent computing power (flops), memory capacity and throughput, I/O performance, network performance, ...
– Micro-kernels: stress-test several performance aspects of a system at once; generic HPC/HTC kernels, including general and often-used kernels in Grid environments
– Application kernels: characteristic of representative CG applications; capture higher-level metrics, e.g. completion time, throughput, speedup
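The micro-benchmark recipe above (one simple operation, invoked in isolation, many times) can be sketched in a few lines; the harness below is a generic illustration, not CrossGrid's benchmark suite:

```python
import time

def microbenchmark(op, n_calls=100_000, repeats=5):
    """Time one simple operation in isolation: run it many times,
    repeat the whole run, and keep the least-perturbed (fastest) run."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        for _ in range(n_calls):
            op()
        best = min(best, time.perf_counter() - t0)
    return best / n_calls  # seconds per call

per_call = microbenchmark(lambda: 3.14 * 2.71 + 1.0)
print(f"{per_call * 1e9:.1f} ns per call")
```

Taking the best of several repeats is the usual way to suppress scheduler noise; for Grid benchmarks the same loop would wrap a service invocation or a network transfer instead of an arithmetic expression.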

27 Performance Measurement Tool G-PM
Components:
– performance measurement component (PMC)
– high-level analysis component (HLAC)
– performance prediction component (PPC), based on analytical performance models of application kernels
– user interface and visualization component (UIVC)

28 For Interactive X# Applications...
– Resource allocation should be done in near real time (a challenge for the resource broker and scheduling agents)
– Resource reservation (e.g. by prioritizing jobs)
– Network bandwidth reservation (?)
– Near-real-time synchronization between visualization and simulation, in both directions: user to simulation and simulation to user (rollback, etc.)
– Fault tolerance
– Post-execution cleanup

29 User Interaction Service (diagram)
The user site hosts the visualisation in a VE together with a User Interaction Service; a service factory creates further UIS instances. Each running simulation (Sim 1–3) has its own control module (CM), connected through UIS links to the resource brokers and the scheduler (3.2), with Condor-G and Nimrod-G as back ends.

30 Tools Environment and Grid Monitoring
Applications, portals (3.1), G-PM performance measurement tools (2.4), MPI debugging and verification (2.2), and metrics and benchmarks (2.3) all connect to Grid monitoring (3.3) (OCM-G, R-GMA).
The application programming environment requires information from the Grid about the current status of applications, and it should be able to manipulate them.

31 Monitoring of Grid Applications
– To monitor = to obtain information on, or manipulate, a target application: e.g. read the status of the application's processes, suspend the application, read/write memory, etc.
– A monitoring module is needed by tools: debuggers, performance analyzers, visualizers, ...

32 CrossGrid Monitoring System

33 Very Short Overview of OMIS
– Target system view: a hierarchical set of objects – nodes, processes, threads; for the Grid, new objects: sites; objects are identified by tokens, e.g. n_1, p_1
– Three types of services: information services, manipulation services, event services

34 OMIS Services
– Information services obtain information on the target system, e.g. node_get_info = obtain information on nodes in the target system
– Manipulation services perform manipulations on the target system, e.g. thread_stop = stop specified threads
– Event services detect events in the target system, e.g. thread_started_libcall = detect invocations of specified functions
– Information + manipulation services = actions

35 Components of OCM-G
– Service Managers: one per site in the system; permanent; request distribution and reply collection
– Local Monitors: one per (node, user) pair; transient (created and destroyed as needed); handle local objects; actual execution of requests
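The split above — a permanent per-site manager fanning requests out to transient per-node monitors and collecting the replies — can be sketched like this (class and method names are hypothetical, not the OCM-G API):

```python
class LocalMonitor:
    """Transient per-(node, user) component that executes requests locally."""
    def __init__(self, node):
        self.node = node

    def execute(self, request):
        return f"{self.node}: done({request})"

class ServiceManager:
    """Permanent per-site component: distributes requests, collects replies."""
    def __init__(self):
        self.monitors = {}  # node -> LocalMonitor, created on demand

    def handle(self, request, nodes):
        replies = []
        for node in nodes:
            # transient Local Monitors: spawn one only when a node is first targeted
            lm = self.monitors.setdefault(node, LocalMonitor(node))
            replies.append(lm.execute(request))
        return replies

sm = ServiceManager()
print(sm.handle("node_get_info", ["n_1", "n_2"]))
# -> ['n_1: done(node_get_info)', 'n_2: done(node_get_info)']
```

The token names (`n_1`, `n_2`) follow the OMIS convention mentioned on slide 33; the request string reuses the `node_get_info` example from slide 34.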

36 Monitoring Environment
– OCM-G components: Service Managers, Local Monitors
– Application processes
– Tool(s)
– External name service for component discovery

37 Security Issues
– OCM-G components handle multiple users, tools, and applications; it is possible to issue a fake request (e.g. posing as a different user), so authentication and authorization are needed
– Local Monitors are allowed to perform manipulations, so without authorization an unauthorized user could do anything

38 Portals and Roaming Access
– Allow users to access their environment from remote computers, independently of the system version and hardware
– Run applications, manage data files, store personal settings
– Roaming Access Server (3.1): user profiles, authentication, authorization, job submission
– Components: Migrating Desktop and an application portal, interfacing to the scheduler (3.2), GIS/MDS (Globus), and Grid monitoring (3.3)

39 Optimization of Grid Data Access
– Motivation: different storage systems and different application requirements
– The service consists of a component-expert system, a data-access estimator, and a GridFTP plugin
– Optimization by selection of data handlers
– Interfaces: portals (3.1), scheduling agents (3.2), replica manager (DataGrid / Globus), Grid monitoring (3.3), GridFTP

40 CrossGrid Collaboration
– Poland: Cyfronet & INP Cracow; PSNC Poznan; ICM & IPJ Warsaw
– Portugal: LIP Lisbon
– Spain: CSIC Santander, Valencia & RedIris; UAB Barcelona; USC Santiago & CESGA
– Ireland: TCD Dublin
– Italy: DATAMAT
– Netherlands: UvA Amsterdam
– Germany: FZK Karlsruhe; TUM Munich; USTU Stuttgart
– Slovakia: II SAS Bratislava
– Greece: Algosystems; Demo Athens; AuTh Thessaloniki
– Cyprus: UCY Nicosia
– Austria: U. Linz

41 WP1 – CrossGrid Application Development
Tasks:
1.0 Co-ordination and management (Peter M.A. Sloot, UvA)
1.1 Interactive simulation and visualisation of a biomedical system (G. Dick van Albada, UvA)
1.2 Flooding crisis team support (Ladislav Hluchy, II SAS)
1.3 Distributed data analysis in HEP (C. Martinez-Rivero, CSIC)
1.4 Weather forecast and air pollution modelling (Bogumil Jakubiak, ICM)

42 WP2 – Grid Application Programming Environments
Tasks:
2.0 Co-ordination and management (Holger Marten, FZK)
2.1 Tools requirement definition (Roland Wismueller, TUM)
2.2 MPI code debugging and verification (Matthias Mueller, USTUTT)
2.3 Metrics and benchmarks (Marios Dikaiakos, UCY)
2.4 Interactive and semiautomatic performance evaluation tools (Wlodek Funika, Cyfronet)
2.5 Integration, testing and refinement (Roland Wismueller, TUM)

43 WP3 – New Grid Services and Tools
Tasks:
3.0 Co-ordination and management (Norbert Meyer, PSNC)
3.1 Portals and roaming access (Miroslaw Kupczyk, PSNC)
3.2 Grid resource management (Miquel A. Senar, UAB)
3.3 Grid monitoring (Brian Coghlan, TCD)
3.4 Optimisation of data access (Jacek Kitowski, Cyfronet)
3.5 Tests and integration (Santiago Gonzalez, CSIC)

44 WP4 – International Testbed Organization
Tasks:
4.0 Coordination and management (Jesus Marco, CSIC, Santander): coordination with WP1–3; collaborative tools; integration team
4.1 Testbed setup & incremental evolution (Rafael Marco, CSIC, Santander): define installation; deploy testbed releases; trace security issues
Testbed site responsibles:
– CYFRONET (Krakow): A. Ozieblo
– ICM (Warsaw): W. Wislicki
– IPJ (Warsaw): K. Nawrocki
– UvA (Amsterdam): D. van Albada
– FZK (Karlsruhe): M. Kunze
– II SAS (Bratislava): J. Astalos
– PSNC (Poznan): P. Wolniewicz
– UCY (Cyprus): M. Dikaiakos
– TCD (Dublin): B. Coghlan
– CSIC (Santander/Valencia): S. Gonzalez
– UAB (Barcelona): G. Merino
– USC (Santiago): A. Gomez
– UAM (Madrid): J. del Peso
– Demo (Athens): C. Markou
– AuTh (Thessaloniki): D. Sampsonidis
– LIP (Lisbon): J. Martins

45 WP4 – International Testbed Organization (cont.)
4.2 Integration with DataGrid (Marcel Kunze, FZK): coordination of testbed setup; knowledge exchange; participation in WP meetings
4.3 Infrastructure support (Josep Salt, CSIC, Valencia): fabric management; HelpDesk; installation kit; network support
4.4 Verification & quality control (Jorge Gomes, LIP): feedback; improving the stability of the testbed

46 CrossGrid Testbed Map
Sites (interconnected via Géant): UCY Nicosia, Demo Athens, AuTh Thessaloniki, CYFRONET Cracow, ICM & IPJ Warsaw, PSNC Poznan, CSIC IFIC Valencia, UAB Barcelona, CSIC-UC IFCA Santander, CSIC RedIris Madrid, LIP Lisbon, USC Santiago, TCD Dublin, UvA Amsterdam, FZK Karlsruhe, II SAS Bratislava.

47 WP5 – Project Management
Tasks:
5.1 Project coordination and administration (Michal Turala, INP)
5.2 CrossGrid Architecture Team (Marian Bubak, Cyfronet)
5.3 Central dissemination (Yannis Perros, ALGO)

48 EU Funded Grid Project Space (Kyriakos Baxevanidis)
Projects range from applications (GRIDLAB, CROSSGRID, DATAGRID, GRIA, EGSO, DATATAG) down to middleware & tools (GRIP, EUROGRID, DAMIEN) and the underlying infrastructures, spanning both science and industry/business.
– Links with European national efforts
– Links with US projects (GriPhyN, PPDG, iVDGL, ...)

49 Project Phases
– M1–3: requirements definition and merging
– M4–12: first development phase: design, first prototypes, refinement of requirements
– M13–24: second development phase: integration of components, second prototypes
– M25–32: third development phase: complete integration, final code versions
– M33–36: final phase: demonstration and documentation

50 Rules for X# SW Development
– Iterative improvement: development, testing on the testbed, evaluation, improvement
– Modularity
– Open-source approach
– Well-documented software
– Collaboration with other # projects

51 Collaboration with Other # Projects
– Objective: exchange of information and software components
– Partners: DataGrid, DataTag, others from GRIDSTART (of course, including GridLab)
– Participation in GGF

52 Status after M6
– Software Requirements Specifications, together with use cases
– CrossGrid architecture defined
– Detailed design documents for the tools and the new Grid services (OO approach, UML)
– Analysis of security issues and a first proposal of solutions
– Detailed description of the test and integration procedures
– First testbed experience:
  Sites: LIP, FZK, CSIC+USC, PSNC, AuTH+Demo
  Basis: EDG release 1.2
  Applications: EDG HEP simulations (ATLAS, CMS); first distributed prototypes using MPI: distributed NN training, evolutionary algorithms

53 Near Future
– Participation in the production testbed with DataGrid: all sites will be ready to join by the end of September; common demo at IST 2002, Copenhagen, November 4–6
– Collaboration with DataGrid on specific points (e.g. user support and helpdesk software)
– CrossGrid Workshop, Linz (with EuroPVM/MPI 2002), September 28–29
– "Across Grids" conference, together with the R&I Forum, Santiago de Compostela, Spain, February 9–14, 2003; with proceedings (reviewed papers)

54 Linz CrossGrid Workshop, September 28–29
– Evaluate the current status of all tasks
– Discuss interfaces and functionality
– Understand what to expect from the first prototypes
– Coordinate the operation of the X# testbed
– Agree on common rules for software development (SOP)
– Start organizing the first CrossGrid EU review
– Meet with EU DataGrid representatives
– Discuss technology for the future (OGSA)
Details at http://www.gup.uni-linz.ac.at/crossgrid/workshop/

55 Summary
– Layered structure of all X# applications
– Reuse of software from DataGrid and other # projects
– Globus as the bottom layer of the middleware
– Heterogeneous computer and storage systems
– Distributed development and testing of software
– 12 partners in applications, 14 in middleware, 15 in testbeds; 21 partners in total
– First 6 months – successful

56 Thanks to
Michal Turala, Kasia Zajac, Maciek Malawski, Marek Garbacz, Peter M.A. Sloot, Roland Wismueller, Wlodek Funika, Ladislav Hluchy, Bartosz Balis, Jacek Kitowski, Norbert Meyer, Jesus Marco, Marcel Kunze

57 www.eu-crossgrid.org

