
CERN, April 9, 2002 Towards the CrossGrid Architecture Marian Bubak, Marek Garbacz, Maciej Malawski, and Katarzyna Zajac X# TAT Institute of Computer Science & ACC CYFRONET AGH, Kraków, Poland

CERN, April 9, 2002 A new IST Grid project space (Kyriakos Baxevanidis)
– Applications layer: GRIDLAB, GRIA, EGSO, DATATAG, CROSSGRID, DATAGRID
– Middleware & Tools layer: GRIP, EUROGRID, DAMIEN
– Underlying infrastructures, spanning science and industry / business
– Links with European national efforts
– Links with US projects (GriPhyN, PPDG, iVDGL, …)

CERN, April 9, 2002 CrossGrid Collaboration
– Poland: Cyfronet & INP Cracow, PSNC Poznan, ICM & IPJ Warsaw
– Portugal: LIP Lisbon
– Spain: CSIC Santander, Valencia & RedIris, UAB Barcelona, USC Santiago & CESGA
– Ireland: TCD Dublin
– Italy: DATAMAT
– Netherlands: UvA Amsterdam
– Germany: FZK Karlsruhe, TUM Munich, USTU Stuttgart
– Slovakia: II SAS Bratislava
– Greece: Algosystems, Demo Athens, AuTh Thessaloniki
– Cyprus: UCY Nicosia
– Austria: U. Linz

CERN, April 9, 2002 Main Objectives
– A new category of Grid-enabled applications: computing- and data-intensive, distributed, with near-real-time response (a person in the loop), layered
– New programming tools
– A Grid that is more user friendly, secure and efficient
– Interoperability with other Grids
– Implementation of standards

CERN, April 9, 2002 Key Features of X# Applications
– Data: data generators and databases are geographically distributed; data are selected on demand
– Processing: needs large processing capacity, both HPC and HTC; interactive
– Presentation: complex data require versatile 3D visualisation; support for interaction and feedback to other components

CERN, April 9, 2002 WP1 – CrossGrid Application Development
Tasks:
1.0 Co-ordination and management (Peter M.A. Sloot, UvA)
1.1 Interactive simulation and visualisation of a biomedical system (G. Dick van Albada, UvA)
1.2 Flooding crisis team support (Ladislav Hluchy, II SAS)
1.3 Distributed data analysis in HEP (C. Martinez-Rivero, CSIC)
1.4 Weather forecast and air pollution modelling (Bogumil Jakubiak, ICM)

CERN, April 9, 2002 WP2 – Grid Application Programming Environments
Tasks:
2.0 Co-ordination and management (Holger Marten, FZK)
2.1 Tools requirement definition (Roland Wismueller, TUM)
2.2 MPI code debugging and verification (Matthias Mueller, USTUTT)
2.3 Metrics and benchmarks (Marios Dikaiakos, UCY)
2.4 Interactive and semiautomatic performance evaluation tools (Wlodek Funika, Cyfronet)
2.5 Integration, testing and refinement (Roland Wismueller, TUM)

CERN, April 9, 2002 WP2 – Components and relations to other WPs (diagram): MPI verification (2.2), benchmarks (2.3) and the performance analysis tools (2.4, comprising performance measurement, automatic analysis, visualization and an analytical model) work on the application source code; they interface with Grid monitoring (3.3) and with the WP1 applications running on the WP4 testbed.

CERN, April 9, 2002 WP3 – New Grid Services and Tools
Tasks:
3.0 Co-ordination and management (Norbert Meyer, PSNC)
3.1 Portals and roaming access (Miroslaw Kupczyk, PSNC)
3.2 Grid resource management (Miquel A. Senar, UAB)
3.3 Grid monitoring (Brian Coghlan, TCD)
3.4 Optimisation of data access (Jacek Kitowski, Cyfronet)
3.5 Tests and integration (Santiago Gonzalez, CSIC)

CERN, April 9, 2002 WP3 components (diagram): Portals and Roaming Access (3.1), Grid Resource Management (3.2), Grid Monitoring (3.3), Optimisation of Data Access (3.4), and Tests and Integration (3.5); they serve the WP1 applications, the end users (WP1, WP2, WP5), the WP4 testbed and the performance evaluation tools (2.4).

CERN, April 9, 2002 WP4 – International Testbed Organization, led by CSIC (Spain). Testbed sites: AuTh Thessaloniki, UvA Amsterdam, FZK Karlsruhe, TCD Dublin, UAB Barcelona, LIP Lisbon, CSIC Valencia, CSIC Madrid, USC Santiago, CSIC Santander, DEMO Athens, UCY Nicosia, CYFRONET Cracow, II SAS Bratislava, PSNC Poznan, ICM & IPJ Warsaw.

CERN, April 9, 2002 WP4 – International Testbed Organization
Tasks:
4.0 Coordination and management (Jesus Marco, CSIC, Santander)
– Coordination with WP1, WP2, WP3
– Collaborative tools
– Integration Team
4.1 Testbed setup & incremental evolution (Rafael Marco, CSIC, Santander)
– Define installation
– Deploy testbed releases
– Trace security issues
Testbed site responsibles:
– CYFRONET (Krakow) A. Ozieblo
– ICM (Warsaw) W. Wislicki
– IPJ (Warsaw) K. Nawrocki
– UvA (Amsterdam) D. van Albada
– FZK (Karlsruhe) M. Kunze
– II SAS (Bratislava) J. Astalos
– PSNC (Poznan) P. Wolniewicz
– UCY (Cyprus) M. Dikaiakos
– TCD (Dublin) B. Coghlan
– CSIC (Santander/Valencia) S. Gonzalez
– UAB (Barcelona) G. Merino
– USC (Santiago) A. Gomez
– UAM (Madrid) J. del Peso
– Demo (Athens) C. Markou
– AuTh (Thessaloniki) D. Sampsonidis
– LIP (Lisbon) J. Martins

CERN, April 9, 2002 WP4 – International Testbed Organization (continued)
4.2 Integration with DataGrid (Marcel Kunze, FZK)
– Coordination of testbed setup
– Knowledge exchange
– Participation in WP meetings
4.3 Infrastructure support (Josep Salt, CSIC, Valencia)
– Fabric management
– HelpDesk
– Installation Kit
– Network support
4.4 Verification & quality control (Jorge Gomes, LIP)
– Feedback
– Improve stability of the testbed

CERN, April 9, 2002 WP5 – Project Management
Tasks:
5.1 Project coordination and administration (Michal Turala, INP)
5.2 CrossGrid Architecture Team (Marian Bubak, Cyfronet)
5.3 Central dissemination (Yannis Perros, ALGO)

CERN, April 9, 2002 Project Phases
– M 1 - 3: requirements definition and merging
– M : first development phase: design, 1st prototypes, refinement of requirements
– M : second development phase: integration of components, 2nd prototypes
– M : third development phase: complete integration, final code versions
– M : final phase: demonstration and documentation

CERN, April 9, 2002 Person-months (PM Funded / PM Total per work package)
WP1 CrossGrid Applications Development
WP2 Grid Application Programming Environment
WP3 New Grid Services and Tools
WP4 International Testbed Organization
WP5 Project Management
Total

CERN, April 9, 2002 Layered Structure of X#
– Interactive and Data Intensive Applications (WP1): interactive simulation and visualization of a biomedical system; flooding crisis team support; distributed data analysis in HEP; weather forecast and air pollution modeling
– Grid Application Programming Environment (WP2): MPI code debugging and verification; metrics and benchmarks; interactive and semiautomatic performance evaluation tools
– New CrossGrid Services (WP3): portals and roaming access; grid resource management; grid monitoring; optimization of data access; Grid Visualization Kernel; Data Mining; HLA services
– Globus Middleware, together with DataGrid, GriPhyN, … services
– Fabric Infrastructure (Testbed WP4)

CERN, April 9, 2002 Two important questions
– How to build an interactive Grid environment? (Globus is more batch-oriented than interactive-oriented; a performance issue)
– How to work with the Globus and DataGrid software, and how to define the interfaces?

CERN, April 9, 2002 Layer Approach
According to the Global Grid Forum Grid Protocol Architecture Working Group („computing/simulation grid”):
– Applications and Supporting Tools
– Applications Development Support
– Grid Common Services
– Local Resources

CERN, April 9, 2002 Building Blocks (legend of the architecture diagrams)
– CrossGrid: to be developed in X#
– DataGrid: taken from DataGrid
– GLOBUS: Globus Toolkit
– EXTERNAL: other components

CERN, April 9, 2002 Local Resources
– Fabric: CPU, storage, instruments, VR systems
– Local resource managers: make them „gridable” (an illustrative sketch follows below)
– Resources shown in the diagram, each behind a resource manager: CPU, secondary and tertiary storage (with optimization of data access), scientific instruments (medical scanners, satellites, radars), detector local high-level triggers, VR systems (caves, immersive desks), visualization tools
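As an illustration only (not part of the slide), a minimal sketch of what making a local resource „gridable” could mean: a thin, uniform resource-manager interface that the Grid services layer can call regardless of whether the underlying resource is a cluster queue, a storage system, or an instrument. All class and method names here are hypothetical.

```python
from abc import ABC, abstractmethod

class ResourceManager(ABC):
    """Hypothetical uniform front-end for a local resource (CPU, storage, instrument, VR system)."""

    @abstractmethod
    def describe(self) -> dict:
        """Return static properties (type, capacity, location) for an information service."""

    @abstractmethod
    def submit(self, request: dict) -> str:
        """Accept a job or reservation request and return a local handle."""

    @abstractmethod
    def status(self, handle: str) -> str:
        """Report the state of a previously submitted request."""

class PBSClusterManager(ResourceManager):
    """Example adapter: would forward requests to a local PBS batch queue (details omitted)."""
    def describe(self):
        return {"type": "compute", "nodes": 16, "scheduler": "PBS"}
    def submit(self, request):
        # in a real adapter this would build and run a qsub command
        return "pbs-job-0001"
    def status(self, handle):
        return "queued"
```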

CERN, April 9, 2002 Grid Services
– Service = protocol + behavior (Foster et al., „The Anatomy of the Grid”)
– Protocol: rules for exchanging information (interoperability)
– Behavior: what is expected in response to protocol messages
– A service definition permits a variety of implementations (a small illustrative sketch follows below)
– Grid Common Services shown in the diagram: Grid Visualisation Kernel, Data Mining on Grid, Interactive Distributed Data Access, Roaming Access, Grid Resource Management, Grid Monitoring, Distributed Data Collection, User Interaction Service, DataGrid Replica Manager, DataGrid Job Manager, Globus Replica Manager, GRAM, GSI, Replica Catalog, GASS, MDS, GridFTP, Globus-IO
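A small sketch (not from the slide) of the "service = protocol + behavior" idea: the protocol fixes the message format that any implementation must understand, while the behavior behind it can vary. The message fields and class name are invented for illustration.

```python
import json

# Protocol: the agreed message format (this is what gives interoperability).
def make_request(operation: str, args: dict) -> bytes:
    return json.dumps({"op": operation, "args": args}).encode("utf-8")

# Behavior: what one particular implementation does in response to protocol messages.
class RoamingAccessService:
    """One possible behavior behind the protocol: return a user's session profile."""
    def handle(self, raw: bytes) -> bytes:
        msg = json.loads(raw.decode("utf-8"))
        if msg["op"] == "get_profile":
            reply = {"user": msg["args"]["user"], "profile": "default-portal-settings"}
        else:
            reply = {"error": "unknown operation"}
        return json.dumps(reply).encode("utf-8")

# A different implementation may answer the same messages differently, which is
# what "a service definition permits a variety of implementations" means.
```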

CERN, April 9, 2002 Interaction in Biomedical Application

CERN, April 9, 2002 Biomedical Application Use Case

CERN, April 9, 2002 Step 1 –Action: An MRI scan ("Angiogram") is obtained for the patient. –Data: 3D image 512 * 512 * 128 pixels. –Resource: A 3D-visualisation system is reserved for use in step 3.
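For scale (this figure is not on the slide), the raw image size follows directly from the stated dimensions, assuming for illustration 2 bytes per voxel:

```latex
512 \times 512 \times 128 = 33\,554\,432 \ \text{voxels}
\;\approx\; 67\ \text{MB (64 MiB) at an assumed 2 bytes per voxel}
```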

CERN, April 9, 2002 Step 2 –Action: The image is segmented so that a clear picture of the important blood vessels and the location of aneurysms and blockages is obtained. –Data: 10% of the original image.

CERN, April 9, 2002 Step 3 –Action: Using the segmented image, a computational grid (mesh) for a lattice-Boltzmann (LB) simulation is generated. –Action: A simulation of the normal pulsatile blood flow in the vessels is started. –Input from the physician: parameters such as the pressure drop (possibly time dependent). –Run time: several hours on a fast 16-node Beowulf cluster.
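Purely as a sketch of how such a batch run might be launched on a Globus Toolkit 2 testbed (the slide does not prescribe this), one could wrap the MPI flow solver in an RSL description and hand it to globusrun. The contact string, executable path, flags and parameters below are hypothetical placeholders, not CrossGrid configuration.

```python
import subprocess

def start_lb_simulation(contact="ce.example.org/jobmanager-pbs",
                        executable="/opt/crossgrid/lb_flow",
                        pressure_drop=0.8, nodes=16):
    """Sketch: submit the blood-flow simulation as a batch job via Globus GRAM.

    Assumes a Globus Toolkit 2 client (globusrun) and a GRAM contact string;
    every value here is a placeholder.
    """
    rsl = (f'&(executable={executable})'
           f'(count={nodes})'
           f'(jobtype=mpi)'
           f'(arguments="--pressure-drop" "{pressure_drop}")')
    # -b is assumed to submit in batch mode and print a job contact for later monitoring
    result = subprocess.run(["globusrun", "-b", "-r", contact, rsl],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()   # job handle, usable for status queries in step 6
```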

CERN, April 9, 2002 Step 4 –Resource: Interactive 3D-visualisation system. –Interaction: the physician studies the vascular structure and proposes several (3 to 5) bypass designs; these are used to generate alternative computational grids. –Estimated duration: on the order of 1 hour.

CERN, April 9, 2002 Step 5 –Action: The blood flow simulations for the bypasses are initialised using the new grids and the (partially) converged results from step 3. –Time: several hours on a fast 16 node Beowulf cluster for each simulation

CERN, April 9, 2002 Step 6 –Action: The physician can monitor the progress of the simulations through his portal. He will be informed automatically, e.g. through an SMS message, of their completion. –Resource: The physician can use this advance information to reserve a 3D-visualisation environment for step 7.
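A minimal sketch (not part of the slide) of the notification idea: poll the job state and, on completion, hand a message to an SMS gateway. The get_job_state and send_sms functions are hypothetical placeholders for portal or monitoring services, not Globus or CrossGrid APIs.

```python
import time

def wait_and_notify(job_contact, phone_number, get_job_state, send_sms, poll_seconds=60):
    """Sketch of step 6: poll a running simulation and notify the physician on completion."""
    while True:
        state = get_job_state(job_contact)          # e.g. "ACTIVE", "DONE", "FAILED"
        if state in ("DONE", "FAILED"):
            send_sms(phone_number,
                     f"Blood-flow simulation {job_contact} finished with state {state}.")
            return state
        time.sleep(poll_seconds)
```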

CERN, April 9, 2002 Step 7 – human in the loop –Action: The results of the simulation are presented in the 3D-visualisation system –Input: stored history or the running simulation –Interaction: The physician can apply small modifications to the proposed bypass structure that should still allow a fast convergence of the blood-flow simulation –Time: Simulations of the resulting changes in the blood flow should be initiated immediately, so that the results will be available within minutes.
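Step 7 can be read as an interactive steering loop: visualise, accept a small modification, restart quickly from an already converged state. A hedged sketch of that control flow (all callables are hypothetical placeholders for the portal, visualisation and Grid services) is:

```python
def steering_loop(initial_state, visualise, get_physician_edit,
                  restart_simulation, max_rounds=10):
    """Sketch of the step-7 human-in-the-loop cycle."""
    state = initial_state
    for _ in range(max_rounds):
        visualise(state)                    # 3D presentation of the current flow field
        edit = get_physician_edit()         # small change to the proposed bypass, or None
        if edit is None:
            break                           # physician is satisfied
        # restarting from the (partially) converged state is what keeps the
        # response time within minutes rather than hours
        state = restart_simulation(state, edit)
    return state
```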

CERN, April 9, 2002 Asynchronous Execution of Biomedical Application

CERN, April 9, 2002 Current architecture of biomedical application

CERN, April 9, 2002 Biomedical Application (architecture diagram): the full layered CrossGrid architecture (see the CrossGrid Architecture slide later in the deck) with the Biomedical Application and its Portal highlighted in the Applications and Supporting Tools layer.

CERN, April 9, 2002 Flooding Crisis Team Support
– Data sources: surface automatic meteorological and hydrological stations, meteorological radars, systems for acquisition and processing of satellite information, storage systems and databases
– External sources of information: global and regional centres (GTS), EUMETSAT and NOAA, hydrological services of other countries
– Grid infrastructure: meteorological, hydrological and hydraulic models on high-performance computers
– Flood crisis teams: meteorologists, hydrologists, hydraulic engineers
– Users: river authorities, energy, insurance companies, navigation, media, public

CERN, April 9, 2002 Simulation Flood Cascade: data sources → meteorological simulation → hydrological simulation → hydraulic simulation → portal
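The cascade is a simple data-driven pipeline in which each stage consumes the previous stage's output. A sketch of that chaining (the stage functions and data names are placeholders, not the actual flood-application interfaces):

```python
def run_flood_cascade(data_sources, run_meteo, run_hydro, run_hydraulic, publish):
    """Sketch of the simulation cascade shown on the slide:
    data sources -> meteorological -> hydrological -> hydraulic -> portal."""
    meteo_fields = run_meteo(data_sources)      # e.g. precipitation forecast
    discharges = run_hydro(meteo_fields)        # per-catchment discharge forecast
    water_levels = run_hydraulic(discharges)    # inundation / water-level results
    publish(water_levels)                       # made available through the portal
    return water_levels
```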

CERN, April 9, 2002 Meteorological Simulation (ALADIN Model), within a virtual organization (diagram): the global model and ALADIN/LACE (Prague) provide the global simulation results and boundary conditions for the local ALADIN/SLOVAKIA model (II SAS); input data are transferred from temporary storage (SHMI), output data go to permanent storage (Vienna); model execution and control, and the results for users, are handled through the portal.

CERN, April 9, 2002 Hydrological Simulation, within a virtual organization (diagram): precipitation forecasts (from the meteorological simulation) together with data from model, topographical and hydro-meteorological data repositories are processed on the CrossGrid testbed; temporary and permanent storage hold the intermediate and final results, which are accessed through the portal.

CERN, April 9, 2002 Hydraulic Simulation, within a virtual organization (diagram): discharges (from the hydrological simulation) together with data from model, topographical and hydrological data repositories are processed on the CrossGrid testbed; temporary and permanent storage hold the intermediate and final results, which are accessed through the portal.

CERN, April 9, 2002 Basic Characteristics of Flood Simulation
– Meteorological: intensive simulation (1.5 h per run), possibly HPC; large input/output data sets (50-150 MB per event); high availability of resources (24/365)
– Hydrological: parametric simulations (HTC); each sub-catchment may require a different model (heterogeneous simulation); see the sketch below
– Hydraulic: many 1-D simulations (HTC); 2-D hydraulic simulations need HPC
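The hydrological stage is described as parametric HTC: many independent sub-catchment runs, possibly with different models. A sketch of such a fan-out (choose_model and run_model stand in for the real model repository and Grid job submission, and are assumptions of this sketch):

```python
from concurrent.futures import ThreadPoolExecutor

def run_hydrological_stage(subcatchments, choose_model, run_model, precipitation):
    """Sketch of the HTC pattern: one independent job per sub-catchment,
    each possibly using a different hydrological model (heterogeneous simulation)."""
    def one_catchment(catchment):
        model = choose_model(catchment)                   # catchment-specific model
        return catchment, run_model(model, catchment, precipitation)

    with ThreadPoolExecutor(max_workers=max(1, len(subcatchments))) as pool:
        results = dict(pool.map(one_catchment, subcatchments))
    return results   # discharges per sub-catchment, input to the hydraulic stage
```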

CERN, April 9, 2002 Flooding Crisis Team Support (architecture diagram): the same layered CrossGrid architecture, with the Flood Application and the Portal highlighted in the Applications and Supporting Tools layer.

CERN, April 9, 2002 Distributed Data Analysis in HEP – Complementarity with DataGrid
– HEP application package: CrossGrid will develop the interactive final-user application for physics analysis, making use of the products of the non-interactive simulation and data-processing stages of DataGrid that precede it.
– Apart from the file-level service offered by DataGrid, CrossGrid will offer an object-level service to optimise the use of distributed databases (sketched below). Two possible implementations, to be tested in running experiments: a three-tier model accessing an OODBMS or O/R DBMS, or a more HEP-specific solution such as ROOT.
– User friendly thanks to specific portal tools.
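To make the file-level versus object-level contrast concrete, a hedged sketch with invented interfaces (not DataGrid or CrossGrid APIs): with a file-level service the client fetches whole files and filters locally, whereas with an object-level service the selection is pushed to the data service and only matching events travel.

```python
# File-level access (DataGrid-style): move whole files, select locally.
def analyse_file_level(replica_catalog, read_events, dataset, selection, analyse):
    """Sketch only: replica_catalog and read_events are invented placeholder interfaces."""
    for logical_file in replica_catalog.list_files(dataset):
        local_path = replica_catalog.fetch(logical_file)     # whole file is transferred
        for event in read_events(local_path):
            if selection(event):
                analyse(event)

# Object-level access (CrossGrid goal): push the selection to the data service,
# transfer only the matching events (e.g. via a three-tier service or ROOT trees).
def analyse_object_level(object_service, dataset, selection_expr, analyse):
    """Sketch only: object_service.query is an invented placeholder interface."""
    for event in object_service.query(dataset, selection_expr):
        analyse(event)
```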

CERN, April 9, 2002 Distributed Data Analysis in HEP – several challenging points:
– Access to large distributed databases in the Grid
– Development of distributed data-mining techniques
– Definition of a layered application structure
– Integration of user-friendly interactive access
Focus is on the LHC experiments (ALICE, ATLAS, CMS and LHCb).

CERN, April 9, 2002 Distributed Data Analysis in HEP (architecture diagram): the same layered CrossGrid architecture, with the HEP Interactive Distributed Data Access, HEP Data Mining on Grid and HEP High Level Trigger applications highlighted in the Applications and Supporting Tools layer.

CERN, April 9, 2002 Weather Forecast and Air Pollution Modeling
– Porting distributed/parallel codes to the Grid: the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) and the STEM-II air pollution code
– Integration of distributed databases
– Migration of data mining algorithms to the Grid
– Integration, testing and running on the X# testbed

CERN, April 9, 2002 COAMPS – Coupled Ocean/Atmosphere Mesoscale Prediction System: atmospheric components
– Complex data quality control
– Analysis: multivariate optimum interpolation (MVOI) of winds and heights; univariate analyses of temperature and moisture; OI analysis of sea surface temperature
– Initialization: variational hydrostatic constraint on analysis increments; digital filter
– Atmospheric model numerics: nonhydrostatic, scheme C, nested grids, sigma-z, flexible lateral BCs
– Physics: PBL, convection, explicit moist physics, radiation, surface layer
– Features: globally relocatable (5 map projections); user-defined grid resolutions, dimensions and number of nested grids; 6- or 12-hour incremental data assimilation cycle; can be used for idealized or real-time applications; single configuration-managed system for all applications
– Operational at FNMOC: 7 areas, twice daily, using 81/27/9 km or 81/27 km grids; forecasts to 72 hours
– Operational at all Navy regional centers (with GUI interface)

CERN, April 9, 2002 Air Pollution Model – STEM-II
– Species: 56 chemical, 16 long-lived, 40 short-lived, 28 radicals (OH, HO2)
– Chemical mechanisms: 176 gas-phase reactions, 31 aqueous-phase reactions, 12 aqueous-phase solution equilibria
– The equations are integrated with a locally one-dimensional finite element method (LOD-FEM)
– Transport equations are solved with a Petrov-Crank-Nicolson-Galerkin FEM scheme
– Chemistry and mass-transfer terms are integrated with semi-implicit Euler and pseudo-analytic methods (see the formula sketch below)
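For background only (the slide names the method but not its form), a common form of the semi-implicit Euler step for a stiff chemistry term with production P and loss L c is shown below; that STEM-II uses exactly this form is an assumption of this sketch.

```latex
% Chemistry/mass-transfer term after operator splitting: dc/dt = P(c) - L(c)\,c
% Semi-implicit Euler: production treated explicitly, loss implicitly
c^{n+1} = c^{n} + \Delta t \left( P(c^{n}) - L(c^{n})\, c^{n+1} \right)
\quad\Longrightarrow\quad
c^{n+1} = \frac{c^{n} + \Delta t\, P(c^{n})}{1 + \Delta t\, L(c^{n})}
```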

CERN, April 9, 2002 Weather Forecast and Air Pollution Modeling (architecture diagram): the same layered CrossGrid architecture, with the Weather Forecast application highlighted in the Applications and Supporting Tools layer.

CERN, April 9, 2002 CrossGrid Architecture (layered diagram)
– Applications and Supporting Tools: Biomedical Application, Flood Application, HEP Interactive Distributed Data Access Application, HEP Data Mining on Grid Application, HEP High Level Trigger, Weather Forecast application, Portal
– Applications Development Support: MPI Verification, Metrics and Benchmarks, Performance Analysis
– Grid Common Services: Grid Visualisation Kernel, Data Mining on Grid, Interactive Distributed Data Access, Roaming Access, Grid Resource Management, Grid Monitoring, Distributed Data Collection, User Interaction Service, MPICH-G, DataGrid Replica Manager, DataGrid Job Manager, Globus Replica Manager, GRAM, GSI, Replica Catalog, GASS, MDS, GridFTP, Globus-IO
– Local Resources: CPU, secondary and tertiary storage (with Optimization of Data Access), scientific instruments (medical scanners, satellites, radars), detector local high-level triggers, VR systems (caves, immersive desks), visualization tools, each behind a resource manager

CERN, April 9, 2002 Rules for X# SW Development
– Iterative improvement: development, testing on the testbed, evaluation, improvement
– Modularity
– Open-source approach
– Well-documented software
– Collaboration with other Grid (#) projects

CERN, April 9, 2002 Evolutionary life-cycle model: phases between versions (© Ian Sommerville, „Software Engineering”)

CERN, April 9, 2002 Architecture Team - Activity
– Merging of requirements from WP1, WP2, WP3
– Specification of the X# architecture (i.e. new protocols, services, SDKs, APIs)
– Establishment of standard operational procedures
– Specification of the structure of deliverables
– Improvement of the X# architecture according to experience from SW development and testbed operation

CERN, April 9, 2002 Architecture Team - Organization
– Technical Architecture Team (at Cyfronet), elaboration of proposals: Marian Bubak, Marek Garbacz, Maciej Malawski, Katarzyna Zając
– Representatives of WPs (persons responsible for integration within their WPs), evaluation of TAT proposals: WP1 – Dick van Albada; WP2 – Roland Wismueller; WP3 – Santiago Gonzalez; WP4 – Rafael Marco

CERN, April 9, 2002 Schedule
– Pre-final versions of SRS – April 9
– X#TAT+ meeting with the DataGrid ATF and PTB – April 11-16, CERN
– X# AT comments on SRS – April 17
– ICCS 2002, Amsterdam
– Final versions of SRS (deliverables!) – April 25
– 1st definition of X# Architecture – May 17-18, Cracow