
CrossGrid Architecture
Marian Bubak and the TAT
Institute of Computer Science & ACC CYFRONET AGH, Cracow, Poland
BOF at GGF5, Edinburgh, Scotland, July 21-24, 2002

Overview
– Applications and their requirements
– Tools for X# application development
– New grid services
– X# architecture

Biomedical Application
– Input: a 3-D model of arteries
– Simulation: lattice Boltzmann (LB) simulation of blood flow
– Results: presented in a virtual reality environment
– User: analyses results in near real time, interacts, and changes the structure of the arteries

Steering in the Biomedical Application
[Pipeline diagram: CT / MRI scan → medical DB → segmentation → medical DB → LB flow simulation → visualization and interaction (VE, WD, PC, PDA); roughly 10 simulations/day, 60 GB of data, 20 MB/s. A steering-loop sketch follows below.]
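A minimal sketch of the steering loop this pipeline implies, in Python. All names here (segment, simulate_lb_flow, render, next_user_edit) are illustrative stand-ins, not the actual CrossGrid services, which are distributed Grid components:

def segment(scan):
    # Build a 3-D artery model from a CT/MRI scan (stub).
    return {"geometry": scan, "edits": []}

def simulate_lb_flow(arteries):
    # Lattice Boltzmann blood-flow simulation on the Grid (stub).
    return {"flow_field": "...", "model": arteries}

def render(flow):
    # Near-real-time visualization in the virtual environment (stub).
    print("rendering flow for model with", len(flow["model"]["edits"]), "edits")

def next_user_edit(pending):
    # User interaction, e.g. adding a bypass; returns None when done (stub).
    return pending.pop(0) if pending else None

def steering_loop(scan, pending_edits):
    arteries = segment(scan)
    while True:
        render(simulate_lb_flow(arteries))
        edit = next_user_edit(pending_edits)
        if edit is None:
            break
        arteries["edits"].append(edit)   # the changed structure triggers a re-run

steering_loop("ct_scan_001", ["add bypass", "widen stenosis"])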

VR-Interaction

Asynchronous Execution of the Biomedical Application

Interaction in the Biomedical Application

Cascade of Flood Simulations
[Diagram: data sources → meteorological simulations → hydrological simulations → hydraulic simulations → output visualization → users.]

Basic Characteristics of Flood Simulation
– Meteorological: computationally intensive simulation (1.5 h per simulation) – HPC; large input/output data sets (50-150 MB per event); high availability of resources (24/365)
– Hydrological: parametric simulations – HTC; each sub-catchment may require a different model (a job-sweep sketch follows below)
– Hydraulic: many 1-D simulations – HTC; 2-D hydraulic simulations need HPC
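The hydrological step above is a classic HTC parametric sweep: one independent job per sub-catchment. A rough Python illustration, where JobDescription, submit_job, and the model names are hypothetical placeholders rather than the DataGrid/CrossGrid submission API:

from dataclasses import dataclass

@dataclass
class JobDescription:
    executable: str
    arguments: list
    input_files: list

def submit_job(job):
    # Stand-in for submission through the resource broker / scheduling agents.
    print("submitted:", job.executable, job.arguments)

SUBCATCHMENTS = {          # each sub-catchment may need a different model
    "basin-01": "model_a",
    "basin-02": "model_b",
    "basin-03": "model_a",
}

for basin, model in SUBCATCHMENTS.items():
    submit_job(JobDescription(
        executable=model,
        arguments=["--basin", basin],
        input_files=[f"{basin}/meteo_forecast.dat"],  # 50-150 MB per event
    ))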

Distributed Data Analysis in HEP
– Objectives: distributed data access; distributed data-mining techniques with neural networks
– Issues: typical interactive requests will run on O(TB) of distributed data; transfer/replication times are of the order of 1 h, so data is transferred once, in advance of the interactive session (sketched below); the corresponding database servers are allocated, installed, and set up before the interactive session starts
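The "transfer once, in advance" policy amounts to a pre-staging step before the interactive session. In this toy sketch, ReplicaManager is a hypothetical stand-in for the DataGrid/Globus replica services, not their real API:

class ReplicaManager:
    def __init__(self):
        self.replicas = {}
    def replicate(self, logical_file, site):
        # Order-of-1h transfer in reality; instantaneous stub here.
        self.replicas.setdefault(logical_file, set()).add(site)
    def has_local_replica(self, logical_file, site):
        return site in self.replicas.get(logical_file, set())

DATASET = ["lfn:/hep/run42/part%02d" % i for i in range(4)]  # O(TB) in reality

rm = ReplicaManager()
for lfn in DATASET:                      # done once, before the session
    rm.replicate(lfn, site="analysis-site")

# Interactive session: all requests now hit local replicas.
assert all(rm.has_local_replica(lfn, "analysis-site") for lfn in DATASET)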

Weather Forecast and Air Pollution Modeling
– Distributed/parallel codes on the Grid: the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) and the STEM-II air pollution code
– Integration of distributed databases
– Data mining applied to downscaling the weather forecast

Key Features of X# Applications
– Data: data generators and databases are geographically distributed; data is selected on demand
– Processing: needs large processing capacity, both HPC and HTC; interactive
– Presentation: complex data require versatile 3-D visualisation; interaction and feedback to other components must be supported

Modules of the Tool Environment
[Diagram: the G-PM tool consists of a performance measurement component, a performance prediction component, a high-level analysis component, and a user interface and visualization component. Raw monitoring data (RMD) flows in from Grid Monitoring (Task 3.3) and Benchmarks (Task 2.3) on applications (WP1) executing on the Grid testbed; performance measurement data (PMD) flows between the components; the application source code enters via manual information transfer.]

Tools Environment and Grid Services
[Diagram: applications are accessed through Portals (3.1); the G-PM performance measurement tools (2.4), MPI debugging and verification (2.2), and metrics and benchmarks (2.4) all rest on Grid Monitoring (3.3) (OCM-G, R-GMA).]

User Interaction Service (1/3)
– Synchronization between the simulation and visualization
– Control of data flow between various Grid components
– A plug-in mechanism for the required components (sketched below)
– Software interfaces to: the resource broker (both DataGrid and Globus); the Condor-G system for high-throughput computing; the Nimrod/G tool for advanced parameter study; the Grid monitoring service and Globus GIS/MDS
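A minimal sketch of such a plug-in mechanism, assuming invented names (BrokerPlugin, register); the point is only that different broker back ends (DataGrid, Globus, Condor-G, Nimrod/G) can hide behind one interface:

from abc import ABC, abstractmethod

class BrokerPlugin(ABC):
    @abstractmethod
    def submit(self, job_description: str) -> str: ...

PLUGINS: dict[str, BrokerPlugin] = {}

def register(name: str):
    # Class decorator: instantiate and register the plug-in under a name.
    def wrap(cls):
        PLUGINS[name] = cls()
        return cls
    return wrap

@register("datagrid")
class DataGridBroker(BrokerPlugin):
    def submit(self, job_description):
        return f"datagrid job id for {job_description}"

@register("condor-g")
class CondorGBroker(BrokerPlugin):
    def submit(self, job_description):
        return f"condor-g cluster id for {job_description}"

# The UIS picks a back end by name at run time:
print(PLUGINS["condor-g"].submit("lb_flow_simulation.jdl"))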

User Interaction Service (2/3)
[Diagram: a portal contacts a service factory, which creates User Interaction Service instances. Through an event notification mechanism, the UIS couples a simulation agent (attached to the running simulation), a visualization agent, and an interaction agent (attached to visualization/interaction in the VE), and it connects to the resource broker and scheduler, the job submission service, and network bandwidth reservation; data transfer uses separate connections.]

User Interaction Service (3/3)
– Factory – creates UIS instances
– UIS – an event-channel-based service (sketched below)
– Running Simulation – the simulation software
– Simulation Agent – orchestrates the simulation software
– Visualization/Interaction – the virtual environment software
– Visualization Agent – orchestrates the visualization software
– Interaction Agent – orchestrates the interaction software
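What "event-channel-based service" means here can be illustrated with a toy publish/subscribe channel; all names are invented for illustration and do not reflect the UIS implementation:

from collections import defaultdict

class EventChannel:
    def __init__(self):
        self.subscribers = defaultdict(list)
    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)
    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)

channel = EventChannel()

# The visualization agent reacts to new simulation frames ...
channel.subscribe("frame_ready", lambda f: print("visualizing", f))
# ... and the simulation agent reacts to user interaction.
channel.subscribe("user_edit", lambda e: print("re-steering simulation:", e))

channel.publish("frame_ready", "timestep 42")
channel.publish("user_edit", "add bypass")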

Portals and Roaming Access
[Diagram: applications and portals (3.1) use the Roaming Access Server (3.1), which handles user profiles, authentication and authorization, and job submission, and which connects to the Scheduler (3.2), Globus GIS/MDS, and Grid Monitoring (3.3); the clients are the Migrating Desktop and the application portal.]
– Allow users to access their environment from remote computers (sketched below)
– Independent of system version and hardware
– Run applications, manage data files, store personal settings
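The roaming idea is that personal settings live on the server side, so any client (Migrating Desktop, web portal, another machine) restores the same environment. A toy sketch with a hypothetical RoamingAccessServer, not the Task 3.1 API:

class RoamingAccessServer:
    def __init__(self):
        self._profiles = {}
    def save_profile(self, user, settings):
        self._profiles[user] = dict(settings)   # stored server-side, not on the client
    def load_profile(self, user):
        return self._profiles.get(user, {})

server = RoamingAccessServer()
server.save_profile("mbubak", {"default_vo": "crossgrid", "layout": "two-pane"})

# A different machine or client restores the identical environment:
print(server.load_profile("mbubak"))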

Monitoring System

OMIS Approach to Grid Monitoring
– Application-oriented, on-line: collected data is immediately delivered to tools; normally there is no storing for later processing
– Data collection based on run-time instrumentation: enables dynamic selection of the data to be collected; reduces monitoring overhead
– Standardized interface between tools and the monitoring system – OMIS

Service Managers and Monitors
– Service Managers: one or more in the system; request distribution; reply collection (sketched below)
– Local Monitors: one per node; handle local objects; actual execution of requests
– Application Monitors: buffer data; filter instrumentation; handle monitoring requests
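A structural sketch of this hierarchy: a Service Manager distributes a tool's request to per-node Local Monitors and collects the replies. This illustrates the structure only, not the OMIS/OCM-G protocol or its request syntax:

class LocalMonitor:
    def __init__(self, node):
        self.node = node
    def execute(self, request):
        # Local Monitors do the actual execution on their node's objects.
        return f"{self.node}: result of {request}"

class ServiceManager:
    def __init__(self, monitors):
        self.monitors = monitors
    def handle(self, request):
        # Request distribution ...
        replies = [lm.execute(request) for lm in self.monitors]
        # ... and reply collection on behalf of the tool.
        return replies

sm = ServiceManager([LocalMonitor(f"node{i}") for i in range(3)])
for reply in sm.handle("read_counter(flops)"):
    print(reply)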

Optimization of Grid Data Access
[Diagram: the Optimization of Grid Data Access service (3.4) connects applications and portals (3.1) with the Scheduling Agents (3.2), the Replica Manager (DataGrid / Globus), Grid Monitoring (3.3), and GridFTP.]
The service consists of: a component-expert system; a data-access estimator; a GridFTP plugin.
– Different storage systems and application requirements
– Optimization by selection of data handlers (sketched below)
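A toy sketch of handler selection: a lookup table stands in for the component-expert system, picking a handler per storage system and access pattern, and a stub data-access estimator scores the choice. None of these names come from the Task 3.4 implementation:

HANDLERS = {
    ("tape", "sequential"): "staged-bulk-handler",
    ("disk", "sequential"): "gridftp-stream-handler",
    ("disk", "random"):     "partial-read-handler",
}

def estimate_cost(handler, size_mb):
    # Data-access estimator stub: pretend tape staging is 10x slower.
    return size_mb * (10.0 if handler.startswith("staged") else 1.0)

def select_handler(storage, access_pattern, size_mb):
    handler = HANDLERS[(storage, access_pattern)]
    print(f"selected {handler}, est. cost {estimate_cost(handler, size_mb):.0f}")
    return handler

select_handler("tape", "sequential", size_mb=150)
select_handler("disk", "random", size_mb=20)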

Building Blocks of the CrossGrid
[Diagram legend: components to be developed in X# (CrossGrid); components taken from DataGrid; the Globus Toolkit; other external components.]

Overview of the CrossGrid Architecture
[Layered diagram:
– Applications: 1.1 BioMed; 1.2 Flooding; 1.3 Interactive Distributed Data Access; 1.4 Meteo Pollution
– Supporting tools (applications development support): 2.2 MPI Verification; 2.3 Metrics and Benchmarks; 2.4 Performance Analysis; 3.1 Portal & Migrating Desktop
– Application-specific services: 1.1 Grid Visualisation Kernel; 1.1 User Interaction Services; 1.1, 1.2 HLA and others; 1.3 Data Mining on Grid (NN); 3.1 Roaming Access; 3.2 Scheduling Agents; 3.3 Grid Monitoring; 3.4 Optimization of Grid Data Access
– Generic services: GRAM; GSI; GIS / MDS; GridFTP; Globus-IO; Replica Catalog; Globus Replica Manager; DataGrid Replica Manager; DataGrid Job Submission Service; MPICH-G
– Fabric: Resource Manager (CE) over CPU; Resource Manager (SE) over secondary storage; Resource Manager over instruments (satellites, radars); 3.4 Optimization of Local Data Access over tertiary storage; Replica Catalog]
