Hungarian GRID Projects and Cluster Grid Initiative P. Kacsuk MTA SZTAKI

Hungarian Grid projects – overview
– VISSZKI: Globus and Condor service on clusters
– DemoGrid: security, Grid monitoring, data storage subsystem, applications
– SuperGrid: resource scheduling, accounting, P-GRADE, MPICH-G on supercomputers

Objectives of the VISSZKI project
– Testing various tools and methods for creating a virtual supercomputer (metacomputer)
– Testing and evaluating Globus and Condor
– Elaborating a national Grid infrastructure service based on Globus and Condor by connecting various clusters

Results of the VISSZKI project – a layered architecture:
– Grid-level job management: Condor-G
– Grid middleware: Globus
– Grid fabric: clusters, each with Condor as local job management
– Low-level parallel development: PVM, MW, MPI
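The layering above can be illustrated with a toy sketch: a grid-level job manager that forwards each job to whichever Condor-managed cluster currently has the shortest queue. This is only a conceptual stand-in for the Condor-G/Globus machinery; the cluster names and the `submit` function are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    name: str
    queue: list = field(default_factory=list)  # jobs waiting in the local Condor queue

def submit(job: str, clusters: list) -> str:
    """Toy grid-level job management: pick the least-loaded cluster
    and enqueue the job there (stand-in for Condor-G matchmaking)."""
    target = min(clusters, key=lambda c: len(c.queue))
    target.queue.append(job)
    return target.name

clusters = [Cluster("BME"), Cluster("ELTE"), Cluster("SZTAKI")]
print(submit("job-1", clusters))  # -> BME (first empty queue)
print(submit("job-2", clusters))  # -> ELTE
```

In the real service the placement decision is made by Condor-G against Globus-published resource information, not by queue length alone.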

Structure of the DemoGrid project
– GRID subsystems study and development:
– Data storage subsystem: relational database, OO database, geometric database, distributed file system
– Monitoring subsystem
– Security subsystem
– Demo applications: astro-physics, human brain simulation, particle physics, car engine design

Structure of the Hungarian Supercomputing Grid – sites connected by 2.5 Gb/s Internet:
– NIIFI: 2*64-processor Sun E10000
– BME: 16-processor Compaq AlphaServer
– ELTE: 16-processor Compaq AlphaServer
– SZTAKI: 58-processor cluster
– University (ELTE, BME) clusters

The Hungarian Supercomputing GRID project – a layered architecture:
– Web-based GRID access: GRID portal
– GRID application and high-level parallel development layer: P-GRADE
– Grid-level job management: Condor-G
– Grid middleware: Globus
– Grid fabric with local job management: Sun HPC (Condor, LSF, Sun Grid Engine), Compaq AlphaServers (Condor, PBS, LSF), clusters (Condor)
– Low-level parallel development: PVM, MW, MPI

Distributed supercomputing: P-GRADE
P-GRADE (Parallel GRid Application Development Environment): the first highly integrated parallel Grid application development system in the world. Provides:
– Parallel, supercomputing programming for the Grid
– Fast and efficient development of Grid programs
– Observation and visualization of Grid programs
– Fault and performance analysis of Grid programs

Condor/P-GRADE on the whole range of parallel and distributed systems:
– Clusters: P-GRADE + Condor
– Supercomputers and mainframes: P-GRADE + Condor
– Grid: P-GRADE + Condor flocking

Berlin CCGrid Grid Demo workshop: flocking of P-GRADE programs by Condor
Three Condor pools (Budapest, Madison, Westminster) are connected by flocking, and the same P-GRADE program runs in turn at the Budapest, Madison and Westminster clusters.
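The flocking idea behind the demo can be sketched as overflow scheduling: when the local pool has no free machine, the job is forwarded to a remote pool the local one is configured to flock to. A minimal sketch, with hypothetical pool data (real Condor flocking is driven by `FLOCK_TO`/`FLOCK_FROM` configuration, not a Python loop):

```python
def run_job(job: str, pools: dict, flock_order: list):
    """pools maps pool name -> free machine slots; flock_order lists the
    local pool first, then the pools it may flock to. Returns the pool
    that executes the job, or None if the job stays idle in the queue."""
    for name in flock_order:
        if pools[name] > 0:
            pools[name] -= 1         # claim a free machine in that pool
            return name
    return None

# Budapest is fully loaded, so the P-GRADE job overflows to Madison.
pools = {"Budapest": 0, "Madison": 1, "Westminster": 1}
print(run_job("pgrade-app", pools, ["Budapest", "Madison", "Westminster"]))  # -> Madison
```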

Next step: check-pointing and migration of P-GRADE programs
1. The P-GRADE program is downloaded to London as a Condor job
2. The P-GRADE program runs at the London cluster
3. The London cluster becomes overloaded => check-pointing; the program migrates to Budapest as a Condor job
4. The P-GRADE program runs at the Budapest cluster
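The essence of steps 3 and 4 is that the computation's state is serialized on one cluster and resumed on another. A conceptual sketch using `pickle` (P-GRADE's actual checkpointer works at the Condor process level, transparently to the application; the `step` function here is a made-up workload):

```python
import pickle

def step(state):
    """One unit of the (hypothetical) application's work."""
    state["i"] += 1
    state["acc"] += state["i"]
    return state

# "London" runs the job for a while ...
state = {"i": 0, "acc": 0}
for _ in range(3):
    state = step(state)

# ... the cluster gets overloaded, so a checkpoint is taken (step 3) ...
checkpoint = pickle.dumps(state)

# ... and "Budapest" restores the state and continues (step 4).
state = pickle.loads(checkpoint)
for _ in range(2):
    state = step(state)
print(state["i"], state["acc"])   # -> 5 15: the restarted run continues, not restarts
```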

Further development: TotalGrid
TotalGrid is a total Grid solution that integrates the different software layers of a Grid (see next slide) and provides companies and universities with:
– exploitation of the free cycles of desktop machines in a Grid environment outside working hours
– supercomputer capacity using the institution's existing desktops, without further investment
– development and testing of Grid programs

Layers of TotalGrid (top to bottom):
– P-GRADE
– PERL-GRID
– Condor or SGE
– PVM or MPI
– Internet / Ethernet

Hungarian Cluster Grid Initiative
Goal: to connect the new clusters of the Hungarian higher education institutions into a Grid.
By autumn 2002, 42 new clusters will be established at various universities of Hungary; each cluster contains 20 PCs and a network server PC.
– Day-time: the components of the clusters are used for education
– At night: all the clusters are connected into the Hungarian Grid by the Hungarian academic network (2.5 Gbit/s)
– Total Grid capacity in 2002: 882 PCs
In 2003 a further 57 similar clusters will join the Hungarian Grid.
– Total Grid capacity in 2003: 2079 PCs
Open Grid: other clusters can join at any time.
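The capacity figures above follow directly from the cluster size of 20 PCs plus one network server PC:

```python
# Each cluster contributes 20 PCs + 1 network server PC = 21 machines.
pcs_per_cluster = 20 + 1

capacity_2002 = 42 * pcs_per_cluster          # 42 clusters by autumn 2002
capacity_2003 = (42 + 57) * pcs_per_cluster   # 57 more clusters join in 2003

print(capacity_2002)   # -> 882
print(capacity_2003)   # -> 2079
```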

Structure of the Hungarian Cluster Grid – clusters connected by TotalGrid over 2.5 Gb/s Internet:
– 2002: 42 * 21-PC Linux clusters, 882 PCs in total
– 2003: 99 * 21-PC Linux clusters, 2079 PCs in total

Live demonstration of TotalGrid: the MEANDER Nowcast Program Package
– Goal: ultra-short-range forecasting (30 minutes) of dangerous weather situations (storms, fog, etc.)
– Method: analysis of all the available meteorological information to produce parameters on a regular mesh (10 km -> 1 km)
Collaborating partners:
– OMSZ (Hungarian Meteorological Service)
– MTA SZTAKI
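The analysis method above reduces scattered observations to values on a regular mesh. A hedged sketch of that idea using simple inverse-distance weighting (the operational MEANDER analysis, e.g. CANARI, is far more elaborate; the station data here are invented):

```python
def analyse(observations, nx, ny, dx):
    """observations: list of (x, y, value) station readings.
    Returns an ny x nx regular mesh of interpolated values,
    with grid spacing dx in both directions."""
    mesh = []
    for j in range(ny):
        row = []
        for i in range(nx):
            gx, gy = i * dx, j * dx
            wsum = vsum = 0.0
            for x, y, v in observations:
                d2 = (x - gx) ** 2 + (y - gy) ** 2
                w = 1.0 / (d2 + 1e-9)      # inverse-distance weight
                wsum += w
                vsum += w * v
            row.append(vsum / wsum)
        mesh.append(row)
    return mesh

# Two hypothetical stations; grid points near a station take its value.
obs = [(0.0, 0.0, 10.0), (10.0, 10.0, 20.0)]
mesh = analyse(obs, nx=2, ny=2, dx=10.0)
print(round(mesh[0][0], 1), round(mesh[1][1], 1))   # -> 10.0 20.0
```

Each of the independent grid points can be computed in parallel, which is what makes the analysis a natural fit for the Condor-PVM execution shown in the demo.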

Structure of MEANDER
Inputs: first-guess data (ALADIN), SYNOP data, satellite and radar images, lightning data.
Processing steps on the GRID: CANARI analysis, delta analysis, radar-to-grid conversion, satellite-to-grid conversion, lightning decode.
– Basic fields: pressure, temperature, humidity, wind
– Derived fields: type of clouds, visibility, etc.
Outputs for the current time: type of clouds, overcast, visibility, rainfall state.
Visualization: HAWK for meteorologists, GIF for end users.

P-GRADE version of MEANDER

Implementation of the Delta method in P-GRADE

Live demo of MEANDER based on TotalGrid – dataflow of the demo:
– netCDF input fetched from ftp.met.hu over a shared 512 kbit link
– PERL-GRID hands the data to a CONDOR-PVM job for parallel execution (34 Mbit shared link)
– P-GRADE / PERL-GRID job over a dedicated 11/5 Mbit link
– netCDF output visualized with HAWK

On-line performance visualization in TotalGrid
The same dataflow as in the live demo, extended with GRM: the parallel execution produces GRM traces alongside the netCDF output. (GRM was developed as the task of SZTAKI in the DataGrid project.)

PROVE visualization of the delta method

Conclusions
Already several important results that can be used both by:
– the academic world (Cluster Grid, SuperGrid)
– commercial companies (TotalGrid)
Further efforts and projects are needed to make these Grids:
– more robust
– more user-friendly
– richer in functionality

Thanks for your attention! Further information: