
The Ibis Project: Simplifying Grid Programming & Deployment Henri Bal Vrije Universiteit Amsterdam

The ‘Promise of the Grid’ Efficient and transparent (i.e. easy-to-use) wall-socket computing over a distributed set of resources [Sunderam ICCS’2004, based on Foster/Kesselman]

Parallel computing on grids ● Mostly limited to ● trivially parallel applications ● parameter sweeps, master/worker ● applications that run on one cluster at a time ● use grid to schedule application on a suitable cluster ● Our goal: run real parallel applications on a large-scale grid, using co-allocated resources

Efficient wide-area algorithms ● Latency-tolerant algorithms with asynchronous communication ● Search algorithms (Awari-solver [CCGrid'08]) ● Model checkers (DiVinE [PDMC'08]) ● Algorithms with hierarchical communication ● Divide-and-conquer ● Broadcast trees ● …

Reality: 'Problems of the Grid' ● Performance & scalability ● Heterogeneity ● Low-level & changing programming interfaces ● Connectivity issues ● Fault tolerance ● Malleability ● Writing & deploying grid applications is hard

The Ibis Project ● Goal: ● drastically simplify grid programming/deployment ● write and go!

Approach (1) ● Write & go: minimal assumptions about execution environment ● Virtual Machines (Java) deal with heterogeneity ● Use middleware-independent APIs ● Mapped automatically onto middleware ● Different programming abstractions ● Low-level message passing ● High-level divide-and-conquer

Approach (2) ● Designed to run in dynamic/hostile grid environment ● Handle fault-tolerance and malleability ● Solve connectivity problems automatically (SmartSockets) ● Modular and flexible: can replace Ibis components by external ones ● Scheduling: Zorilla P2P system or external broker

Global picture

Rest of talk ● Applications ● Satin: divide & conquer ● Communication layer (IPL) ● SmartSockets ● JavaGAT ● Zorilla P2P

Outline ● Grid programming ● IPL ● Satin ● SmartSockets ● Grid deployment ● JavaGAT ● Zorilla ● Applications and experiments

Ibis Design

Ibis Portability Layer (IPL) ● Java-centric “run-anywhere” library ● Sent along with the application (jar-files) ● Point-to-point, multicast, streaming, …. ● Efficient communication ● Configured at startup, based on capabilities (multicast, ordering, reliability, callbacks) ● Bytecode rewriter avoids serialization overhead

Serialization ● Based on bytecode rewriting ● Adds (de)serialization code to serializable types ● Prevents reflection overhead at runtime ● Toolchain: source → Java compiler → bytecode → bytecode rewriter → JVM
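The effect of the rewriter can be illustrated in plain Java: instead of reflecting over fields at runtime, generated code reads and writes each field explicitly. A minimal hand-written sketch of the idea (class and method names are made up for illustration, this is not Ibis's actual generated code):

```java
import java.io.*;

// A value type whose fields are (de)serialized explicitly,
// the way rewriter-generated code would do it. Illustrative only.
class Particle implements Serializable {
    double x, y, mass;

    Particle(double x, double y, double mass) {
        this.x = x; this.y = y; this.mass = mass;
    }

    // Direct per-field writes: no runtime reflection needed.
    void writeFields(DataOutputStream out) throws IOException {
        out.writeDouble(x);
        out.writeDouble(y);
        out.writeDouble(mass);
    }

    static Particle readFields(DataInputStream in) throws IOException {
        return new Particle(in.readDouble(), in.readDouble(), in.readDouble());
    }
}

public class SerializationSketch {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        new Particle(1.5, -2.0, 3.25).writeFields(out);

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        Particle p = Particle.readFields(in);
        System.out.println(p.x + " " + p.y + " " + p.mass);
    }
}
```

Hard-coding the field accesses at rewrite time is what avoids the reflection cost that makes standard Java serialization slow.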

Membership Model ● JEL (Join-Elect-Leave) model ● Simple model for tracking resources, supports malleability & fault-tolerance ● Notifications of nodes joining or leaving ● Elections ● Supports all common programming models ● Centralized and distributed implementations ● Broadcast trees, gossiping
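The JEL model can be sketched with a toy centralized registry: nodes join and leave, and the first candidate for a named election wins. This only illustrates the model; it is not the Ibis registry API, and all names are invented:

```java
import java.util.*;

// Toy centralized JEL registry: tracks joins/leaves and elects
// one winner per election name. Illustration of the model only.
class JelRegistry {
    private final List<String> members = new ArrayList<>();
    private final Map<String, String> elections = new HashMap<>();

    synchronized void join(String node)  { members.add(node); }
    synchronized void leave(String node) { members.remove(node); }

    // First candidate to run for a named election wins;
    // later callers just learn the existing winner.
    synchronized String elect(String name, String candidate) {
        return elections.computeIfAbsent(name, k -> candidate);
    }

    synchronized List<String> members() { return new ArrayList<>(members); }
}

public class JelSketch {
    public static void main(String[] args) {
        JelRegistry reg = new JelRegistry();
        reg.join("node0");
        reg.join("node1");
        String master = reg.elect("master", "node0");
        reg.elect("master", "node1");  // too late: node0 stays elected
        reg.leave("node1");            // malleability: a node departs
        System.out.println(master + " " + reg.members());
    }
}
```

Because joins, leaves, and elections are the only primitives, the same interface can be backed by a central server or by gossip/broadcast-tree implementations, as the slide notes.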

Programming models ● Remote Method Invocation (RMI) ● Group Method Invocation (GMI) ● MPJ (MPI Java 'standard') ● Satin (Divide & Conquer)

Satin: divide-and-conquer ● Divide-and-conquer is inherently hierarchical ● More general than master/worker ● Cilk-like primitives (spawn/sync) in Java ● Supports malleability and fault-tolerance ● Supports data-sharing between different branches through Shared Objects
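Satin's spawn/sync primitives mark recursive calls that may run in parallel. As a rough analogy in standard Java (not Satin syntax; Satin uses annotated methods rewritten by its compiler), the same divide-and-conquer shape looks like this with the fork/join framework:

```java
import java.util.concurrent.*;

// Fibonacci as divide-and-conquer: fork() plays the role of
// Satin's spawn, join() the role of sync. Analogy only.
class Fib extends RecursiveTask<Long> {
    final int n;
    Fib(int n) { this.n = n; }

    @Override protected Long compute() {
        if (n < 2) return (long) n;            // sequential base case
        Fib left = new Fib(n - 1);
        left.fork();                           // "spawn" a subtask
        long right = new Fib(n - 2).compute(); // compute other branch inline
        return left.join() + right;            // "sync" on the spawned work
    }
}

public class SatinStyleSketch {
    public static void main(String[] args) {
        long result = ForkJoinPool.commonPool().invoke(new Fib(20));
        System.out.println(result);
    }
}
```

The tree of subtasks is what makes the model more general than master/worker: workers generate further work recursively instead of draining one flat bag of tasks.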

Satin implementation ● Load-balancing is done automatically ● Cluster-aware Random Stealing (CRS) ● Combines Cilk’s Random Stealing with asynchronous wide-area steals ● Self-adaptive malleability and fault-tolerance ● Add/remove machines on the fly ● Survive crashes by efficient recomputations/checkpointing
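The essence of CRS can be sketched as victim selection: steal synchronously from random local peers while any have work, and only when the whole cluster looks idle issue at most one asynchronous wide-area steal. This is simplified pseudologic in plain Java, not Satin's implementation; all names are invented:

```java
import java.util.*;

// Simplified Cluster-aware Random Stealing: prefer cheap local steals,
// and overlap a single outstanding wide-area steal with local polling.
public class CrsSketch {
    static String nextStealTarget(List<String> localWithWork,
                                  List<String> remoteClusters,
                                  boolean wideAreaPending,
                                  Random rnd) {
        // Some local peer still has work: synchronous local steal.
        if (!localWithWork.isEmpty())
            return localWithWork.get(rnd.nextInt(localWithWork.size()));
        // Whole cluster idle: issue one asynchronous wide-area steal,
        // unless such a request is already in flight.
        if (!wideAreaPending && !remoteClusters.isEmpty())
            return remoteClusters.get(rnd.nextInt(remoteClusters.size()));
        return null; // keep polling locally while the WAN request is pending
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        // Local peer has work: steal locally.
        System.out.println(nextStealTarget(
                List.of("local1"), List.of("remoteA"), false, rnd));
        // Cluster idle, nothing outstanding: go wide-area.
        System.out.println(nextStealTarget(
                List.of(), List.of("remoteA"), false, rnd));
        // Cluster idle but a wide-area steal is pending: wait.
        System.out.println(nextStealTarget(
                List.of(), List.of("remoteA"), true, rnd));
    }
}
```

Keeping the wide-area steal asynchronous is the key point: the high WAN latency is overlapped with continued local stealing instead of stalling the thief.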

Self-adaptation with Satin ● Adapt #CPUs to level of parallelism ● Migrate work from overloaded to idle CPUs ● Remove CPUs with poor network connectivity ● Add CPUs dynamically when ● Level of parallelism increases ● CPUs were removed or crashed ● Can also remove/add entire clusters ● E.g., for network problems [Wrzesinska et al., PPoPP'07]

Approach ● Weighted Average Efficiency (WAE): WAE = 1/N * Σ_i speed_i * (1 - overhead_i) ● overhead_i = fraction of idle + communication time of CPU i ● speed_i = relative speed of CPU i (measured periodically) ● General idea: keep WAE between E_min (30%) and E_max (50%)
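The adaptation rule can be written out directly; the code below just restates the slide's formula and thresholds (E_min = 0.30, E_max = 0.50), with example speeds and overheads picked for illustration:

```java
// Weighted Average Efficiency and the resulting resize decision.
public class WaeSketch {
    static final double E_MIN = 0.30, E_MAX = 0.50;

    // WAE = (1/N) * sum_i speed_i * (1 - overhead_i), where
    // overhead_i is CPU i's fraction of idle + communication time.
    static double wae(double[] speed, double[] overhead) {
        double sum = 0.0;
        for (int i = 0; i < speed.length; i++)
            sum += speed[i] * (1.0 - overhead[i]);
        return sum / speed.length;
    }

    static String decide(double w) {
        if (w < E_MIN) return "remove CPUs"; // too much overhead per CPU
        if (w > E_MAX) return "add CPUs";    // spare parallelism available
        return "keep";
    }

    public static void main(String[] args) {
        double[] speed = {1.0, 1.0};      // relative CPU speeds
        double[] overhead = {0.25, 0.75}; // idle+communication fractions
        double w = wae(speed, overhead);
        System.out.println(w + " " + decide(w));
    }
}
```

With these numbers, WAE = (0.75 + 0.25) / 2 = 0.5, which sits at the upper threshold, so the CPU count is kept as-is.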

Overloaded network link ● Uplink of 1 cluster reduced to 100 KB/s ● Remove badly connected cluster, get new one (Figure: iteration duration per iteration)

Connectivity Problems ● Firewalls & Network Address Translation (NAT) restrict incoming traffic ● Addressing problems ● Machines with >1 network interface (IP address) ● Machine on a private network (e.g., NAT) ● No direct communication allowed ● E.g., between compute nodes and external world

SmartSockets library ● Detects connectivity problems ● Tries to solve them automatically ● With as little help from the user as possible ● Integrates existing and several new solutions ● Reverse connection setup, STUN, TCP splicing, SSH tunneling, smart addressing, etc. ● Uses network of hubs as a side channel
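One ingredient, smart addressing, boils down to classifying candidate addresses: a machine may have several IP addresses, and a private (NATed) address is useless to an outside peer. A minimal illustration of the idea using only the standard library (this is the concept, not SmartSockets code):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Classify candidate addresses: site-local (private) and loopback
// addresses only work inside the cluster; public ones are usable
// from outside, so they should be offered to external peers first.
public class AddressSketch {
    static boolean usableFromOutside(String ip) throws UnknownHostException {
        InetAddress a = InetAddress.getByName(ip); // literal IP, no DNS lookup
        return !a.isSiteLocalAddress() && !a.isLoopbackAddress();
    }

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(usableFromOutside("10.0.0.5"));   // NATed node
        System.out.println(usableFromOutside("127.0.0.1"));  // loopback
        System.out.println(usableFromOutside("130.37.0.1")); // public address
    }
}
```

When no usable address exists at all, the other mechanisms on the slide (reverse connection setup, hubs, SSH tunneling) take over.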

Example

[Maassen et al., HPDC'07]

Overview: JavaGAT, Zorilla P2P

JavaGAT ● GAT: Grid Application Toolkit ● Makes grid applications independent of the underlying grid infrastructure ● Used by applications to access grid services ● File copying, resource discovery, job submission & monitoring, user authentication ● API is currently standardized (SAGA) ● SAGA implemented on JavaGAT

Grid Applications with GAT ● Application calls the GAT API (File.copy(...), submitJob(...)) ● The GAT engine intelligently dispatches each call to a middleware adaptor (GridLab, Globus, Unicore, SSH, P2P, local), e.g. gridftp or globus ● Services: remote files, monitoring, info service, resource management [van Nieuwpoort et al., SC'07]
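The engine's intelligent dispatching is essentially one abstract operation fanned out over middleware-specific adaptors until one succeeds. A stripped-down sketch of that dispatch loop (interfaces and names invented for illustration; real JavaGAT is far richer):

```java
import java.util.List;

// One abstract operation, many middleware-specific adaptors;
// the engine tries them in order until one handles the call.
interface CopyAdaptor {
    String name();
    boolean copy(String src, String dst); // true = handled successfully
}

public class GatEngineSketch {
    static String copyFile(List<CopyAdaptor> adaptors, String src, String dst) {
        for (CopyAdaptor a : adaptors)
            if (a.copy(src, dst))
                return a.name();  // first adaptor that succeeds wins
        throw new RuntimeException("no adaptor could copy " + src);
    }

    public static void main(String[] args) {
        CopyAdaptor gridftp = new CopyAdaptor() {
            public String name() { return "gridftp"; }
            public boolean copy(String s, String d) { return false; } // e.g. no server
        };
        CopyAdaptor ssh = new CopyAdaptor() {
            public String name() { return "ssh"; }
            public boolean copy(String s, String d) { return true; }
        };
        System.out.println(copyFile(List.of(gridftp, ssh), "a.dat", "b.dat"));
    }
}
```

This is what makes applications middleware-independent: the caller sees only File.copy-style operations, never the middleware behind them.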

Zorilla: Java P2P supercomputing middleware

Zorilla components ● Job management ● Handling malleability and crashes ● Robust Random Gossiping ● Periodic information exchange between nodes ● Robust against firewalls, NATs, failing nodes ● Clustering: nearest neighbor ● Flood scheduling ● Incrementally search for resources at more and more distant nodes [Drost et al., HPDC'07]
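Flood scheduling, searching nearby nodes first and progressively more distant ones, can be sketched as a bounded-depth search over the overlay with an increasing radius. The graph and names below are invented toy data, not Zorilla's protocol:

```java
import java.util.*;

// Toy flood scheduling: grow the search radius over the overlay
// until enough nodes have been reached.
public class FloodSketch {
    static Set<String> withinRadius(Map<String, List<String>> overlay,
                                    String start, int radius) {
        Set<String> seen = new HashSet<>(List.of(start));
        List<String> frontier = List.of(start);
        for (int hop = 0; hop < radius; hop++) {
            List<String> next = new ArrayList<>();
            for (String n : frontier)
                for (String nb : overlay.getOrDefault(n, List.of()))
                    if (seen.add(nb)) next.add(nb);
            frontier = next;
        }
        return seen;
    }

    // Smallest radius that reaches 'wanted' nodes (assumes it is reachable).
    static int radiusNeeded(Map<String, List<String>> overlay,
                            String start, int wanted) {
        for (int r = 0; ; r++)
            if (withinRadius(overlay, start, r).size() >= wanted)
                return r;
    }

    public static void main(String[] args) {
        Map<String, List<String>> overlay = Map.of(
                "a", List.of("b", "c"),
                "b", List.of("d"),
                "c", List.of(),
                "d", List.of());
        // Need 4 nodes: one hop reaches {a,b,c}; two hops add d.
        System.out.println(radiusNeeded(overlay, "a", 4));
    }
}
```

Combined with nearest-neighbor clustering, this keeps most scheduling traffic among nearby, well-connected nodes.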

Overview

Ibis applications ● e-Science (VL-e) ● Brain MEG-imaging ● Mass spectrometry ● Multimedia content analysis ● Various parallel applications ● SAT-solver, N-body, grammar learning, … ● Other programming systems ● Workflow engine for astronomy (D-Grid), grid file system, ProActive, Jylab, …

Overview experiments ● DAS-3: Dutch Computer Science grid ● Satin applications on DAS-3 ● Zorilla desktop grid experiment ● Multimedia content analysis ● High resolution video processing

DAS-3 ● AMD Opteron nodes, 792 cores, 1 TB memory in total ● LAN: Myrinet 10G, Gigabit Ethernet ● WAN (StarPlane): Gb/s OPN ● Heterogeneous: clock speeds (GHz) and single/dual-core nodes vary; Delft has no Myrinet

Gene sequence comparison in Satin (on DAS-3) ● Divide&conquer scales much better than master-worker ● 78% efficiency on 5 clusters (with 1462 WAN-msgs/sec) (Figures: speedup on 1 cluster; run times on 5 clusters)

Barnes-Hut (Satin) on DAS-3 ● Shared-object extension to the D&C model improves scalability ● 57% efficiency on 5 clusters (with 1371 WAN-msgs/sec) (Figures: speedup on 1 cluster; run times on 5 clusters)

Zorilla Desktop Grid Experiment ● Small experimental desktop grid setup ● Student PCs running Zorilla overnight ● PCs with 1 CPU, 1GB memory, 1Gb/s Ethernet ● Experiment: gene sequence application ● 16 cores of DAS-3 with Globus ● 16 core desktop grid with Zorilla ● Combination, using Ibis-Deploy

Ibis-Deploy deployment tool ● Easy deployment with Zorilla, JavaGAT & Ibis-Deploy (Figure: measured run times of 3574 sec, 1099 sec, and 877 sec)

Multimedia content analysis ● Analyzes video streams to recognize objects ● Extract feature vectors from images ● Describe properties (color, shape) ● Data-parallel task implemented with C++/MPI ● Compute on consecutive images ● Task-parallelism on a grid
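Feature extraction of the kind described above can be illustrated with a tiny color histogram: map each pixel to a coarse color bin and count. This is plain Java and far cruder than the real Parallel-Horus operators; it only shows what "extract a feature vector describing color" means:

```java
import java.util.Arrays;

// Toy feature vector: an 8-bin color histogram over packed RGB
// pixels, binning each channel to a single bit.
public class HistogramSketch {
    static int[] colorHistogram(int[] rgbPixels) {
        int[] bins = new int[8];
        for (int p : rgbPixels) {
            int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
            // High bit of each channel -> 3-bit bin index 0..7.
            int bin = ((r >> 7) << 2) | ((g >> 7) << 1) | (b >> 7);
            bins[bin]++;
        }
        return bins;
    }

    public static void main(String[] args) {
        // Two red pixels, one blue, one white.
        int[] pixels = { 0xFF0000, 0xFF0000, 0x0000FF, 0xFFFFFF };
        System.out.println(Arrays.toString(colorHistogram(pixels)));
    }
}
```

Since each image's vector is computed independently, frames can be farmed out across clusters, which is exactly the task-parallelism the slide describes.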

MMCA application ● Client (Java) on a local desktop machine ● Broker (Java) ● Parallel Horus servers (C++) on any machine world-wide ● Connected via Ibis (Java) over the grid

MMCA with Ibis ● Initial implementation with TCP was unstable ● Ibis simplifies communication, fault tolerance ● SmartSockets solves connectivity problems ● Clickable deployment interface ● Demonstrated at many conferences (SC'07) ● 20 clusters on 3 continents, … cores ● Frame rate increased from 1/30 to 15 frames/sec [Seinstra et al., IEEE Multimedia'07]

MMCA: 'Most Visionary Research' award at AAAI 2007 (Frank Seinstra et al.)

High Resolution Video Processing ● Realtime processing of CineGrid movie data ● 3840×… at … fps = 1424 MB/sec ● Multi-cluster processing pipeline ● Using DAS-3, StarPlane and Ibis

CineGrid with Ibis ● Use of StarPlane requires no configuration ● StarPlane is connected to local Myrinet network ● Detected & used automatically by SmartSockets ● Easy setup of application pipeline ● Connection administration of application is simplified by the IPL election mechanism ● Simple multi-cluster deployment (Ibis-Deploy) ● Uses Ibis serialization for high throughput

Summary ● Goal: Simplify grid programming/deployment ● Key ideas in Ibis ● Virtual machines (JVM) deal with heterogeneity ● High-level programming abstractions (Satin) ● Handle fault-tolerance, malleability, connectivity problems automatically (Satin, SmartSockets) ● Middleware-independent APIs (JavaGAT) ● Modular

Acknowledgements Current members Rob van Nieuwpoort Jason Maassen Thilo Kielmann Frank Seinstra Niels Drost Ceriel Jacobs Kees Verstoep Roelof Kemp Kees van Reeuwijk Past members John Romein Gosia Wrzesinska Rutger Hofman Maik Nijhuis Olivier Aumage Fabrice Huet Alexandre Denis

More information ● Ibis can be downloaded from … ● Papers: Satin [PPoPP'07], SmartSockets [HPDC'07], Gossiping [HPDC'07], JavaGAT [SC'07], MMCA [IEEE Multimedia'07] ● Ibis tutorials: next one at CCGrid 2008 (19 May, Lyon)