
1 MSA’2000 Metacomputing Systems and Applications

2 Organizing Committee
– F. Desprez, INRIA Rhône-Alpes
– E. Fleury, INRIA Lorraine
– J.-F. Méhaut, INRIA Rhône-Alpes
– Y. Robert, ENS Lyon
www.ens-lyon.fr/LIP/

3 Program Committee
– H. Bal, Vrije Universiteit, Amsterdam
– F. Berman, UC San Diego
– J. Dongarra, UT Knoxville & ORNL
– G. von Laszewski, Argonne
– T. Ludwig, TUM München
– T. Priol, INRIA Rennes
– M. Resch, Stuttgart
plus the Organizing Committee

4 The Grid: Blueprint for a New Computing Infrastructure
I. Foster, C. Kesselman (Eds), Morgan Kaufmann, 1999. ISBN 1-55860-475-8
22 chapters by expert authors, including Andrew Chien, Jack Dongarra, Tom DeFanti, Andrew Grimshaw, Roch Guerin, Ken Kennedy, Paul Messina, Cliff Neuman, Jon Postel, Larry Smarr, Rick Stevens, and many others

5 Bibliography
Web:
– NPACI (National Partnership for Advanced Computational Infrastructure): www.npaci.edu
– GrADS (Grid Application Development Software Project): hipersoft.cs.rice.edu/grads
– “An Overview of Computational Grids and Survey of a Few Research Projects”, Jack Dongarra: www.netlib.org/utk/people/JackDongarra/talks.html
LIP Report 99-36:
– “Algorithms and Tools for (Distributed) Heterogeneous Computing: A Prospective Report”: www.ens-lyon.fr/~yrobert

6 Framework

7 Metacomputing
Future of parallel computing: distributed and heterogeneous
Metacomputing = making use of distributed collections of heterogeneous platforms
Target = tightly-coupled high-performance distributed applications (rather than loosely-coupled cooperative applications)

8 Metacomputing Platforms (1)
Low end of the field: cluster computing with heterogeneous networks of workstations or PCs
– Ubiquitous in university departments and companies
– The typical poor man’s parallel computer
– Running large PVM or MPI experiments
– Makes use of all available resources: slower machines in addition to more recent ones
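A minimal sketch of the idea of using slower machines alongside faster ones: split a batch of independent tasks in proportion to each machine's relative speed, so the slow workstations contribute without becoming the bottleneck. The speed figures and the function name are illustrative, not part of PVM or MPI.

```python
# Sketch: distribute N independent tasks across heterogeneous machines in
# proportion to their (hypothetical) relative speeds.

def proportional_split(n_tasks, speeds):
    """Return a task count per machine, proportional to its speed."""
    total = sum(speeds)
    # Ideal (fractional) share for each machine.
    shares = [n_tasks * s / total for s in speeds]
    counts = [int(share) for share in shares]
    # Hand leftover tasks to the machines with the largest remainders.
    by_remainder = sorted(range(len(speeds)),
                          key=lambda i: shares[i] - counts[i], reverse=True)
    for i in by_remainder[:n_tasks - sum(counts)]:
        counts[i] += 1
    return counts

# Example: one recent PC twice as fast as two older workstations.
print(proportional_split(100, [2.0, 1.0, 1.0]))  # -> [50, 25, 25]
```

Static splits like this are only a starting point; as later slides note, resource characteristics change at runtime, so a real system must re-balance dynamically.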

9 Metacomputing Platforms (2)
High end of the field: a computational grid linking the most powerful supercomputers of the largest supercomputing centers through dedicated high-speed networks
Middle of the field: connecting medium-size parallel servers (equipped with application-specific databases and application-oriented software) through fast but non-dedicated links, thus creating a “meta-system”

10 High end: Gusto

11 Low end (1)
Distributed ASCI Supercomputer (DAS)
– Common platform for research
– (Wide-area) parallel computing and distributed applications
– November 1998; 4 universities, 200 nodes
– Node: 200 MHz Pentium Pro, 128 MB memory, 2.5 GB disk, Myrinet 1.28 Gbit/s (full duplex), operating system BSD/OS
– ATM network

12 Low end (2)

13 Administrative Issues
Intensive computations on a set of processors across several countries and institutions
– Strict rules define the (good) usage of shared resources
A major difficulty is to avoid a large increase in the administrative overhead
– Challenge = find a tradeoff that does not increase the administrative load while preserving the users’ security
These rules must be guaranteed by the runtime system, together with methods to migrate computations to other sites whenever some local request is raised

14 Tomorrow’s Virtual Super-Computer
Metacomputing applications will execute on a hierarchical grid
– Interconnection of clusters scattered all around the world
A fundamental characteristic of the virtual super-computer:
– A set of strongly heterogeneous and geographically scattered resources

15 Algorithmic and Software Issues (1)
Whereas the architectural vision is clear, the software developments are not so well understood

16 Algorithmic and Software Issues (2)
Low end of the field:
– Cope with heterogeneity
– A major algorithmic effort remains to be undertaken
High end of the field:
– Logically assemble the distributed computers: extensions to PVM and MPI to handle distributed collections of clusters
– Configuration and performance optimization:
  - Inherent complexity of networked and heterogeneous systems
  - Resources often identified only at runtime
  - Dynamic nature of resource characteristics

17 Algorithmic and Software Issues (3)
High-performance computing applications must:
– Configure themselves to fit the execution environment
– Adapt their behavior to subsequent changes in resource characteristics
Parallel environments have focused on strongly homogeneous architectures (processor, memory, network):
– Array and loop distribution, parallelizing compilers, HPF constructs, gang scheduling, MPI
However… metacomputing platforms are strongly heterogeneous!

18 Programming environments

19 Programming models (1)
Extensions of MPI:
– MPI_Connect, Nexus, PACX-MPI, MPI-Plus, Data-Exchange, VCM, MagPIe, …
Globus: a layered approach
– Fundamental layer = a set of core services, including resource management, security, and communications, that enables the linking and interoperation of distributed computer systems

20 Programming models (2)
Object-oriented technologies to cope with heterogeneity:
– Encapsulate technical “details” such as protocols, data representations, migration policies
– Legion builds on Mentat, an object-oriented parallel processing system
– Albatross relies on a high-performance Java system with a very efficient implementation of Java Remote Method Invocation

21 Programming models (3)
Far from achieving the ultimate goal:
– Using computing resources remotely and transparently, just as we do with electricity, without knowing where it comes from

22 References
Globus: www.globus.org
Legion: www.cs.virginia.edu/~legion
Albatross: www.cs.vu.nl/~bal/albatross
AppLeS: www-cse.ucsd.edu/groups/hpcl/apples/apples.html
NetSolve: www.cs.utk.edu/netsolve

23 Algorithmic issues

24 Data Decomposition Techniques for Cluster Computing
The block-cyclic distribution paradigm = the preferred layout for data-parallel programs (HPF, ScaLAPACK)
It evenly balances the total workload only if all processors have the same speed
Extending ScaLAPACK to heterogeneous clusters turns out to be surprisingly difficult
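To make the balance claim concrete, here is a sketch of a 1-D block-cyclic mapping in the style used by HPF and ScaLAPACK: consecutive blocks of b columns are dealt out round-robin to the p processors. The function names are illustrative, not the library API.

```python
# Sketch of a 1-D block-cyclic column distribution: block `col // b`
# goes to processor `(col // b) mod p`.

def block_cyclic_owner(col, b, p):
    """Processor owning column `col` for block size `b` on `p` processors."""
    return (col // b) % p

def columns_per_processor(n, b, p):
    """How many of the `n` columns each processor ends up owning."""
    counts = [0] * p
    for col in range(n):
        counts[block_cyclic_owner(col, b, p)] += 1
    return counts

# 16 columns, block size 2, 4 processors: 4 columns each -- a perfect
# balance, but only if every processor runs at the same speed.
print(columns_per_processor(16, 2, 4))  # -> [4, 4, 4, 4]
```

The mapping itself is simple; the difficulty the slide points to is that on a heterogeneous cluster an equal column count is no longer an equal workload, and choosing a distribution that balances actual running times is much harder.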

25 Algorithmic Challenge
Bad news: designing a matrix-matrix product or a dense linear solver proves a hard task on a heterogeneous cluster!
Next problems:
– Simple linear algebra kernels on a collection of clusters (extending the platform)
– More ambitious routines, composed of a variety of elementary kernels, on a heterogeneous cluster (extending the application)
– Implementing more ambitious routines on more ambitious platforms (extending both)

26 Collections of clusters (1)
[diagram: clusters with fast internal links, joined by a slower inter-cluster link]

27 Conclusion

28 (A) Algorithmic issues
The difficulties seem largely underestimated
Data decomposition, scheduling heuristics, and load balancing become extremely difficult in the context of metacomputing platforms
The research community focuses on low-level communication protocols and distributed-system issues (light-weight process invocation, migration, …)

29 (B) Programming level
Which is the right level?
– Data-parallelism: unrealistic, due to heterogeneity
– Explicit message passing: too low-level
– Object-oriented approaches: still require the user to have a deep knowledge of both the application’s behavior and the underlying resources
– Remote computing systems (NetSolve): face severe limitations in efficiently load-balancing the work
– Relying on specialized but highly-tuned libraries of all kinds may prove a good trade-off
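The remote-computing idea can be sketched as follows: a client asks an agent which server should run a request, and the agent picks the one with the lowest estimated load. Everything here is hypothetical (the host names, the load metric, and the function) and is only a caricature of what a system like NetSolve actually does.

```python
# Sketch of agent-based server selection in the remote-computing style:
# pick the server with the smallest load estimate.

def pick_server(servers):
    """Return the name of the server with the lowest estimated load."""
    return min(servers, key=lambda s: s["load"])["name"]

servers = [                                   # hypothetical hosts and loads
    {"name": "serverA", "load": 0.8},
    {"name": "serverB", "load": 0.3},
    {"name": "serverC", "load": 0.5},
]
print(pick_server(servers))  # -> serverB
```

The load-balancing limitation mentioned above shows up even in this toy: the agent's load estimates are stale by the time the request arrives, and a single snapshot says nothing about how long the request itself will run on a heterogeneous server.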

30 (C) Applications
Key applications (from scientific computing to databases) have dictated the way classical parallel machines are used, programmed, and even updated into more efficient platforms
Key applications will strongly influence, or even guide, the development of metacomputing environments

31 (C) Applications (cont’d)
Which applications will be worth the abundant but hard-to-access resources of the grid?
– Tightly-coupled grand challenges?
– Mobile computing applications?
– Micro-transactions on the Web?
All these applications require new programming paradigms to enable inexperienced users to access the magic grid!

32 Today’s program

33 Session 1: Communication and Metacomputing Infrastructures
9:00-10:00, Metacomputing in a High Performance Computing Center (invited talk), M. Resch
10:30-11:00, Scheduling Algorithms for Efficient Gather Operation in Distributed Heterogeneous Systems, Jun-ichi Hatta & Susumu Shibusawa
11:00-11:30, Applying and Monitoring Latency Based Metacomputing Infrastructures, Philipp Drum & Günther Rackl
11:30-12:00, MPC: A New Message Passing Library in Corba, T. Es-sqally, J. Guyard & E. Fleury

34 Session 2: Scientific Applications and Distributed Computing
14:00-15:00, The NetSolve Environment: Progressing Towards a Seamless Grid (invited talk), D. Arnold & J. Dongarra
15:30-16:00, Specification of a Scilab Meta-Computing Extension, S. Contassot-Vivier, F. Lombard, J.-M. Nicod & L. Philippe
16:00-16:30, Extending WebCom: A Proposed Framework for Web based Distributed Computing, J. P. Morrison, J. J. Kennedy & D. A. Power
16:30-17:30, Panel discussion

