(1) Cluster computing, (2) Grid computing: Part 4, Current trends in parallel processing

Presentation transcript:

1 (1) Cluster computing, (2) Grid computing: Part 4, Current trends in parallel processing

2 Traditional parallel processing
- Vector type, shared-memory type, or distributed-memory type
- Special technology and architecture
- Very expensive (millions to hundreds of millions of dollars), long development period
- One computer used by many users
- Special OS and programming languages
- Limited users

3 New trend of parallel processing
- Cluster computing
- Grid computing
Features:
- Cheap to construct processing environments
- Easy to popularize because ordinary computers are used

4 Cluster processing
What is cluster processing?
- A number of workstations or PCs are used to construct a virtual parallel computer.
- Compared with dedicated parallel computers, cluster processing is cheap.
- Performance is worse than that of dedicated parallel computers, but it can be largely improved by using a high-performance network (100BASE-TX, ATM, etc.).

5 Outline of global computing
What is global (grid) computing?
- All the resources on the Internet are used to construct a super distributed parallel virtual computer.
(Diagram: intranet + Internet connecting super parallel computers, PC clusters, high-performance PCs, and users.)

6 Implementation of cluster processing (1)
Well-known software for cluster processing:
- PVM (Parallel Virtual Machine)
- MPI (Message Passing Interface)
History of PVM:
- 1989: PVM 1.0 released by Oak Ridge National Laboratory
- PVM 2.0 released by the University of Tennessee
- PVM 3.0 released, published with a free manual
- 2001: PVM release with C and Fortran support

7 Implementation of cluster processing (2)
History of MPI:
- The MPI-1 specification was decided by experts from 40 companies.
- The MPI-2 specification was decided.
- MPI-1.1-based implementations were released and are used as the default environment in many distributed parallel processing systems.

8 Common points of PVM and MPI
- Communication is based on one-to-one message passing.
- They are designed not only for workstations and PCs but also for parallel computers.
- They support many operating systems and languages (OS: UNIX, Windows 95/NT; languages: C, Fortran, Java).
- Free.
- Standard software in distributed and parallel computing.
(Diagram: the processor with ID 3 executes send(5, message); the processor with ID 5 executes receive(3).)
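To make the one-to-one send/receive style concrete, here is a minimal sketch in C using the standard MPI point-to-point calls (the rank numbers 0 and 1 and the message text are chosen for illustration; this code is not from the original slides):

  /* minimal one-to-one message passing with MPI (sketch) */
  #include <mpi.h>
  #include <stdio.h>
  #include <string.h>

  int main(int argc, char **argv)
  {
      int rank;
      char msg[64];

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* my processor (rank) ID */

      if (rank == 0) {                           /* sender */
          strcpy(msg, "hello");
          MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {                    /* receiver */
          MPI_Recv(msg, sizeof(msg), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                   MPI_STATUS_IGNORE);
          printf("rank 1 received: %s\n", msg);
      }

      MPI_Finalize();
      return 0;
  }

Compiled with mpicc and run with mpirun -np 2, rank 0 plays the role of send(...) and rank 1 the role of receive(...) in the diagram above.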

9 Construction of PVM
PVM software consists of:
- The daemon pvmd, used for communication between processors.
- The console pvm, used for constructing the virtual parallel computer (xpvm is the console combined with a GUI).
- A library of functions such as pvm_send and pvm_recv for sending and receiving messages.
On any computer, one can use pvm or xpvm to construct a virtual parallel computer, and then build and execute programs.
(Diagram: a virtual parallel computer constructed by connecting 4 computers, optima, opty1, opty2, and opty3, each running pvmd.)

10 Start of PVM (1)
(1) On any computer, use the command pvm or xpvm to start pvmd.
Example: on optima, use pvm or xpvm to start pvmd.

11 Start of PVM (2)
(2) On the console pvm, add computers to construct a virtual parallel computer. (At this time, pvmd is started on these computers.)
Example: on optima, add opty1, opty2, and opty3 to the virtual parallel computer; pvmd is started on each of them. A hostfile can be used instead, as sketched below.
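As an alternative to typing add commands at the console, the PVM console can read a hostfile when it is started; a minimal sketch using the machine names from these slides (plain hostnames only, no extra options):

  # hostfile: one host per line; start the console with "pvm hostfile"
  optima
  opty1
  opty2
  opty3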

12 Execution of PVM (1)
Program execution on PVM:
(1) When a program that contains PVM communication functions is executed, the specified programs are started by pvmd.
Example: when the program prog1, which contains communication functions, is executed on optima, the corresponding process (in this case prog1) is started on all processors by pvmd.
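The starting of these processes is done with the library call pvm_spawn; a minimal sketch of a master that only starts copies of prog1 (the program name and the count of 4 copies are illustrative, not from the slides):

  /* sketch: master that spawns 4 copies of prog1 on the virtual machine */
  #include <stdio.h>
  #include "pvm3.h"

  int main(void)
  {
      int tids[4];   /* task IDs of the started copies */
      int started = pvm_spawn("prog1", (char **)0, PvmTaskDefault, "", 4, tids);
      printf("started %d copies of prog1\n", started);
      pvm_exit();
      return 0;
  }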

13 Execution of PVM (2)
(2) Once execution of a program has started, communication and processing inside the program are carried out automatically until the end.
Processing proceeds with communication between the processors.

14 Using PVM (1)
Using the command pvm:

chen[11]% pvm
pvm> add opty1 opty2 opty3     (add opty1, opty2, opty3 to the virtual computer)
3 successful
     HOST    DTID
    opty1     ...
    opty2   c0000
    opty3     ...
pvm> conf                      (show the configuration of the virtual parallel computer)
4 hosts, 1 data format
     HOST    DTID    ARCH   SPEED    DSIG
   optima     ...  X86SOL     ...   0x...
    opty1     ...  X86SOL     ...   0x...
    opty2   c0000  X86SOL     ...   0x...
    opty3     ...  X86SOL     ...   0x...
pvm> spawn edm4                (execute program edm4 on the virtual computer)
1 successful
t80001
pvm> halt                      (halt the virtual computer)
Terminated
chen[12]%

15 Using PVM (2) Using xpvm (X application)

16 PVM programming (1) Example (hello.c, hello_other.c)
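The listings for hello.c and hello_other.c are not reproduced in this transcript; below is a sketch in the spirit of the standard PVM hello example, using the real PVM 3 library calls but not necessarily the exact code shown on the slide:

  /* hello.c: master; spawns hello_other and prints the message it sends back */
  #include <stdio.h>
  #include "pvm3.h"

  int main(void)
  {
      int tid;                          /* task ID of the spawned worker */
      char buf[100];

      printf("i'm t%x\n", pvm_mytid());

      if (pvm_spawn("hello_other", (char **)0, PvmTaskDefault, "", 1, &tid) == 1) {
          int bufid = pvm_recv(-1, -1);              /* wait for any message */
          pvm_bufinfo(bufid, (int *)0, (int *)0, &tid);
          pvm_upkstr(buf);                           /* unpack the string */
          printf("from t%x: %s\n", tid, buf);
      } else {
          printf("cannot start hello_other\n");
      }
      pvm_exit();
      return 0;
  }

  /* hello_other.c: worker; sends a greeting back to the parent task */
  #include <string.h>
  #include "pvm3.h"

  int main(void)
  {
      int ptid = pvm_parent();          /* task ID of the task that spawned us */
      char buf[100];

      strcpy(buf, "Hello. How are you?");
      pvm_initsend(PvmDataDefault);     /* initialize a send buffer */
      pvm_pkstr(buf);                   /* pack the string */
      pvm_send(ptid, 1);                /* send to the parent with message tag 1 */

      pvm_exit();
      return 0;
  }

Here hello spawns hello_other, which packs a string into a send buffer and sends it back to its parent; hello receives the message and prints it.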

17 PVM Programming (2)
(Diagram: execution flow of the hello / hello_other example)
- The process hello is started.
- With pvm_spawn, the process hello_other is started on another computer; the started process receives its own task ID.
- A greeting ("Hello, Prof. t4000a. How are you?") is sent with pvm_send.
- The message is received with pvm_recv, and the greetings "Hello, Dr. t80004" and "Fine, thank you, Dr. t80004" are exchanged between the two tasks.
- The processes finish with pvm_exit.

18 Grid computing (1)
Existing distributed and parallel processing techniques:
- TCP/IP protocols such as telnet and rlogin
- Cluster processing such as PVM and MPI
- WWW-related techniques such as HTTP/HTML/CGI
- Agent-related techniques such as Plangent and Aglets
- ...
They cannot be used for grid computing without changes (though some of them can be reused).

19 Grid computing (2)
Abilities necessary for grid computing:
- Generality: easy to add computers
- Security: unknown users, mobile users
- Fault tolerance
- Heterogeneity: languages, OS, hardware, security levels
- High performance: super-high performance, super-high-speed networks
- Scalability: very large scale

20 Grid computing (3)
History of grid computing:
- 1980s: distributed computing
- 1990s: gigabit networks
- I-way, 1995
- Application Alliance (NCSA), Virtual Machine Room
- PACIs (NCSA/SDSC, NSF National Technology Grid), 1998 onwards
- NASA Information Power Grid, 1999 onwards
- eGrid (European Grid), 2000 onwards
- ApGrid (Asia-Pacific Grid), 2000 onwards
- ...

21 Implementation of grid computing
Software for grid computing:
- Toolkits: Globus, Legion, AppLeS
- Message passing: MPICH-G2
- GridRPC: Ninf, NetSolve, Nimrod
- Business software: United Devices, Entropia, Parabon, ...
Ninf was developed in Japan.

22 Achievements of Ninf
- Solving large-scale nonconvex quadratic programming problems by the SCRM (Kojima-Tuncel-Takeda) method: a world record was achieved using a large-scale cluster (128 processors).
- Solving cyclic polynomial equations by the homotopy method: a world record was achieved using a large-scale cluster (128 processors).

23 Summary (1)
Changes in parallel processing:
- From parallel computers to cluster processing and grid computing
- Combined with high-performance networks
- Using popular technologies
- Using the Internet, which has huge processing ability
- Parallel processing becomes possible for ordinary users
- Advanced technologies are still necessary

24 Summary (2)
Cluster processing:
- Widely used technologies are applied to parallel processing.
- Parallel processing becomes possible for ordinary users.
Grid computing:
- Infrastructure technology using state-of-the-art broadband networks
- Combination of network technology and computing technology
- Great progress in computer-related fields
Necessity of comprehensive research:
- Parallel processing, languages, OS
- Networks
- Applications