Parallel Processing LAB NO 1.


Parallel Processing
The ability to carry out multiple operations or tasks at the same time using more than one processor or processor core.
Differs from multitasking or multiprogramming, in which a single CPU executes several programs at the same time.

Distributed Computing
Multiple autonomous computers that communicate through a computer network to achieve a common goal.
In parallel computing, all processors have access to a shared memory, which can be used to exchange information between processors.
In distributed computing, each processor has its own private memory (distributed memory); information is exchanged by passing messages between processors.

Cluster Computing
A computer cluster is a set of loosely connected computers that work together so that they can be viewed as a single system.
Clusters are used for either load balancing or high availability.

Grid Computing
Applying the resources of many computers (from multiple administrative domains) in a network to a single problem at the same time.
Although a grid can be dedicated to a specialized application, it is more common for a single grid to be used for a variety of purposes.

Cluster vs Grid
Homogeneous system vs heterogeneous system.
Tight coupling vs loose coupling.
Grids are inherently distributed and spread over LANs, MANs, or WANs.
Single system image vs independent, autonomous nodes.
In a cluster, the entire system behaves like a single system, with resources managed by a centralized resource manager.
In a grid, every node is autonomous, has its own resource manager, and behaves like an independent entity.

Parallel Computing Memory Architectures
Shared Memory
Can be accessed simultaneously by multiple processes.
Programs may run on a single processor or on multiple processors.
In multi-processor systems, a large block of RAM can be accessed by several CPUs.
Distributed Memory
Each processor in a multi-processor computer system has its own private memory.
MPI and socket programming are the main tools used in a distributed memory environment.

MPI - Message Passing Interface
A library of functions, i.e. an API.
Allows communication between processors in a distributed memory architecture.
Allows portability across platforms.
Used with either a C or a Fortran compiler.

MPI Data Types

MPI datatype handle    C datatype
MPI_INT                int
MPI_SHORT              short
MPI_LONG               long
MPI_FLOAT              float
MPI_DOUBLE             double
MPI_CHAR               char
MPI_BYTE               unsigned char
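As a sketch of how these handles are used in practice (the variable name and the value sent below are illustrative, not part of the lab sheet), the following program passes a single C int from process 0 to process 1 using the MPI_INT handle; it needs at least two processes to run:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, value = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process */

        if (rank == 0) {
            value = 42;                          /* illustrative data to send */
            /* send 1 element of type MPI_INT to rank 1 with tag 0 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* receive 1 element of type MPI_INT from rank 0 with tag 0 */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Process 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }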

MPI Program

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);    /* initialize the MPI environment */

        /* ... program code goes here ... */

        MPI_Finalize();            /* clean up the MPI environment */
        return 0;
    }
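Building on this skeleton, a minimal hello-world sketch (the printed message is our own illustration) uses MPI_Comm_rank and MPI_Comm_size to query the rank of the calling process and the total number of processes:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                  /* start the MPI environment */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* id of the calling process */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                          /* shut down the MPI environment */
        return 0;
    }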

MPI_Init
[Slide diagram: five processes (0-4) connected through a network once MPI_Init has been called.]

MPI_Finalize
[Slide diagram: the same five processes (0-4) and the network, shown as MPI_Finalize shuts the MPI environment down.]

Compiling and Running
Compile:  mpicc -o <executable file name> <source file name>
Run:      mpirun <executable file name>
Run with a specified number of processes:  mpirun -np <number of processes> <executable file name>
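For example, assuming the hello-world program above is saved as hello.c (a hypothetical file name), a session might look like this:

    mpicc -o hello hello.c
    mpirun -np 4 ./hello

With -np 4, mpirun launches the same executable as four processes, each of which receives a different rank.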