Introduction to Parallel Processing – Lecture No. 2, 29.10.01: Hands-on MPI on the Parallel Cluster


Introduction to Parallel Processing – Lecture No. 2: Hands-on MPI on the Parallel Cluster

Today's Goals
- Verify that every pair has a user account on the parallel cluster.
- Practice basic tasks under the Linux operating system.
- Run basic parallel programs that use MPI.

Objectives: bringing everyone up to the same level; a first acquaintance with MPI.

Basic Linux Commands – 1/5
login: username, password: passwd – enter the system
exit – exit the system
pico, vi, Emacs – text editors
gcc –o file file.c – C compiler

Basic Linux Commands – 2/5 (Linux / DOS equivalent)
ls, ls -l / dir – list files
cp / copy – copy files
rm / del – erase files

Basic Linux Commands – 3/5 (Linux / DOS equivalent)
mkdir – make directory
rmdir – remove directory
mv / rename – move/rename files
uname -a / ver – OS version

Basic Linux Commands – 4/5
Getting help: man topic
Looking at the contents of a file: more
Quit from man or more: q
Where am I? pwd
Clear the screen: clear
Print the contents of a file: cat

Basic Linux Commands – 5/5
Redirection: >, >> (e.g. ls -l > listing.txt)
Pipe: | (e.g. ls -l | more)
telnet, ftp, ping, chmod, chown

Linux FAQ

The vi Editor
ESC – puts you in command mode
h, j, k, l – left, down, up, right (or use the arrow keys)
w, W, b, B – forward, backward by word
0, $ – first, last position of current line
/pattern – search forward for pattern
?pattern – search backward for pattern
n, N – repeat last search in same, opposite direction
x – delete character
dd – delete current line
D – delete to end of line
dw – delete word
p, P – put deleted text after, before cursor
u – undo last command
. – repeat the last command
i, a – insert text before, after cursor [puts you into INPUT MODE]
o, O – open new line for text below, above cursor [puts you into INPUT MODE]
ZZ – save file and quit
:w – save file
:q! – quit without saving changes

Our Parallel Cluster: The Dwarves
There are 12 computers running the Linux operating system: dwarf[1-12] or dwarf[1-12]m.
dwarf1[m], dwarf3[m]-dwarf7[m] – Pentium II 300 MHz
dwarf9[m]-dwarf12[m] – Pentium III 450 MHz (dual CPU)
dwarf2[m], dwarf8[m] – Pentium III 733 MHz (dual CPU)

The Dwarves Networking: Two Kinds of NICs
dwarf1..dwarf12 – node names for the Fast Ethernet link
dwarf1m..dwarf12m – node names for the Myrinet network

The Dwarves' IP Addresses
Fast Ethernet: *, where * is between 111 and 122
Myrinet: *, where * is between 161 and 172

Connecting to the Dwarves

Exercise 1
Connect to one of the stations using telnet.
Write a short computer program, such as Hello World.
Compile: gcc –o hello_world hello_world.c
Run the program and save the output: % ./hello_world > hello.txt
Check the output with: more hello.txt

Solution to Exercise 1 – 1/3

Solution to Exercise 1 – 2/3

Solution to Exercise 1 – 3/3

What is message passing?
- Data transfer plus synchronization
- Requires cooperation of sender and receiver
[Diagram: Process 0 asks "May I Send?"; Process 1 replies "Yes"; the data then flows from Process 0 to Process 1, with time running downward]

Message-Passing Abstraction
[Diagram: process P executes Send(X, Q, t) while process Q executes Receive(Y, P, t); the matched pair copies address X in P's local address space to address Y in Q's]
- Send specifies the buffer to be transmitted and the receiving process
- Recv specifies the sending process and the application storage to receive into
- Memory-to-memory copy, but processes must be named
- Optional tag on send and matching rule on receive
- The user process names local data and entities in process/tag space
- In the simplest form, the send/recv match achieves a pairwise synchronization event (other variants exist too)
- Many overheads: copying, buffer management, protection

Space-Time Diagram of a Message-Passing Program

MPI – Message Passing Library
MPI is a standard, not an implementation. Popular implementations are LAM and MPICH. MPICH is installed under /usr/local/mpich.
Always put in the code: #include "mpi.h"
Compilation: mpicc –o filename file.c
Execution: mpirun –np N filename

MPI Naming Conventions
MPI_Xxxxx(parameter, ...)
Example: MPI_Init(&argc, &argv)

The First 4 Functions of MPI
MPI_Init
MPI_Finalize
MPI_Comm_size
MPI_Comm_rank

The First 4 Functions' Syntax
int MPI_Init(int *argc, char ***argv)
int MPI_Finalize(void)
int MPI_Comm_size(MPI_Comm comm, int *size)
int MPI_Comm_rank(MPI_Comm comm, int *rank)
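As a quick illustration (a minimal sketch, not taken from the original slides), a typical MPI program brackets all of its work between these four calls:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int size, rank;

    MPI_Init(&argc, &argv);               /* must precede any other MPI call */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id: 0..size-1 */

    printf("Process %d of %d is alive\n", rank, size);

    MPI_Finalize();                       /* no MPI calls may follow this */
    return 0;
}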

MPI Communicator
A communicator is a handle representing a group of processes that can communicate with one another. The communicator name is required as an argument to all point-to-point and collective operations. The communicator specified in the send and receive calls must agree for communication to take place. Processes can communicate only if they share a communicator.

Basic Point-to-Point Functions
MPI_Send
MPI_Recv
MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm);
MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status);
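To make the matching rule concrete, here is a minimal two-process sketch (not from the original slides; the payload 42 and tag 0 are arbitrary choices): process 0 sends one integer to process 1, which receives and prints it. Run it with mpirun -np 2.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                        /* arbitrary payload */
        /* dest = 1, tag = 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* source, tag and communicator must match the send */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("Process 1 received %d from process 0\n", value);
    }

    MPI_Finalize();
    return 0;
}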

Basic Collective Functions
MPI_Bcast
MPI_Reduce
The exact syntax:
MPI_Bcast(void *buf, int count, MPI_Datatype datatype, int root, MPI_Comm comm);
MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm);
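To see the two collectives together, here is a minimal sketch (not from the original slides): the root broadcasts an integer to all processes, and a reduction then sums every process's rank back onto the root. The value 100 is an arbitrary choice.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, size, n = 0, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0)
        n = 100;                           /* arbitrary value, set only on the root */

    /* after the broadcast, every process (root included) holds n */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* sum all ranks onto process 0: 0 + 1 + ... + (size-1) */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("n = %d, sum of ranks = %d\n", n, sum);

    MPI_Finalize();
    return 0;
}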

MPI Datatypes (MPI datatype – corresponding C type)
MPI_CHAR – signed char
MPI_SHORT – signed short int
MPI_INT – signed int
MPI_LONG – signed long int
MPI_UNSIGNED_CHAR – unsigned char
MPI_UNSIGNED_SHORT – unsigned short int
MPI_UNSIGNED – unsigned int
MPI_UNSIGNED_LONG – unsigned long int
MPI_FLOAT – float
MPI_DOUBLE – double
MPI_LONG_DOUBLE – long double
MPI_BYTE – (none)
MPI_PACKED – (none)

MPI Example: Mandelbrot
MPICH running on Windows NT (dual Celeron 400 MHz), compiled with Visual C++ 6.

Homework #1: MPI Tutorial
Four-up Postscript for Tutorial on MPI (tutorial4.ps)

Exercise 2
Running a short MPI program: a parallel Hello_World.
Write a program in which each computer says hello and reports its process number within the run:
Hello world from process 1 of 2

Solution to Exercise 2 – 1/3

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("Hello world from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

Solution to Exercise 2 – 2/3: The Makefile (recipe lines must be indented with a tab)

helloworld: helloworld.c
	mpicc -o helloworld helloworld.c

clean:
	/bin/rm -f helloworld *.o

Build with: % make

Hello World – Execution
% mpicc -o helloworld helloworld.c
% mpirun -np 4 helloworld
Hello world from process 0 of 4
Hello world from process 3 of 4
Hello world from process 1 of 4
Hello world from process 2 of 4
%

Exercise 3: Computing π
We compute π by integration: integrate the function f(x) = 4/(1+x²) from 0 to 1, dividing the interval into n parts. (Since the integral of 4/(1+x²) from 0 to 1 equals 4·arctan(1) = π, the numerical sum approximates π.)

Solution to Exercise 3
In the solution we used a timing function called MPI_Wtime().
See an example solution under: /usr/local/mpich/examples/basic/cpi.c

Solution to Exercise 3

#include <stdio.h>
#include <math.h>
#include "mpi.h"

double f(double a)
{
    return (4.0 / (1.0 + a*a));
}

int main(int argc, char *argv[])
{
    int done = 0, n, myid, numprocs, i;
    double PI25DT = 3.141592653589793238462643;
    double mypi, pi, h, sum, x;
    double startwtime, endwtime;
    int namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

Solution to Exercise 3 (continued)

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Get_processor_name(processor_name, &namelen);
    fprintf(stderr, "Process %d on %s\n", myid, processor_name);
    fflush(stderr);
    n = 0;

Solution to Exercise 3 (continued)

    while (!done) {
        if (myid == 0) {
            printf("Enter the number of intervals: (0 quits) ");
            fflush(stdout);
            scanf("%d", &n);
            startwtime = MPI_Wtime();
        }
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (n == 0)
            done = 1;
        else {

Solution to Exercise 3 (continued)

            h = 1.0 / (double) n;
            sum = 0.0;
            for (i = myid + 1; i <= n; i += numprocs) {
                x = h * ((double)i - 0.5);
                sum += f(x);
            }
            mypi = h * sum;
            MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

Solution to Exercise 3 (continued)

            if (myid == 0) {
                printf("pi is approximately %.16f, Error is %.16f\n",
                       pi, fabs(pi - PI25DT));
                endwtime = MPI_Wtime();
                printf("wall clock time = %f\n", endwtime - startwtime);
            }
        } /* end of else */
    } /* end of while */
    MPI_Finalize();
    return 0;
} /* end of main */

Assignment No. 1
The work is to be done in pairs.
The exercise is to be submitted within two weeks – the last day for submission is Monday, 12/11/01.
The program must be documented.
Although the exercise is not graded, submission is mandatory!

Assignment No. 1: Integration by the Trapezoid Rule
Write a parallel computer program that performs integration using the trapezoid rule.
Use the point-to-point message-passing commands learned in class: MPI_Send and MPI_Recv.
Process No. 0 will receive from the user the integration range and the number of trapezoids: for example, from a to b with n trapezoids.

Assignment No. 1 (continued)
Process No. 0 will collect the partial results sent by the other processes and print the final result.
In addition, print the integration limits and the number of trapezoids, n.
Run the program 3 times, for three values of n: n = 100, 1000, 10000.
Attach the output of the runs.

Assignment No. 1 (continued)
The function on which the integration is to be performed is a third-degree polynomial.
Compare with the analytic result.

Assignment No. 1 (continued)
The integration limits are: a = Min(ID1, ID2)/ and b = Max(ID1, ID2)/ where ID1 and ID2 are the ID numbers of those submitting, including the check digit.
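As a starting point, here is a minimal sketch of the structure such a program might take (not from the original slides, and not a complete solution): f(x) below is a placeholder cubic, and a, b, n are hard-coded placeholders. In the actual assignment, process 0 must read them from the user, derive a and b from the ID numbers as specified above, and distribute them with MPI_Send/MPI_Recv.

#include <stdio.h>
#include "mpi.h"

/* placeholder cubic - substitute the polynomial from the assignment */
double f(double x)
{
    return x * x * x;
}

int main(int argc, char **argv)
{
    int rank, size, i;
    int n = 1000;                          /* placeholder number of trapezoids */
    double a = 0.0, b = 1.0;               /* placeholder integration limits */
    double h, local_sum = 0.0, tmp, total;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    h = (b - a) / n;
    /* trapezoid rule: each process handles a strided subset of the n trapezoids */
    for (i = rank; i < n; i += size)
        local_sum += 0.5 * h * (f(a + i * h) + f(a + (i + 1) * h));

    if (rank != 0) {
        /* workers send their partial result to process 0 */
        MPI_Send(&local_sum, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    } else {
        /* process 0 collects the partial results and prints the final answer */
        total = local_sum;
        for (i = 1; i < size; i++) {
            MPI_Recv(&tmp, 1, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &status);
            total += tmp;
        }
        printf("a = %f, b = %f, n = %d, integral = %f\n", a, b, n, total);
    }

    MPI_Finalize();
    return 0;
}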