
Parallel Systems Lab 1 Dr. Guy Tel-Zur

Lesson goals
Connecting to the two main platforms of the course:
– the virtual Linux machine in the lab
– the parallel cluster
Practicing basic tasks under the Linux operating system
Running basic parallel programs that use MPI
Using the Allinea DDT parallel debugger and the Profiler

Objectives
Getting everyone up to speed on Linux
A first introduction to MPI:
– code development
– compilation
– debugging
– running
– analyzing the results

Basic Linux Commands – 1/5
Enter the system: login: username, password: passwd
Exit the system: exit
Text editors: pico, vi, (x)Emacs, gedit… or edit on Windows, then transfer the file using ftp
C compiler: gcc –o file file.c

Basic Linux Commands – 2/5 (Linux / DOS)
ls, ls -l / dir – see files
cp / copy – copy files
rm / del – erase files

Basic Linux Commands – 3/5 (Linux / DOS)
mkdir – make directory
rmdir – remove directory
uname -a / ver – OS version
mv / rename – move/rename

Basic Linux Commands – 4/5
Getting help: man topic
Look at the contents of a file: cat, more, head and tail
Quit from man or more: q
Where am I? pwd
Clear the screen: clear

Basic Linux Commands – 5/5
Redirection: >, >>
Pipe: |
telnet, ftp, ping, chmod, chown

Linux FAQ

The vi Editor
ESC – puts you in command mode
h, j, k, l – left, down, up, right (or use the arrow keys)
w, W, b, B – forward, backward by word
0, $ – first, last position of current line
/pattern – search forward for pattern
?pattern – search backward for pattern
n, N – repeat last search in same, opposite direction
x – delete character
dd – delete current line
D – delete to end of line
dw – delete word
p, P – put deleted text after, before cursor
u – undo last command
. – repeat the last command
i, a – insert text before, after cursor [puts you into INPUT MODE]
o, O – open new line for text below, above cursor [puts you into INPUT MODE]
ZZ – save file and quit
:w – save file
:q! – quit without saving changes

vi reference card Download and print: irror/vi-ref.pdf

Vi for Windows

Vi for Windows

nano

Other text editors
Vi, Vim
Pico
Emacs/Xemacs
Nedit (very friendly)
Eclipse IDE

Our Parallel Cluster: gathering information Kernel version: uname –a CPU information: more /proc/cpuinfo Memory Information: more /proc/meminfo

Protecting privacy – file access permissions

Connecting to a remote node
Secured: SSH
SSH client from: PuTTY: ty/
Please download PuTTY!

PuTTY

PuTTY + X Windows
Install Xming and the Xming fonts

Download links Putty: – Xming: – ng

Exercise 1
Connect to one of the workstations using ssh
Write a short computer program, e.g. Hello World
Compile it: gcc –o hello_world hello_world.c
Run the program and save the output: % ./hello_world > hello.txt
Check the output with: more hello.txt

The GNU compiler gcc filename.c – Will produce an executable “a.out” gcc –o runme filename.c – Will produce an executable “runme” Optimization: gcc –O3 –o runme filename.c gcc –c filename.c will produce an object file “filename.o”

Solution to Exercise 1 – 1/3

Solution to Exercise 1 – 2/3

Solution to Exercise 1 – 3/3

MPI-Quick reference card 1/2

MPI-Quick reference card 2/2

MPI Quick Reference Card:

What is message passing?
Data transfer through messaging
Requires cooperation between a sender and a receiver
[Diagram: Process 0 asks Process 1 "May I Send?"; Process 1 answers "Yes"; the data is then transferred. Time advances downward.]

Point to Point: Basic Send/Receive
Process 1: x; Send(&x,2);
Process 2: y; Recv(&y,1);
[Diagram: moving the data from x in process 1 to y in process 2]

Space-Time Diagram of a Message-Passing Program

MPI - Message Passing Interface
MPI is a standard (an API), not an implementation
Popular implementations are LAM and MPICH
MPICH is installed under /usr/local/mpich
Always put in the code: #include "mpi.h"
Compilation: mpicc –o filename file.c
Execution: mpirun –np N filename
Help: man mpirun

MPI Naming Conventions
MPI_Xxxxx(parameter,...)
Example: MPI_Init(&argc,&argv)

The First 4 Functions of MPI MPI_Init MPI_Finalize MPI_Comm_size MPI_Comm_rank …and don’t forget the #include “mpi.h”

The First 4 Functions Syntax
int MPI_Init(int *argc, char ***argv)
int MPI_Finalize(void)
int MPI_Comm_size(MPI_Comm comm, int *size)
int MPI_Comm_rank(MPI_Comm comm, int *rank)

MPI Communicator A communicator is a handle representing a group of processes that can communicate with one another. The communicator name is required as an argument to all point-to-point and collective operations. The communicator specified in the send and receive calls must agree for communication to take place. Processes can communicate only if they share a communicator.

Basic Point to Point Functions
MPI_Send
MPI_Recv
MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm);
MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status);

MPI_Send int MPI_Send(void *buf, int count, MPI_Datatype dtype, int dest, int tag, MPI_Comm comm);

MPI Data Types (MPI data type – C type)
MPI_CHAR – signed char
MPI_SHORT – signed short int
MPI_INT – signed int
MPI_LONG – signed long int
MPI_UNSIGNED_CHAR – unsigned char
MPI_UNSIGNED_SHORT – unsigned short int
MPI_UNSIGNED – unsigned int
MPI_UNSIGNED_LONG – unsigned long int
MPI_FLOAT – float
MPI_DOUBLE – double
MPI_LONG_DOUBLE – long double
MPI_BYTE – (none)
MPI_PACKED – (none)

Exercise 2
Run a short MPI program: a parallel Hello_World
Write a program in which every machine says hello and reports its process number in the run, for example:
Hello world from process 1 of 2

Solution to Exercise 2 – 1/3
// see more examples: /usr/local/mpich/examples
#include <stdio.h>
#include "mpi.h"
int main( argc, argv )
int argc;
char **argv;
{
    int rank, size;
    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    printf( "Hello world from process %d of %d\n", rank, size );
    MPI_Finalize();
    return 0;
}

Solution to Exercise 2 – 2/3
The Makefile:
helloworld: helloworld.c
	mpicc -o helloworld helloworld.c
clean:
	/bin/rm -f helloworld *.o
Build with: % make

Solution to Exercise 2 – 3/3
Note on syntax:
int main( argc, argv )
int argc;
char **argv;
is equivalent to:
int main(int argc, char *argv[])
Then:
MPI_Init(&argc,&argv);

Exercise 2: implementation in C++

Create a "machinefile"
The up-to-date machine names:
Parallel1
Parallel2
parallel3

Hello World - Execution
% mpicc -o helloworld helloworld.c
% mpirun -np 4 helloworld
or: mpirun –machinefile ./machinefile –np 4 helloworld
Hello world from process 0 of 4
Hello world from process 3 of 4
Hello world from process 1 of 4
Hello world from process 2 of 4
%

machinefile
If no machinefile is specified, you are running on the local machine
If a machinefile does exist, you are running on the machines specified in the file

Exercise 3: computing π
Computing π by integration
We integrate the function f(x) = 4/(1+x²) from 0 to 1 by dividing the interval into n parts
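The exercise relies on a standard calculus identity, which the slide leaves implicit; spelling it out together with the midpoint-rule discretization that cpi.c uses:

```latex
\int_0^1 \frac{4}{1+x^2}\,dx
  = 4\arctan x \,\Big|_0^1
  = 4\cdot\frac{\pi}{4}
  = \pi,
\qquad
\pi \;\approx\; h \sum_{i=1}^{n} \frac{4}{1+\bigl((i-\tfrac{1}{2})h\bigr)^2},
\quad h = \frac{1}{n}.
```

In the parallel version each process accumulates only the terms i = myid+1, myid+1+numprocs, …, and MPI_Reduce sums the partial results on rank 0.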

Other methods for integrating the area of a circle
[Diagram: the circle's area divided among processes, shown for 4 processes (P0–P3) and for 8 processes (P0–P7)]

Master/Workers

Solution to Exercise 3
The solution uses a timing function called: MPI_Wtime()
See an example solution to the exercise under: /usr/local/mpich/examples/cpi.c
The program can also be downloaded from: usingmpi/examples/simplempi/cpi_c.htm
Copy this program to a location under your home directory where you have a write permission! Execute it from there.

Solution to Exercise 3
#include "mpi.h"
#include <stdio.h>
#include <math.h>
double f(double a)
{
    return (4.0 / (1.0 + a*a));
}
int main(int argc, char *argv[])
{
    int done = 0, n, myid, numprocs, i;
    double PI25DT = 3.141592653589793238462643;
    double mypi, pi, h, sum, x;
    double startwtime, endwtime;
    int namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

Solution to Exercise 3 – cont.
    MPI_Init(&argc,&argv);
    MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD,&myid);
    MPI_Get_processor_name(processor_name,&namelen);
    fprintf(stderr,"Process %d on %s\n", myid, processor_name);
    fflush(stderr);
    n = 0;

Solution to Exercise 3 – cont.
    while (!done)
    {
        if (myid == 0)
        {
            printf("Enter the number of intervals: (0 quits) ");
            fflush(stdout);
            scanf("%d",&n);
            startwtime = MPI_Wtime();
        }
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (n == 0)
            done = 1;
        else
        {

Solution to Exercise 3 – cont.
            h = 1.0 / (double) n;
            sum = 0.0;
            for (i = myid + 1; i <= n; i += numprocs)
            {
                x = h * ((double)i - 0.5);
                sum += f(x);
            }
            mypi = h * sum;
            MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

Solution to Exercise 3 – cont.: Basic Collective Functions
MPI_Bcast
MPI_Reduce
The exact syntax:
MPI_Bcast(void *buf, int count, MPI_Datatype datatype, int root, MPI_Comm comm);
MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm);

Solution to Exercise 3 – cont.: MPI_Reduce

Solution to Exercise 3 – cont.
            if (myid == 0)
            {
                printf("pi is approximately %.16f, Error is %.16f\n", pi, fabs(pi - PI25DT));
                endwtime = MPI_Wtime();
                printf("wall clock time = %f\n", endwtime-startwtime);
            }
        } /* end of if */
    } /* end of while */
    MPI_Finalize();
} /* end of main */

Running 4 processes

Running two processes on two processors

Evaluating the run's performance 1/2

Evaluating the run's performance 2/2

How to try cpi.c?
mkdir mpi
cd mpi
cp /usr/local/mpich2-1.0/share/examples_graphics/cpi.c .
mpicc –o cpi cpi.c
create a hostfile
mpirun –np 4 –machinefile ./machinefile ./cpi
Tasks:
– connect via SSH
– create a directory + edit a file and save it
– compile without MPI using GCC, and with MPI using MPICC
– run the CPI program

Practicing with Allinea DDT
On the course website at the college there is a tar file with learning material. Please download the file, unpack it, and do the exercises dealing with MPI; see the next slide.

Please open the training material found under the handouts directory

Allinea DDT tutorial Session 1- Program crash: cstartmpi Session 2 – f90, skip Session 3 – Deadlock in MPI: cpi, loop Session 4 – Memory leaks: mandel, loop Session 5 – GPU Computing, skip Session 6 – C example: matrix Session 7 – f90, skip

Eclipse PTP

Eclipse PTP SC12 tutorial: color.pdf