Introduction to Parallel Processing
Working on the Educational Cluster “hobbit”
Guy Tel-Zur

Logging in
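(Title-only slide in the transcript.) Access to a cluster like this is normally via ssh to the head node, e.g. ssh <username>@<head-node>; both are placeholders here, since the actual login host was not recorded in the transcript.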

Basic Linux Commands
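(Title-only slide in the transcript.) An introduction of this kind typically covers commands such as ls, cd, pwd, mkdir, cp, mv, rm, cat, less, and man; this particular list is an assumption, not the original slide content.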

Our First MPI program
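The program listing was not captured in the transcript. A minimal sketch in C that would produce the "Hello world from process i of n" output shown on a later slide looks like this (the file name hellow.c is assumed from the binary name ./hellow used below):

/* hellow.c - minimal MPI "hello world" (a sketch; matches the
   output format shown on the "Different MPI packages" slide) */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("Hello world from process %d of %d\n", rank, size);
    MPI_Finalize();                        /* shut down the MPI runtime */
    return 0;
}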

Compiling and Executing
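The slide body was not captured. On an MPICH2 cluster the typical sequence (assuming the source file is hellow.c) is to compile with the mpicc wrapper and launch with mpiexec:

mpicc hellow.c -o hellow
mpiexec -machinefile ./machinefile -np 4 ./hellow

The -machinefile option names the hosts to run on and -np sets the number of processes, exactly as in the session on the next slide.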

Cluster Monitoring: http://hobbit5.ee.bgu.ac.il

Different MPI packages

You can always try "mpirun -h" to see which MPI implementation and which options are installed. With MPICH2's MPD process manager, first start the daemon, then launch the job with mpiexec:

telzur@gtz2:~/mpi> mpd &
telzur@gtz2:~/mpi> mpiexec -machinefile ./machinefile -np 4 ./hellow
Hello world from process 0 of 4
Hello world from process 1 of 4
Hello world from process 3 of 4
Hello world from process 2 of 4
telzur@gtz2:~/mpi>

Note that the four ranks do not print in order; the interleaving of their output is nondeterministic.

cpilog

cpilog is the MPE-instrumented version of the classic pi-computation example that ships with MPICH2. Discuss this program with the students.
Ref: /home/telzur/mpi/mpe/cpilog.c
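For reference, the numerical core of cpilog is the classic midpoint-rule estimate of pi as the integral of 4/(1+x*x) over [0,1], with the partial sums combined by MPI_Reduce. The sketch below shows that core under illustrative variable names; the MPE logging calls that bracket each phase in the real cpilog.c are omitted:

/* Sketch of the numerical core of cpilog (illustrative names; the MPE
   logging calls are omitted). Estimates pi = integral of 4/(1+x*x)
   over [0,1] by the midpoint rule. */
#include <stdio.h>
#include <math.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    const double PI25DT = 3.141592653589793238462643; /* reference value */
    int myid, numprocs, i;
    int n = 1000000;   /* number of intervals (illustrative) */
    double h, sum, x, mypi, pi;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    h = 1.0 / (double)n;
    sum = 0.0;
    /* each rank integrates a strided subset of the n intervals */
    for (i = myid + 1; i <= n; i += numprocs) {
        x = h * ((double)i - 0.5);      /* midpoint of interval i */
        sum += 4.0 / (1.0 + x * x);
    }
    mypi = h * sum;

    /* combine the partial sums on rank 0 */
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (myid == 0)
        printf("pi is approximately %.16f, Error is %.16f\n",
               pi, fabs(pi - PI25DT));

    MPI_Finalize();
    return 0;
}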

Profiling: a demo from my laptop

telzur@gtz2:~/mpi/mpe> ~/mpich2-install/bin/mpirun -machinefile ../machinefile -np 4 ./cpilog
Process 0 running on gtz2
Process 2 running on gtz2
Process 1 running on gtz2
Process 3 running on gtz2
pi is approximately 3.1415926535899028, Error is 0.0000000000001097
wall clock time = 0.106649
Writing logfile....
Enabling the Default clock synchronization...
Finished writing logfile ./cpilog.clog2.
telzur@gtz2:~/mpi/mpe>

Because cpilog is linked against the MPE logging library, the run ends by writing a trace file, ./cpilog.clog2, alongside the numerical result.

Profiling: log file conversion

Jumpshot reads the SLOG-2 format, so the CLOG-2 trace must first be converted with clog2TOslog2:

telzur@gtz2:~/mpi/mpe> clog2TOslog2 ./cpilog.clog2
GUI_LIBDIR is set. GUI_LIBDIR = /usr/local/lib
SLOG-2 Header:
version = SLOG 2.0.6
NumOfChildrenPerNode = 2
TreeLeafByteSize = 65536
MaxTreeDepth = 0
MaxBufferByteSize = 6010
Categories is FBinfo(635 @ 6118)
MethodDefs is FBinfo(0 @ 0)
LineIDMaps is FBinfo(232 @ 6753)
TreeRoot is FBinfo(6010 @ 108)
TreeDir is FBinfo(38 @ 6985)
Annotations is FBinfo(0 @ 0)
Postamble is FBinfo(0 @ 0)
Number of Drawables = 204
Number of Unmatched Events = 0
Total ByteSize of the logfile = 14168
timeElapsed between 1 & 2 = 38 msec
timeElapsed between 2 & 3 = 139 msec

Starting Jumpshot-4

telzur@gtz2:~/mpi/mpe> ~/mpich2-install/bin/jumpshot ./cpilog.slog2

Jumpshot-4 opens the converted trace and displays a timeline of the MPI states and messages recorded during the run.

Parallel Debugger