IBM Research © 2006 IBM Corporation. CDT Static Analysis Features. CDT Developer Summit - Ottawa, Beth, September 20, 2006.

Presentation transcript:

IBM Research © 2006 IBM Corporation CDT Static Analysis Features. CDT Developer Summit - Ottawa, Beth, September 20, 2006. This work has been supported in part by the Defense Advanced Research Projects Agency (DARPA) under contract No. NBCH

IBM Research © 2006 IBM Corporation 2 The Problem • Static analysis of C programs is useful • The existing Abstract Syntax Tree (AST) in Eclipse CDT provides basic navigation and information, but richer program representations are needed for deeper analysis

IBM Research © 2006 IBM Corporation 3 CDT AST Extensions • Enhance the existing CASTNode and Visitor: –bottom-up traversal • Add additional graphs: –Call Graph –Control Flow Graph –Data Dependence Graph • Traversal of these new graphs is available in: –Topological Order –Reverse Topological Order (a traversal sketch follows below)
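The slides list the traversal orders but not the traversal code itself, so the following is only a minimal, self-contained Java sketch of how a topological (and, by reversing it, reverse-topological) visit over one of these graphs could be implemented; the GraphNode class and all method names here are illustrative assumptions, not the actual CDT/PTP graph interfaces.

    import java.util.*;

    // Hypothetical stand-in for a call-graph or control-flow-graph node (illustrative only).
    class GraphNode {
        final String label;
        final List<GraphNode> successors = new ArrayList<>();
        GraphNode(String label) { this.label = label; }
    }

    class TopologicalTraversal {
        // Depth-first post-order; reversing it gives a topological order for an acyclic graph.
        static List<GraphNode> topologicalOrder(GraphNode entry) {
            List<GraphNode> postOrder = new ArrayList<>();
            dfs(entry, new HashSet<>(), postOrder);
            Collections.reverse(postOrder);
            return postOrder;
        }

        private static void dfs(GraphNode n, Set<GraphNode> visited, List<GraphNode> out) {
            if (!visited.add(n)) return;
            for (GraphNode s : n.successors) dfs(s, visited, out);
            out.add(n);  // emitted only after all successors: post-order
        }

        public static void main(String[] args) {
            GraphNode entry = new GraphNode("entry"), thenB = new GraphNode("then"),
                      elseB = new GraphNode("else"), exit = new GraphNode("exit");
            entry.successors.add(thenB); entry.successors.add(elseB);
            thenB.successors.add(exit);  elseB.successors.add(exit);

            List<GraphNode> topo = topologicalOrder(entry);
            for (GraphNode n : topo) System.out.print(n.label + " ");  // topological order
            System.out.println();
            Collections.reverse(topo);                                  // reverse topological order
            for (GraphNode n : topo) System.out.print(n.label + " ");
            System.out.println();
        }
    }

Reverse topological order, the second order listed on the slide, is simply the same list walked backwards.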

IBM Research © 2006 IBM Corporation 4 Bottom-up AST traversal org.eclipse.cdt.core.dom.ast.ASTVisitor org.eclipse.cdt.core.dom.ast.c.CASTVisitor
Existing: public int visit(IASTxxx ...) { return PROCESS_CONTINUE; }
New: public int leave(IASTxxx ...) { return PROCESS_CONTINUE; }
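To make the visit/leave pair more concrete, here is a hedged Java sketch of a CDT DOM visitor that is called top-down via visit() and bottom-up via leave(). The field and constant names (shouldVisitDeclarations, PROCESS_CONTINUE) follow the org.eclipse.cdt.core.dom.ast.ASTVisitor API as I understand it; on the CDT release these slides target, the leave() callbacks are exactly the extension being proposed, so the exact signatures should be checked against the CDT version in use.

    import org.eclipse.cdt.core.dom.ast.ASTVisitor;
    import org.eclipse.cdt.core.dom.ast.IASTDeclaration;
    import org.eclipse.cdt.core.dom.ast.IASTFunctionDefinition;
    import org.eclipse.cdt.core.dom.ast.IASTTranslationUnit;

    // Visitor notified on the way down (visit) and, with the bottom-up extension, on the way back up (leave).
    public class FunctionCountingVisitor extends ASTVisitor {
        private int functionCount = 0;

        public FunctionCountingVisitor() {
            shouldVisitDeclarations = true;   // opt in to declaration callbacks
        }

        @Override
        public int visit(IASTDeclaration declaration) {
            if (declaration instanceof IASTFunctionDefinition) {
                functionCount++;              // top-down: children not yet visited
            }
            return PROCESS_CONTINUE;          // keep descending
        }

        // No @Override here: leave() is the new bottom-up callback this slide proposes.
        public int leave(IASTDeclaration declaration) {
            // Bottom-up: all children of this declaration have already been visited.
            return PROCESS_CONTINUE;
        }

        // Typical use, assuming tu was obtained from the CDT parser or index:
        public static int countFunctions(IASTTranslationUnit tu) {
            FunctionCountingVisitor v = new FunctionCountingVisitor();
            tu.accept(v);
            return v.functionCount;
        }
    }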

IBM Research © 2006 IBM Corporation 5 Call Graph [diagram: call graph with nodes main, a, foo, gee, kei] Recursive calls detected: a cycle is detected on foo, gee and kei. Sample program:

#include "mpi.h"
#include "stdio.h"

void foo(int x);
void gee(int x);
void kei(int x);

void foo(int x) { x++; gee(x); }
void gee(int x) { x *= 3; kei(x); }
void kei(int x) { x = x % 10; foo(x); }
void a(int x) { x--; }

int main3(int argc, char* argv[]) {
    int x = 0;
    foo(x);
    a(x);
}
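The slide reports the foo, gee, kei cycle found in this sample. As an illustration of how such recursion can be detected on a call graph, here is a small, self-contained Java sketch (my own example over a plain map, not the actual CDT call-graph classes) that walks the graph depth-first and reports any function already on the current call path.

    import java.util.*;

    // Minimal stand-in for a call graph: each function name maps to the functions it calls.
    // Node names mirror the sample program on this slide; the data structure itself is illustrative.
    public class CallCycleFinder {
        public static void main(String[] args) {
            Map<String, List<String>> calls = new HashMap<>();
            calls.put("main3", Arrays.asList("foo", "a"));
            calls.put("foo", Arrays.asList("gee"));
            calls.put("gee", Arrays.asList("kei"));
            calls.put("kei", Arrays.asList("foo"));
            calls.put("a", Collections.emptyList());

            findCycles("main3", calls, new ArrayDeque<>(), new HashSet<>());
        }

        // DFS that keeps the current call path; seeing a function already on the path means recursion.
        static void findCycles(String fn, Map<String, List<String>> calls,
                               Deque<String> path, Set<String> done) {
            if (path.contains(fn)) {
                System.out.println("Cycle detected: " + path + " -> " + fn);
                return;
            }
            if (!done.add(fn)) return;   // already fully explored from another path
            path.addLast(fn);
            for (String callee : calls.getOrDefault(fn, Collections.emptyList())) {
                findCycles(callee, calls, path, done);
            }
            path.removeLast();
        }
    }

Running this on the sample program's graph prints the cycle through foo, gee, and kei, matching the recursion note on the slide.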

IBM Research © 2006 IBM Corporation 6 Control Flow Graph & Data Flow Dependence Graph – sample program

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "mpi.h"

// Sample MPI program
int main(int argc, char* argv[]) {
    printf("Hello MPI World the original.\n");
    int my_rank;              /* rank of process */
    int p;                    /* number of processes */
    int source;               /* rank of sender */
    int dest;                 /* rank of receiver */
    int tag = 0;              /* tag for messages */
    char message[100], *tmp;  /* storage for message */
    MPI_Status status;        /* return status for receive */
    int *array;

    array = (int *)malloc(sizeof(int) * 10);
    /* start up MPI */
    MPI_Init(&argc, &argv);
    /* find out process rank */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    /* find out number of processes */
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    MPI_Barrier(MPI_COMM_WORLD);
    if (my_rank != 0) {
        /* create message */
        sprintf(message, "Greetings from process %d!", my_rank);
        dest = 0;
        /* use strlen+1 so that '\0' gets transmitted */
        MPI_Send(message, strlen(message)+1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
        MPI_Barrier(MPI_COMM_WORLD);
    } else {
        printf("From process 0: Num processes: %d\n", p);
        for (source = 1; source < p; source++) {
            MPI_Recv(message, 100, MPI_CHAR, source, tag, MPI_COMM_WORLD, &status);
            printf("%s\n", message);
        }
        MPI_Barrier(MPI_COMM_WORLD);
    }
    /* shut down MPI */
    MPI_Finalize();
    free(array);
    return 0;
}

IBM Research © 2006 IBM Corporation 7 Control Flow Graph [diagram: control flow graph of the sample program, from entry through the declarations, malloc, MPI_Init, MPI_Comm_rank, MPI_Comm_size, and the first MPI_Barrier, branching on my_rank != 0 into the MPI_Send path and the MPI_Recv for-loop (source = 1; source < p; source++), rejoining before MPI_Finalize, return 0, and exit]

IBM Research © 2006 IBM Corporation 8 Data Dependence Graph (DDG) [diagram: the same nodes as the control flow graph, with control-flow and data-flow edges distinguished in a legend] Work in Progress (graph not complete)

IBM Research © 2006 IBM Corporation 9 Summary • MPI Barrier Analysis uses these structures • Is this valuable as an addition to CDT? • Other future plans: –Parallel Tools Platform (PTP) Analysis: static and dynamic analysis of MPI, OpenMP, and LAPI programs for detection of common errors –Code refactorings for performance optimization, e.g. refactoring for improved computation/communication overlap
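As a rough illustration of what "MPI Barrier Analysis uses these structures" means in practice (this is my own simplified sketch, not the IBM algorithm, which matches barriers inter-procedurally over the real graphs), the idea is to compare the number of MPI_Barrier calls reached along the different control-flow branches of a conditional and warn when they differ, since mismatched barriers can deadlock at runtime. The CfgNode class and node labels below are hand-built stand-ins for the actual control flow graph.

    import java.util.*;

    // Illustrative CFG node: a label plus successors; "MPI_Barrier" labels mark barrier calls.
    class CfgNode {
        final String label;
        final List<CfgNode> succ = new ArrayList<>();
        CfgNode(String label) { this.label = label; }
    }

    public class BarrierCheck {
        // Count barriers along a straight-line path (no nested branching), purely for illustration.
        static int countBarriers(CfgNode start, CfgNode join) {
            int n = 0;
            for (CfgNode cur = start; cur != join; cur = cur.succ.get(0)) {
                if (cur.label.equals("MPI_Barrier")) n++;
            }
            return n;
        }

        public static void main(String[] args) {
            CfgNode join = new CfgNode("join");
            // then-branch: sprintf -> MPI_Send -> MPI_Barrier -> join
            CfgNode thenB = new CfgNode("sprintf");
            CfgNode send  = new CfgNode("MPI_Send");
            CfgNode bar1  = new CfgNode("MPI_Barrier");
            thenB.succ.add(send); send.succ.add(bar1); bar1.succ.add(join);
            // else-branch: printf -> MPI_Recv (loop simplified away) -> join, barrier forgotten
            CfgNode elseB = new CfgNode("printf");
            CfgNode recv  = new CfgNode("MPI_Recv");
            elseB.succ.add(recv); recv.succ.add(join);

            int a = countBarriers(thenB, join);
            int b = countBarriers(elseB, join);
            if (a != b) {
                System.out.println("Potential barrier mismatch: " + a + " vs " + b
                        + " MPI_Barrier calls on the two branches");
            }
        }
    }

A real analysis would of course run over the CDT call graph and control flow graph described on the earlier slides rather than a hand-built toy graph.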