Computer Science 320 Introduction to Cluster Computing.

SMP (shared memory multiprocessor) architecture vs. cluster architecture

Cluster Middleware for Parallel Java

A Cluster Program in Parallel Java

static Comm world;
static int size;
static int rank;
static long x;
static long t1, t2, t3;

public static void main(String[] args) throws Exception {
    t1 = System.currentTimeMillis();
    Comm.init(args);
    world = Comm.world();
    size = world.size();
    rank = world.rank();
    x = Long.parseLong(args[rank]);
    isPrime(x);
    // Output running time of rank here.
}

$ java -Dpj.np=4 Program1Clu /
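The slide leaves isPrime and the timing output unspecified; a minimal sketch of what they might look like (the trial-division test and the print format are assumptions, not from the slide):

// Hypothetical completion, assumed rather than taken from the slide.
static boolean isPrime(long n) {
    if (n < 2) return false;
    for (long d = 2; d * d <= n; ++d) {
        if (n % d == 0) return false;   // found a divisor, so not prime
    }
    return true;
}

// At the end of main(): report this rank's running time.
boolean prime = isPrime(x);
t2 = System.currentTimeMillis();
System.out.println("Rank " + rank + ": " + x +
    (prime ? " is prime" : " is not prime") + " (" + (t2 - t1) + " msec)");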

Communicators and Messages
Cluster processes communicate by sending messages over the backend network.
A communicator is an abstraction of a medium for sending and receiving messages.

A PJ World

Comm.init(args);
Comm world = Comm.world();
int size = world.size();
int rank = world.rank();

$ java -Dpj.np=4 Program1Clu /
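Assembled into a complete program, those four calls look like this; HelloClu is a made-up class name, and the printed line is just an illustration:

import edu.rit.pj.Comm;

public class HelloClu {
    public static void main(String[] args) throws Exception {
        Comm.init(args);             // set up PJ's message passing layer
        Comm world = Comm.world();   // communicator spanning all processes
        int size = world.size();     // number of processes (4 with -Dpj.np=4)
        int rank = world.rank();     // this process's ID, 0 through size - 1
        System.out.println("Hello from rank " + rank + " of " + size);
    }
}

Running it with -Dpj.np=4 prints four lines, one per process, in nondeterministic order.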

Types of Communication
Point-to-point: transfers data from one process to another
Collective: transfers data among all processes in the communicator

Sending and Receiving

world.send(toRank, buffer)
CommStatus status = world.receive(fromRank, buffer)

A buffer is an abstraction of a data source: it can be a number, an array, a portion of an array, a matrix, a portion of a matrix, or other items.
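As a sketch of how such buffers are typically obtained (the factory methods below come from PJ's edu.rit.mp package; treat the exact names as assumptions if your version differs):

import edu.rit.mp.LongBuf;
import edu.rit.mp.buf.LongItemBuf;
import edu.rit.util.Range;

long[] data = new long[100];
LongBuf whole = LongBuf.buffer(data);                        // the entire array
LongBuf slice = LongBuf.sliceBuffer(data, new Range(0, 49)); // elements 0..49 only
LongItemBuf one = LongBuf.buffer();                          // a single long; read/write one.item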

Sending and Receiving
The master-worker pattern is built from these send and receive calls, as in the sketch below.
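A minimal sketch of that pattern, assuming the LongItemBuf buffers shown above; the task framing (squaring a number) is invented for illustration:

if (rank == 0) {
    // Master: hand one task to each worker, then collect the answers.
    for (int w = 1; w < size; ++w) {
        LongItemBuf task = LongBuf.buffer();
        task.item = w;                 // the "task" here is just a number
        world.send(w, task);
    }
    for (int w = 1; w < size; ++w) {
        LongItemBuf result = LongBuf.buffer();
        world.receive(w, result);      // wait for worker w's answer
        System.out.println("Worker " + w + " returned " + result.item);
    }
} else {
    // Worker: receive a task, compute, send the result back to rank 0.
    LongItemBuf task = LongBuf.buffer();
    world.receive(0, task);
    task.item = task.item * task.item; // stand-in for real work
    world.send(0, task);
}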

Wildcard Receive

CommStatus status = world.receive(null, buffer)

Passing null for the rank means you don't need to know which process is sending; the receive matches a message from any rank.
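For example, a master can collect results in whatever order the workers finish (fromRank is the CommStatus field naming the sender, assumed from the PJ API):

LongItemBuf msg = LongBuf.buffer();
CommStatus status = world.receive(null, msg);  // match a message from any rank
int sender = status.fromRank;                  // now we know who sent it
System.out.println("Got " + msg.item + " from rank " + sender);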

Nonblocking Send

CommRequest request = new CommRequest();
world.send(toRank, buffer, request);
// Other processing goes here
request.waitForFinish();

The send proceeds in another thread while the call returns immediately; the caller does other work, then blocks in waitForFinish() until the send completes.

Nonblocking Receive

CommRequest request = new CommRequest();
world.receive(fromRank, buffer, request);
// Other processing goes here
CommStatus status = request.waitForFinish();

The receive proceeds in another thread while the call returns immediately; the caller does other work, then blocks in waitForFinish() until the receive completes and returns its CommStatus.
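Putting the two together, the usual idiom is to post the transfer, overlap it with local work, and block only when the data is actually needed; doLocalWork() is a hypothetical placeholder:

CommRequest request = new CommRequest();
world.receive(fromRank, buffer, request);    // post the receive; returns at once
doLocalWork();                               // hypothetical useful computation in the meantime
CommStatus status = request.waitForFinish(); // block here, not before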

Send-Receive

CommStatus status = world.sendReceive(toRank, srcBuffer, fromRank, dstBuffer);

Performs a send and a receive in a single call; a nonblocking version taking a CommRequest works like the ones above.
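A typical use is a ring shift, where each process sends to its right neighbor while receiving from its left; a sketch, with buffers set up as in the earlier slides:

int right = (rank + 1) % size;         // neighbor to send to
int left = (rank - 1 + size) % size;   // neighbor to receive from
CommStatus status = world.sendReceive(right, srcBuffer, left, dstBuffer);
// One call does both transfers, so the ring cannot deadlock the way
// mismatched blocking send/receive pairs can.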