Parallel Programming & Cluster Computing:
N-Body Simulation and Collective Communications

Henry Neeman, University of Oklahoma
Paul Gray, University of Northern Iowa

SC08 Education Program's Workshop on Parallel & Cluster Computing
Oklahoma Supercomputing Symposium, Monday October

N Bodies

N-Body Problems
An N-body problem is a problem involving N “bodies” – that is, particles (e.g., stars, atoms) – each of which exerts a force on all of the others. For example, if you have N stars, then each of the N stars exerts a force (gravity) on all of the other N–1 stars. Likewise, if you have N atoms, then every atom exerts a force (electromagnetic) on all of the other N–1 atoms.

1-Body Problem
When N is 1, you have a simple 1-body problem: a single particle, with no forces acting on it. Given the particle’s position P and velocity V at some time t0, you can trivially calculate the particle’s position at time t0 + Δt:
P(t0 + Δt) = P(t0) + V Δt
V(t0 + Δt) = V(t0)
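
As a quick sketch of that update rule in code (a 1-D toy, with an illustrative body struct that is not from the slides):

    typedef struct { double p; double v; } body_t;   /* position and velocity */

    /* Advance a force-free body by one time step dt:
       the position moves by v*dt, the velocity is unchanged. */
    void advance_free_body(body_t *b, double dt) {
        b->p += b->v * dt;
    }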

2-Body Problem
When N is 2, you have – surprise! – a 2-body problem: exactly 2 particles, each exerting a force that acts on the other. The relationship between the 2 particles can be expressed as a differential equation that can be solved analytically, producing a closed-form solution. So, given the particles’ initial positions and velocities, you can trivially calculate their positions and velocities at any later time.

N-Body Problems (N ≥ 3)
For N of 3 or more, no one knows how to solve the equations to get a closed-form solution. So, numerical simulation is pretty much the only way to study groups of 3 or more bodies. Popular applications of N-body codes include: astronomy (e.g., galaxy formation, cosmology); chemistry (e.g., protein folding, molecular dynamics). Note that, for N bodies, there are on the order of N² forces, denoted O(N²).

N Bodies
Force #1 … Force #N–1
(figure slides: the forces between body A and each of the other N–1 bodies, added one at a time)

N-Body Problems
Given N bodies, each body exerts a force on all of the other N–1 bodies. Therefore, there are N·(N–1) forces in total. You can also think of this as N·(N–1)/2 forces, in the sense that the force from particle A on particle B is the same (except in the opposite direction) as the force from particle B on particle A.

Aside: Big-O Notation
Let’s say that you have some task to perform on a certain number of things, and that the task takes a certain amount of time to complete. Let’s say that the amount of time can be expressed as a polynomial in the number of things to perform the task on. For example, the amount of time it takes to read a book might be proportional to the number of words, plus the amount of time it takes to settle into your favorite easy chair:
C1 · N + C2

Big-O: Dropping the Low Term
C1 · N + C2
When N is very large, the time spent settling into your easy chair becomes such a small proportion of the total time that it’s virtually zero. So, from a practical perspective, for large N, the polynomial reduces to:
C1 · N
In fact, for any polynomial, if N is large, then all of the terms except the highest-order term are irrelevant.
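
For example (with made-up numbers): if C1 = 0.5 seconds per word and C2 = 60 seconds to get settled, then a 100,000-word book takes 50,000 + 60 = 50,060 seconds, and the C2 term contributes only about 0.1% of the total.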

Big-O: Dropping the Constant
C1 · N
Computers get faster and faster all the time, and there are many different flavors of computers, having many different speeds. So, computer scientists don’t care about the constant, only about the order of the highest-order term of the polynomial. They indicate this with Big-O notation: O(N). This is often said as: “of order N.”

N-Body Problems
Given N bodies, each body exerts a force on all of the other N–1 bodies. Therefore, there are N·(N–1) forces total. In Big-O notation, that’s O(N²) forces, so calculating the forces takes O(N²) time to execute. But there are only N particles, each taking up the same amount of memory, so we say that N-body codes are of:
O(N) spatial complexity (memory)
O(N²) time complexity

O(N²) Forces
Note that this picture shows only the forces between A and everyone else.

How to Calculate?
Whatever your physics is, you have some function, F(A,B), that expresses the force between two bodies A and B. For example, for stars and galaxies,
F(A,B) = G · mA · mB / dist(A,B)²
where G is the gravitational constant and m is the mass of the body in question. If you have all of the forces for every pair of particles, then you can calculate their sum, obtaining the total force on every particle. From that, you can calculate every particle’s new position and velocity.
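
Here is a minimal serial sketch of that O(N²) force loop (1-D positions, an illustrative Body struct and softening constant; none of these names come from the slides):

    #define N_BODIES 1000
    typedef struct { double mass, pos, force; } Body;

    /* Accumulate the pairwise gravitational forces on every body (1-D toy). */
    void compute_forces(Body body[], double G) {
        for (int i = 0; i < N_BODIES; i++) body[i].force = 0.0;
        for (int i = 0; i < N_BODIES; i++) {
            for (int j = i + 1; j < N_BODIES; j++) {      /* each pair once */
                double d     = body[j].pos - body[i].pos;
                double dist2 = d * d + 1.0e-9;            /* softening: avoid divide-by-zero */
                double f     = G * body[i].mass * body[j].mass / dist2;
                double dir   = (d >= 0.0) ? 1.0 : -1.0;
                body[i].force += f * dir;                 /* equal and opposite */
                body[j].force -= f * dir;
            }
        }
    }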

How to Parallelize?
Okay, so let’s say you have a nice serial (single-CPU) code that does an N-body calculation. How are you going to parallelize it? You could:
have a server feed particles to processes;
have a server feed interactions to processes;
have each process decide on its own subset of the particles, and then share around the forces;
have each process decide on its own subset of the interactions, and then share around the forces.

Do You Need a Master?
Let’s say that you have N bodies, and therefore you have ½ N (N – 1) interactions (every particle interacts with all of the others, but you don’t need to calculate both A→B and B→A). Do you need a server? Well, can each processor determine, on its own, either (a) which of the bodies to process, or (b) which of the interactions to process? If the answer is yes, then you don’t need a server.
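
For example, each MPI process can work out its own contiguous block of bodies from nothing but its rank and the process count, with no server involved. A sketch (assuming num_bodies is the total number of bodies and MPI has already been initialized; the variable names are illustrative):

    int rank, np;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    /* Split num_bodies into np nearly equal blocks; the first
       (num_bodies % np) ranks get one extra body each. */
    int base     = num_bodies / np;
    int extra    = num_bodies % np;
    int my_count = base + (rank < extra ? 1 : 0);
    int my_first = rank * base + (rank < extra ? rank : extra);
    int my_last  = my_first + my_count - 1;   /* inclusive */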

Parallelize How?
Suppose you have Np processors. Should you parallelize:
by assigning a subset of N / Np of the bodies to each processor, OR
by assigning a subset of ½ N (N – 1) / Np of the interactions to each processor?

Data vs. Task Parallelism
Data parallelism means parallelizing by giving a subset of the data to each process, and then each process performs the same tasks on the different subsets of data. Task parallelism means parallelizing by giving a subset of the tasks to each process, and then each process performs a different subset of tasks on the same data.

Data Parallelism for N-Body?
If you parallelize an N-body code by data, then each processor gets N / Np pieces of data. For example, if you have 8 bodies and 2 processors, then:
P0 gets the first 4 bodies;
P1 gets the second 4 bodies.
But, every piece of data (i.e., every body) has to interact with every other piece of data, to calculate the forces. So, every processor will have to send all of its data to all of the other processors, for every single interaction that it calculates. That’s a lot of communication!

Task Parallelism for N-Body?
If you parallelize an N-body code by task, then each processor gets all of the pieces of data that describe the particles (e.g., positions, velocities). Then, each processor can calculate its subset of the interaction forces on its own, without talking to any of the other processors. But, at the end of the force calculations, everyone has to share all of the forces that have been calculated, so that each particle ends up with the total force that acts on it (a global reduction).

MPI_Reduce
Here’s the syntax for MPI_Reduce:
    MPI_Reduce(sendbuffer, recvbuffer, count, datatype,
               operation, root, communicator);
For example, to do a sum over all of the particle forces:
    MPI_Reduce(local_particle_force_sum,
               global_particle_force_sum,
               number_of_particles,
               MPI_DOUBLE, MPI_SUM,
               server_process, MPI_COMM_WORLD);
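
Here is a minimal, self-contained sketch of MPI_Reduce in action (the local “forces” are dummy values, just to exercise the call; in a real N-body code they would come from the force loop):

    #include <mpi.h>
    #include <stdio.h>

    #define NUM_PARTICLES 4

    int main(int argc, char **argv) {
        double local_force[NUM_PARTICLES], total_force[NUM_PARTICLES];
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Dummy per-process partial forces: each process contributes its rank. */
        for (int i = 0; i < NUM_PARTICLES; i++) local_force[i] = (double)rank;

        /* Element-wise sum across all processes; only the root (rank 0) gets the result. */
        MPI_Reduce(local_force, total_force, NUM_PARTICLES,
                   MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            for (int i = 0; i < NUM_PARTICLES; i++)
                printf("total_force[%d] = %g\n", i, total_force[i]);

        MPI_Finalize();
        return 0;
    }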

Sharing the Result
In the N-body case, we don’t want just one processor to know the result of the sum; we want every processor to know. So, we could do a reduce followed immediately by a broadcast. But MPI gives us a routine that packages all of that for us: MPI_Allreduce. MPI_Allreduce is just like MPI_Reduce, except that every process gets the result (so we drop the server_process argument).

MPI_Allreduce
Here’s the syntax for MPI_Allreduce:
    MPI_Allreduce(sendbuffer, recvbuffer, count,
                  datatype, operation, communicator);
For example, to do a sum over all of the particle forces:
    MPI_Allreduce(local_particle_force_sum,
                  global_particle_force_sum,
                  number_of_particles,
                  MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
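
In the MPI_Reduce sketch above, switching to MPI_Allreduce changes only the one call (the root argument disappears), and afterwards every rank holds the same totals:

    /* Every process receives the element-wise sums in total_force. */
    MPI_Allreduce(local_force, total_force, NUM_PARTICLES,
                  MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);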

Collective Communications
A collective communication is a communication that is shared among many processes, not just a sender and a receiver. MPI_Reduce and MPI_Allreduce are collective communications. Others include: broadcast, gather/scatter, and all-to-all.

Collectives Are Expensive
Collective communications are very expensive relative to point-to-point communications, because so much more communication has to happen. But they can be much cheaper than doing zillions of point-to-point communications, if that’s the alternative.

To Learn More

Thanks for your attention! Questions?