
Parallel & Cluster Computing: Overview of Parallelism. Henry Neeman, Director, OU Supercomputing Center for Education & Research, University of Oklahoma. SC08 Education Program’s Workshop on Parallel & Cluster Computing, August 2008.

Okla. Supercomputing Symposium 2008, Tue Oct 7 @ OU
2003 Keynote: Peter Freeman, NSF Computer & Information Science & Engineering Assistant Director
2004 Keynote: Sangtae Kim, NSF Shared Cyberinfrastructure Division Director
2005 Keynote: Walt Brooks, NASA Advanced Supercomputing Division Director
2006 Keynote: Dan Atkins, Head of NSF’s Office of Cyberinfrastructure
2007 Keynote: Jay Boisseau, Director, Texas Advanced Computing Center, U. Texas Austin
2008 Keynote: José Munoz, Deputy Office Director / Senior Scientific Advisor, Office of Cyberinfrastructure, National Science Foundation
FREE! Parallel Computing Workshop, Mon Oct 6 @ OU, sponsored by SC08. FREE! Symposium, Tue Oct 7 @ OU.
Over 250 registrations already! Over 150 in the first day, over 200 in the first week, over 225 in the first month.

Outline
Parallelism
The Jigsaw Puzzle Analogy for Parallelism
The Desert Islands Analogy for Distributed Parallelism
Parallelism Issues

Parallelism

Parallelism Less fish … More fish! Parallelism means doing multiple things at the same time: you can get more work done in the same time.

What Is Parallelism? Parallelism is the use of multiple processors to solve a problem, and in particular the use of multiple processors working concurrently on different parts of a problem. The different parts could be different tasks, or the same task on different pieces of the problem’s data.

Why Parallelism Is Good The Trees: We like parallelism because, as the number of processing units working on a problem grows, we can solve the same problem in less time. The Forest: We like parallelism because, as the number of processing units working on a problem grows, we can solve bigger problems.

Kinds of Parallelism
Shared Memory Multithreading
Distributed Memory Multiprocessing
Hybrid Shared/Distributed Parallelism
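As a minimal sketch (mine, not from the original slides), here is the shared memory multithreading flavor in C with OpenMP: several threads apply the same task to different pieces of one shared array. The array name, its length, and the fill formula are made up for illustration.

    #include <stdio.h>

    #define ARRAY_LENGTH 1000000

    static float array[ARRAY_LENGTH];   /* one array, shared by all threads */

    int main(void) {
        /* Each thread gets a different chunk of the loop iterations,
           so different pieces of the same shared array are filled concurrently. */
        #pragma omp parallel for
        for (int index = 0; index < ARRAY_LENGTH; index++) {
            array[index] = 2.0f * index;
        }
        printf("last element = %f\n", array[ARRAY_LENGTH - 1]);
        return 0;
    }

Compile with an OpenMP-capable compiler (for example, gcc -fopenmp); without that flag the pragma is ignored and the loop simply runs serially. Distributed memory multiprocessing splits the data the same way, but each process owns only its own piece and shares values by passing messages, as in the Desert Islands analogy later in this talk.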

The Jigsaw Puzzle Analogy

Serial Computing Suppose you want to do a jigsaw puzzle that has, say, a thousand pieces. We can imagine that it’ll take you a certain amount of time. Let’s say that you can put the puzzle together in an hour.

Shared Memory Parallelism If Paul sits across the table from you, then he can work on his half of the puzzle and you can work on yours. Once in a while, you’ll both reach into the pile of pieces at the same time (you’ll contend for the same resource), which will cause a little bit of slowdown. And from time to time you’ll have to work together (communicate) at the interface between his half and yours. The speedup will be nearly 2-to-1: y’all might take 35 minutes instead of the ideal 30.

The More the Merrier? Now let’s put Charlie and Scott on the other two sides of the table. Each of you can work on a part of the puzzle, but there’ll be a lot more contention for the shared resource (the pile of puzzle pieces) and a lot more communication at the interfaces. So y’all will get noticeably less than a 4-to-1 speedup, but you’ll still have an improvement, maybe something like 3-to-1: the four of you can get it done in 20 minutes instead of an hour.

Diminishing Returns If we now put Rebecca and Jen and Alisa and Darlene on the corners of the table, there’s going to be a whole lot of contention for the shared resource, and a lot of communication at the many interfaces. So the speedup y’all get will be much less than we’d like; you’ll be lucky to get 5-to-1. So we can see that adding more and more workers onto a shared resource eventually yields diminishing returns.

Distributed Parallelism Now let’s try something a little different. Let’s set up two tables, and let’s put you at one of them and Paul at the other. Let’s put half of the puzzle pieces on your table and the other half of the pieces on Paul’s. Now y’all can work completely independently, without any contention for a shared resource. BUT, the cost of communicating is MUCH higher (you have to scootch your tables together), and you need the ability to split up (decompose) the puzzle pieces reasonably evenly, which may be tricky to do for some puzzles.

More Distributed Processors It’s a lot easier to add more processors in distributed parallelism. But, you always have to be aware of the need to decompose the problem and to communicate between the processors. Also, as you add more processors, it may be harder to load balance the amount of work that each processor gets.

Load Balancing Load balancing means giving everyone roughly the same amount of work to do. For example, if the jigsaw puzzle is half grass and half sky, then you can do the grass and Julie can do the sky, and then y’all only have to communicate at the horizon – and the amount of work that each of you does on your own is roughly equal. So you’ll get pretty good speedup.

Load Balancing Is Good When every processor gets the same amount of work, the job is load balanced. We like load balancing, because it means that our speedup can potentially be linear: if we run on N_p processors, it takes 1/N_p as much time as on one processor. For some codes, figuring out how to balance the load is trivial (e.g., breaking a big unchanging array into subarrays, as sketched below). For others, load balancing is very tricky (e.g., a dynamically evolving collection of arbitrarily many blocks of arbitrary size).
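Here is a minimal sketch (mine, not the workshop’s) of that trivial case: splitting a big array as evenly as possible across processors, so that no processor gets more than one extra element. The names array_length and num_procs are made up for illustration.

    #include <stdio.h>

    int main(void) {
        const int array_length = 1000;  /* size of the big unchanging array */
        const int num_procs    = 8;     /* number of processors sharing the work */

        for (int rank = 0; rank < num_procs; rank++) {
            /* Give the first (array_length % num_procs) processors one extra
               element, so the leftover work is spread out rather than dumped
               on a single processor. */
            int base   = array_length / num_procs;
            int extra  = array_length % num_procs;
            int length = base + (rank < extra ? 1 : 0);
            int start  = rank * base + (rank < extra ? rank : extra);
            printf("processor %d gets elements %d..%d (%d of them)\n",
                   rank, start, start + length - 1, length);
        }
        return 0;
    }

Every processor’s share differs from every other’s by at most one element, which is about as load balanced as a static decomposition can get.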

Load Balancing Load balancing can be easy, if the problem splits up into chunks of roughly equal size, with one chunk per processor (the EASY example on the slide). Or load balancing can be very hard (the HARD example on the slide).

Distributed Multiprocessing: The Desert Islands Analogy

An Island Hut Imagine you’re on an island in a little hut. Inside the hut is a desk. On the desk is: a phone; a pencil; a calculator; a piece of paper with numbers; a piece of paper with instructions.

Instructions The instructions are split into two kinds.
Arithmetic/Logical, e.g.:
Add the 27th number to the 239th number
Compare the 96th number to the 118th number to see whether they are equal
Communication, e.g.:
Dial a given phone number and leave a voicemail containing the 962nd number
Call your voicemail box, collect a voicemail from a given phone number, and put that number in the 715th slot

Is There Anybody Out There? If you’re in a hut on an island, you aren’t specifically aware of anyone else. Especially, you don’t know whether anyone else is working on the same problem as you are, and you don’t know who’s at the other end of the phone line. All you know is what to do with the voicemails you get, and what phone numbers to send voicemails to.

Someone Might Be Out There Now suppose that Paul is on another island somewhere, in the same kind of hut, with the same kind of equipment. Suppose that he has the same list of instructions as you, but a different set of numbers (both data and phone numbers). Like you, he doesn’t know whether there’s anyone else working on his problem.

Even More People Out There Now suppose that Charlie and Scott are also in huts on islands. Suppose that each of the four has the exact same list of instructions, but different lists of numbers. And suppose that the phone numbers that people call are each others’. That is, your instructions have you call Paul, Charlie and Scott, Paul’s has him call Charlie, Scott and you, and so on. Then you might all be working together on the same problem.

All Data Are Private Notice that you can’t see Paul’s or Charlie’s or Scott’s numbers, nor can they see yours or each other’s. Thus, everyone’s numbers are private: there’s no way for anyone to share numbers, except by leaving them in voicemails.

Long Distance Calls: 2 Costs
When you make a long distance phone call, you typically have to pay two costs:
Connection charge: the fixed cost of connecting your phone to someone else’s, even if you’re only connected for a second
Per-minute charge: the cost per minute of talking, once you’re connected
If the connection charge is large, then you want to make as few calls as possible.
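The phone bill maps directly onto the standard linear cost model for sending a message of n bytes (a textbook model I am adding here, not a formula from these slides); the next slide names the two terms latency and bandwidth:

\[
T_{\text{message}}(n) \;\approx\;
\underbrace{t_{\text{latency}}}_{\text{connection charge}}
\;+\;
\underbrace{\frac{n}{\text{bandwidth}}}_{\text{per-minute charge}}
\]

Just as with a large connection charge, when latency dominates you want to send a few big messages rather than many small ones.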

Like Desert Islands Distributed parallelism is very much like the Desert Islands analogy: Processors are independent of each other. All data are private. Processes communicate by passing messages (like voicemails). The cost of passing a message is split into the latency (connection time) and the bandwidth (which sets the time per byte).
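A minimal message-passing sketch in C with MPI, in the spirit of the voicemail analogy (illustrative code I am adding, not code from the workshop): each process has its own private copy of number, and the only way to share a value is to send it as a message.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank;
        int number = 0;   /* private to each process, like the numbers in each hut */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Process 0 "leaves a voicemail" for process 1. */
            number = 962;
            MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Process 1 "collects the voicemail" and stores it in its own slot. */
            MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("process 1 received %d\n", number);
        }

        MPI_Finalize();
        return 0;
    }

Run with at least two processes (for example, mpirun -np 2 ./a.out, assuming an MPI implementation is installed). Neither process can read the other’s number directly; the message is the only channel, exactly as on the islands.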

Latency vs Bandwidth on topdawg We recently tested the Infiniband interconnect on OU’s large Linux cluster. Latency – the time for the first bit to show up at the destination – is about 3 microseconds; Bandwidth – the speed of the subsequent bits – is about 5 Gigabits per second. Thus, on topdawg’s Infiniband: the 1st bit of a message shows up in 3 microsec; the 2nd bit shows up in 0.2 nanosec. So latency is 15,000 times worse than bandwidth!
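The factor of 15,000 follows directly from the two numbers quoted on the slide: at 5 Gbit/s each additional bit takes 0.2 ns, so

\[
\frac{t_{\text{latency}}}{t_{\text{per bit}}}
\;=\; \frac{3~\mu\text{s}}{1 / (5~\text{Gbit/s})}
\;=\; \frac{3 \times 10^{-6}~\text{s}}{0.2 \times 10^{-9}~\text{s}}
\;=\; 15{,}000.
\]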

Latency is 15,000 times worse than bandwidth! That’s like having a long distance service that charges $150 to make a call and 1¢ per minute – after the first 10 days of the call.

Parallelism Issues

Speedup The goal in parallelism is linear speedup: getting the speed of the job to increase by a factor equal to the number of processors. Very few programs actually exhibit linear speedup, but some come close.
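Written out (a standard definition rather than one spelled out on the slide), speedup on N_p processors is the one-processor time divided by the N_p-processor time, and linear speedup means that ratio equals N_p:

\[
S(N_p) \;=\; \frac{T(1)}{T(N_p)},
\qquad
\text{linear speedup} \;\Longleftrightarrow\; S(N_p) = N_p.
\]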

Scalability Scalable means “performs just as well regardless of how big the problem is.” A scalable code has near linear speedup. (The slide shows a benchmark chart: Platinum = NCSA 1024-processor PIII/1GHz Linux Cluster. Note: NCSA Origin timings are scaled from 19x19x53 domains.)

Strong vs Weak Scalability
Strong Scalability: If you double the number of processors, but you keep the problem size constant, then the problem takes half as long to complete (i.e., the speed doubles).
Weak Scalability: If you double the number of processors, and double the problem size, then the problem takes the same amount of time to complete (i.e., the speed doubles).
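In symbols (my notation, with T(N_p, W) the time to do an amount of work W on N_p processors), the two definitions on the slide read:

\[
\text{strong scaling:}\quad T(2N_p,\,W) \;\approx\; \tfrac{1}{2}\,T(N_p,\,W),
\qquad
\text{weak scaling:}\quad T(2N_p,\,2W) \;\approx\; T(N_p,\,W).
\]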

Scalability This benchmark shows weak scalability. (Benchmark chart: Platinum = NCSA 1024-processor PIII/1GHz Linux Cluster; NCSA Origin timings are scaled from 19x19x53 domains.)

Granularity Granularity is the size of the subproblem that each process works on, and in particular the size that it works on between communicating or synchronizing with the others. Some codes are coarse grain (a few very big parallel parts) and some are fine grain (many little parallel parts). Usually, coarse grain codes are more scalable than fine grain codes, because less time is spent managing the parallelism, so more is spent getting the work done.

Parallel Overhead
Parallelism isn’t free. Behind the scenes, the compiler and the hardware have to do a lot of overhead work to make parallelism happen. The overhead typically includes:
Managing the multiple processes
Communication among processes
Synchronization (described later)
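One standard back-of-the-envelope way to see why this overhead matters (my formalization, not a formula from the slides): even if the useful work divides perfectly across the processors, the overhead term keeps the efficiency below 1, and it hurts fine-grain codes most because they pay it more often per unit of useful work:

\[
T(N_p) \;\approx\; \frac{T(1)}{N_p} + T_{\text{overhead}}(N_p),
\qquad
E(N_p) \;=\; \frac{T(1)}{N_p\,T(N_p)} \;\le\; 1.
\]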


To Learn More

Thanks for your attention! Questions?