Slide 1
Definitions
Speed-up (see the formulas after this list)
Efficiency
Cost
Diameter
Dilation
Deadlock
Embedding
Scalability
Big Oh notation
Latency Hiding
Termination problem
Bernstein's conditions
Embarrassingly parallel
Monte Carlo Method
Synchronization
–Local
–Global (butterfly, counter, tree)
Load balancing
–Dynamic
–Semi-dynamic
–Static
Pipelining
Work Pool
Detached thread
Monitor
DSM
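For the first three terms, the standard textbook formulas, as a quick sketch: write t_1 for the best sequential time and t_P for the time on P processors (the same notation as the speedup slide below).

\[
S(P) = \frac{t_1}{t_P},
\qquad
E(P) = \frac{S(P)}{P} = \frac{t_1}{P\,t_P},
\qquad
\text{Cost} = P \cdot t_P
\]

An algorithm is cost-optimal when P·t_P stays within a constant factor of t_1.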
Slide 2
Types of General Questions
Difference between
–Definitions on slide 1
–Static and dynamic load balancing
–Local and global synchronization
–Shared memory vs. distributed memory
–Process vs. thread
–Monitor vs. semaphore
–Partitioning and divide-and-conquer algorithms
–Blocking and non-blocking
–Centralized and distributed load balancing
Techniques for mutual exclusion
Characteristics of shared memory programming techniques
Three types of algorithms that are suitable for a pipelining approach, with examples
Three factors to be considered when estimating speed-up
Examples of using local/global synchronization
What would need to be considered when generating random numbers on multiple processors
Termination techniques (acknowledgement and ring algorithms)
Methods for implementing barriers (see the counter-barrier sketch after this list)
Which techniques apply to which algorithms
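For the barriers item, a minimal sketch of a counter barrier built from a Pthreads mutex and condition variable; the names barrier_t and barrier_wait and the generation field are illustrative, not from the slides.

#include <pthread.h>

/* Counter barrier: threads block until all nthreads have arrived.
   The generation counter makes the barrier safely reusable. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  all_here;
    int count;       /* threads arrived in the current generation */
    int nthreads;    /* total threads participating */
    int generation;  /* bumped each time the barrier opens */
} barrier_t;

void barrier_init(barrier_t *b, int nthreads) {
    pthread_mutex_init(&b->lock, NULL);
    pthread_cond_init(&b->all_here, NULL);
    b->count = 0;
    b->nthreads = nthreads;
    b->generation = 0;
}

void barrier_wait(barrier_t *b) {
    pthread_mutex_lock(&b->lock);
    int my_gen = b->generation;
    if (++b->count == b->nthreads) {
        /* Last arrival: release everyone and reset for reuse. */
        b->count = 0;
        b->generation++;
        pthread_cond_broadcast(&b->all_here);
    } else {
        /* Wait until the last thread of this generation arrives. */
        while (my_gen == b->generation)
            pthread_cond_wait(&b->all_here, &b->lock);
    }
    pthread_mutex_unlock(&b->lock);
}

The generation counter is what allows reuse: a thread leaves the wait loop only after the last arrival of its own generation has broadcast, so a fast thread cannot race ahead into the next barrier episode.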
Slide 3
Analysis of Parallel Algorithms
Gustafson's and Amdahl's laws (recalled in the formulas after this list)
Compute speedup (t_1/t_P) in terms of
–data set size (n)
–number of processors (P)
–processing speed (comp), communication speed (comm_s), and communication latency (comm_l)
The speedup problem will use a relatively simple problem based on one of the following parallel programming techniques:
–Pipeline
–Divide and conquer
–Partition
Compare the expected speed-up of the algorithms considered in class. An example question could ask which algorithm would show better speed-up and why.
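For reference, the two laws in their usual form, with f the serial fraction of the work and P the number of processors. Amdahl's law holds the problem size fixed as P grows; Gustafson's law instead scales the problem with P.

\[
S_{\text{Amdahl}}(P) = \frac{1}{f + (1-f)/P},
\qquad
S_{\text{Gustafson}}(P) = f + (1-f)\,P
\]

One common model of the parallel time, assuming comm_s is a transfer rate and the processes exchange m-word messages (this decomposition is an assumption consistent with the slide's notation, not taken from it):

\[
t_P \approx \frac{t_{\text{comp}}(n)}{P} + \mathit{comm}_l + \frac{m}{\mathit{comm}_s},
\qquad
S(P) = \frac{t_1}{t_P}
\]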
Slide 4
Message Passing Programming
Understand the purpose and use of the MPI calls in the appendix
Generate parallel C code that manipulates an array of numbers in some way, using one of the following techniques (a scatter/gather sketch follows this list):
–Broadcast
–Scatter/Gather
–Blocking sends/receives
–Non-blocking sends/receives
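A minimal sketch of the scatter/gather variant: the root distributes equal slices of an array, every process doubles its slice, and the root gathers the result. The array size N and the doubling step are illustrative, and the code assumes N is divisible by the number of processes.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define N 16   /* total array size; assumed divisible by the process count */

int main(int argc, char *argv[]) {
    int rank, size;
    int *data = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = N / size;
    int *local = malloc(chunk * sizeof(int));

    if (rank == 0) {                       /* root fills the full array */
        data = malloc(N * sizeof(int));
        for (int i = 0; i < N; i++) data[i] = i;
    }

    /* Distribute one chunk of the array to every process. */
    MPI_Scatter(data, chunk, MPI_INT, local, chunk, MPI_INT, 0, MPI_COMM_WORLD);

    for (int i = 0; i < chunk; i++)        /* each process works on its chunk */
        local[i] *= 2;

    /* Collect the modified chunks back onto the root, in rank order. */
    MPI_Gather(local, chunk, MPI_INT, data, chunk, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < N; i++) printf("%d ", data[i]);
        printf("\n");
        free(data);
    }
    free(local);
    MPI_Finalize();
    return 0;
}

In a typical MPI installation this builds with mpicc and runs with, e.g., mpirun -np 4 a.out.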
Slide 5
Shared Memory Programming
Create a simple parallel loop using each of the following (an OpenMP sketch follows this list):
–forall or par programming language constructs
–Unix processes
–Java threads
–OpenMP
–Pthreads
The characteristics of each shared memory paradigm covered in class
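As one instance, a minimal OpenMP sketch of such a loop; the arrays a and b and the loop body are illustrative.

#include <stdio.h>
#include <omp.h>

#define N 1000

int main(void) {
    static double a[N], b[N];

    for (int i = 0; i < N; i++)
        b[i] = (double)i;

    /* Iterations are independent (Bernstein's conditions hold), so
       OpenMP may split them across threads with no extra
       synchronization; an implicit barrier ends the loop. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * b[i];

    printf("a[%d] = %.1f\n", N - 1, a[N - 1]);
    return 0;
}

Compile with an OpenMP-capable compiler, e.g. gcc -fopenmp.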
Slide 6
Parallel Algorithm Pseudo-code
Pseudo-code or example of one of the following (a bucket sort sketch follows this list):
–Game of Life or Sharks and Fish
–Sorting
–N-body algorithm
–Mandelbrot set with static and dynamic allocation
–Solving systems of linear equations
–Heat distribution problem
–Global barriers (butterfly, counter, or tree)
–Bucket sort
–Sieve of Eratosthenes
–Moore's shortest path algorithm
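Taking bucket sort as the example, a sequential C sketch with comments marking the phases that parallelize; the key range, bucket count, and sample data are illustrative assumptions.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N        16     /* number of keys (illustrative) */
#define NBUCKETS 4      /* one bucket per process in a parallel version */
#define MAXKEY   100    /* keys assumed uniform in [0, MAXKEY) */

int cmp(const void *x, const void *y) {
    return *(const int *)x - *(const int *)y;
}

int main(void) {
    int a[N] = {53, 7, 88, 21, 64, 2, 99, 45, 13, 70, 31, 58, 5, 91, 26, 77};
    int bucket[NBUCKETS][N], count[NBUCKETS];
    memset(count, 0, sizeof(count));

    /* Phase 1: distribute keys into buckets by range. In parallel,
       each process scans its slice of a[] and sends keys to the
       process that owns the destination bucket. */
    for (int i = 0; i < N; i++) {
        int b = a[i] * NBUCKETS / MAXKEY;
        bucket[b][count[b]++] = a[i];
    }

    /* Phase 2: sort each bucket; buckets are independent, so in
       parallel this phase needs no interaction at all. */
    for (int b = 0; b < NBUCKETS; b++)
        qsort(bucket[b], count[b], sizeof(int), cmp);

    /* Phase 3: concatenate buckets in order to get the sorted array. */
    for (int b = 0, k = 0; b < NBUCKETS; b++)
        for (int i = 0; i < count[b]; i++)
            a[k++] = bucket[b][i];

    for (int i = 0; i < N; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}

In the parallel version each process owns one bucket: phase 1 becomes an all-to-all exchange of keys, and phases 2 and 3 run without interaction until the final gather.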