CS4402 – Parallel Computing
Lecture 1:
- Classification of Parallel Computers
- Classification of Parallel Computation
- Important Laws of Parallel Computation
How I used to make breakfast…
How I set the family to work…
How I finally got to the office in time…
What is Parallel Computing?
In the simplest sense, parallel computing is the simultaneous use of multiple computing resources to solve a problem.
Parallel computing is the solution for "Grand Challenge Problems":
- weather and climate
- biology, the human genome
- chemical and nuclear reactions
Parallel computing is a necessity for some commercial applications:
- parallel databases, data mining
- computer-aided diagnosis in medicine
Ultimately, parallel computing is an attempt to minimize time.
Grand Challenge Problems
List of Supercomputers
Find this information at http://www.top500.org/
Reason 1: Speedup
Reason 2: Economy
Resources are often already available:
- taking advantage of non-local resources
- cost savings: using multiple "cheap" computing resources instead of paying for time on a supercomputer
A parallel system is cheaper than a better processor, whose development runs into:
- transmission speeds
- limits to miniaturization
- economic limitations
Reason 3: Scalability
Types of Parallel Computers
Parallel computers are classified by hardware and by software:
- Hardware: shared memory, distributed memory, hybrid memory
- Software: SIMD, MIMD
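As a rough illustration of the hardware split, here is a minimal Python sketch (the data, worker functions, and sizes are invented for illustration): threads share one address space and can write the same list directly, while processes have separate address spaces and must return results as explicit messages, as on a distributed-memory machine.

```python
import threading
import multiprocessing as mp

data = [0] * 4  # visible to all threads: one shared address space

def thread_worker(i):
    data[i] = i * i               # direct write into shared memory

def process_worker(i, conn):
    conn.send(i * i)              # explicit message: no shared address space
    conn.close()

if __name__ == "__main__":
    # Shared memory: threads read and write the same list directly.
    threads = [threading.Thread(target=thread_worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("shared memory:     ", data)

    # Distributed memory: each process gets its own copy of `data`,
    # so results come back only via messages (here, a pipe).
    results = []
    for i in range(4):
        parent_end, child_end = mp.Pipe()
        p = mp.Process(target=process_worker, args=(i, child_end))
        p.start()
        results.append(parent_end.recv())
        p.join()
    print("distributed memory:", results)
```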
The Banking Analogy
- Tellers: parallel processors
- Customers: tasks
- Transactions: operations
- Accounts: data
Vector/Array
- Each teller/processor gets a very fine-grained task
- Uses pipeline parallelism
- Good for handling batches when operations can be broken down into fine-grained stages
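A minimal sketch of pipeline parallelism in Python (the stage operations and queue wiring are invented for illustration): each fine-grained stage runs in its own thread, so while one stage works on item k, the stage before it is already working on item k+1.

```python
import threading
import queue

SENTINEL = None  # marks the end of the batch

def stage(inbox, outbox, op):
    """One pipeline stage: repeatedly take an item, apply op, pass it on."""
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)  # propagate shutdown downstream
            return
        outbox.put(op(item))

if __name__ == "__main__":
    q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
    # Two fine-grained stages: double, then add one.
    threading.Thread(target=stage, args=(q1, q2, lambda x: x * 2)).start()
    threading.Thread(target=stage, args=(q2, q3, lambda x: x + 1)).start()
    for x in range(5):
        q1.put(x)                 # feed the batch into the pipeline
    q1.put(SENTINEL)
    while (item := q3.get()) is not SENTINEL:
        print(item)               # 1, 3, 5, 7, 9
```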
SIMD (Single-Instruction, Multiple-Data)
- All processors do the same thing or idle
- Phase 1: data partitioning and distribution
- Phase 2: data-parallel processing
- Efficient for big, regular data sets
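The two phases map naturally onto a data-parallel map. A minimal Python sketch (the data, worker function, and chunk size are arbitrary choices for illustration):

```python
from multiprocessing import Pool

def square(x):
    # Every worker runs this same function on its own part of the data:
    # a single "instruction" applied to multiple data items.
    return x * x

if __name__ == "__main__":
    data = list(range(16))
    with Pool(processes=4) as pool:
        # Phase 1: map partitions `data` into chunks and distributes them.
        # Phase 2: each worker applies `square` to its chunk in parallel.
        results = pool.map(square, data, chunksize=4)
    print(results)
```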
Systolic Array
- Combination of SIMD and pipeline parallelism
- 2-D array of processors with memory at the boundary
- Tighter coordination between processors
- Achieves very high speeds by circulating data among processors before returning it to memory
MIMD (Multiple-Instruction, Multiple-Data)
- Each processor (teller) operates independently
- Needs a synchronization mechanism:
  - by message passing
  - or by mutual exclusion (locks)
- Best suited for large-grained problems
- Coarser-grained than data-flow parallelism
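A minimal Python sketch of one of the synchronization mechanisms above, mutual exclusion (the counter and iteration counts are invented for illustration): the threads run independently, MIMD-style, and coordinate only where they touch shared state.

```python
import threading

counter = 0
lock = threading.Lock()           # mutual-exclusion synchronization

def worker(n_increments):
    global counter
    for _ in range(n_increments):
        with lock:                # without the lock, updates can be lost
            counter += 1

if __name__ == "__main__":
    threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)                # always 40000 when the lock is held
```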
Important Laws of Parallel Computing
Amdahl's Law
If a fraction f of a computation is inherently serial and the remaining fraction 1 − f is perfectly parallelizable, the speedup on n processors is

S(n) = 1 / (f + (1 − f)/n)
Important Consequences
- f = 0 (no serial part): S(n) = n, perfect speedup.
- f = 1 (everything is serial): S(n) = 1, no parallel code.
Important Consequences
- S(n) increases as n increases.
- S(n) decreases as f increases.
Important Consequences
No matter how many processors are used, the speedup cannot rise above 1/f.
Examples:
- f = 5%: S(n) < 20
- f = 10%: S(n) < 10
- f = 20%: S(n) < 5
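A quick numerical check of this bound, assuming the S(n) = 1/(f + (1 − f)/n) form of Amdahl's Law given above:

```python
def amdahl_speedup(f, n):
    """Speedup on n processors when a fraction f of the work is serial."""
    return 1.0 / (f + (1.0 - f) / n)

if __name__ == "__main__":
    for f in (0.05, 0.10, 0.20):
        # As n grows, the speedup approaches, but never reaches, 1/f.
        print(f"f = {f:.0%}: S(1024) = {amdahl_speedup(f, 1024):.2f} < 1/f = {1 / f:.0f}")
```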
Gustafson's Law
Instead of fixing the problem size, fix the parallel run time. With s and p the serial and parallel fractions of the run time on the parallel machine, where s + p = 1, the scaled speedup is

S(n) = s + p·n = n − (n − 1)·s

Important Consequences:
1) S(n) increases as n increases.
2) S(n) decreases as s increases.
3) There is no upper bound for the speedup.
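A quick numerical check, assuming the scaled-speedup form above: unlike Amdahl's 1/f ceiling, Gustafson's speedup keeps growing with n.

```python
def gustafson_speedup(s, n):
    """Scaled speedup with serial fraction s of the parallel run time."""
    return s + (1.0 - s) * n      # equivalently: n - (n - 1) * s

if __name__ == "__main__":
    for n in (10, 100, 1000):
        # With s = 5%, the speedup grows without bound as n grows.
        print(f"n = {n}: S(n) = {gustafson_speedup(0.05, n):.2f}")
```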
To read:
1. John L. Gustafson, "Re-evaluating Amdahl's Law", http://www.scl.ameslab.gov/Publications/Gus/AmdahlsLaw/Amdahls.html
2. Yuan Shi, "Re-evaluating Amdahl's and Gustafson's Laws", http://www.cis.temple.edu/~shi/docs/amdahl/amdahl.html
3. Wilkinson's book:
   - sections on the laws of parallel computing
   - sections about types of parallel machines and computation