Parallel Processing & Distributed Systems Thoai Nam And Vu Le Hung.


1 Parallel Processing & Distributed Systems. Thoai Nam and Vu Le Hung

2 Khoa Công Nghệ Thông Tin – Đại Học Bách Khoa Tp.HCM (Faculty of Information Technology, HCMC University of Technology)
Chapter 1: Introduction
 Introduction
– What is parallel processing?
– Why do we use parallel processing?
 Applications
 Parallel Processing Terminology

3 Sequential Processing
 1 CPU
 Simple
 But what about big problems?

4 Application Demands

5 Power Computing (1)
 More powerful processors
– 50 MHz -> 100 MHz -> 1 GHz -> 2 GHz -> ... -> upper bound???
 Smarter work
– Better algorithms
 Parallel processing

6 Power Computing (2)
 Processor
– Performance of "killer micros" improves 50~60% per year
– Transistor count doubles every 3 years
– On-chip parallel processing: 100 million transistors by the early 2000s (e.g., Hyper-Threading by Intel)
– Clock rates go up only slowly: 30% per year
 Memory
– DRAM size quadruples every 3 years
 The speed gap between memory and processor keeps increasing!

7 Architectural Trend
 The greatest trend in VLSI is the increase in parallelism
– Up to 1985: bit-level parallelism: 4-bit -> 8-bit -> 16-bit -> 32-bit
– After the mid-80s: instruction-level parallelism, valuable but limited
 Coarse-grain parallelism is becoming more popular
 Today's microprocessors have multiprocessor support
 Bus-based multiprocessor servers and workstations (Sun, SGI, Intel, etc.)

8 Parallel Processing Terminology
 Parallel processing
 Parallel computer
– A multi-processor computer capable of parallel processing
 Throughput
– The amount of work completed per unit of time
 Speedup
– S = Time(the most efficient sequential algorithm) / Time(parallel algorithm)
 Parallelism:
– Pipeline
– Data parallelism
– Control parallelism
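The speedup definition above can be sketched in a few lines of Python. The timings are made-up illustrative numbers, not measurements from the slides:

```python
def speedup(t_sequential, t_parallel):
    """S = time of the most efficient sequential algorithm / time of the parallel algorithm."""
    return t_sequential / t_parallel

# Hypothetical timings: a job takes 30 s sequentially and 12 s on 3 processors.
s = speedup(30.0, 12.0)
print(s)        # 2.5
print(s / 3)    # speedup per processor (efficiency) on 3 processors
```

Note that S is at most the number of processors for this kind of comparison, so S / p is a useful measure of how well the processors are being used.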

9 Pipeline
 A pipeline consists of a number of steps called segments or stages
 The output of one segment is the input of the next segment
Stage 1 -> Stage 2 -> Stage 3
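A minimal sketch of this structure, using chained Python generators (the three stage functions and their operations are illustrative, not from the slides):

```python
def stage1(items):
    for x in items:      # first segment of the pipeline
        yield x + 1

def stage2(items):
    for x in items:      # consumes stage1's output as its input
        yield x * 2

def stage3(items):
    for x in items:      # consumes stage2's output as its input
        yield x - 3

# The output of one segment is the input of the next segment.
out = list(stage3(stage2(stage1(range(5)))))
print(out)  # [-1, 1, 3, 5, 7]
```

In hardware, the stages run concurrently on different items; the generator chain shows only the dataflow, not the overlap in time.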

10 Data Parallelism
 Applying the same operation simultaneously to the elements of a data set
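A small sketch of the idea with Python's standard `concurrent.futures` module; the `square` operation and the data are illustrative. (For CPU-bound work in Python, a `ProcessPoolExecutor` would be used instead of threads.)

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x   # the same operation, applied to every element

data = [1, 2, 3, 4, 5, 6]
# Three workers apply the same operation to different elements concurrently.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(square, data))
print(results)  # [1, 4, 9, 16, 25, 36]
```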

11 Pipeline & Data Parallelism
[Figure: the same three-stage job (A -> B -> C) applied to a stream of widgets w1, w2, ... in three ways: 1. sequential execution, 2. pipeline, 3. data parallelism.]

12 Pipeline & Data Parallelism
 Pipeline is a special case of control parallelism
 T(s): sequential execution time
T(p): pipeline execution time (with 3 stages)
T(dp): data-parallelism execution time (with 3 processors)
S(p): speedup of pipeline
S(dp): speedup of data parallelism

Widgets  1  2      3      4   5      6      7      8      9       10
T(s)     3  6      9      12  15     18     21     24     27      30
T(p)     3  4      5      6   7      8      9      10     11      12
T(dp)    3  3      3      6   6      6      9      9      9       12
S(p)     1  1+1/2  1+4/5  2   2+1/7  2+1/4  2+1/3  2+2/5  2+5/11  2+1/2
S(dp)    1  2      3      2   2+1/2  3      2+1/3  2+2/3  3       2+1/2
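The rows of the table follow from three small formulas, assuming each widget passes through 3 one-unit stages: the pipeline needs 3 units to fill and then finishes one widget per unit, while the 3 data-parallel processors finish widgets in batches of 3. A sketch that reproduces the timings:

```python
import math

STAGES = 3   # each widget passes through 3 one-unit stages (A, B, C)
PROCS = 3    # the data-parallel version uses 3 processors

def t_seq(n):
    return STAGES * n                  # 3 time units per widget, one after another

def t_pipe(n):
    return STAGES + (n - 1)            # fill the pipeline, then 1 widget per unit

def t_dp(n):
    return STAGES * math.ceil(n / PROCS)  # batches of 3 widgets, 3 units per batch

for n in range(1, 11):
    print(n, t_seq(n), t_pipe(n), t_dp(n),
          round(t_seq(n) / t_pipe(n), 2),   # S(p)
          round(t_seq(n) / t_dp(n), 2))     # S(dp)
```

The printout matches the table: for example, at 10 widgets both T(p) and T(dp) are 12, so both speedups are 30/12 = 2+1/2.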

13 Pipeline & Data Parallelism

14 Control Parallelism
 Applying different operations to different data elements simultaneously
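A minimal sketch with two Python threads, each performing a different operation at the same time; the two operations (`pick_evens` and `checksum`) are illustrative names, not from the slides:

```python
import threading

results = {}

def pick_evens(data):
    results["evens"] = [x for x in data if x % 2 == 0]  # one operation...

def checksum(data):
    results["sum"] = sum(data)  # ...and a different operation, run concurrently

data = list(range(10))
threads = [threading.Thread(target=pick_evens, args=(data,)),
           threading.Thread(target=checksum, args=(data,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results["evens"])  # [0, 2, 4, 6, 8]
print(results["sum"])    # 45
```

Unlike the data-parallel example, adding more data does not add more concurrent operations here: the program itself fixes how many different operations can run at once.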

15 Scalability
 An algorithm is scalable if its level of parallelism increases at least linearly with the problem size
 An architecture is scalable if it continues to yield the same performance per processor as the number of processors increases, albeit on larger problem sizes
 Data-parallel algorithms are more scalable than control-parallel algorithms: the amount of control parallelism is fixed by the program, while data parallelism grows with the problem size

