Parallelism
Greg Stitt, ECE Department, University of Florida
Why are microprocessors slow?
Von Neumann architecture: a "stored-program" machine with a single memory for instructions (and data).
Von Neumann architecture
Summary:
1) Fetch instruction
2) Decode instruction, fetch data
3) Execute
4) Store results
5) Repeat from 1 until end of program
Problem: the model is inherently sequential. Only one instruction executes at a time, and the machine takes no account of the parallelism in the application.
Von Neumann architecture
Problem 2: the Von Neumann bottleneck. Every instruction must read and write data through the same memory interface, which requires high memory bandwidth; performance is limited by the bandwidth of the memory. When that bandwidth is insufficient, the memory bus becomes the "Von Neumann bottleneck." (Figure: RAM feeding a control unit and a datapath.)
Improvements
Increase resources in the datapath to execute multiple instructions in parallel:
VLIW (very long instruction word): the compiler encodes parallelism into "very-long" instructions.
Superscalar: the architecture determines parallelism at run time via out-of-order instruction execution.
Either way, the Von Neumann bottleneck is still a problem. (Figure: RAM and control unit feeding multiple datapaths.)
Why is RC fast?
RC implements custom circuits for an application, and circuits can exploit a massive amount of parallelism:
VLIW/superscalar: ~5 instructions/cycle in the best case (which rarely occurs)
RC: potentially thousands of operations per cycle — as many as fit in the device
RC also supports different types of parallelism.
Types of Parallelism
Bit-level parallelism. C code for bit reversal:
x = (x >> 16) | (x << 16);
x = ((x >> 8) & 0x00ff00ff) | ((x << 8) & 0xff00ff00);
x = ((x >> 4) & 0x0f0f0f0f) | ((x << 4) & 0xf0f0f0f0);
x = ((x >> 2) & 0x33333333) | ((x << 2) & 0xcccccccc);
x = ((x >> 1) & 0x55555555) | ((x << 1) & 0xaaaaaaaa);
On a processor, binary compilation turns this into a long instruction sequence:
sll $v1, $v0, 0x10
srl $v0, $v0, 0x10
or  $v0, $v1, $v0
srl $v1, $v0, 0x8
and $v1, $v1, $t5
sll $v0, $v0, 0x8
and $v0, $v0, $t4
...
requiring between 32 and 128 cycles. On an FPGA, the circuit for bit reversal is simply a re-wiring between the original x value and the bit-reversed x value: it requires only 1 cycle, a speedup of 32x to 128x at the same clock.
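As a sanity check, the slide's shift-and-mask sequence can be wrapped in a runnable C function (the function name reverse32 is my own):

```c
#include <stdint.h>

/* Reverse the bits of a 32-bit word: swap halves, then bytes,
 * nibbles, pairs, and finally adjacent bits. A processor spends
 * one instruction per shift/mask/or; an FPGA does the whole
 * permutation with wires. */
uint32_t reverse32(uint32_t x) {
    x = (x >> 16) | (x << 16);
    x = ((x >> 8) & 0x00ff00ffu) | ((x << 8) & 0xff00ff00u);
    x = ((x >> 4) & 0x0f0f0f0fu) | ((x << 4) & 0xf0f0f0f0u);
    x = ((x >> 2) & 0x33333333u) | ((x << 2) & 0xccccccccu);
    x = ((x >> 1) & 0x55555555u) | ((x << 1) & 0xaaaaaaaau);
    return x;
}
```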
Types of Parallelism
Arithmetic-level parallelism (i.e., "wide" parallelism). C code:
for (i=0; i < 128; i++)
    y += c[i] * x[i];
On a processor this is thousands of instructions taking several thousand cycles. The FPGA circuit instantiates 128 multipliers feeding a tree of adders, completing in ~7 cycles (assuming 1 op per cycle — the adder tree is log2(128) = 7 levels deep): a speedup > 100x at the same clock.
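A software model of that wide circuit makes the ~7-cycle figure concrete — a sketch in C (the function name dot128 is my own): all 128 products are formed at once, then summed by a balanced tree whose depth, not width, sets the cycle count.

```c
/* Model of the wide-parallel circuit: 128 multipliers in parallel,
 * then a balanced adder tree. The outer reduction loop runs
 * log2(128) = 7 times, matching the slide's "~7 cycles". */
int dot128(const int c[128], const int x[128]) {
    int p[128];
    for (int i = 0; i < 128; i++)      /* 128 parallel multipliers */
        p[i] = c[i] * x[i];
    for (int w = 64; w >= 1; w /= 2)   /* 7 adder-tree levels */
        for (int i = 0; i < w; i++)
            p[i] = p[i] + p[i + w];
    return p[0];
}
```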
Types of Parallelism
Pipeline parallelism (i.e., "deep" parallelism).
for (i=0; i < …; i++) {
    y[i] += c[i] * x[i] + c[i+1] * x[i+1] + … + c[i+11] * x[i+11];
}
Problem: replicating the 12 multipliers across many parallel iterations would require huge area. Solution: use only the resources required by one iteration, and start a new iteration every cycle. After the pipeline fills, the circuit performs 12 multiplies + 12 adds every cycle. Performance can be increased further by "unrolling" the loop or replicating the datapath to perform multiple iterations every cycle.
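The loop above can be written out as a plain C model — a sketch, with the elided loop bound replaced by a parameter n of my own and the 12 taps taken from the slide; the pipelined circuit reuses exactly one iteration's resources for every value of i:

```c
#define TAPS 12

/* One iteration needs 12 multipliers and an adder chain; the
 * pipelined circuit starts a new iteration on those same resources
 * every cycle. This software model computes the same result. */
void mac12(int *y, const int *c, const int *x, int n) {
    for (int i = 0; i < n; i++) {
        int acc = y[i];
        for (int j = 0; j < TAPS; j++)
            acc += c[i + j] * x[i + j];
        y[i] = acc;
    }
}
```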
Types of Parallelism
Task-level parallelism. Example: MPEG-2. Each box in the block diagram is a task, all tasks execute in parallel, and each task may itself have bit-level, wide, and deep parallelism.
How to exploit parallelism?
General idea: identify tasks, create a circuit for each task, and communicate between tasks with buffers.
How to create the circuit for each task? We want to exploit bit-level, arithmetic-level, and pipeline-level parallelism. Solution: systolic arrays and pipelines.
Systolic Arrays
Systolic: "The rhythmic contraction of the heart, especially of the ventricles, by which blood is driven through the aorta and pulmonary artery after each dilation or diastole."
Analogy with the heart pumping blood: we want an architecture that pumps data through efficiently. "Data flows from memory in a rhythmic fashion, passing through many processing elements before it returns to memory." [Hung]
Systolic Arrays
General idea: a fully pipelined circuit with I/O at the top and bottom levels. Connections are local — each element communicates only with elements at the same level or the level below. Inputs arrive every cycle; outputs depart every cycle once the pipeline is full.
Pipelining: Simple Example
Create a DFG (data flow graph) for the body of the loop, representing the data dependencies of the code:
for (i=0; i < 100; i++)
    a[i] = b[i] + b[i+1] + b[i+2];
DFG: b[i] and b[i+1] feed the first adder; its sum and b[i+2] feed the second adder, which produces a[i].
Simple Example
Add pipeline stages (registers) to each level of the DFG, from the b[] inputs at the top down to the a[i] output.
Simple Example
Allocate one resource (adder, ALU, etc.) for each operation in the DFG. The resulting systolic architecture, for
for (i=0; i < 100; i++)
    a[i] = b[i] + b[i+1] + b[i+2];
operates as follows:
Cycle 1: b[0], b[1], b[2] are read into the input registers.
Cycle 2: b[1], b[2], b[3] are read; the first adder produces b[0]+b[1], with b[2] delayed alongside it.
Cycle 3: b[2], b[3], b[4] are read; the second adder produces b[0]+b[1]+b[2] while the first produces b[1]+b[2].
Cycle 4: b[3], b[4], b[5] are read; the first output a[0] appears — it takes 4 cycles to fill the pipeline.
Cycle 5: b[4], b[5], b[6] are read; a[1] appears. From this point there is one output per cycle, with 99 more until completion.
Total cycles => 4 (to fill) + 99 = 103.
uP Performance Comparison
for (i=0; i < 100; i++)
    a[i] = b[i] + b[i+1] + b[i+2];
Assumptions: 10 instructions for the loop body, CPI = 1.5, and a uP clock 10x faster than the FPGA's.
Total SW cycles: 100 * 10 * 1.5 = 1,500 cycles.
RC speedup: (1500/103) * (1/10) = 1.46x.
uP Performance Comparison
What if the uP clock is 15x faster? (e.g., 3 GHz vs. 200 MHz)
RC speedup: (1500/103) * (1/15) = 0.97x — RC is slightly slower.
But:
RC requires much less power: several watts vs. ~100 watts.
SW may still lose on embedded uPs, which are low power and may clock only 2x faster: (1500/103) * (1/2) = 7.3x faster for RC.
RC may be cheaper, depending on the area needed; this example would certainly be cheaper.
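The three numbers above all come from one formula, which is easy to keep straight in code (the helper name rc_speedup is mine):

```c
/* RC speedup over software: ratio of cycle counts, scaled by the
 * clock ratio (uP clock / RC clock). Values > 1 mean RC wins. */
double rc_speedup(double sw_cycles, double rc_cycles, double clock_ratio) {
    return (sw_cycles / rc_cycles) * (1.0 / clock_ratio);
}
```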
Simple Example, Cont.
Improvement to the systolic array: why not execute multiple iterations at the same time? There are no data dependencies between iterations, so the loop can be unrolled:
for (i=0; i < 100; i++)
    a[i] = b[i] + b[i+1] + b[i+2];
Unrolled DFG: iteration i uses b[i], b[i+1], b[i+2] and iteration i+1 uses b[i+1], b[i+2], b[i+3]; two adder trees run side by side, producing a[i] and a[i+1] each cycle.
Simple Example, Cont.
How much to unroll? Unrolling is limited by memory bandwidth and area:
Must read all inputs once per cycle.
Must write all outputs once per cycle.
Must have sufficient area for all operations in the DFG.
Unrolling Example
for (i=0; i < 100; i++)
    a[i] = b[i] + b[i+1] + b[i+2];
Original circuit: the first iteration requires 3 inputs, b[0], b[1], b[2].
Each unrolled iteration requires only one additional input, because consecutive iterations overlap in their inputs: unrolling once adds b[3] and a second adder tree, producing a[0] and a[1] together.
In steady state the unrolled circuit brings in 4 inputs per cycle instead of 6, since the middle values are shared between the two iterations in flight.
Performance after Unrolling
How much unrolling?
for (i=0; i < 100; i++)
    a[i] = b[i] + b[i+1] + b[i+2];
Assume b[] elements are 8 bits. The first iteration requires 3 elements = 24 bits; each additional unrolled iteration requires only 1 element = 8 bits, due to the overlapping inputs.
Assume memory bandwidth = 64 bits/cycle. Then 6 iterations can run in parallel: 24 + 5*8 = 64 bits.
New performance: the unrolled systolic architecture requires 4 cycles to fill the pipeline, plus 100/6 iterations => ~21 cycles.
With unrolling, RC is (1500/21) * (1/15) = 4.8x faster than the 3 GHz microprocessor!
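The bandwidth arithmetic above generalizes: the first iteration costs its full input width, and each further iteration costs only the non-overlapping part. A sketch (the helper name unroll_factor is mine):

```c
/* Iterations that fit in one memory fetch: the first iteration
 * needs first_bits, and each additional one adds only step_bits
 * because consecutive input windows overlap. */
long unroll_factor(long bus_bits, long first_bits, long step_bits) {
    if (bus_bits < first_bits)
        return 0;                /* can't even feed one iteration */
    return 1 + (bus_bits - first_bits) / step_bits;
}
```

With 8-bit elements and a 3-element window this gives 6 iterations on a 64-bit bus and 14 on a 128-bit bus, matching the slides.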
Importance of Memory Bandwidth
Performance with wider memories: with a 128-bit bus, 14 iterations run in parallel (the extra 64 bits / 8 bits per iteration = 8 more parallel iterations + the 6 original = 14).
Total cycles = 4 to fill the pipeline + 100/14 => ~11.
Speedup: (1500/11) * (1/15) = 9.1x. Doubling the memory width increased the speedup from 4.8x to 9.1x!
Important point: the performance of hardware is often limited by memory bandwidth. More bandwidth => more unrolling => more parallelism => BIG SPEEDUP.
Delay Registers
Common mistake: forgetting to add registers for values not used during a cycle. Such values must be "delayed" — passed through a register each cycle — until they are needed, so that they arrive at the right pipeline stage in the right cycle. (Figure: incorrect circuit with an input wired directly to the second adder vs. correct circuit with a delay register on that input.)
Delay Registers
Illustration of incorrect delays, for
for (i=0; i < 100; i++)
    a[i] = b[i] + b[i+1] + b[i+2];
Cycle 1: b[0], b[1], b[2] are read.
Cycle 2: b[1], b[2], b[3] are read; the first adder produces b[0]+b[1], but without a delay register the b[2] that should accompany it was never saved — the second adder's other input is ?????.
Cycle 3: the first adder produces b[1]+b[2], but the second adder again combines b[0]+b[1] with ?????: whatever value happened to be present, so every output is corrupted.
Another Example
Your turn.
short b[1004], a[1000];
for (i=0; i < 1000; i++)
    a[i] = avg( b[i], b[i+1], b[i+2], b[i+3], b[i+4] );
Steps:
Build the DFG for the body of the loop.
Add pipeline stages.
Map operations to hardware resources (assume divide takes one cycle).
Determine the maximum amount of unrolling (memory bandwidth = 128 bits/cycle).
Determine performance compared to the uP (assume 15 instructions per iteration, CPI = 1.5, and a uP clock 15x faster than the RC clock).
Another Example, Cont.
What if the divider takes 20 cycles, but is fully pipelined? Calculate the effect on performance. In systolic architectures, performance is usually dominated by the throughput of the pipeline, not its latency: the slow divider adds fill time once, yet the array still accepts a new iteration every cycle.
Dealing with Dependencies
op2 is dependent on op1 when an input of op2 is an output of op1. Problem: dependencies limit arithmetic parallelism and increase latency — op2 can't execute before op1. This is a serious problem because FPGAs rely on parallelism for performance: little parallelism = bad performance.
Dealing with Dependencies
Partial solution: parallelizing transformations, e.g., tree height reduction. For a + b + c + d, the serial chain ((a + b) + c) + d has depth equal to the number of adders (3), while the balanced tree (a + b) + (c + d) has depth log2(number of inputs) = 2.
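Tree height reduction relies on addition being associative, so both shapes compute the same value — only the dependency depth differs. A minimal sketch (function names are my own):

```c
/* Serial chain: 3 dependent additions, depth 3. */
int sum_chain(int a, int b, int c, int d) {
    return ((a + b) + c) + d;
}

/* Balanced tree: the two inner sums have no dependency on each
 * other, so they run in parallel; depth log2(4) = 2. */
int sum_tree(int a, int b, int c, int d) {
    return (a + b) + (c + d);
}
```

(Note that for floating-point, reassociation can change rounding; for integers the two are identical.)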
Dealing with Dependencies
Simple example with an inter-iteration dependency — a potential problem for systolic arrays, because the pipeline can't be kept full:
a[0] = 0;
for (i=1; i < 8; i++)
    a[i] = b[i] + b[i+1] + a[i-1];
Iteration i can't execute until iteration i-1 completes, which limits arithmetic parallelism and increases latency.
Dealing with Dependencies
But systolic arrays also have pipeline-level parallelism, so latency is less of an issue:
a[0] = 0;
for (i=1; i < 8; i++)
    a[i] = b[i] + b[i+1] + a[i-1];
The a[i-1] output of each iteration's adder tree feeds directly into the next iteration's tree, chaining the circuits for successive iterations. Adding pipeline stages to this chain turns it into a systolic array. This only works if the loop is fully unrolled, and it requires sufficient memory bandwidth.
Dealing with Dependencies
Your turn.
char b[1006];
for (i=0; i < 1000; i++) {
    acc = 0;
    for (j=0; j < 6; j++)
        acc += b[i+j];
    a[i] = acc;
}
Steps:
Build the DFG for the inner loop (note the dependencies on acc).
Fully unroll the inner loop (check that memory bandwidth allows it; assume bandwidth = 64 bits/cycle).
Add pipeline stages.
Map operations to hardware resources.
Determine performance compared to the uP (assume 15 cycles per iteration, CPI = 1.5, and a uP clock 15x faster than the RC clock).
Dealing with Control
If statements: the circuit can't wait for the result of a condition — that stalls the pipeline.
char b[1006], a[1000];
for (i=0; i < 1000; i++) {
    if (i % 2 == 0)
        a[i] = b[i] * b[i+1];
    else
        a[i] = b[i+2] + b[i+3];
}
Solution: convert control into computation ("if conversion"). The circuit computes b[i] * b[i+1], b[i+2] + b[i+3], and i % 2 in parallel every cycle, and a MUX selects which result becomes a[i].
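In software, the if-converted form of the slide's loop looks like the following sketch (the function name if_converted is my own): both branch values are computed unconditionally, and the conditional expression plays the role of the MUX.

```c
/* If-converted loop body: compute both candidate values every
 * iteration, then select one with a 2:1 "MUX" (the ?: operator),
 * so the pipeline never stalls waiting on the branch. */
void if_converted(char *a, const char *b, int n) {
    for (int i = 0; i < n; i++) {
        char prod = b[i] * b[i + 1];       /* then-branch value */
        char sum  = b[i + 2] + b[i + 3];   /* else-branch value */
        a[i] = (i % 2 == 0) ? prod : sum;  /* MUX on the condition */
    }
}
```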
Dealing with Control
If conversion is not always so easy. When the branches write different arrays, each output needs its own MUX:
char b[1006], a[1000], a2[1000];
for (i=0; i < 1000; i++) {
    if (i % 2 == 0)
        a[i] = b[i] * b[i+1];
    else
        a2[i] = b[i+2] + b[i+3];
}
The circuit computes both b[i] * b[i+1] and b[i+2] + b[i+3]; one MUX selects between the new product and the old a[i], and another between the new sum and the old a2[i].
Other Challenges
Outputs can also limit unrolling. Example: 4 outputs, 1 input, each output 32 bits.
long b[1004], a[1000];
for (i=0, j=0; i < 1000; i+=4, j++) {
    a[i]   = b[j] + 10;
    a[i+1] = b[j] * 23;
    a[i+2] = b[j] - 12;
    a[i+3] = b[j] * b[j];
}
Total output bandwidth for one iteration = 128 bits. With a 128-bit memory bus, the loop can't be unrolled at all, even though the inputs use only 32 bits.
Other Challenges
Systolic arrays require streaming data to work well.
for (i=0; i < 4; i++)
    a[i] = b[i] + b[i+1];
Here the pipelining is wasted because the data stream is small. There is often a large overhead each time the circuit is started (e.g., communication across a PCIx bus, a network, etc.), and that overhead must be outweighed by parallelism on a large data stream. The point: systolic arrays work best with repeated computation.
Other Challenges
Memory bandwidth: the values so far are "peak" values, achievable only if all input data is stored sequentially (or in the same rows) of memory — often not the case. Example: two-dimensional arrays.
long a[100][100], b[100][100];
for (i=1; i < 99; i++) {
    for (j=1; j < 99; j++) {
        a[i][j] = avg( b[i-1][j], b[i][j-1], b[i+1][j], b[i][j+1] );
    }
}
The four inputs of each iteration come from different rows, so they cannot all be fetched in one sequential burst.
Other Challenges
Memory bandwidth, cont. Example 2: multiple array inputs.
long a[100], b[100], c[100];
for (i=0; i < 100; i++) {
    a[i] = b[i] + c[i];
}
b[] and c[] are stored in different locations, so memory accesses may jump back and forth between them. Possible solutions: use multiple memories or a multiported memory (high cost), or interleave the data from b[] and c[] in memory (programming effort — without compiler support, this requires a manual rewrite).
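One way to realize the interleaving option in C is to pack corresponding elements of b[] and c[] side by side, so each iteration's inputs are adjacent in memory — a sketch, with the struct layout and names my own illustration:

```c
/* Interleaved layout: b[i] and c[i] sit next to each other, so one
 * sequential access per iteration fetches both operands. */
struct pair { long b; long c; };

void add_interleaved(long *a, const struct pair *p, int n) {
    for (int i = 0; i < n; i++)
        a[i] = p[i].b + p[i].c;
}
```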
Other Challenges
Dynamic memory access patterns: the sequence of addresses is not known until run time, and is clearly not sequential.
int f( int val ) {
    long a[100], b[100], c[100];
    for (i=0; i < 100; i++)
        a[i] = b[rand() % 100] + c[i * val];
}
Possible solution: something creative enough for a Ph.D. thesis.
Other Challenges
Pointer-based data structures: even when scanning through a list, the data could be scattered all over memory — very unlikely to be sequential. Pointers can also cause aliasing problems, which greatly limit optimization potential. Solutions are another Ph.D. Pointers are fine, however, when used as arrays:
int f( int val ) {
    long a[100], b[100];
    long *p = b;
    for (i=0; i < 100; i++, p++)
        a[i] = *p + 1;
}
is equivalent to
int f( int val ) {
    long a[100], b[100];
    for (i=0; i < 100; i++)
        a[i] = b[i] + 1;
}
Other Challenges
Not all code is just one loop — handling general code is yet another Ph.D. Main point to remember: systolic arrays are extremely fast, but only certain types of code map onto them. What can we do instead of systolic arrays?
Other Options
Try something completely different, or try a slight variation. Example: the loop
for (i=0; i < 100; i++)
    a[i] = b[i] + b[i+1] + b[i+2];
needs 3 inputs per cycle, but the memory can only read 2 per cycle, so the original systolic array is not possible.
Variations
Example, cont.: break the previous rules — use extra delay registers so the array consumes only 2 inputs per cycle.
for (i=0; i < 100; i++)
    a[i] = b[i] + b[i+1] + b[i+2];
Cycle 1: b[0] and b[1] are read; the remaining register holds junk.
Cycle 2: b[2] is read (paired with junk); b[0] and b[1] reach the first adder.
Cycle 3: b[1] and b[2] are read for the next iteration; b[0]+b[1] is computed, with b[2] delayed beside it.
Cycle 4: b[3] is read; b[0]+b[1]+b[2] is computed while junk occupies the alternate slots.
Cycle 5: the first output a[0] appears, after 5 cycles.
Cycle 6: junk appears on the output; b[1]+b[2]+b[3] is still in flight.
Cycle 7: the second output a[1] appears, 2 cycles after the first.
A valid output emerges every 2 cycles — approximately 1/2 the performance of the full array, but within the 2-inputs-per-cycle memory limit.
Entire Circuit
An input address generator reads from RAM into an input buffer; a buffer controller feeds the datapath from that buffer; results go into an output buffer, and an output address generator writes them back to RAM. The buffers handle differences in speed between the RAM and the datapath.