Examples of One-Dimensional Systolic Arrays

Motivation & Introduction We need a high-performance, special-purpose computer system to meet the needs of a specific application. The imbalance between I/O and computation is a notable problem. The concept of systolic architecture maps a high-level computation directly into a hardware structure. A systolic system works like an automobile assembly line. Systolic systems are easy to implement because of their regularity, and easy to reconfigure. Systolic architecture can result in cost-effective, high-performance special-purpose systems for a wide range of problems.

Pipelined Computations A pipelined program is divided into a series of tasks that have to be completed one after the other. Each task is executed by a separate pipeline stage, and data is streamed from stage to stage to form the computation. [Figure: data items f, e, d, c, b, a streaming through stages P1–P5.]

Pipelined Computations The computation consists of data streaming through the pipeline stages. Execution time = time to fill the pipeline (P-1) + time to run in steady state (N-P+1) + time to empty the pipeline (P-1) = N+P-1, where P = # of processors and N = # of data items (assume P < N). [Figure: space-time diagram of items a–f passing through stages P1–P5.] This slide must be explained in all detail; it is very important.
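
As a sanity check, here is a minimal sketch of the timing formula (my construction; the function name and values are illustrative only):

```python
# Total cycles for an N-item stream through a P-stage pipeline:
# fill (P-1) + steady state (N-P+1) + drain (P-1) = N + P - 1.
def pipeline_cycles(n_items: int, n_stages: int) -> int:
    assert n_stages < n_items  # the slide assumes P < N
    fill = n_stages - 1
    steady = n_items - n_stages + 1
    drain = n_stages - 1
    return fill + steady + drain

print(pipeline_cycles(n_items=6, n_stages=5))  # 10 cycles for a..f through P1..P5
```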

Pipelined Example: Sieve of Eratosthenes The goal is to take a list of integers greater than 1 and produce the list of primes among them. A pipelined approach: –Processor P_i divides each input by the i-th prime. –If the input is divisible (and not equal to the divisor), it is marked (with a negative sign) and forwarded. –If the input is not divisible, it is forwarded. –The last processor only forwards unmarked (positive) data [the primes].

Sieve of Eratosthenes Pseudo-Code Code for processor P_i (holding prime p_i): –x = recv(data, P_(i-1)) –If (x > 0) then: If (p_i divides x and p_i ≠ x) then send(-x, P_(i+1)); If (p_i does not divide x or p_i = x) then send(x, P_(i+1)) –Else send(x, P_(i+1)) Code for the last processor: –x = recv(data, P_(i-1)) –If x > 0 then send(x, OUTPUT) [Figure: pipeline P2 → P3 → P5 → P7 → out; processor P_i divides each input by the i-th prime.]
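
The same stage logic can be emulated in software. This sketch (my naming, not from the slides) chains one generator per prime and keeps the mark-with-a-negative-sign convention:

```python
def stage(p, stream):
    # Processor holding prime p: mark (negate) values divisible by p,
    # except p itself; already-marked values are just forwarded.
    for x in stream:
        if x > 0 and x % p == 0 and x != p:
            yield -x
        else:
            yield x

def pipelined_sieve(numbers, primes):
    stream = iter(numbers)
    for p in primes:              # one pipeline stage per prime
        stream = stage(p, stream)
    # the last processor only forwards unmarked (positive) data
    return [x for x in stream if x > 0]

print(pipelined_sieve(range(2, 30), [2, 3, 5]))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```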

Programming Issues The algorithm will take N+P-1 steps to run, where N is the number of data items and P is the number of processors. –Can also consider just the odd numbers, or do some initial part separately. In the given implementation, the pipeline must contain a processor for every prime that will appear in the sequence. –Not a scalable approach. –Can fix this by having each processor do the job of multiple primes, i.e. mapping several logical "processors" in the pipeline onto each physical processor. –What is the impact of this on performance? [Figure: pipeline P2, P3, P5, P7, P11, P13, P17; each physical processor does the job of three primes.]

Processors for such operations In a pipelined algorithm, the flow of data moves through the processors in lockstep. The design attempts to balance the work so that there is no bottleneck at any processor. In the mid-80's, processors were developed to support this kind of parallel pipelined computation in hardware. Two commercial products from Intel: –Warp (1D array) –iWarp (components for a 2D array) Warp and iWarp were meant to operate synchronously. The Wavefront Array Processor (S.Y. Kung) was meant to operate asynchronously, –i.e. the arrival of data would signal that it was time to execute.

Systolic Arrays from Intel Warp and iWarp were examples of systolic arrays. –Systolic means regular and rhythmic: –data was supposed to move through pipelined computational units in a regular and rhythmic fashion. Systolic arrays were meant to be special-purpose processors or co-processors. They were very fine-grained: the processors, usually called cells, implement a limited and very simple computation. Communication is very fast, with granularity meant to be around one operation per communication!

Systolic Algorithms Systolic arrays were built to support systolic algorithms, a hot area of research in the early 80's. Systolic algorithms used pipelining through various kinds of arrays to accomplish computational goals: –Some of the data streaming and applications were very creative and quite complex. –CMU was a hotbed of systolic algorithm and array research (especially H.T. Kung and his group).

Example 1: "pipelined" polynomial evaluation Polynomial evaluation is done using a linear array (Horner's rule): y = ((((a_n·x + a_(n-1))·x + a_(n-2))·x + a_(n-3))·x + … + a_1)·x + a_0 The PEs work in pairs: –1. Multiply the input by x. –2. Pass the result to the right. –3. Add a_j to the result from the left. –4. Pass the result to the right. [Figure: first and second processor in each pair.]

Using a systolic array for polynomial evaluation. This pipelined array can produce a polynomial result for a new x value on every cycle, after an initial latency of 2n stages. Another variant: you can also calculate various polynomials on the same x. This is an example of a deeply pipelined computation: –The pipeline has 2n stages. [Figure: alternating multiplying processors (×x, with x broadcast) and adding processors (+a_j), coefficients a_n, a_(n-1), …, a_0.]
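
A sequential emulation of the PE pairs is straightforward. The sketch below (illustrative naming, timing collapsed) applies one multiply cell and one add cell per coefficient, which is exactly Horner's rule:

```python
def horner_pipeline(coeffs, x):
    # coeffs = [a_n, a_(n-1), ..., a_0]; each PE pair multiplies the running
    # result by x (multiply cell) and adds the next coefficient (add cell).
    acc = coeffs[0]
    for a in coeffs[1:]:
        acc = acc * x + a
    return acc

print(horner_pipeline([2, 0, 1], x=3))  # 2*3^2 + 0*3 + 1 = 19
```

In the real array, 2n such cells run concurrently, which is why a new x (or a new polynomial) can enter on every cycle.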

Example 2: Matrix-Vector Multiplication There are many ways to solve matrix problems using systolic arrays; some of the methods are: –A triangular array performing Gaussian elimination with neighbor pivoting. –A triangular array performing orthogonal triangularization. Simple matrix multiplication methods are shown in the next slides.

Example 2: Matrix-Vector Multiplication Each cell's function is: –1. Multiply the top and bottom inputs. –2. Add the left input to the product just obtained. –3. Output the result to the right. Each cell consists of a multiplier-adder and a few registers.

Example 2: Matrix-Vector Multiplication [Figure: three cells PE1, PE2, PE3; the matrix rows (entries a–i, with l, m, n in the staggered input stream) arrive on the top inputs, the vector p, q, r on the bottom inputs, and the results x, y, z emerge on the right.] At time t0 the array receives l, a, p, q, and r (the other inputs are all zero). At time t1 the array receives m, d, b, p, q, and r, etc. The results emerge after 5 steps. Analyze how row [a b c] is multiplied by column [p q r]^T to return the first element of the result vector [x y z]^T.

Each cell (PE1, PE2, PE3) does just one instruction: multiply the top and bottom inputs, add the left input to the product just obtained, and output the result to the right. The cells are simple: just a multiplier-adder and a few registers. The cleverness comes in the order in which you feed input into the systolic array. At time t0, the array receives l, a, p, q, and r –(the other inputs are all zero). At time t1, the array receives m, d, b, p, q, and r, and so on. The results emerge after 5 steps. To visualize how it works, it is good to do a snapshot animation.
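
In place of a snapshot animation, here is a small behavioral sketch (my construction, with the input staggering collapsed): each result enters at zero and accumulates one product per cell as it moves right, which is why the last of the three results emerges after 2n-1 = 5 steps for n = 3.

```python
def systolic_matvec(A, x):
    # Behavioral model (timing collapsed): cell j holds the vector element
    # x[j]; the partial result for row i accumulates A[i][j] * x[j] at cell j
    # and moves right.  In the real array row i enters one step after row
    # i-1, so the last of n results emerges after 2n - 1 steps.
    n = len(x)
    results = []
    for row in A:
        acc = 0
        for j in range(n):
            acc = acc + row[j] * x[j]   # one multiply-add per cell
        results.append(acc)
    return results

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(systolic_matvec(A, [1, 0, 1]))  # [4, 10, 16]
```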

Systolic Processors versus Cellular Automata versus Regular Networks of Automata [Figure: a systolic processor is a chain of data path blocks; a cellular automaton is a chain of control blocks. These slides are for the one-dimensional case only.]

Systolic Processors versus Cellular Automata versus Regular Networks of Automata [Figure: a cellular automaton is a chain of control blocks (examples: General and Soldiers, symmetric function evaluator); a regular network of automata combines control blocks with data path blocks.]

Introduction to Polynomial Multiplication, Filtering and Convolution Circuit Synthesis (Perkowski)

Example 3: FIR Filter or Convolution

Convolution as polynomial multiplication:
(a3·x³ + a2·x² + a1·x + a0) × (b3·x³ + b2·x² + b1·x + b0) =
    b3a3·x⁶ + b3a2·x⁵ + b3a1·x⁴ + b3a0·x³
  + b2a3·x⁵ + b2a2·x⁴ + b2a1·x³ + b2a0·x²
  + b1a3·x⁴ + b1a2·x³ + b1a1·x² + b1a0·x
  + b0a3·x³ + b0a2·x² + b0a1·x + b0a0
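
Collecting the rows by power of x is exactly a (full) convolution of the coefficient vectors. A minimal sketch (names are illustrative):

```python
def poly_mul(a, b):
    # Coefficient lists, lowest degree first: out[k] = sum of a[i]*b[j] with
    # i + j = k, i.e. polynomial multiplication is a full convolution.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# (1 + x)^2 = 1 + 2x + x^2
print(poly_mul([1, 1], [1, 1]))  # [1, 2, 1]
```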

FIR-filter like structure. First we will explain how it works. The vector of b_i coefficients stands in place; the vector of a_i coefficients moves from the highest coefficient of a towards the highest coefficient of b. [Snapshot t=0: taps b4, b3, b2, b1 with an adder chain; a4 enters and the first product a4·b4 is formed.]

[Snapshot t=1: a3 enters; outputs so far: a4·b4, then a3·b4 + a4·b3.]

[Snapshot t=2: a2 enters; next output: a4·b2 + a3·b3 + a2·b4.]

[Snapshot t=3: a1 enters; next output: a1·b4 + a2·b3 + a3·b2 + a4·b1.]

[Snapshot t=4: the input stream ends; next output: a1·b3 + a2·b2 + a3·b1.]

We redesign this architecture: we insert D flip-flops to avoid many levels of logic. [Snapshot: a4 is broadcast and the products a4·b4, a4·b3, a4·b2, a4·b1 are formed in parallel.] We simulate it again, shifting vector a. Vector a is broadcast as it moves, highest coefficient towards highest coefficient.

[Snapshot: a3 is broadcast; partial sums: a4·b4, a4·b3 + a3·b4, a4·b2 + a3·b3, a4·b1 + a3·b2, a3·b1.]

[Snapshot: a2 is broadcast; partial sums: a4·b4, a4·b3 + a3·b4, a4·b2 + a3·b3 + a2·b4, a4·b1 + a3·b2 + a2·b3, a3·b1 + a2·b2, a2·b1.] The disadvantage of this circuit is the broadcasting.

Another way to draw exactly the same architecture with broadcast input

A family of systolic designs for convolution computation. Given the sequence of weights {w1, w2, ..., wk} and the input sequence {x1, x2, ..., xn}, compute the result sequence {y1, y2, ..., y(n+1-k)} defined by
y_i = w1·x_i + w2·x_(i+1) + ... + wk·x_(i+k-1)

Design B1 - Broadcast input, - move results systolically, - weights stay. (Semi-systolic convolution arrays with global data communication.) Previously proposed for circuits to implement a pattern matching processor and for circuits to implement polynomial multiplication.

Types of systolic structure: design B1 A wider systolic path is needed (the partial results y_i move). [Figure: x_i broadcast to all cells; weights W1, W2, W3 stay; partial results y1, y2, y3 move out systolically.] Cell function: y_out = y_in + W·x_in Please analyze this circuit by drawing snapshots, like in an animated movie of the data at subsequent moments of time. Discuss the disadvantages of broadcast.
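
A snapshot-style simulation of B1 is short. This sketch (my construction, illustrative values) broadcasts each x to all cells and shifts the partial results one cell per cycle:

```python
def design_b1(w, xs):
    # Cell j per cycle: y_out = y_in + w[j] * x, with x broadcast to all
    # cells and partial results shifting one cell right per cycle.
    k = len(w)
    y = [0] * k
    out = []
    for x in xs:
        y = [0] + y[:-1]                            # results move systolically
        y = [yj + wj * x for yj, wj in zip(y, w)]   # broadcast multiply-add
        out.append(y[-1])
    return out[k - 1:]   # valid outputs once the result path has filled

print(design_b1([1, 2, 3], [1, 2, 3, 4, 5]))  # [14, 20, 26]
```

After the first k-1 fill cycles are discarded, one y_i emerges per cycle.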

We go back to our unified way of drawing processors

We insert more D flip-flops to avoid broadcasting. [Snapshot: a4 enters the registered a-path; only the product a4·b4 is formed.] We simulate it again, shifting vector a. Vector a moves, highest coefficient towards highest coefficient.

[Snapshot: a3 enters; the partial products a4·b4, a3·b4, a4·b3 no longer line up.] With this modification the circuit does not work correctly like this. Try something new…

Let us check what happens when we shift a through b with the highest-coefficient-towards-highest-coefficient approach. [Snapshot: the products grouped by column, e.g. a4·b4; a3·b4, a4·b3; a2·b4, a3·b3, a4·b2; …, forming a first sum, a second sum, and so on.] When we add the results, the timing is correct. But the trouble is the big adder needed to add these results from the columns.

Another way of drawing this type of architecture

Types of systolic structure: design F Inputs move, weights stay, partial results fan in (needs an adder). Applications: signal processing, pattern matching. Cell function: z_out = W·x_in, x_out = x_in [Figure: x values stream through cells holding W1, W2, W3; all z outputs fan in to a global ADDER that produces the y's.]

Design F - Fan-in results, move inputs, weights stay. - Semi-systolic convolution arrays with global data communication. When the number of cells is large, the adder can be implemented as a pipelined adder tree to avoid a large delay. Designs of this type use unbounded fan-in.
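
A sketch of design F (my construction; storing the weights against the direction of x flow makes the fan-in adder emit y1, y2, … in order once the array is full):

```python
def design_f(w, xs):
    # x values shift one cell right per cycle; weights stay; a global adder
    # fans in all k products each cycle.  Storing the weights reversed makes
    # the adder output y_i = w1*x_i + ... + wk*x_(i+k-1) after the fill.
    k = len(w)
    cells = [0] * k
    out = []
    for t, x in enumerate(xs):
        cells = [x] + cells[:-1]          # the input stream moves
        if t >= k - 1:                    # array full: adder output is valid
            out.append(sum(wj * c for wj, c in zip(reversed(w), cells)))
    return out

print(design_f([1, 2, 3], [1, 2, 3, 4, 5]))  # [14, 20, 26]
```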

FIR-filter like structure, assume two delays. So we invent a new trick: we create two delays, not one, between taps, in order to shift the data everywhere. The animation snapshots follow.

[Animation snapshots: the a coefficients shift through the two-delay structure cycle by cycle; the taps b4, b3, b2, b1 and their adder chain stay in place.]

We get this structure without broadcasting and without a big adder. The trouble is still two combinational blocks in series, which may slow down the clock. Remember that the FIR filter, convolution, and polynomial multiplication are in essence the same pattern of moving data. This pattern of moving data is fundamental to many applications, so we spend more time discussing it. Data moves left to right, the result of the convolution moves to the left, and the filter coefficients stay in place.
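
The two-delay structure can be checked with a register-level simulation. This sketch (my construction, illustrative values) keeps the b coefficients at the taps, gives the a path two registers per stage and the sum path one, and reproduces the polynomial-product coefficients in order:

```python
def two_delay_fir(b, a_coeffs, n_cycles):
    # b[j] stays at tap j; a values march right through TWO registers per
    # stage; partial sums march right through ONE register per stage.
    k = len(b)
    a1, a2 = [0] * k, [0] * k          # the two a-path registers per stage
    s = [0] * k                        # the sum-path register per stage
    stream = list(a_coeffs) + [0] * n_cycles
    out = []
    for t in range(n_cycles):
        taps = [stream[t]] + a2[:k - 1]          # a value seen at each tap
        s = [taps[j] * b[j] + (s[j - 1] if j else 0) for j in range(k)]
        a1, a2 = taps[:], a1[:]                  # clock the a-path registers
        out.append(s[-1])
    return out

# b4..b1 = 4,3,2,1 stay in place; a4..a1 = 4,3,2,1 enter highest first.
print(two_delay_fir([4, 3, 2, 1], [4, 3, 2, 1], 10))
# [0, 0, 0, 16, 24, 25, 20, 10, 4, 1] -- product coefficients, high to low
```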

FIR circuit: initial design. [Figure: delays pipelining the x_i inputs.]

FIR circuit: registers added below the weight multipliers. Notice the changed timing here.

Example 3: Convolution There are many ways to implement convolution using systolic arrays; one of them is shown: –u(n): the input sequence, entering from the left. –w(n): the weights, preloaded into the n processing elements (PEs). –y(n): the result sequence, entering from the right (initial value 0) and moving at the same speed as u(n). In this operation each cell's function is: –1. Multiply the input coming from the left by the weight, and pass the input on to the next cell. –2. Add the product to the value arriving from the right. Cell equations: a_out = a_in, b_out = b_in + a_in·w_i [Figure: PEs holding W0…W3; u_i…u_0 enter from the left, y_i…y_0 (initially zeroed) from the right.] Data moves left to right, the result of the convolution moves to the left, and the filter coefficients stay in place. The same design as before, but drawn differently.

Convolution (cont.) Each cell's operation: a_out = a_in, b_out = b_in + a_in·w_i. The input sequence enters from the left. This is just one solution to this problem; thus we have already shown three variants of executing convolution.
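
The cell equations can be checked directly. The behavioral sketch below (my construction; the real array overlaps these passes in time, with the y stream moving against the u stream) lets each result visit every cell once:

```python
def cell(a_in, b_in, w):
    # One PE: pass the input along, accumulate one product into the result.
    return a_in, b_in + a_in * w

def convolve(w, u):
    # Each result y_i starts at 0 and picks up w[j] * u[i + j] at cell j.
    k = len(w)
    ys = []
    for i in range(len(u) - k + 1):
        b = 0
        for j in range(k):
            _, b = cell(u[i + j], b, w[j])
        ys.append(b)
    return ys

print(convolve([1, 2, 3], [1, 2, 3, 4, 5]))  # [14, 20, 26]
```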

Various Possible Implementations Convolution is very important; we use it in several applications. So let us think about all the possible ways to implement it. The convolution algorithm is two loops (see the sketch below).
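
The starting point of the design space is the plain two-loop algorithm; in code (0-indexed, illustrative values):

```python
def convolution_two_loops(w, x):
    k = len(w)
    y = [0] * (len(x) - k + 1)
    for i in range(len(y)):          # for i = 1 to n, in parallel
        for j in range(k):           # for j = 1 to k, in place
            y[i] += w[j] * x[i + j]  # y_i += w_j * x_(i+j-1)
    return y

print(convolution_two_loops([1, 2, 3], [1, 2, 3, 4, 5]))  # [14, 20, 26]
```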

Bag of Tricks that can be used: –Preload-repeated-value –Replace-feedback-with-register –Internalize-data-flow –Broadcast-common-input –Propagate-common-input –Retime-to-eliminate-broadcasting

Bogus Attempt at Systolic FIR
for i = 1 to n in parallel
  for j = 1 to k in place
    y_i += w_j · x_(i+j-1)
The inner loop is realized in place. Stage 1: directly from the equation. Stage 2: the feedback of y_i onto itself is replaced with a register (a trick taken from the sequential implementation). Stage 3: …

Bogus Attempt continued: Outer Loop
for i = 1 to n in parallel
  for j = 1 to k in place
    y_i += w_j · x_(i+j-1)
Factor out w_j.

Bogus Attempt continued: Outer Loop - 2
for i = 1 to n in parallel
  for j = 1 to k in place
    y_i += w_j · x_(i+j-1)
Because we do not want to have broadcast, we retime the signal w; this also requires retiming of x_j.

Bogus Attempt continued: Outer Loop - 2a
for i = 1 to n in parallel
  for j = 1 to k in place
    y_i += w_j · x_(i+j-1)
Another possibility of retiming.

Bogus Attempt continued: Outer Loop - 3
for i = 1 to n in parallel
  for j = 1 to k in place
    y_i += w_j · x_(i+j-1)
Yet another approach is to broadcast the common input x_(i-1).

Attempt at Systolic FIR: now the internal loop is in parallel. [Figure: stages 1, 2, 3.]

Outer Loop continuation for FIR filter

Continue: Optimize Outer Loop - Preload-Repeated-Value Based on the previous slide, we can preload the weights w_i.

Continue: Optimize Outer Loop - Broadcast-Common-Input This design has broadcast. Some purists say this is not systolic, since a systolic design should have only short wires.

Continue: Optimize Outer Loop - Retime to Eliminate Broadcast We delay the y_i signals.

The design becomes unintuitive. Therefore, we have to explain in detail how it works. [Figure: first snapshot, y1 = x1·w1, with x1 and x2 in flight.]

More history-based types of systolic structure The convolution problem: weights: {w1, w2, ..., wk}; inputs: {x1, x2, ..., xn}; results: {y1, y2, ..., y(n+k-1)}, with
y_i = w1·x_i + w2·x_(i+1) + ... + wk·x_(i+k-1)
(combining two data streams). H. T. Kung's grouping work; assume k = 3. Polynomial multiplication is a 1-D convolution problem.

Types of systolic structure: design B2 Inputs are broadcast, weights move, results stay. –The w_i circulate. –Uses multiplier-accumulator hardware. –Each w_i has a tag bit (it signals the accumulator to output its result). –Needs a separate bus (or other global network) for collecting output. Cell function: y = y + W_in·x_in, W_out = W_in [Figure: x_i broadcast; weights W1, W2, W3 circulating; results y1, y2, y3 resident in the cells.]

Design B2 Broadcast input, move weights, results stay [(semi-)systolic convolution arrays with global data communication]. The path for moving the y_i's is wider than that for the w_i's, because the y_i's carry more bits than the w_i's for numerical accuracy. The use of multiplier-accumulators may also help increase the precision of the result, since extra bits can be kept in these accumulators at modest cost. The design is semi-systolic because of the broadcast.

Types of systolic structure: design R1 Inputs and weights move in opposite directions; results stay. –Can use a tag bit. –No bus (a systolic output path is sufficient). –One-half of the cells are at work at any time. –Applications: pattern matching. Cell function: y = y + W_in·x_in, x_out = x_in, W_out = W_in [Figure: x1, x2, x3 and W1, W2 streaming in opposite directions; results y1, y2, y3 resident in the cells.] Because the results stay, more than one result can in general be stored in each processor, which complicates the design. This suits very long w and x streams.

Design R1 continued - Results stay; inputs and weights move in opposite directions. - Pure-systolic convolution arrays without global data communication. Design R1 has the advantage that it does not require a bus, or any other global network, for collecting output from cells. The basic idea of this design has been used to implement a pattern matching chip. Show in class and compare the pattern matching chip.

Types of systolic structure: design R2 Inputs and weights move in the same direction at different speeds; results stay. –The x_j's move twice as fast as the w_j's. –All cells work at all times. –Needs an additional register in each cell (to hold a w value). –Applications: pipeline multiplier. Cell function: y = y + W_in·x_in; W = W_in, W_out = W; x_out = x_in [Figure: weights W1…W5 and inputs x1, x2, x3 moving in the same direction; results y1, y2, y3 resident in the cells.]

Design R2 - Results stay; inputs and weights move in the same direction but at different speeds. - Pure-systolic convolution arrays without global data communication. A multiplier-accumulator can be used effectively, and so can the tag-bit method to signal the output of each cell. Compared with R1, all cells work all the time, at the cost of an additional register in each cell to hold a w value.

Types of systolic structure: design W1 Inputs and results move in opposite directions; weights stay. –One-half of the cells work at any time. –Constant response time. –Applications: polynomial division. Cell function: y_out = y_in + W·x_in, x_out = x_in [Figure: x1, x2, x3 moving right, results y moving left; weights W1, W2, W3 resident in the cells.]

Design W1 - Weights stay; inputs and results move in opposite directions. - Pure-systolic convolution arrays without global data communication. This design is fundamental in the sense that it can be naturally extended to perform recursive filtering. It suffers the same drawback as R1: only approximately 1/2 of the cells work at any given time, unless two independent computations are interleaved in the same array.

Overlapping the executions of multiply-and-add in design W1.

Types of systolic structure: design W2 Inputs and results move in the same direction at different speeds; weights stay. –All cells work (high throughput rather than fast response). Cell function: y_out = y_in + W·x_in; x = x_in, x_out = x [Figure: inputs x1…x7 and results y1, y2, y3 moving in the same direction at different speeds; weights W1, W2, W3 resident in the cells.]

Design W2 - Weights stay; inputs and results move in the same direction but at different speeds. - Pure-systolic convolution arrays without global data communication. This design loses one advantage of W1, the constant response time. It has been extended to implement 2-D convolution, where high throughput rather than fast response is of concern.

FIR Summary: comparison of sequential and systolic

Remarks on Linear Arrays The above designs are all possible systolic designs for the convolution problem (some are semi-systolic). Using a systolic control path, weights can be selected on-the-fly to implement interpolation or adaptive filtering. We need to understand precisely the strengths and drawbacks of each design so that an appropriate design can be selected for a given environment. For improving throughput, it may be worthwhile to implement the multiplier and adder separately, to allow overlapping of their execution (as the next page shows). When chip pin count is considered: pure-systolic designs require four I/O ports; semi-systolic designs require three I/O ports.

Conclusions on 1D and 1.5D Systolic Arrays Systolic arrays are more than processor arrays which execute systolic algorithms. –A systolic cell takes on one of the following forms: 1. A special-purpose cell with hardwired functions, 2. A vector-computer-like cell with instruction decoding and a processing element, 3. A systolic processor complete with a control unit and a processing unit — a smarter processor for SAT, Petrick's method, etc.

Large Systolic Arrays as General-Purpose Computers Originally, systolic architectures were motivated by high-performance, special-purpose computational systems that meet the constraints of VLSI. However, it is possible to design systolic systems which: –have high throughput, –yet are not constrained to a single VLSI chip.

Problems with systolic array design 1. Hard to design: hard to understand, and the low-level realization may be hard to realize. 2. Hard to explain: remote from the algorithm; the function can't readily be deduced from the structure. 3. Hard to verify.

Key architectural issues in designing special-purpose systems –Simple and regular design: a simple, regular design yields cost-effective special systems. –Concurrency and communication: design the algorithm to support high concurrency while employing only simple blocks. –Balancing computation with I/O: a special-purpose system should be matched to a variety of I/O bandwidths.

Two-Dimensional Systolic Arrays In 1978, the first systolic arrays were introduced as a feasible design for special-purpose devices that meet the VLSI constraints. These special-purpose devices were able to perform four types of matrix operations at high processing speeds: –matrix-vector multiplication, –matrix-matrix multiplication, –LU decomposition of a matrix, –solution of triangular linear systems.

General Systolic Organization

Example 2: Matrix-Matrix Multiplication All previously shown tricks can be applied.

Sources 1. Seth Copen Goldstein, CMU; A.R. Hurson. 2. David E. Culler, UC Berkeley. 3. Syeda Mohsina Afroze and other students of Advanced Logic Synthesis, ECE 572, 1999 and 2000.