1
CprE / ComS 583 Reconfigurable Computing
Prof. Joseph Zambreno
Department of Electrical and Computer Engineering, Iowa State University
Lecture #4 – FPGA Technology Mapping
2
Quick Points
Lectures are viewable for on-campus students via WebCT
Class list created
Less focus on interconnect theory; more on interconnects in actual devices
Read [AggLew94], [ChaWon96A], [Deh96A] for more details
3
Recap
Various FPGA programming technologies (anti-fuse, (E)EPROM, Flash, SRAM); SRAM is the most popular
4
LUTs and Digital Logic
k inputs give 2^k possible input values
A k-LUT corresponds to a 2^k x 1-bit memory in which the truth table is stored
2^(2^k) possible functions, of which O(2^(2^k) / k!) are unique up to input permutation
Example: F = A0·A1·A2 + Ā0·A1·Ā2 + Ā0·Ā1·Ā2
[Figure: LUT drawn as a 2^k x 1 memory indexed by the inputs A0, A1, A2, ..., one stored truth-table bit per row]
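To make the "stored truth table" view concrete, here is a small sketch (Python, added for this transcript and not part of the original slides; the address bit-ordering convention is an assumption) that programs a 3-LUT with the example function F above and then evaluates it with a single table read.

```python
# Minimal sketch: a k-LUT is just a 2^k x 1 memory holding a truth table.
# Here k = 3 and F = A0*A1*A2 + ~A0*A1*~A2 + ~A0*~A1*~A2 (the slide's example).

def f(a0, a1, a2):
    return (a0 & a1 & a2) | ((1 - a0) & a1 & (1 - a2)) | ((1 - a0) & (1 - a1) & (1 - a2))

K = 3
# "Program" the LUT: store F's truth table, one bit per input combination.
# Assumed convention: A0 is the least-significant address bit.
lut = [f((i >> 0) & 1, (i >> 1) & 1, (i >> 2) & 1) for i in range(2 ** K)]

def lut_eval(a0, a1, a2):
    # Evaluation is a single memory read at address (A2 A1 A0).
    return lut[(a2 << 2) | (a1 << 1) | a0]

# Sanity check: the LUT reproduces F for all 2^3 input values.
assert all(lut_eval(a0, a1, a2) == f(a0, a1, a2)
           for a0 in (0, 1) for a1 in (0, 1) for a2 in (0, 1))
print(lut)  # the 8 stored bits, i.e. the LUT's programmed configuration
```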
5
Outline
Recap
General Routing Architectures
FPGA Architectural Issues
Early Commercial FPGAs: Xilinx XC3000, Xilinx XC4000
Technology Mapping using LUTs
6
General Routing Architecture
A wire segment is a wire unbroken by programmable switches.
A track is a sequence of one or more wire segments in a line.
A routing channel is a group of parallel tracks.
A connection block provides connectivity from the inputs and outputs of a logic block to the wire segments in the channels.
A switch block provides connectivity between the horizontal and vertical wire segments on all four of its sides.
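For readers who like to see the hierarchy spelled out, the following sketch (hypothetical Python dataclasses invented for illustration, not any FPGA tool's actual data model) mirrors the definitions above: segments compose into tracks, tracks into channels, and connection/switch blocks tie channels to logic-block pins and to each other.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class WireSegment:
    length: int                  # a wire unbroken by programmable switches

@dataclass
class Track:
    segments: List[WireSegment]  # one or more segments in a line

@dataclass
class RoutingChannel:
    tracks: List[Track]          # a group of parallel tracks

@dataclass
class ConnectionBlock:
    # which tracks of the adjacent channel each logic-block pin may reach
    pin_to_tracks: Dict[str, List[int]] = field(default_factory=dict)

@dataclass
class SwitchBlock:
    # programmable connections between the horizontal and vertical tracks
    # meeting on its four sides
    fs: int = 3                  # connections offered per incoming wire
```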
7
Switch Boxes
Fs – the number of connections offered per incoming wire
A universal switch box can connect any set of inputs to their target output channels simultaneously; buildable with Fs = 3
The Xilinx XC4000 switch box has Fs = 3 but is not universal
Read [ChaWon96A] for more details
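As a concrete reading of Fs, the sketch below (illustrative only) models a simple "disjoint" switch-box pattern in which track i on each side connects only to track i on the other three sides, so every incoming wire is offered exactly Fs = 3 connections. This is just one Fs = 3 pattern; the universal switch box of [ChaWon96A] and the XC4000 box use different connection patterns at the same Fs.

```python
# Sketch (assumed illustrative model, not a specific commercial switch box):
# a "disjoint" switch box on W tracks per side, where track i on any side can
# connect only to track i on each of the other three sides, giving Fs = 3.
W = 4
SIDES = ("N", "E", "S", "W")

def connections(side, track):
    """Programmable connections offered to wire `track` entering on `side`."""
    return [(other, track) for other in SIDES if other != side]

# Every incoming wire is offered exactly Fs = 3 connections.
assert all(len(connections(s, t)) == 3 for s in SIDES for t in range(W))
print(connections("N", 2))   # [('E', 2), ('S', 2), ('W', 2)]
```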
8
Architectural Issues [AhmRos04A]
What values of N (BLEs per cluster), I (inputs per cluster), and K (inputs per LUT) minimize the following parameters?
Area
Delay
Area-delay product
Assumptions: all routing wires are of length 4; fully populated input multiplexers (IMUX); wiring is half pass-transistor, half tri-state
9
Number of Inputs per Cluster
Lots of opportunities for input sharing in large clusters [BetRos97A]
Reducing the number of inputs reduces the size of the device and makes it faster
Most FPGA devices (e.g., Xilinx) have 4 BLEs per cluster, with more inputs than are actually needed
10
Logic Cluster Size
Small clusters are more area-efficient (this accounting includes the area needed for routing)
The smallest clusters (e.g., one BLE per cluster) are not "CAD friendly"
Most commercial devices have 4-8 BLEs per cluster
11
Effect of N and K on Area
A cluster size of N = 6-8 is good, with K = 4-5
12
Effect of N and K on Performance
Inconclusive: large K and N > 3 look good
13
Effect of N and K on Area-Delay
K = 4-6 and N = 4-10 look reasonable
14
Putting it All Together
Area:
  LUT count decreases with k (more slowly than exponentially)
  LUT size increases with k (exponentially in logic area, roughly linearly in interconnect area)
Delay:
  LUT depth decreases with k (logarithmically)
  LUT delay increases with k (linearly)
Examples:
  Xilinx XC3000 family: Fs = 3, I = 5, N = 2
  Xilinx XC4000 family: I = 9, N ≈ 2.5
15
XC3000 Logic Block
A 5-LUT, or two 4-LUTs
16
XC4000 Logic Block
17
XC4000 Routing Structure
18
XC4000 Routing Structure (cont.)
19
LUT Computational Limits
A k-LUT can implement 2^(2^k) functions
Given n such k-LUTs, can implement (2^(2^k))^n
Since 4-LUTs are efficient, want to find n such that (2^(2^4))^n >= 2^(2^M)
Example – implementing a 7-LUT with 4-LUTs
[Figure: a 7-LUT built from a tree of 4-LUTs; inputs A0–A3 feed the first level, and A4, A5, A6 select among the results]
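The sketch below (illustrative Python, not from the slides) carries out the 7-LUT example: Shannon-expand on A4, A5, A6 into eight cofactors of A0–A3, store each cofactor in its own 4-LUT, and select among them with a tree of 2:1 multiplexers, each of which (3 inputs) also fits in a 4-LUT. The total comes to 8 + 7 = 15 4-LUTs.

```python
import random

# Illustrative construction of a 7-LUT from 4-LUTs via Shannon expansion.
random.seed(0)
target = [random.randint(0, 1) for _ in range(2 ** 7)]   # arbitrary 7-input function
# Assumed convention: A0 is the least-significant index bit.

def f7(bits):                      # reference: direct 128-entry lookup
    return target[sum(b << i for i, b in enumerate(bits))]

# Level 1: eight 4-LUTs, one per assignment of (A4, A5, A6),
# each storing the cofactor of the target function over A0..A3.
cofactor_luts = [
    [f7([(a >> i) & 1 for i in range(4)] + [(c >> j) & 1 for j in range(3)])
     for a in range(16)]
    for c in range(8)
]

def mux2(sel, d0, d1):             # a 2:1 mux has 3 inputs, so it fits in one 4-LUT
    return d1 if sel else d0

def f7_from_4luts(bits):
    a03 = sum(bits[i] << i for i in range(4))
    vals = [lut[a03] for lut in cofactor_luts]                 # 8 leaf 4-LUTs
    # 8:1 mux tree on (A4, A5, A6): 4 + 2 + 1 = 7 more 4-LUTs.
    v = [mux2(bits[4], vals[2 * i], vals[2 * i + 1]) for i in range(4)]
    v = [mux2(bits[5], v[0], v[1]), mux2(bits[5], v[2], v[3])]
    return mux2(bits[6], v[0], v[1])

assert all(f7_from_4luts([(x >> i) & 1 for i in range(7)]) ==
           f7([(x >> i) & 1 for i in range(7)]) for x in range(2 ** 7))
print("total 4-LUTs:", 8 + 4 + 2 + 1)   # 15
```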
20
LUT Computational Limits (cont.)
How much computation can be performed in a table lookup?
Upper bound (from the previous construction): n <= 2^(M-3)
Lower bound – need n 4-LUTs to cover an M-LUT:
(2^(2^4))^n >= 2^(2^M)
n · log(2^(2^4)) >= log(2^(2^M))
n · 2^4 · log(2) >= 2^M · log(2)
n · 2^4 >= 2^M
n >= 2^(M-4)
Combining with the upper bound: 2^(M-4) <= n <= 2^(M-3)
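A quick arithmetic check (again illustrative, assuming the Shannon-expansion construction from the previous slide): 2^(M-4) cofactor 4-LUTs plus a (2^(M-4) - 1)-LUT mux tree always lands inside the bounds just derived.

```python
# Verify 2^(M-4) <= n <= 2^(M-3) for the Shannon-expansion construction.
for M in range(5, 12):
    leaf_luts = 2 ** (M - 4)        # cofactor LUTs over the first 4 inputs
    mux_luts = leaf_luts - 1        # binary tree of 2:1 muxes, one 4-LUT each
    n = leaf_luts + mux_luts        # = 2^(M-3) - 1
    assert 2 ** (M - 4) <= n <= 2 ** (M - 3)
    print(f"M={M}: n={n}, bounds [{2 ** (M - 4)}, {2 ** (M - 3)}]")
```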
21
LUTs Versus Memories
Can also implement (2^(2^k))^w functions as a single large memory with k inputs and w outputs
Large-memory advantage – no interconnect needed, and only one input decoder required
Consider a 32K x 8-bit memory (170M λ², 21 ns latency):
  w = 8, k = 16 (i.e., two 8-bit inputs addressing 2^16 locations)
  Can implement an 8-bit addition or subtraction
Compare the Xilinx XC3042, built from LUTs (180M λ², 13 ns CLB delay):
  15-bit parity calculation: 5 4-LUTs (<2% of the XC3042, 3.125M λ²) versus the entire SRAM (170M λ²)
  7-bit addition: 14 4-LUTs (<5% of the XC3042, 8.75M λ²)
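To make the parity data point concrete, the sketch below (illustrative, not from the slides) computes 15-bit parity with a two-level tree of 4-LUTs: four first-level LUTs each XOR up to four input bits, and one second-level LUT combines the partial parities, matching the 5-LUT figure quoted above.

```python
from functools import reduce

# 15-bit parity in a two-level tree of 4-LUTs (5 LUTs total).
def xor_lut(bits):                      # one 4-LUT programmed as XOR (<= 4 inputs)
    return reduce(lambda a, b: a ^ b, bits, 0)

def parity15(x):
    bits = [(x >> i) & 1 for i in range(15)]
    level1 = [xor_lut(bits[0:4]), xor_lut(bits[4:8]),
              xor_lut(bits[8:12]), xor_lut(bits[12:15])]   # 4 first-level LUTs
    return xor_lut(level1)                                  # 1 combining LUT

# Exhaustive check against a reference parity computation.
assert all(parity15(x) == bin(x).count("1") % 2 for x in range(2 ** 15))
```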
22
LUT Technology Mapping
Task: map a netlist to LUTs, minimizing area and/or delay
Similar to technology mapping for traditional designs
A library-based approach is not feasible – O(2^(2^k) / k!) elements in the library
In general the problem is NP-hard
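The library-size objection is easy to see numerically; the snippet below (illustrative) evaluates the 2^(2^k) / k! bound for small k, which already gives thousands of distinct cells at k = 4 and tens of millions at k = 5.

```python
from math import factorial

# Rough size of a hypothetical LUT cell library: 2^(2^k) functions,
# divided by k! to discount input permutations (the bound quoted above).
for k in range(2, 7):
    print(k, 2 ** (2 ** k) // factorial(k))
```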
23
Area vs. Delay Mapping
Delay and area optimization are somewhat orthogonal; we often obtain one at the expense of the other. The slide shows two possible mappings with K = 4: the mapping on the left achieves a depth of 2 LUT delays from inputs to output, while the mapping on the right requires 3 LUT delays through the network. However, the mapping on the right uses only 3 LUTs, whereas the mapping on the left uses 4.
24
Decomposition
As noted earlier, the decomposition of the network into gates can also affect the mapping. The topmost network on the slide is mapped naively into 4 LUTs (again with K = 4). The decompositions at the bottom show how different (smaller) mappings can be found depending on how the OR gate is decomposed; the decomposition on the right does not allow a mapping into 2 LUTs.
25
Why Replicate?
Node replication in the network can also simplify the mapping: the network on the left cannot be mapped into fewer than 3 4-input LUTs, whereas the network on the right fits in 2 LUTs.
26
Reconvergence
We must also take advantage of reconvergence in the network: in the figure, the mapping on the left does not look far enough back (towards the inputs) to find the superior mapping indicated on the right.
27
Dynamic Programming
[Figure: example network annotated with per-node delay labels (left) and area labels (right)]
Dynamic programming is used in both Chortle and FlowMap. The target network is traversed from inputs to outputs (topological order); for any node, the optimal (local) solution can be found by combining the solutions at nodes that appear above the node being considered (i.e., closer to the inputs). Since those nodes have already been evaluated, the task is greatly simplified.
In the example here, the mapping goal is to minimize delay. We iterate through the nodes in order from the inputs. At each node, we consider a LUT that implements that node; the delay of such an implementation is one plus the maximum delay over all nodes that feed the LUT under consideration. The minimum delay for the node is then the minimum delay over all LUT implementations of that node. In the diagram on the left, each loop represents the minimum-depth LUT that implements each node, with the delay of the LUT written inside the loop. The figure on the right shows the corresponding mapping for area; the value of a node in this case is the sum of the values of the nodes that fan in to the LUT, plus 1.
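The following sketch (a simplified illustration in Python, not the actual Chortle or FlowMap algorithm; the netlist is hypothetical) applies the labeling rules described above to a small DAG: primary inputs get label 0, each node's delay label is 1 plus the maximum label of its fanins, and its area label is 1 plus the sum of the fanin labels.

```python
# Simplified dynamic-programming labels on a gate-level DAG (illustrative only;
# real mappers like Chortle/FlowMap also choose which fanin cone each LUT covers).
from graphlib import TopologicalSorter   # Python 3.9+

# Hypothetical netlist: node -> list of fanin nodes ([] marks a primary input).
netlist = {
    "a": [], "b": [], "c": [], "d": [],
    "g1": ["a", "b"],
    "g2": ["c", "d"],
    "g3": ["g1", "g2"],
    "out": ["g3", "d"],
}

order = TopologicalSorter(netlist).static_order()   # inputs before outputs
delay, area = {}, {}
for node in order:
    fanins = netlist[node]
    if not fanins:                       # primary input
        delay[node] = area[node] = 0
    else:                                # implement this node as one LUT
        delay[node] = 1 + max(delay[f] for f in fanins)
        area[node] = 1 + sum(area[f] for f in fanins)

print(delay)   # g1, g2 -> 1; g3 -> 2; out -> 3
print(area)    # g1, g2 -> 1; g3 -> 3; out -> 4
```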
28
Summary
FPGA design issues include the number of logic blocks per cluster, the number of inputs per logic block, the routing architecture, and the k-LUT size
Can build an M-LUT with n 4-LUTs, where 2^(M-4) <= n <= 2^(M-3)
Large LUTs are generally inefficient
Technology mapping is simplified by the properties of 4-LUTs
Techniques – decomposition, replication, reconvergence, dynamic programming
Area- or delay-optimal mapping is still NP-hard