CS294-6 Reconfigurable Computing Day 18 October 22, 1998 Control.

So Far...
Looked broadly at instruction effects
Looked at structural components of computation:
–interconnect
–compute
–retiming

Today
Control
–data-dependent operations
Different forms:
–local
–instruction selection
Architectural issues

Control
Control: the point where the data affects the instruction stream (operation selection)
Typical manifestations:
–data-dependent branching
  if (a != 0) OpA else OpB; bne
–data-dependent state transitions
  new => goto S0
  else => stay
–data-dependent operation selection (+/-, addp)

Control
Viewpoint: we can have an instruction stream sequence without control
–i.e., a static, data-independent progression through a sequence of instructions is control-free:
  C0 → C1 → C2 → C0 → C1 → C2 → C0 → …
–similarly, an FSM with no data inputs

Terminology (reminder)
Primitive Instruction (pinst)
–collection of bits which tell a bit-processing element what to do
–includes:
  select compute operation
  input sources in space (interconnect)
  input sources in time (retiming)
Configuration Context
–collection of all bits (pinsts) which describe the machine’s behavior on one cycle
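A minimal C sketch of this terminology, assuming a 4-LUT bit-processing element; the type names, field names, and sizes here are illustrative, not from the source:

```c
#include <stdint.h>

#define N_PES 64   /* number of bit-processing elements (illustrative) */

/* Primitive instruction (pinst): the bits telling one bit-processing
   element what to do on one cycle. */
typedef struct {
    uint16_t lut_func;  /* selected compute operation: 4-LUT truth table */
    uint8_t  src[4];    /* input sources in space: interconnect selects */
    uint8_t  delay[4];  /* input sources in time: retiming depth */
} pinst_t;

/* Configuration context: all pinsts together, describing the whole
   machine's behavior on one cycle. */
typedef pinst_t context_t[N_PES];
```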

Why?
Why do we need / want control?
–Is static interconnect sufficient?
–Static sequencing?
–Static datapath operations?

Back to “Any” Computation
The design must handle all potential inputs (computing scenarios)
–requires sufficient generality
However, the computation for any given input may be much smaller than the general case.

Two Control Options
Local control
–unify choices: build all options into the spatial compute structure and select the operation
Instruction selection
–provide a different instruction (instruction sequence) for each option
–selection occurs when choosing which instruction(s) to issue

Example: ASCII Hex → Binary
if (c >= 0x30 && c <= 0x39)
  res = c - 0x30        // 0x30 = ‘0’
else if (c >= 0x41 && c <= 0x46)
  res = c - 0x41 + 10   // 0x41 = ‘A’
else if (c >= 0x61 && c <= 0x66)
  res = c - 0x61 + 10   // 0x61 = ‘a’
else
  res = 0
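A runnable C version of the slide’s pseudocode; the function name and test values are mine:

```c
#include <stdio.h>

/* ASCII hex digit -> 4-bit value; non-hex characters give 0,
   matching the slide's final "else res = 0". */
int hex_to_bin(char c) {
    if (c >= 0x30 && c <= 0x39) return c - 0x30;       /* '0'..'9' */
    if (c >= 0x41 && c <= 0x46) return c - 0x41 + 10;  /* 'A'..'F' */
    if (c >= 0x61 && c <= 0x66) return c - 0x61 + 10;  /* 'a'..'f' */
    return 0;
}

int main(void) {
    printf("%d %d %d %d\n",
           hex_to_bin('7'), hex_to_bin('C'), hex_to_bin('c'),
           hex_to_bin('@'));   /* prints: 7 12 12 0 */
    return 0;
}
```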

Local Control
[Figure: unified local-control implementation of the hex example -- case tests (01x0?, 0011?, 1..6?, <=9?) feed LUT equations over P3..P0, C1, C0, and r2a to select the result R; input subscripts lost in transcription]

Local Control
LUTs used ≠ LUT evaluations produced
This example:
–each case only requires 4 4-LUTs to produce R
–takes 5 4-LUTs once unified
=> counting LUTs does not tell us the cycle-by-cycle LUT needs

Instruction Control
[Figure: the same case logic factored across contexts -- instruction control versus a static sequence; per-context truth-table detail lost in transcription]

Static/Control
Static sequence:
–4 4-LUTs
–depth 4
–4 contexts (maybe 3 if we shuffle the r2a evaluation in with the C1, C0 execute)
Control:
–6 4-LUTs
–depth 3
–4 contexts
Example is too simple to show big savings...

Local vs. Instruction
If we can decide early enough
–and can afford the schedule/reload
–instruction select => less computation
If the load is too expensive
–local control is faster
–maybe even less capacity (AT)

Slow Context Switch
Profitable only at coarse grain
–Xilinx: ms reconfiguration times
–HSRA: µs reconfiguration times
  still 1000s of cycles
E.g., video decoder:
–if (packet == FRAME)
   if (type == I-FRAME) => IF-context
   else if (type == B-FRAME) => BF-context
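A sketch of this coarse-grain dispatch in C; load_context() and the frame/context names are hypothetical stand-ins for a slow device reconfiguration call:

```c
#include <stdio.h>

enum frame_type { I_FRAME, B_FRAME };
enum context_id { IF_CONTEXT, BF_CONTEXT };

/* Stand-in for a slow reconfiguration (ms on Xilinx, us on HSRA);
   only worthwhile if the loaded context then runs for 1000s of cycles. */
static void load_context(enum context_id c) {
    printf("loading context %d\n", (int)c);
}

static void on_packet(int is_frame, enum frame_type type) {
    if (is_frame) {
        if (type == I_FRAME)      load_context(IF_CONTEXT);
        else if (type == B_FRAME) load_context(BF_CONTEXT);
    }
}

int main(void) { on_packet(1, B_FRAME); return 0; }
```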

Local vs. Instruction
For a multicontext device
–fast (single-cycle) context switch
–factor according to available contexts
For conventional devices
–factor only for gross differences
–and early binding time

Optimization
Basic components:
–T_load -- configuration load time
–T_select -- case compute time
–T_gen -- generalized compute time
–A_select -- case compute area
–A_gen -- generalized compute area
Minimize capacity consumed:
–AT_local = A_gen × T_gen
–AT_select = A_select × (T_select + T_load)
T_load → 0 if we can overlap it with the previous operation:
–know early enough
–background load
–have sufficient bandwidth to load
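The capacity comparison as a small C helper; the struct and function names are mine, and the parameters are whatever estimates the designer has:

```c
/* Capacity (area x time) comparison from this slide.  Take t_load as 0
   when the load can be overlapped with the previous operation. */
typedef struct {
    double a_gen, t_gen;        /* generalized (local-control) area, time */
    double a_select, t_select;  /* per-case area, time */
    double t_load;              /* configuration load time */
} at_costs;

/* Returns 1 when instruction selection consumes less capacity:
   A_select * (T_select + T_load) < A_gen * T_gen. */
int prefer_instruction_select(at_costs c) {
    return c.a_select * (c.t_select + c.t_load) < c.a_gen * c.t_gen;
}
```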

FSM Example
FSM -- the canonical “control” structure
–captures many of these properties
–can implement with deep multicontext: instruction selection
–can implement as multilevel logic: unify, use local control
Serves to build intuition

FSM Example (local control): 4 4-LUTs, 2 LUT delays

FSM Example (instruction): 3 4-LUTs, 1 LUT delay

Full Partitioning Experiment
Give each state its own context
Optimize the logic in each state separately
Tools:
–mustang, espresso, sis, Chortle
Use:
–one-hot encodings for single context (smallest/fastest)
–dense encodings for multicontext (assume context select needs dense)

Look at
Assume we stay in a context for a number of LUT delays to evaluate the logic/next state
Pick the delay from the worst-case state
Assume a single LUT-delay for context selection?
–a savings of 1 LUT-delay => comparable time
Count LUTs in the worst-case state

Full Partition (Area Target)

Full Partition (Delay Target)

Full Partitioning
Full partitioning comes out better
–~40% less area
Note: full partition may not be the optimal-area case
–e.g., in the intro example there is no reduction in area or time beyond the 2-context implementation; the 4-context (full partition) version is just more area (additional contexts)

Partitioning versus Contexts (Area) -- CSE benchmark

Partitioning versus Contexts (Delay)

Partitioning versus Contexts (Heuristic)
Start with dense mustang state encodings
Greedily pick the state bit which produces
–the least greatest (smallest worst-case) area split
–the least greatest delay split
Repeat until we have the desired number of contexts, as in the sketch below
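A sketch of the greedy loop, assuming hypothetical split_area()/split_delay() evaluators (real ones would re-run logic optimization on each half; the stubs here just return canned data):

```c
#include <stdio.h>

/* Hypothetical cost evaluators: cost of the worse (greatest) half if we
   split the FSM's contexts on state bit b. */
static double split_area(int b)  { return 10.0 + b; }
static double split_delay(int b) { return 3.0 + b % 2; }

/* Pick the state bit whose split has the least greatest cost. */
static int pick_split_bit(int n_state_bits, int optimize_area) {
    int best = 0;
    double best_cost = 1e300;
    for (int b = 0; b < n_state_bits; b++) {
        double c = optimize_area ? split_area(b) : split_delay(b);
        if (c < best_cost) { best_cost = c; best = b; }
    }
    return best;  /* split on this bit, then recurse on each half */
}

int main(void) {
    printf("first split bit: %d\n", pick_split_bit(5, 1));
    return 0;
}
```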

Partition to Fixed Number of Contexts
N.B. more realistic: a device has a fixed number of contexts.

Extend Comparison to Memory
Fully local => compute with LUTs
Fully partitioned => look up the logic (context) in memory, then compute the logic
How does this compare to fully memory?
–simply look up the result in a table? (sketch below)
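A minimal sketch of the fully-memory option: the whole FSM as a table indexed by {state, inputs}; the sizes here are illustrative:

```c
#include <stdint.h>

#define STATE_BITS 3   /* illustrative sizes; the table grows as        */
#define INPUT_BITS 2   /* 2^(STATE_BITS + INPUT_BITS), which is why the */
                       /* memory option only wins for small FSMs        */

typedef struct { uint8_t next_state; uint8_t outputs; } fsm_entry;
static fsm_entry table[1u << (STATE_BITS + INPUT_BITS)];

/* One FSM step: next state and outputs come straight from the table. */
uint8_t fsm_step(uint8_t *state, uint8_t inputs) {
    fsm_entry e = table[((unsigned)*state << INPUT_BITS) |
                        (inputs & ((1u << INPUT_BITS) - 1u))];
    *state = e.next_state;
    return e.outputs;
}
```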

Memory FSM Compare (small)

Memory FSM Compare (large)

Memory FSM Compare (notes)
The memory selected was “optimally” sized to the problem
–in practice, we don’t get to pick the memory allocation/organization for each FSM
–no interconnect charged
Memory operates in a single cycle
–but the cycle slows with the number of inputs
Memory is smaller for <11 state+input bits
Memory size is not affected by CAD quality (the FPGA/DPGA mapping is)

Control Granularity
What if we want to run multiple of these FSMs on the same component?
–local?
–instruction?

Consider
Two network data ports
–states: idle, first-datum, receiving, closing
–data arrival uncorrelated between ports

Local Control Multi-FSM
Does not rely on instructions
Each FSM is wired up independently
Easy to have multiple

Instruction Control
If the FSMs advance orthogonally
–(really independent control)
–context depth => product of states for full partition
–i.e., with a single controller (PC), must create the product FSM, which may lead to state explosion
–N FSMs with S states each => S^N product states
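A small illustration of the product-state blowup for the two-port example above (the state names follow the slide; the encoding is mine):

```c
/* Two independent 4-state port FSMs under one controller: a single PC
   must track the cross product of their states.  S = 4, N = 2 gives
   S^N = 16 product states; a third port would already make it 64. */
enum port_state { IDLE, FIRST_DATUM, RECEIVING, CLOSING };  /* S = 4 */

/* Encode a pair of port states as one product state, 0..15. */
static inline int product_state(enum port_state a, enum port_state b) {
    return (int)a * 4 + (int)b;
}
```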

Architectural Questions
How many pinsts per controller?
Fixed or configurable assignment of controllers to pinsts?
–…at what level of granularity?

Architectural Questions
Effects of:
–too many controllers?
–too few controllers?
–fixed controller assignment?
–configurable controller assignment?

Architectural Questions
Too many:
–wasted space on extra controllers
–synchronization?
Too few:
–product state space and/or underused logic
Fixed:
–underused logic when a region is too big
Configurable:
–costs interconnect, slower distribution

Control and FPGAs
Local/single control does not rely on a controller
Potential advantage of an FPGA:
–easy to break up capacity and deploy it to orthogonal tasks
How does a processor handle this? Efficiency?

Control and FPGAs
Data-dependent selection
–potentially fast with local control compared to a µP
–can examine many bits and perform a multi-way branch (output generation) in just a few LUT cycles
–a µP requires a sequence of operations (see the sketch below)
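For contrast, a µP-side sketch: a multi-way branch becomes a load plus indirect jump (or a chain of compares), i.e., a sequence of operations; the dispatch table and handlers here are hypothetical placeholders:

```c
#include <stdio.h>

typedef void (*op_fn)(void);
static void op_default(void) { puts("op"); }

/* 16-way dispatch on a 4-bit condition field (handlers are placeholders;
   unlisted entries default to NULL in C). */
static op_fn dispatch_table[16] = { op_default, op_default };

void multiway(unsigned cond_bits) {
    op_fn f = dispatch_table[cond_bits & 0xFu];  /* uP: load + indirect jump */
    if (f) f();
}

int main(void) { multiway(1); return 0; }
```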

Architecture Instr. Taxonomy

Summary (1)
Control -- where data affects the instructions (operation)
Two forms:
–local control: all ops resident => fast selection
–instruction selection: may allow us to reduce instantaneous work requirements; introduces issues
  depth, granularity, instruction load time

Summary (2)
Intuition => looked at the canonical FSM case
–a few contexts can reduce LUT requirements considerably (factor out dissimilar logic)
–similar logic is more efficient in local control
–overall, a moderate number of contexts (e.g., 8) exploits both properties better than the extremes:
  single context (all local control)
  full partition
  flat memory (except for the smallest FSMs)