
1 Clockless Computing Lecture 3
Montek Singh Thu, Aug 30, 2007

2 Handshaking Example: Asynchronous Pipelines
- Pipelining basics
- Fine-grain pipelining
- Example approach: MOUSETRAP pipelines

3 Background: Pipelining
What is pipelining? Breaking up a complex operation on a stream of data into simpler sequential operations, with storage elements (latches/registers) between the stages.
- A "coarse-grain" pipeline (e.g. a simple processor: fetch, decode, execute)
- A "fine-grain" pipeline (e.g. a pipelined adder)
Performance impact:
+ Throughput: significantly increased (# data items processed/second)
- Latency: somewhat degraded (# seconds from input to output)
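The throughput/latency trade-off above can be seen with simple arithmetic. A minimal sketch in Python; the stage delays and latch overhead are made-up numbers, not figures from the lecture:

```python
# Hypothetical stage delays (ns) for a 3-stage coarse-grain pipeline;
# the numbers are illustrative only.
stage_delays = [2.0, 3.0, 2.5]   # e.g. fetch, decode, execute
latch_overhead = 0.2             # per-stage storage-element delay

# Throughput is set by the slowest stage (plus latch overhead);
# latency is the sum over all stages.
cycle = max(stage_delays) + latch_overhead
throughput = 1.0 / cycle                         # items per ns
latency = sum(d + latch_overhead for d in stage_delays)

print(f"cycle time: {cycle} ns, throughput: {throughput:.3f} items/ns")
print(f"latency: {latency} ns")
```

An unpipelined version would have latency 7.5 ns but also a 7.5 ns cycle, so pipelining here more than doubles throughput while adding only the latch overheads to latency.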

4 Focus of Asynchronous Community
A key focus: extremely fine-grain pipelines
- "Gate-level" pipelining = use the narrowest possible stages: each stage consists of only a single level of logic gates
- Some of the fastest existing digital pipelines to date
Application areas:
- General-purpose microprocessors: instruction pipelines (often many stages)
- Multimedia hardware (graphics accelerators, video DSPs, ...): naturally pipelined systems; throughput is critical; input is "bursty"
- Optical networking: serializing/deserializing FIFOs
- String matching? KMP-style string matching with variable skip lengths

5 MOUSETRAP: Ultra-High-Speed Transition-Signaling Asynchronous Pipelines
Singh and Nowick, Intl. Conf. on Computer Design (ICCD), September 2001, and IEEE Trans. on VLSI, June 2007

6 MOUSETRAP Pipelines
Simple asynchronous implementation style, which uses:
- Standard logic implementation: Boolean gates, transparent latches
- Simple control: 1 gate per pipeline stage
MOUSETRAP uses a "capture protocol." Latches...
- are normally transparent: before new data arrives
- become opaque: after data arrives ("capture" the data)
Control signaling: transition signaling = 2-phase
- Simple protocol: req/ack = only 2 events per handshake (not 4); no "return-to-zero"
- Each transition (up or down) signals a distinct operation
Our goal: very fast cycle time with simple inter-stage communication
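The difference between 2-phase transition signaling and a 4-phase return-to-zero handshake can be sketched by counting wire events per data item. A minimal illustration (the function names and item counts are my own, not from the lecture):

```python
# 2-phase (transition signaling): each handshake is one req toggle and one
# ack toggle; signal LEVELS carry no meaning, only transitions do.
def two_phase_events(n_items):
    req = ack = 0
    events = 0
    for _ in range(n_items):
        req ^= 1; events += 1   # sender toggles req (new data valid)
        ack ^= 1; events += 1   # receiver toggles ack (data captured)
    return events

# 4-phase (return-to-zero): req+, ack+, req-, ack- per item; the two
# "down" events exist only to reset the wires.
def four_phase_events(n_items):
    return 4 * n_items

print(two_phase_events(10), four_phase_events(10))  # -> 20 40
```

This is why the slide says "only 2 events per handshake (not 4)": transition signaling halves the control activity per data item.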

7 MOUSETRAP: A Basic FIFO
Stages communicate using transition signaling: 1 transition per data item!
[Figure: stages N-1, N, N+1, each with a data latch (enable En) and a latch controller; signals ackN-1, ackN, reqN, doneN, reqN+1; successive data items flow from "Data in" to "Data out".]
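The FIFO's behavior can be sketched with a zero-delay behavioral model. This is only a sketch: the 4-stage depth, the stalled right environment, and everything except the signal names (done, ack, req, En) from the slides are my assumptions. Each stage's latch is transparent iff XNOR(doneN, ackN) is true, with reqN+1 = doneN and ackN = doneN+1:

```python
# Behavioral sketch of a 4-stage MOUSETRAP FIFO (zero-delay model).
N = 4
done = [0] * N          # phase bit latched by each stage
data = [None] * N       # data word latched alongside the phase bit
env_ack = 0             # right environment never acknowledges (stalled)

def enable(i):
    ack = done[i + 1] if i + 1 < N else env_ack
    return done[i] == ack          # XNOR: transparent iff phases match

def push(req_phase, value):
    """Environment toggles req and presents data; ripple until stable."""
    changed = True
    while changed:
        changed = False
        for i in range(N):
            req = req_phase if i == 0 else done[i - 1]
            d = value if i == 0 else data[i - 1]
            if enable(i) and done[i] != req:
                done[i], data[i] = req, d   # transparent latch passes data
                changed = True

phase = 0
for v in ["A", "B", "C"]:
    phase ^= 1              # 2-phase: each new item toggles req
    push(phase, v)

print(data)  # -> ['C', 'C', 'B', 'A']
```

With the environment stalled, A, B and C end up safely captured in stages 3, 2 and 1 (each latch goes opaque once its successor has not yet acknowledged); stage 0 is still transparent and simply shows the last item, which has already moved on.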

8 MOUSETRAP: A Basic FIFO (contd.)
Latch controller (XNOR) acts as a "protocol converter": 2 distinct transitions (up or down) -> a pulsed latch enable
- Latch is re-enabled when the next stage is "done"
- Latch is disabled when the current stage is "done"
(2 transitions per latch cycle)

9 MOUSETRAP: FIFO Cycle Time
(1) Stage N computes; (2) stage N+1 computes, while a fast self-loop lets N disable its own latch; (3) N is re-enabled to compute once N+1 is done.
Cycle time = 2 t_latch + t_XNOR

10 Detailed Controller Operation
Stage N's latch controller takes "done" from N and "ack" from N+1, and drives the latch enable.
One pulse per data item flowing through:
- Down transition: caused by "done" of N
- Up transition: caused by "done" of N+1

11 MOUSETRAP: Pipeline With Logic
Simple extension to the FIFO: insert a logic block + matching delay in each stage.
- Logic blocks can use standard single-rail (non-hazard-free) logic
- "Bundled data" requirement: each "req" must arrive after the stage's data inputs are valid and stable

12 Complex Pipelining: Forks & Joins
Problem with linear pipelining: it handles only limited applications; real systems are more complex.
Non-linear pipelining has forks and joins:
- Forks: distribute data + control to multiple destinations
- Joins: merge data + control from multiple sources
Contribution: efficient circuit structures for forks and joins, an enabling technology for building complex async systems.

13 Forks and Joins: Implementation
req req2 Stage N C req1 ack req ack2 Stage N C ack1 Join: merge multiple requests Fork: merge multiple acknowledges
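The "C" in the figure is a Muller C-element: its output changes only when all inputs agree, and holds otherwise, which is exactly the merge 2-phase transition signaling needs. A minimal behavioral sketch (the trace values are my own example, not from the lecture):

```python
def c_element(a, b, prev_out):
    """Muller C-element: output follows the inputs when they agree,
    otherwise it holds its previous value."""
    return a if a == b else prev_out

# A join waits for BOTH input requests to toggle before its merged
# request toggles; trace one full up/down cycle.
out = 0
trace = []
for a, b in [(1, 0), (1, 1), (0, 1), (0, 0)]:
    out = c_element(a, b, out)
    trace.append(out)

print(trace)  # -> [0, 1, 1, 0]: output toggles only after both inputs do
```

The same element merges acknowledges at a fork, so a stage advances only once every destination has acknowledged.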

14 Performance, Timing and Optzn.
MOUSETRAP with logic:
- Stage latency = t_logic + t_latch
- Cycle time = t_logic + 2 t_latch + t_XNOR
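The slide's elided formulas follow from tracing the stage-to-stage loop: forward latency is logic plus latch, while the cycle adds the next stage's latch and the XNOR re-enable. A quick numeric sketch, with component delays that are entirely made up:

```python
# Illustrative component delays in ns (not figures from the lecture).
t_latch = 0.1    # transparent-latch D-to-Q delay
t_xnor  = 0.1    # latch-controller (XNOR) delay
t_logic = 0.5    # stage logic delay (matched delay assumed equal)

# Forward path through one stage: logic then latch.
latency_per_stage = t_logic + t_latch

# Loop: doneN -> matched delay -> latch N+1 -> ackN -> XNOR -> latch N.
cycle_time = t_logic + 2 * t_latch + t_xnor

print(f"stage latency = {latency_per_stage:.1f} ns")
print(f"cycle time    = {cycle_time:.1f} ns "
      f"(~{1 / cycle_time:.2f} GHz for these delays)")
```

Note the cycle time exceeds the stage latency only by t_latch + t_XNOR, which is why very fine-grain (small t_logic) stages can run so fast.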

15 Timing Analysis
Main timing constraint: avoid "data overrun" (a hold-time constraint). Data must be safely "captured" by stage N before new inputs arrive from stage N-1.
- Simple one-sided timing constraint: fast latch disable
- Stage N's "self-loop" must be faster than the entire path through the prior stage
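The one-sided constraint can be checked with simple delay arithmetic: stage N's self-disable loop must beat the shortest path by which new data can arrive through stage N-1 (its XNOR re-enable, its latch, and the minimum logic/matched delay). A sketch with invented numbers:

```python
# Illustrative delays in ns (assumptions, not lecture figures).
t_xnor      = 0.10   # latch-controller delay (self-loop: N disables itself)
t_latch     = 0.10   # stage N-1 latch delay
t_logic_min = 0.30   # minimum logic/matched delay into stage N
t_hold      = 0.02   # latch hold time

# Stage N must be opaque (self-loop plus hold) before new data arrives.
self_disable  = t_xnor + t_hold
new_data_path = t_xnor + t_latch + t_logic_min   # through stage N-1

assert self_disable < new_data_path, "data overrun: latch not opaque in time"
print(f"hold margin = {new_data_path - self_disable:.2f} ns")
```

Because the XNOR delay appears on both sides, the constraint reduces to the prior stage's latch-plus-logic delay covering the hold time, which is why it is an easy one-sided check rather than a two-sided setup/hold race.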

16 Experimental Results
Simulations of FIFOs: ~3 GHz (in a 0.13u IBM process)
Recent fabricated chip (a GCD): ~2 GHz simulated speed; chips tested to be fully functional
Will show demo later

17 In-Class Exercise
Modify MOUSETRAP to remove the "data overrun" timing constraint. How is the performance affected?

18 Homework #3 (due Tue Sep 11, 2007)
Read the MOUSETRAP paper [TVLSI Jun '07].
Modify MOUSETRAP to reduce power consumption:
- Make the latches normally opaque
- Latches become transparent only when new data arrives at their inputs
- This prevents glitchy/garbage data from propagating
How is the performance (throughput, latency) affected?

19 MOUSETRAP Advanced Topics

20 Special Case: Using "Clocked Logic"
Clocked CMOS (C2MOS): eliminate explicit latches; the latch is folded into the logic itself.
[Figure: a general C2MOS gate (pull-up and pull-down networks, "keeper", enable En) and a C2MOS AND-gate with inputs A and B.]

21 Gate-Level MOUSETRAP: with C2MOS
Use C2MOS to eliminate explicit latches.
New control optimization = "dual-rail XNOR": eliminates 2 inverters from the critical path.
[Figure: stages N-1, N, N+1 with C2MOS logic, a pair of bit latches, and dual-rail signals (En, En'), (done, done'), (ack, ack').]

22 Timing Optzn: Reducing Cycle Time
Goal: shorten the analytical cycle time in steady-state operation (steady state = no undue pipeline congestion).
Observation: the XNOR switches twice per data item, but only the 2nd (up) transition is critical for performance.
Solution: reduce the XNOR's output swing
- Degrade the "slew" at the start of the pulse
- This allows quick pulse completion: a faster rise time
Still safe when congested: the pulse starts on time and is maintained until the congestion clears.

23 Timing Optzn (contd.)
- Unoptimized XNOR output: N's latch is fully disabled by N's "done", then re-enabled by N+1's "done"
- Optimized XNOR output: the latch is only partly disabled, so it recovers quicker (no pulse-width requirement)

24 Comparison with Wave Pipelining
Two scenarios:
- Steady state: both MOUSETRAP and wave pipelines act like transparent "flow-through" combinational pipelines
- Congestion: if the right environment stalls, each MOUSETRAP stage safely captures data; if an internal stage is slow, the MOUSETRAP stages to its left safely capture data. Congestion is properly handled in MOUSETRAP.
Conclusion: MOUSETRAP has the potential of the speed of wave pipelining, with greater robustness and flexibility.

25 Timing Issues: Handling Wide Datapaths
Buffers are inserted to amplify the latch enable signals (En) across a wide datapath.
Reducing the impact of the buffers:
- The control uses the unbuffered signals, so the buffer delay is off the critical path
- The datapath is skewed w.r.t. the control
- Timing assumption: the buffer delays are roughly equal

