Adaptive Routing in (Q)NoC
Vladimir Zdornov
Network-on-Chip Seminar
Department of Electrical Engineering
Technion – Israel Institute of Technology
Background
Motivation
Originally, adaptive routing was designed to operate in networks with:
- A given topology and link capacities
- Unpredictable traffic patterns
The goals of adaptive routing are:
- Efficient utilization of network resources
- Load balancing of traffic
- Exploiting alternative routing paths
Bottom line: improving performance metrics
Static Supremacy
Static routing has two major advantages:
- Simplicity (e.g., XY routing in a regular 2D mesh)
- In-order arrival guarantee
Moreover, for many SoCs the conditions below do not hold:
- A given topology and link capacities
- Unpredictable traffic patterns
As a result, performance can be boosted using:
- Module placement
- Smart capacity allocation
- "Dumb" capacity over-provisioning
Finding the Niche
Adaptive routing is worthwhile in chips with:
- A "practically finite" capacity budget
- Yet again, unpredictable traffic patterns
One of the candidates that satisfies the second condition is CMP (many-core?)
Designing Adaptive Routing
Every routing scheme must guarantee eventual packet delivery, thus it must avoid two pitfalls:
Livelock
- Description: a packet advances indefinitely without reaching its destination
- Solution: minimal routing (every hop brings the packet closer to its destination; a sketch follows)
Deadlock
- Description: packets cannot advance due to a circular dependency
- Solution: avoid circular dependencies
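As an illustration, here is a minimal sketch (in C, with a hypothetical port encoding) of the candidate-selection step in minimal adaptive routing for a 2D mesh: only output ports that reduce the remaining distance are offered, so a packet can never wander indefinitely.

```c
/* Hypothetical port encoding for a 2D mesh router. */
typedef enum { PORT_EAST, PORT_WEST, PORT_NORTH, PORT_SOUTH, PORT_LOCAL } port_t;

/* Collect the minimal (distance-reducing) output ports for a packet at
 * (cur_x, cur_y) heading to (dst_x, dst_y). At most two ports qualify;
 * an adaptive router may pick either of them, e.g. the less congested one. */
int minimal_ports(int cur_x, int cur_y, int dst_x, int dst_y, port_t out[2])
{
    int n = 0;
    if (dst_x > cur_x)      out[n++] = PORT_EAST;
    else if (dst_x < cur_x) out[n++] = PORT_WEST;
    if (dst_y > cur_y)      out[n++] = PORT_NORTH;
    else if (dst_y < cur_y) out[n++] = PORT_SOUTH;
    if (n == 0)             out[n++] = PORT_LOCAL;  /* already at destination */
    return n;  /* number of candidate ports */
}
```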
Partially Adaptive Routing
Minimal adaptive routing in a 2D mesh provides a packet with up to two alternatives at each hop
If a single clockwise turn and a single counter-clockwise turn are forbidden, deadlock is avoided
The more sophisticated Odd-Even routing forbids certain turns in certain columns (a sketch follows)
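For the Odd-Even case, the sketch below encodes one common formulation of the turn rules (an assumption on my part, not necessarily the exact variant discussed here): east-to-north and east-to-south turns are forbidden in even columns, north-to-west and south-to-west turns in odd columns.

```c
#include <stdbool.h>

/* Hypothetical direction encoding. */
typedef enum { DIR_EAST, DIR_WEST, DIR_NORTH, DIR_SOUTH } dir_t;

/* Returns true if the turn from input direction 'in' to output direction
 * 'out' is permitted by the Odd-Even rules at a node in column x. */
bool odd_even_turn_allowed(int x, dir_t in, dir_t out)
{
    if (x % 2 == 0) {
        /* Even column: no east-to-north and no east-to-south turns. */
        if (in == DIR_EAST && (out == DIR_NORTH || out == DIR_SOUTH))
            return false;
    } else {
        /* Odd column: no north-to-west and no south-to-west turns. */
        if ((in == DIR_NORTH || in == DIR_SOUTH) && out == DIR_WEST)
            return false;
    }
    return true;
}
```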
Fully Adaptive Routing
We can invest in hardware complexity to achieve increased routing flexibility
Two or more segregated, cycle-free sub-networks should do the job (see the sketch below)
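One way to realize such sub-networks (an assumed textbook construction, not necessarily the one intended here) is to duplicate the vertical channels into two virtual sub-networks: east-bound packets use one, west-bound packets the other; each sub-network permits only distance-reducing moves and is therefore cycle-free, while routing inside it stays fully adaptive.

```c
/* Assumed construction: sub-network 0 allows E/N/S moves only (used when the
 * destination is not to the west), sub-network 1 allows W/N/S moves only.
 * Neither sub-network can close a channel-dependency cycle, so deadlock is
 * avoided while minimal adaptivity is preserved inside each sub-network. */
int select_subnetwork(int cur_x, int dst_x)
{
    return (dst_x >= cur_x) ? 0 : 1;
}
```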
Packet Ordering
Out-of-Order Arrival
Packet-level adaptation breaks arrival order
However, outside the simulator, packets sometimes must arrive at a destination in order
Naïve adaptive routing requires a reorder buffer at every node (a sketch follows)
Design of the reorder buffer is a challenge:
- A small size is required to reduce the area/power overhead
- If the buffer is too small, it will suffer from frequent overflows
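The sketch below shows what such a reorder buffer might look like, assuming packets carry sequence numbers and a hypothetical deliver() callback hands in-order packets to the destination module; the overflow case is exactly what a too-small buffer hits frequently.

```c
#include <stdbool.h>
#include <stdint.h>

#define REORDER_SLOTS 8   /* assumed buffer depth; kept small to limit area/power */

/* Per-source reorder buffer: arrivals ahead of the expected sequence number
 * are parked in a slot, and delivery proceeds strictly in sequence.
 * The structure is assumed zero-initialized. */
typedef struct {
    uint32_t expected_seq;          /* next sequence number to deliver */
    bool     valid[REORDER_SLOTS];  /* slot occupancy                  */
    void    *pkt[REORDER_SLOTS];    /* parked out-of-order packets     */
} reorder_buf_t;

/* Returns false when the packet is too far ahead, i.e. the buffer overflows. */
bool reorder_push(reorder_buf_t *rb, uint32_t seq, void *packet,
                  void (*deliver)(void *))
{
    if (seq == rb->expected_seq) {
        deliver(packet);
        rb->expected_seq++;
        /* Drain any parked packets that are now in sequence. */
        for (;;) {
            unsigned slot = rb->expected_seq % REORDER_SLOTS;
            if (!rb->valid[slot]) break;
            deliver(rb->pkt[slot]);
            rb->valid[slot] = false;
            rb->expected_seq++;
        }
        return true;
    }
    if (seq - rb->expected_seq >= REORDER_SLOTS)
        return false;               /* overflow: no room for this packet */
    unsigned slot = seq % REORDER_SLOTS;
    rb->pkt[slot] = packet;
    rb->valid[slot] = true;
    return true;
}
```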
In-order Arrival Paradigm
"Sacrifice performance to guarantee arrival order"
"Exploit network statistics to reduce performance loss"
We propose three schemes:
- Stop and wait
- Window control
- Virtual circuits
Stop and Wait
Packets are transmitted one at a time
A packet is transmitted only when the previous one is acked
Suitable for bulk transfer and very long packets
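A minimal sketch of the source side, assuming hypothetical send_packet()/wait_for_ack() primitives in the network interface; with only one packet ever in flight, arrival order is preserved no matter how adaptively each packet is routed.

```c
#include <stdbool.h>

/* Hypothetical network-interface primitives (assumptions, not a given API). */
extern void send_packet(int dst, const void *payload, int len);
extern bool wait_for_ack(int dst);   /* blocks until an ack (or timeout) */

/* Stop-and-wait source: transmit each chunk only after the previous
 * one has been acknowledged; retransmit on a missing ack. */
void stop_and_wait_send(int dst, const void *chunks[], const int lens[], int n)
{
    for (int i = 0; i < n; i++) {
        do {
            send_packet(dst, chunks[i], lens[i]);
        } while (!wait_for_ack(dst));
    }
}
```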
Window Control
Sources use a small, constant-sized sliding window
The window is moved upon receipt of an ack
Destinations ack sequentially expected packets
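A sketch of the source side of window control, again with hypothetical NI primitives; up to WINDOW packets are outstanding, and the window slides on a cumulative ack from the destination (retransmission on timeout is omitted for brevity).

```c
#include <stdbool.h>
#include <stdint.h>

#define WINDOW 4   /* assumed small, constant window size */

/* Hypothetical primitives: send a packet with a sequence number, and poll
 * for a cumulative ack telling how many packets arrived in sequence. */
extern void send_packet(int dst, uint32_t seq);
extern bool poll_ack(int dst, uint32_t *acked_upto);

void window_send(int dst, uint32_t total_packets)
{
    uint32_t base = 0;   /* oldest unacked sequence number */
    uint32_t next = 0;   /* next sequence number to send   */

    while (base < total_packets) {
        /* Fill the window. */
        while (next < total_packets && next < base + WINDOW)
            send_packet(dst, next++);

        /* Slide the window when the cumulative ack advances. */
        uint32_t acked;
        if (poll_ack(dst, &acked) && acked > base)
            base = acked;
    }
}
```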
Virtual Circuits*
Environment:
- Routers supporting static and adaptive destination routing
- Sources storing several sets of routing directives, constituting virtual circuits, which can override destination routing
Operation:
- A probe packet is sent to find a good path
- The probe is adaptively routed and collects routing directives
- The probe's ack brings the routing information back to the source
*Ideas presented in this slide are borrowed from my ongoing thesis work
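A rough sketch of the data structure a probe might build, assuming (as an illustration only) that the routing directive at each hop is simply the output port the adaptive router chose; the completed list, returned by the ack, lets the source pin subsequent packets to one fixed path and thus keep them in order.

```c
#include <stdint.h>

#define MAX_HOPS 16   /* assumed upper bound on path length */

/* Hypothetical encoding of a virtual circuit: the ordered list of output
 * ports (routing directives) collected by a probe on its adaptive walk
 * from source to destination. */
typedef struct {
    uint8_t hops;                /* number of recorded hops            */
    uint8_t out_port[MAX_HOPS];  /* directive for hop i along the path */
} virtual_circuit_t;

/* What an adaptive router does when forwarding a probe: pick an output port
 * adaptively (e.g., the least congested minimal port) and append that choice
 * to the probe's directive list. The ack later carries the completed list
 * back to the source, which stamps it on data packets to override
 * destination routing. */
void probe_record_hop(virtual_circuit_t *vc, uint8_t chosen_port)
{
    if (vc->hops < MAX_HOPS)
        vc->out_port[vc->hops++] = chosen_port;
}
```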