Interconnection Networks: Flow Control


Interconnection Networks: Flow Control
Prof. Natalie Enright Jerger
ECE 1749H, Fall 2014

Switching/Flow Control Overview
- Topology: determines connectivity of the network
- Routing: determines paths through the network
- Flow control: determines allocation of resources to messages as they traverse the network
  - Buffers and links
  - Significant impact on throughput and latency of the network

Flow Control
- Resources: buffer capacity, control state, channel bandwidth
- Control state: records the allocation of channels and buffers to packets and the current state of each packet traversing the node
- Channel bandwidth: advances flits from this node to the next
- Buffers: hold flits waiting for channel bandwidth

Packets
- Messages: composed of one or more packets
  - If message size <= maximum packet size, only one packet is created
- Packets: composed of one or more flits
- Flit: flow control digit
- Phit: physical digit
  - Subdivides a flit into chunks equal to the link width

Packets (2)
(Figure: a message (header + payload) is divided into packets (route, seq#); packets into head, body, and tail flits (flit type, VCID); flits into phits. Flit types: Head, Body, Tail, Head & Tail)
- Off-chip: channel width limited by pins, requires phits
- On-chip: abundant wiring means phit size == flit size

Packets (3)
(Figure: a cache-line packet: head flit carries RC, Type, VCID, Addr; body flits carry Bytes 0-15, 16-31, 32-47; tail flit carries Bytes 48-63. A coherence command fits in a single Head & Tail flit carrying RC, Type, VCID, Addr, Cmd)
- Packet contains destination/route information
- Flits may not, so all flits of a packet must take the same route
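To make the message/packet/flit breakdown concrete, here is a minimal Python sketch (the field layout, 16-byte flit size, and function names are illustrative assumptions, not the exact slide formats) that splits a 64-byte cache line into a head flit carrying the route, three body flits, and a tail flit, and a header-only coherence command into a single head & tail flit:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class FlitType(Enum):
    HEAD = "head"
    BODY = "body"
    TAIL = "tail"
    HEAD_TAIL = "head & tail"   # single-flit packet, e.g. a coherence command

@dataclass
class Flit:
    ftype: FlitType
    vcid: int                   # virtual channel identifier
    route: Optional[int]        # only the head flit carries the destination/route
    payload: bytes

def packetize(dest: int, vcid: int, data: bytes, flit_bytes: int = 16) -> List[Flit]:
    """Break one message (assumed to fit in a single packet) into flits."""
    chunks = [data[i:i + flit_bytes] for i in range(0, len(data), flit_bytes)]
    if not chunks:                                   # header-only packet
        return [Flit(FlitType.HEAD_TAIL, vcid, dest, b"")]
    flits = [Flit(FlitType.HEAD, vcid, dest, b"")]   # head carries route info
    flits += [Flit(FlitType.BODY, vcid, None, c) for c in chunks[:-1]]
    flits.append(Flit(FlitType.TAIL, vcid, None, chunks[-1]))
    return flits

# A 64-byte cache line with 16-byte flits -> 1 head, 3 body, 1 tail flit.
print([f.ftype.name for f in packetize(dest=8, vcid=0, data=bytes(64))])
```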

Switching
- Different flow control techniques based on granularity
  - Message-based: allocation made at message granularity (circuit switching)
  - Packet-based: allocation made to whole packets
  - Flit-based: allocation made on a flit-by-flit basis

Message-Based Flow Control
- Coarsest granularity
- Circuit switching
  - Pre-allocates resources across multiple hops (source to destination)
  - Resources = links; buffers are not necessary
  - Probe sent into the network to reserve resources

Circuit Switching
- Once the probe sets up the circuit, the message does not need to perform any routing or allocation at each network hop
- Good for transferring large amounts of data
  - Can amortize circuit setup cost by sending data with very low per-hop overheads
- No other message can use those resources until the transfer is complete
  - Throughput can suffer due to setup and hold time for circuits
  - Links are idle until setup is complete
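A rough back-of-the-envelope model (hop counts and cycle costs are assumptions for illustration, not taken from the slides) of why the setup cost is only worth paying for large transfers:

```python
def circuit_switched_cycles(hops: int, data_flits: int, hop_delay: int = 1) -> int:
    """Setup probe travels out, acknowledgement travels back, then data streams
    over the reserved circuit with no per-hop routing/allocation."""
    setup = 2 * hops * hop_delay                 # probe out + ack back
    transfer = hops * hop_delay + data_flits     # pipeline fill + serialization
    return setup + transfer

for flits in (4, 64, 1024):
    total = circuit_switched_cycles(hops=8, data_flits=flits)
    print(f"{flits:5d} data flits -> {total:5d} cycles "
          f"({100 * 16 / total:.0f}% spent on setup)")
# Setup dominates small transfers but is amortized away for large ones.
```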

Circuit Switching Example
(Figure: a configuration probe reserves the circuit, an acknowledgement returns, then data is sent over the reserved circuit)
- Significant latency overhead prior to data transfer
- Data transfer does not pay per-hop overhead for routing and allocation

Circuit Switching Example (2)
(Figure: the same circuit; a second message from node 1 to node 2 contends for it)
- When there is contention: significant wait time
- Message from 1 → 2 must wait

Time-Space Diagram: Circuit Switching
(Figure: setup (S08) and acknowledgement (A08) of the circuit from 0 to 8 occupy the early cycles, followed by data flits (D08) and tail (T08); the setup from 2 to 8 (S28) is blocked until the first circuit is released)

Packet-Based Flow Control
- Break messages into packets
- Interleave packets on links
  - Better utilization
- Requires per-node buffering to store in-flight packets
- Two types of packet-based techniques

Store and Forward
- Links and buffers are allocated to the entire packet
- Head flit waits at the router until the entire packet is received before being forwarded to the next hop
- Not suitable for on-chip
  - Requires buffering at each router to hold the entire packet
  - Packet cannot traverse the link until buffering is allocated to the entire packet
  - Incurs high latencies (pays serialization latency at each hop)

Store and Forward Example
(Figure: a 4-flit packet traversing 3 hops; total delay = 4 cycles per hop x 3 hops = 12 cycles)
- High per-hop latency: serialization delay paid at each hop
- Larger buffering required

Time-Space Diagram: Store and Forward
(Figure: the entire packet (H B B B T) is received at each router along the path before being forwarded to the next hop)

Packet-Based: Virtual Cut-Through
- Links and buffers allocated to entire packets
- Flits can proceed to the next hop before the tail flit has been received by the current router
  - But only if the next router has enough buffer space for the entire packet
- Reduces latency significantly compared to SAF
- But still requires large buffers
  - Unsuitable for on-chip

Virtual Cut-Through Example
(Figure: 4 flit-sized buffers are allocated at each downstream router before the head proceeds; total delay = 1 cycle per hop x 3 hops + serialization = 6 cycles)
- Lower per-hop latency
- Large buffering required
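Both examples follow from a simple latency model; this sketch (which ignores router pipeline depth and contention, so it is an approximation rather than a full router model) reproduces the 12-cycle and 6-cycle figures for a 4-flit packet over 3 hops:

```python
def saf_latency(hops: int, packet_flits: int) -> int:
    """Store-and-forward: the whole packet must be received at every router
    before it moves on, so the full serialization delay is paid at each hop."""
    return hops * packet_flits

def vct_latency(hops: int, packet_flits: int) -> int:
    """Virtual cut-through: the head cuts through each hop in one cycle and the
    remaining flits stream directly behind it (serialization paid only once)."""
    return hops + (packet_flits - 1)

print(saf_latency(hops=3, packet_flits=4))  # 12 cycles, as in the SAF example
print(vct_latency(hops=3, packet_flits=4))  # 6 cycles, as in the VCT example
```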

Time-Space Diagram: VCT
(Figure: the head flit cuts through each router and the body and tail flits pipeline directly behind it, so serialization is paid only once)

Virtual Cut-Through
(Figure: the packet cannot proceed because only 2 flit buffers are available downstream)
- Throughput suffers from inefficient buffer allocation

Time-Space Diagram: VCT (2)
(Figure: the packet stalls in the network when a downstream router has insufficient buffers to hold the whole packet)

Flit-Level Flow Control
- Help routers meet tight area/power constraints
- A flit can proceed to the next router when there is buffer space available for that flit
- Improves over SAF and VCT by allocating buffers on a flit-by-flit basis

Wormhole Flow Control
- Pros
  - More efficient buffer utilization (good for on-chip)
  - Low latency
- Cons
  - Poor link utilization: if the head flit becomes blocked, all links spanning the length of the packet are idle
    - Cannot be re-allocated to a different packet
  - Suffers from head-of-line (HOL) blocking
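The buffer-allocation difference between VCT and wormhole can be stated as two one-line checks; this is a hedged sketch with illustrative function names, not a full router model:

```python
def can_forward_vct(is_head_flit: bool, free_slots: int, packet_flits: int) -> bool:
    """Virtual cut-through: the head advances only if the downstream input
    buffer can hold the entire packet; later flits follow the head."""
    return free_slots >= packet_flits if is_head_flit else True

def can_forward_wormhole(free_slots: int) -> bool:
    """Wormhole: any flit advances as soon as one downstream flit slot is free."""
    return free_slots >= 1

# With only 2 free slots and a 4-flit packet, VCT stalls the head while
# wormhole lets flits trickle forward one buffer at a time.
print(can_forward_vct(is_head_flit=True, free_slots=2, packet_flits=4))  # False
print(can_forward_wormhole(free_slots=2))                                # True
```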

Wormhole Example
(Figure: 6 flit buffers per input port. Red holds a channel, which remains idle until red proceeds; another channel is idle, but the red packet is blocked behind blue; blue cannot proceed because the downstream buffer is full, blocked by other packets)

Time-Space Diagram: Wormhole
(Figure: flits hold buffers in place across several routers while the head waits out contention downstream)

Virtual Channels
- First proposed for deadlock avoidance
  - We'll come back to this
- Can be applied to any flow control
  - First proposed with wormhole

Virtual Channel Flow Control
- Virtual channels used to combat HOL blocking in wormhole
- Virtual channels: multiple flit queues per input port
  - Share the same physical link (channel)
- Link utilization improved
  - Flits on different VCs can pass a blocked packet

Virtual Channel Flow Control (2)
(Figure: packets A and B enter on inputs A (in) and B (in) and leave on outputs A (out) and B (out), sharing the router's physical links via separate VCs)

Virtual Channel Flow Control (3)
(Figure: packet A (AH A1 ... A5 AT) arrives on In1 and packet B (BH B1 ... B5 BT) on In2; their flits are interleaved on the shared output link (AH BH A1 B1 ... AT BT) and separate again on the A and B downstream links)

Virtual Channel Flow Control (3)
- Packets compete for the VC on a flit-by-flit basis
- In the example: on downstream links, flits of each packet are available every other cycle
- Upstream links throttle because of limited buffers
  - Does not mean links are idle
  - May be used by packets allocated to other VCs
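The interleaving in the example can be reproduced with a tiny arbitration sketch; round-robin selection between the two VCs is an assumption made here for illustration (real VC and switch allocators differ):

```python
from collections import deque

def share_link(vc_queues: list) -> list:
    """Round-robin flit-level arbitration of several VC queues onto one
    physical link; returns the sequence of flits seen on the link."""
    schedule, turn = [], 0
    while any(vc_queues):
        for i in range(len(vc_queues)):
            vc = (turn + i) % len(vc_queues)
            if vc_queues[vc]:
                schedule.append(vc_queues[vc].popleft())
                turn = vc + 1
                break
    return schedule

a = deque(["AH", "A1", "A2", "A3", "A4", "A5", "AT"])
b = deque(["BH", "B1", "B2", "B3", "B4", "B5", "BT"])
print(share_link([a, b]))
# ['AH', 'BH', 'A1', 'B1', ...] -- each packet gets the link every other cycle
```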

Virtual Channel Example
(Figure: 6 flit buffers per input port, 3 flit buffers per VC; blue cannot proceed because the downstream buffer is full, blocked by other packets)

Summary of Techniques

Technique            Links     Buffers            Comments
Circuit Switching    Messages  N/A (bufferless)   Setup & Ack
Store and Forward    Packet    Packet             Head flit waits for tail
Virtual Cut Through  Packet    Packet             Head can proceed
Wormhole             Packet    Flit               HOL blocking
Virtual Channel      Packet    Flit               Interleave flits of different packets

Deadlock
- Using flow control to guarantee deadlock freedom gives more flexible routing
  - Recall: routing restrictions are needed for deadlock freedom
- If the routing algorithm is not deadlock free, VCs can break the resource cycle
  - Each VC is time-multiplexed onto the physical link
  - Holding a VC implies holding the associated buffer queue, not tying up the physical link resource
  - Enforce an order on VCs

Deadlock: Enforce Order
(Figure: a ring of nodes A, B, C, D with a dateline; each channel has two VCs, e.g. B0/B1, C0/C1, D0/D1)
- All messages are sent on VC 0 until they cross the dateline
- After the dateline, they are assigned to VC 1
  - Cannot be allocated to VC 0 again
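A hedged sketch of the dateline rule on a unidirectional ring; the node numbering, dateline position, and function name are illustrative assumptions:

```python
def next_vc(current_node: int, next_node: int, current_vc: int,
            dateline: int = 0, ring_size: int = 4) -> int:
    """Return the VC to use on the next hop of a unidirectional ring.
    Crossing the dateline link forces VC 1, and a packet on VC 1 stays there."""
    crosses = current_node == (dateline - 1) % ring_size and next_node == dateline
    return 1 if crosses or current_vc == 1 else 0   # never return to VC 0

# A packet from node 2 to node 1 on a 4-node ring travels 2 -> 3 -> 0 -> 1:
hops, vc = [(2, 3), (3, 0), (0, 1)], 0
for src, dst in hops:
    vc = next_vc(src, dst, vc)
    print(f"hop {src}->{dst} uses VC {vc}")
# VC 0 before the dateline, VC 1 once it is crossed -- the cycle is broken.
```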

Deadlock: Escape VCs
- Enforcing order lowers VC utilization
  - Previous example: VC 1 underutilized
- Escape virtual channels
  - Provide an escape path for every packet in a potential cycle
  - Have 1 VC that uses a deadlock-free routing subfunction
    - Example: VC 0 uses DOR, other VCs use an arbitrary routing function
  - Access to VCs arbitrated fairly: a packet always has a chance of landing on the escape VC
  - Once in the escape VC, must remain there (routing restrictions)
- Assign different message classes to different VCs to prevent protocol-level deadlock
  - Prevent req-ack message cycles
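A minimal sketch of the escape-VC routing restriction in a 2D mesh, assuming VC 0 is the escape VC with dimension-order (X-then-Y) routing while the other VCs route minimally adaptively; the port names and function are illustrative:

```python
def allowed_output_ports(vc: int, dx: int, dy: int) -> list:
    """dx, dy: remaining hops to the destination (positive = East/North).
    VC 0 (escape VC) only permits dimension-order routing; other VCs may
    pick any productive direction."""
    productive = []
    if dx:
        productive.append("E" if dx > 0 else "W")
    if dy:
        productive.append("N" if dy > 0 else "S")
    if vc == 0 and dx:            # escape VC: finish X before starting Y
        return [productive[0]]
    return productive or ["EJECT"]

print(allowed_output_ports(vc=0, dx=2, dy=-1))  # ['E']      -- DOR only
print(allowed_output_ports(vc=1, dx=2, dy=-1))  # ['E', 'S'] -- adaptive choice
```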

Virtual Channel Reallocation
- When can a VC be reallocated to a new packet?
- Two options: conservative and aggressive
  - Conservative (atomic): only reallocate an empty VC
  - Aggressive (non-atomic): reallocate once the tail flit has been received
- Implications for deadlock

VCT + Non-Atomic Allocation
- Typically uses non-atomic VC allocation
- VC allocated only if there is enough free buffer space to hold the entire packet
  - Guarantees that the whole packet can be received by the VC
  - Will entirely vacate the upstream VC
  - New packet at the head of the VC can bid for resources

VC Reallocation: Wormhole + Aggressive
- Turn-model and deterministic routing
  - No cyclic channel dependence
  - Can use aggressive reallocation
  - High VC utilization but low adaptivity
(Figure: P1 can be forwarded immediately)

VC Reallocation: Wormhole + Adaptive Routing
- Cyclic channel dependence
- With aggressive reallocation
  - A packet can span 2 VCs with its head flit not at the head of one VC
  - Cannot reach the escape VC
  - Leads to deadlock
- Requires conservative reallocation
  - Can only reallocate an empty VC
  - Low VC utilization

Buffer Backpressure
- Need a mechanism to prevent buffer overflow
  - Avoid dropping packets
- Upstream nodes need to know buffer availability at downstream routers
- Significant impact on throughput achieved by flow control
- Two common mechanisms
  - Credits
  - On-off

Credit-Based Flow Control
- Upstream router stores credit counts for each downstream VC
- Upstream router
  - When a flit is forwarded: decrement the credit count
  - Count == 0: buffer full, stop sending
- Downstream router
  - When a flit is forwarded and a buffer is freed: send a credit to the upstream router
  - Upstream increments its credit count
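The counter logic above fits in a few lines; this sketch uses illustrative class and method names but follows the slide's rules: decrement on send, stall at zero, increment when a credit returns:

```python
class CreditedLink:
    """Upstream side of a credit-based link for one downstream VC."""
    def __init__(self, downstream_buffer_slots: int):
        self.credits = downstream_buffer_slots

    def try_send_flit(self) -> bool:
        if self.credits == 0:          # downstream buffer full: stop sending
            return False
        self.credits -= 1              # flit will consume one downstream slot
        return True

    def receive_credit(self) -> None:
        self.credits += 1              # downstream freed a buffer slot

link = CreditedLink(downstream_buffer_slots=2)
print(link.try_send_flit(), link.try_send_flit(), link.try_send_flit())
# True True False -- the third flit stalls until a credit comes back
link.receive_credit()
print(link.try_send_flit())            # True
```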

Credit Timeline
(Figure: a flit departs Node 1 at t1; Node 2 processes it and returns a credit; Node 1 processes the credit and the next flit can depart, defining the credit round-trip delay)
- Round-trip credit delay: time between when the buffer empties and when the next flit can be processed from that buffer entry
- With only a single-entry buffer, this would result in significant throughput degradation
- Important to size buffers to tolerate credit turnaround

On-Off Flow Control
- Credit: requires upstream signaling for every flit
- On-off: decreases upstream signaling
- Off signal
  - Sent when the number of free buffers falls below threshold Foff
- On signal
  - Sent when the number of free buffers rises above threshold Fon
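A sketch of the downstream router's side of on/off signaling; the class name and threshold values are illustrative, and as the timeline on the next slide notes, Foff and Fon must actually be chosen so that flits already in flight do not overflow the buffer and the receiver does not run dry:

```python
class OnOffReceiver:
    """Downstream buffer that signals 'off' at/below F_off free slots
    and 'on' again at/above F_on free slots."""
    def __init__(self, buffer_slots: int, f_off: int, f_on: int):
        self.free, self.f_off, self.f_on = buffer_slots, f_off, f_on
        self.sender_on = True

    def _signal(self):
        """Emit 'off'/'on' only when the state changes (less signaling)."""
        if self.sender_on and self.free <= self.f_off:
            self.sender_on = False
            return "off"
        if not self.sender_on and self.free >= self.f_on:
            self.sender_on = True
            return "on"
        return None

    def accept_flit(self):
        self.free -= 1
        return self._signal()

    def drain_flit(self):
        self.free += 1
        return self._signal()

rx = OnOffReceiver(buffer_slots=8, f_off=2, f_on=6)
events = [rx.accept_flit() for _ in range(6)] + [rx.drain_flit() for _ in range(4)]
print([e for e in events if e])   # ['off', 'on']
```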

On-Off Timeline
(Figure: Node 1 streams flits to Node 2; when the Foff threshold is reached, Node 2 sends Off, with Foff set so that flits arriving before t4 do not overflow the buffer; when the Fon threshold is reached, Node 2 sends On, with Fon set so that Node 2 does not run out of flits between t5 and t8)
- Less signaling but more buffering
- On-chip buffers are more expensive than wires

Buffer Sizing
- Prevent backpressure from limiting throughput
  - Number of flit buffers must be >= the turnaround time (in cycles)
- Assume:
  - 1-cycle propagation delay for data and credits
  - 1-cycle credit processing delay
  - 3-cycle router pipeline
- At least 6 flit buffers

Actual Buffer Usage & Turnaround Delay
(Figure: turnaround = credit propagation delay (1) + credit pipeline delay (1) + flit pipeline delay (3) + flit propagation delay (1). A flit arrives at Node 1 and uses the buffer; the flit leaves Node 1 and a credit is sent to Node 0; Node 0 receives and processes the credit and the freed buffer is reallocated to a new flit; the new flit leaves Node 0, arrives at Node 1, and reuses the buffer.)
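The 6-flit minimum is just the sum of the turnaround components above; a tiny calculation makes that explicit (the parameter names are illustrative):

```python
def min_buffers_per_vc(flit_prop: int, credit_prop: int,
                       flit_pipeline: int, credit_pipeline: int) -> int:
    """Buffers must cover the full credit round trip so a single VC can keep
    the link busy: flit propagation + flit pipeline + credit pipeline +
    credit propagation."""
    return flit_prop + flit_pipeline + credit_pipeline + credit_prop

# 1-cycle flit/credit propagation, 1-cycle credit pipeline, 3-cycle router pipeline
print(min_buffers_per_vc(flit_prop=1, credit_prop=1,
                         flit_pipeline=3, credit_pipeline=1))   # 6 flit buffers
```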

Flow Control Summary
- On-chip networks require techniques with lower buffering requirements
  - Wormhole or virtual channel flow control
- Avoid dropping packets in the on-chip environment
  - Requires a buffer backpressure mechanism
- Complexity of flow control impacts router microarchitecture (next)