Lecture 25: Interconnection Networks

Lecture 25: Interconnection Networks
Topics: flow control, router microarchitecture, deadlocks

Packets/Flits
- A message is broken into multiple packets (each packet has header information that allows the receiver to reconstruct the original message)
- A packet may itself be broken into flits – flits do not carry additional headers
- Two packets can follow different paths to the destination; flits are always ordered and follow the same path
- Such an architecture allows the use of a large packet size (low header overhead) and yet allows fine-grained resource allocation on a per-flit basis
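
Below is a minimal sketch of this message/packet/flit split. It is illustrative only: the sizes, field names, and the segment() helper are assumptions, not anything defined in the lecture; the point is that header information lives at the packet level while flits carry none.

    # Illustrative sketch: message -> packets -> flits.
    # Sizes and field names are assumptions, not from the lecture.
    PACKET_PAYLOAD_BYTES = 64      # payload carried by one packet
    FLIT_BYTES = 16                # flit size; flits carry no headers

    def segment(message: bytes, msg_id: int):
        """Break a message into packets, and each packet into flits."""
        packets = []
        for seq, start in enumerate(range(0, len(message), PACKET_PAYLOAD_BYTES)):
            payload = message[start:start + PACKET_PAYLOAD_BYTES]
            # Only the packet carries the header needed to reassemble
            # the original message (message id + sequence number).
            header = {"msg_id": msg_id, "seq": seq}
            flits = [payload[i:i + FLIT_BYTES]
                     for i in range(0, len(payload), FLIT_BYTES)]
            packets.append({"header": header, "flits": flits})
        return packets

    pkts = segment(b"x" * 200, msg_id=7)
    print(len(pkts), [len(p["flits"]) for p in pkts])   # 4 [4, 4, 4, 1]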

Flow Control
- The routing of a message requires allocation of various resources: the channel (or link), buffers, control state
- Bufferless: flits are dropped if there is contention for a link, NACKs are sent back, and the original sender has to re-transmit the packet
- Circuit switching: a request is first sent to reserve the channels; the request may be held at an intermediate router until the channel is available (hence, not truly bufferless); ACKs are sent back, and subsequent packets/flits are routed with little effort (good for bulk transfers)

Buffered Flow Control
- A buffer between two channels decouples the resource allocation for each channel
- Packet-buffer flow control: channels and buffers are allocated per packet
  - Store-and-forward
  - Cut-through
- Wormhole routing: same as cut-through, but buffers in each router are allocated on a per-flit basis, not per-packet
[Time-space diagrams: a 5-flit packet (H B B B T) crossing channels 1–3 over cycles 0–14, comparing store-and-forward with cut-through/wormhole]
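
The time-space diagrams boil down to a simple zero-load cycle count. Here is a rough sketch of that comparison under simplifying assumptions of my own (one flit crosses one channel per cycle, no contention, per-router pipeline delay ignored):

    # Rough zero-load latency sketch (assumes one flit per channel per cycle,
    # no contention, and ignores per-router pipeline delay).
    def store_and_forward(hops: int, flits: int) -> int:
        # Each router receives the entire packet before forwarding it.
        return hops * flits

    def cut_through_or_wormhole(hops: int, flits: int) -> int:
        # The head flit pipelines across the hops; body flits stream behind it.
        return hops + (flits - 1)

    # 5-flit packet (H B B B T) over 3 channels, as in the diagrams:
    print(store_and_forward(3, 5))         # 15 cycles
    print(cut_through_or_wormhole(3, 5))   # 7 cycles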

Virtual Channels
- Flits do not carry headers. Once a packet starts going over a channel, another packet cannot cut in (else, the receiving buffer will confuse the flits of the two packets). If the packet is stalled, other packets can’t use the channel.
- With virtual channels, the flit can be received into one of N buffers. This allows N packets to be in transit over a given physical channel. The packet must carry an ID to indicate its virtual channel.
[Figure: two routers connected by one physical channel, with multiple virtual-channel buffers at each end]

Example
- Traffic: A is going from Node-1 to Node-4; B is going from Node-0 to Node-5, and Node-5 is blocked (no free VCs/buffers)
- Wormhole: B’s stalled flits hold the channel buffers, so A cannot advance and the intermediate channels sit idle
- Virtual channel: A’s flits share the physical channels with B’s stalled packet and keep advancing to Node-4
- Traffic analogy: B is trying to make a left turn; A is trying to go straight; there is no left-only lane with wormhole, but there is one with VC
[Figure: the wormhole and virtual-channel scenarios on a six-node chain]

Virtual Channel Flow Control
- Incoming flits are placed in buffers
- For this flit to jump to the next router, it must acquire three resources:
  - A free virtual channel on its intended hop (we know that a virtual channel is free when the tail flit goes through)
  - Free buffer entries for that virtual channel (this is determined with credit or on/off management)
  - A free cycle on the physical channel (competition among the packets that share a physical channel)
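
A sketch of those three checks as a single predicate; the argument names are hypothetical, and in this sketch only a head flit needs to grab a fresh VC (body and tail flits reuse the VC their packet already holds):

    # Sketch of the three per-flit resource checks (hypothetical names).
    def can_advance(is_head: bool,
                    free_vc_available: bool,      # tail of previous packet has left
                    credits_for_vc: int,          # free buffer entries downstream
                    won_switch_this_cycle: bool) -> bool:
        if is_head and not free_vc_available:
            return False                  # resource 1: a free VC on the intended hop
        if credits_for_vc == 0:
            return False                  # resource 2: buffer space downstream
        return won_switch_this_cycle      # resource 3: a cycle on the physical channel

    print(can_advance(True, True, 2, True))    # True: head flit holds all three
    print(can_advance(False, False, 1, True))  # True: body flit reuses its packet's VC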

Buffer Management
- Credit-based: keep track of the number of free buffers in the downstream node; the downstream node sends back signals to increment the count when a buffer is freed; need enough buffers to hide the round-trip latency
- On/Off: the downstream node sends back a signal when its buffers are close to being full – reduces upstream signaling and counters, but can waste buffer space
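
A sketch of the credit-based scheme from the upstream router’s side, with illustrative names (the on/off variant would replace the counter with a single stop/go bit sent back by the downstream node):

    # Credit-based flow control, upstream side (illustrative sketch).
    # The counter starts at the downstream buffer depth; sending a flit
    # consumes a credit, and a returned credit (sent when the downstream
    # node frees a buffer) restores one.
    class CreditCounter:
        def __init__(self, downstream_buffers: int):
            self.credits = downstream_buffers

        def can_send(self) -> bool:
            return self.credits > 0

        def on_send(self):
            assert self.can_send()
            self.credits -= 1

        def on_credit_return(self):
            self.credits += 1

    cc = CreditCounter(downstream_buffers=4)
    for _ in range(4):
        cc.on_send()
    print(cc.can_send())     # False: must wait for a credit to come back
    cc.on_credit_return()
    print(cc.can_send())     # True
    # To keep the link busy, the buffer depth must cover the credit
    # round-trip latency, as the slide points out.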

Breaking Deadlock
- Consider the eight possible turns in a 2-d array (note that turns lead to cycles)
- By preventing just two turns, cycles can be eliminated
- Dimension-order routing disallows four turns
- Turn-model schemes (West-First, North-Last, Negative-First) disallow only two turns each, which helps avoid deadlock even in adaptive routing; other choices of eliminated turns can still allow deadlocks
[Figure: the eight turns, and the turns disallowed by dimension-order, West-First, North-Last, and Negative-First routing]
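
As a concrete instance of a turn-model restriction, here is a sketch of West-First route selection on a 2-D mesh. The coordinate convention and helper name are my own assumptions: any westward travel is completed first, after which the packet may choose adaptively among the remaining directions but may never turn back to the west.

    # Sketch of West-First routing on a 2-D mesh (illustrative; N = +y, E = +x).
    def west_first_options(cur, dst):
        (cx, cy), (dx, dy) = cur, dst
        if dx < cx:
            return ["W"]                      # finish all westward hops first
        options = []                          # afterwards, adaptive among E/N/S
        if dx > cx:
            options.append("E")
        if dy > cy:
            options.append("N")
        if dy < cy:
            options.append("S")
        return options or ["deliver"]

    print(west_first_options((3, 1), (0, 2)))  # ['W']
    print(west_first_options((1, 1), (3, 0)))  # ['E', 'S'] -- adaptive, never back West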

Deadlock-Free Proofs
- Number edges and show that all routes will traverse edges in increasing (or decreasing) order – therefore, it will be impossible to have cyclic dependencies
- Example: k-ary 2-d array with dimension-order routing: first route along the x-dimension, then along y
[Figure: a mesh with its links numbered so that every x-then-y route follows increasing link numbers]
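
A small sketch of the proof obligation itself: pick a link numbering for a k x k mesh and check that every dimension-order (X-then-Y) route sees strictly increasing link numbers. The numbering below is one illustrative choice, not necessarily the one drawn on the slide.

    K = 4   # k-ary 2-d mesh, illustrative size

    def link_number(src, dst):
        (x, y), (nx, ny) = src, dst
        if nx == x + 1: return x                    # east links
        if nx == x - 1: return (K - 1) - x          # west links
        if ny == y + 1: return K + y                # north links (above all X links)
        if ny == y - 1: return K + (K - 1) - y      # south links
        raise ValueError("not a mesh link")

    def xy_route(src, dst):
        """Hops of a dimension-order route: finish X first, then Y."""
        (x, y), (dx, dy) = src, dst
        hops = []
        while x != dx:
            nx = x + (1 if dx > x else -1)
            hops.append(((x, y), (nx, y)))
            x = nx
        while y != dy:
            ny = y + (1 if dy > y else -1)
            hops.append(((x, y), (x, ny)))
            y = ny
        return hops

    nodes = [(x, y) for x in range(K) for y in range(K)]
    for s in nodes:
        for d in nodes:
            nums = [link_number(a, b) for a, b in xy_route(s, d)]
            assert all(a < b for a, b in zip(nums, nums[1:])), (s, d, nums)
    print("every dimension-order route uses strictly increasing link numbers")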

Deadlock Avoidance with VCs
- VCs provide another way to number the links such that a route always uses ascending link numbers
- Alternatively, use West-First routing on the 1st plane and cross over to the 2nd plane in case you need to go West again (the 2nd plane uses North-Last, for example)
[Figure: two virtual-channel planes with distinct link numberings; crossing from one plane to the other keeps link numbers ascending]

Router Functions
- Crossbar, buffer, arbiter, VC state and allocation, buffer management, ALUs, control logic, routing
- Typical on-chip network power breakdown: ~30% links, ~30% buffers, ~30% crossbar

Router Pipeline
- Four typical stages:
  - RC (routing computation): the head flit indicates the VC that it belongs to, the VC state is updated, the headers are examined, and the next output channel is computed (note: this is done for all the head flits arriving on various input channels)
  - VA (virtual-channel allocation): the head flits compete for the available virtual channels on their computed output channels
  - SA (switch allocation): a flit competes for access to its output physical channel
  - ST (switch traversal): the flit is transmitted on the output channel
- A head flit goes through all four stages; the other flits do nothing in the first two stages (this is an in-order pipeline and flits cannot jump ahead); a tail flit also de-allocates the VC

Router Pipeline
- Four typical stages:
  - RC (routing computation): compute the output channel
  - VA (virtual-channel allocation): allocate a VC for the head flit
  - SA (switch allocation): compete for the output physical channel
  - ST (switch traversal): transfer data on the output physical channel
[Pipeline diagrams over cycles 1–7: the head flit occupies RC, VA, SA, ST in successive cycles; body and tail flits only perform SA and ST; when SA stalls, every subsequent flit of the packet is delayed]
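
A sketch that prints the stage occupancy this description implies for a 4-flit packet, assuming no contention and that one flit crosses the switch per cycle (the helper and formatting are my own, purely illustrative):

    # Illustrative baseline 4-stage pipeline schedule, no contention.
    def schedule(num_flits: int):
        names = ["Head"] + [f"Body{i}" for i in range(1, num_flits - 1)] + ["Tail"]
        rows = []
        for i in range(num_flits):
            stages = ["RC", "VA", "SA", "ST"] if i == 0 else ["SA", "ST"]
            start = 0 if i == 0 else 2 + i      # one flit crosses the switch per cycle
            rows.append(["--"] * start + stages)
        width = max(len(r) for r in rows)
        for name, row in zip(names, rows):
            print(f"{name:5s}", " ".join(row + ["--"] * (width - len(row))))

    schedule(4)
    # Head  RC VA SA ST -- -- --
    # Body1 -- -- -- SA ST -- --
    # Body2 -- -- -- -- SA ST --
    # Tail  -- -- -- -- -- SA ST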

Speculative Pipelines
- Perform VA, SA, and ST in parallel (can cause collisions and re-tries)
- Typically, VA is the critical path – can possibly perform SA and ST sequentially
- Perform VA and SA in parallel: note that SA only requires knowledge of the output physical channel, not the VC; if VA fails, the successfully allocated channel goes un-utilized
[Pipeline diagrams over cycles 1–7: with speculation, the head flit performs VA and SA together after RC, shortening the pipeline relative to the four-stage baseline]
- Router pipeline latency is a greater bottleneck when there is little contention; when there is little contention, speculation will likely work well!
- Single-stage pipeline?

Recent Intel Router
- Used for a 6x6 mesh
- 16 B, > 3 GHz
- Wormhole with VC flow control
Source: Partha Kundu, “On-Die Interconnects for Next-Generation CMPs”, talk at On-Chip Interconnection Networks Workshop, Dec 2006

Recent Intel Router
[Figure from the cited talk]
Source: Partha Kundu, “On-Die Interconnects for Next-Generation CMPs”, talk at On-Chip Interconnection Networks Workshop, Dec 2006

Current Trends
- Growing interest in eliminating the area/power overheads of router buffers; traffic levels are also relatively low, so virtual-channel buffered routed networks may be overkill
- Option 1: use a bus for short distances (16 cores) and use a hierarchy of buses to travel long distances
- Option 2: hot-potato or bufferless routing
