Lecture 23: Interconnection Networks


Lecture 23: Interconnection Networks
Topics: router microarchitecture, topologies

Router Functions
- Crossbar, buffers, arbiters, VC state and allocation, buffer management, ALUs, control logic, routing logic
- The on-chip network can contribute 10-35% of total chip power; network delays can add tens of cycles to cache and memory accesses
- Typical on-chip network power breakdown: ~30% links, ~30% buffers, ~30% crossbar

Router Pipeline
Four typical stages:
- RC (routing computation): the head flit indicates the VC it belongs to, the VC state is updated, the header is examined, and the next output channel is computed (note: this is done for all head flits arriving on the various input channels)
- VA (virtual-channel allocation): the head flits compete for the available virtual channels on their computed output channels
- SA (switch allocation): a flit competes for access to its output physical channel
- ST (switch traversal): the flit is transmitted on the output channel
A head flit goes through all four stages; the other flits do nothing in the first two stages (this is an in-order pipeline and flits cannot jump ahead); a tail flit also de-allocates the VC

Router Pipeline
Four typical stages:
- RC (routing computation): compute the output channel
- VA (virtual-channel allocation): allocate a VC for the head flit
- SA (switch allocation): compete for the output physical channel
- ST (switch traversal): transfer data on the output physical channel

Without a stall:

  Cycle        1    2    3    4    5    6    7
  Head flit    RC   VA   SA   ST
  Body flit 1                 SA   ST
  Body flit 2                      SA   ST
  Tail flit                             SA   ST

With an SA stall (the head flit loses switch arbitration in cycle 3 and retries):

  Cycle        1    2    3    4    5    6    7
  Head flit    RC   VA   SA   SA   ST
  Body flit 1                 --   SA   ST
  Body flit 2                      --   SA   ST
  Tail flit                             --   SA   ST
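As a side note, the timing in the tables above can be reproduced cycle by cycle with a few lines of code. The sketch below is a minimal model of the four-stage pipeline, not taken from the slides; the function name and the stall interface are illustrative:

    def pipeline_schedule(num_flits, head_sa_retries=0):
        """Cycle-by-cycle schedule of one packet through a 4-stage router.
        The head flit does RC, VA, SA, ST; later flits inherit the route and
        VC, so they only do SA and ST, one cycle behind the previous flit.
        head_sa_retries: extra cycles the head flit spends losing SA."""
        schedule = {}
        sa = 3 + head_sa_retries              # head flit's winning SA cycle
        head = {1: "RC", 2: "VA"}
        for c in range(3, sa + 1):
            head[c] = "SA"                    # includes failed SA attempts
        head[sa + 1] = "ST"
        schedule["head"] = head
        for i in range(1, num_flits):         # body and tail flits
            sa += 1
            schedule[f"flit {i}"] = {sa: "SA", sa + 1: "ST"}
        return schedule

    # No stall: the tail flit leaves in cycle 7, matching the first table.
    for name, stages in pipeline_schedule(4).items():
        print(f"{name:8s}", stages)
    # One SA stall for the head flit shifts everything by a cycle (second table).
    print(pipeline_schedule(4, head_sa_retries=1)["head"])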

Speculative Pipelines
- Perform VA, SA, and ST in parallel (can cause collisions and re-tries)
- Typically, VA is the critical path; one can possibly perform SA and ST sequentially
- Perform VA and SA in parallel; note that SA only requires knowledge of the output physical channel, not the VC
- If VA fails, the successfully allocated switch slot goes unused

Pipeline with VA and SA performed in parallel:

  Cycle        1    2      3    4    5    6
  Head flit    RC   VA+SA  ST
  Body flit 1               SA   ST
  Body flit 2                    SA   ST
  Tail flit                           SA   ST

- Router pipeline latency is a greater bottleneck when there is little contention
- When there is little contention, speculation will likely work well!
- Single-stage pipeline?
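Under a no-contention assumption, the benefit of each speculative variant is easy to quantify: the per-router packet latency is the head flit's pipeline depth plus one cycle per remaining flit. A small illustrative calculation (the function and the flit count are not from the slides):

    def packet_latency(num_flits, head_pipeline_depth):
        """Zero-load latency (cycles) of one packet through one router,
        assuming speculation always succeeds: the head flit occupies
        head_pipeline_depth stages, then one flit exits per cycle."""
        return head_pipeline_depth + (num_flits - 1)

    for depth, name in [(4, "baseline RC|VA|SA|ST"),
                        (3, "speculative RC|VA+SA|ST"),
                        (2, "aggressive RC|VA+SA+ST")]:
        print(f"{name:24s}: {packet_latency(4, depth)} cycles")

This also hints at why the single-stage question is tempting: each stage removed saves a cycle per hop, and at low load the speculation rarely has to retry.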

Current Trends
- Growing interest in eliminating the area/power overheads of router buffers; traffic levels are also relatively low, so virtual-channel buffered routed networks may be overkill
- Option 1: use a bus for short distances (~16 cores) and a hierarchy of buses to travel long distances
- Option 2: hot-potato or bufferless routing

Centralized Crossbar Switch
[Figure: an 8x8 centralized crossbar connecting processors P0-P7]

Crossbar Properties
- Assuming each node has one input and one output, a crossbar can provide maximum bandwidth: N messages can be sent concurrently as long as there are N unique sources and N unique destinations
- Maximum overhead: WN^2 internal switches, where W is the data width and N is the number of nodes
- To reduce overhead, use smaller switches as building blocks, trading overhead for lower effective bandwidth
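A quick worked example of the WN^2 overhead, with illustrative values for W and N, and one common way of counting the crosspoints of the smaller-switch alternative (the omega network of the next slide, built from 2x2 switches with 4 crosspoints per bit):

    import math

    W, N = 32, 64            # illustrative link width (bits) and node count
    print("full crossbar crosspoints:", W * N ** 2)            # 131072

    # An omega network needs (N/2) * log2(N) 2x2 switches instead:
    switches = (N // 2) * int(math.log2(N))
    print("omega 2x2 switches:", switches)                     # 192
    print("omega crosspoints: ", switches * 4 * W)             # 24576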

Switch with Omega Network
[Figure: an omega network connecting P0-P7 (addresses 000-111) through stages of 2x2 switches]

Omega Network Properties
- The switch complexity is now O(N log N)
- Contention increases: P0 -> P5 and P1 -> P7 cannot happen concurrently (this was possible in a crossbar)
- To deal with contention, the number of levels can be increased (redundant paths): by mirroring the network, we can route from P0 to P5 via N intermediate nodes, while increasing complexity by a factor of two
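Destination-tag routing through an omega network is simple enough to sketch: at each stage, a 2x2 switch forwards the message to its upper or lower output according to one bit of the destination address. The sketch below assumes the textbook perfect-shuffle wiring between stages; the exact set of conflicting pairs depends on how the stages are drawn in a given figure, so treat the specific pair it reports as illustrative:

    N, STAGES = 8, 3                   # 8 nodes, log2(8) = 3 switch stages

    def path(src, dst):
        """Inter-stage lines occupied by a message from src to dst,
        assuming a perfect shuffle before each stage of 2x2 switches."""
        line, links = src, []
        for stage in range(STAGES):
            line = ((line << 1) | (line >> (STAGES - 1))) & (N - 1)  # shuffle
            bit = (dst >> (STAGES - 1 - stage)) & 1  # destination bit, MSB first
            line = (line & ~1) | bit                 # switch picks upper/lower output
            links.append(line)
        return links                                 # links[-1] == dst

    def conflicts(m1, m2):
        """True if two (src, dst) messages need the same inter-stage link."""
        return any(a == b for a, b in zip(path(*m1), path(*m2)))

    # Unlike in a crossbar, some message pairs with distinct sources and
    # distinct destinations still collide on a link and must serialize:
    blocked = [((s1, d1), (s2, d2))
               for s1 in range(N) for d1 in range(N)
               for s2 in range(s1 + 1, N) for d2 in range(N)
               if d1 != d2 and conflicts((s1, d1), (s2, d2))]
    print(len(blocked), "conflicting pairs, e.g.", blocked[0])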

Tree Network
- Complexity is O(N)
- Can yield low latencies when communicating with neighbors
- Can build a fat tree by having multiple incoming and outgoing links
[Figure: a binary tree network with P0-P7 at the leaves]

Bisection Bandwidth
- Split the N nodes into two groups of N/2 nodes such that the bandwidth between the two groups is minimized: that minimum is the bisection bandwidth
- Why it is relevant: if traffic is completely random, the probability of a message crossing between the two halves is 1/2; if all N nodes send a message, the bisection must carry about N/2 messages
- The concept of bisection bandwidth confirms that the tree network is not suited to random traffic patterns, but to localized traffic patterns
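A minimal sanity check of the "half of random traffic crosses the cut" argument; the node count and trial count below are arbitrary:

    import random

    N, trials = 64, 100_000
    cross = sum(
        (random.randrange(N) < N // 2) != (random.randrange(N) < N // 2)
        for _ in range(trials)
    )
    # Each random (src, dst) pair lands on opposite halves with probability
    # 1/2, so if all N nodes inject at once, ~N/2 messages hit the bisection.
    print("fraction crossing the bisection:", cross / trials)   # ~0.5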

Distributed Switches: Ring
- Each node is connected to a 3x3 switch that routes messages between the node and its two neighbors
- Effectively a repeated bus: multiple messages can be in transit at once
- Disadvantage: a bisection bandwidth of only 2, and paths of up to N/2 hops (roughly N/4 on average)

Distributed Switch Options
- Performance can be increased by throwing more hardware at the problem: with fully connected switches (every switch connected to every other switch), wiring complexity is N^2 and bisection bandwidth is N^2/4
- Most commercial designs adopt a point between the two extremes (ring and fully connected):
  - Grid: each node connects with its N, E, W, S neighbors
  - Torus: the grid connections wrap around
  - Hypercube: links between nodes whose binary names differ in a single bit (see the sketch after this list)
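The hypercube rule ("flip one bit") makes link generation a one-liner. A small sketch for d = 3 (8 nodes); the variable names are my own, nothing here is from the slides:

    d = 3                       # dimensions; N = 2^d = 8 nodes
    N = 2 ** d
    links = sorted({(u, u ^ (1 << b))        # neighbor differs in bit b
                    for u in range(N) for b in range(d)
                    if u < (u ^ (1 << b))})  # keep each link once
    print(links)                             # 12 links, degree d at every node
    assert len(links) == N * d // 2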

Topology Examples
[Figures: hypercube, grid, and torus topologies]

  Criteria (64 nodes)      Bus   Ring   2D torus   6-cube   Fully connected
  Performance:
    Bisection bandwidth      1      2         16       32              1024
  Cost:
    Ports/switch             -      3          5        7                64
    Total links              1    128        192      256              2080
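The table's entries can be recomputed from first principles. One detail of the accounting (visible from the ring's 3 ports and 128 links) is that every switch gets one extra port and one extra link for its local node. A sketch under that assumption:

    import math

    N = 64
    log2N = int(math.log2(N))                    # 6
    rows = {
        # name:            (bisection bw,     ports/switch, total links)
        "ring":            (2,                3,            N + N),
        "2D torus":        (2 * int(N**0.5),  5,            2 * N + N),
        "6-cube":          (N // 2,           log2N + 1,    N * log2N // 2 + N),
        "fully connected": ((N // 2) ** 2,    N,            N * (N - 1) // 2 + N),
    }
    for name, (bis, ports, links) in rows.items():
        print(f"{name:16s} bisection={bis:5d} ports={ports:3d} links={links:5d}")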

k-ary d-Cube
- Consider a k-ary d-cube: a d-dimensional array with k elements in each dimension; there are links between elements that differ in one dimension by 1 (mod k)
- Number of nodes N = k^d

  Metrics (no wraparound; w = link width):
  Number of switches:      N
  Switch degree:           2d + 1
  Number of links:         Nd
  Pins per node:           2wd
  Avg. routing distance:   d(k-1)/2
  Diameter:                d(k-1)
  Bisection bandwidth:     2wk^(d-1)
  Switch complexity:       (2d + 1)^2

- Should we minimize or maximize dimension? (See the sketch below.)
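The metrics above can be packaged into a function, which also answers the closing question numerically: for a fixed node count, raising the dimension shrinks the diameter and widens the bisection, but every switch gets bigger. A sketch using the slide's formulas (the comparison points of 4096 nodes are arbitrary):

    def kary_dcube_metrics(k, d, w=1):
        """Metrics from the slide for a k-ary d-cube with link width w."""
        N = k ** d
        return {
            "switches": N,
            "degree": 2 * d + 1,
            "links": N * d,
            "pins/node": 2 * w * d,
            "avg distance": d * (k - 1) / 2,
            "diameter": d * (k - 1),
            "bisection": 2 * w * k ** (d - 1),
            "switch complexity": (2 * d + 1) ** 2,
        }

    # 4096 nodes arranged as low- vs. high-dimensional cubes:
    for k, d in [(64, 2), (16, 3), (4, 6), (2, 12)]:
        m = kary_dcube_metrics(k, d)
        print(f"k={k:2d} d={d:2d}  diameter={m['diameter']:4d}  "
              f"bisection={m['bisection']:5d}  degree={m['degree']:2d}")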
