Multiprocessor Interconnection Networks Todd C. Mowry CS 495 October 30, 2002 Topics: network design issues, network topology, performance

Networks How do we move data between processors? Design options: topology, routing, physical implementation.

Evaluation Criteria • Latency • Bisection Bandwidth • Contention and hot-spot behavior • Partitionability • Cost and scalability • Fault tolerance

Communication Performance: Latency Time(n)_(s-d) = overhead + routing delay + channel occupancy + contention delay, where occupancy = (n + n_e) / b (n = data size, n_e = envelope/header size, b = raw link bandwidth). Routing delay? Contention?
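
A minimal Python sketch of this latency model, purely for illustration (the overhead, per-hop delay, and bandwidth numbers in the example are made up, not measurements of any real machine):

def latency(n, n_e, b, overhead, hops, routing_delay, contention=0.0):
    """End-to-end latency = overhead + routing delay + channel occupancy + contention."""
    occupancy = (n + n_e) / b                  # time the channel is held by data + envelope
    return overhead + hops * routing_delay + occupancy + contention

# Example: 1 KB payload, 8-byte envelope, 400 MB/s links, 1 us software overhead,
# 5 hops at 40 ns routing delay per switch, no contention (all values hypothetical).
print(latency(n=1024, n_e=8, b=400e6, overhead=1e-6, hops=5, routing_delay=40e-9))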

Link Design/Engineering Space A cable of one or more wires/fibers with connectors at the ends, attached to switches or interfaces. Short: single logical value at a time. Long: stream of logical values at a time. Narrow: control, data and timing multiplexed on wire. Wide: control, data and timing on separate wires. Synchronous: source & dest on same clock. Asynchronous: source encodes clock in signal.

Buses • Simple and cost-effective for small-scale multiprocessors • Not scalable (limited bandwidth; electrical complications)

Crossbars • Each port has link to every other port + Low latency and high throughput - Cost grows as O(N^2) so not very scalable. - Difficult to arbitrate and to get all data lines into and out of a centralized crossbar. • Used in small-scale MPs (e.g., C.mmp) and as building block for other networks (e.g., Omega).

Rings • Cheap: Cost is O(N). • Point-to-point wires and pipelining can be used to make them very fast. + High overall bandwidth - High latency O(N) • Examples: KSR machine, Hector

(Multidimensional) Meshes and Tori (figures: 2D grid, 3D cube) O(N) switches (but the switches are more complicated). Latency: O(k*n), where N = k^n (k nodes per dimension, n dimensions). High bisection bandwidth. Good fault tolerance. Physical layout is hard for multiple dimensions.

Real World 2D mesh 1824 node Paragon: 16 x 114 array

Hypercubes • Also called binary n-cubes. # of nodes = N = 2^n. • Latency is O(logN); out degree of each PE is O(logN) • Minimizes hops; good bisection BW; but tough to lay out in 3-space • Popular in early message-passing computers (e.g., Intel iPSC, NCUBE) • Used as a direct network ==> emphasizes locality

k-ary d-cubes • Generalization of hypercubes (k nodes in every dimension) • Total # of nodes = N = k^d. • k > 2 reduces # of channels at bisection, thus allowing for wider channels but more hops.

Embeddings in two dimensions (figures: a 6 x 3 x 2 and a 3 x 3 x 3 network laid out in the plane) Embed multiple logical dimensions in one physical dimension using long wires.

Trees • Cheap: Cost is O(N). • Latency is O(logN). • Easy to lay out as planar graphs (e.g., H-Trees). • For random permutations, the root can become a bottleneck. • To avoid the root becoming a bottleneck, use Fat-Trees (as in the CM-5).

Multistage Logarithmic Networks Key Idea: have multiple layers of switches between sources and destinations. • Cost is O(NlogN); latency is O(logN); throughput is O(N). • Generally indirect networks. • Many variations exist (Omega, Butterfly, Benes, ...). • Used in many machines: BBN Butterfly, IBM RP3, ...

Omega Network • All stages are same, so can use recirculating network. • Single path from source to destination. • Can add extra stages and pathways to minimize collisions and increase fault tolerance. • Can support combining. Used in IBM RP3.

Butterfly Network • Equivalent to Omega network. Easy to see routing of messages. • Also very similar to hypercubes (direct vs. indirect though). • Clearly see that bisection of network is (N / 2) channels. • Can use higher-degree switches to reduce depth.
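
To make the routing claim concrete, here is a minimal Python sketch of destination-tag routing through an n-stage butterfly/Omega-style network with 2^n terminals: the switch at stage i simply looks at bit i of the destination address (most significant bit first) to choose its upper or lower output. This is the standard textbook scheme, not code for any particular machine.

def destination_tag_route(dest, n_stages):
    """Output-port choices (0 = upper, 1 = lower) taken at each stage,
    determined only by the destination address bits."""
    return [(dest >> (n_stages - 1 - stage)) & 1 for stage in range(n_stages)]

# Example: 16-terminal network (4 stages), destination 13 = 0b1101.
print(destination_tag_route(13, 4))   # [1, 1, 0, 1]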

Properties of Some Topologies

Topology        Degree      Diameter          Ave Dist         Bisection     D (D ave) @ P=1024
1D Array        2           N-1               N/3              1             huge
1D Ring         2           N/2               N/4              2             huge
2D Mesh         4           2(N^(1/2) - 1)    (2/3) N^(1/2)    N^(1/2)       63 (21)
2D Torus        4           N^(1/2)           (1/2) N^(1/2)    2 N^(1/2)     32 (16)
k-ary d-cube    2d          d k/2             d k/4            d k/4         15 (7.5) @ d=3
Hypercube       n = log N   n                 n/2              N/2           10 (5)
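
A small Python sketch that evaluates the closed forms in the table (the P = 1024 printout at the bottom is just a sanity check of the last column):

import math

def mesh_2d(N):
    k = math.isqrt(N)
    return {"degree": 4, "diameter": 2 * (k - 1), "avg_dist": 2 * k / 3, "bisection": k}

def torus_2d(N):
    k = math.isqrt(N)
    return {"degree": 4, "diameter": k, "avg_dist": k / 2, "bisection": 2 * k}

def kary_dcube(N, d):
    k = N ** (1.0 / d)
    return {"degree": 2 * d, "diameter": d * k / 2, "avg_dist": d * k / 4}

def hypercube(N):
    n = int(math.log2(N))
    return {"degree": n, "diameter": n, "avg_dist": n / 2, "bisection": N // 2}

P = 1024
print(mesh_2d(P))        # diameter 2*(32-1) = 62, average distance ~21
print(torus_2d(P))       # diameter 32, average distance 16
print(kary_dcube(P, 3))  # diameter ~15, average distance ~7.6
print(hypercube(P))      # diameter 10, average distance 5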

Real Machines In general, wide links => smaller routing delay Tremendous variation

How many dimensions? d = 2 or d = 3: short wires, easy to build; many hops, low bisection bandwidth; requires traffic locality. d >= 4: harder to build (more wires, longer average wire length); fewer hops, better bisection bandwidth; can handle non-local traffic. k-ary d-cubes provide a consistent framework for comparison: N = k^d; scale the dimension (d) or the nodes per dimension (k); assume cut-through routing.

Scaling k-ary d-cubes What is equal cost? Equal number of nodes? Equal number of pins/wires? Equal bisection bandwidth? Equal area? Equal wire length? Each assumption leads to a different optimum. Recall: switch degree = d; diameter = d(k-1); total links = N*d; pins per node = 2wd; bisection = k^(d-1) = N/k links in each direction, so 2Nw/k wires cross the middle.
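
A minimal Python transcription of these cost metrics for an N-node k-ary d-cube with channel width w (nothing here is machine-specific):

def kary_dcube_cost(N, d, w):
    """Cost/scaling metrics for an N-node k-ary d-cube with w-bit channels."""
    k = round(N ** (1.0 / d))              # nodes per dimension (assumes N = k**d exactly)
    return {
        "k": k,
        "diameter": d * (k - 1),
        "total_links": N * d,
        "pins_per_node": 2 * w * d,
        "bisection_links": k ** (d - 1),   # links crossing the middle in each direction
        "bisection_wires": 2 * N * w // k,
    }

# Compare a 2-dimensional and a 10-dimensional organization of 1024 nodes, both with 16-bit channels.
print(kary_dcube_cost(1024, 2, 16))
print(kary_dcube_cost(1024, 10, 16))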

Scaling: Latency Assumes equal channel width

Average Distance with Equal Width Avg. distance = d(k-1)/2. Assumes equal channel width, but equal channel width is not equal cost! Higher dimension => more channels.

Latency with Equal Width total links(N) = Nd

Latency with Equal Pin Count Baseline: d=2 has w = 32 (128 wires per node). Fix 2dw pins => w(d) = 64/d. Distance goes up with d, but channel time goes down.

Latency with Equal Bisection Width An N-node hypercube has N bisection links; a 2-d torus has 2N^(1/2). Fixed bisection => w(d) = N^(1/d) / 2 = k/2. At 1 M nodes, d=2 has w = 512!
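
The two channel-width scaling rules from the last two slides are easy to tabulate. A small Python sketch, using the slide's baselines (a 128-pin-per-node budget for the equal-pin case, and the hypercube's bisection for the equal-bisection case):

def width_equal_pins(d, pins_per_node=128):
    """Channel width when every node has the same pin budget: 2*d*w = pins."""
    return pins_per_node / (2 * d)

def width_equal_bisection(N, d):
    """Channel width when bisection wiring is held at the hypercube's level: w = k/2."""
    k = N ** (1.0 / d)
    return k / 2

N = 1 << 20                                  # 1M nodes
for d in (2, 3, 4, 10, 20):
    print(d, width_equal_pins(d), round(width_equal_bisection(N, d), 1))
# d = 2 gives w = 32 under the pin constraint but w = 512 under the bisection constraint.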

Larger Routing Delay (w/ equal pin) Conclusions strongly influenced by assumptions of routing delay

Latency under Contention Optimal packet size? Channel utilization?

Saturation Fatter links shorten queuing delays

Data transferred per cycle A higher-degree network has larger available bandwidth, but at what cost?

Advantages of Low-Dimensional Nets What can be built in VLSI is often wire-limited. LDNs are easier to lay out: more uniform wiring density (easier to embed in 2-D or 3-D space) and mostly local connections (e.g., grids). Compared with HDNs (e.g., hypercubes), LDNs have shorter wires (reducing hop latency) and fewer channels crossing the bisection (so each channel can be wider for a given bisection width); the increased channel width is the major reason why LDNs win! LDNs also have better hot-spot throughput (more pins per node than HDNs).

Routing Recall: the routing algorithm determines which of the possible paths are used as routes and how the route is determined: R: N x N -> C, which at each switch maps the destination node n_d to the next channel on the route. Issues: routing mechanism (arithmetic, source-based port select, table-driven, general computation); properties of the routes; deadlock freedom.

Store&Forward vs Cut-Through Routing Latency: h(n/b + D) for store-and-forward vs n/b + h*D for cut-through, where h = hops, n = packet length, b = bandwidth, D = routing delay at each switch.
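
A tiny Python comparison of the two expressions (the hop count, packet size, bandwidth, and routing delay are arbitrary example numbers):

def store_and_forward(h, n, b, D):
    return h * (n / b + D)        # each switch receives the whole packet before forwarding

def cut_through(h, n, b, D):
    return n / b + h * D          # the header proceeds as soon as it has been routed

# Example: 5 hops, 1 KB packet, 400 MB/s links, 40 ns routing delay per switch.
h, n, b, D = 5, 1024, 400e6, 40e-9
print(store_and_forward(h, n, b, D))   # ~13.0 microseconds
print(cut_through(h, n, b, D))         # ~2.76 microseconds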

Routing Mechanism Need to select an output port for each input packet within a few cycles. Reduce the relative address of each dimension in order: dimension-order routing in k-ary d-cubes, e-cube routing in the n-cube.
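
A minimal Python sketch of dimension-order routing in a k-ary d-cube torus: resolve the offset in dimension 0 completely, then dimension 1, and so on, taking the shorter way around each ring. This is a generic illustration of the scheme, not the routing logic of any specific machine.

def dimension_order_next_hop(cur, dest, k):
    """cur and dest are d-element coordinate tuples in a k-ary d-cube torus.
    Returns (dimension, step) for the next hop, or None if already at dest."""
    for dim, (c, t) in enumerate(zip(cur, dest)):
        if c != t:
            forward = (t - c) % k                          # hops in the + direction with wraparound
            step = 1 if forward <= k - forward else -1     # pick the shorter way around the ring
            return dim, step
    return None

def route(src, dest, k):
    """Full dimension-order path from src to dest as a list of coordinates."""
    path, cur = [tuple(src)], list(src)
    while (hop := dimension_order_next_hop(cur, dest, k)) is not None:
        dim, step = hop
        cur[dim] = (cur[dim] + step) % k
        path.append(tuple(cur))
    return path

# Example: an 8-ary 3-cube, routing from (1, 7, 2) to (6, 0, 2).
print(route((1, 7, 2), (6, 0, 2), 8))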

Routing Mechanism (cont) Source-based: the message header carries a series of port selects (P0, P1, P2, P3, ...), used and stripped en route. CRC? Packet format? Examples: CS-2, Myrinet, MIT Artic. Table-driven: the message header carries an index for the next port at the next switch, o = R[i]; the table can also give the index for the following hop, (o, i') = R[i]. Examples: ATM, HIPPI.
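
A minimal Python sketch contrasting the two mechanisms at a single switch (the packet headers and table contents are invented for illustration):

# Source-based: the header is an explicit list of output ports; each switch pops one entry.
def source_based_step(header):
    port, rest = header[0], header[1:]
    return port, rest                      # forward on 'port', send the stripped header onward

# Table-driven: the header carries only an index i; the switch looks up (output port, next index).
def table_driven_step(route_table, i):
    out_port, next_i = route_table[i]      # o, i' = R[i]
    return out_port, next_i

print(source_based_step([3, 0, 2]))                    # use port 3, remaining header [0, 2]
print(table_driven_step({7: (3, 12), 12: (0, 5)}, 7))  # use port 3, carry index 12 onward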

Properties of Routing Algorithms Deterministic: route determined by (source, dest), not intermediate state (i.e., traffic). Adaptive: route influenced by traffic along the way. Minimal: only selects shortest paths. Deadlock free: no traffic pattern can lead to a situation where no packets move forward.

Deadlock Freedom How can it arise? Necessary conditions: a shared resource that is incrementally allocated and non-preemptible. Think of a channel as a shared resource that is acquired incrementally: source buffer, then destination buffer, then the channels along a route. How do you avoid it? Constrain how channel resources are allocated (e.g., dimension order). How do you prove that a routing algorithm is deadlock free?

Proof Technique Resources are logically associated with channels. Messages introduce dependences between resources as they move forward. Need to articulate the possible dependences between channels. Show that there are no cycles in the Channel Dependence Graph: find a numbering of channel resources such that every legal route follows a monotonic sequence => no traffic pattern can lead to deadlock. The network need not be acyclic, only the channel dependence graph.
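
A minimal Python sketch of the check itself: build the channel dependence graph (an edge c1 -> c2 whenever some legal route can hold c1 while waiting for c2) and test it for cycles with a depth-first search. The two tiny example graphs at the bottom are hypothetical.

def has_cycle(dep_graph):
    """dep_graph maps each channel to the channels it can wait on.
    Returns True if the channel dependence graph contains a cycle (potential deadlock)."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {c: WHITE for c in dep_graph}

    def dfs(c):
        color[c] = GREY
        for nxt in dep_graph.get(c, ()):
            if color.get(nxt, WHITE) == GREY:               # back edge => cycle
                return True
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[c] = BLACK
        return False

    return any(color[c] == WHITE and dfs(c) for c in dep_graph)

print(has_cycle({"c0": ["c1"], "c1": ["c2"], "c2": ["c3"], "c3": []}))  # False: a monotonic numbering exists
print(has_cycle({"c0": ["c1"], "c1": ["c2"], "c2": ["c0"], "c3": []}))  # True: cyclic waiting is possible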

Examples Why is the obvious routing on X deadlock free? Butterfly? Tree? Fat tree? Any assumptions about the routing mechanism? Amount of buffering? What about wormhole routing on a ring? (Figure: a ring with its nodes numbered.)

Flow Control What do you do when push comes to shove? Ethernet: collision detection and retry after delay. FDDI, token ring: arbitration token. TCP/WAN: buffer, drop, adjust rate. Any solution must adjust to the output rate. Link-level flow control.

Examples (figures): short links; long links, where several messages can be on the wire at once.

Link vs. global flow control: hot spots, global communication operations, natural parallel program dependences.

Case study: Cray T3E 3-dimensional torus, with 1024 switches each connected to 2 processors Short, wide, synchronous links Dimension order, cut-through, packet-switched routing Variable sized packets, in multiples of 16 bits

Case Study: SGI Origin Hypercube-like topologies with up to 256 switches Each switch supports 4 processors and connects to 4 other switches Long, wide links Table-driven routing: programmable, allowing for flexible topologies and fault-avoidance