Multiprocessor Interconnection Networks
Todd C. Mowry
CS 740, November 19, 1998

Topics
– Network design space
– Contention
– Active messages

Slide 2: Networks
Design options:
– Topology
– Routing
– Direct vs. indirect
– Physical implementation
Evaluation criteria:
– Latency
– Bisection bandwidth
– Contention and hot-spot behavior
– Partitionability
– Cost and scalability
– Fault tolerance

Slide 3: Buses
– Simple and cost-effective for small-scale multiprocessors
– Not scalable (limited bandwidth; electrical complications)
(Figure: several processors P attached to a single shared bus.)

Slide 4: Crossbars
Each port has a link to every other port.
+ Low latency and high throughput
– Cost grows as O(N^2), so not very scalable
– Difficult to arbitrate and to get all data lines into and out of a centralized crossbar
Used in small-scale MPs (e.g., C.mmp) and as a building block for other networks (e.g., Omega).

Slide 5: Rings
Cheap: cost is O(N). Point-to-point wires and pipelining can be used to make them very fast.
+ High overall bandwidth
– High latency: O(N)
Examples: KSR machine, Hector

Slide 6: Trees
Cheap: cost is O(N); latency is O(log N).
Easy to lay out as planar graphs (e.g., H-trees).
For random permutations, the root can become a bottleneck. Fat-trees (used in the CM-5) avoid this: channels get wider as you move toward the root.

Slide 7: Hypercubes
Also called binary n-cubes. Number of nodes: N = 2^n.
Latency is O(log N); out-degree of each PE is O(log N).
Minimizes hops; good bisection bandwidth; but tough to lay out in 3-space.
Popular in early message-passing computers (e.g., Intel iPSC, nCUBE).
Used as a direct network ==> emphasizes locality. (A routing sketch follows below.)
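As an illustrative sketch only (not from the original slides): dimension-order ("e-cube") routing in a binary n-cube corrects one differing address bit per hop, so a message crosses at most n = log2 N links. The function name and output format below are invented for this note.

    #include <stdio.h>

    /* Hypothetical sketch of dimension-order ("e-cube") routing in a binary
     * n-cube: at each hop, flip the lowest-order bit in which the current
     * node and the destination still differ.  Node IDs are n-bit integers. */
    static void ecube_route(unsigned src, unsigned dst, unsigned n_dims)
    {
        unsigned cur = src;
        printf("route %u -> %u:", src, dst);
        while (cur != dst) {
            unsigned diff = cur ^ dst;          /* bits that still disagree */
            unsigned bit  = diff & -diff;       /* lowest such bit          */
            cur ^= bit;                         /* cross that dimension     */
            printf(" %u", cur);
        }
        printf("  (at most %u hops)\n", n_dims);
    }

    int main(void)
    {
        ecube_route(0u, 13u, 4u);   /* 0000 -> 1101 in a 4-cube: 3 hops */
        return 0;
    }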

Slide 8: Multistage Logarithmic Networks
Cost is O(N log N); latency is O(log N); throughput is O(N).
Generally indirect networks.
Many variations exist (Omega, Butterfly, Benes, ...).
Used in many machines: BBN Butterfly, IBM RP3, ...

Slide 9: Omega Network
All stages are the same, so a recirculating network can be used.
Single path from source to destination.
Extra stages and pathways can be added to minimize collisions and increase fault tolerance.
Can support combining.
Used in IBM RP3. (A self-routing sketch follows below.)
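As a hedged illustration (not part of the original slides): self-routing in an Omega network uses the destination address bits. At stage i, a 2x2 switch sends the message to its upper output if destination bit (n-1-i) is 0 and to its lower output if it is 1. The function name below is invented.

    #include <stdio.h>

    /* Hypothetical sketch of destination-tag routing in an Omega network
     * with N = 2^n terminals and n stages of 2x2 switches: stage i examines
     * destination bit (n-1-i); 0 = take the upper output, 1 = the lower. */
    static void omega_route(unsigned dst, unsigned n_stages)
    {
        printf("to %u:", dst);
        for (unsigned i = 0; i < n_stages; i++) {
            unsigned bit = (dst >> (n_stages - 1 - i)) & 1u;
            printf(" stage %u -> %s", i, bit ? "lower" : "upper");
        }
        printf("\n");
    }

    int main(void)
    {
        omega_route(5u, 3u);   /* 8-terminal network: 101 -> lower, upper, lower */
        return 0;
    }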

Slide 10: Butterfly Network
Equivalent to the Omega network; easy to see the routing of messages.
Also very similar to hypercubes (direct vs. indirect, though).
The bisection of the network is clearly N/2 channels.
Higher-degree switches can be used to reduce depth.
Used in BBN machines.

Slide 11: k-ary n-cubes
Generalization of hypercubes: k nodes along each dimension instead of 2.
Total number of nodes: N = k^n.
k > 2 reduces the number of channels crossing the bisection, thus allowing wider channels at the cost of more hops. (A small worked example follows below.)
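An illustrative comparison (numbers chosen for this note, not from the original slides): 64 nodes can be built as a 2-ary 6-cube (a hypercube: diameter 6, bisection 32 channels) or as an 8-ary 2-cube (an 8x8 torus: diameter 8, bisection 2*8 = 16 channels). The torus crosses the bisection with half as many channels, so each channel can be wider, but messages may take more hops.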

Slide 12: Routing Strategies and Latency
Store-and-forward routing: T_sf = T_c * D * (L / W)
  where L = message length, D = number of hops, W = channel width, T_c = per-hop delay.
Wormhole routing: T_wh = T_c * (D + L / W)
  The number of hops is an additive rather than a multiplicative factor.
Virtual cut-through routing: older and similar to wormhole; when blockage occurs, however, the message is removed from the network and buffered.
Deadlocks are avoided through the use of virtual channels and by using a routing strategy that does not allow channel-dependency cycles.
(A worked comparison follows below.)
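To make the difference concrete (illustrative numbers, not from the original slides): with D = 10 hops, a message whose serialization time is L/W = 32 cycles, and T_c = 1 cycle per hop, store-and-forward gives T_sf = 1 * 10 * 32 = 320 cycles, while wormhole gives T_wh = 1 * (10 + 32) = 42 cycles.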

Slide 13: Advantages of Low-Dimensional Networks
What can be built in VLSI is often wire-limited.
LDNs are easier to lay out:
– more uniform wiring density (easier to embed in 2-D or 3-D space)
– mostly local connections (e.g., grids)
Compared with HDNs (e.g., hypercubes), LDNs have:
– shorter wires (reduces hop latency)
– fewer wires (increases bandwidth given constant bisection width)
  » increased channel width is the major reason why LDNs win!
Factors that limit end-to-end latency:
– LDNs: number of hops
– HDNs: length of message going across very narrow channels
LDNs have better hot-spot throughput:
– more pins per node than HDNs
(A small bisection calculation follows below.)
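A small illustrative calculation (numbers invented for this note): for N = 1024 nodes and a fixed budget of wires across the bisection, a hypercube's bisection is crossed by N/2 = 512 channels, while a 2-D torus's bisection is crossed by only 2*sqrt(N) = 64 channels. Under the same wire budget, the torus channels can therefore be roughly 8x wider, which is the channel-width advantage the slide refers to, paid for with more hops per message.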

Performance Under Contention

Slide 15: Types of Hot Spots
Module hot spots: many PEs accessing the same PE's memory at the same time.
Possible solutions:
– suitable distribution or replication of data
– high-bandwidth memory system design
Location hot spots: many PEs accessing the same memory location at the same time.
Possible solutions:
– caches for read-only data, updates for read-write data
– software or hardware combining

Slide 16: NYU Ultracomputer / IBM RP3
Focus on scalable bandwidth and synchronization in the presence of hot spots.
Machine model: the paracomputer (or WRAM model of Borodin):
– autonomous PEs sharing a central memory
– simultaneous reads and writes to the same location can all be handled in a single cycle
– semantics given by the serialization principle: ... as if all operations occurred in some (unspecified) serial order
Obviously this is a very desirable model; the question is how well it can be realized in practice.
To achieve scalable synchronization, read and write operations were further extended with atomic read-modify-write (fetch-&-op) primitives.

Slide 17: The Fetch-&-Add Primitive
F&A(V, e) returns the old value of V and atomically sets V = V + e.
If V = k, and X = F&A(V, a) and Y = F&A(V, b) are done at the same time:
– one possible result: X = k, Y = k+a, and V = k+a+b
– another possible result: Y = k, X = k+b, and V = k+a+b
Example use: implementation of task queues.
Insert:  myI = F&A(qi, 1);  Q[myI] = data;  full[myI] = 1;
Delete:  myI = F&A(qd, 1);  while (!full[myI]) ;  data = Q[myI];  full[myI] = 0;
(A runnable sketch of this queue using compiler atomics appears below.)
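The fragments above assume an unbounded index space. The sketch below is my own illustration (not from the slides): it fills in the same idea with C11 atomics standing in for the hardware F&A, using a fixed-capacity array with no wraparound; all names (QSIZE, insert, delete) are invented.

    #include <assert.h>
    #include <stdatomic.h>
    #include <stdio.h>

    /* Illustrative sketch only: a fixed-capacity task queue in the style of
     * the slide, with atomic_fetch_add as the F&A primitive.  Indices are
     * never recycled, so they must stay below QSIZE. */
    #define QSIZE 1024

    static int        Q[QSIZE];         /* task data                    */
    static atomic_int full[QSIZE];      /* 1 = slot holds a valid task  */
    static atomic_int qi, qd;           /* insert and delete tickets    */

    static void insert(int data)
    {
        int myI = atomic_fetch_add(&qi, 1);     /* F&A(qi, 1) */
        assert(myI < QSIZE);
        Q[myI] = data;
        atomic_store(&full[myI], 1);
    }

    static int delete(void)
    {
        int myI = atomic_fetch_add(&qd, 1);     /* F&A(qd, 1) */
        assert(myI < QSIZE);
        while (!atomic_load(&full[myI]))        /* spin until produced */
            ;
        int data = Q[myI];
        atomic_store(&full[myI], 0);
        return data;
    }

    int main(void)
    {
        insert(42);
        printf("%d\n", delete());   /* prints 42 */
        return 0;
    }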

Slide 18: The IBM RP3 (1985)
Design plan:
– 512 RISC processors (IBM 801s)
– distributed main memory with software cache coherence
– two networks: a low-latency Banyan and a combining Omega
==> Goal was to build the NYU Ultracomputer model.
Interesting aspects:
– data distribution scheme to address locality and module hot spots
– combining network design to address synchronization bottlenecks

Slide 19: Combining Network
Omega topology: a 64-port network resulting from 6 levels of 2x2 switches.
Request and response networks are integrated together.
Key observation: to any destination module, the paths from all sources form a tree.

Slide 20: Combining Network Details
For combining to happen, requests must come together locationally (to the same location), spatially (in the queue of the same switch), and temporally (within a small time window).
(A sketch of how two fetch-&-adds combine at a switch follows below.)
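To make the combining idea concrete, here is my own hedged sketch, consistent with the fetch-&-add semantics on slide 17 but not drawn from the original slides or the RP3 hardware: when two F&A requests to the same location meet in a switch queue, the switch forwards one combined request and later splits ("decombines") the reply. All type and function names are invented.

    #include <stdio.h>

    /* Illustrative sketch only: combining F&A(V, a) and F&A(V, b) at a switch.
     * The switch forwards F&A(V, a+b), remembers the first increment a in a
     * wait buffer, and splits the reply k from memory into k (for the first
     * requester) and k+a (for the second). */
    typedef struct { int addr; int inc; } fa_req_t;
    typedef struct { int first; int second; } fa_reply_t;

    static int saved_inc;                               /* wait-buffer entry */

    static fa_req_t combine(fa_req_t r1, fa_req_t r2)   /* r1.addr == r2.addr */
    {
        saved_inc = r1.inc;
        fa_req_t fwd = { r1.addr, r1.inc + r2.inc };
        return fwd;                                     /* sent toward memory */
    }

    static fa_reply_t decombine(int old_value_from_memory)
    {
        fa_reply_t r = { old_value_from_memory,
                         old_value_from_memory + saved_inc };
        return r;                                       /* one reply per source */
    }

    int main(void)
    {
        fa_req_t a = { 0, 3 }, b = { 0, 5 };
        fa_req_t fwd = combine(a, b);       /* F&A(V, 8) travels toward memory */
        fa_reply_t rep = decombine(10);     /* memory returned old value k = 10 */
        printf("forwarded inc=%d, replies: %d and %d\n",
               fwd.inc, rep.first, rep.second);   /* 8, 10 and 13 */
        return 0;
    }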

Slide 21: Contention for the Network
Location hot spot: a higher rate of accesses to a single location imposed on uniform background traffic.
May arise due to synchronization accesses or other heavily shared data.
Not only are accesses to the hot-spot location delayed; they found that all other accesses were delayed too (the tree saturation effect).

Slide 22: Saturation Model
Parameters: p = number of PEs; r = references per PE per cycle; h = fraction of each PE's references that go to the hot spot.
Total traffic to the hot-spot memory module = rhp + r(1-h), where rhp is the hot-spot traffic and r(1-h) is that module's share of the uniform traffic.
Latencies for all references rise suddenly when rhp + r(1-h) = 1, assuming the memory handles one request per cycle.
Tree saturation effect: buffers at all switches in the shaded area (the tree of switches feeding the hot module) fill up, and even non-hot-spot requests have to pass through them.
They found that combining helped in handling such location hot spots.
(A small numeric example follows below.)
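An illustrative calculation (numbers invented for this note): with p = 512 PEs and r = 0.5 references per PE per cycle, saturation occurs when 0.5*h*512 + 0.5*(1-h) = 1, i.e. h ≈ 0.2%. Even a tiny hot-spot fraction saturates the hot module, and through tree saturation the backed-up switch queues slow down everyone else's traffic as well.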

Slide 23: Bandwidth Issues: Summary
– Network bandwidth
– Memory bandwidth: local bandwidth, global bandwidth
– Hot-spot issues: module hot spots, location hot spots

Active Messages (slide content courtesy of David Culler)

Slide 25: Problems with Blocking Send/Receive
– 3-way latency
– Remember: back-to-back DMA hardware...
(Figure: timeline for Node 1 and Node 2, alternating compute, send, and receive phases.)

Slide 26: Problems with Non-blocking Send/Receive
With receive hints:
– "waiting-matching store" problem
– expensive buffering
(Figure: timeline for Node 1 and Node 2, overlapping computation with multiple sends and receives.)

Slide 27: Problems with Shared Memory
Local storage hierarchy:
– access several levels before communication starts (DASH: ~30 cycles)
– resources reserved by outstanding requests
– difficulty in suspending threads
(Figure: a load miss traverses CPU, L1-$, L2-$, DRAM, and the NI; the send/recv to the remote node eventually fills the cache with the data.)
Inappropriate semantics in some cases:
– can only read/write cache lines
– signals turn into consistency issues
Example: broadcast tree
  while (!me->flag) ;                          /* wait for parent's data    */
  left->data = me->data;   left->flag = 1;     /* forward to left child     */
  right->data = me->data;  right->flag = 1;    /* forward to right child    */

Slide 28: Active Messages
Idea: associate a small amount of remote computation with each message.
The head of the message is the address of its handler.
The handler executes immediately upon arrival:
– extracts the message from the network and integrates it with the computation, possibly replying
– the handler does not "compute"
No buffering beyond transport:
– data stored in pre-allocated storage
– quick service and reply, e.g., remote fetch
(Figure: an incoming message carrying data and an instruction pointer arrives at a node, where the handler runs alongside the primary computation.)
Note: user-level handler.

Slide 29: Active Message Example: Fetch&Add

  static volatile int value;    /* filled in by the reply handler          */
  static volatile int flag;     /* set once the reply has arrived          */

  /* Requesting side: send an active message, spin until the reply handler runs. */
  int FetchNAdd(int proc, int *addr, int inc)
  {
      flag = 0;
      AM(proc, FetchNAdd_h, addr, inc, MYPROC);
      while (!flag)
          ;
      return value;
  }

  /* Request handler, run on the remote node: do the add and reply. */
  void FetchNAdd_h(int *addr, int inc, int retproc)
  {
      int v = inc + *addr;
      *addr = v;
      AM_reply(retproc, FetchNAdd_rh, v);
  }

  /* Reply handler, run back on the requesting node. */
  void FetchNAdd_rh(int data)
  {
      value = data;
      flag++;
  }

Slide 30: Send/Receive Using Active Messages
(Figure: Node 1 and Node 2 exchange Req-To-Send, Ready-To-Recv, and DATA messages, each serviced by a send, recv, or data handler, with a match table on the receiving node.)
–> Use handlers to implement the protocol.
Reduces send+recv overhead from 95 µsec to 3 µsec on the CM-5.
(A hedged sketch of such a handler-based protocol follows below.)
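The figure's three-way handshake can be sketched in code. The following is a hedged, self-contained illustration only (all names invented; not the CM-5 library): each control message names its handler, and "delivery" is simulated by calling that handler directly within a single process, which is enough to show how the Req-To-Send / Ready-To-Recv / DATA protocol is built from handlers.

    #include <stdio.h>
    #include <string.h>

    #define MAXLEN 64

    typedef void (*handler_t)(int src, const char *payload, int len);

    /* Simulated network: deliver an active message by running its handler.
     * On real hardware this would inject the message into the network. */
    static void AM_send(int src, handler_t handler, const char *payload, int len)
    {
        handler(src, payload, len);
    }

    /* --- receiver (node 1) state: a one-entry match table --- */
    static char recv_buf[MAXLEN];
    static int  recv_posted, recv_done, recv_len;

    /* --- sender (node 0) state --- */
    static const char *send_data;
    static int         send_len, send_done;

    static void data_handler(int src, const char *payload, int len)
    {
        (void)src;
        memcpy(recv_buf, payload, len);       /* DATA: copy into posted buffer */
        recv_len  = len;
        recv_done = 1;
    }

    static void rtr_handler(int src, const char *payload, int len)
    {
        (void)src; (void)payload; (void)len;  /* Ready-To-Recv: push the data  */
        AM_send(0, data_handler, send_data, send_len);
        send_done = 1;
    }

    static void rts_handler(int src, const char *payload, int len)
    {
        (void)src; (void)payload; (void)len;  /* Req-To-Send: match and reply  */
        if (recv_posted)
            AM_send(1, rtr_handler, NULL, 0);
    }

    static void am_send_msg(const char *data, int len)   /* node 0's "send"    */
    {
        send_data = data; send_len = len; send_done = 0;
        AM_send(0, rts_handler, NULL, 0);
        while (!send_done) ;                  /* would poll the network here   */
    }

    static void am_post_recv(void)            /* node 1's "receive"            */
    {
        recv_posted = 1; recv_done = 0;
    }

    int main(void)
    {
        am_post_recv();
        am_send_msg("hello", 6);
        while (!recv_done) ;
        printf("node 1 received %d bytes: %s\n", recv_len, recv_buf);
        return 0;
    }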