1
Parallel Computer Architectures Chapter 8
2
Parallel Computer Architectures (a) On-chip parallelism. (b) A coprocessor. (c) A multiprocessor. (d) A multicomputer. (e) A grid.
3
Parallelism
a) Introduced at various levels
b) Within the CPU chip (multiple instructions per cycle)
– Instruction-level: VLIW (Very Long Instruction Word)
– Superscalar
– On-chip multithreading
– Single-chip multiprocessors
c) Extra CPU boards (coprocessors)
d) Multiprocessor/Multicomputer
e) Computer grids
f) Tightly coupled – computationally intimate
g) Loosely coupled – computationally remote
4
Instruction-Level Parallelism (a) A CPU pipeline. (b) A sequence of VLIW instructions. (c) An instruction stream with bundles marked.
5
The TriMedia VLIW CPU (1) A typical TriMedia instruction, showing five possible operations.
6
The TriMedia VLIW CPU (2) The TM3260 functional units, their quantity, latency, and which instruction slots they can use.
7
The TriMedia VLIW CPU (3) The major groups of TriMedia custom operations.
8
The TriMedia VLIW CPU (4) (a) An array of 8-bit elements. (b) The transposed array. (c) The original array fetched into four registers. (d) The transposed array in four registers.
9
Multithreading
a) Fine-grained multithreading
– Run multiple threads, issuing one instruction from each in turn
– Never stalls if there are enough active threads
– Requires hardware to track which instruction belongs to which thread
b) Coarse-grained multithreading
– Run one thread until it stalls, then switch (one cycle wasted on the switch)
c) Simultaneous multithreading
– Coarse-grained switching with no wasted cycle
d) Hyperthreading
– A 5% increase in chip area gives roughly a 25% performance gain
– Resource sharing options: partitioned, threshold sharing, full resource sharing
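To make the difference between the first two schemes concrete, here is a minimal C sketch (not from the book; the per-thread stall patterns are invented) that prints which thread gets the issue slot each cycle under fine-grained rotation versus coarse-grained switch-on-stall:

/* Hypothetical simulation of issue-slot scheduling; the per-thread
 * "stall" patterns below are invented purely for illustration. */
#include <stdio.h>

#define THREADS 3
#define CYCLES  12

/* 1 = this thread would stall (e.g., cache miss) in that cycle. */
static const int stalls[THREADS][CYCLES] = {
    {0,0,1,1,0,0,0,1,0,0,0,0},
    {0,1,0,0,0,1,1,0,0,0,1,0},
    {0,0,0,1,0,0,0,0,1,1,0,0},
};

int main(void) {
    /* Fine-grained: issue from a different thread every cycle. */
    printf("fine-grained  : ");
    for (int c = 0; c < CYCLES; c++)
        printf("T%d ", c % THREADS);
    printf("\n");

    /* Coarse-grained: stay on one thread until it stalls, then switch
     * (the switch itself costs one empty cycle, shown as '--'). */
    printf("coarse-grained: ");
    int t = 0;
    for (int c = 0; c < CYCLES; c++) {
        if (stalls[t][c]) {          /* stall: switch threads, lose a cycle */
            t = (t + 1) % THREADS;
            printf("-- ");
        } else {
            printf("T%d ", t);
        }
    }
    printf("\n");
    return 0;
}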
10
On-Chip Multithreading (1) (a) – (c) Three threads. The empty boxes indicate that the thread has stalled waiting for memory. (d) Fine-grained multithreading. (e) Coarse-grained multithreading.
11
On-Chip Multithreading (2) Multithreading with a dual-issue superscalar CPU. (a) Fine-grained multithreading. (b) Coarse-grained multithreading. (c) Simultaneous multithreading.
12
Hyperthreading on the Pentium 4 Resource sharing between two threads (white and gray) in the Pentium 4 NetBurst microarchitecture.
13
Single-Chip Multiprocessor
a) Two areas of interest: servers and consumer electronics
b) Homogeneous chips
– Two pipelines, one CPU
– Two CPUs (same design)
c) Heterogeneous chips
– CPUs for DVD players or cell phones
– More done in software => slower but cheaper
– Many different cores (essentially libraries)
14
Sample chip
a) Cores on a chip for a DVD player:
– Control
– MPEG video decoder
– Audio decoder
– Video encoder
– Disk controller
– Cache
b) Cores require an interconnect:
– IBM CoreConnect
– AMBA (Advanced Microcontroller Bus Architecture)
– VCI (Virtual Component Interconnect)
– OCP-IP (Open Core Protocol)
15
Homogeneous Multiprocessors on a Chip Single-chip multiprocessors. (a) A dual-pipeline chip. (b) A chip with two cores.
16
Heterogeneous Multiprocessors on a Chip (1) The logical structure of a simple DVD player: a heterogeneous multiprocessor containing multiple cores for different functions.
17
Heterogeneous Multiprocessors on a Chip (2) An example of the IBM CoreConnect architecture.
18
Coprocessors
a) Come in a variety of sizes
– Separate cabinets (for mainframes)
– Separate boards
– Separate chips
b) Primary purpose is to offload work from and assist the main processor
c) Different types
– I/O
– DMA
– Floating point
– Network
– Graphics
– Encryption
19
Introduction to Networking (1) How users are connected to servers on the Internet.
20
Networks
a) LAN – Local Area Network
b) WAN – Wide Area Network
c) Packet – a chunk of data on the network, 64–1500 bytes
d) Store-and-forward packet switching – what a router does
e) Internet – a series of WANs linked by routers
f) ISP – Internet Service Provider
g) Firewall – a specialized computer that filters traffic
h) Protocols – sets of formats, exchange sequences, and rules
i) HTTP – HyperText Transfer Protocol
j) TCP – Transmission Control Protocol
k) IP – Internet Protocol
21
Networks
a) CRC – Cyclic Redundancy Check
b) TCP header – information about the data for the TCP level
c) IP header – routing information: source, destination, hop count
d) Ethernet header – next-hop (MAC) addresses and CRC
e) ASIC – Application-Specific Integrated Circuit
f) FPGA – Field-Programmable Gate Array
g) Network processor – a programmable device that handles incoming and outgoing packets at wire speed
h) PPE – Protocol/Programmable/Packet Processing Engine
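A rough sketch of how these headers nest on the wire. The field widths follow the usual Ethernet II / IPv4 / TCP layouts; the struct names are invented here, and real protocol stacks use packed, byte-order-aware definitions rather than plain C structs:

/* Simplified nesting: Ethernet -> IP -> TCP -> payload, with the
 * Ethernet CRC trailing the whole frame. Illustrative only. */
#include <stdint.h>
#include <stdio.h>

struct eth_header {          /* link layer: next-hop (MAC) addressing */
    uint8_t  dest_mac[6];
    uint8_t  src_mac[6];
    uint16_t ethertype;      /* 0x0800 = IPv4 */
};

struct ip_header {           /* network layer: end-to-end routing */
    uint8_t  version_ihl;
    uint8_t  tos;
    uint16_t total_length;
    uint16_t id;
    uint16_t flags_fragment; /* used for fragmentation/reassembly */
    uint8_t  ttl;            /* hop limit */
    uint8_t  protocol;       /* 6 = TCP */
    uint16_t checksum;
    uint32_t src_addr;
    uint32_t dest_addr;
};

struct tcp_header {          /* transport layer: reliable byte stream */
    uint16_t src_port;
    uint16_t dest_port;
    uint32_t seq;
    uint32_t ack;
    uint16_t offset_flags;   /* data offset + flag bits, simplified */
    uint16_t window;
    uint16_t checksum;
    uint16_t urgent;
};

int main(void) {
    printf("header bytes (unpacked structs): eth=%zu ip=%zu tcp=%zu\n",
           sizeof(struct eth_header), sizeof(struct ip_header),
           sizeof(struct tcp_header));
    return 0;
}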
22
Introduction to Networking (2) A packet as it appears on the Ethernet.
23
Introduction to Network Processors A typical network processor board and chip.
24
Packet Processing
a) Checksum verification
b) Field extraction
c) Packet classification
d) Path selection
e) Destination network determination
f) Route lookup
g) Fragmentation and reassembly
h) Computation (compression/encryption)
i) Header management
j) Queue management
k) Checksum generation
l) Accounting
m) Statistics gathering
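A minimal sketch of the ingress side of such a pipeline, covering only a few of the steps above. Every type and helper here (the packet struct, checksum_ok, classify, route_lookup) is an invented placeholder, not a real network-processor API:

/* Skeleton of a per-packet ingress loop: checksum verification,
 * classification, route lookup, and statistics gathering. */
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

struct packet { const uint8_t *data; size_t len; uint32_t dest_ip; };

static uint64_t packets_in, packets_dropped;   /* accounting/statistics */

static bool checksum_ok(const struct packet *p) {
    uint32_t sum = 0;                          /* toy additive checksum */
    for (size_t i = 0; i + 1 < p->len; i++) sum += p->data[i];
    return (uint8_t)sum == p->data[p->len - 1];
}

static int classify(const struct packet *p) { return p->dest_ip >> 28; }
static int route_lookup(uint32_t dest_ip)   { return dest_ip & 0x3; }

void handle_packet(struct packet *p) {
    packets_in++;                              /* statistics gathering  */
    if (!checksum_ok(p)) {                     /* checksum verification */
        packets_dropped++;
        return;
    }
    int class = classify(p);                   /* packet classification */
    int out_port = route_lookup(p->dest_ip);   /* route lookup          */
    (void)class; (void)out_port;               /* queueing, header rewriting,
                                                  and checksum generation
                                                  would follow here     */
}

int main(void) {
    uint8_t buf[4] = {1, 2, 3, 6};             /* 1+2+3 == 6: checksum ok */
    struct packet p = { buf, sizeof buf, 0x0A000001u };
    handle_packet(&p);
    return 0;
}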
25
Improving Performance
a) Performance is the name of the game
b) How to measure it:
– Packets per second
– Bytes per second
c) Ways to speed it up:
– Performance does not scale linearly with clock speed
– Introduce more PPEs
– Add specialized processors
– Add more internal buses
– Widen existing buses
– Replace SDRAM with SRAM
26
The Nexperia Media Processor The Nexperia heterogeneous multiprocessor on a chip.
27
Multiprocessors (a) A multiprocessor with 16 CPUs sharing a common memory. (b) An image partitioned into 16 sections, each being analyzed by a different CPU.
28
Shared-Memory Multiprocessors
a) Multiprocessor – has shared memory
b) SMP (Symmetric Multiprocessor) – every CPU can access every memory module and I/O device
c) Multicomputer (distributed memory system) – each computer has its own memory
d) A multiprocessor has one address space
e) A multicomputer has one address space per computer
f) Multicomputers pass messages to communicate
g) Ease of programming vs. ease of construction
h) DSM – Distributed Shared Memory: shared memory emulated on a multicomputer by servicing page faults over the network
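A small pthreads sketch of the shared-memory style of communication described above; on a multicomputer the same exchange would instead be an explicit send/receive message pair. The variable names are illustrative only:

/* Two threads communicating through a shared variable (the
 * multiprocessor model). On a multicomputer the producer would
 * send() the value and the consumer would receive() it instead. */
#include <pthread.h>
#include <stdio.h>

static int shared_value;
static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int has_value;

static void *producer(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    shared_value = 42;            /* an ordinary store into shared memory */
    has_value = 1;
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    pthread_mutex_lock(&lock);
    while (!has_value)            /* wait until the producer has written */
        pthread_cond_wait(&ready, &lock);
    printf("consumer read %d from shared memory\n", shared_value);
    pthread_mutex_unlock(&lock);
    pthread_join(t, NULL);
    return 0;
}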
29
Multicomputers (1) (a) A multicomputer with 16 CPUs, each with its own private memory. (b) The bit-map image of Fig. 8-17 split up among the 16 memories.
30
Multicomputers (2) Various layers where shared memory can be implemented. (a) The hardware. (b) The operating system. (c) The language runtime system.
31
Taxonomy of Parallel Computers (1) Flynn’s taxonomy of parallel computers.
32
Taxonomy of Parallel Computers (2) A taxonomy of parallel computers.
33
MIMD categories
a) UMA – Uniform Memory Access
b) NUMA – Nonuniform Memory Access
c) COMA – Cache Only Memory Access
d) Multicomputers are NORMA (No Remote Memory Access)
– MPP – Massively Parallel Processor
34
Consistency Models
a) A contract on how hardware and software will work with memory
b) Strict consistency – any read of location X returns the most recently written value of X
c) Sequential consistency – all CPUs see all memory accesses in the same single interleaved order (not necessarily the true time order)
d) Processor consistency – writes by any one CPU are seen by all CPUs in the order they were issued, and for every memory word all CPUs see all writes to it in the same order
e) Weak consistency – no guarantees unless synchronization operations are used
f) Release consistency – writes made inside a critical section must complete before the critical section can be entered again
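A sketch of why the weaker models need explicit synchronization: without the release/acquire pair below, a reader on another CPU could see the flag set before it sees the data. This uses C11 atomics as the synchronization operation and is purely illustrative:

/* Under a weak memory model, the reader could observe flag == 1 while
 * still seeing the old value of data unless synchronization is used.
 * The release/acquire pair is the "synchronization operation" that the
 * weak and release consistency models rely on. */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static int data;                       /* ordinary shared variable   */
static atomic_int flag;                /* synchronization variable   */

static void *writer(void *arg) {
    (void)arg;
    data = 123;                                             /* plain write */
    atomic_store_explicit(&flag, 1, memory_order_release);  /* "release"   */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, writer, NULL);
    while (atomic_load_explicit(&flag, memory_order_acquire) == 0)
        ;                              /* "acquire": pairs with the release */
    printf("data = %d (guaranteed 123 thanks to acquire/release)\n", data);
    pthread_join(t, NULL);
    return 0;
}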
35
Sequential Consistency (a) Two CPUs writing and two CPUs reading a common memory word. (b) - (d) Three possible ways the two writes and four reads might be interleaved in time.
36
Weak Consistency Weakly consistent memory uses synchronization operations to divide time into sequential epochs.
37
UMA Symmetric Multiprocessor Architectures Three bus-based multiprocessors. (a) Without caching. (b) With caching. (c) With caching and private memories.
38
Cache as Cache Can
a) A cache coherence protocol keeps cached copies of memory consistent (e.g., the write-through protocol)
b) A snooping cache monitors the bus for accesses to memory it has cached
c) Choose between an update strategy and an invalidate strategy
d) The MESI protocol is named after its four states: Modified, Exclusive, Shared, Invalid
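A toy encoding of the MESI state transitions for a single line in a single cache. It models only the events named above (local read/write, snooped read/write) and omits bus replies and write-backs, so it is a teaching sketch rather than the full protocol:

/* Toy MESI transition function for one cache line in one cache. */
#include <stdio.h>

typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } mesi_t;
typedef enum { LOCAL_READ, LOCAL_WRITE, SNOOP_READ, SNOOP_WRITE } event_t;

static mesi_t next_state(mesi_t s, event_t e) {
    switch (e) {
    case LOCAL_READ:
        return (s == INVALID) ? EXCLUSIVE /* assumes no other sharer */ : s;
    case LOCAL_WRITE:
        return MODIFIED;                  /* gain ownership, line is dirty */
    case SNOOP_READ:
        return (s == INVALID) ? INVALID : SHARED;  /* others now share it  */
    case SNOOP_WRITE:
        return INVALID;                   /* another CPU invalidates us    */
    }
    return s;
}

int main(void) {
    const char *name[] = { "Invalid", "Shared", "Exclusive", "Modified" };
    mesi_t s = INVALID;
    event_t trace[] = { LOCAL_READ, LOCAL_WRITE, SNOOP_READ, SNOOP_WRITE };
    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++) {
        s = next_state(s, trace[i]);
        printf("after event %u: %s\n", i, name[s]);
    }
    return 0;
}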
39
Snooping Caches The write through cache coherence protocol. The empty boxes indicate that no action is taken.
40
The MESI Cache Coherence Protocol The MESI cache coherence protocol.
41
UMA Multiprocessors Using Crossbar Switches (a) An 8 × 8 crossbar switch. (b) An open crosspoint. (c) A closed crosspoint.
42
UMA Multiprocessors Using Multistage Switching Networks (1) (a) A 2 × 2 switch. (b) A message format.
43
UMA Multiprocessors Using Multistage Switching Networks (2) An omega switching network.
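The routing rule behind the omega network figure fits in a few lines: each stage examines one bit of the destination module number, taken most-significant bit first, with 0 selecting the upper switch output and 1 the lower. A sketch for an 8-CPU, 3-stage network (the bit convention is the usual textbook one; treat it as an assumption):

/* Destination-bit routing through a 3-stage omega network connecting
 * 8 CPUs to 8 memory modules. */
#include <stdio.h>

#define STAGES 3   /* log2(8) */

static void route(int dest_module) {
    printf("to module %d: ", dest_module);
    for (int stage = 0; stage < STAGES; stage++) {
        int bit = (dest_module >> (STAGES - 1 - stage)) & 1;
        printf("stage %d -> %s  ", stage, bit ? "lower" : "upper");
    }
    printf("\n");
}

int main(void) {
    route(6);   /* 110: lower, lower, upper */
    route(1);   /* 001: upper, upper, lower */
    return 0;
}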
44
NUMA Multiprocessors A NUMA machine based on two levels of buses. The Cm* was the first multiprocessor to use this design.
45
Cache Coherent NUMA Multiprocessors (a) A 256-node directory-based multiprocessor. (b) Division of a 32-bit memory address into fields. (c) The directory at node 36.
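A sketch of the address split in panel (b), assuming the 256-node, 64-byte-line example: an 8-bit node field, an 18-bit line field, and a 6-bit offset. The constants and the example address are the assumptions here:

/* Splitting a 32-bit physical address into the node / cache-line /
 * offset fields used by a directory-based CC-NUMA machine. */
#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 6          /* 64-byte cache lines       */
#define LINE_BITS   18         /* lines per node's memory   */
#define NODE_BITS   8          /* 256 nodes                 */

int main(void) {
    uint32_t addr = 0x24ABC040u;                 /* arbitrary example  */
    uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);
    uint32_t line   = (addr >> OFFSET_BITS) & ((1u << LINE_BITS) - 1);
    uint32_t node   = addr >> (OFFSET_BITS + LINE_BITS); /* home node  */
    printf("addr 0x%08X -> node %u, line %u, offset %u\n",
           addr, node, line, offset);
    /* The directory at the home node records, for each line, which
     * other node (if any) currently caches it. */
    return 0;
}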
46
The Sun Fire E25K NUMA Multiprocessor (1) The Sun Microsystems E25K multiprocessor.
47
The Sun Fire E25K NUMA Multiprocessor (2) The SunFire E25K uses a four-level interconnect. Dashed lines are address paths. Solid lines are data paths.
48
Message-Passing Multicomputers A generic multicomputer.
49
Topology Various topologies. The heavy dots represent switches. The CPUs and memories are not shown. (a) A star. (b) A complete interconnect. (c) A tree. (d) A ring. (e) A grid. (f) A double torus. (g) A cube. (h) A 4D hypercube.
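For the hypercube in panel (h), routing is particularly simple: node numbers of neighbours differ in exactly one bit, so a message is routed by fixing the differing bits one dimension at a time, and the hop count is the Hamming distance. A short sketch for the 16-node (4-D) case:

/* Dimension-order routing in a 4-dimensional hypercube. */
#include <stdio.h>

#define DIMS 4

static void route(unsigned src, unsigned dst) {
    printf("route %u -> %u:", src, dst);
    unsigned cur = src;
    for (unsigned d = 0; d < DIMS; d++) {
        unsigned bit = 1u << d;
        if ((cur ^ dst) & bit) {       /* this dimension still differs */
            cur ^= bit;                /* hop across dimension d       */
            printf(" %u", cur);
        }
    }
    printf("\n");                      /* hops taken = Hamming distance */
}

int main(void) {
    route(0, 13);   /* 0000 -> 1101: three hops */
    route(5, 5);    /* already there: no hops   */
    return 0;
}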
50
BlueGene (1) The BlueGene/L custom processor chip.
51
BlueGene (2) The BlueGene/L. (a) Chip. (b) Card. (c) Board. (d) Cabinet. (e) System.
52
Red Storm (1) Packaging of the Red Storm components.
53
Red Storm (2) The Red Storm system as viewed from above.
54
A Comparison of BlueGene/L and Red Storm A comparison of BlueGene/L and Red Storm.
55
Google (1) Processing of a Google query.
56
Google (2) A typical Google cluster.
57
Scheduling Scheduling a cluster. (a) FIFO. (b) Without head-of-line blocking. (c) Tiling. The shaded areas indicate idle CPUs.
58
Distributed Shared Memory (1) A virtual address space consisting of 16 pages spread over four nodes of a multicomputer. (a) The initial situation. ….
59
Distributed Shared Memory (2) A virtual address space consisting of 16 pages spread over four nodes of a multicomputer. … (b) After CPU 0 references page 10. …
60
Distributed Shared Memory (3) A virtual address space consisting of 16 pages spread over four nodes of a multicomputer. … (c) After CPU 1 references page 10, here assumed to be a read-only page.
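A toy model of the bookkeeping behind these three panels: a reference to a page that is not local either migrates it (writable page) or replicates it (read-only page). The data structures and initial page placement are invented for illustration; a real DSM system uses the MMU and genuine page faults rather than a table like this:

/* Toy DSM bookkeeping for 16 pages on 4 nodes. */
#include <stdio.h>
#include <stdbool.h>

#define PAGES 16
#define NODES 4

static bool present[NODES][PAGES];    /* which node holds which page  */
static bool read_only[PAGES];

static void reference(int node, int page) {
    if (present[node][page]) return;                 /* local: no fault */
    if (read_only[page]) {                           /* replicate copy  */
        present[node][page] = true;
        printf("node %d: copied read-only page %d\n", node, page);
    } else {                                         /* migrate page    */
        for (int n = 0; n < NODES; n++) present[n][page] = false;
        present[node][page] = true;
        printf("node %d: migrated page %d\n", node, page);
    }
}

int main(void) {
    for (int p = 0; p < PAGES; p++) present[p % NODES][p] = true;
    reference(0, 10);      /* like panel (b): page 10 migrates to node 0 */
    read_only[10] = true;  /* panel (c) assumes the page is read-only    */
    reference(1, 10);      /* so node 1 gets a copy instead of the page  */
    return 0;
}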
61
Linda Three Linda tuples.
62
Orca A simplified Orca stack object, with internal data and two operations.
63
Software Metrics (1) Real programs achieve less than the perfect speedup indicated by the dotted line.
64
Software Metrics (2) (a) A program has a sequential part and a parallelizable part. (b) Effect of running part of the program in parallel.
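The curve in the previous figure and the sequential/parallel split shown here are captured by Amdahl's law: with a fraction f of the work inherently sequential, n CPUs give a speedup of n / (1 + (n - 1)f). A quick computation, with f = 0.05 chosen only as an example:

/* Amdahl's law: speedup(n) = n / (1 + (n - 1) * f), where f is the
 * fraction of the program that is inherently sequential. */
#include <stdio.h>

static double speedup(int n, double f) {
    return n / (1.0 + (n - 1) * f);
}

int main(void) {
    double f = 0.05;                 /* 5% sequential code (example value) */
    for (int n = 1; n <= 64; n *= 2)
        printf("%2d CPUs: speedup %.2f\n", n, speedup(n, f));
    /* Even with only 5% sequential code, 64 CPUs give well under 64x. */
    return 0;
}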
65
Achieving High Performance (a) A 4-CPU bus-based system. (b) A 16-CPU bus-based system. (c) A 4-CPU grid-based system. (d) A 16-CPU grid-based system.
66
Grid Computing The grid layers.