1 Lecture 7: Implementing Cache Coherence
Topics: implementation details

2 Implementing Coherence Protocols
- Correctness and performance are not the only metrics; an implementation must also avoid:
  - Deadlock: a cycle of resource dependencies, where each process holds shared resources in a non-preemptible fashion while waiting for resources held by others
  - Livelock: similar to deadlock, except that transactions continue to flow through the system without any process making forward progress
  - Starvation: an extreme case of unfairness, where some process's requests are denied indefinitely while others proceed

3 Basic Implementation
- Assume a single level of cache and atomic bus transactions
- It is simpler to implement a processor-side cache controller that services requests from the processor and a bus-side cache controller that services bus transactions (snoops)
- Both controllers are constantly trying to read the tags, so the tags can be duplicated (a moderate area overhead):
  - unlike data, tags are rarely updated
  - tag updates must be applied to both copies and briefly stall the other controller
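To make the dual-tag idea concrete, here is a minimal C sketch (all names are illustrative, not from the lecture): each controller probes its own copy of the tag array, and only the rare state/tag updates touch both copies.

```c
#include <stdint.h>

#define NUM_SETS 128
#define NUM_WAYS 2

typedef enum { INVALID, SHARED, MODIFIED } line_state_t;

typedef struct {
    uint64_t     tag;
    line_state_t state;
} tag_entry_t;

/* Two identical copies of the tag array: the processor-side
 * controller probes proc_tags, the bus-side controller probes
 * bus_tags, so ordinary lookups never contend with each other. */
tag_entry_t proc_tags[NUM_SETS][NUM_WAYS];
tag_entry_t bus_tags[NUM_SETS][NUM_WAYS];

/* Only an actual update must be applied to both copies, which is
 * the moment the other controller is stalled. */
void update_both_copies(int set, int way, uint64_t tag, line_state_t st)
{
    /* (acquire both tag arrays, stalling the other controller) */
    proc_tags[set][way].tag   = tag;
    proc_tags[set][way].state = st;
    bus_tags[set][way].tag    = tag;
    bus_tags[set][way].state  = st;
    /* (release) */
}
```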

4 Reporting Snoop Results
- Uniprocessor system: the initiator places an address on the bus, all devices monitor the address, one device acks by raising a wired-OR signal, and the data is transferred
- In a multiprocessor, memory has to wait for the snoop result before it chooses to respond; this needs three wired-OR signals: (i) a cache has a copy, (ii) a cache has a modified copy, (iii) the snoop has not yet completed
- Ensuring timely snoops: the time to respond can be fixed or variable (using the third wired-OR signal), or memory can itself track whether some cache holds the block in M state
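A small software analogue of the three wired-OR lines (struct and function names are mine): each line reads as the OR of every cache's contribution, and memory samples the lines before responding.

```c
#include <stdbool.h>

#define NUM_CACHES 4

/* Hypothetical per-cache snoop answer. */
typedef struct {
    bool has_copy;        /* block present in some valid state */
    bool has_modified;    /* block present in M state          */
    bool still_snooping;  /* tag lookup has not finished yet   */
} snoop_result_t;

/* Model of the wired-OR behavior: a line is 1 if ANY cache drives
 * it.  Memory waits while *inhibit is high, and suppresses its own
 * data response when *dirty is high (the owning cache supplies the
 * data instead). */
void aggregate(const snoop_result_t r[NUM_CACHES],
               bool *shared, bool *dirty, bool *inhibit)
{
    *shared = *dirty = *inhibit = false;
    for (int i = 0; i < NUM_CACHES; i++) {
        *shared  |= r[i].has_copy;
        *dirty   |= r[i].has_modified;
        *inhibit |= r[i].still_snooping;
    }
}
```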

5 Non-Atomic State Transitions
- Note that a cache controller's actions are not all atomic: tag look-up, bus arbitration, bus transaction, and data/tag update happen as separate steps
- Consider this: block A is in shared state in P1 and P2; both issue a write; both bus controllers are ready to issue an upgrade request and try to acquire the bus; is there a problem? (Yes: the loser of arbitration sees the winner's upgrade, must invalidate its copy, and its own queued upgrade no longer matches the block's state)
- The controller can keep track of additional intermediate (transient) states so it can react to bus traffic while its own request is in flight (e.g., S → M, I → M, I → S,E)
- Alternatively, eliminate the upgrade request: always issue a read-exclusive, and use the shared wire to suppress memory's response to an exclusive read
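A sketch of the transient-state idea in C, assuming a MESI protocol (the enum and handler names are illustrative): the controller in a transient state can react correctly when it snoops the competing upgrade described above.

```c
/* MESI states plus transient states for requests that have left
 * the processor but have not yet won bus arbitration. */
typedef enum {
    I, S, E, M,
    S_TO_M,   /* upgrade issued, waiting for the bus          */
    I_TO_M,   /* write miss issued, waiting for the bus       */
    I_TO_SE   /* read miss issued, waiting for the bus        */
} state_t;

/* Reaction of a controller when it snoops another processor's
 * upgrade/invalidate for the same block.  A controller caught in
 * S_TO_M has lost the race: its copy is now invalid, so the
 * pending upgrade must be converted into a full read-exclusive
 * that also re-fetches the data. */
state_t on_remote_invalidate(state_t cur)
{
    switch (cur) {
    case S:      return I;
    case S_TO_M: return I_TO_M;  /* lost the race: must re-fetch */
    default:     return cur;
    }
}
```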

6 Serialization
- Write serialization is an important requirement for coherence and sequential consistency: writes must be seen by all processors in the same order
- On a write, the processor hands the request to the cache controller, and some time elapses before the bus transaction happens (the point at which the external world sees the write)
- If the writing processor continued executing after handing the write to the controller, different processors might observe its writes in different orders; hence, the processor is not allowed to continue until the write has completed
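A toy C sketch of this stall (the controller hook is a stub I invented for illustration): the processor does not issue its next access until the controller signals that the write's bus transaction is done.

```c
#include <stdbool.h>
#include <stdint.h>

static volatile bool write_completed;

/* Stub standing in for the real cache controller: in hardware this
 * would arbitrate for the bus and perform the transaction before
 * raising write_completed. */
static void hand_to_controller(uint64_t addr, uint64_t value)
{
    (void)addr; (void)value;
    write_completed = true;  /* pretend the bus transaction finished */
}

/* The processor hands the write off, then stalls.  Letting later
 * accesses proceed before this point could let other processors
 * observe this processor's writes out of program order. */
void processor_store(uint64_t addr, uint64_t value)
{
    write_completed = false;
    hand_to_controller(addr, value);
    while (!write_completed)
        ;  /* stall until the write is globally visible */
}
```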

7 Livelock
- Livelock can happen if the processor-cache handshake is not designed correctly
- Before the processor can attempt its write, it must acquire the block in exclusive state
- If all processors are writing to the same block, one of them acquires the block first; if another exclusive request is then seen on the bus, the cache controller must let its processor complete the write before releasing the block, else the processor's write would fail again because the block would be back in invalid state (and the block could ping-pong among the writers forever without any write completing)
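A minimal sketch of the decision that prevents this livelock (field and function names are mine): a remote request for a block acquired for a still-pending write is deferred rather than serviced.

```c
#include <stdbool.h>

typedef struct {
    bool write_pending;  /* block acquired in exclusive state, but the
                            processor's write not yet performed */
    /* ... tag, state, data ... */
} block_t;

/* Bus-side controller policy: if the block were surrendered while
 * write_pending is set, the local write would miss again, and with
 * every processor behaving this way, no write would ever complete. */
bool may_service_remote_request(const block_t *b)
{
    return !b->write_pending;  /* else: defer/NACK and retry later */
}
```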

8 Atomic Instructions
- A test&set instruction acquires the block in exclusive state and does not release the block until both the read and the write have completed
- Should an LL (load-linked) bring the block in exclusive state to avoid bus traffic during the SC (store-conditional)?
- For the SC to succeed, a bit associated with the cache block must still be set (the bit is set by the LL and reset when a write to that block is observed or when the block is evicted)
- What happens if an instruction between the LL and SC causes the LL-SC block to always be replaced? (The SC can never succeed and the pair retries forever, which is why the code between LL and SC should be short and avoid conflicting memory accesses)
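The link bit can be modeled in a few lines of C. This is a toy, single-threaded simulation of the mechanism, not real LL/SC (which requires hardware support); all names are mine.

```c
#include <stdint.h>
#include <stdbool.h>

/* The "bit associated with the cache block" from the slide:
 * LL records the address and sets it; any observed write to (or
 * eviction of) that block clears it; SC succeeds only if it
 * survived. */
static uint64_t link_addr;
static bool     link_valid;

uint64_t load_linked(volatile uint64_t *p)
{
    link_addr  = (uint64_t)p;
    link_valid = true;
    return *p;
}

/* Called by the snoop logic when a remote write to the block is
 * observed, and by the replacement logic on eviction. */
void on_block_invalidated(uint64_t addr)
{
    if (link_valid && link_addr == addr)
        link_valid = false;
}

bool store_conditional(volatile uint64_t *p, uint64_t v)
{
    if (!link_valid || link_addr != (uint64_t)p)
        return false;  /* SC fails; software retries from the LL */
    *p = v;
    link_valid = false;
    return true;
}

/* Typical use: an atomic increment built from the LL/SC pair. */
void atomic_inc(volatile uint64_t *p)
{
    uint64_t v;
    do { v = load_linked(p); } while (!store_conditional(p, v + 1));
}
```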

9 Multilevel Cache Hierarchies
- Ideally, the snooping protocol employed at L2 would have to be duplicated at L1, which is redundant work because many blocks are common to L1 and L2
- Inclusion (every block in L1 is also present in L2) greatly simplifies the implementation: L2 can snoop on behalf of L1

10 Maintaining Inclusion
- Assuming equal block sizes, if L1 is 8KB 2-way and L2 is 256KB 8-way, is the hierarchy inclusive? (assume that an L1 miss brings the block into both L1 and L2)
- Assuming equal block sizes, if L1 is 8KB direct-mapped and L2 is 256KB 8-way, is the hierarchy inclusive?
- In neither case is inclusion automatic: L1 hits do not update L2's replacement (LRU) state, so L2 may choose a victim that is still resident in L1
- To maintain inclusion, L2 replacements must therefore also evict (back-invalidate) the corresponding blocks in L1
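The set arithmetic behind these questions, worked out in C with a 64 B block size assumed (the slide leaves the block size unspecified):

```c
#include <stdio.h>

int main(void)
{
    const int BLOCK = 64;  /* assumed block size in bytes */

    int l1_sets_2way = 8   * 1024 / (BLOCK * 2);  /* = 64 sets  */
    int l1_sets_dm   = 8   * 1024 / (BLOCK * 1);  /* = 128 sets */
    int l2_sets      = 256 * 1024 / (BLOCK * 8);  /* = 512 sets */

    /* With equal block sizes, the L1 index bits are a subset of the
     * L2 index bits, so every block in a given L2 set maps to one
     * fixed L1 set.  Even so, because L1 hits are invisible to L2's
     * LRU bookkeeping, L2 can evict a block that L1 still holds;
     * hence the need for back-invalidation on every L2 eviction. */
    printf("2-way L1: %d sets, DM L1: %d sets, L2: %d sets\n",
           l1_sets_2way, l1_sets_dm, l2_sets);
    return 0;
}
```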

11 Intra-Hierarchy Protocol
- Some coherence traffic (invalidations, interventions) needs to be propagated up to L1; likewise, L1 write traffic needs to be propagated down to L2
- What is the best way to implement this: more traffic (propagate everything) or more state (track what L1 holds and filter)?
- In general, external requests percolate upward from L3 to L1, and processor requests percolate downward from L1 to L3
- Dual tags are less important here, since L2 filters out most bus transactions for L1, and L1 filters out most processor requests for L2

12 Split Transaction Bus
- What would it take to implement the protocol correctly on a split transaction bus?
- Split transaction bus: a cache puts out a request and releases the bus (so others can use it), then receives its response much later
- Assumptions:
  - only one request per block can be outstanding
  - there are separate lines for addresses (requests) and data (responses)
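One way to enforce the one-outstanding-request-per-block assumption is a request table snooped by every controller; a minimal C sketch (names and table size are illustrative, though eight entries matches the SGI Challenge discussed later):

```c
#include <stdint.h>
#include <stdbool.h>

#define TABLE_SIZE 8  /* e.g., eight outstanding requests */

typedef struct {
    bool     valid;
    uint64_t block_addr;
    int      requester;
} req_entry_t;

req_entry_t request_table[TABLE_SIZE];

/* A new request for a block that already has a live table entry
 * must be held back until the response retires that entry. */
bool may_issue(uint64_t block_addr)
{
    for (int i = 0; i < TABLE_SIZE; i++)
        if (request_table[i].valid &&
            request_table[i].block_addr == block_addr)
            return false;  /* conflict: wait and retry */
    return true;
}

/* Called when the matching response appears on the response lines:
 * the entry is freed and a new request for the block may issue. */
void retire(uint64_t block_addr)
{
    for (int i = 0; i < TABLE_SIZE; i++)
        if (request_table[i].valid &&
            request_table[i].block_addr == block_addr)
            request_table[i].valid = false;
}
```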

13 Split Transaction Bus
[Figure: three processor/cache pairs attached to two separate sets of bus lines, request (address) lines and response (data) lines]

14 Design Issues
- When does the snoop complete? What if the snoop takes a long time?
- What if the buffer in a processor/memory is full? When does the buffer release an entry? Are all the buffers identical?
- How does each processor ensure that a block does not have multiple outstanding requests?
- What determines the write order: requests or responses?

15 Design Issues II
- What happens if a processor is arbitrating for the bus and witnesses another bus transaction for the same address?
- If the processor issues a read miss and there is already a matching read in the request table, can we reduce bus traffic? (Yes: the later requester can record its interest and grab the response off the bus when it appears, as sketched below)
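A sketch of this read-merging optimization, extending the earlier request-table structure (all names are mine): a miss that matches an in-flight read adds itself to the entry's waiter set instead of issuing a second bus request.

```c
#include <stdint.h>
#include <stdbool.h>

#define TABLE_SIZE 8
#define NUM_PROCS  4

typedef struct {
    bool     valid;
    bool     is_read;
    uint64_t block_addr;
    bool     waiting[NUM_PROCS];  /* all requesters sharing this entry */
} req_entry_t;

req_entry_t table[TABLE_SIZE];

/* Returns true if the read miss was merged with an in-flight read,
 * in which case no new bus request is needed: when the response for
 * the entry appears on the bus, every waiting processor picks it up. */
bool try_merge_read(uint64_t block_addr, int proc)
{
    for (int i = 0; i < TABLE_SIZE; i++) {
        req_entry_t *e = &table[i];
        if (e->valid && e->is_read && e->block_addr == block_addr) {
            e->waiting[proc] = true;
            return true;
        }
    }
    return false;  /* no match: issue a fresh bus request */
}
```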

16 Shared Cache Designs
- There are benefits to sharing the first-level cache among many processors (for example, in a CMP):
  - no coherence protocol is needed at that level
  - low-cost communication between processors
  - better prefetching: data fetched by one processor can serve the others
  - working-set overlap allows the shared cache to be smaller than the combined size of the private caches
  - better utilization of cache capacity
- Disadvantages:
  - high contention for cache ports
  - longer hit latency (larger size, and greater distance from each processor)
  - more conflict misses, since many processors compete for the same sets

17 TLBs
- Recall that a TLB caches virtual-to-physical page translations
- While swapping a page out, can we have a problem in a multiprocessor system? Yes: stale copies of the translation may survive in other processors' TLBs, so all matching entries in every processor's TLB must be removed
- TLB shootdown: the initiating processor interrupts the other processors (typically via an inter-processor interrupt), asking each to invalidate the page-table entry from its TLB
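A schematic C sketch of a shootdown's initiator side; OS and hardware details vary widely, so `send_ipi` and `local_tlb_invalidate` are hypothetical stand-ins (stubbed here so the sketch runs):

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_PROCS 4
static volatile bool acked[NUM_PROCS];

/* Stubs for the real primitives: an IPI would trap the target
 * processor into a handler that invalidates its own TLB entry and
 * then sets acked[]. */
static void local_tlb_invalidate(uint64_t vpage) { (void)vpage; }
static void send_ipi(int cpu, uint64_t vpage)
{
    (void)vpage;
    acked[cpu] = true;  /* stand-in for the remote handler's ack */
}

/* Invalidate locally, tell everyone else, and wait for all
 * acknowledgements before the physical page can be reused. */
void tlb_shootdown(int self, uint64_t vpage)
{
    local_tlb_invalidate(vpage);
    for (int cpu = 0; cpu < NUM_PROCS; cpu++) {
        if (cpu == self) continue;
        acked[cpu] = false;
        send_ipi(cpu, vpage);
    }
    for (int cpu = 0; cpu < NUM_PROCS; cpu++)
        while (cpu != self && !acked[cpu])
            ;  /* spin until every remote TLB entry is gone */
}
```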

18 Case Study: SGI Challenge
- Supports 18 or 36 MIPS processors
- Employs a 1.2 GB/s, 47.6 MHz system bus (Powerpath-2)
- The bus has 256 data lines and 40 address lines, plus 33 other signals (the bus is not multiplexed)
- Split transaction, supporting eight outstanding requests
- Employs the MESI protocol by default, and also supports update transactions

19 Processor Board
- Each board has four processors (reducing the number of slots on the bus for 36 processors from 36 to 9)
- The A-chip has the request tables, arbitration logic, etc.
[Figure: four MIPS processors, each with an L2 cache and a cache-controller chip with duplicate tags (CC/Tags), connected through the board's A-chip (address) and D-chip (data) to the system bus]

20 Latencies
- 75 ns for an L2 cache hit
- 300 ns for a cache miss to percolate down to the A-chip
- An additional 400 ns for the data to be delivered to the D-chips across the bus (this includes the 250 ns memory latency)
- Another 300 ns for the data to reach the processor; a miss thus costs roughly 300 + 400 + 300 = 1000 ns end to end
- Note that the system bus carries 256 bits of data, while the CC-chip to processor interface handles only 64 bits at a time

21 Sun Enterprise 6000
- Supports up to 30 UltraSPARC processors
- 2.67 GB/s, 83.5 MHz Gigaplane system bus
- Non-multiplexed bus with 256 data bits, 41 address bits, and 91 bits for control, error correction, etc.
- Split transaction bus with up to 112 outstanding requests
- Each node speculatively drives the bus in parallel with arbitration, to hide arbitration latency
- L2 hits take 40 ns; a memory access takes 300 ns
