1 COMP 206: Computer Architecture and Implementation. Montek Singh, Wed., Nov. 12, 2003. Topics: 1. Cache Performance (concl.), 2. Cache Coherence

2 Review: Improving Cache Performance
There are three ways to improve cache performance: 1. Reduce the miss rate, 2. Reduce the miss penalty, or 3. Reduce the time to hit in the cache. (All three enter the average memory access time; see the sketch below.)
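
These three levers combine in the usual average memory access time relation, AMAT = hit time + miss rate x miss penalty. Below is a minimal sketch in C; the parameter values are purely illustrative assumptions, not figures from the lecture.

    #include <stdio.h>

    /* AMAT = hit_time + miss_rate * miss_penalty.
       All three numbers below are illustrative assumptions. */
    int main(void)
    {
        double hit_time     = 1.0;   /* cycles to hit in the cache     */
        double miss_rate    = 0.05;  /* fraction of accesses that miss */
        double miss_penalty = 40.0;  /* cycles to service a miss       */

        double amat = hit_time + miss_rate * miss_penalty;
        printf("AMAT = %.2f cycles\n", amat);   /* prints 3.00 here */
        return 0;
    }

Reducing any of the three terms reduces AMAT; the techniques on the following slides each target one of them.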

3 1. Fast Hit Times via Small, Simple Caches
 Simple caches can be faster: cache hit time is increasingly a bottleneck to CPU performance.
 – Set associativity requires complex tag matching, so it is slower.
 – Direct-mapped caches are simpler, so they are faster and allow shorter CPU cycle times; the tag check can be overlapped with transmission of the data.
 Smaller caches can be faster: they can fit on the same chip as the CPU, avoiding the penalty of going off chip.
 – For L2 caches, a compromise is to keep tags on chip and data off chip: fast tag check, yet greater cache capacity.
 – Example: the L1 data cache was reduced from 16KB in the Pentium III to 8KB in the Pentium 4.
(A sketch of a direct-mapped lookup follows below.)
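
To make the direct-mapped case concrete, here is a minimal sketch of how an address splits into tag, index, and offset. The geometry (an 8KB direct-mapped cache with 32-byte blocks) is an assumption chosen for illustration, not taken from the slides.

    #include <stdint.h>
    #include <stdio.h>

    /* Assumed geometry: 8 KB direct-mapped, 32-byte blocks
       => 256 frames, 5 offset bits, 8 index bits, remaining bits are tag. */
    #define BLOCK_BITS 5
    #define INDEX_BITS 8

    static void decompose(uint32_t addr)
    {
        uint32_t offset = addr & ((1u << BLOCK_BITS) - 1);
        uint32_t index  = (addr >> BLOCK_BITS) & ((1u << INDEX_BITS) - 1);
        uint32_t tag    = addr >> (BLOCK_BITS + INDEX_BITS);

        /* The index selects exactly one frame, so the data can be read out
           while the single stored tag is compared against 'tag'. */
        printf("addr=0x%08x tag=0x%x index=%u offset=%u\n",
               (unsigned)addr, (unsigned)tag, (unsigned)index, (unsigned)offset);
    }

    int main(void)
    {
        decompose(0x0040A2F4);
        return 0;
    }

Because only one frame can hold the block, there is no multiplexing among ways, which is why the tag check can overlap with sending the data to the CPU.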

4 2. Fast Hits by Avoiding Address Translation
 For virtual memory: can we send the virtual address to the cache? Such a cache is called a virtually addressed cache (or just a virtual cache), as opposed to a physical cache.
 Benefit: avoids the translation from virtual to physical address on every access, which saves time.
 Problems:
 – Every time the process is switched, the cache logically must be flushed; otherwise we get false hits. The cost is the time to flush plus the "compulsory" misses from an empty cache.
 – Aliases (sometimes called synonyms): two different virtual addresses may map to the same physical address.
 – I/O uses physical addresses, so it needs a mapping to virtual addresses in order to interact with the cache.
 Some solutions partially address these issues:
 – HW guarantee: each cache frame holds a unique physical address.
 – SW guarantee (page coloring): the lower n bits of the virtual and physical addresses must be the same; as long as these bits cover the index field and the cache is direct mapped, aliases select the same frame, so duplicate copies cannot arise. (A sketch of the coloring check follows below.)
 – Solution to the cache-flush problem: add a process-identifier tag, so a block is identified by the process as well as by the address within the process; a hit cannot occur for the wrong process.
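
A minimal sketch of the page-coloring constraint: with a direct-mapped, virtually indexed cache, if every virtual-to-physical mapping agrees in the low-order index-plus-offset bits, then all aliases of a physical block select the same cache frame, so two stale copies cannot coexist. The cache geometry and function name below are illustrative assumptions.

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed geometry: 8 KB direct-mapped cache, 32-byte blocks,
       so the low 13 bits (index + offset) select the frame. */
    #define INDEX_PLUS_OFFSET_BITS 13

    /* Page coloring: the OS only creates mappings in which the low
       INDEX_PLUS_OFFSET_BITS of the virtual and physical addresses agree,
       so any two virtual aliases of the same physical block land in the
       same cache frame. */
    static bool mapping_respects_coloring(uint32_t vaddr, uint32_t paddr)
    {
        uint32_t mask = (1u << INDEX_PLUS_OFFSET_BITS) - 1;
        return (vaddr & mask) == (paddr & mask);
    }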

5 Virtually Addressed Caches
[Diagram] Conventional organization: CPU -> TLB -> Cache -> MEM; the virtual address (VA) is translated to a physical address (PA) before the cache is accessed.
[Diagram] Virtually addressed cache: CPU -> Cache -> TLB -> MEM; the cache uses VA tags, and translation to a PA happens only on a miss.

6 3. Pipeline Write Hits
 Write hits take slightly longer than read hits: tag matching cannot be parallelized with the data transfer, since the tags must be matched before the data is written.
 Key idea: pipeline the writes.
 – Check the tag first; if it matches, let the CPU resume.
 – Let the actual data write take its time. (A sketch with a delayed-write buffer follows below.)
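
A minimal sketch of the idea, using a hypothetical one-entry delayed-write buffer (the names and structure are assumptions, not from the slides): on a write hit, only the tag check is done in the current access; the data is parked and actually written at the start of the next cache access.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical one-entry delayed-write buffer. */
    struct pending_write {
        bool     valid;
        uint32_t frame;   /* frame already verified by the tag check */
        uint32_t data;
    };

    static struct pending_write pw;     /* write parked by the previous hit */
    static uint32_t cache_data[256];    /* data array of an assumed 256-frame cache */

    /* Called at the start of every cache access: retire the parked write. */
    static void retire_pending_write(void)
    {
        if (pw.valid) {
            cache_data[pw.frame] = pw.data;
            pw.valid = false;
        }
    }

    /* Write-hit path: the tag has already matched, so the CPU resumes
       immediately; the data write itself is deferred to the next access. */
    static void write_hit(uint32_t frame, uint32_t data)
    {
        retire_pending_write();
        pw.valid = true;
        pw.frame = frame;
        pw.data  = data;
    }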

7 Cache Optimization Summary
Technique                             MR   MP   HT   Complexity
Larger Block Size                     +    –         0
Higher Associativity                  +         –    1
Victim Caches                         +              2
Pseudo-Associative Caches             +              2
HW Prefetching of Instr/Data          +              2
Compiler-Controlled Prefetching       +              3
Compiler Techniques to Reduce Misses  +              0
Priority to Read Misses                    +         1
Subblock Placement                         +    +    1
Early Restart & Critical Word First        +         2
Non-Blocking Caches                        +         3
Second-Level Caches                        +         2
Small & Simple Caches                 –         +    0
Avoiding Address Translation                    +    2
(MR = miss rate, MP = miss penalty, HT = hit time; + improves the metric, – hurts it.)

8 Impact of Caches
 1960-1985: Speed = ƒ(no. of operations)
 1997: pipelined execution and fast clock rates, out-of-order completion, superscalar instruction issue
 1999: Speed = ƒ(non-cached memory accesses)
 What is the impact on compilers, architects, algorithms, data structures?

9 Cache Coherence (Section 6.3 & Appendix I)

10 Cache Coherence
 A common problem wherever multiple copies of mutable information exist (in both hardware and software):
"If a datum is copied and the copy is to match the original at all times, then all changes to the original must cause the copy to be immediately updated or invalidated." (Richard L. Sites, co-architect of DEC Alpha)
 [Diagram] If a write to the original is not propagated, the copy becomes stale and the copies diverge, which is hard to recover from. Two remedies: write update (push the new value to every copy) and write invalidate (invalidate every other copy). A sketch of both follows below.
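
A minimal sketch contrasting the two remedies, over a toy array of copies (the data structure and names are illustrative assumptions):

    #include <stdbool.h>

    #define NCOPIES 4

    /* Toy model: one original value plus NCOPIES cached copies. */
    struct copy {
        bool valid;
        int  value;
    };

    static struct copy copies[NCOPIES];

    /* Write update: push the new value into every valid copy immediately. */
    static void write_update(int new_value)
    {
        for (int i = 0; i < NCOPIES; i++)
            if (copies[i].valid)
                copies[i].value = new_value;
    }

    /* Write invalidate: mark every copy invalid; a later reader must
       refetch the up-to-date value from the original. */
    static void write_invalidate(void)
    {
        for (int i = 0; i < NCOPIES; i++)
            copies[i].valid = false;
    }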

11 Example of Cache Coherence
 I/O in a uniprocessor with a primary unified cache:
 – The MM (main memory) copy and the cache copy of a memory block are not always coherent.
 – WT cache: the MM copy is stale while a write update to MM is in transit.
 – WB cache: the MM copy is stale while the cache copy is Dirty.
 – The inconsistency is of no concern if no one reads or writes the MM copy; but if I/O is directed to main memory, coherence must be maintained.

12 Example of Cache Coherence (contd.)
 Uniprocessor with a split primary cache:
 – The I-cache contains instructions and the D-cache contains data; often their contents are disjoint.
 – If self-modifying code is allowed, then the same cache block may appear in both caches, and consistency must be enforced.
 – MS-DOS allows self-modifying code: a strong motivation for the unified caches in the Intel i386 and i486.
 – The Pentium has a split primary cache, and supports self-modifying code by enforcing coherence between the I- and D-caches.
 Other places the problem arises: coordinating primary and secondary caches in a uniprocessor, and shared-memory multiprocessors.

13 Two "Snoopy" Protocols
 We will discuss two protocols:
 – A simple three-state protocol (Section 6.3 & Appendix I of HP3).
 – The MESI protocol (an IEEE standard, used by many machines, including the Pentium and the PowerPC 601).
 Snooping: each cache monitors memory-bus activity and takes actions based on that activity. Snooping introduces a fourth category of miss into the 3C model: coherence misses.
 First, we need some notation to discuss the protocols.

14 Notation: Write-Through Cache

15 Notation: Write-Back Cache

16 Three-State Write-Invalidate Protocol
 A minor modification of the WB cache.
 Assumptions:
 – A single bus and a single MM.
 – Two or more CPUs, each with a WB cache.
 – Every cache block is in one of three states: Invalid, Clean, or Dirty (called Invalid, Shared, and Exclusive in Figure 6.10 of HP3).
 – MM copies of blocks have no state.
 – At any moment, a single cache owns the bus (is the bus master).
 – The bus master does not obey bus commands.
 – All misses (reads or writes) are serviced either by MM, if all cache copies are Clean, or by the only Dirty cache copy (which then ceases to be Dirty); in the latter case the MM copy is written instead of being read.
(A small state-machine sketch follows below.)
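
A minimal sketch of the block states and of two representative transitions, written in C. The names and the code structure are illustrative assumptions, not the textbook's exact specification.

    /* Per-block state in the simple three-state write-invalidate protocol. */
    enum block_state { INVALID, CLEAN, DIRTY };

    struct cache_block {
        enum block_state state;
        unsigned         tag;
    };

    /* Snooping side: another cache put a Bus Write Miss for this block on
       the bus.  Our copy must become Invalid; if it was Dirty, we must
       first supply the block / write it back (not shown). */
    static void snoop_bus_write_miss(struct cache_block *b)
    {
        if (b->state == DIRTY) {
            /* supply the block to the requester and write it back to MM */
        }
        b->state = INVALID;
    }

    /* CPU side: write hit on a Clean block.  The write may proceed only
       after a Bus Write Miss has been broadcast to invalidate every other
       Clean copy; the block then becomes Dirty. */
    static void cpu_write_hit_clean(struct cache_block *b)
    {
        /* broadcast Bus Write Miss on the bus here */
        b->state = DIRTY;
    }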

17 Understanding the Protocol
 [Table omitted: snapshots of the block in MM, C1, and C2.]
 – Bus owner holds a Clean copy and another Clean copy exists: it can read without notifying the other caches.
 – Bus owner holds a Clean copy and no other cache copy exists: it can read without notifying the other caches.
 – Bus owner holds a Dirty copy and no other cache copy exists: it can read or write without notifying the other caches.
 Only two global states are possible:
 – The most up-to-date copy is the MM copy, and all cache copies are Clean.
 – The most up-to-date copy is a single unique cache copy in state Dirty.

18 State Diagram of Cache Block (Part 1)

19 State Diagram of Cache Block (Part 2)

20 Comparison with Single WB Cache
 Similarities:
 – A read hit is invisible on the bus.
 – All misses are visible on the bus.
 Differences:
 – In a single WB cache, all misses are serviced by MM; in the three-state protocol, misses are serviced either by MM or by the unique cache holding the only Dirty copy.
 – In a single WB cache, a write hit is invisible on the bus; in the three-state protocol, a write hit on a Clean block invalidates all other Clean copies by placing a Bus Write Miss on the bus (a necessary action).

21 Correctness of the Three-State Protocol
 Problem: the state transitions of the FSM are supposed to be atomic, but in this protocol they are not, because of the bus.
 Example: a CPU read miss on a Dirty block:
 1. CPU access to the cache detects a miss.
 2. Request the bus.
 3. Acquire the bus, and change the state of the cache block.
 4. Evict the dirty block to MM.
 5. Put a Bus Read Miss on the bus.
 6. Receive the requested block from MM or from another cache.
 7. Release the bus, and read from the cache block just received.
 Bus arbitration may cause a gap between steps 2 and 3, so the whole sequence of operations is no longer atomic. Appendix I.1 argues that the protocol will work correctly if steps 3-7 are atomic, i.e., if the bus is not a split-transaction bus. (A sketch of this requirement follows below.)
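
A minimal sketch of this atomicity requirement, modeling bus ownership as a lock held across steps 3-7. The mutex-based model and the function name are illustrative assumptions; a real bus arbiter is hardware, not a mutex.

    #include <pthread.h>

    /* Toy model: owning the bus = holding this lock.  While steps 3-7 run
       under the lock, no other cache can interleave bus transactions. */
    static pthread_mutex_t bus = PTHREAD_MUTEX_INITIALIZER;

    static void read_miss_on_dirty_block(void)
    {
        /* steps 1-2: detect the miss, request the bus (may have to wait) */
        pthread_mutex_lock(&bus);       /* step 3: acquire the bus        */

        /* step 3 (cont.): change the state of the cache block            */
        /* step 4: evict the dirty block to MM                            */
        /* step 5: put a Bus Read Miss on the bus                         */
        /* step 6: receive the requested block from MM or another cache   */

        pthread_mutex_unlock(&bus);     /* step 7: release the bus        */
        /* step 7 (cont.): read from the cache block just received        */
    }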

22 Adding More Bits to Protocols
 Add a third bit, called Shared, to the Valid and Dirty bits.
 – This yields five states (M, O, E, S, I).
 – Developed in the context of Futurebus+, with the intention of explaining all snoopy protocols, all of which use 3, 4, or 5 states.

23 MESI Protocol
 A four-state, write-invalidate protocol.
 An improved version of the three-state protocol:
 – The Clean state is split into Exclusive and Shared states.
 – The Dirty state is equivalent to the Modified state.
 There are several slightly different versions of the MESI protocol:
 – We will describe the version implemented by Futurebus+.
 – The PowerPC 601 MESI protocol does not support cache-to-cache transfer of blocks.
(A sketch of the four states and the write-hit paths follows below.)
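
A minimal sketch of the four MESI states and of the write-hit behavior that distinguishes them (compare the later comparison slide): a write hit in Exclusive is silent, while a write hit in Shared needs only an invalidation signal. The names and structure are illustrative assumptions.

    /* MESI per-block states: Modified, Exclusive, Shared, Invalid. */
    enum mesi_state { MODIFIED, EXCLUSIVE, SHARED, INVALID };

    struct mesi_block {
        enum mesi_state state;
        unsigned        tag;
    };

    /* CPU write hit under MESI (sketch). */
    static void cpu_write_hit(struct mesi_block *b)
    {
        switch (b->state) {
        case MODIFIED:
            break;                   /* already the only, dirty copy: silent */
        case EXCLUSIVE:
            b->state = MODIFIED;     /* no other copy exists: silent upgrade */
            break;
        case SHARED:
            /* broadcast an invalidation signal on the bus (control signal
               only, no block transfer), then write */
            b->state = MODIFIED;
            break;
        case INVALID:
            /* not a hit; handled by the miss path */
            break;
        }
    }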

24 State Diagram of MESI Cache Block (Part 1)

25 State Diagram of MESI Cache Block (Part 2)

26 Comparison with Three-State Protocol
 Similarities:
 – A read hit is invisible on the bus.
 – All misses are handled the same way.
 Differences: a big improvement in handling write hits.
 – A write hit in the Exclusive state is invisible on the bus.
 – A write hit in the Shared state involves no block transfer, only a control signal.
 [Figure omitted: snapshots of the block in MM, C1, and C2.] Exclusive state: can be read or written. Shared state: can be read only. Modified state: can be read and written.

27 Comments on Write-Invalidate Protocols
 Performance:
 – A processor can lose a cache block through invalidation by another processor.
 – Average memory access time goes up, since writes to shared blocks take more time (the other copies have to be invalidated).
 Implementation: the bus and the CPU may want to access the same cache simultaneously (either the same block or different blocks, but a conflict nonetheless). Three possible solutions:
 – Use a single tag array, and accept the structural hazards.
 – Use two separate tag arrays, one for the bus and one for the CPU, which must now be kept coherent at all times.
 – Use a multiported tag array (both the Intel Pentium and the PowerPC 601 use this solution).

