Chapter 2 Memory Hierarchy Design


1 Chapter 2 Memory Hierarchy Design
Introduction
Cache performance
Advanced cache optimizations
Memory technology and DRAM optimizations
Virtual machines
Conclusion

2 Introduction
Programmers want unlimited amounts of memory with low latency to feed processors.
Fast memory technology is more expensive per bit than slower memory.
Solution: organize the memory system into a hierarchy:
The entire addressable memory space is available in the largest, slowest memory.
Incrementally smaller and faster memories, each containing a subset of the memory below it, proceed in steps up toward the processor.
Temporal and spatial locality ensure that nearly all references can be found in the smaller memories.
This gives the illusion of a large, fast memory being presented to the processor.

3 Many Levels in Memory Hierarchy
Pipeline registers (invisible to high-level language programmers)
Register file
1st-level cache (on-chip)
2nd-level cache (on same MCM as CPU); there can also be a 3rd (or more) cache level here; caches are usually made invisible to the programmer (even assembly programmers); caches are our focus
Physical memory (usually mounted on same board as CPU)
Virtual memory (on hard disk, often in same enclosure as CPU)
Disk files (on hard disk, often in same enclosure as CPU)
Network-accessible disk files (often in the same building as the CPU)
Tape backup/archive system (often in the same building as the CPU)
Data warehouse: robotically accessed room full of shelves of tapes (usually on the same planet as the CPU)

4 Memory Hierarchy
(Figure: the levels of the memory hierarchy, from registers up through caches and main memory to disk)

5 Memory Performance Gap
(Figure: processor vs. memory performance over time, on a per-core basis)

6 Memory Hierarchy Design
Memory hierarchy design becomes more crucial with recent multi-core processors:
Aggregate peak bandwidth grows with the number of cores.
The Intel Core i7 can generate two data references per core per clock.
With four cores and a 3.2 GHz clock:
25.6 billion 64-bit data references/second, plus
12.8 billion 128-bit instruction references/second (4 cores x 3.2 GHz, one 16-byte fetch per clock)
= 409.6 GB/s! (see the arithmetic below)
DRAM bandwidth is only 6% of this (25 GB/s, 3 channels).
Requires:
Multi-port, pipelined caches
Two levels of cache per core
Shared third-level cache on chip
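Spelling out the bandwidth arithmetic from the slide's parameters (2 data references/core/clock, 4 cores, 3.2 GHz, 64-bit data, 128-bit instruction fetches):

\[
2 \times 4 \times 3.2\,\mathrm{GHz} = 25.6 \times 10^9\ \text{data refs/s}, \qquad
4 \times 3.2\,\mathrm{GHz} = 12.8 \times 10^9\ \text{instruction refs/s}
\]
\[
\underbrace{25.6 \times 10^9 \times 8\,\mathrm{B}}_{\text{data}}
+ \underbrace{12.8 \times 10^9 \times 16\,\mathrm{B}}_{\text{instructions}}
= 204.8 + 204.8 = 409.6\ \mathrm{GB/s}
\]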

7 Performance and Power
High-end microprocessors have >10 MB of on-chip last-level cache (the i7 has 15 MB) to hide memory latency.
This consumes a large amount of the area and power budget.
The cache can be even bigger with an off-chip DRAM cache.

8 Memory Hierarchy Basics
When a word is not found in the cache, a miss occurs:
Fetch the word from a lower level in the hierarchy, requiring a higher-latency reference.
The lower level may be another cache or the main memory.
Also fetch the other words contained within the block, taking advantage of spatial locality.
Place the block into the cache in any location within its set, where the set is determined by the address: (block address) MOD (number of sets). A minimal sketch of this index computation follows below.
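A minimal C sketch of the set-index computation; the block size and set count are assumed example parameters, not fixed by the slide:

#include <stdint.h>

/* Assumed example parameters; real caches vary. */
enum { BLOCK_SIZE = 64, NUM_SETS = 128 };

static inline uint64_t block_address(uint64_t addr) {
    return addr / BLOCK_SIZE;               /* strip the block-offset bits */
}

static inline unsigned set_index(uint64_t addr) {
    return block_address(addr) % NUM_SETS;  /* (block address) MOD (number of sets) */
}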

9 Memory Hierarchy Basics
n blocks per set => n-way set associative
Direct-mapped cache => one block per set
Fully associative => one set
Writing to the cache: two strategies
Write-through: immediately update lower levels of the hierarchy
Write-back: only update lower levels of the hierarchy when an updated block is replaced
Both strategies use a write buffer to make writes asynchronous
Write-miss options: write-allocate or write-around (no-write-allocate)

10 Memory Hierarchy Basics
Miss rate: fraction of cache accesses that result in a miss
Causes of misses:
Compulsory: first reference to a block
Capacity: blocks discarded and later retrieved
Conflict: program makes repeated references to multiple addresses from different blocks that map to the same location in the cache
Coherence misses (in multi-cores; covered later!)

11 Misses by Type
(Figure: miss rate vs. cache size, broken down into conflict and capacity components, with a fully-associative cache for comparison)
Conflict misses are significant in a direct-mapped cache.
Going from direct-mapped to 2-way helps about as much as doubling the cache size.
Going from direct-mapped to 4-way is better than doubling the cache size.

12 Memory Hierarchy Basics
Cache performance metrics:
Note that speculative and multithreaded processors may execute other instructions during a miss.
This reduces the performance impact of misses.

13 Cache Performance
Consider memory delays in calculating CPU time (see the worked equation below):
Example: ideal CPI = 1, 1.5 references/instruction, miss rate = 2%, miss penalty = 100 CPU cycles.
Note: an in-order pipeline is assumed.
The lower the ideal CPI, the higher the relative impact of cache misses.
Because the penalty is measured in CPU cycles, a fast cycle time means more penalty cycles.
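In the standard CPU-time formulation with memory stalls, plugging in the example's numbers:

\[
\text{CPU time} = IC \times \left( \text{CPI}_{\text{ideal}} + \frac{\text{refs}}{\text{inst}} \times \text{miss rate} \times \text{miss penalty} \right) \times \text{cycle time}
\]
\[
\text{CPI} = 1 + 1.5 \times 0.02 \times 100 = 4.0
\]

Memory stalls contribute 3 of the 4 CPI, quadrupling CPU time relative to the ideal machine.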

14 Cache Performance Example
Ideal (no-miss) L1 CPI = 2.0, references/instruction = 1.5, cache size = 64 KB, miss penalty = 75 ns, hit time = 1 clock cycle.
(Note: instead of hit time = cycle time, a hit can take >1 cycle to allow a fast cycle time.)
Compare the performance of two caches (worked below):
Direct-mapped (1-way): cycle time = 1 ns, miss rate = 1.4%
2-way: cycle time = 1.25 ns, miss rate = 1.0%
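Working the comparison through (CPU time per instruction, converting the 75 ns penalty into cycles at each design's cycle time):

\[
\text{1-way: } \left(2.0 + 1.5 \times 0.014 \times \tfrac{75}{1}\right) \times 1\,\text{ns} = (2.0 + 1.575) \times 1\,\text{ns} = 3.575\ \text{ns/inst}
\]
\[
\text{2-way: } \left(2.0 + 1.5 \times 0.010 \times \tfrac{75}{1.25}\right) \times 1.25\,\text{ns} = (2.0 + 0.9) \times 1.25\,\text{ns} = 3.625\ \text{ns/inst}
\]

The direct-mapped design wins slightly: every instruction enjoys its faster cycle time, which outweighs the higher miss rate.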

15 Out-Of-Order Processors
Define a new "miss penalty" that accounts for overlap (worked below):
Need to determine the total memory latency and the overlapped latency.
Not straightforward (normally determined by cycle-accurate simulation).
Example (from the previous slide): assume 30% of the 75 ns penalty can be overlapped, but with the longer (1.25 ns) cycle on the 1-way design due to OOO.
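Continuing the example: the effective penalty is 0.7 × 75 ns = 52.5 ns, which is 52.5 / 1.25 = 42 cycles at the OOO design's cycle time:

\[
\left(2.0 + 1.5 \times 0.014 \times 42\right) \times 1.25\,\text{ns} = (2.0 + 0.882) \times 1.25\,\text{ns} \approx 3.60\ \text{ns/inst}
\]

With the overlap, the OOO 1-way design now slightly beats the in-order 2-way design (3.625 ns/inst).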

16 An Alternative Metric
(Average memory access time) = (Hit time) + (Miss rate) × (Miss penalty)
The access, hit, and miss-penalty times can be either real time (e.g., nanoseconds) or a number of clock cycles.
The miss penalty is the extra (not total) time (or cycles) for a miss, in addition to the hit time, which is incurred by all accesses.
Average memory access time does not factor in other instructions, so the same example can show different results; but it isolates the performance metric for cache design!
(Diagram: the CPU accesses the cache with the hit time; the cache accesses lower levels of the hierarchy with the miss penalty)

17 Cache Performance
Consider the cache performance equation:
(Average memory access time) = (Hit time) + (Miss rate) × (Miss penalty), where (Miss rate) × (Miss penalty) is the "amortized miss penalty".
It follows that there are three basic ways to improve cache performance:
Reducing the miss penalty
Reducing the miss rate
Reducing the hit time
Also: reducing the miss penalty or miss rate via parallelism (exploiting MLP).
Note that, by Amdahl's Law, there will be diminishing returns from reducing only the hit time or the amortized miss penalty by itself, instead of both together.

18 Memory Hierarchy Basics
Six basic cache optimizations:
1. Larger block size: reduces compulsory misses; increases capacity and conflict misses; increases miss penalty
2. Larger total cache capacity to reduce miss rate: increases hit time; increases power consumption
3. Higher associativity: reduces conflict misses
4. Higher number of cache levels: reduces miss penalty and overall memory access time
5. Giving priority to read misses over writes: reduces miss penalty
6. Avoiding address translation in cache indexing: reduces hit time (see next slide)

19 Fast Cache Access
Index the cache in parallel with address translation to reduce hit time (there may not be enough untranslated index bits).
The tag path (tag read plus compare) is the critical path that sets the hit time; the selected data goes to the CPU via a late select.
(Note: the number of bits in each field depends on the architecture and implementation.)

20 Multiple-Level Caches: Reduce Miss Penalty
Avg mem access time = Hit time(L1) + Miss rate(L1) × Miss penalty(L1)
Miss penalty(L1) = Hit time(L2) + Miss rate(L2) × Miss penalty(L2)
Plugging the 2nd equation into the first (numeric illustration below):
Avg mem access time = Hit time(L1) + Miss rate(L1) × (Hit time(L2) + Miss rate(L2) × Miss penalty(L2))
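A quick numeric illustration with assumed (hypothetical) values: hit time(L1) = 1 cycle, miss rate(L1) = 4%, hit time(L2) = 10 cycles, local miss rate(L2) = 20%, miss penalty(L2) = 100 cycles:

\[
\text{AMAT} = 1 + 0.04 \times (10 + 0.20 \times 100) = 1 + 0.04 \times 30 = 2.2\ \text{cycles}
\]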

21 Multi-level Cache Terminology
"Local miss rate": the miss rate of one hierarchy level by itself.
= (# of misses at that level) / (# of accesses to that level)
e.g., Miss rate(L1), Miss rate(L2)
"Global miss rate": the miss rate of a whole group of hierarchy levels.
= (# of accesses coming out of that group, to lower levels) / (# of accesses into that group)
Generally this is the product of the miss rates at each level in the group:
Global L2 miss rate = Miss rate(L1) × Local miss rate(L2)
The global miss rate is more closely related to overall cache performance (numeric example below).
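For instance, with hypothetical counts: of 1000 accesses to the L1/L2 group, 40 miss in L1 and go to L2 (local L1 miss rate 4%), and 20 of those miss in L2 (local L2 miss rate 50%):

\[
\text{Global miss rate}_{L2} = \frac{20}{1000} = 2\% = 0.04 \times 0.50
\]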

22 Effect of 2-level Caching
L2 size is usually much bigger than L1:
Provides a reasonable hit rate
Decreases the miss penalty of the 1st-level cache
May increase the overall miss penalty (a full miss now traverses both levels)
Multiple-level cache inclusion property:
Inclusive cache: L1 is a subset of L2; simplifies the cache coherence mechanism; effective cache size = L2
Exclusive cache: L1 and L2 are exclusive; increases effective cache size = L1 + L2
Enforcing the inclusion property: backward invalidation to L1 on an L2 replacement
Without inclusion, coherence must look up L1 on an L2 miss

23 Ten Advanced Optimizations
1. Small and simple first-level caches
Critical cache timing path: addressing tag memory, then comparing tags, then selecting the correct set
Direct-mapped caches can overlap tag compare and transmission of data
Lower associativity reduces power because fewer cache lines are accessed

24 L1 Size and Associativity
(Figure: access time vs. size and associativity, modeled with CACTI; kinks in the curves are due to CACTI's optimization choices)

25 L1 Size and Associativity
(Figure: energy per read vs. size and associativity)

26 Way Prediction
To improve hit time, predict the way so the mux can be pre-set.
A mis-prediction results in a longer hit time.
Prediction accuracy: >90% for two-way, >80% for four-way; the I-cache has better accuracy than the D-cache.
First used on the MIPS R10000 in the mid-90s; also used on the ARM Cortex-A8.
Extended to predict the block as well ("way selection"): increases the mis-prediction penalty.

27 Pipelined Cache Access
Pipeline cache access to improve bandwidth. Examples:
Pentium: 1 cycle
Pentium Pro through Pentium III: 2 cycles
Pentium 4 through Core i7: 4 cycles
Increases branch mis-prediction penalty due to the longer pipeline.
Makes it easier to increase associativity.

28 Nonblocking Caches
Allow hits before previous misses complete:
"Hit under miss"
"Hit under multiple misses"
Use MSHRs (miss status holding registers) to record multiple outstanding misses.
L2 must support this (memory-level parallelism, MLP).
In general, processors can hide the L1 miss penalty but not the L2 miss penalty.

29 Multibanked Caches
Organize the cache as independent banks to support simultaneous accesses:
ARM Cortex-A8 supports 1-4 banks for L2
Intel i7 supports 4 banks for L1 and 8 banks for L2
Interleave banks according to block address.

30 Critical Word First, Early Restart
Critical word first: request the missed word from memory first, and send it to the processor as soon as it arrives.
Early restart: request words in normal order, and send the missed word to the processor as soon as it arrives.
Effectiveness of these strategies depends on block size and the likelihood of another access to the portion of the block that has not yet been fetched.

31 Merging Write Buffer
When storing to a block that is already pending in the write buffer, update the write-buffer entry rather than allocating a new one.
Reduces stalls due to a full write buffer.
Does not apply to I/O addresses.
(Figure: write buffer contents without and with write merging)

32 Compiler Optimizations
Loop interchange: swap nested loops to access memory in sequential order (see the sketch below).
Blocking: instead of accessing entire rows or columns, subdivide matrices into blocks; requires more memory accesses but improves the locality of those accesses.
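A minimal loop-interchange sketch in C; the array bounds are illustrative, and x is assumed to be a row-major 2-D array as in C:

/* Before: inner loop strides down a column, touching a new
   cache block on nearly every access. */
for (j = 0; j < 100; j = j + 1)
    for (i = 0; i < 5000; i = i + 1)
        x[i][j] = 2 * x[i][j];

/* After: inner loop walks along a row, so consecutive accesses
   fall in the same cache block. */
for (i = 0; i < 5000; i = i + 1)
    for (j = 0; j < 100; j = j + 1)
        x[i][j] = 2 * x[i][j];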

33 Blocking Example

/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        r = 0;
        for (k = 0; k < N; k = k+1)
            r = r + y[i][k] * z[k][j];
        x[i][j] = r;
    }

The two inner loops:
Read all N×N elements of z[]
Read N elements of 1 row of y[] repeatedly
Write N elements of 1 row of x[]
Capacity misses are a function of N and cache size:
2N^3 + N^2 words accessed (assuming no conflict misses; otherwise worse)
Idea: compute on a B×B submatrix that fits in the cache.

34 Blocking Example (continued)

/* After */
#define min(a,b) (((a) < (b)) ? (a) : (b))
for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
        for (i = 0; i < N; i = i+1)
            for (j = jj; j < min(jj+B, N); j = j+1) {
                r = 0;
                for (k = kk; k < min(kk+B, N); k = k+1)
                    r = r + y[i][k] * z[k][j];
                x[i][j] = x[i][j] + r;  /* x must start zeroed: partial sums accumulate */
            }

B is called the blocking factor.
Capacity misses drop from 2N^3 + N^2 to 2N^3/B + N^2.
Conflict misses, too?

35 Loop Blocking: Matrix Multiply (X = Y * Z)
(Figure: access patterns before and after blocking)

36 Hardware Prefetching
Fetch two blocks on a miss (the requested block and the next sequential block).
(Figure: performance gains from hardware prefetching on the Pentium 4)

37 Example: Stride Prefetcher
Training a stride: 3 consecutive block misses from the SAME PC in a small region, with the same distance between addresses.
Uses a PC-indexed history table.
A hit on a prefetched block triggers the next prefetch.
Example miss sequence from one PC: blocks 100, 102, 104 (1st, 2nd, 3rd miss); two constant distances (2); trained! A sketch of the table logic follows below.
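A minimal C sketch of the PC-indexed table implementing the slide's training rule (two equal distances in a row = trained). The table size, the hash, and the prefetch() hook are illustrative assumptions:

#include <stdint.h>

#define TABLE_SIZE 256  /* assumed table size */

typedef struct {
    uint64_t pc;        /* tag: PC of the missing load    */
    uint64_t last_addr; /* last miss (block) address seen */
    int64_t  stride;    /* last observed distance         */
    int      conf;      /* 0..2; 2 means trained          */
} StrideEntry;

static StrideEntry table[TABLE_SIZE];

extern void prefetch(uint64_t block_addr);  /* assumed hook into the cache */

void on_cache_miss(uint64_t pc, uint64_t block_addr) {
    StrideEntry *e = &table[pc % TABLE_SIZE];
    if (e->pc != pc) {                      /* new PC: restart training   */
        e->pc = pc; e->last_addr = block_addr; e->stride = 0; e->conf = 0;
        return;
    }
    int64_t stride = (int64_t)(block_addr - e->last_addr);
    if (stride != 0 && stride == e->stride) {
        if (e->conf < 2) e->conf++;         /* same distance again        */
    } else {
        e->stride = stride;                 /* first or changed distance  */
        e->conf = 1;
    }
    e->last_addr = block_addr;
    if (e->conf >= 2)                       /* 3 misses, 2 equal strides  */
        prefetch(block_addr + e->stride);
}

For the slide's sequence 100, 102, 104, the third miss sees the second constant stride of 2 and triggers a prefetch of block 106.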

38 Compiler Prefetching
Insert prefetch instructions before the data is needed.
Non-faulting: the prefetch doesn't cause exceptions.
Register prefetch: loads data into a register.
Cache prefetch: loads data into the cache (see the sketch below).
Combine with loop unrolling and software pipelining.
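A small cache-prefetch sketch using GCC/Clang's __builtin_prefetch (a non-faulting cache prefetch); the prefetch distance of 16 elements is an assumed tuning parameter:

#include <stddef.h>

#define PREFETCH_AHEAD 16  /* assumed distance; tune for the target machine */

void scale(double *a, size_t n, double k) {
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_AHEAD < n)
            /* args: address, 0 = prefetch for read, 1 = low temporal locality */
            __builtin_prefetch(&a[i + PREFETCH_AHEAD], 0, 1);
        a[i] *= k;
    }
}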

39 Summary
(Table: the ten advanced optimizations and their effects on hit time, bandwidth, miss penalty, miss rate, and power)

40 Memory Technology
Performance metrics:
Latency is the concern of the cache.
Bandwidth is the concern of multiprocessors and I/O.
Access time: time between a read request and when the desired word arrives.
Cycle time: minimum time between unrelated requests to memory.
DRAM is used for main memory; SRAM is used for caches.

41 Memory Technology
SRAM:
Requires low power to retain bits
Requires 6 transistors/bit
DRAM:
Must be re-written after being read
Must also be periodically refreshed (every ~8 ms); an entire row is refreshed simultaneously
One transistor/bit
Address lines are multiplexed:
Upper half of address: row access strobe (RAS)
Lower half of address: column access strobe (CAS)

42 (DRAM) Memory Bank Organization
Read access sequence:
1. Decode row address and drive word-lines
2. Selected bits drive bit-lines (entire row read)
3. Amplify row data
4. Decode column address and select subset of row; send to output
5. Precharge bit-lines (ready for next access)

43 Memory Subsystem Organization
From largest unit to smallest: Channel, DIMM, Rank, Chip, Bank, Row/Column

44 Memory Subsystem "Channel"
(Figure: a processor connected over two memory channels, each populated with DIMMs, dual in-line memory modules)

45 Breaking Down a DIMM
(Figure: side, front, and back views of a DIMM)
Rank 0: a collection of 8 chips on the front of the DIMM; Rank 1 is on the back.

46 Rank
(Figure: Rank 0 on the front and Rank 1 on the back share the memory channel; both connect to the Addr/Cmd bus and the 64-bit Data <0:63> bus, and chip select CS <0:1> picks which rank responds.)

47 Breaking Down a Rank
Rank 0 consists of Chip 0, Chip 1, ..., Chip 7.
Each chip supplies 8 bits of Data <0:63>: Chip 0 drives <0:7>, Chip 1 drives <8:15>, ..., Chip 7 drives <56:63>.

48 Breaking Down a Chip
Each chip (Data <0:7>, 8 bits wide) contains 8 independent banks.

49 DRAM Bank Operation
Access address (Row 0, Column 0): the row buffer is empty, so the row decoder reads Row 0 into the row buffer, then the column mux selects column 0.
Subsequent accesses to (Row 0, Column 1) and (Row 0, Column 85) HIT in the row buffer.
An access to (Row 1, Column 0) is a CONFLICT: the bank must close Row 0 and read Row 1 into the row buffer.

50 128M x 8-bit DRAM Chip
(Figure: internal organization of a 128M x 8-bit DRAM chip)

51 A 64-bit Wide DIMM
(Figure: eight 8-bit DRAM chips side by side forming a 64-bit data interface)

52 A 64-bit Wide DIMM
Advantages:
Acts like a high-capacity DRAM chip with a wide interface
Flexibility: the memory controller does not need to deal with individual chips
Disadvantages:
Granularity: accesses cannot be smaller than the interface width

53 DRAM Channels
2 independent channels: 2 memory controllers (shown above)
2 dependent/lockstep channels: 1 memory controller with a wide interface (not shown above)

54 Generalized Memory Structure
(Figure: channels, DIMMs, ranks, chips, and banks composed into a full memory system)

55 How Multiple Banks/Channels Help
(Figure: overlapping accesses across banks and channels hides individual access latencies)

56 Address Mapping (Single Channel)
Single-channel system with an 8-byte memory bus; 2 GB memory, 8 banks, 16K rows and 2K columns per bank.
Row interleaving: consecutive rows of memory in consecutive banks.
| Row (14 bits) | Bank (3 bits) | Column (11 bits) | Byte in bus (3 bits) |
Cache block interleaving: consecutive cache-block addresses in consecutive banks (64-byte cache blocks).
| Row (14 bits) | High column (8 bits) | Bank (3 bits) | Low column (3 bits) | Byte in bus (3 bits) |
Accesses to consecutive cache blocks can be serviced in parallel.
How about random accesses? Strided accesses? (A decoding sketch follows below.)
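A small C sketch decoding both layouts from a physical address; the field widths come from the slide, while the function and struct names are illustrative:

#include <stdint.h>

typedef struct { unsigned row, bank, col; } DramCoord;

/* Row interleaving: | Row (14) | Bank (3) | Column (11) | Byte (3) | */
DramCoord map_row_interleaved(uint64_t addr) {
    DramCoord d;
    d.col  = (addr >> 3)  & 0x7FF;    /* 11 column bits */
    d.bank = (addr >> 14) & 0x7;      /* 3 bank bits    */
    d.row  = (addr >> 17) & 0x3FFF;   /* 14 row bits    */
    return d;
}

/* Cache block interleaving:
   | Row (14) | High col (8) | Bank (3) | Low col (3) | Byte (3) | */
DramCoord map_block_interleaved(uint64_t addr) {
    DramCoord d;
    unsigned low  = (addr >> 3) & 0x7;   /* 3 low column bits  */
    d.bank        = (addr >> 6) & 0x7;   /* consecutive 64 B blocks land in consecutive banks */
    unsigned high = (addr >> 9) & 0xFF;  /* 8 high column bits */
    d.col  = (high << 3) | low;
    d.row  = (addr >> 17) & 0x3FFF;      /* 14 row bits        */
    return d;
}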

57 Memory Technology
Amdahl suggested that memory capacity should grow linearly with processor speed.
Unfortunately, memory capacity and speed have not kept pace with processors.
Some optimizations:
Multiple accesses to the same row
Synchronous DRAM: added a clock to the DRAM interface; burst mode with critical word first
Wider interfaces
Double data rate (DDR)
Multiple banks on each DRAM device

58 Memory Optimizations
(Table: DRAM generations and timing parameters)

59 Memory Optimizations
(Table, continued)

60 Memory Optimizations
DDR generations:
DDR2: lower power (2.5 V -> 1.8 V); higher clock rates (266 MHz, 333 MHz, 400 MHz)
DDR3: 1.5 V; 800 MHz
DDR4: 1 to 1.2 V; 1600 MHz
GDDR5 is graphics memory based on DDR3

61 Memory Optimizations
Graphics memory:
Achieves 2-5x the bandwidth per DRAM of DDR3:
Wider interfaces (32 vs. 16 bits)
Higher clock rate: possible because the chips are soldered to the board instead of mounted in socketed DIMM modules
Reducing power in SDRAMs:
Lower voltage
Low-power mode (ignores the clock, continues to refresh)

62 Memory Power Consumption
(Figure: SDRAM power consumption by mode of operation)

63 Flash Memory
A type of EEPROM
Must be erased (in blocks) before being overwritten
Non-volatile
Limited number of write cycles
Cheaper than SDRAM, more expensive than disk
Slower than SDRAM, faster than disk

64 Memory Dependability
Memory is susceptible to cosmic rays:
Soft errors: dynamic errors; detected and fixed by error-correcting codes (ECC)
Hard errors: permanent errors; use spare rows to replace defective rows
Chipkill: a RAID-like error recovery technique

65 Virtual Memory
Protection via virtual memory: keeps processes in their own memory space.
Role of the architecture:
Provide user mode and supervisor mode
Protect certain aspects of CPU state
Provide mechanisms for switching between user mode and supervisor mode
Provide mechanisms to limit memory accesses
Provide a TLB to translate addresses

66 Virtual Machines
Support isolation and security.
Allow sharing a computer among many unrelated users.
Enabled by the raw speed of processors, which makes the overhead more acceptable.
Allow different ISAs and operating systems to be presented to user programs ("system virtual machines").
SVM software is called a "virtual machine monitor" or "hypervisor".
Individual virtual machines run under the monitor and are called "guest VMs".

