Computer Architecture: A Quantitative Approach, Fifth Edition
Chapter 2 (and Appendix B): Memory Hierarchy Design (cont.)
The University of Adelaide, School of Computer Science
Copyright © 2012, Elsevier Inc. All rights reserved.

Cache Optimization Metrics
- Reduce hit time
- Increase cache bandwidth
- Reduce miss penalty
- Reduce miss rate

Ten Advanced Optimizations
- Small and simple first-level caches (hit time)
- Way prediction (hit time)
- Pipelined caches (bandwidth)
- Nonblocking caches (bandwidth)
- Multibanked caches (bandwidth)
- Critical word first, early restart (miss penalty)
- Merging write buffer (miss penalty)
- Compiler optimizations* (miss rate)
- Hardware prefetching (miss penalty, miss rate)
- Compiler prefetching (miss penalty, miss rate)

* Indicates topic of interest

Small and Simple First-Level Caches
- Critical timing path: address the tag memory, then compare tags, then select the correct set
- Direct-mapped caches can overlap the tag compare with the transmission of data
- Lower associativity reduces power because fewer cache lines are accessed
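A minimal sketch of a direct-mapped lookup in C, assuming a 32 KB cache with 64-byte blocks; all names and sizes are illustrative, not a real design. With only one candidate line per set, the data read can begin in parallel with the single tag compare, which is exactly the overlap described above.

#include <stdint.h>
#include <stdbool.h>

#define BLOCK_BITS 6                       /* 64-byte blocks */
#define SET_BITS   9                       /* 32 KB / 64 B = 512 sets */
#define NUM_SETS   (1u << SET_BITS)

typedef struct {
    bool     valid;
    uint32_t tag;
    uint8_t  data[1 << BLOCK_BITS];
} Line;

static Line cache[NUM_SETS];

bool lookup(uint32_t addr, uint8_t **block_out)
{
    uint32_t set = (addr >> BLOCK_BITS) & (NUM_SETS - 1);
    uint32_t tag = addr >> (BLOCK_BITS + SET_BITS);

    /* One candidate line: the data can be read speculatively while
       the single tag compare decides hit or miss. */
    Line *line = &cache[set];
    if (line->valid && line->tag == tag) {
        *block_out = line->data;
        return true;                       /* hit */
    }
    return false;                          /* miss */
}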

L1 Size and Associativity
[Figure: access time vs. cache size and associativity]

L1 Size and Associativity
[Figure: energy per read vs. cache size and associativity]

Way Prediction
- To improve hit time, predict which way will hit; similar to branch prediction (Chapter 3)
- A correct prediction gives a reduced hit time; a misprediction gives a longer hit time
- Prediction accuracy: > 90% for two-way, > 80% for four-way; the I-cache predicts better than the D-cache
- First used on the MIPS R10000 in the mid-1990s; also used on the ARM Cortex-A8
- Extension: access only the predicted block ("way selection"); lower power, but a larger misprediction penalty
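A behavioral sketch of way prediction for a two-way cache; the one-bit-per-set predictor and its update policy are assumptions for illustration, not any particular processor's design.

#include <stdint.h>
#include <stdbool.h>

#define NUM_SETS 256
#define WAYS     2

typedef struct { bool valid; uint32_t tag; } Line;

static Line    cache[NUM_SETS][WAYS];
static uint8_t predicted_way[NUM_SETS];    /* one prediction per set */

/* Returns true on a hit; *fast is true only when the predicted way
   hit, i.e. when we got the direct-mapped hit time. */
bool cache_access(uint32_t set, uint32_t tag, bool *fast)
{
    uint8_t p = predicted_way[set];
    if (cache[set][p].valid && cache[set][p].tag == tag) {
        *fast = true;                      /* prediction was right */
        return true;
    }
    *fast = false;                         /* extra cycle(s) to probe the rest */
    for (uint8_t w = 0; w < WAYS; w++) {
        if (w != p && cache[set][w].valid && cache[set][w].tag == tag) {
            predicted_way[set] = w;        /* retrain on misprediction */
            return true;
        }
    }
    return false;                          /* miss */
}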

Pipelined Caches
- Pipeline the cache access to improve bandwidth
- Examples: Pentium: 1 cycle; Pentium Pro through Pentium III: 2 cycles; Pentium 4 through Core i7: 4 cycles
- Similar to processor pipelining (Chapter 3): an access takes several cycles, but the cycles can be faster
- When the pipeline is full, one request completes per cycle, improving bandwidth
- Increases the branch misprediction penalty

Nonblocking Caches
- Allow hits to proceed before previous misses complete: "hit under miss" and "hit under multiple miss"
- The L2 must support this
- In general, processors can hide an L1 miss penalty but not an L2 miss penalty

Multibanked Caches
- Organize the cache as independent banks to support simultaneous accesses
- The ARM Cortex-A8 supports 1-4 banks for L2; the Intel Core i7 has 4 banks for L1 and 8 banks for L2
- Interleave the banks according to block address
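Sequential interleaving maps consecutive block addresses to consecutive banks; here is a runnable sketch, where the 4-bank, 64-byte-block configuration is an assumption.

#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 64
#define NUM_BANKS  4

/* Consecutive blocks land in consecutive banks, so a burst of
   sequential accesses is spread across all banks in parallel. */
static unsigned bank_of(uint64_t addr)
{
    return (addr / BLOCK_SIZE) % NUM_BANKS;
}

int main(void)
{
    for (uint64_t addr = 0; addr < 8 * BLOCK_SIZE; addr += BLOCK_SIZE)
        printf("block address %4llu -> bank %u\n",
               (unsigned long long)addr, bank_of(addr));
    return 0;
}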

Critical Word First, Early Restart
- Critical word first: request the missed word from memory first, and send it to the processor as soon as it arrives
- Early restart: request the words in normal order, but send the missed word to the processor as soon as it arrives
- The effectiveness of both depends on the block size and on the likelihood of another access to the portion of the block that has not yet been fetched
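A back-of-envelope comparison of the two strategies, with assumed parameters: a 64-byte block delivered 8 bytes per bus cycle, and a miss to word 5 of the block.

#include <stdio.h>

int main(void)
{
    int words_per_block = 8;       /* 64-byte block, 8-byte bus (assumed) */
    int needed_word     = 5;       /* the word the processor asked for    */

    int plain = words_per_block;   /* wait for the whole block            */
    int early = needed_word + 1;   /* words arrive 0..7; restart at 5     */
    int cwf   = 1;                 /* word 5 is fetched first             */

    printf("bus cycles until the processor restarts: "
           "plain=%d early-restart=%d critical-word-first=%d\n",
           plain, early, cwf);
    return 0;
}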

Merging Write Buffer
- When storing to a block that already has a pending entry in the write buffer, update that entry instead of allocating a new one
- Reduces stalls due to a full write buffer
[Figure: write buffer contents without and with merging]
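A sketch of the merging logic; the entry count, block size, and per-word valid mask are illustrative assumptions.

#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define ENTRIES 4
#define WORDS   8                 /* 8 x 8-byte words per 64-byte block */

typedef struct {
    bool     valid;
    uint64_t block_addr;          /* which 64-byte block            */
    uint8_t  word_mask;           /* which words hold pending data  */
    uint64_t data[WORDS];
} WBEntry;

static WBEntry wb[ENTRIES];

/* Returns false only when the buffer is full and nothing merges,
   i.e. the case that would stall the processor. */
bool write_buffer_store(uint64_t addr, uint64_t value)
{
    uint64_t block = addr / 64;
    unsigned word  = (addr / 8) % WORDS;

    for (int i = 0; i < ENTRIES; i++)      /* try to merge first */
        if (wb[i].valid && wb[i].block_addr == block) {
            wb[i].data[word]  = value;
            wb[i].word_mask  |= 1u << word;
            return true;                   /* merged: no new entry used */
        }

    for (int i = 0; i < ENTRIES; i++)      /* otherwise allocate */
        if (!wb[i].valid) {
            memset(&wb[i], 0, sizeof wb[i]);
            wb[i].valid      = true;
            wb[i].block_addr = block;
            wb[i].data[word] = value;
            wb[i].word_mask  = 1u << word;
            return true;
        }

    return false;                          /* full: stall */
}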

Compiler Optimizations
- Merging arrays: make the data more sequential
- Loop fusion: combine loops
- Loop interchange: swap nested loops to access memory in sequential order
- Blocking: instead of accessing entire rows or columns, subdivide the matrices into blocks; requires more memory accesses but improves the locality of those accesses

Compiler Optimizations: Merging Arrays

/* Before: 2 sequential arrays */
int val[SIZE];
int key[SIZE];

/* After: 1 array of structures */
struct merge {
    int val;
    int key;
};
struct merge merged_array[SIZE];

Compiler Optimizations: Loop Fusion

/* Before: two separate loop nests over the same data */
for (i = 0; i < N; i = i + 1)
    for (j = 0; j < N; j = j + 1)
        a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i + 1)
    for (j = 0; j < N; j = j + 1)
        d[i][j] = a[i][j] + c[i][j];

/* After: one fused nest, so a[i][j] and c[i][j] are reused
   while still cached */
for (i = 0; i < N; i = i + 1)
    for (j = 0; j < N; j = j + 1) {
        a[i][j] = 1/b[i][j] * c[i][j];
        d[i][j] = a[i][j] + c[i][j];
    }

Compiler Optimizations: Loop Interchange
- Rearrange the loop nest to maximize sequential access (assume row-major matrices)

/* Before: the inner loop strides through x, 100 words at a time */
for (k = 0; k < 100; k = k + 1)
    for (j = 0; j < 100; j = j + 1)
        for (i = 0; i < 5000; i = i + 1)
            x[i][j] = 2 * x[i][j];

/* After: the inner loop walks x sequentially, word by word */
for (k = 0; k < 100; k = k + 1)
    for (i = 0; i < 5000; i = i + 1)
        for (j = 0; j < 100; j = j + 1)
            x[i][j] = 2 * x[i][j];

Compiler Optimizations: Blocking (before)
- Remember matrix multiplication? Calculating each entry of x requires reading a row of y and a column of z
- The accesses sweep across the entirety of both matrices in memory!

/* Before */
for (i = 0; i < N; i = i + 1)
    for (j = 0; j < N; j = j + 1) {
        r = 0;
        for (k = 0; k < N; k = k + 1)
            r = r + y[i][k]*z[k][j];
        x[i][j] = r;
    }

Compiler Optimizations: Blocking (before)
[Figure: snapshot of the x, y, and z accesses in the unblocked code]

Compiler Optimizations: Blocking (after)
- Divide the matrices into blocks of size B
- Read only elements within a block, then move on to the next block
- The blocks in x, y, and z are not always coincident

/* After */
for (jj = 0; jj < N; jj = jj + B)
    for (kk = 0; kk < N; kk = kk + B)
        for (i = 0; i < N; i = i + 1)
            for (j = jj; j < min(jj + B, N); j = j + 1) {
                r = 0;
                for (k = kk; k < min(kk + B, N); k = k + 1)
                    r = r + y[i][k]*z[k][j];
                x[i][j] = x[i][j] + r;
            }

Compiler Optimizations: Blocking (after)
[Figure: snapshot of the x, y, and z accesses in the blocked code]

Hardware Prefetching
- On a miss, fetch two blocks: the requested block and the next sequential block
[Figure: speedup from hardware prefetching on the Pentium 4]
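A behavioral sketch of next-sequential-block prefetching; holding the prefetched block in a one-entry stream buffer follows the textbook scheme, but the structure and the fetch_block interface here are assumptions.

#include <stdint.h>
#include <stdbool.h>

#define BLOCK_SIZE 64

static uint64_t stream_buffer_block;      /* block held by the buffer */
static bool     stream_buffer_valid;

void fetch_block(uint64_t block);         /* assumed memory interface */

void on_cache_miss(uint64_t addr)
{
    uint64_t block = addr / BLOCK_SIZE;

    if (stream_buffer_valid && stream_buffer_block == block) {
        /* The earlier prefetch already has this block: move it into
           the cache (not shown) and prefetch the next one. */
        stream_buffer_block = block + 1;
        fetch_block(block + 1);
        return;
    }

    /* Demand-fetch the missed block and prefetch its successor. */
    fetch_block(block);
    fetch_block(block + 1);
    stream_buffer_block = block + 1;
    stream_buffer_valid = true;
}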

Compiler Prefetching
- Insert prefetch instructions before the data is needed
- Register prefetch: loads the data into a register
- Cache prefetch: loads the data into the cache
- Combine with loop unrolling and software pipelining (Chapter 3)
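A cache-prefetch sketch using the GCC/Clang intrinsic __builtin_prefetch; the prefetch distance of 8 iterations is an assumption that would normally be tuned to the miss latency.

#include <stddef.h>

#define AHEAD 8    /* prefetch distance, in iterations (assumed) */

void scale(double *a, const double *b, size_t n, double k)
{
    for (size_t i = 0; i < n; i++) {
        if (i + AHEAD < n) {
            /* Hint the hardware to fetch the data needed AHEAD
               iterations from now: read b, then write a. */
            __builtin_prefetch(&b[i + AHEAD], 0);   /* 0 = read  */
            __builtin_prefetch(&a[i + AHEAD], 1);   /* 1 = write */
        }
        a[i] = b[i] * k;
    }
}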

Summary
[Table: the ten optimizations and their effect on hit time, bandwidth, miss penalty, miss rate, and hardware complexity]

Core i7 Cache Hierarchy
- L1: 32 KB I-cache / 32 KB D-cache; 4-way I, 8-way D; 4-cycle access time; pipelined; pseudo-LRU
- L2: 256 KB; 8-way; 10-cycle access time; pseudo-LRU
- L3 (shared): 2 MB per core; 16-way; 35-cycle access time; pseudo-LRU
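These latencies plug directly into the multilevel average memory access time (AMAT) formula; in the sketch below, the local miss rates and the 200-cycle memory latency are invented for illustration only.

#include <stdio.h>

int main(void)
{
    double l1 = 4, l2 = 10, l3 = 35, mem = 200;   /* cycles; mem assumed        */
    double m1 = 0.05, m2 = 0.30, m3 = 0.50;       /* local miss rates (assumed) */

    /* AMAT = L1 hit + m1*(L2 hit + m2*(L3 hit + m3*memory)) */
    double amat = l1 + m1 * (l2 + m2 * (l3 + m3 * mem));
    printf("AMAT = %.3f cycles\n", amat);         /* 6.525 with these numbers */
    return 0;
}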

Memory Technology
- Performance metrics: latency is the concern of the cache; bandwidth is the concern of multiprocessors and I/O
- Access time: the time between a read request and when the desired word arrives
- Cycle time: the minimum time between unrelated requests to memory
- DRAM is used for main memory; SRAM is used for caches

Memory Technology: SRAM and DRAM
- SRAM: requires low power to retain the bit; requires 6 transistors per bit
- DRAM: one transistor per bit; must be re-written after being read; must also be refreshed periodically (every ~8 ms), an entire row at a time
- DRAM address lines are multiplexed: the upper half of the address is latched by the row access strobe (RAS), the lower half by the column access strobe (CAS)
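A sketch of the address multiplexing: the same pins carry the row address (latched by RAS) and then the column address (latched by CAS). The 16K x 16K bit array implied by the 14/14 split is an assumption.

#include <stdint.h>
#include <stdio.h>

#define ROW_BITS 14
#define COL_BITS 14

int main(void)
{
    uint32_t addr = 0x0A1B2C3;                  /* example 28-bit address */
    uint32_t row  = (addr >> COL_BITS) & ((1u << ROW_BITS) - 1);
    uint32_t col  = addr & ((1u << COL_BITS) - 1);

    /* The controller drives `row` on the address pins with RAS,
       then `col` on the same pins with CAS. */
    printf("addr 0x%07X -> row 0x%04X, col 0x%04X\n", addr, row, col);
    return 0;
}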

Memory Technology: Optimizations
- Amdahl suggested that memory capacity should grow linearly with processor speed; unfortunately, memory capacity and speed have not kept pace with processors
- Some optimizations:
  - Multiple accesses to the same row
  - Synchronous DRAM (SDRAM): a clock added to the DRAM interface; burst mode with critical word first
  - Wider interfaces
  - Double data rate (DDR)
  - Multiple banks on each DRAM device

Memory Optimizations
[Two table slides; contents not captured in this transcript]

Memory Optimizations: DDR Generations
- DDR2: lower power (2.5 V -> 1.8 V); higher clock rates (266, 333, 400 MHz)
- DDR3: 1.5 V; 800 MHz
- DDR4: 1-1.2 V; 1600 MHz
- GDDR5 is graphics memory based on DDR3

Memory Optimizations: Graphics Memory
- Graphics memory achieves 2-5x the bandwidth per DRAM of DDR3:
  - Wider interfaces (32 bits vs. 16)
  - Higher clock rates, possible because the chips are soldered to the board rather than socketed as DIMM modules
- Reducing power in SDRAMs: lower voltage; a low-power mode (ignores the clock but continues to refresh)

Memory Power Consumption
[Figure: memory power consumption breakdown]

Flash Memory
- A type of EEPROM
- Must be erased (in blocks) before being overwritten
- Nonvolatile
- Limited number of write cycles
- Cheaper than SDRAM, more expensive than disk
- Slower than SDRAM, faster than disk

Memory Dependability
- Memory is susceptible to cosmic rays
- Soft errors: dynamic errors; detected and fixed by error-correcting codes (ECC)
- Hard errors: permanent errors; use spare rows to replace defective rows
- Chipkill: a RAID-like error-recovery technique
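As an illustration of how ECC corrects a soft error, here is a toy Hamming(7,4) encoder/corrector. Real ECC DIMMs use a wider SECDED code (typically 8 check bits per 64 data bits), so this is a minimal sketch of the principle, not the production scheme.

#include <stdint.h>
#include <stdio.h>

/* Encode 4 data bits into a 7-bit codeword; bit i holds position i+1.
   Positions: 1=p1 2=p2 3=d0 4=p3 5=d1 6=d2 7=d3. */
static uint8_t encode(uint8_t d)
{
    uint8_t d0 = d & 1, d1 = (d >> 1) & 1, d2 = (d >> 2) & 1, d3 = (d >> 3) & 1;
    uint8_t p1 = d0 ^ d1 ^ d3;             /* covers positions 3,5,7 */
    uint8_t p2 = d0 ^ d2 ^ d3;             /* covers positions 3,6,7 */
    uint8_t p3 = d1 ^ d2 ^ d3;             /* covers positions 5,6,7 */
    return p1 | (p2 << 1) | (d0 << 2) | (p3 << 3)
              | (d1 << 4) | (d2 << 5) | (d3 << 6);
}

/* Correct any single flipped bit, then return the 4 data bits. */
static uint8_t correct(uint8_t c)
{
    uint8_t s1 = ((c >> 0) ^ (c >> 2) ^ (c >> 4) ^ (c >> 6)) & 1; /* 1,3,5,7 */
    uint8_t s2 = ((c >> 1) ^ (c >> 2) ^ (c >> 5) ^ (c >> 6)) & 1; /* 2,3,6,7 */
    uint8_t s3 = ((c >> 3) ^ (c >> 4) ^ (c >> 5) ^ (c >> 6)) & 1; /* 4,5,6,7 */
    uint8_t pos = s1 | (s2 << 1) | (s3 << 2);   /* 0 means no error */
    if (pos)
        c ^= 1u << (pos - 1);                   /* flip the bad bit back */
    return ((c >> 2) & 1) | (((c >> 4) & 1) << 1)
         | (((c >> 5) & 1) << 2) | (((c >> 6) & 1) << 3);
}

int main(void)
{
    uint8_t code = encode(0xB);
    code ^= 1u << 4;                       /* simulate a cosmic-ray bit flip */
    printf("decoded 0x%X (expected 0xB)\n", correct(code));
    return 0;
}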

Pitfalls
- Predicting the cache performance of one program from measurements of another
- Not simulating enough instructions to get an accurate measure of memory-hierarchy performance
- Not delivering high memory bandwidth in a cache-based system