The University of Adelaide, School of Computer Science


Computer Architecture: A Quantitative Approach, Fifth Edition
Chapter 2: Memory Hierarchy Design
The University of Adelaide, School of Computer Science
Copyright © 2012, Elsevier Inc. All rights reserved.

Introduction
- Programmers want very large memory with low latency, but fast memory technology is more expensive per bit than slower memory.
- Solution: organize the memory system into a hierarchy. The entire addressable memory space is available in the largest, slowest level; incrementally smaller and faster memories, each containing a subset of the memory below it, proceed in steps up toward the processor.
- Temporal and spatial locality ensure that nearly all references can be found in the smaller memories.
- This gives the processor the illusion of a single large, fast memory.

Memory Hierarchy
Processor -> L1 cache -> L2 cache -> L3 cache -> main memory -> hard drive or flash. Latency increases and capacity grows (KB, MB, GB, TB) at each step away from the processor.

[Figure: split vs. unified cache organizations. L1 may be a unified U-cache or separate I- and D-caches; L2 may likewise be split or unified; all levels are backed by main memory.]

Memory Performance Gap
[Figure: processor performance vs. memory (DRAM) performance over time, showing the widening gap.]

Memory Hierarchy Design
Memory hierarchy design becomes even more crucial with recent multi-core processors, because aggregate peak bandwidth grows with the number of cores:
- An Intel Core i7 can generate two data references per core per clock.
- With four cores and a 3.2 GHz clock: 25.6 billion 64-bit data references/second plus 12.8 billion 128-bit instruction references/second = 409.6 GB/s!
- DRAM bandwidth is only 6% of this (25 GB/s).
This requires:
- Multi-port, pipelined caches
- Two levels of cache per core
- A shared third-level cache on chip
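As a sanity check, the 409.6 GB/s figure follows directly from the reference rates and operand sizes (64 bits = 8 bytes, 128 bits = 16 bytes):

\[
25.6\times10^{9}\,\tfrac{\text{refs}}{\text{s}} \times 8\,\tfrac{\text{B}}{\text{ref}}
\;+\; 12.8\times10^{9}\,\tfrac{\text{refs}}{\text{s}} \times 16\,\tfrac{\text{B}}{\text{ref}}
\;=\; 204.8 + 204.8 \;=\; 409.6\ \text{GB/s}
\]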

Intel Processors (3rd Generation Intel Core)
- Intel Core i7: 4 cores / 8 threads; 2.5-3.5 GHz (normal), 3.7 or 3.9 GHz (Turbo)
- Intel Core i5: 4 cores / 4 threads (or 2 cores / 4 threads); 2.3-3.4 GHz (normal), 3.2-3.8 GHz (Turbo)
- Intel Core i3: 2 cores / 4 threads; 3.3 or 3.4 GHz

Performance and Power
High-end microprocessors have >10 MB of on-chip cache, which consumes a large share of both the area and the power budget.

Memory Hierarchy Basics
When a word is not found in the cache, a miss occurs:
- The word is fetched from a lower level of the hierarchy, requiring a higher-latency reference; the lower level may be another cache or main memory.
- The other words in the same block are fetched as well, taking advantage of spatial locality.
- The block is placed in the cache within a set determined by its address: (block address) MOD (number of sets).

Placement Problem
[Figure: blocks of main memory must be mapped into the much smaller cache memory.]

Placement Policies
WHERE to put a block in the cache: the mapping between main memory and cache memory. Main memory has a much larger capacity than the cache, so many memory blocks compete for each cache location.

Fully Associative Cache
[Figure: memory blocks 0-31 mapping into an 8-block cache.] A block can be placed in any location in the cache.

Direct-Mapped Cache
[Figure: memory blocks 0-31 mapping into an 8-block cache.] A block can be placed in ONLY a single location: (block address) MOD (number of blocks in cache). Example: 12 MOD 8 = 4.

Set-Associative Cache
[Figure: memory blocks 0-31 mapping into a cache with 4 sets of 2 blocks each.] A block can be placed in one of n locations in an n-way set-associative cache, within the set given by (block address) MOD (number of sets in cache). Example: 12 MOD 4 = 0.
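The three placement policies can be summarized in a few lines of code. The following sketch mirrors the figures above (an 8-block cache, 2-way set associative, memory block 12); the numbers are illustrative only:

    #include <stdio.h>

    /* Candidate cache locations for one memory block under the three
     * placement policies, using the 8-block cache from the figures. */
    int main(void) {
        int cache_blocks = 8;
        int ways = 2;                       /* 2-way set associative */
        int sets = cache_blocks / ways;     /* 4 sets */
        int block_addr = 12;                /* memory block from the example */

        /* Direct mapped: exactly one legal location. */
        printf("direct mapped: block %d\n", block_addr % cache_blocks);

        /* Set associative: any of the 'ways' blocks within one set. */
        int set = block_addr % sets;
        for (int w = 0; w < ways; w++)
            printf("2-way set associative: set %d, way %d (block %d)\n",
                   set, w, set * ways + w);

        /* Fully associative: any of the cache blocks. */
        printf("fully associative: any of blocks 0..%d\n", cache_blocks - 1);
        return 0;
    }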

Memory Hierarchy Basics
- n blocks per set => n-way set associative. A direct-mapped cache has one block per set; a fully associative cache has one set.
- Writing to the cache: two strategies.
  - Write-through: immediately update the lower levels of the hierarchy.
  - Write-back: update the lower levels only when an updated block is replaced.
- Both strategies use a write buffer to make writes asynchronous. (A code sketch of the two policies follows the figures below.)

Dirty Bit(s)
The dirty bit indicates whether the block has been written to. It is not needed in I-caches (they are read-only) or in a write-through D-cache; a write-back D-cache requires it.

Write-back
[Figure: CPU writes go to the D-cache; main memory is updated only when a dirty block is evicted.]

Write-through
[Figure: CPU writes go to both the cache and main memory.]
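A minimal sketch of the two write policies and the dirty bit, modeling a single cache block and treating main memory as a plain array; all names here are illustrative, not from the slides:

    #include <stdbool.h>
    #include <string.h>

    #define BLOCK_WORDS 8

    static int memory_block[BLOCK_WORDS];   /* stand-in for main memory */

    struct cache_block {
        int  data[BLOCK_WORDS];
        bool dirty;                          /* only meaningful for write-back */
    };

    /* Write-through: update the cache AND main memory on every store. */
    void store_write_through(struct cache_block *b, int word, int value) {
        b->data[word] = value;
        memory_block[word] = value;          /* lower level updated immediately */
    }

    /* Write-back: update only the cache; mark the block dirty. */
    void store_write_back(struct cache_block *b, int word, int value) {
        b->data[word] = value;
        b->dirty = true;                     /* memory updated later, on eviction */
    }

    /* On replacement, a write-back cache must flush dirty blocks. */
    void evict(struct cache_block *b) {
        if (b->dirty) {
            memcpy(memory_block, b->data, sizeof b->data);
            b->dirty = false;
        }
    }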

Memory Hierarchy Basics
- Miss rate: the fraction of cache accesses that result in a miss.
- Causes of misses (the three Cs):
  - Compulsory: the first reference to a block.
  - Capacity: blocks discarded because the cache is full and later retrieved.
  - Conflict: the program makes repeated references to multiple addresses from different blocks that map to the same location in the cache.
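These quantities combine in the standard figure of merit from the text, average memory access time (AMAT):

\[
\text{AMAT} = \text{hit time} + \text{miss rate} \times \text{miss penalty}
\]

For example, a 1 ns hit time, a 2% miss rate, and a 100 ns miss penalty give AMAT = 1 + 0.02 x 100 = 3 ns.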

Memory Hierarchy Basics
Note that speculative and multithreaded processors may execute other instructions during a miss, which reduces the performance impact of misses.

Cache Organization
[Figure: a cache lookup. The address is split into a 21-bit tag, an 8-bit index, and a 5-bit block offset; each cache entry holds a valid bit <1>, a tag <21>, and data <256>. The index selects an entry, the stored tag is compared against the address tag, and a MUX selects the requested word.]

Two-way cache (Alpha)

Memory Hierarchy Basics
Six basic cache optimizations:
1. Larger block size: reduces compulsory misses, but increases capacity and conflict misses and increases miss penalty.
2. Larger total cache capacity: reduces miss rate, but increases hit time and power consumption.
3. Higher associativity: reduces conflict misses.
4. A higher number of cache levels: reduces overall memory access time.
5. Giving priority to read misses over writes: reduces miss penalty.
6. Avoiding address translation during cache indexing: reduces hit time.

Ten Advanced Optimizations
The ten optimizations fall into five categories:
- Reducing hit time
- Increasing cache bandwidth
- Reducing miss penalty
- Reducing miss rate
- Reducing miss penalty or miss rate via parallelism

1) Small and Simple L1 Caches
- Critical timing path: addressing the tag memory, then comparing tags, then selecting the correct set.
- A direct-mapped cache can overlap the tag comparison with transmission of the data.
- Lower associativity reduces power because fewer cache lines are accessed.

L1 Size and Associativity
[Figure: access time vs. cache size and associativity.]

L1 Size and Associativity
[Figure: energy per read vs. cache size and associativity.]

2) Way Prediction
- To improve hit time, predict the way so the mux can be pre-set; a misprediction results in a longer hit time.
- Prediction accuracy: >90% for two-way, >80% for four-way; I-caches predict better than D-caches.
- First used on the MIPS R10000 in the mid-90s; used on the ARM Cortex-A8.
- Extension: "way selection" predicts the block as well, but increases the misprediction penalty.

3) Pipelining Cache
- Pipeline cache access to improve bandwidth. Examples: Pentium: 1 cycle; Pentium Pro through Pentium III: 2 cycles; Pentium 4 through Core i7: 4 cycles.
- Increases the branch misprediction penalty.
- Makes it easier to increase associativity.

4) Nonblocking Caches
- Allow hits while earlier misses are still outstanding: "hit under miss" and "hit under multiple miss"; the L2 must support this.
- In general, processors can hide an L1 miss penalty but not an L2 miss penalty.

5) Multibanked Caches
- Organize the cache as independent banks to support simultaneous accesses: the ARM Cortex-A8 supports 1-4 banks for L2; the Intel i7 has 4 banks for L1 and 8 banks for L2.
- Interleave banks according to block address.

6) Critical Word First, Early Restart
- Critical word first: request the missed word from memory first and send it to the processor as soon as it arrives.
- Early restart: request words in normal order, but send the missed word to the processor as soon as it arrives.
- The effectiveness of these strategies depends on block size and on the likelihood of another access to the portion of the block that has not yet been fetched.
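A sketch of the critical-word-first fill order, assuming an 8-word block; fetch_word() and deliver_to_cpu() are hypothetical stand-ins for the next-level read and the CPU restart:

    #define BLOCK_WORDS 8

    extern int  fetch_word(int block_addr, int offset);  /* hypothetical next-level read */
    extern void deliver_to_cpu(int value);               /* hypothetical: unstall the CPU */

    /* Fill a missed block starting at the word the CPU actually wants,
     * wrapping around so the whole block still arrives. */
    void fill_critical_word_first(int block_addr, int miss_offset, int *block) {
        for (int i = 0; i < BLOCK_WORDS; i++) {
            int off = (miss_offset + i) % BLOCK_WORDS;   /* wrapped order */
            block[off] = fetch_word(block_addr, off);
            if (i == 0)
                deliver_to_cpu(block[off]);              /* CPU restarts immediately */
        }
    }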

7) Merging Write Buffer
- When storing to a block that is already pending in the write buffer, update the existing write-buffer entry instead of allocating a new one.
- Reduces stalls due to a full write buffer.
- Does not apply to I/O addresses.
[Figure: write-buffer contents without merging vs. with merging.]
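A sketch of the merging behavior, assuming a small buffer of block-sized entries with per-word valid bits; the structure and sizes are illustrative:

    #include <stdbool.h>

    #define BUF_ENTRIES     4
    #define WORDS_PER_BLOCK 4

    struct wb_entry {
        bool     valid;
        unsigned block_addr;                  /* block-aligned address */
        bool     word_valid[WORDS_PER_BLOCK];
        int      word[WORDS_PER_BLOCK];
    };

    static struct wb_entry buf[BUF_ENTRIES];

    /* Returns true if the write was accepted; false means the buffer is
     * full and the processor must stall. */
    bool write_buffer_store(unsigned block_addr, int offset, int value) {
        /* Merge into an existing entry for the same block, if any. */
        for (int i = 0; i < BUF_ENTRIES; i++) {
            if (buf[i].valid && buf[i].block_addr == block_addr) {
                buf[i].word[offset] = value;
                buf[i].word_valid[offset] = true;
                return true;                  /* merged: no new entry needed */
            }
        }
        /* Otherwise allocate a free entry. */
        for (int i = 0; i < BUF_ENTRIES; i++) {
            if (!buf[i].valid) {
                buf[i].valid = true;
                buf[i].block_addr = block_addr;
                for (int w = 0; w < WORDS_PER_BLOCK; w++)
                    buf[i].word_valid[w] = false;
                buf[i].word[offset] = value;
                buf[i].word_valid[offset] = true;
                return true;
            }
        }
        return false;                         /* buffer full: stall */
    }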

8) Compiler Optimizations
- Loop interchange: swap nested loops so that memory is accessed in sequential order.
- Blocking: instead of accessing entire rows or columns, subdivide matrices into blocks; this requires more memory accesses but improves the locality of those accesses.
(Both transformations are sketched below.)
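A sketch of both transformations in C, assuming row-major arrays; N and the block size B are illustrative tuning parameters:

    #define N 1024
    static double x[N][N];

    /* Before: the inner loop strides by N doubles per iteration,
     * touching a new cache block almost every access. */
    void zero_column_order(void) {
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                x[i][j] = 0.0;
    }

    /* After loop interchange: the inner loop touches consecutive words,
     * so each cache block is fully used before being evicted. */
    void zero_row_order(void) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                x[i][j] = 0.0;
    }

    /* Blocking: operate on B x B sub-blocks so the working set of all
     * three matrices fits in the cache (classic blocked matrix multiply;
     * c must be initialized before the call). */
    #define B 32
    void matmul_blocked(double c[N][N], double a[N][N], double b[N][N]) {
        for (int ii = 0; ii < N; ii += B)
            for (int jj = 0; jj < N; jj += B)
                for (int kk = 0; kk < N; kk += B)
                    for (int i = ii; i < ii + B; i++)
                        for (int j = jj; j < jj + B; j++) {
                            double sum = c[i][j];
                            for (int k = kk; k < kk + B; k++)
                                sum += a[i][k] * b[k][j];
                            c[i][j] = sum;
                        }
    }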

9) Hardware Prefetching
- Fetch two blocks on a miss: the requested block and the next sequential block.
[Figure: performance speedup from hardware prefetching on the Pentium 4.]

10) Compiler Prefetching
- Insert prefetch instructions before the data is needed.
- Prefetches should be non-faulting: they must not cause exceptions.
- Register prefetch: load the data into a register. Cache prefetch: load the data only into the cache.
- Combine with loop unrolling and software pipelining.
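A sketch using GCC/Clang's __builtin_prefetch, a real non-faulting cache-prefetch intrinsic; prefetching 16 elements ahead is an illustrative distance, not a universal constant:

    /* Sum an array while prefetching ahead of the current element. */
    double sum_with_prefetch(const double *a, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++) {
            /* Hints: read-only (0), moderate temporal locality (1).
             * Addresses past the end are safe: the builtin never faults. */
            __builtin_prefetch(&a[i + 16], 0, 1);
            s += a[i];
        }
        return s;
    }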

Summary
[Summary table of the ten advanced optimizations omitted in transcript.]

3rd Generation Intel Core i7
[Figure: cache hierarchy. Each of the four cores has a 32 KB 4-way L1 I-cache and a 32 KB 4-way L1 D-cache, plus a private 256 KB L2; all cores share an 8 MB L3.]

Memory Technology
- Performance metrics: latency is the concern of the cache; bandwidth is the concern of multiprocessors and I/O.
- Access time: the time between a read request and when the desired word arrives.
- Cycle time: the minimum time between unrelated requests to memory.
- DRAM is used for main memory; SRAM is used for caches.

Memory Technology
SRAM:
- Requires low power to retain its bits.
- Requires 6 transistors per bit.
DRAM:
- One transistor per bit.
- Must be re-written after being read, and must be periodically refreshed (every ~8 ms); an entire row can be refreshed simultaneously.
- Address lines are multiplexed: the upper half of the address is the row access strobe (RAS), the lower half the column access strobe (CAS).
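A sketch of the multiplexed address split, assuming illustrative row/column widths:

    #define ROW_BITS 12
    #define COL_BITS 10

    /* Split a flat DRAM address into the two halves sent over the shared
     * address pins: row first (with RAS), then column (with CAS). */
    void split_dram_address(unsigned addr, unsigned *row, unsigned *col) {
        *col = addr & ((1u << COL_BITS) - 1);                /* lower half -> CAS */
        *row = (addr >> COL_BITS) & ((1u << ROW_BITS) - 1);  /* upper half -> RAS */
    }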

Memory Technology
- Amdahl's rule of thumb: memory capacity should grow linearly with processor speed. Unfortunately, memory capacity and speed have not kept pace with processors.
- Some optimizations:
  - Multiple accesses to the same row.
  - Synchronous DRAM: a clock added to the DRAM interface; burst mode with critical word first.
  - Wider interfaces.
  - Double data rate (DDR).
  - Multiple banks on each DRAM device.

Memory Optimizations
[Tables of DRAM generations and timings omitted in transcript.]

Memory Optimizations
DDR generations:
- DDR2: lower power (2.5 V -> 1.8 V); higher clock rates (266, 333, 400 MHz).
- DDR3: 1.5 V; 800 MHz.
- DDR4: 1-1.2 V; 1600 MHz.
- GDDR5: graphics memory based on DDR3.

Memory Optimizations
Graphics memory:
- Achieves 2-5x the bandwidth per DRAM of DDR3, via wider interfaces (32 vs. 16 bits) and higher clock rates; this is possible because the chips are attached by soldering rather than in socketed DIMM modules.
Reducing power in SDRAMs:
- Lower voltage.
- Low-power mode (ignores the clock, continues to refresh).

Memory Power Consumption
[Figure: SDRAM power consumption; omitted in transcript.]

Flash Memory
- A type of EEPROM.
- Must be erased (in blocks) before being overwritten.
- Nonvolatile.
- Limited number of write cycles.
- Cheaper than SDRAM but more expensive than disk; slower than SDRAM but faster than disk.

Memory Dependability
- Memory is susceptible to cosmic rays.
- Soft errors: dynamic errors, detected and fixed by error-correcting codes (ECC, typically Hamming-based).
- Hard errors: permanent errors; use spare rows to replace defective rows.
- Chipkill: a RAID-like error-recovery technique.

Virtual Memory
- Protection via virtual memory: keeps processes in their own memory space.
- Role of the architecture:
  - Provide user mode and supervisor mode.
  - Protect certain aspects of CPU state.
  - Provide mechanisms for switching between user mode and supervisor mode.
  - Provide mechanisms to limit memory accesses.
  - Provide a TLB to translate addresses.

Virtual Machines
- Support isolation and security when sharing a computer among many unrelated users.
- Enabled by the raw speed of modern processors, which makes the overhead more acceptable.
- Allow different ISAs and operating systems to be presented to user programs ("system virtual machines").
- The SVM software is called a "virtual machine monitor" or "hypervisor"; the individual virtual machines that run under the monitor are called "guest VMs".

Impact of VMs on Virtual Memory
- Each guest OS maintains its own set of page tables.
- The VMM adds a level of memory between physical and virtual memory, called "real memory".
- The VMM maintains a shadow page table that maps guest virtual addresses to physical addresses.
- This requires the VMM to detect the guest's changes to its own page table, which occurs naturally if accessing the page-table pointer is a privileged operation.