Cache Memory Chapter 3 (b) CS.216 Computer Architecture and Organization

Cache Memory (1/7)
Cache memory is a critical component of the memory hierarchy
– Compared to the size of main memory, cache is relatively small
– Operates at or near the speed of the processor
– Very expensive compared to main memory
– Cache contains copies of sections of main memory

Cache Memory (2/7)
Cache -- main memory interface
– Assume an access to main memory causes a block of K words to be transferred to the cache
– The block transferred from main memory is stored in the cache as a single unit called a slot, line, or page
– Once copied to the cache, individual words within a line can be accessed by the CPU
– Because of the high speeds involved with the cache, management of the data transfer and storage in the cache is done in hardware -- the O/S does not “know” about the cache

Cache Memory (3/7)

Cache Memory (4/7)

Cache Read Operation

Cache Memory (6/7)
– If there are 2^n words in the main memory, then there will be M = 2^n / K blocks in the memory
– M will be much greater than the number of lines, C, in the cache
– Every line of data in the cache must be tagged in some way to identify which main memory block it holds
  The line of data and its tag are stored together in the cache
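For example, with the 1 MB memory and 8-byte lines used in the direct mapping example later in these slides: M = 2^20 / 2^3 = 2^17 = 131,072 blocks, all competing for only C = 1,024 cache lines.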

Cache Memory (7/7)
– Factors in the cache design
  Mapping function between main memory and the cache
  Line replacement algorithm
  Write policy
  Block size
  Number and type of caches
– Mapping functions -- since M >> C, how are blocks mapped to specific lines in the cache?

Direct Mapping (1/6)
Direct mapping
– Each main memory block is assigned to a specific line in the cache:
  i = j modulo C
  where i is the cache line number assigned to main memory block j
– If M = 64, C = 4:
  Line 0 can hold blocks 0, 4, 8, 12, ...
  Line 1 can hold blocks 1, 5, 9, 13, ...
  Line 2 can hold blocks 2, 6, 10, 14, ...
  Line 3 can hold blocks 3, 7, 11, 15, ...
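As a minimal sketch in C, using the C = 4 figure from this slide, the line assignment is just a modulo operation:

#include <stdio.h>

#define C 4   /* number of cache lines, as in the example above */

/* Direct mapping: block j always maps to line j modulo C. */
static unsigned line_for_block(unsigned j) {
    return j % C;
}

int main(void) {
    /* Blocks 1, 5, 9, 13 all contend for the same line. */
    for (unsigned j = 1; j < 16; j += 4)
        printf("block %2u -> line %u\n", j, line_for_block(j));
    return 0;
}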

Direct Mapping (2/6)
– A direct mapping cache treats a main memory address as 3 distinct fields
  Tag identifier
  Line number identifier
  Word identifier (offset)
– The word identifier specifies the specific word (or addressable unit) in a cache line that is to be read

Direct Mapping (3/6)
– The line identifier specifies the physical line in cache that will hold the referenced address
– The tag is stored in the cache along with the data words of the line
  For every memory reference that the CPU makes, the specific line that would hold the reference (if it has already been copied into the cache) is determined
  The tag held in that line is checked to see if the correct block is in the cache

Figure 4.18 Direct mapping cache organization

Direct Mapping (4/6)
Example:
– Memory size of 1 MB (20 address bits), addressable to the individual byte
– Cache size of 1 K lines, each holding 8 bytes
  Word id = 3 bits (8 words = 2^3)
  Line id = 10 bits (1 K = 2^10)
  Tag id = 7 bits (20 - 10 - 3 = 7)
– Where is the byte at main memory location $ABCDE stored?
  $ABCDE = 1010101 1110011011 110 (tag / line / word)
  Cache tag $55, line $39B, word offset $6
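A small C sketch that splits a 20-bit address into these fields the way the cache hardware would (field widths taken from this example):

#include <stdio.h>

/* Field widths from the example: 20-bit address = 7-bit tag,
   10-bit line number, 3-bit word offset. */
#define WORD_BITS 3
#define LINE_BITS 10

int main(void) {
    unsigned addr = 0xABCDE;                          /* 20-bit byte address */
    unsigned word = addr & ((1u << WORD_BITS) - 1);   /* low 3 bits          */
    unsigned line = (addr >> WORD_BITS) & ((1u << LINE_BITS) - 1);
    unsigned tag  = addr >> (WORD_BITS + LINE_BITS);  /* remaining high bits */
    printf("tag=%X line=%X word=%X\n", tag, line, word); /* tag=55 line=39B word=6 */
    return 0;
}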

Direct Mapping (5/6)
– Advantages of direct mapping
  Easy to implement
  Relatively inexpensive to implement
  Easy to determine where a main memory reference can be found in cache

Direct Mapping (6/6)
– Disadvantage
  Each main memory block is mapped to a specific cache line
  Through locality of reference, it is possible to repeatedly reference blocks that map to the same line number
  These blocks will be constantly swapped in and out of cache, causing the hit ratio to be low

Associative Mapping (1/5)
– Let a block be stored in any cache line that is not in use
  Overcomes direct mapping’s main weakness
– Must examine each line in the cache to find the right memory block
  Examine the line tag id for each line
  Slow process for large caches!

Associative Mapping (2/5)
– Line numbers (ids) have no meaning in the cache
  Parse the main memory address into 2 fields (tag and word offset) rather than 3 as in direct mapping
– Implement the cache in 2 parts
  The lines themselves in SRAM
  The tag storage in associative memory
– Perform an associative search over all tags to find the desired line (if it’s in the cache)

Figure 4.20 Associative cache organization

Associative Mapping (4/5)
– Our memory example again...
  Word id = 3 bits
  Tag id = 17 bits
– Where is the byte at main memory location $ABCDE stored?
  $ABCDE = 10101011110011011 110 (tag / word)
  Cache line unknown, tag $1579B, word offset $6
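A minimal C sketch of the lookup; the 1,024-line cache and the line contents are illustrative, and the sequential loop stands in for the parallel tag compare the associative memory performs in hardware:

#include <stdio.h>

#define LINES 1024   /* assumed cache size, as in the running example */

struct line { int valid; unsigned tag; unsigned char data[8]; };
static struct line cache[LINES];

/* Fully associative: the tag is the address minus the 3-bit word
   offset, and any line may hold any block. */
static int lookup(unsigned addr) {
    unsigned tag = addr >> 3;
    for (int i = 0; i < LINES; i++)         /* hardware does this in parallel */
        if (cache[i].valid && cache[i].tag == tag)
            return i;                       /* hit: line i */
    return -1;                              /* miss */
}

int main(void) {
    cache[7].valid = 1;
    cache[7].tag = 0xABCDE >> 3;            /* $1579B */
    printf("line = %d\n", lookup(0xABCDE)); /* prints 7 */
    return 0;
}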

Associative Mapping (5/5)
– Advantages
  Fast
  Flexible
– Disadvantage
  Implementation cost
  The example above has an 8 KB cache and requires 1024 x 17 = 17,408 bits of associative memory for the tags!

Set Associative Mapping (1/4)
– A compromise between direct and fully associative mappings that builds on the strengths of both
– Divide the cache into a number of sets (v), each set holding a number of lines (k)

Set Associative Mapping (2/4)
– A main memory block can be stored in any one of the k lines in a set such that
  set number = j modulo v
– If a set can hold X lines, the cache is referred to as an X-way set associative cache
  Most cache systems today that use set associative mapping are 2- or 4-way set associative

Figure 4.22 Set associative cache organization (k-Way)

Set Associative Mapping (4/4)
– Our memory example again:
  Assume the 1024 lines are 4-way set associative
  1024 / 4 = 256 sets
  Word id = 3 bits
  Set id = 8 bits
  Tag id = 9 bits
– Where is the byte at main memory location $ABCDE stored?
  $ABCDE = 101010111 10011011 110 (tag / set / word)
  Cache tag $157, set $9B, word offset $6
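The same field extraction in C for the 4-way case (widths from this example):

#include <stdio.h>

/* Field widths for the 4-way example: 9-bit tag, 8-bit set, 3-bit word. */
#define WORD_BITS 3
#define SET_BITS  8

int main(void) {
    unsigned addr = 0xABCDE;
    unsigned word = addr & ((1u << WORD_BITS) - 1);
    unsigned set  = (addr >> WORD_BITS) & ((1u << SET_BITS) - 1);
    unsigned tag  = addr >> (WORD_BITS + SET_BITS);
    /* The tag is then compared against all 4 lines of set $9B in parallel. */
    printf("tag=%X set=%X word=%X\n", tag, set, word); /* tag=157 set=9B word=6 */
    return 0;
}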

Line Replacement Algorithms
– When an associative cache or a set associative cache set is full, which line should be replaced by the new line that is to be read from memory?
  Not a problem for direct mapping, since each block has a predetermined line it must use
– Least Recently Used (LRU)
– First In First Out (FIFO)
– Least Frequently Used (LFU)
– Random
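As an illustration, a minimal C sketch of LRU bookkeeping for one 4-line set, using a per-line timestamp; real hardware often uses cheaper approximations:

#include <stdio.h>

#define WAYS 4

/* Each line records when it was last used; the victim is the
   oldest valid line (or any empty line, if one exists). */
struct lru_line { int valid; unsigned tag; unsigned long last_used; };

static int choose_victim(const struct lru_line set[WAYS]) {
    int victim = 0;
    for (int i = 1; i < WAYS; i++) {
        if (!set[i].valid) return i;             /* prefer an empty line       */
        if (set[i].last_used < set[victim].last_used)
            victim = i;                          /* least recently used so far */
    }
    return victim;
}

int main(void) {
    struct lru_line set[WAYS] = {
        { 1, 0x11, 40 }, { 1, 0x22, 10 }, { 1, 0x33, 30 }, { 1, 0x44, 20 }
    };
    printf("evict line %d\n", choose_victim(set));  /* line 1: oldest use */
    return 0;
}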

Write Policy (1/2)
– When a line is to be replaced, the original copy of the line in main memory must be updated if any addressable unit in the line has been changed
– Write through
  Any time a word in cache is changed, it is also changed in main memory
  Both copies always agree
  Generates lots of write traffic to main memory

Write Policy (2/2)
– Write back
  During a write, only change the contents of the cache
  Update main memory only when the cache line is to be replaced
  Causes “cache coherency” problems -- different values for the contents of an address exist in the cache and the main memory
  Complex circuitry is needed to avoid this problem
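A minimal C sketch of write-back bookkeeping with a per-line dirty bit; memory_write_block is a hypothetical stub standing in for a real bus transaction:

#include <stdio.h>

/* Write-back policy: writes only mark the line dirty; main memory
   is updated when the line is evicted. */
struct wb_line { int valid, dirty; unsigned tag; unsigned char data[8]; };

/* Stub standing in for a real bus transaction to main memory. */
static void memory_write_block(unsigned tag, const unsigned char *data) {
    (void)data;
    printf("writing back block with tag %X\n", tag);
}

static void cache_write(struct wb_line *l, unsigned offset, unsigned char byte) {
    l->data[offset] = byte;
    l->dirty = 1;                            /* cache and memory now disagree */
}

static void evict(struct wb_line *l) {
    if (l->valid && l->dirty)
        memory_write_block(l->tag, l->data); /* write back on replacement */
    l->valid = l->dirty = 0;
}

int main(void) {
    struct wb_line l = { .valid = 1, .tag = 0x55 };
    cache_write(&l, 6, 0xFF);                /* dirty the line           */
    evict(&l);                               /* triggers the write-back  */
    return 0;
}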

Block / Line Sizes (1/2)
– How much data should be transferred from main memory to the cache in a single memory reference?
– There is a complex relationship between block size and hit ratio, as well as the operation of the system bus itself

Block / Line Sizes (2/2)
– As block size increases:
  Locality of reference predicts that the additional information transferred will likely be used, thus increasing the hit ratio (good)
  The number of blocks that fit in the cache goes down, limiting the total number of blocks in the cache (bad)
  As the block size gets big, the probability of referencing all the data in it goes down (hit ratio goes down) (bad)
– A size of 4-8 addressable units seems about right for current systems

Number of Caches (1/3)
– Single vs. 2-level
  Modern CPU chips have on-chip cache (L1)
  – Pentium -- 16 KB
  – PowerPC -- up to 64 KB
  L1 provides the best performance gains
  A secondary, off-chip cache (L2) provides higher speed access to main memory
  L2 is generally 512 KB or less -- more than this is not cost-effective

Number of Caches (2/3)
– Unified vs. split cache
  A unified cache stores data and instructions in 1 cache
  – Only 1 cache to design and operate
  – The cache is flexible and can balance the “allocation” of space to instructions or data to best fit the execution of the program -- higher hit ratio

Number of Caches (3/3)
– A split cache uses 2 caches -- 1 for instructions and 1 for data
  – Must build and manage 2 caches
  – Static allocation of cache sizes
  – Can outperform a unified cache in systems that support parallel execution and pipelining (reduces cache contention)

Pentium and PowerPC Cache (1/3)
– In the last 10 years, we have seen the introduction of cache into microprocessor chips
– On-chip cache (L1) is supplemented with external fast SRAM cache (L2)
  While off-chip, it can provide zero-wait-state performance compared to a relatively slow main memory access

Pentium and PowerPC Cache (2/3)
– Intel family
  ’386 had no internal cache
  ’486 had an 8 KB unified cache
  Pentium has a 16 KB split cache: 8 KB data and 8 KB instruction
  Pentium supports a 256 or 512 KB external L2 cache that is 2-way set associative
– PowerPC
  601 had a single 32 KB cache
  603/604/620 have split caches of size 16/32/64 KB

MESI Cache Coherency Protocol (1/2)
– The MESI protocol provides cache coherency in both the Pentium and the PowerPC
– Stands for
  – Modified
  – Exclusive
  – Shared
  – Invalid

MESI Cache Coherency Protocol (2/2)
– Implemented with an additional 2-bit field for each cache line
– Becomes interesting in the interactions of the L1 and L2 caches -- each tracks the “local” MESI status as a line moves from main memory to L2 and then to L1
– The PowerPC adds an additional state, “A” for allocated
  A line is marked A while its data is being swapped out
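The 2-bit field can be pictured as a small enum; the particular bit encodings below are illustrative assumptions, not the actual Pentium or PowerPC encoding:

/* The 4 MESI states fit in the 2-bit field each line carries.
   These encodings are illustrative only. */
enum mesi_state {
    MESI_INVALID   = 0,  /* line holds no valid data                    */
    MESI_SHARED    = 1,  /* valid; other caches may also hold this line */
    MESI_EXCLUSIVE = 2,  /* valid; no other cache holds it; clean       */
    MESI_MODIFIED  = 3   /* valid; dirty; this cache owns the only copy */
};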

Operation of 2-Level Memory (1/2)
– Recall the goal of the memory system:
  Provide an average access time to all memory locations that is approximately the same as that of the fastest memory component
  Provide a system memory with an average cost approximately equal to the cost/bit of the cheapest memory component

Operation of 2-Level Memory (2/2)
– A simplistic approach:
  Ts = H1 x T1 + H2 x (T1 + T2 + Tb21)
  where H2 = 1 - H1
  T1, T2 are the access times to levels 1 and 2
  Tb21 is the block transfer time from level 2 to level 1
– Can be generalized to 3 or more levels
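As a worked illustration with assumed values (not from the slides): take H1 = 0.95, T1 = 10 ns, T2 = 100 ns, Tb21 = 300 ns. Then Ts = 0.95 x 10 + 0.05 x (10 + 100 + 300) = 9.5 + 20.5 = 30 ns, so even a 5% miss rate triples the average access time over T1 alone.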

Do you have any questions?