Basic Cache Operation Prof. Eric Rotenberg


1 Basic Cache Operation Prof. Eric Rotenberg
ECE 463/563, Fall 2018. Basic Cache Operation. Prof. Eric Rotenberg

2 ECE 463/563, Microprocessor Architecture, Prof. Rotenberg
[Figure: MAIN MEMORY drawn as an array of 2^32 bytes. Each byte of data has its own 32-bit address, from byte #0 at address 00…0 up to byte #(2^32 − 1) at address 11…1.]

3 ECE 463/563, Microprocessor Architecture, Prof. Rotenberg
1 block = 8 bytes. [Figure: MAIN MEMORY, 2^32 bytes, grouped into 2^29 memory blocks of 8 bytes each. The 32-bit address splits into a block address (upper bits) and a block offset (low 3 bits, 000 through 111): memory block #0 spans addresses 00…00000xxx, block #1 spans 00…00001xxx, block #2 spans 00…00010xxx, and so on up to 11…11111xxx.]
Block offset: the low-order bits of the address that specify a byte within the block. Since the cache is managed at the granularity of blocks, the block offset bits are irrelevant for determining hit or miss.
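For concreteness, here is a small C sketch (mine, not from the slides) that splits a 32-bit address into block address and block offset for the 8-byte blocks above; the example address is made up:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* BLOCKSIZE = 8 bytes => log2(8) = 3 block-offset bits */
    uint32_t addr         = 0x1234ABCDu;   /* hypothetical 32-bit address */
    uint32_t block_offset = addr & 0x7u;   /* low 3 bits: byte within the block */
    uint32_t block_addr   = addr >> 3;     /* remaining 29 bits: which memory block */
    printf("block address = 0x%08X, block offset = %u\n",
           (unsigned)block_addr, (unsigned)block_offset);
    return 0;
}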

4 ECE 463/563, Microprocessor Architecture, Prof. Rotenberg
Example: SIZE = 32 bytes, BLOCKSIZE = 8 bytes, 1 block per cache set. [Figure: the same MAIN MEMORY of 8-byte blocks (block #0 at 00…00000xxx, block #1 at 00…00001xxx, and so on) alongside a CACHE with four sets, cache set #0 through cache set #3; each memory block maps to exactly one of the four sets.]

5 ECE 463/563, Microprocessor Architecture, Prof. Rotenberg
[Figure: the same memory and 4-set cache, with the block address portion of the 32-bit address now split into an index (its low-order bits) and the remaining high-order bits.]
Index: the low-order bits of the block address; they indicate which cache set the block will be placed in and where it will be searched for later.
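A minimal sketch of the index calculation for this example (4 sets, 8-byte blocks); the function name is mine:

#include <stdint.h>

/* 8-byte blocks -> 3 offset bits; 4 sets -> 2 index bits (the slide's example numbers). */
static uint32_t cache_set_index(uint32_t addr) {
    uint32_t block_addr = addr >> 3;   /* drop the block-offset bits */
    return block_addr & 0x3u;          /* low 2 bits of the block address select the set */
}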

6 ECE 463/563, Microprocessor Architecture, Prof. Rotenberg
[Figure: the same memory and 4-set cache; the block address is now split into a tag (high-order bits) and an index (low-order bits), and each cache set stores a valid bit (v), a tag, and the block's data.]
Tag: many memory blocks have the same index, i.e., they map to the same cache set. The high-order bits of the block address differentiate memory blocks that map to the same cache set. Record the tag alongside the cached memory block, to identify which memory block is cached in the set.
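Putting tag, index, and offset together: a sketch (not the course's code) of the hit/miss check for this 4-set, 1-block-per-set example; the valid and tag_store array names are hypothetical:

#include <stdbool.h>
#include <stdint.h>

#define NUM_SETS 4                       /* 32 B cache / (1 block/set * 8 B/block) */

static bool     valid[NUM_SETS];         /* the v bit of each set */
static uint32_t tag_store[NUM_SETS];     /* the stored tag of each set */

static bool cache_hit(uint32_t addr) {
    uint32_t block_addr = addr >> 3;          /* strip the 3 offset bits (8-byte blocks) */
    uint32_t index      = block_addr & 0x3u;  /* 2 index bits (4 sets) */
    uint32_t tag        = block_addr >> 2;    /* remaining 27 high-order bits */
    return valid[index] && (tag_store[index] == tag);
}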

7 Cache design considerations
Block size: What is the atomic unit of storage in the cache?
Block placement: Where can a block be placed in the cache?
Block identification: How is a block found in the cache?
Block replacement: Which block should be replaced on a miss?
Write strategy: What happens on a write?

8 ECE 463/563, Microprocessor Architecture, Prof. Rotenberg
Block size. Typical values for an L1 cache: 16 to 64 bytes. Larger blocks exploit spatial locality, but bringing in larger blocks slows down the time it takes to fix a miss (the "miss penalty"), and blocks that are too large hog storage ("cache pollution"). [Figure: 32-bit address with bits 31..b forming the block address and bits b−1..0 forming the block offset.] # block offset bits = log2(BLOCKSIZE).
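As a small sketch of that last formula, assuming BLOCKSIZE is a power of two (the helper name is mine):

#include <stdint.h>

/* # block offset bits b for a power-of-two BLOCKSIZE: b = log2(BLOCKSIZE). */
static unsigned offset_bits(uint32_t blocksize) {
    unsigned b = 0;
    while (blocksize > 1u) {
        blocksize >>= 1;
        b++;
    }
    return b;   /* e.g. 16 -> 4, 32 -> 5, 64 -> 6 */
}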

9 ECE 463/563, Microprocessor Architecture, Prof. Rotenberg
A direct-mapped cache. [Figure: the 32-bit address in the MAR is split into a tag (bits 31..6, 26 bits), an index (bits 5..3, 3 bits), and a block offset (bits 2..0, 3 bits). The index drives a row decoder that selects one set (holding 1 block) in both the TAG STORE (26-bit tags) and the DATA STORE (8-byte blocks); the selected entries are latched. The stored tag is compared ("=?") against the address tag to produce hit/miss, and a word-select mux picks the requested bytes out of the block into the MDR.]
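A rough software model of this direct-mapped lookup (a sketch, not the hardware or any course-provided code); the array names are made up:

#include <stdbool.h>
#include <stdint.h>

#define NUM_SETS  8    /* 3 index bits in this example */
#define BLOCKSIZE 8    /* 3 block-offset bits */

static bool     valid[NUM_SETS];
static uint32_t tag_store[NUM_SETS];              /* 26-bit tags */
static uint8_t  data_store[NUM_SETS][BLOCKSIZE];  /* 8-byte blocks */

/* Returns true on a hit and copies the requested byte into *byte. */
static bool dm_lookup(uint32_t addr, uint8_t *byte) {
    uint32_t offset = addr & 0x7u;         /* bits 2..0 */
    uint32_t index  = (addr >> 3) & 0x7u;  /* bits 5..3, selects the one candidate set */
    uint32_t tag    = addr >> 6;           /* bits 31..6 */
    if (valid[index] && tag_store[index] == tag) {
        *byte = data_store[index][offset]; /* the "word select" mux */
        return true;
    }
    return false;                          /* miss */
}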

10 A (2-way) set-associative cache
[Figure: the 32-bit address in the MAR is split into a tag (bits 31..5, 27 bits), an index (bits 4..3, 2 bits), and a block offset (bits 2..0, 3 bits). The index selects one set holding 2 blocks; both stored 27-bit tags are compared ("=?") against the address tag, a block-select mux picks the matching 8-byte block, and a word-select mux delivers the requested bytes to the MDR.]
Notes: this cache is the same size as the (previous) direct-mapped cache, but the index field is 1 bit shorter (2 bits total). A direct-mapped cache is really a 1-way set-associative cache.
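The same kind of model for the 2-way set-associative lookup, where the address tag is compared against every way of the indexed set (again a sketch with made-up names):

#include <stdbool.h>
#include <stdint.h>

#define NUM_SETS  4    /* 2 index bits: same capacity as before, half as many sets */
#define ASSOC     2    /* 2 blocks (ways) per set */
#define BLOCKSIZE 8

static bool     valid[NUM_SETS][ASSOC];
static uint32_t tag_store[NUM_SETS][ASSOC];             /* 27-bit tags */
static uint8_t  data_store[NUM_SETS][ASSOC][BLOCKSIZE];

static bool sa_lookup(uint32_t addr, uint8_t *byte) {
    uint32_t offset = addr & 0x7u;         /* bits 2..0 */
    uint32_t index  = (addr >> 3) & 0x3u;  /* bits 4..3 */
    uint32_t tag    = addr >> 5;           /* bits 31..5 */
    for (int way = 0; way < ASSOC; way++) {             /* the two "=?" comparators */
        if (valid[index][way] && tag_store[index][way] == tag) {
            *byte = data_store[index][way][offset];     /* block select, then word select */
            return true;
        }
    }
    return false;
}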

11 ECE 463/563, Microprocessor Architecture, Prof. Rotenberg
A fully-associative cache (also called a content-addressable memory, or CAM). [Figure: the 32-bit address in the MAR is split into a tag (bits 31..3, 29 bits) and a block offset (bits 2..0, 3 bits); there is no index. Every stored 29-bit tag has its own comparator ("=?") against the address tag, and the matching block feeds a word-select mux into the MDR.]
Notes: same size as the previous caches, but no row decoder and no index at all! Comparators (one per block) take the place of the row decoder.
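And a sketch of the fully-associative lookup: no index, so every block's tag is compared, mirroring the one-comparator-per-block CAM (names are mine):

#include <stdbool.h>
#include <stdint.h>

#define NUM_BLOCKS 8   /* same capacity: 8 blocks, all in one set */
#define BLOCKSIZE  8

static bool     valid[NUM_BLOCKS];
static uint32_t tag_store[NUM_BLOCKS];              /* 29-bit tags */
static uint8_t  data_store[NUM_BLOCKS][BLOCKSIZE];

static bool fa_lookup(uint32_t addr, uint8_t *byte) {
    uint32_t offset = addr & 0x7u;   /* bits 2..0 */
    uint32_t tag    = addr >> 3;     /* bits 31..3: the whole block address is the tag */
    for (int blk = 0; blk < NUM_BLOCKS; blk++) {
        if (valid[blk] && tag_store[blk] == tag) {
            *byte = data_store[blk][offset];
            return true;
        }
    }
    return false;
}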

12 Summary: decoding the address
A cache is normally specified as follows: {SIZE, ASSOC, BLOCKSIZE}.
SIZE: total bytes of data storage.
ASSOC: associativity (# of blocks in a set).
BLOCKSIZE: size of a cache block in bytes.
Question: how do we decode the address?
REMEMBER the following equation: # sets = SIZE / (ASSOC × BLOCKSIZE).
Then compute the size of each address field: # block offset bits = log2(BLOCKSIZE), # index bits = log2(# sets), and # tag bits = 32 − # index bits − # block offset bits, giving the address layout [ tag | index | block offset ].
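A short C sketch of this decoding, assuming power-of-two parameters and 32-bit addresses; the example {SIZE, ASSOC, BLOCKSIZE} values are arbitrary:

#include <stdint.h>
#include <stdio.h>

static unsigned log2u(uint32_t x) {   /* assumes x is a power of two */
    unsigned b = 0;
    while (x > 1u) { x >>= 1; b++; }
    return b;
}

int main(void) {
    uint32_t SIZE = 32 * 1024, ASSOC = 4, BLOCKSIZE = 64;  /* hypothetical cache */
    uint32_t sets = SIZE / (ASSOC * BLOCKSIZE);            /* # sets = SIZE / (ASSOC * BLOCKSIZE) */
    unsigned offset_bits = log2u(BLOCKSIZE);               /* = log2(BLOCKSIZE) */
    unsigned index_bits  = log2u(sets);                    /* = log2(# sets) */
    unsigned tag_bits    = 32 - index_bits - offset_bits;  /* what remains of the 32-bit address */
    printf("sets=%u  tag=%u bits  index=%u bits  offset=%u bits\n",
           (unsigned)sets, tag_bits, index_bits, offset_bits);
    return 0;
}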

