Presentation transcript:

1 How will execution time grow with SIZE?

       int array[SIZE];
       int sum = 0;
       for (int i = 0; i < 200000; ++i) {
           for (int j = 0; j < SIZE; ++j) {
               sum += array[j];
           }
       }

   [Plot: TIME versus SIZE, with a question mark; annotation: "array fits in cache"]
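One way to actually run this experiment is sketched below; the use of clock(), the reduced iteration count, and the range of SIZE values are assumptions made so the sketch finishes quickly, not details taken from the slide.

       /* A rough, self-contained harness for the experiment above.
          Assumptions (not from the slide): clock() for timing, 20,000 outer
          iterations instead of 200,000, SIZE swept from 1K to 256K ints. */
       #include <stdio.h>
       #include <stdlib.h>
       #include <time.h>

       static long run(const int *array, int size, int iters) {
           long sum = 0;
           for (int i = 0; i < iters; ++i)
               for (int j = 0; j < size; ++j)
                   sum += array[j];
           return sum;
       }

       int main(void) {
           const int iters = 20000;
           for (int size = 1 << 10; size <= 1 << 18; size <<= 1) {
               int *array = calloc(size, sizeof *array);
               if (!array) return 1;

               clock_t start = clock();
               long sum = run(array, size, iters);
               double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

               /* Expect the time per access to stay roughly flat while the
                  array fits in cache, then rise once it spills into memory. */
               printf("SIZE=%7d ints  time=%7.3fs  (sum=%ld)\n", size, secs, sum);
               free(array);
           }
           return 0;
       }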

2 Large and fast
   • Computers depend upon large and fast storage systems
     - database applications, scientific computations, video, music, etc.
     - pipelined CPUs need quick access to memory (IF, MEM)
   • So far we've assumed that IF and MEM can happen in 1 cycle
     - there is a tradeoff between speed, cost and capacity

     Storage        Delay               Cost/MB   Capacity
     Static RAM     1-10 cycles         ~$5       128KB-2MB
     Dynamic RAM    ... cycles          ~$...     ...MB-4GB
     Hard disks     10,000,000 cycles   ~$...     ...GB-400GB

3 Introducing caches
   • Caches help strike a balance
   • A cache is a small amount of fast, expensive memory
     - goes between the processor and the slower, dynamic main memory
     - keeps a copy of the most frequently used data from the main memory
   • Memory access speed increases overall, because we've made the common case faster
     - reads and writes to the most frequently used addresses will be serviced by the cache
     - we only need to access the slower main memory for less frequently used data
   • Principle used in other contexts as well: networks, OS, ...
   [Diagram: CPU next to a little static RAM (the cache), backed by lots of dynamic RAM]

4 Cache - introduction
   [Diagram: a single-core processor with a two-level cache, and a dual-core processor with a three-level cache]

5 The principle of locality
   • It is usually difficult or impossible to figure out the "most frequently accessed" data or instructions before a program actually runs
     - hard to know what to store into the small, precious cache memory
   • In practice, most programs exhibit locality
     - the cache takes advantage of this
   • The principle of temporal locality: if a program accesses one memory address, there is a good chance that it will access the same address again
   • The principle of spatial locality: if a program accesses one memory address, there is a good chance that it will also access other nearby addresses
   • Example: loops (instructions), sequential array access (data)

6 Locality in data
   • Temporal: programs often access the same variables, especially within loops
   • Ideally, commonly accessed variables will be in registers
     - but there are a limited number of registers
     - in some situations, data must be kept in memory (e.g., sharing between threads)
   • Spatial: when location i is read from main memory, a copy of that data is placed in the cache, and so are copies of the nearby locations i+1, i+2, ...
     - useful for arrays, records, multiple local variables

   Example:
       sum = 0;
       for (i = 0; i < MAX; i++)
           sum = sum + f(i);

7 Definitions: Hits and misses
   • A cache hit occurs if the cache contains the data that we're looking for
   • A cache miss occurs if the cache does not contain the requested data
   • Two basic measurements of cache performance:
     - the hit rate = the percentage of memory accesses handled by the cache (miss rate = 1 - hit rate)
     - the miss penalty = the number of cycles needed to access main memory on a cache miss
   • Typical caches have a hit rate of 95% or higher
   • Caches are organized in levels to reduce the miss penalty

8 A simple cache design
   • Caches are divided into blocks, which may be of various sizes
     - the number of blocks in a cache is usually a power of 2
     - for now we'll say that each block contains one byte (this won't take advantage of spatial locality, but we'll do that next time)
   [Diagram: the cache drawn as a table, one row per index, each row holding 8 bits of data]

9 Four important questions
   1. When we copy a block of data from main memory to the cache, where exactly should we put it?
   2. How can we tell if a word is already in the cache, or if it has to be fetched from main memory first?
   3. Eventually, the small cache memory might fill up. To load a new block from main RAM, we'd have to replace one of the existing blocks in the cache... which one?
   4. How can write operations be handled by the memory system?
   • Questions 1 and 2 are related: we have to know where the data is placed if we ever hope to find it again later!

10 Where should we put data in the cache?
   • A direct-mapped cache is the simplest approach: each main memory address maps to exactly one cache block
   • Notice that the index = the least significant bits (LSBs) of the address
   • If the cache holds 2^k blocks, the index = the k LSBs of the address
   [Diagram: memory addresses mapped onto cache indexes by their low-order bits]

11 How can we find data in the cache?
   • If we want to read memory address i, we can use the LSB trick to determine which cache block would contain i
   • But other addresses might also map to the same cache block. How can we distinguish between them?
   • We add a tag, using the rest of the address
   [Diagram: each cache entry stores a tag next to its data; an entry's tag combined with its index gives the main memory address held in that block]
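To make the index/tag split concrete, here is a small C sketch; the block count (2^3 = 8) and the example address are assumptions chosen only for illustration.

       #include <stdio.h>

       #define K          3                    /* assumed: the cache holds 2^3 = 8 blocks */
       #define NUM_BLOCKS (1u << K)

       /* index = the k least significant bits of the address (slide 10) */
       static unsigned cache_index(unsigned addr) { return addr & (NUM_BLOCKS - 1); }

       /* tag = all of the remaining upper address bits (slide 11) */
       static unsigned cache_tag(unsigned addr) { return addr >> K; }

       int main(void) {
           unsigned addr = 0x1A5;              /* an arbitrary example address */
           printf("address 0x%X -> index %u, tag 0x%X\n",
                  addr, cache_index(addr), cache_tag(addr));
           /* Addresses that share their low k bits land in the same cache block;
              only the stored tag tells them apart. */
           return 0;
       }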

12 One more detail: the valid bit
   • When started, the cache is empty and does not contain valid data
   • We should account for this by adding a valid bit for each cache block
     - when the system is initialized, all the valid bits are set to 0
     - when data is loaded into a particular cache block, the corresponding valid bit is set to 1
   • So the cache contains more than just copies of the data in memory; it also has bits to help us find data within the cache and verify its validity
   [Diagram: the cache table extended with a valid bit per entry; entries with valid = 0 are ignored, entries with valid = 1 name a main memory address via tag + index]

13 What happens on a memory access
   • The lowest k bits of the address index a block in the cache
   • If the block is valid and its tag matches the upper (m - k) bits of the m-bit address, then that data is sent to the CPU (cache hit)
   • Otherwise (cache miss), data is read from main memory and:
     - stored in the cache block specified by the lowest k bits of the address
     - the upper (m - k) address bits are stored in the block's tag field
     - the valid bit is set to 1
   • If our CPU implementations accessed main memory directly, their cycle times would have to be much larger
     - instead we assume that most memory accesses will be cache hits, which allows us to use a shorter cycle time
   • On a cache miss, the simplest thing to do is to stall the pipeline until the data from main memory can be fetched and placed in the cache
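A minimal C sketch of this hit/miss logic, under the one-byte-block design from slides 8, 11 and 12, is shown below; the simulated main memory, the value of k, and all function names are assumptions for illustration, and the miss path follows the fill steps that slide 15 spells out.

       #include <stdint.h>
       #include <stdio.h>
       #include <string.h>

       #define K          3                      /* assumed: 2^3 = 8 one-byte blocks */
       #define NUM_BLOCKS (1u << K)
       #define MEM_SIZE   1024u                   /* assumed size of simulated main memory */

       struct cache_block {
           uint8_t  valid;                        /* 0 until data is loaded (slide 12) */
           uint32_t tag;                          /* upper (m - k) address bits */
           uint8_t  data;                         /* one byte of data (slide 8) */
       };

       static struct cache_block cache[NUM_BLOCKS];
       static uint8_t main_memory[MEM_SIZE];      /* stands in for the slow DRAM */

       /* Read one byte through the cache, reporting whether it was a hit. */
       static uint8_t cache_read(uint32_t addr, int *hit) {
           uint32_t index = addr & (NUM_BLOCKS - 1);   /* lowest k bits */
           uint32_t tag   = addr >> K;                 /* remaining upper bits */

           if (cache[index].valid && cache[index].tag == tag) {
               *hit = 1;                               /* hit: data goes straight to the CPU */
               return cache[index].data;
           }

           /* miss: fetch from main memory, then fill the block (slide 15) */
           *hit = 0;
           cache[index].data  = main_memory[addr];
           cache[index].tag   = tag;
           cache[index].valid = 1;
           return cache[index].data;
       }

       int main(void) {
           memset(cache, 0, sizeof cache);             /* all valid bits start at 0 */
           for (uint32_t a = 0; a < MEM_SIZE; ++a)
               main_memory[a] = (uint8_t)a;

           int hit;
           cache_read(0x1A5, &hit);
           printf("first access:  %s\n", hit ? "hit" : "miss");
           cache_read(0x1A5, &hit);                    /* same address: temporal locality pays off */
           printf("second access: %s\n", hit ? "hit" : "miss");
           return 0;
       }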

14 What if the cache fills up?
   • We answered this question implicitly on the last page!
     - a miss causes a new block to be loaded into the cache, automatically overwriting any previously stored data
     - this acts as a least recently used replacement policy, which assumes that older data is less likely to be requested than newer data
     - under a direct-mapped scheme we have no other option; with other mapping schemes we have other choices (such as least frequently used)

15 Loading a block into the cache
   • After data is read from main memory, putting a copy of that data into the cache is straightforward
     - the lowest k bits of the address specify a cache block
     - the upper (m - k) address bits are stored in the block's tag field
     - the data from main memory is stored in the block's data field
     - the valid bit is set to 1
   [Diagram: a 32-bit address split into tag and index fields, with the tag, data, and valid bit written into the selected cache entry]

16 Memory System Performance
   • Memory system performance depends on three important questions:
     - how long does it take to send data from the cache to the CPU?
     - how long does it take to copy data from memory into the cache?
     - how often do we have to access main memory?
   • There are names for all of these variables:
     - the hit time is how long it takes data to be sent from the cache to the processor; this is usually fast, on the order of 1-3 clock cycles
     - the miss penalty is the time to copy data from main memory to the cache; this often requires dozens of clock cycles (at least)
     - the miss rate is the percentage of misses
   [Diagram: CPU, a little static RAM (cache), lots of dynamic RAM, as on slide 3]

17 Average memory access time
   • The average memory access time, or AMAT, can then be computed as:

         AMAT = Hit time + (Miss rate x Miss penalty)

     This is just averaging the amount of time for cache hits and the amount of time for cache misses
   • How can we improve the average memory access time of a system?
     - obviously, a lower AMAT is better
     - miss penalties are usually much greater than hit times, so the best way to lower AMAT is to reduce the miss penalty or the miss rate
   • However, AMAT should only be used as a general guideline. Remember that execution time is still the best performance metric.
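As a quick sanity check on the formula, here is a tiny C helper; the numbers plugged in are the ones from the example on the next slide (1-cycle hit time, 3% miss rate, 20-cycle miss penalty).

       #include <stdio.h>

       /* AMAT = Hit time + (Miss rate x Miss penalty), all times in clock cycles */
       static double amat(double hit_time, double miss_rate, double miss_penalty) {
           return hit_time + miss_rate * miss_penalty;
       }

       int main(void) {
           /* values from the performance example on the next slide */
           double cycles = amat(1.0, 0.03, 20.0);
           printf("AMAT = %.2f cycles\n", cycles);   /* prints 1.60 */
           return 0;
       }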

18 Performance example
   • Assume that 33% of the instructions in a program are data accesses. The cache hit ratio is 97% and the hit time is one cycle, but the miss penalty is 20 cycles.

         AMAT = Hit time + (Miss rate x Miss penalty)
              = 1 + (0.03 x 20)
              = 1.6 cycles

   • How can we reduce the miss rate?
     - one-byte cache blocks don't take advantage of spatial locality, which predicts that an access to one address will be followed by an access to a nearby address
   • To use spatial locality, we should transfer several words at once from RAM to the cache (the cost of transferring multiple words is limited only by the memory bandwidth)