1 Memory Memory is used to store programs and data. (Figure: CPU connected to memory.)
Memory should be large and fast:
- large amounts of code and data
- time-critical applications
Fast memories are expensive, slow memories are cheap, so a memory that is both fast and large results in an expensive system.

2 Main Memory Types The CPU uses the main memory at the instruction level.
RAM (Random Access Memory): can be both read and written
- static (SRAM)
- dynamic (DRAM)
ROM (Read-Only Memory): can only be read during normal operation; changing the contents is still possible for most types.
Characteristics: access time, price, volatility.

3 Random Access Memories
SRAM: the bit value is stored on a pair of inverting gates
- very fast, but takes more space than DRAM (4 to 6 transistors per bit)
DRAM: the bit value is stored as a charge on a capacitor (must be refreshed)
- very small, but slower than SRAM (by a factor of 2 to 5)
(Figures: SRAM cell with word line and bit lines; DRAM cell with word line, bit line, access transistor and capacitor.)

4 Exploiting Memory Hierarchy
Users want large and fast memories!
- SRAM: access times of a few ns, at a cost of about $25 per MB (2003 figures)
- DRAM: access times of tens of ns, at about $0.15 per MB
- disk: access times of several ms, at about $0.001 per MB
Give it to them anyway: build a memory hierarchy.
(Figure: levels in the memory hierarchy — both the distance from the CPU in access time and the size of the memory increase from level 1 to level n.)

5 Locality A principle that makes having a memory hierarchy a good idea
If an item is referenced:
- temporal locality: it will tend to be referenced again soon
- spatial locality: nearby items will tend to be referenced soon
Why does code have locality?
- loops
- instructions accessed sequentially
- arrays, records
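A small illustration (the array and loop are hypothetical, not from the slides): the running sum and the loop instructions are reused on every iteration (temporal locality), while consecutive array elements are touched in order (spatial locality).

```c
#include <stddef.h>

/* Illustrative only: summing an array exhibits both kinds of locality.     */
/* - temporal: `sum` and the loop code are reused on every iteration        */
/* - spatial:  a[i] and a[i+1] are adjacent, so they usually share a block  */
double sum_array(const double *a, size_t n)
{
    double sum = 0.0;               /* reused each iteration -> temporal */
    for (size_t i = 0; i < n; i++)  /* sequential accesses   -> spatial  */
        sum += a[i];
    return sum;
}
```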

6 Memory Hierarchy (Figure: registers – cache – main memory – secondary memory, moving away from the CPU.)
Cache: levels L1, L2, … (hardware implementation, SRAM)
Virtual memory (software implementation)
The computer uses the main memory (DRAM) at the instruction level.

7 Memory Hierarchy A pair of levels in the memory hierarchy:
- two levels: upper and lower
- block: the minimum unit of data transferred between the upper and lower level
- hit: the requested data is in the upper level (hit rate)
- miss: the requested data is not in the upper level (miss rate); the miss penalty depends mainly on the lower-level access time

8 Cache Basics
What information goes into the cache in addition to the referenced item?
How do we know if a data item is in the cache?
If it is, how do we find it?

9 Direct Mapped Cache Simple approach: Direct mapped
- block size is one word
- every main memory location maps to exactly one cache location
- many main memory words share a single location in the cache
Address in the cache = (address in main memory) modulo (number of words in the cache)
- the cache address is identical to the lower bits of the main memory address
- the tag (the higher address bits) differentiates between competing main memory words
We are taking advantage of temporal locality.
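A minimal sketch of this mapping rule in C (word-addressed; the cache size and the names are assumptions for illustration):

```c
#include <stdint.h>

#define CACHE_WORDS 1024u   /* assumed cache size: 1 K one-word blocks */

/* Direct-mapped placement: the cache index is the word address modulo  */
/* the number of cache words; the remaining upper bits form the tag.    */
static inline uint32_t cache_index(uint32_t word_addr)
{
    return word_addr % CACHE_WORDS;   /* low bits of the address  */
}

static inline uint32_t cache_tag(uint32_t word_addr)
{
    return word_addr / CACHE_WORDS;   /* high bits of the address */
}
```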

10 Direct Mapped Cache: Simple Example

11 A More Realistic Example
- 32-bit word length
- 32-bit address
- 1 kW (1024-word) cache
- block size: 1 word
- 10-bit cache index
- 20-bit tag
- 2-bit byte offset (word alignment assumed)
- valid bit

12 Cache Access
(Figure: direct-mapped cache access — the address, showing bit positions, is split into tag, index and byte offset; the valid bit and stored tag are compared against the address tag to produce the hit signal and the data word.)

13 Cache Size
Cache (data) memory size: 1024 × 32 b = 32 kb
Tag memory size: 1024 × 20 b = 20 kb
Valid information: 1024 × 1 b = 1 kb
Efficiency: 32/53 = 60.4 %
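The same arithmetic as a general formula (a sketch for a direct-mapped cache with 2^n one-word blocks, 32-bit byte addresses and word alignment):

```latex
% Bits per block: 32 data bits + (32 - n - 2) tag bits + 1 valid bit
% (2 bits of the byte address select the byte within the word).
\[
  \text{total bits} = 2^{n}\bigl(32 + (32 - n - 2) + 1\bigr)
\]
% For n = 10: 1024 x (32 + 20 + 1) = 54272 bits = 53 kb,
% which gives the 32/53 = 60.4 % efficiency quoted above.
```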

14 Cache Hits and Misses
Cache hit: continue; the access is served from the cache.
Cache miss:
- stall the CPU
- get the information from main memory
- write the information into the cache (data and tag, set the valid bit)
- resume execution

15 Hits and Misses in More Detail
Read hits: this is what we want!
Read misses: stall the CPU, fetch the block from memory, deliver it to the cache, restart the access
Write hits:
- write the data in both the cache and memory (write-through)
- write the data only in the cache and update main memory later (write-back)
Write misses: read the entire block into the cache, then write the word
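A sketch of the read path only, using the slide-11 parameters (the toy memory array and all names are assumptions, not the slides' code):

```c
#include <stdbool.h>
#include <stdint.h>

#define CACHE_WORDS 1024u              /* assumed: 1 K one-word blocks */
#define MEM_WORDS   (1u << 20)         /* assumed toy main memory      */

struct line { bool valid; uint32_t tag; uint32_t data; };

static struct line cache[CACHE_WORDS];
static uint32_t    memory[MEM_WORDS];  /* stands in for DRAM           */

/* Read one word through a direct-mapped cache.                         */
/* Hit: return the cached word.  Miss: "stall", fetch the word from     */
/* main memory, store data and tag, set the valid bit, then resume      */
/* (the sequence on slide 14).                                          */
uint32_t cache_read(uint32_t word_addr)
{
    uint32_t index = word_addr % CACHE_WORDS;
    uint32_t tag   = word_addr / CACHE_WORDS;
    struct line *l = &cache[index];

    if (l->valid && l->tag == tag)     /* read hit */
        return l->data;

    l->data  = memory[word_addr];      /* read miss: fetch from memory */
    l->tag   = tag;
    l->valid = true;
    return l->data;
}
```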

16 Write Through and Write Back
Write through
- update the cache and main memory at the same time
- may result in extra main memory writes
- requires a write buffer, which stores data while it is waiting to be written to memory
Write back
- update main memory only when the block is replaced
- replacement may be slower

17 Combined vs. Split Cache
Combined cache
- size equal to the sum of the split caches
- no rigid division between locations used by instructions and data
- usually a slightly better hit rate
- possible stalls due to simultaneous access to instructions and data
- lower bandwidth because resources are shared
Split instruction and data caches
- increased bandwidth from the cache
- slightly lower hit rate
- no conflict when accessing instructions and data simultaneously

18 Taking Advantage of Spatial Locality
The cache described so far is simple:
- block size of one word
- only temporal locality exploited
Spatial locality:
- block size longer than one word
- when a miss occurs, multiple adjacent words are fetched

19 Direct Mapped Cache with Four-Word Blocks
(Figure: direct-mapped cache with four-word blocks — the address, showing bit positions, is split into tag, index, block offset and byte offset; a multiplexer selects the requested word from the four-word block; valid bit, tag comparison and hit signal as on slide 12.)

20 Miss Rate vs. Block Size
Increasing the block size tends to decrease the miss rate: there is more spatial locality in code. (Figure: miss rate as a function of block size.)

21 Achieving Higher Memory Bandwidth
(Figure: three memory organizations between CPU, cache, bus and memory — a. one-word-wide memory, b. wide memory and bus, c. interleaved memory banks.)
Miss penalties for a four-word block:
a. four memory latencies and four bus cycles
b. one memory latency and one bus cycle
c. one memory latency and four bus cycles
The memory latency is much longer than the bus cycle.

22 Performance
Simplified model:
execution time = (execution cycles + stall cycles) × cycle time
stall cycles = number of instructions × miss rate × miss penalty
The model is more complicated for writes than for reads (write-through vs. write-back, write buffers).
Two ways of improving performance:
- decreasing the miss rate
- decreasing the miss penalty
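A worked instance of the simplified model (the instruction count, miss rate and miss penalty below are invented for illustration):

```latex
% execution time = (execution cycles + stall cycles) x cycle time
% stall cycles   = instruction count x miss rate x miss penalty
\[
  t_{\mathrm{exec}} = \left(C_{\mathrm{exec}} + I \cdot m \cdot P\right) t_{\mathrm{cycle}}
\]
% Assumed example: I = 10^6 instructions, miss rate m = 0.02,
% miss penalty P = 100 cycles:
%   stall cycles = 10^6 x 0.02 x 100 = 2 x 10^6,
% i.e. with CPI = 1 the program spends twice as many cycles
% stalling as it does executing instructions.
```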

23 Flexible Placement of Cache Blocks
Direct mapped cache
- a memory block can go in exactly one place in the cache
- use the tag to identify the referenced word
- easy to implement
Fully associative cache
- a memory block can be placed in any location in the cache
- all entries in the cache are searched in parallel
- expensive to implement (a comparator associated with each cache entry)
Set-associative cache
- a memory block can be placed in a fixed number of locations
- n locations: n-way set-associative cache
- a block is mapped to a set and can be placed in any element of that set
- the elements of the set are searched
- simpler to implement than a fully associative cache
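A minimal sketch of the set-index computation (block-addressed; the names are assumptions):

```c
#include <stdint.h>

/* For a set-associative cache a block can sit anywhere inside its set:     */
/*   set index = block number modulo number of sets                         */
/* Direct mapped is the special case of one block per set, and fully        */
/* associative is the special case of a single set.                         */
uint32_t set_index(uint32_t block_number, uint32_t num_blocks, uint32_t ways)
{
    uint32_t num_sets = num_blocks / ways;  /* e.g. 8 blocks, 2-way -> 4 sets */
    return block_number % num_sets;
}
```

With the example on the next slide, block 12 of an 8-block cache lands in location 12 mod 8 = 4 when direct mapped (one block per set) and in set 12 mod 4 = 0 when 2-way set-associative (four sets).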

24 Cache Types
We are looking at block 12 in an 8-block cache: 12 mod 8 = 4, 12 mod 4 = 0.
(Figure: where block 12 may be placed in a direct-mapped, a 2-way set-associative, and a fully associative cache — tag, data and block numbers 0–7; the search covers one block, one set, or the whole cache.)

25 Mapping of an Eight-Block Cache
(Figure: an eight-block cache configured as one-way set associative (direct mapped), two-way, four-way, and eight-way set associative (fully associative) — tag and data layout for each configuration.)

26 Performance Improvement
Associativity reduces high miss rates.
(Table: instruction, data, and combined miss rates for gcc and spice at associativities 1, 2, and 4.)

27 Locating a Block
Address portions (tag | index | block offset):
- the index selects the set
- the tag chooses the block by comparison
- the block offset is the address of the data within the block
The costs of an associative cache:
- comparators and multiplexers
- time for comparison and selection

28 Four-Way Set-Associative Cache
(Figure: a four-way set-associative cache — the address index selects a set, the four tags are compared in parallel against the valid entries, and a 4-to-1 multiplexer selects the data on a hit.)

29 Replacement Strategy Replacement is needed in associative caches.
- Random
- First-in-first-out: the oldest block is replaced
- First-in-not-used-first-out: the oldest of the blocks not accessed since the previous replacement is replaced
- LRU (Least Recently Used): the block that has been unused for the longest time is replaced

30 Random vs. LRU
Random
- simple to implement
- almost as good as the other algorithms
LRU (Least Recently Used)
- 2-way set-associative: simple implementation (one bit per set)
- 4-way set-associative: approximated to keep the implementation reasonably simple

31 Pseudo LRU for a Four-Way Set-Associative Cache
Approximation of LRU, implemented with 3 bits per set.
When replacement is needed: check B1, then check B2 or B3, and replace block 0, 1, 2 or 3 accordingly (a two-level decision tree).
At every cache access, two of the bits are updated to point away from the MRU block.
It always chooses the best or second-best choice.
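One possible realisation of the 3-bit tree, as a sketch (the meaning of the 0/1 values of B1–B3 is my assumption; only the structure follows the slide):

```c
#include <stdint.h>

/* Tree pseudo-LRU for one 4-way set, 3 bits per set (B1, B2, B3).        */
/* B1 picks the half to evict from (0 -> blocks 0/1, 1 -> blocks 2/3),    */
/* B2 picks within blocks 0/1, B3 picks within blocks 2/3.                */
struct plru { uint8_t b1, b2, b3; };     /* each bit is 0 or 1 */

/* Choose the victim block (0..3) by walking the tree, as on the slide.   */
int plru_victim(const struct plru *s)
{
    if (s->b1 == 0)
        return s->b2 == 0 ? 0 : 1;
    else
        return s->b3 == 0 ? 2 : 3;
}

/* On every access, update two bits to point *away* from the MRU block.   */
void plru_touch(struct plru *s, int block)
{
    if (block <= 1) { s->b1 = 1; s->b2 = (block == 0); }
    else            { s->b1 = 0; s->b3 = (block == 2); }
}
```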

32 Performance (SPEC92)
(Figure: SPEC92 miss rates for one-way, two-way, four-way and eight-way associativity across a range of cache sizes.)

33 Multilevel Caches Usually two levels:
- the L1 cache is often on the same chip as the processor
- the L2 cache is usually off-chip
- the miss penalty goes down if the data is in the L2 cache
Example: CPI of 1.0 on a 500 MHz machine with a 200 ns main-memory access time gives a miss penalty of 100 clock cycles. Adding a second-level cache with a 20 ns access time reduces the miss penalty to 10 clock cycles.
Using multilevel caches:
- minimise the hit time on L1
- minimise the miss rate on L2
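The miss penalties in the example follow directly from the cycle time (a worked check):

```latex
% 500 MHz clock  ->  cycle time = 1 / (500 MHz) = 2 ns
\[
  t_{\mathrm{cycle}} = 2\,\mathrm{ns},\qquad
  \frac{200\,\mathrm{ns}}{2\,\mathrm{ns}} = 100\ \text{cycles (main memory)},\qquad
  \frac{20\,\mathrm{ns}}{2\,\mathrm{ns}} = 10\ \text{cycles (L2)}
\]
```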

34 Virtual Memory
Main memory can act as a “cache” for the secondary storage:
- a large virtual address space is used in each program
- a smaller main memory
Motivations:
- efficient and safe sharing of memory among multiple programs
- removing the programming burden of a small main memory
Advantages:
- the illusion of having more physical memory
- program relocation
- protection

35 Virtual Memory
(Figure: virtual addresses are translated to physical addresses in main memory or to disk addresses.)

36 Pages: virtual memory blocks
- the CPU produces a virtual address
- it is translated by a combination of hardware and software to a physical address
- virtual address: virtual page number and page offset
- physical address: physical page number and page offset
- page fault: the data is not in memory; retrieve it from disk
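A minimal sketch of the split and the translation (the 4 KB page size and the flat page-table layout are assumptions, not from the slides):

```c
#include <stdint.h>

#define PAGE_BITS 12u                          /* assumed 4 KB pages */
#define PAGE_OFFSET_MASK ((1u << PAGE_BITS) - 1u)

struct pte { uint32_t valid : 1, ppn : 20; };  /* one page-table entry */

/* Translate a virtual address: the page offset passes through unchanged, */
/* the virtual page number indexes the page table to obtain the physical  */
/* page number.  A cleared valid bit would mean a page fault (slide 39).  */
uint32_t translate(uint32_t vaddr, const struct pte *page_table)
{
    uint32_t vpn    = vaddr >> PAGE_BITS;
    uint32_t offset = vaddr & PAGE_OFFSET_MASK;
    /* if (!page_table[vpn].valid) -> page fault, handled by the OS */
    return (page_table[vpn].ppn << PAGE_BITS) | offset;
}
```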

37 Address Translation
(Figure: the virtual page number is translated to a physical page number; the page-offset bits pass through unchanged.)

38 Virtual Memory Design
- huge miss penalty, thus pages should be fairly large (several kB or more)
- reducing page faults is important (LRU is worth the price)
- page faults can be handled in software instead of hardware: the overhead is small compared to the disk access time
- virtual memory systems use write-back; using write-through would be too expensive
- the dirty bit indicates whether a page needs to be copied back when it is replaced; it is initially cleared and set when the page is first written

39 Page Tables
(Figure: the page table maps virtual page numbers to pages in physical memory or on disk storage; each entry has a valid bit.)

40 Page Table Details
(Figure: the virtual page number indexes the page table to obtain the physical page number, which is combined with the page offset.)

41 Making Address Translation Fast
A cache for address translations: the translation lookaside buffer (TLB).
(Figure: the TLB caches recently used page-table entries — valid bit, tag and physical page address; on a TLB miss the page table in memory is consulted.)

42 MIPS R2000 TLB and Cache
(Figure: the virtual page number is looked up in the TLB — valid bit, tag, physical page number; the resulting physical address then indexes the cache, with tag comparison, valid bit, data and hit signal.)

43 TLBs and Caches
(Figure: flowchart of a memory access through the TLB and the cache, with separate read and write paths; on a hit the data is delivered to the CPU.)

44 Protection and Virtual Memory
Multiple processes and the operating system share a single main memory, so memory protection is provided:
- a user process cannot access other processes’ data
- the operating system takes care of system administration (page tables, TLBs)

45 Hardware Requirements for Protection
At least two operating modes:
- user process
- operating system process (also called kernel, supervisor or executive process)
A portion of the CPU state that a user process can read but not write:
- user/supervisor mode bit
- page table pointer
- TLB
Mechanisms for going from user mode to supervisor mode, and vice versa:
- system call or exception
- return from exception

46 Handling Page Faults and TLB Misses
TLB miss:
- page present in memory → create the missing TLB entry
- page not present in memory → page fault → transfer control to the operating system
Look at the matching page table entry:
- valid bit on → copy the page table entry from memory into the TLB
- valid bit off → page fault exception
Page fault:
- the EPC contains the virtual address of the faulting page
- find the page and move it into memory after choosing a page to be replaced
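The decision sequence on this slide as a sketch; every helper function is hypothetical and stands in for hardware or operating-system mechanisms:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helpers standing in for hardware and OS mechanisms. */
bool     tlb_lookup(uint32_t vaddr);
uint32_t tlb_translate(uint32_t vaddr);
bool     page_table_entry_valid(uint32_t vaddr);
void     tlb_refill_from_page_table(uint32_t vaddr);
void     os_handle_page_fault(uint32_t vaddr);

uint32_t access_vaddr(uint32_t vaddr)
{
    if (tlb_lookup(vaddr))                  /* TLB hit */
        return tlb_translate(vaddr);

    if (page_table_entry_valid(vaddr)) {    /* TLB miss, page is resident */
        tlb_refill_from_page_table(vaddr);  /* copy the PTE into the TLB  */
        return tlb_translate(vaddr);
    }

    os_handle_page_fault(vaddr);            /* page fault: OS loads page  */
    return access_vaddr(vaddr);             /* then the access is retried */
}
```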

