1 Cache and Caching
David Sands, CS 147, Spring 2008, Dr. Sin-Min Lee
2 The Cache
- Provides faster memory access times.
- Remembers frequently accessed data.
- A block of memory used for temporary storage or indexing.
3 What are Pages?
- Not made of paper: a page is just a fixed-size block of data being accessed.
- Pages move from disk into RAM (or another container) as needed.
- When we run out of page space, what do we do?
4 Page Replacement Algorithms
- Optimal -- evict the page that will not be used for the longest time into the future. (Ideal, but requires knowing future accesses.)
- First In, First Out (FIFO) -- evict pages in the order they arrived, using a queue.
- Second-Chance -- FIFO with a reference bit. If the eviction candidate's reference bit is set, clear the bit and move the page to the back of the queue; if the bit is clear, evict it.
- 'Clock' -- just like Second-Chance, but uses a 'hand' pointer that sweeps a circular queue instead of moving pages to the back.
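The Second-Chance policy above can be sketched as a short simulation. This is a minimal illustration, not code from the slides; the function name and interface are my own.

```python
from collections import deque

def second_chance(frames, accesses):
    """Simulate second-chance page replacement.

    frames: number of page frames available.
    accesses: sequence of page numbers referenced, in order.
    Returns the number of page faults.
    """
    queue = deque()   # pages in FIFO arrival order
    ref_bit = {}      # page -> reference bit
    faults = 0
    for page in accesses:
        if page in ref_bit:
            ref_bit[page] = True      # hit: give the page a second chance
            continue
        faults += 1
        if len(queue) == frames:
            # Skip over pages whose reference bit is set, clearing the
            # bit and re-queuing them, until an unreferenced victim is found.
            while True:
                victim = queue.popleft()
                if ref_bit[victim]:
                    ref_bit[victim] = False
                    queue.append(victim)
                else:
                    del ref_bit[victim]
                    break
        queue.append(page)
        ref_bit[page] = False
    return faults
```

With two frames and the access string 1, 2, 1, 3, the re-reference of page 1 sets its bit, so page 2 is evicted instead of page 1 when page 3 arrives.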
5 More Algorithms
- Not Recently Used (NRU) -- favors keeping pages that were recently referenced or modified (tracked with referenced/modified bits).
- Least Recently Used (LRU) -- assumes recently used pages will be used again in the near future, so it evicts the page unused for the longest time. Hard to implement exactly.
- Random -- swaps out a random page. Fast; performance compares reasonably with FIFO and LRU.
- Not Frequently Used (NFU) -- keeps a counter of uses per page and swaps out the underutilized pages.
- Aging -- NFU with favoritism for recently referenced pages: counters decay over time, so old pages are swapped out first.
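LRU, noted above as hard to implement in hardware, is easy to sketch in software with an ordered dictionary. This is an illustrative class of my own, not something from the slides.

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: on overflow, evict the entry
    that has gone unused the longest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key, default=None):
        if key not in self.data:
            return default            # miss
        self.data.move_to_end(key)    # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```

For example, with capacity 2, inserting a and b, touching a, then inserting c evicts b, because a was used more recently.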
6 Primitive Example
- Fetch D, then C, then B, then A from the tree in memory.
- What we access is chained above the tree into a tree-like cache.
- Under the FIFO algorithm, D is replaced first when no cache space is left.
7 Cache Types
- Memory cache -- between RAM and the CPU.
- Disk cache -- between disk and the CPU.
- Other varieties: memory, hardware, software, disk, page, and virtual-memory caches.
8 Disk Cache
- The hard disk has its own buffer cache.
- The page cache is controlled by the kernel.
9 Other Cache Examples
- DNS daemon -- caches IP address mappings.
- Web browser -- caches recently visited websites.
- Search engines -- cache popular sites.
- Databases -- cache indexes and the data dictionary.
10 L1, L2, L3 Cache
- Provides tiers of cache memory.
- As memory size and distance from the CPU increase, access time becomes longer: a cost-benefit trade-off.
- An L3 cache is not required, but its larger storage makes it desirable.
11 L1, L2, L3 Cache (cont.)
- L1: inside the processor chip (close to the registers).
- L2: outside the processor core (can be on the motherboard).
- L3: between L2 and main memory.
12 Cache Write Policy
- A datum is written to the cache; how do we update the entry in main memory?
- Write-through -- every write updates main memory immediately along with the cache. Keeps memory consistent, but overloads the bus with multiple requests.
- Write-back -- writes update only the cache; main memory receives the final data when the line is evicted. Reduces bus traffic, but hides an inconsistency until then.
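The difference between the two write policies can be sketched with two toy classes. The class names, the dict-as-memory model, and the explicit flush() are my own simplifications; real hardware operates on cache lines and triggers write-back on eviction.

```python
class WriteThroughCache:
    """Every write goes to both the cache and the backing memory,
    so memory is always consistent -- at the cost of bus traffic."""

    def __init__(self, memory):
        self.memory = memory   # backing store (a dict of addr -> value)
        self.cache = {}

    def write(self, addr, value):
        self.cache[addr] = value
        self.memory[addr] = value   # immediate update of main memory


class WriteBackCache:
    """Writes stay in the cache (marked dirty) and reach memory only
    on a flush, reducing bus traffic but hiding an inconsistency."""

    def __init__(self, memory):
        self.memory = memory
        self.cache = {}
        self.dirty = set()

    def write(self, addr, value):
        self.cache[addr] = value
        self.dirty.add(addr)        # main memory is now stale

    def flush(self):
        # In hardware this happens when a dirty line is evicted.
        for addr in self.dirty:
            self.memory[addr] = self.cache[addr]
        self.dirty.clear()
```

After a write-through write, memory already holds the new value; after a write-back write, memory holds nothing until flush() runs.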
13 Hit or Miss? (Searching)
- Does the tag of the desired address match a tag stored in the cache?
-- If so, use the data from the cache: a HIT.
-- Otherwise, fetch the data from main memory: a MISS.
- Hit ratio = the percentage of accesses that hit.
14 Miss Rate vs. Cache Size
- Miss rate = 1.00 - hit rate.
- The miss rate also depends on the method of mapping cache entries to data elements.
15 Time Analysis for One L1 Cache
- Avg. cost = r*C_h + (1 - r)*C_m
-- r = hit ratio
-- C_h = L1 cache access time
-- C_m = main memory access time
- This is an expected value: each access time is weighted by its probability.
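The expected-cost formula for a single cache level can be checked numerically. The function name and the sample numbers (90% hit ratio, 1-cycle cache, 100-cycle memory) are illustrative assumptions, not figures from the slides.

```python
def avg_access_time(hit_ratio, cache_time, memory_time):
    """Expected access time with one cache level: r*C_h + (1 - r)*C_m."""
    return hit_ratio * cache_time + (1 - hit_ratio) * memory_time
```

With r = 0.9, C_h = 1 cycle, and C_m = 100 cycles, the average cost is 0.9*1 + 0.1*100 = 10.9 cycles, showing how even a small miss rate dominates the average when memory is slow.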
16 Multiple Cache Analysis
- Just extend the expected-value formula.
- For an L1 and L2 cache setup:
-- Avg. cost = r_1*C_h1 + r_2*C_h2 + (1 - r_1 - r_2)*C_m
-- subscript 1 = L1 cache, subscript 2 = L2 cache
- Probability of a main-memory fetch = 1 - r_1 - r_2.
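The two-level formula extends the single-level one in the obvious way. Again, the function name and sample numbers are illustrative assumptions.

```python
def avg_access_time_l2(r1, r2, ch1, ch2, cm):
    """Expected access time with L1 and L2 caches:
    r1*C_h1 + r2*C_h2 + (1 - r1 - r2)*C_m,
    where 1 - r1 - r2 is the probability of a main-memory fetch."""
    return r1 * ch1 + r2 * ch2 + (1 - r1 - r2) * cm
```

For example, with r1 = 0.8, r2 = 0.15, C_h1 = 1, C_h2 = 10, and C_m = 100 cycles, the average cost is 0.8 + 1.5 + 5.0 = 7.3 cycles.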
17 Questions?
18 References
http://en.wikipedia.org/wiki/Cache
http://en.wikipedia.org/wiki/Paging
http://en.wikipedia.org/wiki/Page_replacement_algorithm
http://en.wikipedia.org/wiki/CPU_cache