Cache Memory By JIA HUANG. "Computer Science has only three ideas: cache, hash, trash." - Greg Ganger, CMU.



The idea of caching Use a small, fast memory so that recently and frequently used data can be accessed in very little time. Cache: fast but expensive. Disks: cheap but slow.

Types of cache CPU cache Disk cache Proxy web cache Other caches

Usage of caching Caching is used widely in: storage systems, databases, web servers, middleware, processors, operating systems, RAID controllers, and many other applications.

Cache Algorithms Famous algorithms: LRU (Least Recently Used), LFU (Least Frequently Used). Not-so-famous algorithms: LRU-K, 2Q, FIFO, and others.

LRU (Least Recently Used) Discards the least recently used items first. Usually implemented with a linked list (often paired with a hash map for constant-time lookup).
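As an illustrative sketch (not from the slides), LRU can be written in a few lines of Python; `OrderedDict` plays the role of the linked list, with `LRUCache` being a hypothetical name:

```python
from collections import OrderedDict

class LRUCache:
    """Least Recently Used cache: evicts the item untouched for longest."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # order of entries tracks recency

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # discard the LRU item
```

Every access moves the item to the most-recently-used end of the list, so the item at the other end is always the eviction victim.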

LFU (Least Frequently Used) Counts how often each item is needed; the items used least often are discarded first.
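A minimal LFU sketch in the same spirit (class and method names are our own, and ties among least-frequent items are broken by least recent use, which is one common convention):

```python
from collections import defaultdict

class LFUCache:
    """Least Frequently Used cache: evicts the item with the fewest accesses."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.freq = defaultdict(int)   # access count per key
        self.last_used = {}            # logical timestamp per key
        self.clock = 0

    def _touch(self, key):
        self.freq[key] += 1
        self.clock += 1
        self.last_used[key] = self.clock

    def get(self, key):
        if key not in self.data:
            return None
        self._touch(key)
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # Evict the lowest-frequency key; break ties by oldest use.
            victim = min(self.data, key=lambda k: (self.freq[k], self.last_used[k]))
            del self.data[victim]
            del self.freq[victim]
            del self.last_used[victim]
        self.data[key] = value
        self._touch(key)
```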

LRU vs. LFU The fundamental locality principle says that if a process visits a memory location, it will probably revisit that location and its neighborhood soon. The advanced locality principle adds that the probability of revisiting grows with the number of previous visits.

Disadvantages LRU: a process that scans a huge database touches every page exactly once, flushing the useful contents of the cache. LFU: pages that were popular in the past keep high counts and linger in the cache after they stop being used, which can make performance even worse when the access pattern changes.
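The LRU scan problem is easy to demonstrate. The sketch below (illustrative; `lru_trace` and the trace are invented for this example) replays an access trace through a small LRU cache: a one-off sequential scan evicts a hot working set, turning would-be hits into misses:

```python
from collections import OrderedDict

def lru_trace(capacity, accesses):
    """Replay an access trace through an LRU cache; return the hit count."""
    cache, hits = OrderedDict(), 0
    for page in accesses:
        if page in cache:
            hits += 1
            cache.move_to_end(page)
        else:
            cache[page] = True
            if len(cache) > capacity:
                cache.popitem(last=False)
    return hits

# A small working set that fits comfortably in a 4-page cache...
working_set = ['a', 'b', 'c', 'a', 'b', 'c']
# ...interrupted by a one-off sequential scan of 10 cold pages.
scan = ['page%d' % i for i in range(10)]
# The scan evicts 'a', 'b', 'c', so the first pass over the working
# set afterwards misses on every page it had recently cached.
print(lru_trace(4, working_set + scan + working_set))  # 6 hits
print(lru_trace(4, working_set + working_set))         # 9 hits without the scan
```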

Can there be a better algorithm? Yes.

New algorithm ARC (Adaptive Replacement Cache) combines the virtues of LRU and LFU while avoiding the vices of both. The basic idea behind ARC is to adaptively, dynamically and relentlessly balance between "recency" and "frequency" to achieve a high hit ratio. Invented at IBM's Almaden Research Center (San Jose) in 2003.

How does it work?

ARC maintains two LRU lists. L1: pages seen once recently ("recency"). L2: pages seen at least twice recently ("frequency"). If L1 contains exactly c pages, replace the LRU page in L1; otherwise replace the LRU page in L2. Lemma: the c most recent pages are in the union of L1 and L2.

ARC Divide L1 into T1 (top) and B1 (bottom), and L2 into T2 (top) and B2 (bottom). T1 and T2 together contain the c pages that are in the cache and in the directory; B1 and B2 contain pages that are in the directory but not in the cache. If T1 contains more than p pages, replace the LRU page in T1; otherwise replace the LRU page in T2.

Adapt the target size of T1 to the observed workload. A self-tuning algorithm: hit in T1 or T2: do nothing; hit in B1: increase the target size of T1; hit in B2: decrease the target size of T1. L1 captures "recency", L2 captures "frequency"; p is the adaptive midpoint between them.
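The rules above can be sketched in Python. This is an illustrative simplification following the published ARC pseudocode (Megiddo and Modha); the class and method names are our own, and `//` integer division stands in for the paper's ratio in the adaptation step:

```python
from collections import OrderedDict

class ARCCache:
    """Sketch of ARC. T1/T2 hold cached pages seen once/repeatedly;
    B1/B2 are "ghost" lists of recently evicted keys (metadata only);
    p is the adaptive target size of T1."""
    def __init__(self, c):
        self.c, self.p = c, 0
        self.t1, self.t2 = OrderedDict(), OrderedDict()  # cached pages
        self.b1, self.b2 = OrderedDict(), OrderedDict()  # ghost entries

    def _replace(self, key):
        # Evict from T1 if it exceeds its target p, else from T2.
        if self.t1 and (len(self.t1) > self.p or
                        (key in self.b2 and len(self.t1) == self.p)):
            old, _ = self.t1.popitem(last=False)
            self.b1[old] = None           # remember it in ghost list B1
        else:
            old, _ = self.t2.popitem(last=False)
            self.b2[old] = None           # remember it in ghost list B2

    def access(self, key, value=None):
        """Return True on a cache hit; on a miss, insert (key, value)."""
        if key in self.t1:                # hit in T1: seen twice, promote
            self.t2[key] = self.t1.pop(key)
            return True
        if key in self.t2:                # hit in T2: refresh recency
            self.t2.move_to_end(key)
            return True
        if key in self.b1:                # ghost hit: recency is winning
            self.p = min(self.c, self.p + max(len(self.b2) // len(self.b1), 1))
            self._replace(key)
            del self.b1[key]
            self.t2[key] = value
            return False
        if key in self.b2:                # ghost hit: frequency is winning
            self.p = max(0, self.p - max(len(self.b1) // len(self.b2), 1))
            self._replace(key)
            del self.b2[key]
            self.t2[key] = value
            return False
        # Brand-new key: keep directory sizes bounded, then insert in T1.
        total = len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2)
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False)
                self._replace(key)
            else:
                self.t1.popitem(last=False)
        elif total >= self.c:
            if total == 2 * self.c:
                self.b2.popitem(last=False)
            self._replace(key)
        self.t1[key] = value
        return False
```

Hits in the ghost lists are what drive the self-tuning: a hit in B1 means a page evicted from the recency side would have been useful, so T1's target grows; a hit in B2 shrinks it.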

ARC ARC has low space complexity. A realistic implementation had a total space overhead of less than 0.75%. ARC has low time complexity; virtually identical to LRU. ARC is self-tuning and adapts to different workloads and cache sizes. In particular, it gives very little cache space to sequential workloads, thus avoiding a key limitation of LRU. ARC outperforms LRU for a wide range of workloads.

Example For a huge, real-life workload generated by a large commercial search engine with a 4GB cache, ARC's hit ratio of 40.44 percent was dramatically better than that of LRU. -IBM (Almaden Research Center)

ARC vs. LRU

ARC Currently, ARC is a research prototype and will be available to customers via many of IBM's existing and future products.

References
ing#Other_caches
/2os/2os/os2.pdf
orageSystems/autonomic_storage/ARC/index.shtml
people/dmodha/arc-fast.pdf