Cache Replacement Algorithm 1999.05.04

Cache Replacement Algorithm

Outline
– Existing document replacement algorithms
– Squid's cache replacement algorithm
– Ideal
– Problem

Existing Document Replacement Algorithms
Least-Recently-Used (LRU) – evicts the document which was requested least recently.
Least-Frequently-Used (LFU) – evicts the document which is accessed least frequently.
Size [WASAF96] – evicts the largest document.
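
Below is a minimal sketch (not from the slides) of how each of these three policies picks its eviction victim, assuming every cached document carries its size, last access time, and hit count:

from dataclasses import dataclass

@dataclass
class Doc:
    url: str
    size: int            # size in bytes
    last_access: float   # timestamp of the most recent request
    hits: int            # requests since the document entered the cache

def pick_victim(cache, policy):
    # Pick the document to evict under the three policies listed above.
    if policy == "LRU":       # requested least recently
        return min(cache, key=lambda d: d.last_access)
    if policy == "LFU":       # accessed least frequently
        return min(cache, key=lambda d: d.hits)
    if policy == "SIZE":      # largest document first
        return max(cache, key=lambda d: d.size)
    raise ValueError("unknown policy: " + policy)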

LRU-Threshold [ASAWF95] – the same as LRU, except that documents larger than a certain threshold size are never cached.
Log(Size)+LRU [ASAWF95] – evicts the document which has the largest log(size) and is the least recently used document among all documents with the same log(size).
Hyper-G [WASAF96] – a refinement of LFU with last-access-time and size considerations.
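
A hedged sketch of the first two variants, reusing the Doc record from the sketch above; the 1 MB threshold and the base-2 logarithm are illustrative assumptions, not values from [ASAWF95]:

import math

def lru_threshold_admit(doc, threshold_bytes=1_000_000):
    # LRU-Threshold: a document larger than the threshold is never cached;
    # eviction among the cached documents is plain LRU.
    return doc.size <= threshold_bytes

def log_size_lru_victim(cache):
    # Log(Size)+LRU: evict from the largest floor(log(size)) class,
    # breaking ties within a class by least recent access.
    return max(cache, key=lambda d: (math.floor(math.log2(d.size)), -d.last_access))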

Pitkow/Recker [WASAF96] – removes the least-recently-used document, except if all documents were accessed today, in which case the largest one is removed.
Lowest-Latency-First [WA97] – tries to minimize average latency by removing the document with the lowest download latency first.
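
A sketch of these two policies under the same assumptions as above, plus a hypothetical download_latency field and a today timestamp marking the start of the current day:

def lowest_latency_first_victim(cache):
    # Remove the document with the lowest measured download latency first.
    return min(cache, key=lambda d: d.download_latency)

def pitkow_recker_victim(cache, today):
    # LRU, unless every cached document was already accessed today,
    # in which case the largest document is removed instead.
    if all(d.last_access >= today for d in cache):
        return max(cache, key=lambda d: d.size)
    return min(cache, key=lambda d: d.last_access)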

Hybrid, introduced in [WA97]
– is aimed at reducing the total latency.
– function value: the utility of retaining a given document in the cache; the document with the smallest function value is evicted.
– for a document p located at server s:
  c_s – the time to connect to server s
  b_s – the bandwidth to server s
  n_p – the number of times p has been requested since it was brought into the cache
  z_p – the size (in bytes) of document p
  W_b and W_n are constants.
– Estimates for c_s and b_s are based on the times to fetch documents from server s in the recent past.
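
The slide lists the inputs but not the expression itself; one commonly cited form of the Hybrid utility is (c_s + W_b/b_s) * n_p^W_n / z_p, sketched below with placeholder constants (the W_b and W_n used in [WA97] may differ):

def hybrid_value(c_s, b_s, n_p, z_p, W_b=8000.0, W_n=0.9):
    # Utility of retaining document p from server s:
    #   c_s - time to connect to server s
    #   b_s - bandwidth to server s
    #   n_p - number of requests to p since it entered the cache
    #   z_p - size of p in bytes
    # The cached document with the smallest value is the eviction victim.
    return (c_s + W_b / b_s) * (n_p ** W_n) / z_p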

Lowest Relative Value (LRV) [LRV97]
– LRV takes into account the locality, cost, and size of a document.
– function value: the utility of keeping a document in the cache; the document with the lowest value is evicted.
– the value is based on extensive empirical analysis of trace data:
  Pi – the probability that a document is requested i+1 times, given that it has been requested i times
  Di – the total number of documents seen so far which have been requested at least i times in the trace
  Pi is estimated in an online manner as the ratio Di+1/Di
  Pi(s) – the same as Pi, except the value is determined by restricting the count to pages of size s

1 − D(t) – the probability that a page is requested again, as a function of the time t (in seconds) since its last request, where
  D(t) = 0.035·log(t + 1) + 0.45·(1 − e^(−t/2·10^6))
For a document d of size s and cost c:
  i – the last request to d was the i-th request to it
  t – the last request to d was made t seconds ago
d's value in LRV:
  V(i, t, s) = P1(s)·(1 − D(t))·c/s   if i = 1
  V(i, t, s) = Pi·(1 − D(t))·c/s      otherwise
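
A sketch of the value computation as written above; the Pi and P1(s) estimates are assumed to be maintained online by the caller (e.g. as the ratios Di+1/Di):

import math

def D(t):
    # Probability that a page is NOT requested again, as a function of the
    # time t (in seconds) since its last request.
    return 0.035 * math.log(t + 1) + 0.45 * (1 - math.exp(-t / 2e6))

def lrv_value(i, t, s, c, P, P1_of_size):
    # Value of keeping a document of size s and cost c whose last request
    # was its i-th and happened t seconds ago.
    #   P[i]          - online estimate of Pi
    #   P1_of_size(s) - online estimate of P1 restricted to pages of size s
    p = P1_of_size(s) if i == 1 else P[i]
    return p * (1 - D(t)) * c / s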

Squid's Cache Replacement Algorithm: LRU
When selecting objects for removal, Squid
– examines some number of objects and
– determines which can be removed and which cannot:
  If the object is currently being requested, or retrieved from an upstream site, it will not be removed.
  If the object is "negatively cached", it will be removed.
  If the object has a private cache key, it will be removed.
  Finally, if the time since last access is greater than the LRU threshold, the object is removed.
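
A rough sketch of that per-object test; the field names are hypothetical and do not correspond to Squid's actual structure members:

def may_remove(obj, now, lru_threshold):
    # One examined object: mirror of the checks listed above.
    if obj.being_requested or obj.being_fetched_upstream:
        return False                  # in use: never remove
    if obj.negatively_cached:
        return True                   # cached error response: remove
    if obj.has_private_key:
        return True                   # private cache key: remove
    return (now - obj.last_access) > lru_threshold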

The LRU threshold value is dynamically calculated based on the current cache size and the low and high water marks (90% and 95%).
– The LRU threshold scales exponentially between the high and low water marks; when the store swap size is near the low water mark, the LRU threshold is at its largest.
– The LRU threshold represents roughly how long it takes to fill (or fully replace) the cache at the current request rate (typically 1 to 10 days).
Squid 1.1 vs. Squid-2
– Squid 1.1: cache storage is implemented as a hash table with some number of "hash buckets"; the replacement code scans one bucket at a time and sorts all the objects in the bucket by their LRU age.
– Squid-2: eliminates the need for qsort() by indexing cached objects into an automatically sorted linked list; every time an object is accessed, it is moved to the top of the list.
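
A small sketch of the Squid-2 idea, with an OrderedDict standing in for the automatically sorted linked list: every access moves the object to the head, so the tail is always the LRU candidate and no qsort() pass is needed.

from collections import OrderedDict

class LruIndex:
    def __init__(self):
        self._objs = OrderedDict()    # head = most recently used

    def touch(self, key, obj=None):
        # On insert and on every access, move the object to the top of the list.
        if key not in self._objs:
            self._objs[key] = obj
        self._objs.move_to_end(key, last=False)

    def lru_victim(self):
        # The tail of the list is always the least recently used object.
        key = next(reversed(self._objs))
        return key, self._objs[key]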

Ideal
– With the same document size: remove the document with the lowest download latency first.
– With the same download latency: remove the largest document first.
– In general: remove the document with the largest rate R first, where
  R = Zp / Ttot
  Zp – the size of document p
  Ttot – the total latency time
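
A sketch that covers both cases by evicting the document with the largest rate R, assuming each cached entry records its size (Zp) and its total fetch latency (Ttot):

def ideal_victim(cache):
    # R = Zp / Ttot: document size divided by its total download latency.
    # Same size    -> the lowest-latency document has the largest R.
    # Same latency -> the largest document has the largest R.
    return max(cache, key=lambda d: d.size / d.total_latency)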

Problem
– Applying the Hybrid algorithm requires per-server latency and bandwidth estimates.
– The contents of Squid's access log provide only the elapsed time of each request.