A Case for Delay-conscious Caching of Web Documents Peter Scheuermann, Junho Shim, Radek Vingralek Department of Electrical and Computer Engineering, Northwestern University.


A Case for Delay-conscious Caching of Web Documents Peter Scheuermann, Junho Shim, Radek Vingralek Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL; Oracle Corporation, 400 Oracle Parkway, Box , Redwood Shores, CA. Presented by Chun-Fu Kung, Systems Laboratory, Yuan Ze University, 1999/12/1

Outline ⊙ Introduction ⊙ Design ⊙ Experimental Evaluation ⊙ Conclusion

Introduction
⊙ The World Wide Web has become the predominant client/server architecture.
⊙ The high response time perceived by Web clients is caused primarily by long communication delays, although other factors (such as slow service times) also contribute.
⊙ The communication delay can be reduced to a certain extent by buying links with higher bandwidth and by improving the efficiency of communication protocols.
⊙ One of the most effective ways of reducing the communication delay is by employing caching.

Cache Replacement
⊙ Documents can be cached at the clients, as is done currently by most Web browsers, or by the Web servers themselves, which is most useful when a server contains many pointers to other servers.
⊙ In an attempt to tune the performance of caching proxies, several techniques have been used, such as avoiding caching of documents that originate at nearby servers, and hierarchies of caches.
⊙ Cache replacement algorithms usually maximize the cache hit ratio by attempting to cache the data items which are most likely to be referenced in the future.
⊙ However, maximizing the cache hit ratio alone does not guarantee the best client response time in the Web environment.

New Algorithm
⊙ We define a new performance metric called delay-savings ratio, which generalizes the hit ratio metric by explicitly considering cache miss costs.
⊙ We describe a new cache replacement algorithm, LNC-R, which maximizes the delay-savings ratio.
⊙ The LNC-R cache replacement algorithm approximates the optimal cache replacement algorithm.
⊙ The design of LNC-R relies on a solid theoretical foundation.

Parameters and Functions
。 d_i is the average delay to fetch document i to the cache
。 r_i is the total number of references to document i
。 h_i is the number of references to document i which were satisfied from the cache
。 λ_i is the average rate of reference to document i
。 s_i is the size of document i
(1) delay savings ratio: DSR = Σ_i d_i·h_i / Σ_i d_i·r_i
(2) λ_i = K / (t − t_K), where t is the current time and t_K is the time of the K-th most recent reference to document i
(3) profit(i) = λ_i·d_i / s_i
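As an illustration of the delay savings ratio of equation (1), a minimal Python sketch; the per-document statistics d_i, h_i, r_i are supplied as plain tuples, and the function name and data layout are ours, not the paper's:

```python
def delay_savings_ratio(docs):
    """docs: list of (d_i, h_i, r_i) per document -- average fetch delay,
    references satisfied from the cache, and total references."""
    saved = sum(d * h for d, h, r in docs)   # delay avoided by cache hits
    total = sum(d * r for d, h, r in docs)   # delay if nothing were cached
    return saved / total

# A slow document mostly served from cache and a fast one rarely served:
print(delay_savings_ratio([(2.0, 8, 10), (0.5, 1, 10)]))  # prints 0.66
```

Unlike the plain hit ratio, a hit on the slow (d = 2.0) document counts four times as much as a hit on the fast one, which is exactly the miss-cost weighting the metric introduces.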

LNC-R Algorithm
⊙ LNC-R (Least Normalized Cost Replacement) selects for replacement the least profitable documents.
⊙ LNC-R sorts all documents held in the cache in ascending order of profit and selects the candidates for eviction in the sort order.
⊙ LNC-R simply tries to maximize the delay savings ratio by maximizing the profit from each unit of storage.
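A minimal sketch of this eviction order (an illustrative data layout, not the paper's implementation): documents are sorted by ascending profit = rate × delay / size and evicted until enough space is freed.

```python
def lnc_r_evict(cache, need):
    """Evict documents in ascending order of profit = rate * delay / size
    until at least `need` bytes are freed.
    `cache` maps doc id -> (rate, delay, size)."""
    by_profit = sorted(cache, key=lambda i: cache[i][0] * cache[i][1] / cache[i][2])
    freed, victims = 0, []
    for doc in by_profit:
        if freed >= need:
            break
        freed += cache[doc][2]          # size of the evicted document
        victims.append(doc)
    for doc in victims:
        del cache[doc]
    return victims

# 'c' has the lowest profit, 'b' the highest; freeing 250 bytes evicts 'c' then 'a'.
cache = {'a': (0.1, 1.0, 100), 'b': (0.5, 2.0, 50), 'c': (0.01, 0.5, 200)}
print(lnc_r_evict(cache, 250))  # prints ['c', 'a']
```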

LNC-R-W3 Algorithm
⊙ Several studies of Web reference patterns show that Web clients exhibit a strong preference for accessing small documents.
(4) reference rate as a function of size: λ ∝ 1 / s^b
(5) λ_i = K / ((t − t_K)·s_i^b)
(6) profit(i) = λ_i·d_i / s_i
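The size-adjusted reference rate estimate can be sketched as follows, assuming (per the paper's sliding-window approach) that the last K reference times per document are retained; the function name and layout are ours:

```python
def lncrw3_rate(ref_times, now, size, K, b):
    """Estimate the reference rate of a document from the K most recent
    reference times, discounted by size**b (small-document preference)."""
    window = sorted(ref_times)[-K:]   # K most recent reference times
    t_k = window[0]                   # time of the K-th most recent reference
    return K / ((now - t_k) * size ** b)

# Two references in the last 5 time units to a size-2 document, b = 1:
print(lncrw3_rate([1, 3, 5, 9], now=10, size=2, K=2, b=1))  # prints 0.2
```

With b > 0 a large document must be referenced proportionally more often than a small one to earn the same estimated rate, which encodes the small-document preference.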

Experimental Setup
⊙ We evaluated the performance of LNC-R-W3 on a client trace collected at Northwestern University.
⊙ We concentrate on two aspects:
。 the dependence of reference rate on document size
。 the correlation between the delay to fetch a document to the cache and the size of the document
⊙ Previously published trace analyses show that small files are much more frequently referenced than large files.
⊙ The correlation between the document size and the delay to fetch the document is defined as ρ = cov(s, d) / √(var(s)·var(d))

Experimental Setup (cont'd)
⊙ Some parameters:
。 cov(s, d) is the covariance between size and delay
。 var(s) is the variance of size
。 var(d) is the variance of delay
。 the correlation shows whether the delay to fetch a document to the cache varies across documents of similar size
⊙ The correlation we measured on our trace is relatively low. Therefore, delay-conscious caching is indeed necessary.
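The correlation referred to above is the standard Pearson coefficient built from exactly these terms; a self-contained sketch:

```python
def correlation(sizes, delays):
    """Pearson correlation coefficient between document sizes and fetch
    delays: cov(s, d) / sqrt(var(s) * var(d))."""
    n = len(sizes)
    ms, md = sum(sizes) / n, sum(delays) / n
    cov = sum((s - ms) * (d - md) for s, d in zip(sizes, delays)) / n
    var_s = sum((s - ms) ** 2 for s in sizes) / n
    var_d = sum((d - md) ** 2 for d in delays) / n
    return cov / (var_s * var_d) ** 0.5

# Perfectly correlated size/delay data gives a coefficient close to 1:
print(round(correlation([1, 2, 3], [2, 4, 6]), 6))  # prints 1.0
```

A value near 0, as measured on the trace, means fetch delay varies widely even among documents of similar size, which is the case that motivates delay-conscious replacement.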

Experimental Setup (cont'd)

Performance Metrics
⊙ We also use cache hit ratio (HR) as a secondary metric, defined as HR = Σ_i h_i / Σ_i r_i, where:
。 h_i is the number of references to document i which were satisfied from the cache
。 r_i is the total number of references to document i
⊙ LRU-MIN exploits the preference of Web clients for accessing small documents. LRU-MIN does not consider the delay to fetch documents to the cache and estimates the reference rate to each document using only the time of the last reference.

Parameter K
⊙ Increasing the value of K improves the reliability of the reference rate estimates.
⊙ Large values of K also result in higher spatial overhead to store the reference samples.
⊙ The transition from K=1 to K=2 is particularly sharp, since LNC-R-W3 with K=1 does not need to retain any reference samples after eviction of the corresponding documents.
⊙ We conjecture that K should be set to 2 or 3 to obtain the best performance.

Parameter K (cont'd)

Parameter b
⊙ Parameter b determines how strongly the reference rate depends on document size.
⊙ The higher the value of b, the stronger the assumed preference of clients for accessing small documents.
⊙ We conjecture that for the best performance on most Web workloads, b should be set between 1 and 2.

Parameter b (cont'd)

Performance Comparison

Conclusion
⊙ The main contribution of this paper is to show the importance of delay-conscious cache replacement algorithms for Web document cache management.
⊙ To address the need for the design of delay-conscious cache replacement algorithms, we developed a new cache replacement algorithm, LNC-R-W3.
⊙ The experimental results indicate that LNC-R-W3 provides consistently better performance than LRU and LRU-MIN.