Client Cache Management
Improving the broadcast for one client's access probability distribution will hurt the performance of other clients with different access distributions. Therefore, client machines need to cache pages obtained from the broadcast.

Client Cache Management
With traditional caching, clients cache the data most likely to be accessed in the future. With Broadcast Disks, traditional caching may lead to poor performance if the server's broadcast is poorly matched to the client's access distribution.

Client Cache Management
In the Broadcast Disk system, clients instead cache the pages for which the local probability of access is higher than the frequency of broadcast. This leads to the need for cost-based page replacement.

Client Cache Management
One cost-based page replacement strategy, PIX, replaces the page that has the lowest ratio between its probability of access (P) and its frequency of broadcast (X). PIX requires the following:
1. Perfect knowledge of access probabilities.
2. Comparison of PIX values for all cache-resident pages at replacement time.

Example
Suppose one page is accessed 1% of the time and is also broadcast 1% of the time, giving it a PIX value of 0.01 / 0.01 = 1. A second page is accessed only 0.5% of the time but is broadcast 0.1% of the time, giving it a PIX value of 0.005 / 0.001 = 5. Since the first page has the lower PIX value, it is replaced in favor of the second.
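To make the policy concrete, here is a minimal Python sketch of PIX eviction using the numbers from this example (the dict-based page representation is illustrative, not from the original system):

```python
def pix(page):
    """PIX value: probability of access divided by frequency of broadcast."""
    return page["p"] / page["x"]

def evict_victim(cache):
    """Return the cached page with the lowest PIX value, i.e. the one to replace."""
    return min(cache, key=pix)

# Pages from the example: page 1 has PIX = 0.01/0.01 = 1,
# page 2 has PIX = 0.005/0.001 = 5.
cache = [
    {"name": "page1", "p": 0.01, "x": 0.01},
    {"name": "page2", "p": 0.005, "x": 0.001},
]
print(evict_victim(cache)["name"])  # -> page1 (lowest PIX, so it is replaced)
```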

Client Cache Management
Another page replacement strategy adds the frequency of broadcast to an LRU-style policy. This policy is known as LIX. LIX maintains a separate list of cache-resident pages for each logical disk. A page enters the chain corresponding to the disk on which it is broadcast. Each list is ordered based on an approximation of the access probability (L) for each page.

Cont.
When a page is hit, it is moved to the top of its chain. When a new page enters the cache, a LIX value is computed for the page at the bottom of each chain by dividing its L value by X, its frequency of broadcast. The page with the lowest LIX value is replaced.
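Below is a simplified Python sketch of the chain-per-disk bookkeeping described above. The slides describe L only as an approximation of access probability, so this sketch takes it as given (the caller supplies the estimate), and it assumes one broadcast frequency X per disk:

```python
from collections import deque

class LIXCache:
    """Simplified LIX: one LRU-style chain per logical disk."""

    def __init__(self, capacity, num_disks, broadcast_freq):
        self.capacity = capacity
        self.chains = [deque() for _ in range(num_disks)]  # index 0 = top of chain
        self.broadcast_freq = broadcast_freq  # X for each disk (assumed per-disk)
        self.l_value = {}                     # page -> L, estimated access probability

    def _size(self):
        return sum(len(c) for c in self.chains)

    def access(self, page, disk, l_estimate):
        self.l_value[page] = l_estimate
        chain = self.chains[disk]
        if page in chain:                # hit: move the page to the top of its chain
            chain.remove(page)
            chain.appendleft(page)
            return
        if self._size() >= self.capacity:
            self._evict()
        chain.appendleft(page)           # new page enters its disk's chain

    def _evict(self):
        # Compute LIX = L / X for the bottom page of each non-empty chain,
        # then evict the page with the lowest LIX value.
        candidates = [
            (self.l_value[chain[-1]] / self.broadcast_freq[disk], disk)
            for disk, chain in enumerate(self.chains) if chain
        ]
        _, disk = min(candidates)
        victim = self.chains[disk].pop()
        del self.l_value[victim]

cache = LIXCache(capacity=2, num_disks=2, broadcast_freq=[0.5, 0.1])
cache.access("a", disk=0, l_estimate=0.3)
cache.access("b", disk=1, l_estimate=0.2)
cache.access("c", disk=0, l_estimate=0.4)  # evicts "a": its L/X = 0.6 is the lowest
```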

Prefetching
PIX and LIX apply only to demand-driven pages. Prefetching is an alternative approach to obtaining pages from the broadcast. The goal is to improve the response time of clients that access data from the broadcast. Methods of prefetching: Tag Team Caching and the Prefetching Heuristic.

Prefetching
Tag Team Caching: pages continually replace each other in the cache. For example, with two pages x and y being broadcast, the client caches x as it arrives on the broadcast, then drops x and caches y when y arrives on the broadcast.
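A toy sketch of this alternation, assuming the broadcast is modeled as a repeating list of page names (all names illustrative):

```python
def tag_team(broadcast, pages_of_interest):
    """Yield the contents of a one-slot cache after each broadcast slot.

    With one slot and two pages of interest, the cached page is always the one
    most recently seen on the broadcast: x replaces y, then y replaces x.
    """
    slot = None
    for page in broadcast:
        if page in pages_of_interest:
            slot = page          # drop the old page, cache the new arrival
        yield slot

# Two rotations of a broadcast containing pages x and y (plus others):
rotation = ["a", "x", "b", "c", "y", "d"]
print(list(tag_team(rotation * 2, {"x", "y"})))
```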

Expected delay in demand-driven caching
Suppose a client is interested in accessing pages x and y, with Px = Py = 0.5 and a single cache slot. Under demand-driven caching, the client caches x; if it then needs y, it waits for y on the broadcast and replaces x with y. The expected delay on a cache miss is 1/2 of a rotation of the disk. The expected delay over all accesses is the sum over all pages i of Ci * Mi * Di, where Ci is the access probability, Mi is the probability of a cache miss, and Di is the expected broadcast delay for page i. For pages x and y, this is 0.5 * 0.5 * 0.5 + 0.5 * 0.5 * 0.5 = 0.25.

Expected delay in Tag Team Caching
0.5 * 0.5 * 0.25 + 0.5 * 0.5 * 0.25 = 0.125; that is, the average cost is 1/2 that of the demand-driven scheme. Why: under demand-driven caching a miss can occur at any point in the broadcast, whereas with tag team caching a miss can only occur during the half of the broadcast before the needed page arrives, so the expected delay on a miss is only 1/4 of a rotation.
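The two figures can be checked with a few lines of code; this sketch simply evaluates the Ci * Mi * Di sum for both schemes, using the values above and measuring delay in rotations:

```python
def expected_delay(pages):
    """Sum of C_i * M_i * D_i over all pages i."""
    return sum(c * m * d for (c, m, d) in pages)

# (access probability C, miss probability M, expected broadcast delay D)
demand_driven = [(0.5, 0.5, 0.5), (0.5, 0.5, 0.5)]
tag_team      = [(0.5, 0.5, 0.25), (0.5, 0.5, 0.25)]

print(expected_delay(demand_driven))  # 0.25
print(expected_delay(tag_team))       # 0.125, half the demand-driven cost
```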

Prefetching
The Simple Prefetching Heuristic performs a calculation for each page that arrives on the broadcast, based on the probability of access for the page (P) and the amount of time that will elapse before the page comes around again (T). If the PT value of the page being broadcast is higher than the lowest PT value among the pages in the cache, then that cached page is replaced.
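A minimal sketch of one step of this heuristic, assuming P comes from a local estimate and T from the known broadcast schedule; recomputing the cached pages' PT values as time passes is omitted for brevity:

```python
def pt_step(cache, arriving, p, t):
    """One step of the PT prefetching heuristic.

    cache    : dict mapping page name -> its current PT value
    arriving : name of the page currently on the broadcast
    p, t     : access probability and time until the page's next broadcast
    """
    pt = p * t
    victim = min(cache, key=cache.get)   # cached page with the lowest PT value
    if pt > cache[victim]:
        del cache[victim]                # replace it with the arriving page
        cache[arriving] = pt
    return cache

cache = {"a": 0.02, "b": 0.10}
pt_step(cache, "x", p=0.05, t=1.0)       # PT = 0.05 > 0.02, so "a" is replaced
print(cache)                              # {'b': 0.10, 'x': 0.05}
```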
