Client Cache Management

Tuning the broadcast for one access probability distribution will hurt the performance of clients with different access distributions. Client machines therefore need to cache pages obtained from the broadcast.
With traditional caching, clients cache the data most likely to be accessed in the future. With Broadcast Disks, traditional caching may lead to poor performance if the server's broadcast is poorly matched to the client's access distribution.
In the Broadcast Disk system, clients instead cache the pages for which the local probability of access is higher than the frequency of broadcast. This leads to the need for cost-based page replacement.
One cost-based page replacement strategy replaces the page that has the lowest ratio between its probability of access (P) and its frequency of broadcast (X); this ratio is called PIX. PIX requires the following:
1. Perfect knowledge of access probabilities.
2. Comparison of PIX values for all cache-resident pages at cache replacement time.
Example: one page is accessed 1% of the time and is also broadcast 1% of the time, giving a PIX value of 1/1 = 1. A second page is accessed only 0.5% of the time but is broadcast only 0.1% of the time, giving a PIX value of 0.5/0.1 = 5. Which page should be replaced? The first, since its PIX value is lower: it is replaced in favor of the second.
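The victim selection in this example can be sketched in a few lines. The page names and probability values below are illustrative, not from the original material.

```python
def pix(p_access, x_broadcast):
    """PIX value: ratio of access probability to broadcast frequency."""
    return p_access / x_broadcast

# Cache-resident pages: (access probability P, broadcast frequency X)
cache = {
    "page1": (0.010, 0.010),  # PIX = 1.0
    "page2": (0.005, 0.001),  # PIX = 5.0
}

# PIX replaces the cache-resident page with the lowest PIX value.
victim = min(cache, key=lambda pg: pix(*cache[pg]))
print(victim)  # page1
```

Even though page1 is accessed twice as often as page2, it is broadcast ten times as frequently, so it is the cheaper page to refetch and becomes the victim.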
Another page replacement strategy adds the frequency of broadcast to an LRU-style policy. This policy is known as LIX. LIX maintains a separate list of cache-resident pages for each logical disk. A page enters the chain corresponding to the disk on which it is broadcast. Each list is ordered by an approximation of the access probability (L) for each page.
When a page is hit, it is moved to the top of its chain. When a new page must enter the cache, a LIX value is computed for the page at the bottom of each chain by dividing its L value by X, the frequency of broadcast for that page. The page with the lowest LIX value is replaced.
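A minimal sketch of the LIX bookkeeping, assuming a fixed broadcast frequency per disk and a crude increment-on-hit estimate of L (the paper's actual L estimator is a running approximation; the class, method names, and update rule here are simplifications for illustration):

```python
from collections import OrderedDict

class LIXCache:
    """One LRU-style chain per logical disk; evict the chain-bottom page
    with the lowest LIX = L / X across all chains."""

    def __init__(self, capacity, disk_frequency):
        self.capacity = capacity
        self.freq = disk_frequency  # disk id -> broadcast frequency X
        # Each chain maps page -> L estimate; first item = bottom, last = top.
        self.chains = {d: OrderedDict() for d in disk_frequency}

    def _size(self):
        return sum(len(c) for c in self.chains.values())

    def access(self, page, disk):
        chain = self.chains[disk]
        if page in chain:                      # hit: bump L, move to top
            chain[page] = chain.pop(page) + 1.0
            return
        if self._size() >= self.capacity:      # miss with full cache: evict by LIX
            bottoms = [(next(iter(c.values())) / self.freq[d], d)
                       for d, c in self.chains.items() if c]
            _, victim_disk = min(bottoms)
            self.chains[victim_disk].popitem(last=False)
        chain[page] = 1.0                      # new page enters its disk's chain

cache = LIXCache(capacity=2, disk_frequency={1: 0.5, 2: 0.1})
for page, disk in [("a", 1), ("b", 2), ("a", 1), ("c", 2)]:
    cache.access(page, disk)
# When "c" arrives, disk 1's bottom page "a" has LIX 2.0/0.5 = 4.0, lower than
# "b"'s 1.0/0.1 = 10.0, so "a" is evicted even though it was accessed more often.
```

The point of the per-disk chains is that only one LIX computation per chain (at the bottom of each) is needed at replacement time, unlike PIX, which compares all cache-resident pages.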
Prefetching

PIX and LIX are demand-driven: they fetch pages only when requested. Prefetching is an alternative approach to obtaining pages from the broadcast. The goal is to improve the response time of clients that access data from the broadcast. Methods of prefetching: Tag-Team Caching and a simple Prefetching Heuristic.
Tag-Team Caching: pages continually replace each other in the cache. For example, with two pages x and y being broadcast, the client caches x as it arrives on the broadcast, then drops x and caches y when y arrives on the broadcast.
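The alternation can be sketched as a tiny trace over broadcast slots; the broadcast order and page names below are assumptions for illustration.

```python
def tag_team(broadcast, pages=("x", "y")):
    """Yield cache contents after each broadcast slot: always hold whichever
    page of interest was broadcast most recently."""
    cached = None
    for slot in broadcast:
        if slot in pages:
            cached = slot  # drop the previously cached page, cache the arrival
        yield cached

trace = list(tag_team(["x", "a", "b", "y", "c", "x"]))
print(trace)  # ['x', 'x', 'x', 'y', 'y', 'x']
```

The cache is never left holding a stale copy longer than necessary: as soon as the other page of interest comes around on the broadcast, it takes over the slot.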
Expected Delay in the Demand-Driven Scheme

Suppose a client is interested in accessing pages x and y, with Px = Py = 0.5, and has a single cache slot. In the demand-driven scheme, the client caches x; when it needs y, it waits for y on the broadcast and replaces x with y in the cache. The expected delay on a cache miss is 1/2 of a rotation of the disk. The expected delay over all accesses is the sum of Ci * Mi * Di over all pages i, where Ci is the access probability, Mi is the probability of a cache miss, and Di is the expected broadcast delay for page i. For pages x and y this gives 0.5*0.5*0.5 + 0.5*0.5*0.5 = 0.25 of a rotation.
Expected Delay in Tag-Team Caching

Here the expected delay is 0.5*0.5*0.25 + 0.5*0.5*0.25 = 0.125 of a rotation, i.e. the average cost is 1/2 that of the demand-driven scheme. Why: in the demand-driven scheme a miss can occur at any point in the broadcast, whereas with tag-team caching misses can only occur during the half of the broadcast before the needed page arrives, so the expected wait on a miss is only 1/4 of a rotation.
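The two calculations above can be checked mechanically; the probabilities and per-miss delays are exactly the ones given in the example.

```python
def expected_delay(pages):
    """Sum of C_i * M_i * D_i over pages given as (access prob, miss prob,
    expected broadcast delay on a miss, in rotations)."""
    return sum(c * m * d for c, m, d in pages)

# Demand-driven: a miss waits 1/2 rotation on average, for either page.
demand = expected_delay([(0.5, 0.5, 0.5), (0.5, 0.5, 0.5)])

# Tag-team: misses only occur in the half-rotation before the page arrives,
# so the expected wait on a miss is 1/4 rotation.
tag_team = expected_delay([(0.5, 0.5, 0.25), (0.5, 0.5, 0.25)])

print(demand, tag_team)  # 0.25 0.125
```

The miss probability Mi is 0.5 in both schemes (one cache slot, two equally likely pages); only the delay term Di changes, which is why the overall cost halves.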
Simple Prefetching Heuristic: performs a calculation for each page that arrives on the broadcast, based on the probability of access for the page (P) and the amount of time that will elapse before the page comes around again (T). If the PT value of the page being broadcast is higher than the lowest PT value among the pages in the cache, the cached page with the lowest PT value is replaced.
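A sketch of the PT test applied to one arriving page; the function name, cache contents, and P and T values are illustrative assumptions, not from the original material.

```python
def maybe_prefetch(cache, arriving_page, p, t):
    """cache maps page -> (P, T). If the arriving page's PT value exceeds
    the lowest PT value in the cache, replace that cached page."""
    lowest = min(cache, key=lambda pg: cache[pg][0] * cache[pg][1])
    if p * t > cache[lowest][0] * cache[lowest][1]:
        del cache[lowest]
        cache[arriving_page] = (p, t)
        return lowest   # page that was replaced
    return None         # cache unchanged

cache = {"a": (0.02, 10), "b": (0.01, 5)}        # PT: a = 0.2, b = 0.05
replaced = maybe_prefetch(cache, "c", p=0.03, t=4)  # PT(c) = 0.12 > 0.05
print(replaced)  # b
```

Note that T shrinks as a page's next broadcast approaches, so a page's PT value drifts over time: a page about to come around again is cheap to drop, since refetching it soon costs little.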