1
Least Popularity-per-Byte Replacement Algorithm for a Proxy Cache Kyungbaek Kim and Daeyeon Park. Korea Advanced Institute of Science and Technology (KAIST) Eighth International Conference on Parallel and Distributed Systems
2
Outline Introduction Related work Least popularity-per-byte replacement algorithm Performance evaluation Conclusion
3
Introduction The correlation between online recency parameters and object popularity in the proxy cache is weakened by efficient client caches. This paper uses long-term measurements of request frequency as the popularity value.
4
Least popularity-per-byte replacement algorithm (LPPB-R) It is a function-based algorithm. The goal of LPPB-R is to make the popularity per byte of the outgoing (evicted) objects minimal.
5
Least popularity-per-byte replacement algorithm (LPPB-R) (cont.) How the popularity value is set determines the performance of the LPPB-R algorithm. Two options: Using the reference count directly. Using the reference count as the power term of an impact factor.
6
Some other considerations in the LPPB-R algorithm Using multiple queues to manage objects, to decrease the complexity of calculation. It also considers the problem of cache pollution.
7
Related work The classification of replacement algorithms: Traditional: LRU, LFU, and FIFO. Key-based: LFF and LOG2SIZE. Function-based: GDS, Hybrid, LRV, SA-LRU.
8
Least popularity-per-byte replacement algorithm The overview of LPPB-R: U(j) = P(j) / S(j) P(j): the popularity value of object j S(j): the size of object j U(j): the popularity value per byte
9
Getting the popularity value Two models to get the popularity value: P(j) = R(j) / T, where R(j) is the reference count of object j and T is the total number of requests through the proxy cache. P(j) = (1/β)^R(j), (0 < β < 1), where β is the impact factor.
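The two popularity models and the per-byte utilization U(j) = P(j)/S(j) can be sketched as below; the object names, sizes, and the example β value are illustrative assumptions, not from the paper.

```python
def popularity_direct(ref_count, total_requests):
    # Model 1: P(j) = R(j) / T, the reference count normalized by total requests
    return ref_count / total_requests

def popularity_impact(ref_count, beta=0.4):
    # Model 2: P(j) = (1/beta)^R(j), with impact factor 0 < beta < 1
    return (1.0 / beta) ** ref_count

def utilization(popularity, size_bytes):
    # U(j) = P(j) / S(j): popularity value per byte
    return popularity / size_bytes

# Eviction picks the object with the minimum U(j) (least popularity per byte).
# Hypothetical cache contents: name -> (reference count R, size S in bytes)
objects = {"a.html": (3, 2048), "b.jpg": (1, 65536)}
victim = min(
    objects,
    key=lambda o: utilization(popularity_impact(objects[o][0]), objects[o][1]),
)
```

Here the large, rarely referenced `b.jpg` has the smaller popularity per byte, so it is the eviction candidate.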
10
Managing the objects The LPPB-R has heavy overhead when calculating the utilization values: the operation needs O(k) time, where k is the number of objects in the cache. It uses multiple queues to decrease the order of complexity of the calculation.
11
Multi queues The ith queue manages the objects whose sizes range from 2^(i-1) to 2^i - 1. Thus there are about log2(M) different queues, where M is the cache size. The objects in each queue i are maintained as a separate LFU list. This decreases the order of complexity from O(k) to O(log M).
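The size-to-queue mapping above can be sketched as follows; since queue i covers sizes 2^(i-1) through 2^i - 1, the index is simply the bit length of the size, and only about log2(M) queues exist for a cache of size M.

```python
def queue_index(size_bytes):
    # Objects with size in [2^(i-1), 2^i - 1] belong to queue i,
    # which is exactly the bit length of the size.
    return size_bytes.bit_length()

# Because each queue is kept in LFU order, only the head object of each
# of the ~log2(M) queues must be examined to find the eviction victim,
# giving O(log M) work instead of O(k) over all cached objects.
```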
12
Multi Queues (cont.)
13
Avoiding the cache pollution phenomenon It uses an LRU list alongside each LFU list to avoid cache pollution, checking the LRU list periodically.
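A minimal sketch of the periodic LRU check: each queue keeps an LRU-ordered structure next to its LFU list, and a periodic sweep expels objects that have been idle too long, so once-popular but stale objects cannot pollute the cache. The `max_idle` threshold and the expel-on-sweep policy are assumptions for illustration; the paper's exact demotion rule may differ.

```python
from collections import OrderedDict

class AntiPollutionList:
    """LRU companion list for one LFU queue (hypothetical sketch)."""

    def __init__(self, max_idle=3600.0):
        self.lru = OrderedDict()  # key -> last access time, oldest first
        self.max_idle = max_idle  # seconds an object may stay unreferenced

    def touch(self, key, now):
        # On each reference, move the object to the MRU end.
        self.lru.pop(key, None)
        self.lru[key] = now

    def sweep(self, now):
        # Periodic check of the LRU list: drop objects idle longer
        # than max_idle so stale objects do not pollute the cache.
        stale = [k for k, t in self.lru.items() if now - t > self.max_idle]
        for k in stale:
            del self.lru[k]
        return stale
```

For example, with `max_idle=10`, an object last touched at time 0 is expelled by a sweep at time 12, while one touched at time 5 survives.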
14
Avoiding the cache pollution phenomenon (cont.)
15
Performance evaluation The traces are from the pb and bo2 proxy servers of NLANR.
16
Performance metrics and algorithms It considers three aspects of web caching benefits: hit rate, byte hit rate, and reduced latency. It compares the performance of LPPB-R with LRU, LFU, LOG2SIZE, and SA-LRU.
17
Hit rate in bo2 server
18
Hit rate in pb server
19
Byte hit rate in bo2 server
20
Byte hit rate in pb server
21
Reduced latency in bo2 server
22
Reduced latency in pb server
23
Conclusion If β is set in the range from 0.3 to 0.5, LPPB-R achieves the best hit rate. On the other hand, the closer β is to zero, the better the cache performs in byte hit rate and reduced latency.