Proxy Caching for Multimedia Objects
A project for the Multimedia course, by Amir Nayyeri
Overview
- Introduction
- Video Caching: overview of current algorithms
- My Work
- Other Interesting Subjects: prefetching, distributed proxies
Introduction
What can a proxy do?
- Reduce the delay for users.
- Reduce the load on the backbone.
How?
- Prefetching
- Caching popular requests
Video Stream Caching
Video objects vs. web objects:
- High data rate, yet adaptive.
- Huge volume: one hour of MPEG-1 is about 675 MB.
- Long playback duration.
- Various interactions: random access, early termination.
Video Stream Caching
Video objects vs. web objects:
- Caching the entire object is not feasible.
- Media streams are not required to be delivered all at once.
- We can change the bandwidth of a media stream, trading it for lower quality.
Video Stream Caching
Three primary ideas:
- Sliding interval caching
- Prefix caching
- Segment caching
Video Stream Caching
Sliding interval caching (Dan et al., 1996):
- Maintain the blocks of a stream after servicing a request.
- Tries to benefit from similar requests that arrive within a short period of time, e.g. a second request r2 trailing a first request r1 by a few blocks.
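The sliding-interval idea can be sketched as follows; the class and method names are illustrative, not from the paper. The leader request fills a window of recently streamed blocks, and a follower request that arrives shortly afterwards is served from that window:

```python
from collections import deque

class SlidingIntervalCache:
    """Sketch of sliding-interval caching (after Dan et al., 1996).

    When a second request for the same video trails the first by a gap of
    `interval_len` blocks, only that interval needs to stay cached: the
    leader fills the window, the follower drains it.
    """

    def __init__(self, interval_len):
        self.interval_len = interval_len  # gap (in blocks) between the two requests
        self.window = deque()             # blocks currently held for the follower

    def leader_reads(self, block):
        """Leader fetched `block` from the server; retain it for the follower."""
        self.window.append(block)
        # Keep at most interval_len blocks; older ones were already consumed.
        while len(self.window) > self.interval_len:
            self.window.popleft()

    def follower_reads(self):
        """Follower consumes the oldest cached block (a cache hit)."""
        return self.window.popleft() if self.window else None
```

With an interval of two blocks, an interleaved leader/follower pair never touches the server twice for the same block.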
Video Stream Caching
Prefix caching (Sen et al., 1999):
- Cache the initial frames of the stream; many users terminate a video before reaching its end.
- The size of the video prefix depends on the path between the server and the proxy, and on the tolerable client playback delay.
- Also aids bandwidth smoothing.
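As a rough illustration of how the prefix size depends on the server path and the client playback delay (a back-of-the-envelope assumption, not the exact formula from Sen et al.): the cached prefix must cover the playback time that elapses before suffix data starts arriving from the server.

```python
def prefix_size_bytes(rate_bps, server_delay_s, client_delay_s):
    """Illustrative prefix sizing.

    rate_bps: stream bit rate; server_delay_s: time until suffix data
    arrives from the server; client_delay_s: startup delay the client
    tolerates. The prefix must cover the remaining gap in playback time.
    """
    cover_s = max(server_delay_s - client_delay_s, 0.0)
    return cover_s * rate_bps / 8.0
```

For a 1.5 Mbps MPEG-1 stream, a 2 s server path, and a 0.5 s tolerable startup delay, this yields a prefix of about 281 KB covering 1.5 s of playback.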
Video Stream Caching
Segment caching (Wu et al., 2001):
- Blocks of a media object are grouped into variable-sized, distance-sensitive segments.
- Two LRU stacks are maintained: one for initial segments, one for later segments.
- Provides better facilities for the replacement mechanism.
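A minimal sketch of the two-stack arrangement, assuming plain LRU in each stack (the capacities and the split into "initial" vs. "later" are invented for illustration): keeping the stacks separate means a burst of later-segment traffic can never evict every video's head.

```python
from collections import OrderedDict

class TwoStackSegmentCache:
    """Sketch of segment caching with separate LRU stacks (after Wu et al., 2001)."""

    def __init__(self, cap_initial, cap_later):
        # Each stack is an LRU list with its own capacity.
        self.stacks = {
            "initial": (OrderedDict(), cap_initial),
            "later": (OrderedDict(), cap_later),
        }

    def access(self, seg_id, is_initial):
        stack, cap = self.stacks["initial" if is_initial else "later"]
        if seg_id in stack:
            stack.move_to_end(seg_id)      # LRU refresh on a hit
        else:
            stack[seg_id] = True           # admit the new segment
            if len(stack) > cap:
                stack.popitem(last=False)  # evict the least recently used
```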
Video Stream Caching
Common assumptions so far:
- Continuous playback: no interaction with the users.
- Homogeneous clients: identical access bandwidth.
- Time partitioning only.
- Non-adaptive caching.
Video Stream Caching
Heterogeneous environments:
- Users issue different request types, from cell phones to PCs.
- Different formats of the same object may be requested.
- Can benefit from transcoding or from layered coding.
Video Stream Caching
Rejaie et al. (2000) proposed the following:
- On the first request for an object, cache it as it is received from the server.
- On later requests, try to refine the cached copy.
Video Stream Caching
Rejaie et al. (2000), replacement strategy:
- If the cache is full, identify a victim video and flush its cached segments step by step, as presented in the original figure.
Video Stream Caching
Tang et al. (2002) proposed the following:
- Benefit from transcoding; for simplicity, transcode only from the full version.
- FVO (Full Version Only): fetch only the full version and transcode whenever another format is required. Cost: high CPU load.
- TVO (Transcoded Version Only): fetch every transcoded format from the server. Cost: high network load.
- Tradeoff: use FVO with probability p and TVO with probability 1 - p, adjusting p according to the history of the requests.
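The FVO/TVO tradeoff can be sketched as follows; the load-based rule for adjusting p is an invented example, since the slides only say that p follows the request history:

```python
import random

def choose_policy(p, rng=random.random):
    """Pick FVO with probability p, TVO with probability 1 - p."""
    return "FVO" if rng() < p else "TVO"

def adjust_p(p, cpu_load, net_load, step=0.05):
    """Illustrative heuristic: FVO costs CPU (transcoding), TVO costs network.

    Nudge p away from whichever resource is currently the bottleneck.
    """
    if cpu_load > net_load:
        return max(0.0, p - step)  # transcoding too expensive: favor TVO
    return min(1.0, p + step)      # network too expensive: favor FVO
```

Over many requests the proxy drifts toward whichever policy relieves its scarcer resource, which is the spirit of the history-based adjustment.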
My Work
- Supports user interactions (random access).
- Considers segments, rather than entire objects, as the caching blocks.
- Simple version: optimize the utility function subject to a limited initial tolerable delay D and proxy-to-server bandwidth B.
- Utility function: U = (L - r1)^2 + (L - r2)^2 + … + (L - rn)^2, where ri is the number of layers of the segment sent to the client for request i and L is the total number of layers.
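A minimal sketch of evaluating U. Note that each term is the squared shortfall from the full L layers, so U measures quality loss and a smaller value indicates better service:

```python
def utility(L, served_layers):
    """U = sum_i (L - r_i)^2, where r_i is the number of layers delivered
    for request i and L is the total number of layers."""
    return sum((L - r) ** 2 for r in served_layers)
```

For example, with L = 4 layers and three requests served with 4, 3, and 2 layers, U = 0 + 1 + 4 = 5; the squaring penalizes one badly degraded request more than several slightly degraded ones.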
My Work
- The caching system is split into two main components: the Queue Manager (QM) and the Request Manager (RM).
- The Request Manager forwards requests destined for the main server to the Queue Manager.
- The Queue Manager tries to fetch the more vital requests from the server sooner.
Overall system design (diagram): Users <-> Request Manager (RM) <-> Queue Manager (QM) <-> Main Server.
Request Manager
- Maintains a vector of recently referenced segments.
- The segments are sorted according to an estimate of the probability of future accesses.
- The vector is divided into N parts, P1, …, PN; part Pi contains segments cached up to the ith layer.
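A sketch of the partitioned vector, assuming a simple access count as the popularity estimate and assuming the top part holds the most-layered segments; both are assumptions, since the slides do not fix these details:

```python
class PartitionedVector:
    """Sketch of the Request Manager's vector: segments ranked by an
    estimated future-access probability; the top part of the vector
    caches the most layers, the bottom part the fewest."""

    def __init__(self, n_layers, part_sizes):
        self.n_layers = n_layers
        self.part_sizes = part_sizes  # slots per part, most-layered part first
        self.counts = {}              # seg_id -> access count (popularity proxy)

    def access(self, seg_id):
        self.counts[seg_id] = self.counts.get(seg_id, 0) + 1

    def layers_for(self, seg_id):
        """How many layers this segment should have cached, given its rank."""
        ranked = sorted(self.counts, key=self.counts.get, reverse=True)
        rank = ranked.index(seg_id)
        layer = self.n_layers
        for size in self.part_sizes:   # walk parts from most to fewest layers
            if rank < size:
                return layer
            rank -= size
            layer -= 1
        return 0                       # fell off the vector: not cached
```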
Request Manager
Upon a client request for a segment Seg:
- Update the access record of Seg and reposition it in the vector.
- Send requests for the layers not yet cached to the QM.
- Wait D.
- Send the ready video to the client.
- Start prefetching the future segments, if they are not already in the cache.
Queue Manager
Three types of requests:
- Time limit: must be answered within a limited time (extra layers to cache).
- Time limit, not needed afterwards: like the first type, but useless once the time limit has passed (extra layers to show).
- Vital time limit: must be answered on time or the system encounters serious problems (base layer to show).
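One plausible way to schedule the three request types is a priority queue that serves vital requests first and silently drops expired show-only requests; the exact policy is an assumption, not given in the slides:

```python
import heapq

# Smaller value = served first (an assumed ordering over the three types).
VITAL, SHOW_EXTRA, CACHE_EXTRA = 0, 1, 2

class QueueManager:
    """Sketch of the Queue Manager: vital base-layer requests first,
    then deadline-bound extra-layers-to-show, then extra-layers-to-cache."""

    def __init__(self):
        self.heap = []

    def submit(self, req_type, deadline, seg_id):
        heapq.heappush(self.heap, (req_type, deadline, seg_id))

    def next_request(self, now):
        while self.heap:
            req_type, deadline, seg_id = heapq.heappop(self.heap)
            # Type-two requests are useless once their time limit passes.
            if req_type == SHOW_EXTRA and deadline < now:
                continue
            return seg_id
        return None
```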
Existing Challenges
- The replacement function: for now, only primary assumptions are made, e.g. using well-known policies such as LRU.
- How much of the vector should be assigned to each layer's storage?
- How can the method be extended to heterogeneous environments?
- Theoretical analysis of the algorithms' complexity, and of the exactness of the solution, would be really valuable.
- Possibly more later.
Other Interesting Subjects
- Distributed proxies
- Prefetching
Distributed Proxies
- Several proxies working together; they can be very far from each other.
- Two main models: a centralized manager, or completely distributed.
- Each client sends its request to the nearest proxy, which either serves it directly or fetches the required object from the main server or from other proxies.
Prefetching
A brief overview:
- Try to anticipate future requests from previous logs.
- See the work of Z. Su et al., "A Prediction System for Multimedia Pre-fetching in Internet", 2000.
Prefetching (cont.)
- Keep logs of user requests.
- Predict the probability of a particular request occurring among the next m requests, based on the previous n requests; call this the m-step n-gram model.
- Idea: different prediction methods from AI could be applied to obtain better results.
Prefetching (cont.)
Example (3-gram, 2-step). Log files:
A, B, C, J, E
A, B, C, E, F
B, C, D, K, A
B, C, D, K, B
B, C, D, F, L

Resulting probability model (n-gram prediction):
A, B, C -> E (100%)
B, C, D -> K (66%)
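The m-step n-gram model above can be sketched as follows; running it on the example logs reproduces the probabilities shown:

```python
from collections import defaultdict

def ngram_predictor(logs, n, m):
    """Build the m-step n-gram model: for every window of n consecutive
    requests, count which objects occur within the next m requests, and
    turn the counts into probabilities."""
    occurrences = defaultdict(int)                      # context -> times seen
    followers = defaultdict(lambda: defaultdict(int))   # context -> object -> count
    for log in logs:
        for i in range(len(log) - n):
            ctx = tuple(log[i:i + n])
            occurrences[ctx] += 1
            # Count each object at most once per window occurrence.
            for obj in set(log[i + n:i + n + m]):
                followers[ctx][obj] += 1
    return {ctx: {obj: c / occurrences[ctx] for obj, c in objs.items()}
            for ctx, objs in followers.items()}
```

On the five log files from the slides, the context (A, B, C) is followed by E within two steps in both of its occurrences (100%), and (B, C, D) is followed by K in two of its three occurrences (66%).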
Thanks. Any questions?
References:
[1] J. Liu and J. Xu, "A Survey of Streaming Media Caching", Department of Computer Science, The Chinese University of Hong Kong.
[2] S. Sen, J. Rexford, and D. Towsley, "Proxy prefix caching for multimedia streams", in Proc. IEEE INFOCOM'99, New York, NY, March 1999.
[3] S. Chen, B. Shen, S. Wee, and X. Zhang, "Adaptive and lazy segmentation based proxy caching for streaming media delivery", in Proc. NOSSDAV'03, Monterey, CA, June 2003.
[4] R. Tewari, H. M. Vin, A. Dan, and D. Sitaram, "Resource-based caching for Web servers", in Proc. SPIE/ACM Conf. on Multimedia Computing and Networking (MMCN'98), San Jose, CA, January 1998.
[5] J. M. Almeida, D. L. Eager, and M. K. Vernon, "A hybrid caching strategy for streaming media files", in Proc. Multimedia Computing and Networking (MMCN'01), San Jose, CA, January 2001.
[6] J. Kangasharju, F. Hartanto, M. Reisslein, and K. W. Ross, "Distributing layered encoded video through caches", IEEE Trans. Computers, 51(6), June 2002.
[7] R. Rejaie, H. Yu, M. Handley, and D. Estrin, "Multimedia proxy caching mechanism for quality adaptive streaming applications in the Internet", in Proc. IEEE INFOCOM'00, Tel Aviv, Israel, March 2000.
[8] J. Liu, X. Chu, and J. Xu, "Proxy Cache Management for Fine-Grained Scalable Video Streaming", in Proc. IEEE INFOCOM'04, Hong Kong, March 2004.
[9] S. Podlipnig and L. Boszormenyi, "A Survey of Web Cache Replacement Strategies", ACM Computing Surveys, December 2003.
[10] X. Tang, F. Zhang, and S. T. Chanson, "Streaming Media Caching Algorithms for Transcoding Proxies", in Proc. International Conference on Parallel Processing, 2002.
[11] S. Acharya and B. C. Smith, "Middleman: A video caching proxy server", in Proc. NOSSDAV'00, June 2000.
[12] K. C. Tsui, J. Liu, and M. J. Kaiser, "Self-Organized Load Balancing in Proxy Servers: Algorithms and Performance", Journal of Intelligent Information Systems, Vol. 20, Issue 1, January 2003.
[13] B. D. Davison, "Learning Web Request Patterns", 2004.
[14] Z. Su, Q. Yang, and H. Zhang, "A Prediction System for Multimedia Pre-fetching in Internet", in Proc. 8th ACM International Conference on Multimedia, 2000.