Multimedia Proxy Caching Mechanism for Quality Adaptive Streaming Applications in the Internet, by Reza Rejaie, Haobo Yu, Mark Handley, and Deborah Estrin.

Presentation transcript:

Multimedia Proxy Caching Mechanism for Quality Adaptive Streaming Applications in the Internet. Reza Rejaie, Haobo Yu, Mark Handley, and Deborah Estrin. AT&T Labs - Research, USC/ISI, ACIRI at ICSI. IEEE INFOCOM 2000.

- this paper proposes a proxy caching mechanism for layered-encoded multimedia streams in the Internet, to maximize the delivered quality of popular streams to interested clients
- a proxy cache resides close to a group of clients
- requested streams are always delivered from the original servers through the proxy to the clients; thus, the proxy is able to intercept and cache these streams

- the proxy can significantly increase the delivered quality of popular streams to high-BW clients despite a bottleneck on its path to the original server
- the proxy can also reduce the startup delay, facilitate VCR-functionalities, and reduce the load on the server and the network
- compared to other enhancing approaches, such as mirror servers, proxy caches have lower storage and processing requirements

- recall the architecture of the layered quality adaptation mechanism without a proxy, shown in the slide's figure

- with the proxy, the overall system architecture is as follows:

[Figure: system architecture with the proxy; labeled components include congestion control, acker, cache controller, quality adaptation, the cache holding Layer 0 through Layer 3, prefetching requests, and the available BW.]

- to maximize the delivered quality to clients while obeying congestion-controlled rate limits, streaming applications should match the quality of the delivered stream to the average available BW on the path
- thus the quality of a cached stream will depend on the available BW (on the path between the server and the proxy) seen by the first client that retrieved the stream

- however, the quality variations of the cached stream are not correlated with the quality variations required by quality adaptation (on the proxy-client path) during a new playback
- this means that the quality adaptation module at the proxy may attempt to send data that do not exist in the cache (the missing data may have been dropped because of congestion or by the quality adaptation module of the server)
- this requires data prefetching by the proxy, during idle time or while delivering a higher-quality flow

- note that performing data prefetching for a cached stream can be viewed as adjusting the quality of that stream
- to allow fine-grain adjustment, each layer of the encoded stream is divided into equal-sized pieces called "segments"
- the proxy prefetches segments that are required by quality adaptation along the proxy-client path but are missing from the cache (see the sketch below)
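- as an illustration only (not from the paper), a minimal Python sketch of how a layer could be divided into equal-sized segments and how the cache could track them; the segment size and per-layer rate are taken from the simulation setup later in the slides, while the data structure itself is an assumption:

    # Sketch only: equal-sized segments and a constant per-layer consumption
    # rate; the 1 KB segments and 2.5 KB/s per-layer rate match the paper's
    # simulation setup, but this data structure is purely illustrative.
    SEGMENT_SIZE = 1024          # bytes per segment
    LAYER_RATE = 2560            # bytes consumed per second, per layer

    def segment_id(playout_time):
        """Map a playout time (seconds) to a segment index within a layer."""
        return int(playout_time * LAYER_RATE) // SEGMENT_SIZE

    class CachedStream:
        """Per-stream cache state: one set of cached segment IDs per layer."""
        def __init__(self, num_layers=8):
            self.layers = [set() for _ in range(num_layers)]

        def has(self, layer, seg):
            return seg in self.layers[layer]

        def add(self, layer, seg):
            self.layers[layer].add(seg)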

- the main contributions of this paper are novel prefetching and fine-grain cache replacement algorithms for multimedia proxy caching
- the interaction of the two algorithms causes the state of the cache to converge to an "efficient" state, in which the quality of a cached stream is proportional to its popularity and its quality variations are inversely proportional to its popularity

Main Assumptions:
1. the Rate Adaptation Protocol (RAP) is used, without a packet retransmission mechanism
2. hierarchical (layered) encoding is used
3. all streams are linear-layered encoded, where all layers have the same constant BW (for simplicity)

Delivery Procedure:
- clients always send their requests for a particular stream to their corresponding proxy
- when a proxy receives a request, it checks the availability of the requested stream
- the rest of the delivery procedure differs for a cache miss and a cache hit

Relaying on a cache miss:
- if the requested stream is missing from the cache, the request is relayed to the original server
- the stream is played back from the server to the proxy over a RAP connection
- the proxy then relays the data packets toward the client through a separate RAP connection
- in the cache-miss scenario, the client does not observe any benefit from the presence of the proxy cache

Prefetching on a cache hit:
- on a cache hit, the proxy acts as a server and starts playing back the requested stream
- as a result, the client observes a shorter startup latency
- if there is a mismatch between the quality of the cached stream and the requirements of the proxy's quality adaptation module, the proxy should prefetch the missing segments from the server ahead of time

- two possible scenarios:
1. Playback AvgBW <= Stored AvgBW
2. Playback AvgBW > Stored AvgBW
- in both scenarios, all segments prefetched during a session are cached

[Figures: delivered vs. cached quality in the two scenarios, Playback AvgBW <= Stored AvgBW and Playback AvgBW > Stored AvgBW.]

Prefetching Mechanism: "sliding window"
- prefetching a segment from the server takes at least one RTT of the server-proxy path
- thus, the proxy must predict which missing segments may be required by the quality adaptation module in the near future
- quality adaptation adjusts the number of active layers according to random changes in the available BW, so the time of the next adjustment is not known a priori

- tradeoff: the earlier the proxy prefetches a missing segment, the less accurate the prediction, but the higher the chance of receiving the prefetched segment in time
- the server should deliver the requested segments according to their priority; otherwise the prefetching stream will fall behind the playback
- note that prefetched segments are always cached, even if they arrive after their playout times

- the proxy maintains the playout time of each active client
- at playout time t_p, the proxy examines the interval [t_p + T, t_p + T + Δ], called the "prefetching window" of the stream, and identifies all missing segments within this window
- the proxy then sends a single "prefetching request" that contains an ordered, prioritized list of all these missing segments

- to loosely synchronize the prefetching stream with the playback stream, after Δ seconds the proxy examines the next prefetching window and sends a new prefetching request to the server
- when the server receives a prefetching request, it starts sending the requested segments according to their priorities (layer numbers)
- a new prefetching request preempts the previous one: if the server receives a new prefetching request before it has delivered all segments of the previous request, it ignores the old request and starts delivering segments of the new request (this preemption keeps prefetching and playback proceeding at the same rate)
- the average quality improvement of a cached stream after each playback is determined by the average prefetching BW
- multiple prefetching sessions can be multiplexed (a sketch of the mechanism follows)
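- for illustration (assumed parameters and names, not the paper's code), a Python sketch of the sliding prefetching window and the preemptive handling of prefetching requests at the server; it reuses the CachedStream/segment_id helpers from the earlier sketch:

    # Sketch of the sliding-window prefetching mechanism (assumed parameters).
    T = 2.0        # lead time, at least one server-proxy RTT (assumed value)
    DELTA = 1.0    # width of the prefetching window / request period (assumed)

    def build_prefetch_request(cached, active_layers, t_p, segment_id):
        """Collect missing segments in [t_p + T, t_p + T + DELTA], ordered
        by layer number, i.e. by priority (lower layers first)."""
        first = segment_id(t_p + T)
        last = segment_id(t_p + T + DELTA)
        request = []
        for layer in range(active_layers):            # priority order
            for seg in range(first, last + 1):
                if not cached.has(layer, seg):
                    request.append((layer, seg))
        return request

    class PrefetchServer:
        """Server-side behavior: a new request preempts the previous one."""
        def __init__(self):
            self.pending = []

        def receive_request(self, request):
            # Drop whatever is left of the old request; this keeps the
            # prefetching stream loosely synchronized with the playback.
            self.pending = list(request)

        def next_segment(self):
            # Deliver requested segments in priority order, one at a time,
            # at whatever rate congestion control allows (not modeled here).
            return self.pending.pop(0) if self.pending else None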

Replacement Algorithm:
- goal: make the cache state converge to an "efficient" state
- the conditions of the "efficient" state:
1. the average quality of each cached stream is directly proportional to its popularity; furthermore, the average quality of the stream must converge to the average BW across the most recent playbacks
2. the quality variations of each cached stream are inversely proportional to its popularity

Replacement Pattern:
- layered-encoded streams are structured into separate layers, and each layer is further divided into segments with unique IDs
- as the popularity of a cached stream decreases, its quality (and consequently its size) is gradually reduced before it is completely flushed out
- once a victim layer is identified, its cached segments are flushed from the end

- if flushing all segments of the victim layer does not free sufficient space, the proxy identifies a new victim layer and repeats this process (see the sketch below)
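- a minimal Python sketch of this replacement pattern; the per-layer popularity bookkeeping and the byte accounting are assumptions made for illustration:

    # Sketch of the coarse-to-fine replacement pattern (assumed bookkeeping).
    SEGMENT_SIZE = 1024          # bytes per segment, as in the simulation setup

    def pick_victim_layer(cache):
        """Pick the cached layer with the lowest popularity; with layered
        encoding this is always the highest in-cache layer of some stream.

        `cache` maps stream_id -> stream, where each stream has a list of
        per-layer segment sets (`layers`) and a per-layer `popularity` list
        (assumed to be maintained by the proxy)."""
        candidates = []
        for stream_id, stream in cache.items():
            top = max((l for l, segs in enumerate(stream.layers) if segs),
                      default=None)
            if top is not None:
                candidates.append((stream.popularity[top], stream_id, top))
        if not candidates:
            return None
        _, stream_id, layer = min(candidates)        # least popular layer
        return stream_id, layer

    def make_room(cache, bytes_needed):
        """Flush victim-layer segments from the end until enough space is freed."""
        freed = 0
        while freed < bytes_needed:
            victim = pick_victim_layer(cache)
            if victim is None:
                break                                # nothing left to flush
            stream_id, layer = victim
            segs = cache[stream_id].layers[layer]
            while segs and freed < bytes_needed:
                segs.remove(max(segs))               # flush from the end
                freed += SEGMENT_SIZE
        return freed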

Popularity Function:
- in the context of streaming applications, the client can interact with the server and perform VCR-functionalities (i.e., Stop, FF, Rewind, Play)
- intuitively, the popularity of each stream should reflect the level of interest observed through this interaction

- assume that the total playback time of each stream indicates the level of interest in that stream; for example, if a client watches only half of a stream, its level of interest is half that of a client who watches the entire stream (fast forward and rewind durations can also be included with proper weighting)
- define the term "weighted hit": whit = PlaybackTime / StreamLength, where PlaybackTime is the total playback time of a session (seconds) and StreamLength is the length of the entire stream (seconds)

- the proxy calculates weighted hits on a per-layer basis for each playback
- the cumulative value of whit during a recent window (called the "popularity window") is used as the popularity index of the layer
- the popularity of each layer is recalculated at the end of a session as P = (Σ whit over the most recent Δ seconds) / Δ, where P denotes popularity and Δ denotes the width of the popularity window (a sketch of this bookkeeping follows)
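- a Python sketch of the per-layer popularity bookkeeping (the deque-based window, the window width, and all names are illustrative assumptions):

    import time
    from collections import deque

    DELTA_P = 3600.0   # width of the popularity window in seconds (assumed)

    class LayerPopularity:
        """Tracks weighted hits for one layer of one cached stream."""
        def __init__(self):
            self.hits = deque()              # (timestamp, whit) pairs

        def record_session(self, playback_time, stream_length, now=None):
            """Record one finished session: whit = PlaybackTime / StreamLength."""
            now = time.time() if now is None else now
            self.hits.append((now, playback_time / stream_length))

        def popularity(self, now=None):
            """Cumulative whit over the last DELTA_P seconds, per unit time."""
            now = time.time() if now is None else now
            while self.hits and self.hits[0][0] < now - DELTA_P:
                self.hits.popleft()          # expire hits outside the window
            return sum(w for _, w in self.hits) / DELTA_P

- for example, a layer that was delivered for half of the stream's length contributes whit = 0.5 for that session, while a layer that was never delivered contributes nothing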

Note:
- adding and dropping layers by quality adaptation results in different PlaybackTimes for different layers in a session, and consequently affects the popularity of each cached layer differently
- applying the definition of popularity on a per-layer basis is compatible with the proposed fine-grain replacement mechanism: layered encoding guarantees that the popularity of the layers of a stream decreases monotonically with the layer number, so a victim layer is always the highest in-cache layer of one of the cached streams
- the length of a layer does not affect its popularity, because whit is normalized by the stream length

Simulation Setup:
- ns-2 simulator
- RAP without an error control mechanism (no retransmission scheme)
- two sets of simulations:
1. focusing on evaluating the prefetching algorithm
2. focusing on the replacement algorithm

- in all simulations the server-proxy link is shared by 10 RAP and 10 long-lived TCP flows; one of the RAP flows delivers multimedia streams from the server to the proxy's cache, while the other 19 flows represent background traffic whose dynamics change the available BW and trigger adding and dropping of layers
- to limit the number of parameters, all streams have 8 layers, the same segment size of 1 KB, and a per-layer consumption rate of 2.5 KB/s

Evaluation Metrics:
- Completeness
* the percentage of a stream residing in the cache
* this metric allows us to trace the quality evolution of a cached stream after each playback
* defined on a per-layer basis
* the completeness of layer l in cached stream s is defined as:

Completeness(l, s) = Σ_i L_{l,i} / RL_l
where
"chunk": a contiguous group of segments of a single layer of a cached stream
chunk(l): the set of all chunks of layer l
L_{l,i}: the length (in segments) of the i-th cached chunk in layer l
RL_l: the original (full) length of the layer, in segments

- Continuity
* the level of smoothness of a cached stream
* also defined on a per-layer basis
* the continuity of layer l in cached stream s is defined in terms of layer breaks, where a layer break occurs when there is a missing segment in a layer (see the sketch below)
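- a Python sketch of both metrics computed from the per-layer set of cached segment IDs; completeness follows the definition above, while the continuity measure below (average chunk length) is only one plausible reading of "layer breaks", not the paper's exact formula:

    def chunks(segment_ids):
        """Group a layer's cached segment IDs into maximal runs of consecutive
        IDs ("chunks"); returns the list of chunk lengths."""
        lengths, run, prev = [], 0, None
        for seg in sorted(segment_ids):
            if prev is not None and seg == prev + 1:
                run += 1
            else:
                if run:
                    lengths.append(run)
                run = 1
            prev = seg
        if run:
            lengths.append(run)
        return lengths

    def completeness(segment_ids, layer_length):
        """Fraction of layer l residing in the cache: sum_i L_{l,i} / RL_l."""
        return sum(chunks(segment_ids)) / layer_length

    def continuity(segment_ids):
        """Assumed smoothness measure (not the paper's formula): the average
        chunk length, i.e. cached segments per layer break."""
        lens = chunks(segment_ids)
        return sum(lens) / len(lens) if lens else 0.0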

Results: Prefetching

- Replacement Algorithm
* examines the hypothesis that the state of the cache gradually converges to an "efficient" state as a result of the interaction between the prefetching and replacement algorithms
* the resulting quality due to cache replacement depends on two factors: stream popularity and the BW between the requesting clients and the proxy
* 10 video streams with lengths uniformly distributed between 1 and 10 minutes; Stream #0 is the most popular one

- the cache size is set to half of the total size of all 10 streams
- to study the effect of stream popularity: BW_sp >= BW_pc
- to study the effect of client bandwidth: BW_sp < BW_pc
(BW_sp: server-proxy bandwidth; BW_pc: proxy-client bandwidth)