1 Overview of Hyper-Proxy Project: High-Quality Streaming Media Delivery (an academia-industry collaboration) Xiaodong Zhang (Major Team Members: Songqing Chen, Bo Shen, Susie Wee) Funded by NSF and HP Labs
2 Proxy Caching (Server – Intermediary – Client): reduce response time to clients, reduce network traffic, reduce the server's load
3 Existing Internet Status Servers Intermediaries Clients Media Objects: very large sizes; very rigorous real-time delivery constraints (small startup latency, continuous delivery) A large number of proxies with: disk, memory, and CPU cycles Diverse client access devices: computers, PDAs, cell phones
4 A Missing Component in Proxy Caching Current streaming media delivery status (WWW'05) –Client-Server Model (poor scalability, heavy traffic) Downloading (about 90%); many media servers limit downloading to protect copyrights. –CDNs (very expensive, idle resources) Dedicated services. Why not leverage existing proxies in the Internet?
5 Typical Engineering Solutions Examining the existing proxy structure: Squid is open-source software. Redesigning Squid: adding the streaming function. Integrating segmentation methods: restructuring the proxy to make it segmentation-enabled. Testing and evaluating the streaming proxy.
6 Three Aimed Impacts Academic Impact: influencing the research community via publications, and producing strong Ph.D.s. HP Product Impact: the research prototype is identified by product divisions for technology transfer. Industry Impact: a successful technology transfer, or a product becomes in demand in industry or in the market.
7 Aimed Impacts from NSF Intellectual merit and challenges: –Knowledge advancement and discovery. –Originality and creativity of the research. –Problem-solving ability and qualifications. –Activity organization and resource availability. Broader impact: –Wider applications and deeper impact on society. –Education impact and diverse workforce training (e.g., gender, ethnicity, disability, geography, etc.) –Dissemination of the tools and software to the public. –Enhancement of infrastructure for research and education.
8 Current Solutions for Proxy Streaming Dealing with large sizes –Object segmentation –Partial caching: along the quality domain; along the viewing-time domain Reducing startup latency, increasing byte hit ratio –Priority caching: caching the beginnings of popular objects Guaranteeing continuous streaming –Little research done
9 The State-of-the-Art of Segmentation & Priority Caching Prefix Caching (INFOCOM'99-'02) – priority: low startup latency, burst smoothing Uniform Segmentation (JSAC'02) – priority: load balance, low startup latency Exponential Segmentation (WWW'01) – priority: low startup latency, quick replacement Adaptive-Lazy Segmentation (NOSSDAV'03) – priority: high byte hit ratio Existing strategies emphasize byte hit ratio or startup latency, which conflict. Can they automatically provide continuous streaming?
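The uniform and exponential strategies above differ only in how segment boundaries grow along an object. A rough illustrative sketch (not the papers' exact algorithms; lengths are in arbitrary block units, and both function names are ours):

```python
def uniform_segments(total_blocks, seg_len):
    """Uniform segmentation: every segment has the same length."""
    bounds, start = [], 0
    while start < total_blocks:
        end = min(start + seg_len, total_blocks)
        bounds.append((start, end))
        start = end
    return bounds

def exponential_segments(total_blocks, base_len=1):
    """Exponential segmentation: segment i holds base_len * 2**i blocks,
    so the frequently watched beginning sits in small, cheap-to-keep segments."""
    bounds, start, size = [], 0, base_len
    while start < total_blocks:
        end = min(start + size, total_blocks)
        bounds.append((start, end))
        start, size = end, size * 2
    return bounds
```

For a 10-block object, uniform segmentation with length 4 yields three segments, while exponential segmentation yields segments of sizes 1, 2, 4, 3, which is why exponential caching can discard large tail segments quickly.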
10 A Case of Conflicting Performance Interests Evaluated on a typical workload: WEB
11 Problems in Streaming Proxy Design Independently pursuing the highest byte hit ratio or minimizing startup latency –But the two conflict with each other! Lack of serious consideration of guaranteeing continuous streaming –But this is the most important QoS concern to viewers! –Providing continuous streaming also conflicts with improving byte hit ratio!
12 Our Design Model An efficient design model should guarantee continuous delivery subject to low startup latency and high byte hit ratio. Challenges –How to reconcile guaranteeing continuous streaming delivery with improving byte hit ratio? –How to balance reducing startup latency with improving byte hit ratio?
13 Algorithms Design in Hyper-Proxy (intellectual challenges) Active Prefetching Technique –Prefetching uncached segments to ensure continuous delivery. Lazy Segmentation Method –Segmenting objects by adapting to access patterns. Priority-based Admission Policy –Continuous streaming > small startup latency > byte hit ratio Differentiated Replacement Policy –Keeping appropriate (not necessarily the most popular) segments in the proxy
14 Our Contributions Addressed two pairs of conflicting interests Designed Hyper-Proxy following our efficient design model Implemented Hyper-Proxy and deployed it at Hewlett-Packard Company It is the first deployed segment-based streaming proxy
15 Outline Addressing two pairs of conflicting interests –Provide Continuous Streaming via Active Prefetching –Minimize Startup Latency via Performance Trade-off Hyper-Proxy Design Conclusion
16 Proxy Jitter: Minimization Objective Client – Streaming Proxy – Internet – Streaming Server Proxy Jitter: the delay of fetching uncached segments Bs: encoding rate; Bt: average network bandwidth Prefetching is done in a unicast channel
17 Window-based Prefetching Principle: fetch the next segment after a client starts to access the current segment Streaming speed = encoding rate, Bs Prefetching speed = average network bandwidth, Bt In-time prefetching requires Bs ≤ 2Bt
18 Active Prefetching Principle: prefetching starts as early as when a client starts to access the object. Additional notation –Ns: the number of cached segments of an object For a uniformly segmented object: –If Ns < Bs/Bt – 1 (converted to a number of segments), some segments cannot be prefetched in time: proxy jitter! –If Ns ≥ Bs/Bt – 1, all segments can be prefetched in time.
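The slide's jitter-free condition for a uniformly segmented object is a one-line check. A minimal sketch (the function name is ours):

```python
def jitter_free(ns, bs, bt):
    """Active prefetching, uniform segmentation: with ns cached segments,
    encoding rate bs, and average network bandwidth bt, all uncached
    segments arrive in time iff ns >= bs/bt - 1 (per the slide)."""
    return ns >= bs / bt - 1
```

For example, at Bs = 512 Kbps over a Bt = 128 Kbps path, at least 3 segments must already be cached; when Bt ≥ Bs, the condition holds even with nothing cached.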
19 Active Prefetching? Is it possible to guarantee always-in-time prefetching? Yes: increase Ns, i.e., cache more segments of those objects that cannot afford in-time prefetching. How many segments of an object should be cached so that active prefetching always works?
20 Active Prefetching – A Step Further Minimum cached length of an object: –to ensure the prefetching of its uncached segments is in time (Lobj: length of the object; L1: length of the 1st cached segment). Derived for both cases: uniformly segmented object; exponentially segmented object.
21 Active Prefetching – In Practice Cached length by popularity: high, medium, low. With the same required minimum length for continuous streaming, byte hit ratio slightly decreases but proxy jitter is totally eliminated! Popularity alone is insufficient for popular files: not all popular segments need to be cached, only the appropriate ones. Low-popularity documents need additional caching space, which is provided by others. Conflicting interests reconciled.
22 Outline Addressing two pairs of conflicting interests –Guarantee Continuous Streaming via Active Prefetching –Trade-offs between Startup Latency and Byte Hit Ratio Hyper-Proxy Design Conclusion and Future Work
23 Modeling Assumptions Assumption 1: there is no premature termination. Assumption 2: objects are sequentially accessed. Assumption 3: Zipf-like distribution of object popularities (USITS’01, NOSSDAV’02) Assumption 4: Poisson distribution of request arrival intervals (USITS’01, NOSSDAV’02)
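Assumptions 3 and 4 combine naturally into a synthetic request trace. A hedged sketch (not the actual simulator; the function name is ours, and the defaults merely echo the WEB workload parameters on a later slide):

```python
import random

def synthetic_workload(n_requests, n_objects, skew=0.47, mean_interarrival=4.0, seed=42):
    """Zipf-like popularity (weight 1/rank**skew over object ranks) plus
    Poisson arrivals (exponential inter-arrival times with the given
    mean, in seconds). Returns a list of (timestamp, object_id)."""
    rng = random.Random(seed)
    weights = [1.0 / (rank ** skew) for rank in range(1, n_objects + 1)]
    objects = list(range(n_objects))
    t, trace = 0.0, []
    for _ in range(n_requests):
        t += rng.expovariate(1.0 / mean_interarrival)  # Poisson arrival process
        trace.append((t, rng.choices(objects, weights=weights)[0]))
    return trace
```

Sequential access per request (Assumption 2) and no premature termination (Assumption 1) then mean each trace entry plays its object from the beginning to its full length.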
24 Modeling Results We mathematically derive the relative rate of change of byte hit ratio with respect to the delayed-start request ratio. Byte hit ratio decreases much more slowly than the delayed-start request ratio does. Thus a small reduction of byte hit ratio can be traded for a large decrease in delayed-start requests.
25 Outline Addressing two pairs of conflicting interests –Guarantee Continuous Streaming via Active Prefetching –Minimize Startup Latency via Performance Trade-off Hyper-Proxy Design Conclusion and Future work
26 Algorithms Design in Hyper-Proxy Active Prefetching Technique –Prefetching uncached segments Lazy Segmentation Method –Segmenting objects according to real access patterns Priority-based Admission Policy –Continuous streaming > small startup latency > byte hit ratio Differentiated Replacement Policy –Keeping appropriate (not necessarily the most popular) segments in the proxy
27 Basic Hyper-Proxy Operations (Lazy-Segmentation based)
Admission: fully admit initially accessed objects. Replacement: find a victim. Lazy Segmentation: triggered by a replacement. Replacement: keep partial segments, shift to the next list. More replacements: add partial segment sets. Prefetching: on a shortage of segments, shift lists. Admission: based on the prefetching calculation.
Notation: Size = full size of a media object; CDL = cached data length.
Object lists by state: CDL = Size; Prefetching Length ≤ CDL < Size; CDL < Prefetching Length.
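The three cached-data-length states named here can be expressed as a small classifier. A sketch with our own state labels, following the slide's three CDL ranges:

```python
def object_state(cdl, size, prefetching_length):
    """Classify a cached media object by its cached data length (CDL)
    relative to its full Size and its required Prefetching Length."""
    if cdl >= size:
        return "fully cached"                       # CDL = Size
    if cdl >= prefetching_length:
        return "partial, jitter-free"               # Prefetching Length <= CDL < Size
    return "partial, below prefetching length"      # CDL < Prefetching Length
```

Replacement moves objects down this ladder (from fully cached to partial), while admission and prefetching move them back up.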
28 Priority-based Admission Policy Admission due to new object access Admission for Prefetching Length Admission due to popularity increase
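The three admission cases can be ranked by the policy's stated ordering (continuous streaming > small startup latency > byte hit ratio). The mapping of cases to ranks and all names below are our illustration, not the deployed policy:

```python
# Assumed mapping of admission cases onto the slide's priority order:
PRIORITY = {"prefetching_length": 0,   # keeps streaming continuous
            "new_object": 1,           # keeps startup latency small
            "popularity_increase": 2}  # improves byte hit ratio

def admit(requests, free_space):
    """Grant (kind, size) admission requests in priority order while
    cache space remains; lower-priority requests may be refused."""
    granted = []
    for kind, size in sorted(requests, key=lambda r: PRIORITY[r[0]]):
        if size <= free_space:
            granted.append((kind, size))
            free_space -= size
    return granted
```

Under pressure, space for an object's Prefetching Length is thus secured before space that would merely raise byte hit ratio.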
29 Active Prefetching For partially cached objects: –Determine when to prefetch which segment(s) upon client accesses –Determine whether the Prefetching Length is cached
30 Lazy Segmentation Collect object access statistics across client accesses. Calculate the base segment length as the average client access length at the moment a fully cached object is selected as the victim by the replacement policy. Different objects thus have different base segment lengths.
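The base-length calculation reduces to an average over the object's logged access lengths. A minimal sketch (function name is ours):

```python
def base_segment_length(access_lengths):
    """Average client access length for one object, computed lazily at
    the moment the fully cached object becomes a replacement victim;
    each object therefore gets its own base segment length."""
    return sum(access_lengths) / len(access_lengths)
```

An object whose clients watch 60, 120, and 180 seconds gets a 120-second base segment, so its segmentation reflects how it is actually viewed rather than a fixed global unit.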
31 Differentiated Replacement Select victims according to different admission situations. For a partially cached victim object, replace the last segment (so the Startup Length is preserved). For a fully cached victim object, replace all segments but the first L_thd.
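The two eviction cases can be sketched on a toy object representation (a dict of our own design; `l_thd` stands in for the slide's L_thd, and this is an illustration rather than the deployed code):

```python
def evict_from(victim, l_thd=1):
    """Differentiated replacement on one victim object, modeled as
    {'segments': [...], 'fully_cached': bool}. A fully cached victim
    keeps only its first l_thd segments; a partial victim drops just
    its last segment, preserving the startup prefix."""
    if victim["fully_cached"]:
        victim["segments"] = victim["segments"][:l_thd]
        victim["fully_cached"] = False
    elif victim["segments"]:
        victim["segments"].pop()
    return victim
```

Repeated evictions therefore shrink an object from its tail inward, which keeps the segments that matter for startup and continuous delivery resident the longest.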
32 Workload Summary

Workload  #req   #obj  Size  λ  α     Length  Encoding (Bs)  Transfer (Bt)  Duration
WEB       15188  400   51    4  0.47  2-120   28~256         [0.5, 2]       1
PART      15188  400   51    4  0.47  2-120   28~256         [0.5, 2]       1
REAL      9000   403   20    -  -     -       -              -              10

λ: of Poisson arrivals, in seconds. α: the skew factor of the Zipf-like distribution. Length: of objects, in minutes. Duration: of simulation time, in days. Size: of total objects, in GB. Encoding, Transfer: in Kbps.
33 Hyper-Proxy Performance (compared with Lazy-Segmentation, and Lazy-Segmentation with prefetching)
34 Hyper-Proxy Performance (compared with Lazy-Segmentation, and Lazy-Segmentation with prefetching)
35 Hyper-Proxy Performance (compared with Lazy-Segmentation, and Lazy-Segmentation with prefetching)
36 Outline Addressing two pairs of conflicting interests Hyper-Proxy Design Conclusion
37 Conclusions Hyper-Proxy design and implementation –Provides an effective design model –Enables cost-effective streaming delivery service on Web servers –Minimizes playback jitter and startup latency –Measurement results are consistent with the simulation results.
38 Outline Background Architecture and Performance Conclusion and Ongoing Work
39 Hyper-Proxy – Protocol Servers – Intermediaries (Hyper-Proxy) – Clients. Proxy-to-server side: HTTP, no changes to servers; proxy-to-client side: RTP/RTSP. Existing Web servers can provide "real" streaming service!
40 Hyper-Proxy – Architecture Components: Streaming Engine (performs streaming; speaks RTP/RTSP to the client), Local Manager & Scheduler (interfaces the two engines; handles synchronization of meta data), Segmentation-Enabled Cache Engine (object segmentation, segment admission and replacement; fetches media data over HTTP from the Internet), and disk storage, with a fast data path between the engines. Request flow: the client asks for ghost.mp4; the proxy requests the 1st segment, then each next segment, passing meta data and media data through.
41 Hyper-Proxy Evaluation Test Environments Local Access Environment -- HP Labs, Palo Alto, CA –Server (apache 2.0.45): Pentium III 1 GHz, SuSE Linux 8.0 –Hyper-Proxy: HP x4000 workstation 2 GHz –Client: Pentium III 1GHz, SuSE Linux 8.0 Remote Access Environment -- Takaido, Japan –Server (apache 2.0.45): Pentium III 1 GHz, SuSE Linux 8.0 –Hyper-Proxy: HP x4000 workstation 2 GHz –Client: Pentium III 1 GHz, SuSE Linux 8.0 100 60-second Video Clips Encoded at 75 Kbps
42 Hyper-Proxy Evaluation in Global Internet Environment US West Coast -- HP Labs, Palo Alto, CA –Server (apache 2.0.45): Pentium III 1 GHz, SuSE Linux 8.0 –Hyper-Proxy: HP x4000 workstation 2 GHz –Many Clients: Pentium III 1 GHz, SuSE Linux 8.0 Asia – NTT, Takaido, Japan –Server (apache 2.0.45): Pentium III 1 GHz, SuSE Linux 8.0 –Hyper-Proxy: HP x4000 workstation 2 GHz –Many Clients: Pentium III 1 GHz, SuSE Linux 8.0 US East Coast – William and Mary, Williamsburg, VA –Hyper-Proxy: HP x4000 workstation 2 GHz –Many clients: diverse types.
43 Hyper-Proxy Performance Playback Jitter vs. Scalability PSNR (Peak Signal-to-Noise Ratio) shows no quality degradation with up to 100+ concurrent clients!
44 Hyper-Proxy Performance Startup Latency vs. Scalability
45 Hyper-Proxy Performance HP Company Media Server Log (08/01/2002) #requests: 4898; amount of data: 15.2554 GB; 12 hours

Rate (Kbps)  File Lengths (minutes)     Max Access Duration (minutes)
28           1, 10, 20, 50              1
56           50                         12
112          1, 2, 5, 10, 20, 50        14
156          1, 20, 50                  14
180          2, 5, 10, 20, 50, 100      50
256          1, 2, 5, 10, 20, 50, 100   25
46 Hyper-Proxy Performance Byte Hit Ratio vs. Cache Size
47 Outline Background and Motivation Architecture and Performance Conclusion and Ongoing Work
48 Conclusions Hyper-Proxy design and implementation –Provide an efficient design model –Enable streaming delivery service on Web servers –Cause little playback jitter –Produce a small startup latency
49 Current Status and Impacts Impact of Hyper-Proxy at HP (product impact) –Deployed at Printing Division for world-wide training –Deployed at Telecommunication Division –Used by Imaging and Signal Processing Team in HP Labs Potential Industry Impact –Enterprise solution after the trial stage –Education solution for remote education (currently several universities are using the prototypes). –Two patents pending.
50 Project 1: Mobility Servers Intermediaries Clients Device Hand-Off Cooperation
51 Project 2: P2P Streaming Servers Intermediaries Clients Sharing contents and streaming resources
52 Project 3: Diversity of Devices Servers Intermediaries Clients Different screen sizes, color depths, etc. Transcoding
53 Project 4: Live Streaming Servers Intermediaries Clients The annual Academy Awards (the Oscars) cannot be cached
54 Acknowledgement NSF and HP Labs for research grants Collaborators: Bo Shen, Susie Wee, Sumit Roy, Yong Yan, Sujoy Basu, Dan (Wai-tian) Tan, John Ankcorn, Mitch Trott, Zhichen Xu; Zhen Xiao Members and alumni of the HPCS Lab@WM
55 Related Publications (academic impact) 1. S. Chen, B. Shen, Y. Yan, S. Basu, and X. Zhang. SRB: Shared Running Buffers in Proxy to Exploit Memory Locality of Multiple Streaming Sessions. IEEE ICDCS 2004. 2. S. Chen, B. Shen, S. Wee, and X. Zhang. Designs of High Quality Streaming Proxy Systems. IEEE INFOCOM 2004. 3. S. Chen, B. Shen, S. Wee, and X. Zhang. Investigating Performance Insights of Segment-based Proxy Caching of Streaming Media Strategies. ACM/SPIE MMCN 2004. 4. S. Chen, B. Shen, S. Wee, and X. Zhang. Streaming Flow Analyses for Prefetching in Segment-based Proxy Caching Strategies to Improve Media Delivery Quality. WCW 2003. 5. S. Chen, B. Shen, S. Wee, and X. Zhang. Adaptive and Lazy Segmentation Based Proxy Caching for Streaming Media Delivery. ACM NOSSDAV 2003. Songqing Chen received his Ph.D. and has joined a research university (George Mason University).