Slide 1: SHADOWSTREAM: PERFORMANCE EVALUATION AS A CAPABILITY IN PRODUCTION INTERNET LIVE STREAMING NETWORKS
ACM SIGCOMM 2012
2012.10.15, Cing-Yu Chu
Slide 2: MOTIVATION
- Live streaming is a major Internet application today
- Evaluating live streaming systems:
  - Lab/testbed, simulation, and modeling struggle to offer both scalability and realism
  - Live testing: evaluate inside the production network itself
Slide 3: CHALLENGES
- Protection
  - Preserve real viewers' QoE
  - Mask failures from real viewers
- Orchestration
  - Orchestrate the desired experimental scenarios (e.g., flash crowd)
  - Without disturbing QoE
Slide 4: MODERN LIVE STREAMING
- Complex hybrid systems: peer-to-peer (P2P) network + content delivery network (CDN)
- BitTorrent-like design
  - Tracker: peers watching the same channel form an overlay network topology
  - Basic unit of data exchange: pieces
Slide 5: MODERN LIVE STREAMING
- Modules:
  - P2P topology management
  - CDN management
  - Buffer and playpoint management
  - Rate allocation
  - Download/upload scheduling
  - Viewer interfaces
  - Shared-bottleneck management
  - Flash-crowd admission control
  - Network friendliness
Slide 6: METRICS
- Piece missing ratio: fraction of pieces not received by their playback deadline
- Channel supply ratio: total bandwidth capacity (CDN + P2P) divided by total streaming bandwidth demand
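The two metrics can be made concrete with a small sketch; this is not code from the paper, and the record fields and capacity inputs are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class PieceRecord:
    deadline: float               # playback deadline of the piece (seconds)
    received_at: Optional[float]  # arrival time, or None if never received


def piece_missing_ratio(records: List[PieceRecord]) -> float:
    """Fraction of pieces not received by their playback deadline."""
    if not records:
        return 0.0
    missed = sum(1 for r in records
                 if r.received_at is None or r.received_at > r.deadline)
    return missed / len(records)


def channel_supply_ratio(cdn_capacity_kbps: float, p2p_capacity_kbps: float,
                         num_clients: int, streaming_rate_kbps: float) -> float:
    """Total bandwidth capacity (CDN + P2P) over total streaming demand."""
    return (cdn_capacity_kbps + p2p_capacity_kbps) / (num_clients * streaming_rate_kbps)
```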
Slide 7: MISLEADING RESULTS: SMALL SCALE
- EmuLab: 60 clients vs. 600 clients
- Supply ratio: 1.67 (small) vs. 1.29 (large)
- The real cause of the difference: a content bottleneck!
Slide 8: MISLEADING RESULTS: SMALL SCALE
- With a connection limit, the CDN server's neighbor connections are exhausted by the clients that join earlier
Slide 9: MISLEADING RESULTS: MISSING REALISTIC FEATURES
- Network diversity
  - Network connectivity
  - Amount of network resources
  - Network protocol implementations
  - Router policies
  - Background traffic
Slide 10: MISLEADING RESULTS: MISSING REALISTIC FEATURES
- LAN-like network vs. ADSL-like network
- Hidden buffers: ADSL has larger buffers but limited upload bandwidth
Slide 11: SYSTEM ARCHITECTURE
Slide 12: STREAMING MACHINE
- A self-contained set of algorithms to download and upload pieces
- A client can run multiple streaming machines; the experiment machine is denoted E
- Streaming machines fill the client's play buffer
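A hedged sketch of the streaming-machine abstraction this slide describes: a self-contained download/upload engine that is driven over a window of pieces. The method names are illustrative, not the paper's API.

```python
from abc import ABC, abstractmethod
from typing import Set


class StreamingMachine(ABC):
    """A self-contained set of piece download/upload algorithms.

    A client may host several machines (e.g., experiment E, repair R,
    production P); each is assigned piece ranges and contributes the
    pieces it obtains to the client's play buffer.
    """

    @abstractmethod
    def assign_window(self, start_piece: int, end_piece: int) -> None:
        """Tell the machine which range of pieces it is responsible for."""

    @abstractmethod
    def on_piece_received(self, piece_id: int, data: bytes) -> None:
        """Handle a piece downloaded from a neighbor or a CDN server."""

    @abstractmethod
    def missing_pieces(self, deadline: float) -> Set[int]:
        """Pieces in the window still missing whose deadline is before `deadline`."""
```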
Slide 13: R+E TO MASK FAILURES
- Run another streaming machine alongside the experiment (E)
- The repair machine (R) provides protection by repairing what the experiment misses
Slide 14: R+E TO MASK FAILURES
- Virtual playpoint: introduce a slight delay to hide failures from real viewers
- R = rCDN: dedicated CDN resources
  - Problem: the dedicated CDN becomes a bottleneck
Slide 15: R = PRODUCTION
- Use the production streaming engine as the repair machine
  - Fine-tuned algorithms (hybrid architecture)
  - Larger resource pool, hence more scalable protection
  - Already serving clients before the experiment starts
Slide 16: PROBLEM OF R = PRODUCTION
- Systematic bias: competition between experiment and production
- To protect QoE, production gets higher priority, which underestimates the experiment's performance
Slide 17: PCE
- R = P + C
  - C: a CDN repair machine (rCDN) with resources bounded by δ
  - P: the production streaming machine
Slide 18: PCE
- rCDN acts as a filter: it "lowers" the experiment's piece missing ratio curve, as seen by production, by δ
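A simplified sketch of the filter idea, under the assumptions that repair happens at piece granularity and that rCDN has a per-window budget proportional to δ: the pieces the experiment misses at the virtual playpoint go first to the bounded rCDN, and only the overflow becomes visible to production.

```python
from typing import List, Tuple


def repair_plan(missing_pieces: List[int], window_size: int,
                delta: float) -> Tuple[List[int], List[int]]:
    """Split the experiment's missing pieces between C (rCDN) and P (production).

    rCDN repairs at most delta * window_size pieces per window, so the piece
    missing ratio that production observes is lowered by (up to) delta;
    whatever exceeds the budget falls through to production.
    """
    budget = int(delta * window_size)
    to_rcdn = missing_pieces[:budget]
    to_production = missing_pieces[budget:]
    return to_rcdn, to_production


# Example: a 100-piece window, the experiment missed 7 pieces, delta = 4%.
to_rcdn, to_production = repair_plan([3, 10, 11, 42, 57, 58, 91], 100, 0.04)
# to_rcdn -> [3, 10, 11, 42]; to_production -> [57, 58, 91]
```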
Slide 19: IMPLEMENTATION
- Streaming machines run as modular processes
- A sliding window partitions the downloading tasks
Slide 20: STREAMING HYPERVISOR
- Task window management: sets up the sliding windows
- Data distribution control: copies data among streaming machines
- Network resource control: schedules bandwidth among streaming machines
- Experiment transition
Slide 21: STREAMING HYPERVISOR
Slide 22: TASK WINDOW MANAGEMENT
- Informs each streaming machine of the pieces it should download
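A minimal sketch of sliding-window task assignment, assuming the window starts some lag ahead of the playpoint; the parameters and their values are illustrative, not the paper's.

```python
def task_window(playpoint_piece: int, lag: int, window_size: int) -> range:
    """Return the range of piece IDs a streaming machine should work on.

    The window starts `lag` pieces ahead of the current playpoint and covers
    `window_size` pieces; as the playpoint advances, the window slides forward
    and newly exposed pieces become the machine's download tasks.
    """
    start = playpoint_piece + lag
    return range(start, start + window_size)


# Example: playpoint at piece 1000, window starts 10 pieces ahead, 50 pieces wide.
print(list(task_window(1000, lag=10, window_size=50))[:3])  # [1010, 1011, 1012]
```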
Slide 23: DATA DISTRIBUTION CONTROL
- Data store
  - A single shared data store holds downloaded pieces
  - Each streaming machine keeps its own pointer into the store
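A sketch of the shared-data-store-with-pointers idea, under the assumption that each machine consumes pieces in order; the class and method names are illustrative.

```python
from typing import Dict, Optional


class SharedDataStore:
    """One copy of each downloaded piece, shared by all streaming machines."""

    def __init__(self) -> None:
        self.pieces: Dict[int, bytes] = {}
        self.pointers: Dict[str, int] = {}  # machine name -> next piece ID to read

    def put(self, piece_id: int, data: bytes) -> None:
        """Store a piece once; any machine may later read it."""
        self.pieces.setdefault(piece_id, data)

    def register(self, machine: str, start_piece: int) -> None:
        """Give a streaming machine its own pointer into the store."""
        self.pointers[machine] = start_piece

    def next_for(self, machine: str) -> Optional[bytes]:
        """Hand the machine its next contiguous piece, if already downloaded."""
        piece_id = self.pointers[machine]
        data = self.pieces.get(piece_id)
        if data is not None:
            self.pointers[machine] = piece_id + 1
        return data
```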
Slide 24: NETWORK RESOURCE CONTROL
- Production traffic bears higher priority
- LEDBAT is used for bandwidth estimation
- Avoids congesting hidden buffers in the network
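A hedged sketch of priority-based bandwidth scheduling: production gets strict priority, and the experiment only receives what is left of the capacity estimate, which in the real system would come from a LEDBAT-style delay-based estimator (here it is simply an input parameter).

```python
from typing import Dict


def allocate_upload(estimated_capacity_kbps: float,
                    production_demand_kbps: float,
                    experiment_demand_kbps: float) -> Dict[str, float]:
    """Split the estimated uplink capacity, giving production strict priority.

    `estimated_capacity_kbps` would come from a LEDBAT-style estimator that
    backs off when one-way delay rises (i.e., when hidden buffers start to
    fill); the experiment only gets what production leaves unused.
    """
    production = min(production_demand_kbps, estimated_capacity_kbps)
    leftover = estimated_capacity_kbps - production
    experiment = min(experiment_demand_kbps, leftover)
    return {"production": production, "experiment": experiment}


# Example: 800 kbps estimated uplink, production needs 500, experiment wants 600.
print(allocate_upload(800, 500, 600))  # {'production': 500, 'experiment': 300}
```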
Slide 25: EXPERIMENT ORCHESTRATION
- Triggering
- Arrival
- Experiment transition
- Departure
Slide 26: SPECIFICATION AND TRIGGERING
- The testing behavior pattern is specified as multiple classes
- Each class has an arrival rate function over an interval and a duration function L
- A triggering condition determines the start time t_start
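A sketch of what a per-class experiment specification might look like, following the ingredients on this slide (arrival rate function over an interval, duration function L, triggering condition for t_start); all names here are illustrative assumptions, not the paper's specification language.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class BehaviorClass:
    """One class of testing clients in the experiment specification."""
    arrival_rate: Callable[[float], float]  # lambda(t): arrivals/sec within the interval
    duration: Callable[[], float]           # L: how long an arrived client stays
    interval: Tuple[float, float]           # interval (relative to t_start) where lambda applies


@dataclass
class ExperimentSpec:
    classes: List[BehaviorClass]
    trigger: Callable[[], bool]             # condition deciding t_start (e.g., enough testers online)


# Example: a flash-crowd class ramping from 0 to 50 arrivals/sec over 60 seconds.
flash_crowd = BehaviorClass(
    arrival_rate=lambda t: min(50.0, 50.0 * t / 60.0),
    duration=lambda: 600.0,
    interval=(0.0, 300.0),
)
```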
Slide 27: ARRIVAL
- Clients arrive independently yet realize the global arrival pattern
- Network-wide common parameters t_start, t_exp, and λ(t) are carried in keep-alive messages
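A sketch of how independent clients can realize a common arrival pattern: each client that learns the shared parameters (t_start, t_exp, λ(t)) from a keep-alive message samples its own join offset, so no central scheduler is needed. This uses simple rejection sampling for illustration and is not necessarily the paper's exact algorithm.

```python
import random
from typing import Callable


def sample_join_offset(rate: Callable[[float], float],
                       t_exp: float, rate_max: float) -> float:
    """Sample this client's join offset in [0, t_exp) with density proportional to rate(t).

    Rejection sampling: propose a uniform time and accept it with probability
    rate(t) / rate_max.  When every participating client runs this independently
    with the same shared parameters, the aggregate arrivals follow the shape of
    lambda(t) without any central coordination.
    """
    while True:
        t = random.uniform(0.0, t_exp)
        if random.random() < rate(t) / rate_max:
            return t


# Example: a 60-second linear flash-crowd ramp; the client joins at t_start + offset.
offset = sample_join_offset(lambda t: min(50.0, 50.0 * t / 60.0), 60.0, 50.0)
```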
Slide 28: EXPERIMENT TRANSITION
- A client at current time t_0 with experiment join time a_{e,i} transitions during [t_0, a_{e,i}]
- Connectivity transition
  - Production connects to neighbors' production machines (not in the test)
  - Production rejoins the overlay
Slide 29: EXPERIMENT TRANSITION
- Play buffer state transition
- Legacy removal
Slide 30: DEPARTURE
- Early departure
  - Capture a snapshot of the client's state
  - Use a disconnection message
- Substitution: run the arrival process again
- The experiment departure pattern can only be equal to or more frequent than the real viewers' departure pattern
Slide 31: EVALUATION
- Software framework
- Experimental opportunities
- Protection and accuracy
- Experiment control
- Deterministic replay
Slide 32: SOFTWARE FRAMEWORK
- Compositional run-time with a block-based architecture
- Total of ~8000 lines of code
- Flexibility
Slide 33: EXPERIMENTAL OPPORTUNITIES
- Real traces from 2 live streaming testing channels (impossible in a testbed)
- Flash crowds
- Periods where no client departs
Slide 34: PROTECTION AND ACCURACY
- Run on EmuLab (a weakness)
- Multiple experiments with the same settings, 300 clients
- δ ≈ 4%
- Tested even with buggy experiment code!
Slide 35: EXPERIMENT CONTROL
- Trace-driven simulation
- Accuracy of distributed arrivals
- Impact of clock synchronization: skew of up to 3 seconds
Slide 36: DETERMINISTIC REPLAY
- Minimize the data logged by the hypervisor
  - Protocol packets: whole payload
  - Data packets: header only
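A sketch of the logging policy described on this slide: protocol packets are logged with their whole payload, while for data packets only the header is kept, which keeps the log small while still allowing control decisions to be replayed. The packet classification, header length, and record fields are illustrative assumptions.

```python
import json
import time

HEADER_LEN = 16  # assumed fixed header size for data packets (illustrative)


def log_packet(log_file, packet: bytes, is_protocol: bool) -> None:
    """Append one packet to the replay log.

    Protocol packets: whole payload (they drive control decisions).
    Data packets: header only (their payload is not needed for replay).
    """
    kept = packet if is_protocol else packet[:HEADER_LEN]
    record = {
        "ts": time.time(),
        "protocol": is_protocol,
        "orig_len": len(packet),
        "bytes": kept.hex(),
    }
    log_file.write(json.dumps(record) + "\n")
```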