CS 414 – Multimedia Systems Design
Lecture 37 – P2P Streaming and P2P Applications/PPLive
Klara Nahrstedt, Spring 2011
Administrative
- MP3 preview is on April 27
- Sign-up sheet for the April 27 demonstrations will be provided in class on April 27
- Preview demonstrations on April 27, 7-9pm in 216 SC
Administration
- Final MP3 delivery on April 29
- Demonstration time slots on April 29 will be allocated on Thursday, April 28 (TA will assign a slot to each group)
- Two demonstration intervals: students not in the Google competition – demos 2-4pm in 216 SC; students in the Google competition – demos 5-7pm in 216 SC
- Pizza and Google Prizes announcement at 7:15pm, April 29 (room 3403 SC)
- Homework 2 will be out on Wednesday, April 27; deadline May 4, 11:59pm – submission via Compass
Outline
- P2P Streaming
  - Single tree (previous lecture)
  - Multiple trees
  - Mesh-based streaming
- Case study: PPLive
Why P2P?
- Every node is both a server and a client
  - Easier to deploy applications at endpoints
  - No need to build and maintain expensive infrastructure
- Potential for both performance improvement and additional robustness
  - Additional clients create additional servers for scalability
Multiple Trees
- Challenge: a peer must be an internal node in only one tree, and a leaf in all the rest
- [Figure: source distributing over multiple trees] Are nodes 1, 2, 3 receiving the same data multiple times?
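A minimal sketch (not from the lecture) of how this constraint could be enforced: each peer is assigned exactly one tree in which it forwards data and joins every other tree as a leaf. The names assign_roles and NUM_TREES are illustrative.

```python
# Hypothetical sketch: give each peer exactly one tree in which it is an
# interior (forwarding) node; in every other tree it stays a leaf.

NUM_TREES = 2  # e.g., one tree per MDC description

def assign_roles(peer_ids, num_trees=NUM_TREES):
    """Return {peer_id: index of the tree where this peer is interior}."""
    roles = {}
    for i, peer in enumerate(peer_ids):
        roles[peer] = i % num_trees  # round-robin keeps interior load balanced
    return roles

def is_interior(peer, tree, roles):
    return roles[peer] == tree

roles = assign_roles(["p1", "p2", "p3", "p4"])
# p1 and p3 forward in tree 0, p2 and p4 forward in tree 1;
# each peer is a leaf in the tree where it does not forward.
```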
Multiple Description Coding (MDC)
- Each description can be independently decoded (only one is needed to reproduce audio/video)
- The more descriptions received, the higher the quality
- [Figure: MDC coder splits frames into packets for descriptions 0 … n]
Streaming in multiple trees using MDC
- Assume odd-bit/even-bit encoding: description 0 is derived from a frame's odd bits, description 1 from its even bits
- Each description is then streamed over its own tree (using RTP/UDP)
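A toy illustration of the odd-bit/even-bit split, assuming a frame is just a byte string; this is my own sketch, not the lecture's or any codec's actual MDC scheme. Either description alone decodes to a coarse frame; both together restore the original.

```python
# Toy odd/even-bit MDC split for one video frame (a bytes object).
# Description 0 keeps the odd bit positions, description 1 the even ones.

ODD_MASK  = 0xAA  # bit positions 1, 3, 5, 7
EVEN_MASK = 0x55  # bit positions 0, 2, 4, 6

def encode_mdc(frame: bytes):
    d0 = bytes(b & ODD_MASK for b in frame)   # description 0 (odd bits)
    d1 = bytes(b & EVEN_MASK for b in frame)  # description 1 (even bits)
    return d0, d1

def decode_mdc(d0=None, d1=None):
    # A missing description is treated as zeros -> lower quality, still decodable.
    n = len(d0 if d0 is not None else d1)
    d0 = d0 if d0 is not None else bytes(n)
    d1 = d1 if d1 is not None else bytes(n)
    return bytes(a | b for a, b in zip(d0, d1))

frame = b"\x8f\x3c\xff"
d0, d1 = encode_mdc(frame)
assert decode_mdc(d0, d1) == frame   # both descriptions: full quality
coarse = decode_mdc(d0=d0)           # one description: degraded frame
```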
Multiple-Tree Issues
- Complex procedure to locate a potential-parent peer with spare out-degree
- Degraded quality until a parent is found in every tree
- Static mapping in trees, instead of choosing parents based on their (and my) bandwidth
- An internal node can be a bottleneck
Mesh-based streaming
- Basic idea (see the sketch below):
  - Report to peers the packets that you have
  - Ask peers for the packets that you are missing
  - Adjust connections depending on in/out bandwidth
- Nodes are randomly connected to their peers, instead of statically
- The mesh uses MDC: different peers hold and forward different subsets of descriptions
- [Figure: mesh of peers exchanging descriptions 0, 1, 2]
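A minimal, illustrative sketch of the report/pull loop (the class and method names are mine, not from any real system): each peer periodically reports the packet IDs it holds, and then requests missing packets from neighbors that advertised them.

```python
import random

# Illustrative mesh-pull loop: peers gossip "have" sets and pull missing packets.

class MeshPeer:
    def __init__(self, name):
        self.name = name
        self.have = set()          # packet ids already received
        self.neighbor_have = {}    # neighbor -> last reported "have" set

    def report(self):
        """Buffer-map style report sent to neighbors each period."""
        return set(self.have)

    def on_report(self, neighbor, have_set):
        self.neighbor_have[neighbor] = have_set

    def pull_round(self):
        """Decide which neighbor to ask for each missing packet."""
        requests = {}
        for neighbor, have_set in self.neighbor_have.items():
            for pkt in have_set - self.have:
                requests.setdefault(pkt, []).append(neighbor)
        # pick a random holder for each missing packet
        return {pkt: random.choice(holders) for pkt, holders in requests.items()}

# usage: a and b exchange reports, then b pulls what it lacks
a, b = MeshPeer("a"), MeshPeer("b")
a.have = {1, 2, 3}
b.have = {2}
b.on_report("a", a.report())
print(b.pull_round())   # {1: 'a', 3: 'a'}
```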
Content delivery
- (1) Diffusion phase, then (2) swarming phase
- Levels are determined by hops to the source
- [Figure: peers organized into levels, receiving descriptions 0, 1, 2]
Diffusion Phase
- A new segment (set of packets) of length L becomes available at the source every L seconds
- Level 1 nodes pull data units from the source, then level 2 pulls from level 1, etc.
- Recall that reporting and pulling are performed periodically
- [Figure: "have segment" / "send me segment" exchanges during periods 0 and 1; drawings follow the previous example]
Swarming Phase
- At the end of diffusion, all nodes have at least one data unit of the segment
- Missing data units are pulled from (swarm-parent) peers located at the same or a lower level
- Question from the figure: can node 9 pull new data units from node 16? No – node 9 cannot pull that data within a single swarm interval
- A rough sketch of both phases follows below (drawings follow the previous example)
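A rough sketch of the two phases under the lecture's assumptions (level = hop count from the source, one pull per period); the Node class and the tiny three-node topology are invented for illustration.

```python
# Sketch of diffusion then swarming for one segment. All names are illustrative.

class Node:
    def __init__(self, name, level):
        self.name, self.level = name, level
        self.units = set()               # data units of the current segment

source = Node("src", 0); source.units = {"u0", "u1"}
n1 = Node("n1", 1)
n2 = Node("n2", 2)
nodes = [source, n1, n2]

# Diffusion: in period t, level-(t+1) nodes pull the newest unit from level t.
def diffusion(period):
    for node in nodes:
        if node.level == period + 1:
            parents = [p for p in nodes if p.level == node.level - 1 and p.units]
            if parents:
                node.units.add(sorted(parents[0].units)[0])  # pull one new unit

# Swarming: pull the remaining units from peers at the same or a lower level.
def swarming(node, all_units):
    for unit in all_units - node.units:
        holders = [p for p in nodes if p.level <= node.level and unit in p.units]
        if holders:
            node.units.add(unit)

diffusion(0)                  # n1 gets a unit from the source
diffusion(1)                  # n2 gets a unit from n1
swarming(n2, {"u0", "u1"})    # n2 fills in whatever it still misses
```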
Single Tree / Multi-tree / Mesh all use Overlay P2P Multicast
- [Figure: overlay tree (Purdue source, Stan1, Stan2, Berk1, Berk2) built on top of the "dumb network" connecting Purdue, Stanford, Berkeley, and Gatech]
- Source: Sanjay Rao's lecture from Purdue
Overlay Performance
- Even a well-designed overlay cannot be as efficient as IP Multicast, but the performance penalty can be kept low
- Trade off some performance for other benefits
- [Figure: overlay on the "dumb network" incurs increased delay and duplicate packets (bandwidth wastage)]
- Source: Sanjay Rao's lecture from Purdue
Traffic Distribution (2006) and New Trends (P4P)
- P4P – ISPs and P2P traffic work together
- [Figure: Internet traffic distribution, 2006]
P2P Applications
- Many P2P applications since the 1990s
  - File sharing: Napster, Gnutella, KaZaa, BitTorrent
  - Internet telephony: Skype
  - Internet television: PPLive, CoolStreaming
PPLive Current Viewers during the 2008 Olympics [figure]
Case Study: PPLive
- Very popular P2P IPTV application
- From Huazhong U. of Science and Technology, China
- Free for viewers
- Over 100,000 simultaneous viewers and 400,000 viewers daily
- Over 200 channels
- Windows Media Video and Real Video formats
PPLive Overview [figure]
PPLive Design Characteristics
- Gossip-based protocols
  - Peer management
  - Channel discovery
  - TCP used for signaling
- Data-driven P2P streaming
  - TCP used for video streaming
  - A peer client contacts multiple active peers to download the media content of the channel
  - Cached content can be uploaded from a client peer to other peers watching the same channel
  - Received video chunks are reassembled in order and buffered in the queue of the PPLive TV Engine for local streaming (see the sketch below)
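A small illustrative sketch of the "reassemble in order, then buffer" step described above; this is not PPLive's actual code, just one plausible way to release only a contiguous prefix of chunks to the local streaming queue.

```python
from collections import deque

# Illustrative reassembly buffer: out-of-order chunks from different peers are
# held until the next expected chunk arrives, then released in order to the
# local streaming queue (as the PPLive TV Engine is described to do).

class ReassemblyBuffer:
    def __init__(self):
        self.next_id = 0           # next chunk id the player needs
        self.pending = {}          # chunk_id -> data, waiting for the gap to fill
        self.play_queue = deque()  # in-order chunks ready for the media player

    def add_chunk(self, chunk_id, data):
        if chunk_id < self.next_id:
            return                 # duplicate or late chunk, drop it
        self.pending[chunk_id] = data
        while self.next_id in self.pending:        # release contiguous prefix
            self.play_queue.append(self.pending.pop(self.next_id))
            self.next_id += 1

buf = ReassemblyBuffer()
buf.add_chunk(1, b"B")             # arrives early, waits for chunk 0
buf.add_chunk(0, b"A")             # gap filled -> both released in order
print(list(buf.play_queue))        # [b'A', b'B']
```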
PPLive Architecture
1. Contact the channel server for available channels
2. Retrieve the list of peers watching the selected channel
3. Find active peers on the channel to share video chunks
Source: "Insights into PPLive: A Measurement Study of a Large-Scale P2P IPTV System" by Hei et al.
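The three numbered steps could be sketched roughly as follows. Every class, address, and field name here is hypothetical; PPLive's real protocol is proprietary and was only inferred through measurement.

```python
# Hypothetical sketch of the three join steps shown above; all data is invented.

class ChannelServer:
    def list_channels(self):
        # step 1: the channel server returns the available channels
        return [{"id": 7, "name": "CCTV3"}, {"id": 9, "name": "CCTV5"}]

class PeerTracker:
    def peers_for(self, channel_id):
        # step 2: the tracker returns peers currently watching this channel
        return ["10.0.0.5:8010", "10.0.0.9:8010", "10.0.0.12:8010"]

def probe(peer_addr):
    # step 3 helper: in a real client this would be a TCP handshake;
    # here we simply pretend every peer answers.
    return True

def join_channel(wanted_name="CCTV3"):
    channels = ChannelServer().list_channels()                # step 1
    channel = next(c for c in channels if c["name"] == wanted_name)
    peers = PeerTracker().peers_for(channel["id"])            # step 2
    active = [p for p in peers if probe(p)]                   # step 3
    return channel, active

print(join_channel())   # ({'id': 7, 'name': 'CCTV3'}, ['10.0.0.5:8010', ...])
```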
P2P Streaming Process
- TV Engine – responsible for downloading video chunks from the PPLive network and streaming the downloaded video to the local media player
Download and Upload Video Rate over Time at CCTV3, Campus [measurement figure]
Evolution of Active Video Peer Connections on CCTV3 Network [measurement figure]
PPLive Channel Size Analysis [measurement figure]
Conclusion – A Couple of Lessons Learned
- The structure of the PPLive overlay is close to random
- PPLive peers slightly prefer closer neighbors, and peers can participate in multiple overlays simultaneously – this improves streaming quality
- Geometrically distributed session lengths of nodes can be used to accurately model node arrival and departure (see the sketch below)
- Major differences between PPLive overlays and P2P file-sharing overlays!
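A tiny simulation hinting at how geometric session lengths can drive a churn model; the per-minute departure probability P_LEAVE is an assumed value, not taken from the measurement study.

```python
import random

# Toy churn model: each peer's session length (in minutes) is drawn from a
# geometric distribution, as the lessons above suggest.

P_LEAVE = 0.02   # assumed per-minute probability that a peer departs

def sample_session_length(p=P_LEAVE):
    """Minutes until departure; geometric with success probability p."""
    minutes = 1
    while random.random() > p:
        minutes += 1
    return minutes

sessions = [sample_session_length() for _ in range(5)]
print(sessions)   # five sampled session lengths; long-run mean is 1/p = 50 min
```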
Background
- Large-scale video broadcast over the Internet (Internet TV such as PPLive, YouTube)
- Real-time video streaming
- Need to support large numbers of viewers
  - AOL Live 8 broadcast peaked at 175,000 viewers (July 2005)
  - CBS NCAA broadcast peaked at 268,000 viewers (March 2006)
  - NBC served a total of 75.5 million streams of the 2008 Olympic Games
  - BBC served almost 40 million streams of the 2008 Olympic Games
- Very high data rate
  - TV-quality video encoded with MPEG-4 (roughly 1.5 Mbps per stream) requires about 1.5 Tbps of aggregate capacity per million viewers, i.e. on the order of 150 Tbps for 100 million viewers (see the calculation below)
  - NFL Super Bowl 2007 had 93 million viewers in the U.S. (Nielsen Media Research)
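As a back-of-the-envelope check of the aggregate-capacity bullet, assuming roughly 1.5 Mb/s per MPEG-4 TV-quality stream (an assumed typical rate, not stated explicitly on the slide):

```latex
% Aggregate capacity = per-stream rate times number of simultaneous viewers
\[
  C = r \cdot N = 1.5~\text{Mb/s} \times 10^{6}~\text{viewers}
    = 1.5~\text{Tb/s (per million viewers)},
\]
\[
  \text{so for } N = 10^{8}~\text{viewers:}\quad
  C = 1.5~\text{Mb/s} \times 10^{8} \approx 150~\text{Tb/s}.
\]
```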
Reading
- "Opportunities and Challenges of Peer-to-Peer Internet Video Broadcast" by Liu et al.
- "Insights into PPLive: A Measurement Study of a Large-Scale P2P IPTV System" by Hei et al.
- "Mapping the PPLive Network: Studying the Impacts of Media Streaming on P2P Overlays" by Vu et al.
Some lecture material borrowed from the following sources:
- Sanjay Rao's lecture on P2P multicast in his ECE 695B course at Purdue
- "Insights into PPLive: A Measurement Study of a Large-Scale P2P IPTV System" by Hei et al.
- "Mapping the PPLive Network: Studying the Impacts of Media Streaming on P2P Overlays" by Vu et al.