Peer-to-Peer Networks (3) - IPTV Hongli Luo CEIT, IPFW.


Internet Video Broadcasting r References: m “Opportunities and Challenges of Peer-to-Peer Internet Video Broadcast” by Liu et al. m “Insights into PPLive: A Measurement Study of a Large- Scale P2P IPTV System” by Hei et al.

Background r Large-scale video broadcast over Internet m Real-time video streaming m Applications: Internet TV Broadcast of sports events Online games Distance education m Need to support large numbers of viewers AOL Live 8 broadcast peaked at 175,000 (July 2005) CBS NCAA broadcast peaked at 268,000 (March 2006) m Very high data rate TV quality video encoded with MPEG-4 would require 1.5 Tbps aggregate capacity for 100 million viewers NFL Superbowl 2007 had 93 million viewers in the U.S. (Nielsen Media Research)

Possible Solutions r Broadcasting is straightforward over the air, in cable networks, or in local area networks r Possible solutions for broadcasting over the Internet m Single server - unicast m IP multicast m Multicast overlay networks m Content delivery networks (CDNs) m Application end points (pure P2P)

Single Server r Application-layer solution m A single media server unicasts to all clients r Needs very high capacity to serve a large number of clients m CPU m Main memory m Bandwidth r Impractical for millions of simultaneous viewers
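A quick back-of-envelope sketch of why single-server unicast does not scale: the server must send one full copy of the stream per viewer, so aggregate bandwidth grows linearly with audience size. The 400 kbps stream rate and the audience sizes below are illustrative assumptions, not figures from the slides.

```python
# Back-of-envelope server bandwidth for single-server unicast streaming.
# Bitrate and audience sizes are illustrative assumptions.

def aggregate_bandwidth_bps(viewers: int, bitrate_bps: float) -> float:
    """With unicast, the server sends one full copy of the stream per viewer."""
    return viewers * bitrate_bps

for viewers in (1_000, 100_000, 1_000_000):
    total = aggregate_bandwidth_bps(viewers, 400e3)  # assumed 400 kbps stream
    print(f"{viewers:>9} viewers -> {total / 1e9:6.1f} Gbps")
```

Even at a modest 400 kbps, a million viewers would require 400 Gbps from a single server, which motivates multicast, CDNs, and P2P.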

IP Multicast r Network-layer solution m Routers responsible for multicasting r Efficient bandwidth usage r Requires per-group state in routers m High complexity m Scalability concern m Violates end-to-end design principle

IP Multicast r Unicast vs. multicast (figure: with unicast, the server S sends a separate copy of the stream to each client C; with multicast, a single copy is sent into the multicast group and routers replicate it toward the clients)

IP Multicast r End-to-end design principle: a functionality should be m pushed to higher layers if possible, unless m implementing it at the lower layer achieves significant performance benefits that outweigh the cost of the additional complexity r Slow deployment m IP multicast is often disabled in routers r Difficult to support higher-layer functionality m Error control, flow control, and congestion control r Needs changes at the infrastructure level

IP Multicast: the "smart network" approach (figure: routers maintain per-group state to reach receivers at Berkeley, Gatech, and Stanford). Source: Sanjay Rao's lecture, Purdue

Multicast Overlay Network r Consists of user hosts and possibly dedicated servers scattered throughout the Internet r The hosts, servers, and the logical links between them form an overlay network, which multicasts traffic from the source to the users r In IP multicast, routers are responsible for forwarding packets and applications run only on end systems r With overlays, applications can make their own forwarding decisions r A logical network implemented on top of a physical network m Consists of application-layer links m An application-layer link is a logical link consisting of one or more links in the underlying network r Each node in the overlay processes and forwards packets in an application-specific way r Used by both CDNs and pure P2P systems

MBone r The multicast backbone (MBone) is an overlay network that implements IP multicast r The MBone was an experimental backbone for IP multicast traffic across the Internet r It connects multicast-capable networks over the existing Internet infrastructure r One popular application running on top of the MBone is Vic m Supports multiparty videoconferencing m Used to broadcast seminars and meetings across the Internet, e.g., IETF meetings

Content distribution networks (CDNs) Content replication r challenging to stream large files (e.g., video) from a single origin server in real time r solution: replicate content at hundreds of servers throughout the Internet m content is downloaded to CDN servers ahead of time m placing content "close" to the user avoids the impairments (loss, delay) of sending content over long paths m a CDN server is typically in an edge/access network (figure: an origin server in North America, a CDN distribution node, and CDN servers in S. America, Europe, and Asia)

Content distribution networks (CDNs) Content replication r the CDN (e.g., Akamai) customer is the content provider (e.g., CNN) r the CDN places CDN servers close to ISP access networks and the clients r the CDN replicates customers' content on its servers r when the provider updates content, the CDN updates its servers

Content distribution networks (CDNs) r When a client requests content, the content is provided by the CDN server that can best deliver it to that specific client m the CDN server closest to the client m a CDN server with a congestion-free path to the client r A CDN server typically contains objects from many content providers

CDN example r The origin server distributes the HTML page and rewrites the references to embedded objects so that they point at the CDN r The client's fetch sequence: an HTTP request for the page to the origin server, a DNS query to the CDN's authoritative DNS server, then an HTTP request for the object to a CDN server near the client r The CDN company (cdn.com) distributes the gif files and uses its authoritative DNS server to redirect requests

More about CDNs: routing requests r CDNs use DNS redirection to guide browsers to the correct server r The browser's DNS lookup is forwarded to the CDN's authoritative DNS server r The CDN's DNS server returns the IP address of the CDN server that is likely best for the requesting browser r When the query arrives at the authoritative DNS server: m the server determines the ISP from which the query originates m it uses a "map" to determine the best CDN server r CDN nodes create an application-layer overlay network r CDN goal: bring content closer to clients
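The "map" step above can be sketched as a lookup table keyed by the ISP or region the query comes from. Real CDN request routing is far more elaborate (load, latency, path measurements); the mapping and server names below are invented purely for illustration.

```python
# Minimal sketch of CDN DNS redirection: the authoritative DNS server picks a
# CDN server based on the querying resolver's ISP/region.
# SERVER_MAP entries and hostnames are hypothetical, not any real CDN's data.

SERVER_MAP = {
    "isp-na":   "cdn-server-na.example.com",
    "isp-eu":   "cdn-server-eu.example.com",
    "isp-asia": "cdn-server-asia.example.com",
}
DEFAULT_SERVER = "cdn-server-na.example.com"

def resolve(client_isp: str) -> str:
    """Return the CDN server a DNS query from this ISP is redirected to."""
    return SERVER_MAP.get(client_isp, DEFAULT_SERVER)

print(resolve("isp-eu"))   # cdn-server-eu.example.com
print(resolve("unknown"))  # falls back to the default server
```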

Why P2P? r Problems with the previous approaches m Sparse deployment of IP multicast m High bandwidth costs for server-based unicast and CDNs m These limit video broadcasting to a subset of Internet content publishers

Why P2P? r Every node is both a server and a client m Easier to deploy applications at endpoints m No need to build and maintain expensive routers or expensive infrastructure m Potential for both performance improvement and additional robustness m Additional clients create additional servers, aiding scalability r Performance penalty m Cannot completely prevent multiple overlay edges from traversing the same physical link Redundant traffic on physical links m Increased latency

Peer-to-peer Video Broadcasting r Characteristics of video broadcasting m Large scale, corresponding to tens of thousands of users simultaneously participating in the broadcast. m Performance-demanding, involving bandwidth requirements of hundreds of kilobits per second or more. m Real-time constraints, requiring timely and continuous streaming delivery. While interactivity may not be critical and minor delays can be tolerated through buffering, it is critical that the video play uninterrupted. m Gracefully degradable quality, enabling adaptive and flexible delivery that accommodates bandwidth heterogeneity and dynamics.

Peer-to-peer Video Broadcasting r Stringent real-time performance requirements m Bandwidth and latency r On-demand streaming - users are asynchronous r Audio/video conferencing - interactive; latency is more critical r File downloading m No time constraint; segments of content can arrive out of order m Needs efficient indexing and search r P2P video broadcasting m Must simultaneously support a large number of participants m Dynamic changes to participant membership m High bandwidth requirement of the video m Needs efficient data communication

Overlay construction r Criteria: m Overlay efficiency m Scalability and load balancing m Self-organization m Honor per-node bandwidth constraints m System considerations r Approaches m Tree-based m Data-driven randomized

P2P Overlay r Tree-based m Peers are organized into trees for delivering data m Each packet is disseminated using the same structure m Parent-child relationships m Push-based When a node receives a data packet, it forwards copies of the packet to each of its children. m Failure of nodes results in poor transient performance. m Uplink bandwidth is not utilized at leaf nodes Data can be divided and disseminated along multiple trees (e.g., SplitStream) m The structure should be optimized to offer good performance to all receivers. m Must be repaired and maintained to avoid interruptions m Example: End System Multicast (ESM)
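The push-based rule above ("forward a copy to each child") can be sketched as a breadth-first walk of the tree. The tree shape and node names below are illustrative, not from any particular system.

```python
# Sketch of push-based dissemination over a multicast tree: each node forwards
# the packet to all of its children. Tree shape and node names are invented.

from collections import deque

# children[node] lists that node's children in the overlay tree.
children = {
    "source": ["A", "B"],
    "A": ["C", "D"],
    "B": ["E"],
    "C": [], "D": [], "E": [],
}

def push(packet: str) -> list:
    """Return the order in which nodes receive the packet (BFS from the source)."""
    order, queue = [], deque(["source"])
    while queue:
        node = queue.popleft()
        order.append(node)            # node has received the packet
        for child in children[node]:  # forward one copy per child
            queue.append(child)
    return order

print(push("chunk-0"))  # ['source', 'A', 'B', 'C', 'D', 'E']
```

Note how a leaf (C, D, E) forwards nothing, which is exactly the uplink under-utilization the slide mentions; SplitStream-style designs fix this by making every node an interior node in at least one of several trees.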

P2P Overlay r Data-driven m Do not construct and maintain an explicit structure for delivering data m Use gossip algorithms to distribute data A node sends a newly generated message to a set of randomly selected nodes. These nodes do the same in the next round, and so do other nodes, until the message has spread to all. m Pull-based Nodes maintain a set of partners Periodically exchange data availability with random partners and retrieve new data Redundancy is avoided since a node pulls data only if it does not already have it. m Similar to BitTorrent, but must consider real-time constraints A scheduling algorithm schedules the segments that must be downloaded to meet the playback deadlines m Examples: CoolStreaming, PPLive

Tree-based vs. Data-driven r Data-driven m Simple m Suffers from a latency-overhead trade-off r Tree-based m No latency-overhead trade-off m Instability m Bandwidth under-utilization r A combination of both is possible

P2P live streaming systems r CoolStreaming m X. Zhang, J. Liu, B. Li, and T. S. P. Yum. CoolStreaming/DONet: A data-driven overlay network for efficient live media streaming. In Proceedings of IEEE INFOCOM'05, March 2005. r PPLive r PPStream r UUSee r AnySee m X. Liao, H. Jin, Y. Liu, L. M. Ni, and D. Deng. AnySee: Peer-to-peer live streaming. In Proceedings of IEEE INFOCOM'06, April 2006. r Joost

Case Study: PPLive r PPLive: a free P2P-based IPTV service r As of January 2006, the PPLive network provided m 200+ channels m with 400,000 daily users on average m typically over 100,000 simultaneous users r It now covers over 120 Chinese TV stations, 300 live channels, and 20,000 VOD (video-on-demand) channels (from Wikipedia) r The company claims more than 200 million user installations and a 105-million active monthly user base (as of Dec 2010) (from Wikipedia) r The bit rates of video programs mainly range from 250 Kbps to 400 Kbps, with a few channels as high as 800 Kbps r The channels are encoded in two video formats: Windows Media Video (WMV) or Real Video (RMVB) r The encoded video content is divided into chunks and distributed to users through the PPLive P2P network r Employs proprietary signaling and video delivery protocols

Case Study: PPLive r BitTorrent itself is not a feasible video delivery architecture m It does not account for the real-time needs of IPTV r PPLive bears strong similarities to BitTorrent m Each video chunk has a playback deadline m No reciprocity mechanism is deployed to encourage sharing between peers r Two major application-level protocols m A gossip-based protocol Peer management Channel discovery m A P2P-based video distribution protocol High-quality video streaming r Data-driven P2P streaming

Case Study: PPLive 1. The user starts the PPLive software and becomes a peer node. 2. Contact the channel server for the list of available channels. 3. Select a channel. 4. Send a query to the root server to retrieve an online peer list for this channel. 5. Find active peers on the channel to share video chunks with. Channel and peer discovery from "Insights into PPLive: A Measurement Study of a Large-Scale P2P IPTV System" by Hei et al.
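The join steps above can be sketched as a short control flow. PPLive's real protocol is proprietary, so the server interfaces, peer records, and return values below are entirely hypothetical stand-ins.

```python
# Hypothetical sketch of the PPLive channel/peer discovery steps (the actual
# protocol is proprietary; these stubs and field names are invented).

def join_channel(channel_server, root_server, channel_name: str) -> list:
    channels = channel_server()               # step 2: fetch channel list
    assert channel_name in channels           # step 3: user selects a channel
    peers = root_server(channel_name)         # step 4: get online peer list
    return [p for p in peers if p["active"]]  # step 5: keep active peers

# Stub servers standing in for the real infrastructure.
channel_server = lambda: ["CCTV3", "CCTV10"]
root_server = lambda ch: [{"addr": "peer1", "active": True},
                          {"addr": "peer2", "active": False}]

print(join_channel(channel_server, root_server, "CCTV3"))
```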

r TV Engine m Downloads video chunks from the PPLive network m Streams the downloaded video to a local video player r The streaming process traverses two buffers m the PPLive TV engine buffer m the media player buffer r Cached content can be uploaded to other peers watching the same channel r A peer may upload cached video chunks to multiple peers r A peer may also download media content from multiple active peers r Received video chunks are reassembled in order and buffered in the queue of the PPLive TV engine, forming a local streaming file in memory

r When the streaming file length crosses a predefined threshold, the PPLive TV engine launches the media player, which downloads video content from the local HTTP streaming server. r After the buffer of the media player fills up to the required level, the actual video playback starts. r When PPLive starts, the PPLive TV engine downloads media content from peers aggressively to minimize playback start-up delay. m PPLive uses TCP for both signaling and video streaming r When the media player has received enough content and starts to play the media, the streaming process gradually stabilizes. r The PPLive TV engine then streams data to the media player at the media playback rate.
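The two-threshold start-up behaviour above can be sketched as a tiny state function: the player is launched once the engine buffer crosses one threshold, and playback starts once the player buffer reaches another. The threshold values are assumptions chosen for illustration; the measurement study does not publish PPLive's actual thresholds.

```python
# Sketch of PPLive's two-buffer start-up sequence. Threshold values are
# assumed for illustration, not PPLive's real (proprietary) parameters.

ENGINE_THRESHOLD = 40  # chunks in the engine buffer before launching the player
PLAYER_THRESHOLD = 20  # chunks in the player buffer before playback starts

def startup_state(engine_chunks: int, player_chunks: int) -> str:
    if engine_chunks < ENGINE_THRESHOLD:
        return "buffering (player not yet launched)"
    if player_chunks < PLAYER_THRESHOLD:
        return "player launched, filling player buffer"
    return "playing"

print(startup_state(10, 0))    # engine still filling
print(startup_state(45, 5))    # player launched, not yet playing
print(startup_state(45, 25))   # playback underway
```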

Measurement setup r One residential and one campus PC "watched" channel CCTV3 r The other residential and campus PC "watched" channel CCTV10 r Each of these four traces lasted about 2 hours r According to the PPLive web site, CCTV3 is a popular channel with a 5-star popularity grade and CCTV10 is less popular with a 3-star popularity grade

Start-up delays r A peer searches for peers and downloads data from active peers. r Two types of start-up delay: m the delay from when a channel is selected until the streaming player pops up; m the delay from when the player pops up until the playback actually starts. r The player pop-up delay is in general seconds r The player buffering delay is around seconds. r Therefore, the total start-up delay is around seconds. r Nevertheless, some less popular channels have a total start-up delay of up to 2 minutes. r Overall, PPLive exhibits a reasonably good start-up user experience

Video Traffic Redundancy r It is possible to download the same video blocks more than once r Transmission of redundant video is a waste of network bandwidth r Define the redundancy ratio as the ratio between the redundant traffic and the estimated media segment size r The traffic redundancy in PPLive is limited m partially due to the long buffer time period m peers have enough time to locate peers in the same channel and exchange content-availability information
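The redundancy ratio defined above is straightforward to compute from trace data: everything downloaded beyond the media segment itself counts as redundant. The byte counts in the example are illustrative, not measurements from the paper.

```python
# Redundancy ratio as defined above: redundant traffic divided by the
# estimated media segment size. Byte counts below are illustrative.

def redundancy_ratio(total_download_bytes: int, media_bytes: int) -> float:
    """Redundant traffic = everything downloaded beyond the media itself."""
    redundant = total_download_bytes - media_bytes
    return redundant / media_bytes

# e.g. 105 MB downloaded for a 100 MB segment -> 5% redundancy
print(f"{redundancy_ratio(105_000_000, 100_000_000):.2%}")
```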

Video Buffering r Estimated size of the media player buffer: m at least 5.37 MBytes r Estimated size of the PPLive engine buffer: m 7.8 MBytes to 17.1 MBytes r The total buffer size in PPLive streaming m MBytes r A commodity PC can easily meet this buffer requirement

PPLive Peering Statistics r A campus peer has many more active video peer neighbors than a residential peer, due to its high-bandwidth access network r A campus peer maintains a steady number of active TCP connections for video traffic exchange r Peers on less popular channels have difficulty finding enough peers for streaming the media r If the number of active video peers drops, a peer searches for new peers for additional video download r A peer constantly changes its upload and download neighbors