Slide 1: Placement of Continuous Media in Wireless Peer-to-Peer Networks
Shahram Ghandeharizadeh, Bhaskar Krishnamachari, Shanshan Song, IEEE Transactions on Multimedia, vol. 6, no. 2, April 2004.
Presentation by Tony Sung, MC Lab, IE CUHK, 10th November 2003.
Slide 2: Outline
- The "H2O" framework
- A novel data placement and replication strategy
- Modeling (topology) and performance analysis
- Two distributed implementations
- Conclusion, discussion and future work
Slide 3: The "H2O" Framework
- H2O: home-to-home online.
- A number of devices connected through wireless channels.
- Complements the existing wired infrastructure (such as the Internet) and provides data services to individual households.
- Implementing VoD over H2O: a household may store its personal video library on an H2O cloud.
- Each device may act in three possible roles: a producer of data; an active client that is displaying data; a router that delivers data from a producer to a consumer.
Slide 4: The "H2O" Framework - Media Retrieval
- The first block of a clip may have to make multiple hops to arrive.
- A portion of the video must be prefetched before playback starts, to compensate for network bandwidth fluctuations.
- How can the startup latency be minimized?
Slide 5: The "H2O" Framework - Caching
- One extreme: full replication. Startup latency is minimized, but even if bandwidth is not a limiting factor, the storage requirement is tremendous.
- A novel approach: stripe the video clip into blocks, and replicate blocks that have a later playback time less often across the system. Startup latency is still minimized.
Slide 6: A Novel Data Placement & Replication Strategy - Parameters
- Video clip X is divided into z equal-sized blocks b_i of size S_b.
- Playback duration of a block: D = S_b / B_Display.
- Playback time of b_i: (i - 1)D.
- Let h be the time to transmit a block across one hop.
- b_i can be placed at most H_i hops away, under the constraint H_i = (i - 1)D / h (rounded down to a whole number of hops).
- Assumptions: CBR media; h is a fixed constant; B_Link > B_Display.
- b_1 must be placed on all nodes in the network; each b_i with 1 < i <= z can be placed less often to save storage.
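A minimal sketch of the H_i computation (assuming H_i is rounded down to a whole hop count; parameter names are illustrative):

```python
def delay_tolerance(i, block_mb, display_mbps, hop_time_s):
    """Delay tolerance H_i: max hops block b_i may sit from any client.

    For i = 1 this gives H_1 = 0, i.e. b_1 must be stored locally.
    """
    d = (block_mb * 8) / display_mbps      # playback duration D = S_b / B_Display (s)
    return int((i - 1) * d / hop_time_s)   # H_i = floor((i - 1) * D / h)
```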
Slide 7: A Novel Data Placement & Replication Strategy - Core Strategy
1. Divide the clip into z equal-sized blocks b_i, each S_b in size.
2. Place b_1 on all nodes in the network.
3. For each b_i with 1 <= i <= z, compute its delay tolerance H_i.
4. Based on H_i, compute r_i, the total number of replicas required for block b_i. Note that r_i and its computation are topology dependent, and r_i decreases monotonically with i until it reaches 1.
5. Construct r_i replicas of block b_i and place each copy in the network while ensuring that every node has at least one copy of b_i no more than H_i hops away.
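A sketch of the five steps as code; `replicas_needed` and `choose_hosts` are placeholders for the topology-dependent pieces derived on the following slides, not functions from the paper:

```python
def place_clip(blocks, nodes, delay_tolerance, replicas_needed, choose_hosts):
    """Core placement strategy; `blocks` is the result of step 1 (striping).

    replicas_needed(h) -> r_i is topology dependent; choose_hosts(r, h)
    must return r hosts such that every node in the network is within
    h hops of at least one of them.
    """
    for node in nodes:                        # step 2: b_1 on all nodes
        node.store(blocks[0])
    for i in range(2, len(blocks) + 1):
        h_i = delay_tolerance(i)              # step 3: delay tolerance H_i
        r_i = max(replicas_needed(h_i), 1)    # step 4: r_i, decreasing to 1
        for host in choose_hosts(r_i, h_i):   # step 5: place the replicas
            host.store(blocks[i - 1])
```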
Slide 8: Modeling and Performance Analysis
- Analysis of r_i for three different topologies: worst-case linear topology; grid topology; average-case graph topology.
- Performance is measured as the percentage savings over full replication.
Slide 9: Modeling and Performance Analysis - Worst-Case Linear Topology
- N devices organized in a linear fashion.
- In the worst-case scenario, b_i must be replicated r_i = N - H_i times, and a block takes N - r_i - 1 hops to reach its destination.
- If r_i is non-positive, it is reset to 1. This is equivalent to not replicating blocks whose index exceeds U_r = ((N - 1)h) / D + 1.
- Giving total storage: S_total = S_b * sum over i = 1..z of max(N - H_i, 1).
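A sketch of the worst-case linear computation, combining r_i = max(N - H_i, 1) with the storage sum above (parameter names are illustrative):

```python
def linear_storage_mb(n_nodes, z, block_mb, display_mbps, hop_time_s):
    """Worst-case total storage (MB) for a linear chain of n_nodes devices."""
    d = (block_mb * 8) / display_mbps        # D = S_b / B_Display (s)
    total = 0.0
    for i in range(1, z + 1):
        h_i = int((i - 1) * d / hop_time_s)  # delay tolerance H_i
        r_i = max(n_nodes - h_i, 1)          # non-positive r_i reset to 1
        total += r_i * block_mb
    return total
```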
Slide 10: Modeling and Performance Analysis - Worst-Case Linear Topology
- Parameters: N = 1000, h = 0.5 s, B_Display = 4 Mbps, S_b = 1 MB, D = 2 s.
- [Figure: worst-case storage requirement.]
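Plugging the slide's parameters into the sketch above, with an assumed two-hour clip (z = 3600 two-second blocks; the slide does not state the clip length):

```python
# U_r = ((1000 - 1) * 0.5) / 2 + 1 = 250.75, so blocks beyond roughly the
# 250th are stored only once in the network.
total = linear_storage_mb(n_nodes=1000, z=3600, block_mb=1.0,
                          display_mbps=4.0, hop_time_s=0.5)
```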
Slide 11: Modeling and Performance Analysis - Grid Topology
- N devices organized in a square grid of fixed area; each node neighbors only the four nodes in the cardinal directions.
- Expressions for the number of replicas r_i and the expected total storage are given. [Equations not recovered from the slide; see the sketch below.]
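Since the slide's equations were lost, here is a hedged sketch based on the standard fact that a 4-connected grid node has 2H(H + 1) + 1 nodes within H hops; treating each replica as covering one such neighborhood gives an approximation, not necessarily the paper's exact expression:

```python
import math

def grid_replicas(n_nodes, h_i):
    """Approximate r_i for a square 4-connected grid (assumed model).

    The H-hop (von Neumann) neighborhood of a node contains
    2H(H + 1) + 1 nodes, so roughly n_nodes / that many replicas
    are enough for every node to be within h_i hops of a copy.
    """
    if h_i == 0:
        return n_nodes                        # b_1: a copy on every node
    neighborhood = 2 * h_i * (h_i + 1) + 1
    return max(math.ceil(n_nodes / neighborhood), 1)
```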
Slide 12: Modeling and Performance Analysis - Grid Topology
- [Figures: expected storage for a 2-minute clip and a 2-hour clip.]
- Effect of block size: decreasing S_b decreases storage but increases complexity.
- Savings depend only on h, and are independent of N and S_C.
Slide 13: Modeling and Performance Analysis - Average-Case Graph Topology
- N devices scattered randomly in a fixed area A, with radio range R for each node.
- For any given node, the expected number of nodes within H_i hops lies between two bounds, where γ is a density-dependent correction factor between 0 and 1 (approaching 1 when the network is dense and nodes are distributed evenly).
- Using the upper bound, the number of replicas and the expected total storage follow. [Equations not recovered from the slide; see the sketch below.]
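A hedged sketch of the average-case estimate: the H_i-hop neighborhood is modeled as a disc of radius H_i * R, scaled by the correction factor γ. The exact bounds on the slide were lost, so this is a reading of the derivation rather than the paper's formula:

```python
import math

def graph_replicas(n_nodes, area, radio_range, h_i, gamma=1.0):
    """Approximate r_i for a random graph topology (assumed model).

    Expected nodes within h_i hops ~ n_nodes * gamma * pi * (h_i*R)^2 / area,
    so r_i ~ area / (gamma * pi * (h_i*R)^2) -- notably independent of N,
    which matches the independence-of-N observation on slide 12.
    """
    if h_i == 0:
        return n_nodes                        # b_1 is stored on every node
    disc = gamma * math.pi * (h_i * radio_range) ** 2
    return min(max(math.ceil(area / disc), 1), n_nodes)
```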
Slide 14: Modeling and Performance Analysis - Comparison
- [Figure: percentage savings across the three topologies; with 250 devices, savings reach 97%, and remain above 80%.]
Slide 15: Distributed Implementations - TIMER and ZONE
- Both control the placement of r_i copies of each b_i with the following objective: each node in the network is within H_i hops of at least one copy of b_i.
- General framework: the publishing node H2O_p computes the block size S_b, the number of blocks z, and the required hop bound H_i, using the previous expressions.
- H2O_p floods the network with a message containing this information; each recipient H2O_j computes a binary array A_j that signifies which blocks to host, and retrieves a copy of those blocks from H2O_p.
- TIMER and ZONE differ in how A_j is computed.
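A sketch of the flooded publish message and the per-node state it sets up (field and function names are illustrative; the slide only specifies what information is carried):

```python
from dataclasses import dataclass

@dataclass
class PublishMessage:
    """Flooded by the publishing node H2O_p."""
    clip_id: str
    block_size_mb: float        # S_b
    num_blocks: int             # z
    hop_bounds: list            # H_i for i = 1..z

def on_publish(node, msg):
    # Each recipient H2O_j builds a binary array A_j; A_j[i] is True when
    # this node should host block b_{i+1}. TIMER and ZONE fill it in
    # differently; blocks marked True are then fetched from H2O_p.
    node.A = [False] * msg.num_blocks
    node.A[0] = True            # b_1 is hosted by every node
```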
Slide 16: Distributed Implementations - TIMER
- A distributed timer-suppress algorithm.
- When H2O_j receives the flooded query message, it performs z rounds of "elections", one for each block, to determine whether to maintain a copy of block b_i.
- In each round, every node picks a random timer value from 1 to M and starts counting down. When its timer reaches 0 and the node is not already suppressed, it elects itself to store a copy of the block and sends a suppress message to all nodes within H_i hops.
- At the end of a round, every node is either elected or suppressed, and every node is guaranteed to be within H_i hops of an elected node. A sketch follows below.
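A minimal sketch of one TIMER election round, simulated synchronously (the real protocol is asynchronous and message driven); `within_hops` stands in for the hop-limited suppress flood:

```python
import random

def timer_election(nodes, h_i, within_hops, M=100):
    """One election round for block b_i; returns the set of elected nodes.

    within_hops(a, b, h) -> True when node b is within h hops of node a.
    Nodes "fire" in increasing order of their random timer value.
    """
    timers = {n: random.randint(1, M) for n in nodes}
    suppressed, elected = set(), set()
    for node in sorted(nodes, key=timers.get):
        if node not in suppressed:
            elected.add(node)                 # timer expired, not suppressed
            for other in nodes:               # suppress the H_i-hop neighborhood
                if within_hops(node, other, h_i):
                    suppressed.add(other)
    return elected
```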
Slide 17: Distributed Implementations - ZONE
- Assumes the existence of nodes with geopositioning information.
- Assume all nodes fit within an S x S area. For each b_i, divide the area into s_i x s_i squares such that each square fits within a circle of radius H_i * R, where R is the radio range.
- It can be shown that s_i = sqrt(2) * γ * H_i * R, where γ <= 1 is a correction factor that depends on node density.
- A copy of b_i is placed near the center of each square.
Slide 18: Distributed Implementations - ZONE
- z rounds of elections, one for each block.
- All nodes determine which zone they belong to, based on H_i.
- For each zone, elect the node that is closest to the zone center using a distributed leader-election protocol (such as FloodMax); a sketch follows below.
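A sketch of one ZONE round, assuming each node knows its (x, y) coordinates in the S x S area, and abstracting the per-zone FloodMax-style election as picking the node nearest the zone center:

```python
import math

def zone_election(positions, h_i, radio_range, gamma=1.0):
    """One ZONE round for block b_i (h_i >= 1; b_1 is stored everywhere).

    positions: {node: (x, y)}. Zone side s_i = sqrt(2) * gamma * H_i * R,
    so each zone fits inside a circle of radius H_i * R.
    Returns the elected node of each zone (closest to the zone center).
    """
    s_i = math.sqrt(2) * gamma * h_i * radio_range
    best = {}                                  # (row, col) -> (dist, node)
    for node, (x, y) in positions.items():
        row, col = int(y // s_i), int(x // s_i)
        cx, cy = (col + 0.5) * s_i, (row + 0.5) * s_i
        dist = math.hypot(x - cx, y - cy)      # distance to zone center
        if (row, col) not in best or dist < best[(row, col)][0]:
            best[(row, col)] = (dist, node)
    return {node for _, node in best.values()}
```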
Slide 19: Distributed Implementations - Comparison
- Both distributed algorithms require a few more replicas per block than predicted analytically, due to the border effect.
- The resulting storage is, however, still orders of magnitude smaller than that of full replication.
Slide 20: Distributed Implementations - Comparison
- Block distribution across H2O devices is uniform with TIMER.
- ZONE favors placing blocks with a large H_i towards the center of a zone, but results are not provided.
Slide 21: Conclusion
- Explored a novel architecture in which collaborating H2O devices provide VoD with minimal startup latency.
- A replication technique that replicates the first few blocks of a clip more frequently, because they are needed more urgently.
- Quantified the impact of different H2O topologies on the storage space requirement.
- Proposed two distributed implementations of the placement and replication technique.
Slide 22: Discussion and Future Work
- Assumption of a fixed h: in wireless ad hoc networks, h is a function of the number of transmitting devices. Admission control and transmission scheduling should be added to address the variability of h.
- The scheme can be extended to adjust data placement when a user requests a clip.
- Enable the H2O cloud to respond to user actions such as the removal of a device.
- Consider bandwidth constraints.