1
Coded Caching in Information-Centric Networks
Zhuoqun Chen 陈卓群 SJTU June 2015
2
Coded Caching + ICN?
3
Coded Caching + ICN
4
Roadmap
1. Coded Caching
2. Decentralized Coded Caching
3. Nonuniform Demands
4. Delay-Sensitive Requests
5. Coded Caching in ICN
5
Uncoded Caching - Least Frequently Used
N=2 files, K=1 user, cache size M=1; file popularities P_A = 2/3, P_B = 1/3.
- Populate the cache during low-traffic periods; cache the most popular file(s).
- The average delivery rate equals the miss rate: E[R] = P_B = 1/3.
- LFU is optimum for a single cache memory in the system, because LFU minimizes the miss rate.
6
Least Frequently Used: Is This Optimum?
N=2 files, K=2 users, cache size M=1; P_A = 2/3, P_B = 1/3. With LFU, both users cache file A, and the server must transmit whenever at least one user requests B, so E[R] = 1 - (2/3)^2 = 5/9. Is this optimum?
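The expression above follows from a one-line calculation, restated here in the deck's own notation: the server stays silent only when both independent requests happen to be for the cached file A.

\[
  E[R] \;=\; 1 - \Pr[\text{both users request } A]
       \;=\; 1 - P_A^{2}
       \;=\; 1 - \left(\tfrac{2}{3}\right)^{2}
       \;=\; \tfrac{5}{9}.
\]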
7
Coded Caching Scheme
N=2 files, K=2 users, cache size M=1. Split each file into two halves, A = (A1, A2) and B = (B1, B2). User 1 caches A1 and B1; user 2 caches A2 and B2. When user 1 requests A and user 2 requests B, the server multicasts the single coded message A2⊕B1: user 1 cancels its cached B1 to recover A2, and user 2 cancels its cached A2 to recover B1. This is a multicasting opportunity for users with different demands.
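A minimal executable sketch of this example, assuming the files are equal-length byte strings with placeholder contents (the helper names are illustrative):

# N=2 files, K=2 users, M=1: split, cache, and decode the XOR multicast.
def split(f):
    """Split a file into two equal halves."""
    return f[:len(f) // 2], f[len(f) // 2:]

def xor(x, y):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(a ^ b for a, b in zip(x, y))

A = b"AAAAAAAA"                      # placeholder contents for file A
B = b"BBBBBBBB"                      # placeholder contents for file B
A1, A2 = split(A)
B1, B2 = split(B)

cache1 = {"A1": A1, "B1": B1}        # user 1 stores half of each file (M = 1 file total)
cache2 = {"A2": A2, "B2": B2}        # user 2 stores the other halves

coded = xor(A2, B1)                  # user 1 wants A, user 2 wants B: server sends A2 XOR B1

A2_hat = xor(coded, cache1["B1"])    # user 1 recovers its missing half A2
B1_hat = xor(coded, cache2["A2"])    # user 2 recovers its missing half B1

assert cache1["A1"] + A2_hat == A    # user 1 reconstructs A
assert B1_hat + cache2["B2"] == B    # user 2 reconstructs B

The single coded transmission is half a file, versus a whole file with uncoded delivery serving the same pair of demands.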
8
Coded Caching vs. Uncoded Caching
N files, K users, cache size M.
- Uncoded caching: caches are used to deliver content locally; the local cache size matters.
- Coded caching [Maddah-Ali, Niesen 2012]: the main gain in caching is global; the global cache size matters, even though the caches are isolated.
9
Centralized Coded Caching
N=3 files, K=3 users, cache size M=2 [Maddah-Ali, Niesen 2012]. Each file is split into three subfiles indexed by pairs of users: A12, A13, A23, B12, B13, B23, C12, C13, C23, and user k caches every subfile whose index contains k. When users 1, 2, 3 request A, B, C respectively, the server sends the single coded message A23⊕B13⊕C12 of size 1/3 of a file: a multicasting opportunity between three users with different demands. The scheme is approximately optimum.
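For reference, the general rate of the centralized scheme from the cited paper can be quoted here; plugging in the slide's parameters reproduces the 1/3 above.

\[
  R_{\mathrm{C}}(M) \;=\; K\left(1-\frac{M}{N}\right)\cdot\frac{1}{1+KM/N},
  \qquad
  R_{\mathrm{C}}(2) \;=\; 3\cdot\frac{1}{3}\cdot\frac{1}{1+2} \;=\; \frac{1}{3}.
\]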
10
Centralized Coded Caching
N=3 files, K=3 users, cache size M=2. Centralized coded caching needs the number and identity of the users in advance. In practice this is not the case:
- users may turn off,
- users may be asynchronous,
- the topology may be time-varying (wireless).
Question: can we achieve a similar gain without such knowledge?
11
Key Features of ICN
- Distributed nodes, not centralized.
- Varying content popularity, not uniform.
- Asynchronous requests and delivery, with deadlines.
- Arbitrary network topology, not a shared link/tree.
12
Roadmap (recap): next, Decentralized Coded Caching.
13
Decentralized Caching Scheme
N=3 files, K=3 users, cache size M=2.
Prefetching: each user caches 2/3 of the bits of each file, chosen randomly, uniformly, and independently. This partitions each file into subfiles indexed by the subset of users that cached them (for file A: A1, A2, A3, A12, A13, A23, A123).
Delivery: greedy linear encoding. For every subset of users, the server sends one XOR that combines, for each user in the subset, the part of its requested file cached exclusively by the other users in the subset (for example A23⊕B13⊕C12 for all three users, and pairwise messages such as A2⊕B1).
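A minimal Python sketch of this scheme, assuming a bit-level representation; the file size, the bookkeeping, and the helper names are illustrative, and only the amount of coded traffic is tallied.

# Decentralized coded caching: random prefetching plus greedy XOR delivery.
# We track which bits each user has cached and count the coded bits the
# server would multicast; parameters match the slide.
import itertools
import random

N, K, M = 3, 3, 2          # files, users, cache size (in files)
F = 3000                   # bits per file (illustrative)

# Prefetching: each user caches every bit of every file independently
# with probability M/N (here 2/3).
cached = [[[random.random() < M / N for _ in range(F)]
           for _ in range(N)] for _ in range(K)]      # cached[user][file][bit]

demands = [0, 1, 2]        # user k requests file demands[k]

def subfile(file_idx, subset):
    """Bits of file_idx cached by exactly the users in `subset` and nobody else."""
    subset = set(subset)
    return [b for b in range(F)
            if all(cached[u][file_idx][b] == (u in subset) for u in range(K))]

# Delivery (greedy linear encoding): for every nonempty subset S of users, send
# one XOR combining, for each user k in S, the subfile of its demand cached
# exclusively by S \ {k}; the message length is that of the largest subfile.
coded_bits = 0
for size in range(K, 0, -1):
    for S in itertools.combinations(range(K), size):
        pieces = [subfile(demands[k], [u for u in S if u != k]) for k in S]
        coded_bits += max(len(p) for p in pieces)

print(f"coded delivery: {coded_bits} bits, i.e. about {coded_bits / F:.2f} files")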
14
Decentralized Caching
15
Decentralized Caching
Centralized prefetching partitions each file into subfiles indexed by pairs of users: 12, 13, 23.
Decentralized prefetching partitions each file into subfiles indexed by all subsets of users: 1, 2, 3, 12, 13, 23, 123.
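For comparison, the rate achieved by decentralized prefetching (quoted here from Maddah-Ali and Niesen's follow-up paper on decentralized coded caching) is

\[
  R_{\mathrm{D}}(M) \;=\; K\left(1-\frac{M}{N}\right)\cdot\frac{N}{KM}\left(1-\left(1-\frac{M}{N}\right)^{K}\right),
\]

which stays within a constant factor of the centralized rate R_C(M) even though the placement requires no coordination.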
16
Comparison
N files, K users, cache size M.
- Uncoded: local cache gain, proportional to the local cache size; offers only a minor gain.
- Coded (centralized) [Maddah-Ali, Niesen 2012]: global cache gain, proportional to the global cache size; offers a gain on the order of the number of users.
- Coded (decentralized): retains essentially the same global cache gain without coordinating the placement.
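A quick numerical sanity check of this comparison, as a sketch: the three rate expressions below are the standard ones used in this line of work (the uncoded baseline assumes N >= K, and the centralized formula treats KM/N as continuous); the parameter values are arbitrary.

# Compare uncoded, centralized coded, and decentralized coded delivery rates
# (in units of files sent over the shared link). Parameters are illustrative.

def r_uncoded(N, K, M):
    # Local caching gain only; assumes N >= K.
    return K * (1 - M / N)

def r_centralized(N, K, M):
    # Global (multicast) gain; treats KM/N as continuous.
    return K * (1 - M / N) / (1 + K * M / N)

def r_decentralized(N, K, M):
    if M == 0:
        return K * min(1, N / K)
    return (N / M) * (1 - M / N) * (1 - (1 - M / N) ** K)

N, K = 100, 30
for M in (5, 10, 25, 50):
    print(f"M={M:3d}  uncoded={r_uncoded(N, K, M):6.2f}  "
          f"centralized={r_centralized(N, K, M):5.2f}  "
          f"decentralized={r_decentralized(N, K, M):5.2f}")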
17
Theorem: the proposed scheme is optimum within a constant factor in rate.
Can we do better?
18
Roadmap (recap): next, Nonuniform Demands.
19
Non-Uniform Demands
Conflicting intuitions:
- A more popular file deserves more caching memory.
- Keeping the prefetching symmetric keeps the analysis tractable.
20
Idea of Grouping
Group the files with approximately similar popularities and dedicate memory M_i to group i, with M_1 + M_2 + M_3 + M_4 = M.
Prefetching: apply decentralized prefetching within each group i, with a memory budget of M_i.
Delivery: apply coded delivery among users demanding files from the same group.
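A small Python sketch of the grouping step, offered only as an illustration under stated assumptions: the slides do not say how the groups or the budgets M_i are chosen, so the factor-of-two popularity grouping and the popularity-proportional memory split below are placeholders.

# Group files with similar popularity, then split the cache budget across groups;
# decentralized prefetching would then run inside each group with its own budget.
# The factor-of-two grouping and the proportional split are assumptions.

def group_by_popularity(popularities):
    """Group file indices so that popularities within a group differ by less than 2x."""
    order = sorted(range(len(popularities)), key=lambda i: -popularities[i])
    groups, current = [], [order[0]]
    for i in order[1:]:
        if popularities[current[0]] / popularities[i] < 2:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups

def allocate_memory(groups, popularities, M):
    """Split the total cache size M across groups, proportionally to group popularity."""
    weights = [sum(popularities[i] for i in g) for g in groups]
    total = sum(weights)
    return [M * w / total for w in weights]

popularities = [0.30, 0.25, 0.15, 0.12, 0.08, 0.05, 0.03, 0.02]
groups = group_by_popularity(popularities)
budgets = allocate_memory(groups, popularities, M=4)
for g, b in zip(groups, budgets):
    # Within each group: decentralized prefetching with budget b, and coded
    # delivery among users whose demands fall in this group.
    print(f"group {g}: memory budget {b:.2f}")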
21
Observations
- Within each group: the same cache allocation.
- Files in different groups: different cache allocations.
- Symmetry within each group makes the analysis tractable.
- The price: coding opportunities between groups are lost.
22
Roadmap (recap): next, Delay-Sensitive Requests.
23
Requests have Deadlines!
Merge rules: First-Fit and Perfect-Fit.
24
Can we do better? Introduce a misfit function and a τ-Fit threshold rule.
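The slides name the merge rules without defining them, so the sketch below is only one plausible reading, not the author's definition: a new request either joins a pending coded multicast batch or starts its own, and an assumed misfit score decides which. Under this reading, τ = 0 behaves like Perfect-Fit and a large τ behaves like First-Fit.

# Hypothetical sketch of a tau-fit merge rule for deadline-constrained requests.
# The misfit definition (share of a pending coded batch the new request cannot
# reuse) and the data layout are assumptions made purely for illustration.

def misfit(pending, request):
    """Assumed misfit: fraction of the pending batch's subfiles the request cannot share."""
    shared = len(pending["subfiles"] & request["subfiles"])
    return 1 - shared / max(len(pending["subfiles"]), 1)

def try_merge(pending_batches, request, tau):
    """tau-fit: merge into the first pending batch whose misfit is at most tau
    and whose scheduled deadline still meets the request's deadline."""
    for batch in pending_batches:
        if misfit(batch, request) <= tau and batch["deadline"] <= request["deadline"]:
            batch["users"].add(request["user"])
            batch["subfiles"] |= request["subfiles"]
            return True
    return False   # no feasible merge: the request starts a new batch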
25
Roadmap (recap): next, Coded Caching in ICN.
26
Simulation
The analysis so far assumes one shared link; with numerous links the system is hard to analyze theoretically, so we turn to simulation.
27
Simulation - Network Topology
1 content provider, 15 users, each with cache size 10.
Interest forwarding strategy: simple flooding/broadcast.
Parameters: 20000 contents; request pattern: Zipf distribution with α = 0.8.
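As a side note, the request pattern used here can be reproduced with a few lines of Python; the content count and α come from the slide, while the number of drawn requests is arbitrary.

# Generate a Zipf(alpha = 0.8) request stream over 20000 contents,
# matching the simulation parameters; the number of requests is arbitrary.
import random
from collections import Counter

NUM_CONTENTS = 20000
ALPHA = 0.8

weights = [1.0 / (rank ** ALPHA) for rank in range(1, NUM_CONTENTS + 1)]
total = sum(weights)
probabilities = [w / total for w in weights]

def generate_requests(n):
    """Draw n content ids (1-based popularity ranks) according to the Zipf law."""
    return random.choices(range(1, NUM_CONTENTS + 1), weights=probabilities, k=n)

requests = generate_requests(10_000)
print("five most requested contents:", Counter(requests).most_common(5))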
28
Results (1) Request Merging – Caching Efficiency
29
Results (2) Request Merging - Delay
Deadline = 10
30
Results (3) Request Merging – Delay vs. Cache Size
Request rate = 10000/s
31
Conclusion
To adapt coded caching to ICN:
- Use the decentralized algorithm.
- Group contents to deal with non-uniform requests.
- Merge requests to enhance multicast efficiency.
- Consider delay: the merge rule trades off delay against caching gain.
32
Discussion & Future Work
- Simulate networks with various connectivity.
- Consider link failures and congestion.
- Study the capacity of ICN with coded caching.
33
Thanks everyone