Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff
Mohammad Ali Maddah-Ali, Bell Labs, Alcatel-Lucent
Joint work with Urs Niesen
Allerton, October 2013
Video on Demand
- High temporal traffic variability
- Caching (prefetching) can help to smooth the traffic
Caching (Prefetching)
- Placement phase: populate the caches; the demands are not yet known
- Delivery phase: requests are revealed, content is delivered
Problem Setting
A server with N files is connected through a shared link to K users, each with a cache of size M files.
Question: What is the smallest worst-case rate R(M) needed in the delivery phase? How should we choose (1) the caching functions and (2) the delivery functions? (Formalized below.)
Placement:
- Each cache stores an arbitrary function of the files (linear, nonlinear, …)
Delivery:
- The requests are revealed to the server
- The server sends a function of the files over the shared link
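To make the question precise, here is the formal setup in the notation of the Maddah-Ali and Niesen papers (the function symbols φ and ψ are generic placeholders):

```latex
% N files W_1,...,W_N of F bits each; user k has a cache Z_k of MF bits,
% filled before the demand vector (d_1,...,d_K) is revealed.
\begin{align*}
  \text{placement:} \quad & Z_k = \phi_k(W_1,\dots,W_N),
      \qquad H(Z_k) \le MF, \\
  \text{delivery:} \quad & X = \psi(W_1,\dots,W_N;\, d_1,\dots,d_K),
      \qquad H(X) \le RF, \\
  \text{decoding:} \quad & \text{user } k \text{ must recover } W_{d_k}
      \text{ from } (X, Z_k), \\
  R^{*}(M) \;=\; & \min\{\, R : R \text{ is achievable for every demand
      vector } (d_1,\dots,d_K) \,\}.
\end{align*}
```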
Coded Caching
N files, K users, cache size M
Uncoded caching:
- Caches are used to deliver content locally
- The local cache size matters
Coded caching [Maddah-Ali, Niesen 2012]:
- The main gain in caching is global
- The global cache size matters (even though the caches are isolated)
Centralized Coded Caching [Maddah-Ali, Niesen 2012]
N = 3 files (A, B, C), K = 3 users, cache size M = 2
- Split each file into three subfiles indexed by pairs of users: A12, A13, A23, B12, B13, B23, C12, C13, C23
- Placement: user k caches every subfile whose index contains k (two thirds of each file, filling the cache of size M = 2)
- Delivery (user 1 wants A, user 2 wants B, user 3 wants C): the server sends the single coded message A23 ⊕ B13 ⊕ C12, of size 1/3 of a file
- Multicasting opportunity between three users with different demands: each user XORs out the two interfering subfiles it has cached
- This scheme is approximately optimal (a toy implementation follows below)
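A toy implementation may make the multicasting step concrete. This is only a sketch under the slide's parameters N = K = 3, M = 2; the byte-string "subfiles" and variable names are illustrative, not from the talk.

```python
from itertools import combinations

USERS = (1, 2, 3)

def xor(*blocks):
    """Bitwise XOR of equal-length byte strings."""
    out = bytearray(blocks[0])
    for blk in blocks[1:]:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

# Each file is split into 3 subfiles, indexed by the 2-subsets of users;
# the 3-byte labels stand in for actual file content.
files = {name: {pair: f"{name}{pair[0]}{pair[1]}".encode()
                for pair in combinations(USERS, 2)}
         for name in "ABC"}

# Placement: user k caches every subfile whose index contains k,
# i.e. 2/3 of each of the 3 files, filling a cache of M = 2 files.
cache = {k: {(name, pair): part
             for name, parts in files.items()
             for pair, part in parts.items() if k in pair}
         for k in USERS}

# Delivery for demands (1 -> A, 2 -> B, 3 -> C): one coded multicast
# message, A23 ^ B13 ^ C12, serves all three users at rate 1/3.
demand = {1: "A", 2: "B", 3: "C"}
others = {k: tuple(u for u in USERS if u != k) for k in USERS}
tx = xor(*(files[demand[k]][others[k]] for k in USERS))

# Decoding: each user XORs out the two interfering subfiles it cached.
for k in USERS:
    interferers = [cache[k][(demand[j], others[j])] for j in USERS if j != k]
    assert xor(tx, *interferers) == files[demand[k]][others[k]]
print("all three users decoded their missing subfile from one message")
```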
Centralized Coded Caching
N = 3 files, K = 3 users, cache size M = 2
Centralized caching needs the number and identity of the users in advance. In practice, this is not the case:
- Users may turn off
- Users may be asynchronous
- The topology may be time-varying (wireless)
Question: Can we achieve a similar gain without such knowledge?
Decentralized Proposed Scheme
N = 3 files, K = 3 users, cache size M = 2
Prefetching: each user caches 2/3 of the bits of each file
- randomly,
- uniformly,
- independently.
This partitions each file into subfiles indexed by the subset of users that cached them: 1, 2, 3, 12, 13, 23, 123.
Delivery: greedy linear encoding. For each subset of users, the server sends the XOR of the subfiles needed by one user in the subset and cached by exactly the others (a sketch follows below).
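The following sketch simulates the scheme for N = K = 3, M = 2, modelling files as bit lists; the variable names and the Monte Carlo flavor are my assumptions, not the talk's. It counts the delivery load of the greedy linear encoding and compares it with the rate expression from the paper.

```python
import random
from itertools import combinations

N, K, F = 3, 3, 30000   # files, users, bits per file
M = 2                   # cache size, in files
p = M / N               # probability that a user caches a given bit

random.seed(1)
files = [[random.getrandbits(1) for _ in range(F)] for _ in range(N)]

# Placement: each user caches each bit of each file independently,
# uniformly at random, with probability M/N = 2/3; no coordination.
has = [[[random.random() < p for _ in range(F)] for _ in range(N)]
       for _ in range(K)]

# Group the bits of each file by the exact subset of users caching them.
parts = {}  # (file, frozenset of users) -> list of bit positions
for n in range(N):
    for i in range(F):
        S = frozenset(k for k in range(K) if has[k][n][i])
        parts.setdefault((n, S), []).append(i)

# Delivery (greedy linear encoding), demands d[k] = k: for every subset S
# of users, XOR together, for each k in S, the bits of file d[k] cached by
# exactly the users S \ {k}; pieces are zero-padded to a common length.
d = list(range(K))
sent_bits = 0
for s in range(K, 0, -1):
    for S in combinations(range(K), s):
        pieces = [[files[d[k]][i]
                   for i in parts.get((d[k], frozenset(S) - {k}), [])]
                  for k in S]
        length = max(map(len, pieces))
        msg = [0] * length
        for piece in pieces:           # form the coded multicast message
            for j, b in enumerate(piece):
                msg[j] ^= b
        sent_bits += length            # per-user decoding omitted here

theory = (N / M) * (1 - M / N) * (1 - (1 - M / N) ** K)
print(f"simulated rate {sent_bits / F:.3f} files, theory {theory:.3f}")
```

With these parameters the simulated rate concentrates near the theoretical value of about 0.48 files, versus 1 file per user for uncoded unicast delivery.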
Decentralized Caching
Decentralized Caching
Centralized prefetching: each file is deterministically partitioned into subfiles indexed by the 2-subsets of users: 12, 13, 23.
Decentralized prefetching: the random, independent caching induces a partition of each file into subfiles indexed by all subsets of users: 1, 2, 3, 12, 13, 23, 123.
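Why the finer partition: under the random prefetching, each bit of a file lands in the subfile indexed by exactly the set of users that cached it. By the law-of-large-numbers argument in the paper, the subfile sizes concentrate as:

```latex
% Fraction of a file cached by exactly the users in a subset S, when each
% user caches each bit independently with probability M/N:
\frac{\lvert \text{subfile}_S \rvert}{F}
  \;\approx\; \Bigl(\frac{M}{N}\Bigr)^{\!|S|}
              \Bigl(1 - \frac{M}{N}\Bigr)^{\!K - |S|}.
```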
Comparison
N files, K users, cache size M
Uncoded:
- Local cache gain: proportional to the local cache size
- Offers only a minor gain
Coded (centralized) [Maddah-Ali, Niesen 2012]:
- Global cache gain: proportional to the global cache size
- Offers a gain on the order of the number of users
Coded (decentralized):
- Achieves nearly the same global cache gain, without coordination in the placement phase (see the rate expressions below)
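For reference, the delivery rates behind this comparison: the first two expressions are from the 2012 paper and the third from the decentralized paper (the centralized expression assumes KM/N is an integer):

```latex
\begin{align*}
  R_{\mathrm{uncoded}}(M) &= K\Bigl(1 - \frac{M}{N}\Bigr)
      && \text{local gain only}, \\
  R_{\mathrm{centralized}}(M) &= K\Bigl(1 - \frac{M}{N}\Bigr)
      \cdot \frac{1}{1 + KM/N}
      && \text{rate reduced by a factor } 1 + KM/N, \\
  R_{\mathrm{decentralized}}(M) &= K\Bigl(1 - \frac{M}{N}\Bigr)
      \cdot \frac{N}{KM}\Bigl(1 - \bigl(1 - \tfrac{M}{N}\bigr)^{K}\Bigr)
      && \text{nearly the same global gain}.
\end{align*}
```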
Can We Do Better?
Theorem: The proposed scheme is optimal in rate within a constant factor.
- Information-theoretic lower bound
- The constant gap is uniform in the problem parameters
- No significant gains exist besides the local and global ones
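The converse behind the theorem is a cut-set bound; in the papers' notation (my transcription from the "Fundamental Limits of Caching" paper) it reads:

```latex
R^{*}(M) \;\ge\; \max_{s \in \{1,\dots,\min(N,K)\}}
    \Bigl( s \;-\; \frac{s}{\lfloor N/s \rfloor}\, M \Bigr).
```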
Asynchronous Delivery
[Figure: three users, each requesting content divided into Segment 1, Segment 2, Segment 3, with staggered timing.]
Conclusion
We can achieve within a constant factor of the optimal caching performance through:
- Decentralized and uncoded prefetching
- Greedy and linearly coded delivery
This is a significant improvement over uncoded caching schemes: a reduction in rate by up to the order of the number of users.
Papers available on arXiv:
- Maddah-Ali and Niesen, "Fundamental Limits of Caching" (Sept. 2012)
- Maddah-Ali and Niesen, "Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff" (Jan. 2013)
- Niesen and Maddah-Ali, "Coded Caching with Nonuniform Demands" (Aug. 2013)