Piotr Srebrny
Outline:
- Problem statement
- Packet caching
- Thesis claims
- Contributions
- Related works
- Critical review of claims
- Conclusions
- Future work
The Internet is a content distribution network: P2P, file hosting, and streaming account for more than 80% of Internet traffic (“Internet Study”, Ipoque, 2009). A single-source, multiple-destination transport mechanism is therefore fundamental, yet at present the Internet does not provide an efficient multi-point transport mechanism.
A server transmitting the same data to multiple destinations wastes Internet resources: the same data traverses the same path multiple times.
[Figure: server S sends packets PA, PB, PC, PD, all carrying the same payload, towards destinations A, B, C, D]
The goal of this work is to remove this redundancy from the Internet in a minimally invasive way.
- “Datagram routing for internet multicasting”, L. Aguilar, 1984 – explicit list of destinations in the IP header
- “Host groups: A multicast extension for datagram internetworks”, D. Cheriton and S. Deering, 1985 – destination address denotes a group of hosts
- “A case for end system multicast”, Y.-H. Chu et al., 2000 – application-layer multicast
Consider two packets A and B that carry the same content and travel the same few hops.
[Figures: packet A traverses the hops carrying payload P; packet B then traverses the same hops carrying the same payload P]
Thesis claims:
I. A packet caching system can achieve near-multicast bandwidth savings
II. A packet caching system requires server support
III. A packet caching system is incrementally deployable
IV. A packet caching system preserves fairness in the Internet
- Principles
- Feasibility
- Environmental impact
Contribution I: Principles
Network elements:
- Link: a medium transporting packets; very deterministic; throughput limited in bits per second
- Router: switches data packets between links; very unpredictable; throughput limited in packets per second
(I) Cache payloads on links
Caching is done on a per-link basis. The Cache Management Unit (CMU) at the link entry removes payloads that are already stored at the link exit; the Cache Store Unit (CSU) at the link exit restores payloads from a local cache.
Link cache processing must be simple: there are only ~72 ns to process a minimum-size packet on a 10 Gbps link, while a modern memory read/write cycle takes ~6-20 ns. The link cache size must also be minimised: at present, a link queue is scaled to 250 ms of the link traffic. Link caches are therefore difficult to build!
(II) A source of redundant data must support caching!
The server can:
- transmit packets carrying the same data within a minimum time interval
- mark its redundant traffic
- provide additional information on packet content
The CacheCast system has two components: server support and a distributed infrastructure of small link caches.
[Figure: server S and destinations A-D, with CMU/CSU pairs deployed on the links along the paths]
A cacheable packet carries metadata describing the packet payload: a payload ID, the payload size, and a cache index. Only packets with this metadata are cached.
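As a sketch, this metadata could be expressed as the following C struct; the field widths and names are illustrative assumptions, not the exact CacheCast header layout:

    /* Illustrative sketch of the per-packet CacheCast metadata;
     * field widths and names are assumptions, not the thesis layout. */
    #include <stdint.h>

    struct cachecast_meta {
        uint64_t payload_id;   /* identifies the payload content */
        uint16_t payload_size; /* size of the payload in bytes */
        uint16_t index;        /* slot in the link cache (CMU table / CSU store) */
    };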
Packets carrying the same payload are sent as a packet train: only the first packet carries the payload; the remaining packets are truncated to the header.
It is sufficient to hold a payload in the CSU for the packet train duration time. What is the maximum packet train duration time?
Back-of-the-envelope calculations show that ~10 ms caches are sufficient.
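One plausible form of this calculation, with header size s_h = 64 B and payload size s_p = 1436 B; the group size n = 1000 and the server link rate R = 100 Mb/s are illustrative assumptions, not figures from the slides. A train of one full packet and n − 1 truncated packets leaves the server in

\[
t_{\text{train}} \approx \frac{(s_h + s_p) + (n-1)\, s_h}{R}
= \frac{(1500 + 999 \times 64) \times 8\ \text{bit}}{100\ \text{Mb/s}}
\approx 5.2\ \text{ms} < 10\ \text{ms}.
\]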
Contribution II: Feasibility
Two aspects of the CacheCast system are evaluated:
I. Efficiency – how much redundancy does CacheCast remove?
II. Computational complexity – can CacheCast be implemented efficiently with present technology?
CacheCast is compared against ‘perfect multicast’, which delivers data to multiple destinations without any overhead. CacheCast has three sources of overhead:
I. A unique packet header per destination
II. A finite link cache size, resulting in payload retransmissions
III. Partial deployment
Example: L_m – the total number of multicast links; L_u – the total number of unicast links.
Each CacheCast packet consists of a unicast header part (h) and a multicast payload part (p). Thus the efficiency relative to ‘perfect multicast’ is δ = s_p / (s_h + s_p). E.g., using packets where s_p = 1436 B and s_h = 64 B, CacheCast achieves 96% of the ‘perfect multicast’ efficiency.
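A possible reconstruction of the reasoning behind this figure; the framing below, which compares the traffic saved by CacheCast with the traffic saved by perfect multicast, is an assumption. Let U, M, and C denote the total traffic of plain unicast, perfect multicast, and CacheCast:

\[
U = L_u\,(s_h + s_p), \qquad
M = L_m\,(s_h + s_p), \qquad
C = L_u\,s_h + L_m\,s_p
\]
\[
\delta = \frac{U - C}{U - M}
       = \frac{(L_u - L_m)\,s_p}{(L_u - L_m)(s_h + s_p)}
       = \frac{s_p}{s_h + s_p}
       = \frac{1436}{1500} \approx 96\%.
\]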
[Plot: system efficiency δ_m for 10 ms link caches]
[Figure: CMU and CSU deployed partially, on the first hops (1-6) from server S]
Considering unique packet headers, CacheCast can achieve 96% of the ‘perfect multicast’ efficiency. Considering finite cache sizes, 10 ms link caches can remove most of the redundancy generated by fast sources. Considering partial deployment, CacheCast deployed over the first five hops from a server already achieves half of the maximum efficiency.
High computational complexity could render CacheCast inefficient in practice. Implementations:
- Server support – a Linux system call and an auxiliary shell command tool
- Link cache elements – implemented as processing elements in the Click Modular Router software
A new system call, msend(), is compared with the standard send() system call.
A server transmitting to 100 destinations using a loop of send() system calls vs. a single msend() system call: msend() outperforms the standard send() system call when transmitting to multiple destinations.
In the Click Modular Router software, the CMU and CSU are implemented as processing elements, forming a CacheCast router.
Due to the CSU and CMU elements, a CacheCast router cannot forward packet trains at line rate.
Nevertheless, when compared with a standard router, a CacheCast router can forward more data.
Testbed setup: clients on machines A and B gradually connect to server S.
The original paraslash server can handle only 74 clients; the CacheCast-enabled paraslash server can handle 1020 clients and more, depending on the chunk size. Server load is reduced when using large chunks.
Contribution III: Environmental impact
Internet congestion avoidance relies on communicating end-points that adjust their transmission rate to network conditions. CacheCast transparently removes redundancy, which increases network capacity, but it is not obvious how congestion control algorithms behave in the presence of CacheCast.
CacheCast was implemented in ns-2. Simulation setup:
- a bottleneck link topology
- 100 TCP flows and 100 TFRC flows
- a link cache operating on the bottleneck link
TCP flows consume the spare capacity, TFRC flows increase their end-to-end throughput, and CacheCast preserves Internet ‘fairness’.
Related works:
- J. Santos and D. Wetherall, “Increasing effective link bandwidth by suppressing replicated data”, USENIX ’98
- A. Anand, A. Gupta, A. Akella, S. Seshan, and S. Shenker, “Packet caches on routers: the implications of universal redundant traffic elimination”, SIGCOMM ’08
- A. Anand, V. Sekar, and A. Akella, “SmartRe: an architecture for coordinated network-wide redundancy elimination”, SIGCOMM ’09
Critical review of the claims:
I. A packet caching system can achieve near-multicast bandwidth savings
II. A packet caching system requires server support
III. A packet caching system is incrementally deployable
IV. A packet caching system preserves fairness in the Internet
Future work – getting CacheCast into the real world:
- server support
- link caches
- second- and third-generation routers
The msend() system call and an auxiliary system command tool are implemented in Linux with a simple API. msend() was compared with the standard send() system call for various destination group sizes and payload sizes:
msend(fd_set *fds_write, fd_set *fds_written, char *buf, int len)
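A minimal usage sketch of msend(), assuming a Linux kernel patched with CacheCast server support; the extern declaration (including the int return type) and the helper send_chunk_to_all() are illustrative assumptions, while the fds_written semantics (the set of sockets actually written) follow the API shown above:

    /* Minimal usage sketch, assuming a kernel patched with CacheCast
     * server support; send_chunk_to_all() is a hypothetical helper,
     * not part of the thesis code. */
    #include <sys/select.h>

    extern int msend(fd_set *fds_write, fd_set *fds_written,
                     char *buf, int len);

    /* Send one data chunk to every connected client socket with a
     * single call instead of a loop of send() calls. */
    int send_chunk_to_all(int *socks, int n, char *chunk, int len)
    {
        fd_set fds_write, fds_written;
        FD_ZERO(&fds_write);
        for (int i = 0; i < n; i++)
            FD_SET(socks[i], &fds_write);

        /* The kernel emits the chunk as a packet train: the first
         * packet carries the payload, the rest only headers. */
        if (msend(&fds_write, &fds_written, chunk, len) < 0)
            return -1;

        /* fds_written reports which sockets the chunk was written to;
         * the remaining ones can be retried or dropped. */
        for (int i = 0; i < n; i++)
            if (!FD_ISSET(socks[i], &fds_written))
                /* handle the failed destination here */;
        return 0;
    }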
[Figure: cache miss – packet P1 carrying payload A arrives; the CMU table and CSU store (three slots each) are empty, so the payload traverses the link and the CSU stores A at index 0]
[Figure: cache hit – the CMU table holds payload IDs P1, P2, P3; a packet with payload ID P2 arrives, the CMU truncates it to the header, and the CSU restores payload B from its store]
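The two figures above suggest per-link cache logic along the following lines. This is a simplified sketch: the table size, ID type, and update policy are illustrative assumptions, not the thesis implementation:

    /* Simplified sketch of CMU/CSU processing; table size, types, and
     * update policy are assumptions, not the thesis implementation. */
    #include <stdint.h>
    #include <string.h>

    #define CACHE_SLOTS 3      /* three slots, as drawn in the figures */
    #define MAX_PAYLOAD 1500

    static uint64_t cmu_table[CACHE_SLOTS];          /* payload IDs, link entry */
    static char csu_store[CACHE_SLOTS][MAX_PAYLOAD]; /* payloads, link exit */

    /* CMU, link entry: returns 1 if the payload can be stripped
     * (cache hit), 0 if it must traverse the link (cache miss). */
    int cmu_process(uint64_t payload_id, unsigned index)
    {
        if (cmu_table[index] == payload_id)
            return 1;                  /* hit: truncate packet to its header */
        cmu_table[index] = payload_id; /* miss: remember what the CSU will hold */
        return 0;
    }

    /* CSU, link exit: store the payload of a full packet ... */
    void csu_store_payload(unsigned index, const char *payload, int len)
    {
        memcpy(csu_store[index], payload, len);
    }

    /* ... and restore it into a truncated packet on a later hit. */
    const char *csu_restore_payload(unsigned index)
    {
        return csu_store[index];
    }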