Authors: Haowei Yuan and Patrick Crowley
Publisher: 2013 Proceedings IEEE INFOCOM
Presenter: Chia-Yi Chu
Date: 2013/08/14
Introduction
Experimental Setup
File Distribution Performance
Improving CCNx Performance
Name-centric network architectures
◦ Data requests need to have unique names.
◦ In-network storage elements can cache the data and respond to matching requests.
Named-Data Networking (NDN)
◦ Interest packets contain the name of the requested content.
◦ Data packets contain both the name and its associated data.
◦ NDN routers cache Data packets; entries in a cache are indexed by their names.
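As an illustration of the caching behavior described above, here is a minimal Python sketch of a name-indexed router cache; ContentStore and on_interest are illustrative names and not the CCNx API.

```python
# Illustrative sketch of an NDN-style router cache, not the CCNx implementation.
# An Interest carries only a content name; a Data packet carries the name plus payload.

class ContentStore:
    def __init__(self):
        self.entries = {}               # cached Data packets, indexed by name

    def insert(self, name, payload):
        self.entries[name] = payload    # cache a Data packet under its name

    def lookup(self, name):
        return self.entries.get(name)   # a matching entry can satisfy an Interest


cs = ContentStore()
cs.insert("/example/video/chunk0", b"...payload...")

def on_interest(name):
    payload = cs.lookup(name)
    if payload is not None:
        return ("Data", name, payload)  # answered from the in-network cache
    return ("forward", name)            # cache miss: forward the Interest upstream

print(on_interest("/example/video/chunk0"))  # served from cache
print(on_interest("/example/video/chunk1"))  # forwarded
```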
HTTP infrastructure
◦ URLs are the names that matter most in today’s Internet.
◦ The requested URL in the HTTP header is the content name.
◦ The HTTP infrastructure, including both web servers and caching proxies, can be viewed as providing in-network storage for named HTTP data.
Goal: evaluate the effectiveness of NDN and HTTP as content distribution systems over a range of experimental scenarios.
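The HTTP analogue can be sketched the same way: the requested URL serves as the key under which a web cache stores responses. The dictionary-based cache below is purely illustrative, not Squid's internals.

```python
# Illustrative: an HTTP cache keyed by the requested URL (the content "name").
cache = {}

def handle_request(url, fetch_from_origin):
    if url in cache:                  # hit: the URL matched a stored response
        return cache[url]
    body = fetch_from_origin(url)     # miss: fetch from the origin web server
    cache[url] = body                 # store under its URL for later requests
    return body

# Example with a stubbed origin server:
body = handle_request("http://server.example/index.html", lambda u: b"<html>...</html>")
```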
Test bed
◦ Open Network Laboratory (ONL)
◦ 48 single-core machines: AMD 2.0GHz Opteron processor, 512MB memory, 1Gbps network interface
◦ Connected via virtual switches and Network Processor-based Routers (NPRs)
CCNx Software Tools
◦ ccnx-0.4.0, released on Sep. 15, 2011
◦ ccnd daemon: configured with the default underlying transport protocol, TCP
◦ ccncatchunks2 (built-in): generates a sequence of Interest packets to fetch content
◦ ccnfileserver: generates Data packets with content fetched from files on the server
HTTP and Web-Caching Software Tools
◦ Lighttpd-1.4.28 (web server)
◦ Squid-3.1.11 (caching proxy)
Both use default configurations.
◦ wget, for downloading files
The metric
◦ Download Time (DT): the time from when a client application sends a request for a file until the file is downloaded completely.
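A minimal sketch of how DT could be measured on a client. The wget call mirrors the tool listed earlier; the URL and output path are placeholders, not values from the experiments.

```python
# Measure Download Time (DT): from issuing the request until the file is fully downloaded.
import subprocess
import time

url = "http://server.example/100MB.bin"   # placeholder URL (assumed server path)

start = time.time()
subprocess.run(["wget", "-q", "-O", "/dev/null", url], check=True)
dt = time.time() - start

print(f"Download Time: {dt:.2f} s")
```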
Experimental Configuration
◦ 40 client hosts, 1 server, and 2 levels of intermediate nodes
◦ Every 8 clients form a cluster and share a common second-level intermediate node
◦ Nodes are connected via 1Gbps links
◦ A 100MB file is stored on the server; all clients try to fetch the file simultaneously
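The cluster layout can be written out explicitly, as in the sketch below; the node names are illustrative.

```python
# 40 clients grouped into clusters of 8; each cluster shares one second-level
# intermediate node, and all traffic ultimately reaches a single server.
NUM_CLIENTS = 40
CLUSTER_SIZE = 8

clusters = {}
for client_id in range(NUM_CLIENTS):
    level2_node = f"l2-node-{client_id // CLUSTER_SIZE}"   # shared within a cluster
    clusters.setdefault(level2_node, []).append(f"client-{client_id}")

for node, members in clusters.items():
    print(node, "->", members)   # 5 clusters of 8 clients each
```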
CCNx vs. Lighttpd
◦ Downloading a 100MB file
◦ Without a caching proxy
◦ Start with 1 active client in each cluster
◦ Activate 1 more client per cluster each round until all clients are active
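Read as a schedule, round r has r active clients per cluster (5r in total), each downloading the 100MB file concurrently. A hedged sketch of that loop:

```python
# Round-based activation: start with 1 active client per cluster, then add one
# more client per cluster each round until all 8 clients per cluster are active.
NUM_CLUSTERS = 5
CLUSTER_SIZE = 8

for round_no in range(1, CLUSTER_SIZE + 1):
    active = [f"cluster{c}-client{i}"
              for c in range(NUM_CLUSTERS)
              for i in range(round_no)]
    # In each round, every active client downloads the 100MB file simultaneously
    # and its Download Time is recorded.
    print(f"round {round_no}: {len(active)} active clients")
```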
CCNx vs. Squid
◦ Single-level case: all clients connect to the server through the top-level CCNx router or Squid proxy
◦ Two-level case: clients are connected via a second-level cache
Lossy Network Condition
◦ Emulate a lossy link: the rand-drop plugin probabilistically selects and drops packets on the NPRs.
◦ Emulate delay: a delay plugin is added to an NPR connected to the link.
◦ A 1MB file is used.
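Conceptually, the rand-drop plugin behaves like the sketch below, dropping each packet with a fixed probability; this is an illustration, not the ONL plugin's code.

```python
# Conceptual model of the rand-drop plugin: drop each packet with probability p.
import random

def lossy_link(packets, drop_prob=0.01):
    delivered = []
    for pkt in packets:
        if random.random() < drop_prob:   # probabilistically select and drop
            continue
        delivered.append(pkt)
    return delivered

# Roughly 1% of packets are lost on the emulated link.
print(len(lossy_link(list(range(10000)), drop_prob=0.01)))
```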
CCNx employs an XML encoding scheme to encode packets into wire format.
The original CCNx implementation
◦ stores content in the Content Store (CS) with names kept in encoded form
◦ when the CS is queried, several content names might need to be decoded
A simple change
◦ store decoded content names in the CS, so no decoding is needed at query time
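A hedged illustration of the difference: with encoded names in the CS, a query has to decode stored names before comparing them; with decoded names stored, a query becomes a direct key match. The decode function and dictionaries below are stand-ins, not CCNx code.

```python
# Illustrative contrast between the two Content Store (CS) designs.

def decode(wire_name):
    # Stand-in for decoding CCNx's XML wire-format names (assumed to be costly).
    return wire_name.decode("utf-8")

# Original design: names stored encoded, so a query decodes entries to compare.
encoded_cs = {b"/video/chunk0": b"payload0"}

def lookup_encoded(query_name):
    for wire_name, payload in encoded_cs.items():
        if decode(wire_name) == query_name:   # decoding on every comparison
            return payload
    return None

# Modified design: names stored already decoded, so a query is a direct key match.
decoded_cs = {"/video/chunk0": b"payload0"}

def lookup_decoded(query_name):
    return decoded_cs.get(query_name)         # no decoding at query time

print(lookup_encoded("/video/chunk0") == lookup_decoded("/video/chunk0"))
```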