1
Running large scale experimentation on Content-Centric Networking via the Grid’5000 platform

Massimo Gallo (Bell Labs, Alcatel-Lucent)
Joint work with: Luca Muscariello (Orange) and Giovanna Carofiglio (Bell Labs, Alcatel-Lucent)
2
Agenda
- ICN
- Lurch
- Experiments
- Conclusions and future work
3
ICN
4
ICN

Today’s Internet:
- Ever-growing amount of digital information
- Point-to-point dissemination
- Mobility issues
- Waste of resources in content replication

ICN properties:
- Named packets (e.g. www.imdb.com/title/tt12242): names, not addresses
- Name-based routing/forwarding
- In-network storage
- Pull-based transport

ICN advantages:
- Simplified management
- Traffic reduction and localization
- Seamless, ubiquitous connectivity
- Congestion reduction
- Effective security
5
LURCH
6
Lurch

From protocol design to large scale experimentation:
- A newly designed protocol needs to be tested.
- Event-driven simulation: limited in the number of events (and hence in topology size); computation is hard to parallelize.
- Large scale experiments: complex to manage.
- We needed a test orchestrator.
7
Lurch

Lurch is a test orchestrator for CCNx:
- It simplifies and automates the testing of ICN protocols over a set of interconnected servers (e.g. Grid’5000/G5K).
- Lurch runs on a separate machine, the controller, and drives the test.
8
Lurch

Architecture (diagram): the protocol stack on each node runs CCNx over TCP/UDP on a virtualized IP layer, on top of the physical IP and PHY layers. The Lurch controller manages three planes on each node:
- Virtualized data plane
- Control plane
- Application layer
9
Lurch

Topology management:
- Create virtual interfaces between nodes (e.g. G5K).
- A bash configuration file is computed remotely by the orchestrator and transferred to the experiment nodes.
- IP-in-IP tunnels (iptunnel) build the virtualized interfaces: one physical interface (eth0), multiple virtual interfaces (tap0, tap1, ...).

Example configuration file (node 172.16.49.50, with virtual links toward 172.16.49.5 and 172.16.49.51):

#!/bin/bash
sysctl -w net.ipv4.ip_forward=1
modprobe ipip
iptunnel add tap0 mode ipip local 172.16.49.50 remote 172.16.49.5
ifconfig tap0 10.0.0.2 netmask 255.255.255.255 up
route add 10.0.0.1 tap0
iptunnel add tap1 mode ipip local 172.16.49.50 remote 172.16.49.51
ifconfig tap1 10.0.0.3 netmask 255.255.255.255 up
route add 10.0.0.4 tap1
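The configuration file above is produced per node by the orchestrator. A minimal sketch of what such a generator could look like, in plain Python (the function name and input format are assumptions for illustration, not Lurch's actual code):

```python
def tunnel_config(local_ip, links):
    """Generate a per-node bash tunnel configuration.

    links: list of (remote_physical_ip, local_virtual_ip, remote_virtual_ip),
    one entry per virtual link; tap interfaces are numbered in order.
    """
    lines = [
        "#!/bin/bash",
        "sysctl -w net.ipv4.ip_forward=1",  # enable forwarding on the node
        "modprobe ipip",                    # load the IP-in-IP tunnel module
    ]
    for i, (remote, vlocal, vremote) in enumerate(links):
        tap = f"tap{i}"
        lines.append(f"iptunnel add {tap} mode ipip local {local_ip} remote {remote}")
        lines.append(f"ifconfig {tap} {vlocal} netmask 255.255.255.255 up")
        lines.append(f"route add {vremote} {tap}")
    return "\n".join(lines)

# Reproduce the two-link example from the slide.
cfg = tunnel_config("172.16.49.50",
                    [("172.16.49.5", "10.0.0.2", "10.0.0.1"),
                     ("172.16.49.51", "10.0.0.3", "10.0.0.4")])
print(cfg)
```

The controller would write this string to a file and copy it to the node (e.g. over SSH) before starting the experiment.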
10
Lurch

Resource management:
- Remotely assign network resources to nodes while preserving physical bandwidth constraints.
- A bash configuration file is computed remotely by the orchestrator and transferred to the experiment nodes.
- Traffic control (tc): Linux tool to limit bandwidth, add delay, packet loss, etc.

Example configuration file (eth0 carries a 10 Mbps and a 50 Mbps virtual link over a 1 Gbps physical link):

#!/bin/bash
tc qdisc del dev eth0 root
tc qdisc add dev eth0 root handle 1: htb default 1
tc class add dev eth0 parent 1: classid 1:1 htb rate 10.0mbit ceil 10.0mbit
tc filter add dev eth0 parent 1: prio 1 protocol ip u32 match ip dst 172.16.49.5 flowid 1:1
tc class add dev eth0 parent 1: classid 1:2 htb rate 50.0mbit ceil 50.0mbit
tc filter add dev eth0 parent 1: prio 1 protocol ip u32 match ip dst 172.16.49.51 flowid 1:2
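The tc script follows a fixed pattern: one HTB class plus one u32 filter per virtual link, matched on the remote node's physical IP. A sketch of a generator for it (names and input format are assumptions, not Lurch's actual code):

```python
def tc_config(dev, limits):
    """Generate a bash tc configuration limiting per-destination bandwidth.

    dev: physical interface name (e.g. "eth0").
    limits: list of (dst_physical_ip, rate_mbit) pairs, one per virtual link.
    """
    lines = [
        f"tc qdisc del dev {dev} root",                       # clear old qdisc
        f"tc qdisc add dev {dev} root handle 1: htb default 1",
    ]
    for i, (dst, mbit) in enumerate(limits, start=1):
        # One HTB class per virtual link, capped at the link's rate...
        lines.append(f"tc class add dev {dev} parent 1: classid 1:{i} "
                     f"htb rate {mbit}mbit ceil {mbit}mbit")
        # ...and a u32 filter steering traffic to that class by destination IP.
        lines.append(f"tc filter add dev {dev} parent 1: prio 1 protocol ip "
                     f"u32 match ip dst {dst} flowid 1:{i}")
    return "\n".join(lines)

print(tc_config("eth0", [("172.16.49.5", 10.0), ("172.16.49.51", 50.0)]))
```

With the two limits from the slide this reproduces the qdisc/class/filter lines above.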
11
Lurch

Name-based control plane:
- Remotely control name-based forwarding tables (FIBs).
- A bash configuration file is computed remotely by the orchestrator and transferred to the experiment nodes.
- CCNx’s FIB control command: ccndc.

Example configuration file:

#!/bin/bash
ccndc add ccnx:/music UDP 10.0.0.1
ccndc add ccnx:/video UDP 10.0.0.4

Resulting FIB:

Name prefix   Face
ccnx:/music   0
ccnx:/video   1
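Generating the FIB script is a direct mapping from the routing computed by the orchestrator. A minimal sketch (function name and input format are assumptions, not Lurch's actual code):

```python
def fib_config(routes):
    """Generate a bash script of ccndc commands from a per-node routing table.

    routes: mapping from name prefix to the next hop's virtual IP address.
    """
    lines = ["#!/bin/bash"]
    for prefix, next_hop in routes.items():
        # ccndc adds a FIB entry: packets matching `prefix` leave via a UDP
        # face toward `next_hop` (a tunnel endpoint from the topology step).
        lines.append(f"ccndc add {prefix} UDP {next_hop}")
    return "\n".join(lines)

print(fib_config({"ccnx:/music": "10.0.0.1", "ccnx:/video": "10.0.0.4"}))
```

On each node the orchestrator would run this after the CCNx daemon is up, so the faces referenced by the FIB exist.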
12
Lurch

Application workload:
- Remotely control the experiment workload.
- A file download application is started according to the experiment’s needs.
- Arrival process: Poisson, CBR.
- File popularity: Zipf, Weibull, etc.
- Two ways: centralize workload generation at the controller, or delegate workload generation to the clients for better performance.
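As a concrete illustration of one workload combination named above (Poisson arrivals, Zipf popularity), here is a small sketch of a request-schedule generator. It is not Lurch's code; function names and parameters are illustrative:

```python
import random

def zipf_pick(n, alpha, rng):
    """Sample a content index in 1..n with P(k) proportional to k**-alpha."""
    weights = [k ** -alpha for k in range(1, n + 1)]
    r = rng.random() * sum(weights)
    acc = 0.0
    for k, w in enumerate(weights, start=1):
        acc += w
        if r <= acc:
            return k
    return n  # guard against floating-point round-off

def poisson_workload(rate, n_contents, alpha, duration, seed=0):
    """Return a list of (arrival_time, content_id) request pairs.

    Requests arrive as a Poisson process of the given rate (requests/s)
    over [0, duration); each request picks a content by Zipf popularity.
    """
    rng = random.Random(seed)
    t, requests = 0.0, []
    while True:
        t += rng.expovariate(rate)  # exponential inter-arrival times
        if t >= duration:
            return requests
        requests.append((t, zipf_pick(n_contents, alpha, rng)))

schedule = poisson_workload(rate=10.0, n_contents=100, alpha=0.8, duration=5.0)
```

In the centralized mode such a schedule would be computed at the controller and pushed to clients; in the delegated mode each client would draw its own arrivals locally.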
13
Lurch

Measurements:
- Remotely control experiment statistics; bash start/stop commands are sent remotely.
- CCNx statistics (e.g. caching, forwarding) through logs.
- top / vmstat: monitor active processes’ CPU usage (e.g. ccnd).
- ifstat: monitor link rates.
- At the end of the experiment, statistics are collected and transferred to the user.
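After collection, the raw monitor outputs still need post-processing on the controller. A sketch of one such step, averaging an ifstat capture into link rates (the two-column KB/s format is an assumption about the capture, and this is not Lurch's code):

```python
def parse_ifstat(text):
    """Average an ifstat capture for one interface.

    Assumes sample lines with two numeric columns (KB/s in, KB/s out);
    header lines are skipped. Returns mean (in, out) rates in Mbit/s.
    """
    ins, outs = [], []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) < 2:
            continue  # e.g. the interface-name header
        try:
            kin, kout = float(parts[0]), float(parts[1])
        except ValueError:
            continue  # e.g. the "KB/s in  KB/s out" column header
        ins.append(kin)
        outs.append(kout)
    n = max(len(ins), 1)
    # KB/s -> Mbit/s: multiply by 8 bits/byte, divide by 1000.
    return (sum(ins) / n * 8 / 1000.0, sum(outs) / n * 8 / 1000.0)
```

A steady 1250 KB/s inbound sample, for instance, averages to 10 Mbit/s, directly comparable with the tc limits configured earlier.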
14
EXPERIMENTS
15
Experiments

- 20 different, simultaneous content requests (flows)
- 1 name prefix (ccnx:/) in all the FIBs
- (Diagram: tree topology, leaves 0-3 aggregating through nodes 4 and 5 toward node 6; link capacities 5M, 10M, 20M, 15M.)

Measured / optimal rate [Mbps] per link i -> j:
0 -> 4:  4.7 / 5
1 -> 4:  9.2 / 10
2 -> 5:  2.4 / 2.5
3 -> 5:  2.4 / 2.5
4 -> 6: 13.9 / 15
5 -> 6:  4.8 / 5
16
Experiments

- Large topologies: up to 100 physical nodes, more than 200 links
- Realistic scenarios: mobile backhaul
17
CONCLUSIONS AND FUTURE WORK
18
Conclusions and future work

With Lurch, we tested multiple ICN mechanisms in a large, real test-bed: forwarding and caching strategies, and congestion control.

Ongoing:
- The project started within the Orange-Bell Labs collaboration and now continues under the SystemX "Architecture de Réseaux" project.
- Future open source release.

Future work:
- Extend single-site experiments to grid-wide experiments.
- Exploit the power of the grid's servers by running two or more virtual machines per server.
- Adapt the tool to run different ICN prototypes (e.g. NDNx).