1
Virtualizing a Wireless Network: The Time-Division Approach Suman Banerjee, Anmol Chaturvedi, Greg Smith, Arunesh Mishra Contact email: suman@cs.wisc.edu http://www.cs.wisc.edu/~suman Department of Computer Sciences University of Wisconsin-Madison Wisconsin Wireless and NetworkinG Systems (WiNGS) Laboratory
2
Virtualizing a wireless network: virtualize the resources of a node, and virtualize the medium – particularly critical in wireless environments. Approaches: time, frequency, space, code. (Courtesy: ORBIT)
3
Virtualizing a wireless network: virtualize the resources of a node, and virtualize the medium – particularly critical in wireless environments. Approaches: time, frequency, space, code. [Figure: Expt-1, Expt-2, and Expt-3 partitioned along the time axis and along the space/frequency/code axes.]
4
TDM-based virtualization: need synchronous behavior between node interfaces – between transmitter and receiver, and between all interferers and the receiver. [Figure: nodes A, B, C, D switching between Expt-1 and Expt-2 across time slices.]
5
Problem statement: to create a TDM-based virtualized wireless environment as an intrinsic capability in GENI. This work is in the context of TDM-virtualization of ORBIT.
6
Current ORBIT schematic: a Controller (nodeHandler + UI) drives each Node's nodeAgent. Manual scheduling; a single experiment on the grid.
7
Our TDM-ORBIT schematic: the Controller (nodeHandler + UI) adds a Master Overseer; each Node adds a Node Overseer and runs multiple VMs, each with its own nodeAgent (VM = User-Mode Linux). Virtualization: abstraction + accounting. Fine-grained scheduling for multiple experiments on the grid; asynchronous submission.
8
Overseers: the Master Overseer on the Controller (UI, experiment queue, scheduler, experiment submission) is the policy-maker that governs the grid; it sends multicast commands to the nodes and receives reporting/feedback. The Node Overseer on each node adds/removes experiment VMs, swaps experiment VMs, and monitors node health and experiment status – mostly mechanism, no policy.
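Below is a minimal sketch, in C, of how a node overseer might dispatch the master's commands. The command names (ADD, REMOVE, SWAP) and the uml_* helpers are illustrative assumptions; the slides only state that the node overseer adds, removes, and swaps experiment VMs on the master's instruction.

```c
/* Hypothetical sketch of node-overseer command dispatch.  Command names
 * (ADD, REMOVE, SWAP) and the uml_* helpers are assumptions for
 * illustration; the deck only says the node overseer adds/removes/swaps
 * experiment VMs. */
#include <stdio.h>
#include <string.h>

static void uml_start(const char *expt) { printf("starting VM for %s\n", expt); }
static void uml_stop(const char *expt)  { printf("stopping VM for %s\n", expt); }

/* Handle one command line received from the master overseer. */
static void handle_command(const char *line)
{
    char op[16], a[64], b[64];
    int n = sscanf(line, "%15s %63s %63s", op, a, b);

    if (n >= 2 && strcmp(op, "ADD") == 0) {
        uml_start(a);                 /* bring up a new experiment VM   */
    } else if (n >= 2 && strcmp(op, "REMOVE") == 0) {
        uml_stop(a);                  /* tear down an experiment VM     */
    } else if (n == 3 && strcmp(op, "SWAP") == 0) {
        uml_stop(a);                  /* pause the outgoing experiment  */
        uml_start(b);                 /* resume the incoming experiment */
    } else {
        fprintf(stderr, "unknown command: %s\n", line);
    }
}

int main(void)
{
    handle_command("ADD expt-1");
    handle_command("SWAP expt-1 expt-2");
    handle_command("REMOVE expt-2");
    return 0;
}
```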
9
Virtualization: why not process-level virtualization? – No isolation: experiments must share the FS, address space, network stack, etc. – No cohesive "schedulable entity". What other alternatives are there? – Other virtualization platforms (VMware, Xen, etc.).
10
TDM: Virtualization. Each experiment runs inside a User-Mode Linux VM. Wireless configuration: the guest has no way to read or set the wifi config, so wireless extensions in the virtual driver relay ioctls to the host kernel. [Diagram: iwconfig in the guest issues an ioctl() to virt_net in the UML kernel, which is tunneled to net_80211 in the host kernel.]
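As a rough illustration of the host-side end of this relay, the sketch below applies an ESSID change using the same wireless-extensions ioctl that iwconfig issues (SIOCSIWESSID). The tunnel transport from the guest's virt_net driver is omitted, and the interface name and ESSID are example values.

```c
/* Host-side sketch: apply a wireless ioctl that was tunneled out of the
 * guest (e.g. an ESSID set by iwconfig inside the UML VM).  The tunnel
 * itself is omitted; "wlan0" and "expA" are example values. */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/wireless.h>

static int set_essid(const char *ifname, char *essid)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* any socket carries wireless ioctls */
    if (fd < 0)
        return -1;

    struct iwreq req;
    memset(&req, 0, sizeof(req));
    strncpy(req.ifr_name, ifname, IFNAMSIZ - 1);
    req.u.essid.pointer = essid;
    req.u.essid.length  = strlen(essid);
    req.u.essid.flags   = 1;                   /* ESSID is active */

    int rc = ioctl(fd, SIOCSIWESSID, &req);    /* same request iwconfig issues */
    close(fd);
    return rc;
}

int main(void)
{
    char essid[] = "expA";
    if (set_essid("wlan0", essid) != 0)
        perror("SIOCSIWESSID");
    return 0;
}
```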
11
TDM: Routing. [Diagram: nodeHandler commands arrive by multicast on the experiment channel at the node's ingress eth interface (192.169.x.y); an iptables DNAT rule rewrites 192.169 to 192.168, and mrouted forwards them to all VMs in the multicast group (192.168.x.y); the node's routing table and the wifi interface (10.10.x.y) carry the experiment traffic.]
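Purely for illustration, the 192.169 -> 192.168 rewrite can be expressed as a tiny C function; on the real node this translation happens in the kernel via the iptables DNAT rule, and mrouted does the multicast forwarding.

```c
/* Illustration of the handler-facing 192.169.x.y -> per-VM 192.168.x.y
 * rewrite that the node's iptables DNAT rule performs; the real system
 * does this in the kernel, not in userspace. */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

static struct in_addr dnat_to_vm(struct in_addr handler_side)
{
    uint32_t a = ntohl(handler_side.s_addr);
    if ((a >> 16) == ((192u << 8) | 169u))                  /* only 192.169.0.0/16 */
        a = (a & 0x0000FFFFu) | (192u << 24) | (168u << 16);
    struct in_addr vm_side = { .s_addr = htonl(a) };
    return vm_side;
}

int main(void)
{
    struct in_addr in;
    inet_aton("192.169.3.7", &in);
    printf("192.169.3.7 -> %s\n", inet_ntoa(dnat_to_vm(in)));  /* 192.168.3.7 */
    return 0;
}
```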
12
Synchronization challenges Without tight synchronization, experiment packets might be dropped or misdirected Host: VMs should start/stop at exactly the same time –Time spent restoring wifi config varies –Operating system is not an RTOS –Ruby is interpreted and garbage-collected Network latency for overseer commands –Mean: 3.9 ms, Median: 2.7 ms, Std-dev: 6 ms Swap time between experiments
13
Synchronization: Swap time I Variables involved in swap time –Largest contributor: wifi configuration time More differences in wifi configuration = longer config time –Network latency for master commands –Ruby latency in executing commands
14
Synchronization: Swap Time II. We can eliminate wifi config latency and reduce the effects of network and Ruby latencies with "swap gaps" – a configuration timing buffer during which VMs are not running, but incoming packets are still received and routed to the right place.
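A sketch of how swaps could be aligned to absolute wall-clock boundaries with such a gap in between; the 30-second slice and 2-second gap are made-up example values, not figures from the talk.

```c
/* Sketch: align experiment swaps to absolute wall-clock boundaries so
 * that every node starts/stops at the same instant, and leave a "swap
 * gap" for wifi reconfiguration.  SLICE_SEC and GAP_SEC are example
 * values, not numbers from the deck. */
#include <stdio.h>
#include <time.h>

#define SLICE_SEC 30   /* one experiment's time slice (example)             */
#define GAP_SEC    2   /* swap gap: VMs paused, wifi reconfigured (example) */

int main(void)
{
    for (int slot = 0; slot < 3; slot++) {
        struct timespec now;
        clock_gettime(CLOCK_REALTIME, &now);

        /* Next slice boundary shared by all nodes with synchronized clocks. */
        time_t period   = SLICE_SEC + GAP_SEC;
        time_t boundary = ((now.tv_sec / period) + 1) * period;

        struct timespec stop  = { .tv_sec = boundary,           .tv_nsec = 0 };
        struct timespec start = { .tv_sec = boundary + GAP_SEC, .tv_nsec = 0 };

        clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, &stop, NULL);
        printf("slot %d: stop current VM, restore wifi config\n", slot);
        /* ... wifi reconfiguration happens inside the swap gap ... */
        clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, &start, NULL);
        printf("slot %d: gap over, start next experiment's VM\n", slot);
    }
    return 0;
}
```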
15
Ruby Network Latency. Inside a VM, Ruby shows anomalous network latency – example below: tcpdump output interleaved with a simple Ruby recv loop. No delays with C; the cause is as yet unknown.
00.000 IP 10.11.0.1.4266 > 224.4.0.1.9006: UDP, length 30
00.035 received 30 bytes
01.037 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 30
01.065 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 56
01.143 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 40
01.143 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 45
01.143 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 44
11.018 IP 10.11.0.1.4266 > 224.4.0.1.9006: UDP, length 30
12.071 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 45
23.195 IP 10.11.0.1.4266 > 224.4.0.1.9006: UDP, length 30
24.273 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 45
26.192 received 30 bytes
34.282 IP 10.11.0.1.4266 > 224.4.0.1.9006: UDP, length 30
35.332 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 45
40.431 received 56 bytes
40.435 received 40 bytes
40.438 received 45 bytes
40.450 received 44 bytes
40.458 received 30 bytes
40.462 received 45 bytes
40.470 received 30 bytes
40.476 received 45 bytes
40.480 received 30 bytes
40.484 received 45 bytes
(24+ secs between packet arrival and delivery to Ruby)
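For comparison, this is roughly what an equivalent C receive loop looks like (using the 224.4.0.1 group and port 9006 seen in the trace); per the observation above, C shows no such delays.

```c
/* Minimal C counterpart of the Ruby recv loop: join the 224.4.0.1:9006
 * multicast group from the trace above and timestamp each datagram. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(9006);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    struct ip_mreq mreq;                       /* join the experiment's multicast group */
    mreq.imr_multiaddr.s_addr = inet_addr("224.4.0.1");
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    char buf[2048];
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n < 0)
            break;
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);    /* timestamp the moment of delivery */
        printf("%ld.%03ld received %zd bytes\n",
               (long)ts.tv_sec, ts.tv_nsec / 1000000, n);
    }
    close(fd);
    return 0;
}
```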
16
UI screenshots. [Figures: the UI during time slice 1 and time slice 2.]
17
Performance: Runtime Breakdown Booting a VM is fast Each phase slightly longer in new system –Ruby network delay causes significant variance in data set –Handler must approximate sleep times
18
Performance: Overall Duration Advantages –Boot duration Disadvantages –Swap gaps
19
Future work: short term Improving synchrony between nodes –More robust protocol –Porting Ruby code to C, where appropriate Dual interfaces –Nodes equipped with two cards –Switch between them during swaps, so that interface configuration can be preloaded at zero cost
20
Dual interfaces. [Diagram: wifi0 (ESSID "expA", 802.11b, channel 6) and wifi1 (ESSID "expB", 802.11g, channel 11); the Node Overseer configures the cards and tells the routing logic which card is current, so each VM's nodeAgent traffic follows the active interface.]
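A sketch of the swap logic this enables: the idle card is preconfigured while the other carries the live experiment, so a swap reduces to flipping the routing. route_via() and preload_config() are hypothetical placeholders, not ORBIT functions.

```c
/* Sketch of dual-interface swapping: the idle card is configured ahead
 * of time, so the swap only flips routing.  route_via() and
 * preload_config() are hypothetical placeholders. */
#include <stdio.h>

static const char *cards[2] = { "wifi0", "wifi1" };
static int live = 0;                 /* index of the card carrying live traffic */

static void route_via(const char *card)
{
    printf("routing experiment traffic via %s\n", card);
}

static void preload_config(const char *card, const char *essid, int channel)
{
    printf("preloading %s: essid %s, channel %d\n", card, essid, channel);
}

/* Flip to the already-configured idle card, then start preparing the
 * now-idle card for the experiment after next. */
static void swap_to(const char *next_essid, int next_channel)
{
    live = 1 - live;
    route_via(cards[live]);              /* zero-cost: configured during the previous slice */
    preload_config(cards[1 - live], next_essid, next_channel);
}

int main(void)
{
    preload_config(cards[1], "expB", 11);   /* prepared while expA runs on wifi0 */
    route_via(cards[0]);
    swap_to("expA", 6);                     /* expB goes live; wifi0 reloads expA's config */
    return 0;
}
```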
21
Future work: long term Greater scalability –Allow each experiment to use, say, 100s of nodes to emulate 1000s of nodes –Intra-experiment TDM virtualization –Initial evaluation is quite promising
22
Intra-experiment TDM Any communication topology can be modeled as a graph
23
Intra-experiment TDM We can emulate all communication on the topology accurately, as long as we can emulate the reception behavior of the node with the highest degree
24
Intra-experiment TDM: time-share of different logical nodes onto the physical facility nodes (testbed of 8 nodes; time unit 1).
25
Intra-experiment TDM: time-share of different logical nodes onto the physical facility nodes (testbed of 8 nodes; time unit 2).
26
Intra-experiment TDM: time-share of different logical nodes onto the physical facility nodes (testbed of 8 nodes; time unit 3).
27
Some challenges How to perform the scheduling? –A mapping problem How to achieve the right degree of synchronization? –Use of a fast backbone and real-time approaches What are the implications of slowdown? –Bounded by the number of partitions
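As a toy illustration of the mapping problem and the slowdown bound, the sketch below round-robins L logical nodes onto P physical nodes across ceil(L/P) time units. It ignores the topology/reception constraint from slide 23, and the logical topology size is a made-up example.

```c
/* Minimal sketch of the mapping problem: assign L logical nodes to P
 * physical testbed nodes across time units.  This naive round-robin
 * split ignores the topology constraint (slide 23's highest-degree
 * reception requirement); it only illustrates that the slowdown is
 * bounded by the number of partitions, ceil(L / P). */
#include <stdio.h>

#define PHYSICAL 8     /* testbed of 8 nodes, as in the slides       */
#define LOGICAL  20    /* example size of the emulated topology      */

int main(void)
{
    int partitions = (LOGICAL + PHYSICAL - 1) / PHYSICAL;   /* ceil(L/P) */
    printf("slowdown bound: %d time units per emulated round\n", partitions);

    for (int l = 0; l < LOGICAL; l++) {
        int time_unit = l / PHYSICAL;    /* which slice this logical node runs in */
        int phys      = l % PHYSICAL;    /* which physical node hosts it          */
        printf("logical node %2d -> physical node %d, time unit %d\n",
               l, phys, time_unit);
    }
    return 0;
}
```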
28
Conclusions: increased utilization through sharing. More careful tuning is needed for smaller time slices – chipset vendor support is needed for very small slices. Non-real-time apps, or apps with coarse real-time needs, are best suited to this virtualization approach.