1 XenSocket: VM-to-VM IPC (virtual machine inter-process communication). John Linwood Griffin, Jagged Technology; Suzanne McIntosh, Pankaj Rohatgi, Xiaolan Zhang, IBM Research. Presented at ACM Middleware, November 28, 2007.
2 What we did: reduce work on the critical path.
Before XenSocket (VM 1 → Xen → Domain-0 → Xen → VM 2): put the packet into a page; ask Xen to remap the page; route the packet; ask Xen to remap the page again.
With XenSocket (VM 1 ↔ VM 2): allocate a pool of pages (once); ask Xen to share the pages (once); write into the pool; read from the pool.
3 The standard outline: What we did; (Why) we did what we did; (How) we did what we did; What we did (again).
4 IBM is building a stream processing system with high-throughput requirements. Video: an enormous volume of data enters the system; independent nodes process and forward data objects. Designed for isolated, audited, and profiled execution environments.
5 x86 virtualization technology provides isolation in our security architecture. Diagram: processing nodes 1–4 each run in their own VM (VM 1–VM 4) on a single Xen host, alongside other physical nodes.
6 Using the Xen virtual network resulted in low throughput at maximum CPU usage. Native Linux, Process 1 to Process 2 over a UNIX socket: 14 Gbit/s. VM 1 to VM 2 through Xen and Domain-0 over a TCP socket: 0.14 Gbit/s, with VM 1 at 100% CPU, Domain-0 at 20% CPU, and VM 2 at 100% CPU.
7 Before XenSocket. Our belief: the root causes are Xen hypercalls and the network stack. On the path VM 1 → Xen → Domain-0 → Xen → VM 2: the packet is put into a page, using only 1.5 KB of the 4 KB page (so roughly 2.5 KB of every transferred page is wasted); Xen is asked to swap pages; the packet is routed; Xen is asked to swap pages again. A Xen hypercall may be invoked after only one packet is queued, and victim pages must be zeroed.
8 The standard outline: What we did; (Why) we did what we did; (How) we did what we did; What we did (again).
9 With XenSocket. The XenSocket hypothesis: a cooperative memory buffer improves throughput. Between VM 1 and VM 2: allocate a 128 KB pool of pages; ask Xen to share the pages; no per-packet processing; writes are visible immediately; hypercalls are still required for signaling (but fewer); pages are reused in a circular buffer (a sketch follows below).
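To make the circular-buffer idea concrete, here is a minimal sketch in plain C. It is illustrative only: in the real XenSocket the pool is built from Xen grant-shared pages inside a kernel module, a blocked receiver is woken through a signal via Xen, and real cross-VM code needs memory barriers; the struct layout and function names below are assumptions.

#include <stddef.h>
#include <stdint.h>

#define POOL_SIZE (128 * 1024)    /* 128 KB pool of pages, as on this slide */

/* Hypothetical layout of the shared region (in XenSocket this memory is
 * shared between the two VMs; here it is ordinary memory). */
struct ring {
    volatile uint32_t head;       /* total bytes ever written  */
    volatile uint32_t tail;       /* total bytes ever consumed */
    uint8_t data[POOL_SIZE];
};

/* Writer: copy as much as fits, wrapping around the pool; the data is
 * visible to the reader as soon as head is advanced. */
static size_t ring_write(struct ring *r, const void *buf, size_t len)
{
    uint32_t head = r->head, tail = r->tail;
    size_t free_space = POOL_SIZE - (size_t)(head - tail);
    size_t n = len < free_space ? len : free_space;

    for (size_t i = 0; i < n; i++)
        r->data[(head + i) % POOL_SIZE] = ((const uint8_t *)buf)[i];

    r->head = head + (uint32_t)n; /* publish; signal the peer only if it is blocked */
    return n;
}

/* Reader: mirror image of ring_write. */
static size_t ring_read(struct ring *r, void *buf, size_t len)
{
    uint32_t head = r->head, tail = r->tail;
    size_t avail = (size_t)(head - tail);
    size_t n = len < avail ? len : avail;

    for (size_t i = 0; i < n; i++)
        ((uint8_t *)buf)[i] = r->data[(tail + i) % POOL_SIZE];

    r->tail = tail + (uint32_t)n;
    return n;
}

Because POOL_SIZE is a power of two, the free-running 32-bit counters wrap consistently with the modulo indexing; steady-state writes and reads touch only the shared pool, and a hypercall is needed only to wake a blocked peer.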
10 Caveat emptor. We used Xen 3.0; the latest is Xen 3.1, where Xen networking is reportedly improved, but the shared-memory concepts remain valid. Released under the GPL as XVMSocket, http://sourceforge.net/projects/xvmsocket/; the community is porting it to Xen 3.1.
11 Sockets interface; a new socket family is used to set up the shared memory.
Server, standard INET: socket(); bind(sockaddr_inet) with the local port #; listen(); accept().
Server, XenSocket: socket(); bind(sockaddr_xen); the system returns a grant # for the client.
Client, standard INET: socket(); connect(sockaddr_inet) with the remote address and remote port #.
Client, XenSocket: socket(); connect(sockaddr_xen) with the remote VM # and remote grant #.
(A user-space sketch follows below.)
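The following is a minimal user-space sketch of the two XenSocket setup paths above. The AF_XEN constant, the sockaddr_xen field names, and how the grant # gets back to the caller are assumptions for illustration; the real definitions come from the XVMSocket module and may differ.

#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

#define AF_XEN 32                 /* hypothetical; the real value is registered by the module */

struct sockaddr_xen {             /* hypothetical address layout */
    sa_family_t sxen_family;      /* AF_XEN */
    uint16_t    remote_domid;     /* remote VM (domain) number */
    uint32_t    remote_gref;      /* remote grant reference */
};

/* Server side: bind() sets up the shared pool; per slide 11 the system
 * returns a grant # for the client (how it is retrieved, e.g. from the
 * address structure, is a detail of the module). */
int xensocket_server(uint16_t client_domid)
{
    struct sockaddr_xen addr = {0};
    int fd = socket(AF_XEN, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    addr.sxen_family = AF_XEN;
    addr.remote_domid = client_domid;
    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }
    /* ...pass the returned grant # to the client out of band... */
    return fd;
}

/* Client side: connect() attaches to the pages named by the server's grant #. */
int xensocket_client(uint16_t server_domid, uint32_t gref)
{
    struct sockaddr_xen addr = {0};
    int fd = socket(AF_XEN, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    addr.sxen_family = AF_XEN;
    addr.remote_domid = server_domid;
    addr.remote_gref  = gref;
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}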
12 After setup, steady-state operation needs little (if any) synchronization. Over the XenSocket, VM 1 calls write(“XenSocket”) and VM 2 calls read(3), receiving “Xen”. If the receiver is blocked, a signal is sent via Xen. (A usage sketch follows below.)
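Continuing the hypothetical setup sketch above, steady-state use looks like ordinary socket I/O; the file descriptors are assumed to come from xensocket_server() and xensocket_client().

#include <stdio.h>
#include <string.h>
#include <unistd.h>

void sender(int fd)                        /* e.g. runs in VM 1 */
{
    const char msg[] = "XenSocket";
    if (write(fd, msg, strlen(msg)) < 0)   /* bytes go straight into the shared pool */
        perror("write");
}

void receiver(int fd)                      /* e.g. runs in VM 2 */
{
    char buf[4] = {0};
    if (read(fd, buf, 3) < 0)              /* yields "Xen"; blocks only if the pool is empty */
        perror("read");
}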
13 Design goal (future work): support for efficient local multicast. Over one XenSocket buffer, VM 1 writes “XenSocket”; VM 2 reads 3 bytes and sees “Xen”, while VM 3 reads 5 bytes and sees “XenSo”. Future writes wrap around and block on the first unread page.
14 The standard outline: What we did; (Why) we did what we did; (How) we did what we did; What we did (again).
15 Figure 5: Pretty good performance. Bandwidth versus message size (KB, log scale): UNIX socket 14 Gbit/s, XenSocket 9 Gbit/s, INET socket 0.14 Gbit/s.
16 Figure 6: Interesting cache effects. Bandwidth versus message size (MB, log scale) for the UNIX socket, XenSocket, and the INET socket.
17 Throughput is limited by CPU usage; it is advantageous to offload Domain-0. XenSocket: 9 Gbit/s, with VM 1 at 100% CPU, Domain-0 at 1% CPU, and VM 2 at 100% CPU. TCP socket over the virtual network: 0.14 Gbit/s, with VM 1 at 100% CPU, Domain-0 at 20% CPU, and VM 2 at 100% CPU.
18 Trade-off: adjusted communication integrity and a relaxation of pure VM isolation. Possible solution: use a proxy for pointer updates along the reverse path (VM 1 → VM 2 → VM 3). But now this path is bidirectional(?). Any master's students looking for a project?
19 Potential memory leak: Xen didn’t (doesn’t?) support page revocation. Setup: VM 1 shares pages with VM 2. Scenario #1: VM 2 releases the pages. Scenario #2: VM 2 never releases them, so VM 1 cannot safely reuse the pages.
20 Xen shared memory: Hot topic!
XenSocket: Middleware ’07 | make a better virtual network
MVAPICH-ivc, Huang and colleagues (Ohio State, USA): SC ’07 | what we did, but with a custom HPC API
XWay, Kim and colleagues (ETRI, Korea): ’07 | what we did, but hidden behind TCP sockets
Menon and colleagues (HP, USA): VEE ’05, USENIX ’06 | make the virtual network better
21 Conclusion: XenSocket is awesome. Shared memory enables high-throughput VM-to-VM communication in Xen (a broadly applicable result?). John Linwood Griffin, John.Griffin @ JaggedTechnology.com. Also here at Middleware: Sue McIntosh.