A study of introduction of the virtualization technology into operator consoles
T. Ohata, M. Ishii / SPring-8
ICALEPCS 2005, October 10-14, 2005, Geneva, Switzerland
Contents
- Virtualization technology overview
- Categories of virtualization technology
- Performance evaluation: how many virtual machines can run on a server
- Introduction into the control system
- System setup
- Conclusion
What is virtualization technology?
Overview of virtualization technology
- Originated from the IBM System/360 mainframe
- Enables consolidation of many computers onto a small number of host computers
- Each virtual machine (VM) has independent resources (CPU, disks, MAC address, etc.), like a stand-alone computer
[Diagram: VMs on a host computer, each with its own CPU, network card, memory, and disk]
Why do we need virtualization technology?
Problems of the present control system
- Network-distributed computing is the standard method: it lets us construct an efficient control system
- But it leads to computer proliferation: we have over 200 computers in the beamline control system alone
- Maintenance tasks keep increasing: version upgrades, patching, etc.
- We face increasing hardware failures, maintained by only a few staff
Virtualization technology has revived
- Consolidation onto general-purpose servers reduces the number of computers
- We can cut hardware costs and maintenance costs drastically
Categories of virtualization technology - three approaches -
- Resource multiplexing: Xen*, LPAR (IBM), nPartition (HP)
- Emulation: VMware*, VirtualPC, QEMU, Bochs, User-Mode-Linux*, coLinux
- Application shielding: Solaris containers*, jail, chroot
(* evaluated products)
1. Resource multiplexing
- Originated from the mainframe; major UNIX vendors have released several products
- A layer called a hypervisor (or virtual machine monitor) multiplexes the hardware resources (CPU, memory, etc.)
- The guest OS needs a small kernel patch to suit the layer interface
- Low overhead
[Diagram: guest OSes running on a resource-multiplexing layer above the hardware]
2. Emulation
- Many emulators exist for PC/AT, 68K, and game machines
- Suitable for development and debugging
- Can run an unmodified OS
- Some overhead in translating instructions
[Diagram: guest OS running on an emulation layer above the host OS and hardware]
3. Application shielding
- Developed for web hosting at ISPs (internet service providers) to provide separate computing environments
- A partition makes its computing space invisible to the other partitions
- No overhead
[Diagram: shielded partitions within a single OS on the hardware]
Performance evaluation
How many VMs can run on a server computer?
Evaluated products
- VMware Workstation 4.5 — host OS: Linux; guest OS: Linux; commercial, supports many OSes
- User-Mode-Linux (UML) — host OS: Linux; guest OS: Linux (um); Linux on x86 only
- Solaris containers — host OS: Solaris 10; SPARC and x86; FSS*, CPU pinning*
- Xen 2.0.6 — host OS: Linux-2.6-xen0; guest OS: Linux-2.6-xenU; FSS, CPU pinning, live migration*
(* see next slide)
Special functions
- Fair Share Scheduler (FSS): a scheduling policy that distributes CPU usage equally among tasks
- CPU pinning: pins a VM to a specific CPU (effective in an SMP environment); Linux has an "affinity" function, but it can pin only a process
- Live migration: VMs migrate to another host dynamically and can keep running during the migration
[Diagram: a VM live-migrating from Host 1 to Host 2]
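The per-process "affinity" function mentioned for Linux can be sketched with Python's wrapper around `sched_setaffinity`. This is a minimal illustration of pinning a process (not a whole VM) to a CPU; the fallback branch for platforms without affinity support is an assumption of this sketch, not something from the slides.

```python
import os

def pin_process(pid: int, cpus: set) -> set:
    """Pin process `pid` to the given CPU set and return the resulting affinity."""
    if hasattr(os, "sched_setaffinity"):
        # Linux "affinity" function: restrict the process to these CPUs.
        os.sched_setaffinity(pid, cpus)
        return os.sched_getaffinity(pid)
    # Platforms without affinity control: report the request unchanged.
    return set(cpus)

# Pin the current process (pid 0 means "self") to CPU 0:
allowed = pin_process(0, {0})
```

A hypervisor's CPU pinning does the analogous thing one level down, binding a VM's virtual CPUs to physical ones.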
Measurement procedure
- Measure the response time between a virtual machine and a VME computer using a MADOCA application (MADOCA: Message And Database Oriented Control Architecture)
- Communication uses message queues (SysV IPC) and the ONC-RPC network protocol (RPC: Remote Procedure Call)
- The message size is 350 bytes, including the RPC header and the Ethernet frame header
Measurement bench
- 1~10 VMs run on a single server computer (dual Xeon 3.0 GHz); a MADOCA client runs on each VM
- 1~10 MADOCA servers are on the network
- Measure the response time
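The round-trip measurement can be sketched as follows. This substitutes a plain TCP echo for MADOCA's ONC-RPC transport (which is not reproduced here); only the 350-byte message size and the request/response timing loop follow the slides, everything else is illustrative.

```python
import socket
import threading
import time

MSG_SIZE = 350  # message size from the slides, incl. RPC and Ethernet headers

def echo_server(srv: socket.socket) -> None:
    """Stand-in for the MADOCA server on the VME side: echo every message."""
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(MSG_SIZE)
            if not data:
                break
            conn.sendall(data)

def average_response_time(n_messages: int = 100) -> float:
    """Send n_messages fixed-size requests and return the mean round-trip time [s]."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

    cli = socket.socket()
    cli.connect(srv.getsockname())
    payload = b"x" * MSG_SIZE
    start = time.perf_counter()
    for _ in range(n_messages):
        cli.sendall(payload)
        received = 0
        while received < MSG_SIZE:          # wait for the complete reply
            received += len(cli.recv(MSG_SIZE - received))
    elapsed = time.perf_counter() - start
    cli.close()
    srv.close()
    return elapsed / n_messages
```

In the real bench this loop runs inside each VM against MADOCA servers on the network, so the averages also capture hypervisor and scheduling overhead.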
Dependence of the average response time on the number of VMs
- VMware and UML become worse with many VMs
- With 5~6 VMs, Solaris containers and Xen are comparable to the HP workstation
- (The HP B2000, our present operator console, is shown as a reference)
[Plot: average response time [sec] vs. number of VMs]
Statistics of the response time
[Histogram: response time [msec] at 10 VMs; lower is better]
Limit of hardware resources - CPU utilization -
- CPU utilization of the host of the VMs (Solaris containers)
- No idle time remains at 5~6 VMs: 5~6 VMs are optimum
[Plot: CPU utilization (%) vs. number of VMs]
Limit of hardware resources - network interface card (NIC) utilization -
- Traffic on the GbE network interface card (Solaris containers)
- Utilization is only a few percent of the full bandwidth
- The saturation therefore comes from CPU overload, not from the network
[Plot: NIC utilization (MB/s) vs. number of VMs]
Limit of hardware resources - page fault frequency -
- Page faults waste CPU time and deteriorate performance (Solaris containers)
- The saturation comes from TLB misses and swap-out
[Plot: page fault frequency vs. number of VMs]
How many VMs are optimum?
- 5~6 VMs are optimum on our server (dual Xeon 3.0 GHz)
If you want to run more VMs:
- A large page size on a large-address-space architecture is important: Physical Address Extension (PAE) or a 64-bit architecture
- Many-core CPUs are attractive: one CPU core is enough for 2~3 VMs
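The sizing rule above reduces to simple arithmetic; a sketch, treating each CPU of the dual Xeon as one core (hyper-threading ignored for simplicity):

```python
def optimum_vms(cpu_cores: int, vms_per_core: int = 3) -> int:
    """Rule of thumb from the measurements: one core handles 2~3 VMs."""
    return cpu_cores * vms_per_core

# Dual Xeon 3.0 GHz test server, two CPUs:
low, high = optimum_vms(2, 2), optimum_vms(2, 3)
# 4~6 VMs per server, consistent with the measured optimum of 5~6
```

The same rule drives the consolidation plan later in the talk: more cores per server means more consoles per box.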
Introduction into the control system
- We installed virtualization technology into a beamline control system
- We use Xen on Linux PC servers, replacing the HP operator consoles
- The control application programs were ported onto the VMs (Linux)
- We installed a pair of Xen hosts and an NFS server that keeps the VM image files
System setup and live migration
- Primary and secondary Xen hosts run the VMs with the control programs; an X server acts as a thin client
- The VM images are kept on an NFS server reached over Gigabit Ethernet
- Live migration between the hosts takes only a few hundred milliseconds, so the consoles remain usable during maintenance and a host can be shut down
[Diagram: VMs migrating from the primary to the secondary Xen host, with the VM images on the NFS server]
Future plan - high-availability cluster -
- The migration function of Xen is not effective when a host computer dies suddenly
- We are therefore studying a high-availability Single System Image (SSI) cluster configuration with Xen
[Diagram: structure of OpenSSI with Xen: VMs on Xen hypervisors joined into a single-system-image cluster]
Future plan (cont'd) - redundant storage -
- The NFS server is a single point of failure
- We will introduce a redundant storage system such as SAN, iSCSI, or NAS
[Diagram: primary and secondary Xen hosts connected to SAN storage via an FC switch and SAN fibers]
Cost estimation
- About 50 HP-UX workstations will be replaced by 8 PC-based servers plus redundant storage (6 VMs run on each PC server)
- 75% of the total cost can be saved (hardware only)
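The consolidation arithmetic can be checked in a few lines. The counts come from the slide; the unit prices below are purely hypothetical, chosen only so the ratio reproduces the quoted 75% hardware saving (the slide gives no actual prices).

```python
# Figures from the slide:
workstations = 50      # HP-UX operator consoles to be replaced
servers = 8            # PC-based servers
vms_per_server = 6

total_vms = servers * vms_per_server          # 48 VMs, roughly one per console

# Hypothetical unit prices, for illustration only:
workstation_price = 4000
server_price = 6250

saving = 1 - (servers * server_price) / (workstations * workstation_price)
print(total_vms, f"{saving:.0%}")             # prints: 48 75%
```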
Conclusion
- We studied several virtualization technologies for use as operator consoles
- We measured the performance of several virtualization environments and verified that they are stable
- 5~6 VMs are optimum for one server computer
- We introduced Xen, which has a live migration function, into the beamline control system
- We plan to apply Xen to more beamlines
Thank you for your attention.
[Screenshots: consoles running on the primary Xen host and on the secondary Xen host]