COSC6376 Cloud Computing
Lecture 11: Virtualization
Instructor: Weidong Shi (Larry), PhD
Computer Science Department, University of Houston
Outline for today
- Plan
- HW2
- Virtualization
Plan
- Virtualization: virtual machines, containers, I/O devices, networks, storage
- Software defined X: software defined network, software defined infrastructure
- Open source DIY cloud: OpenStack
HW2
- OpenStreetMap
- HBase
- Pokémon GO
Map
Two technologies for agility
- Virtualization: "The ability to run multiple operating systems on a single physical system and share the underlying hardware resources"*
- Cloud computing: "The provisioning of services in a timely (near instant), on-demand manner, to allow the scaling up and down of resources"**
* VMware white paper, Virtualization Overview
** Alan Williamson, quoted in Cloud BootCamp, March 2009
The traditional server concept: one OS and one workload per physical box
- Web server: Windows + IIS
- App server: Linux + GlassFish
- DB server: Linux + MySQL
- Email server: Windows + Exchange
The virtual server concept
A Virtual Machine Monitor (VMM) layer sits between the guest OS and the hardware.
Hardware virtual machines (VMs)
- Without VMs: a single OS owns all the hardware resources (processors, memory, graphics, network, storage, keyboard/mouse).
- With VMs: a new layer of software, the VM Monitor (VMM), runs on the physical host hardware, and multiple guest OSes (Guest OS0, Guest OS1, ...), each running its own applications, share the hardware.
Virtualization enables multiple operating systems to run on the same platform.
© 2006 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
The virtual server concept
- Virtual servers encapsulate the server software away from the hardware: the OS, the applications, and the storage for that server.
- Servers end up as mere files stored on a physical box or in enterprise storage.
- A virtual server can be serviced by one or more hosts, and one host may house more than one virtual server.
Virtual server migration
The virtual server concept
- Virtual servers can be scaled easily: if administrators find that the resources supporting a virtual server are overtaxed, they can adjust the amount of resources allocated to it.
- Server templates can be created in a virtual environment and used to create multiple identical virtual servers.
- Virtual servers can be migrated from host to host almost at will.
How long has virtualization been around?
- A recent development: ~5 years?
- A while: ~10 years?
- Older than Microsoft: ~30 years?
- A lot longer: more than 40 years?
Would you believe ~45-50 years?
A brief history of virtualization
- 1950s: IBM and MIT collaborate on the Compatible Time Sharing System (CTSS). Christopher Strachey publishes "Time Sharing in Large Fast Computers" at the International Conference on Information Processing.
- 1960s: IBM's M44/44X Project at the IBM Watson Research Center evaluates time sharing concepts based on virtual machines. MIT's Project MAC begins, focused on the design and implementation of a better time sharing system.
- 1970s: Robert P. Goldberg authors "Survey of Virtual Machines Research," describing the shortcomings of typical third-generation architectures and multiprogramming operating systems.
- 1988: Connectix is founded.
- 1998: VMware is founded.
- 1999: VMware delivers VMware Workstation.
- 2001: VMware delivers VMware GSX Server and VMware ESX Server.
- 2003: Microsoft acquires Connectix to offer virtualization solutions. VMware offers VMware VirtualCenter with VMotion. The University of Cambridge describes Xen in a paper and provides the first public open source release.
- 2005: Intel introduces Intel Virtualization Technology on client and server platforms.
Virtualization status
- Offerings from many companies, e.g., VMware, Microsoft, Citrix, Oracle.
- Hardware support fits well with the move to 64-bit (very large memories) and multi-core (concurrency) processors.
- Intel VT (Virtualization Technology) provides hardware support for the Virtual Machine Monitor layer.
- Virtualization is now a well-established technology.
Virtualization challenges
- Complexity: CPU virtualization requires binary translation or paravirtualization, and I/O devices must be emulated in software.
- Functionality: paravirtualization may limit the supported guest OSes; guest OSes "see" only a simulated platform and simulated I/O devices.
- Reliability and security: I/O device drivers run as part of the host OS or hypervisor, and there is no protection from errant DMA, which can corrupt memory.
- Performance: overheads of address translation in software; extra memory is required (e.g., for translated code and shadow tables).
Types of virtualization
- Emulation and dynamic translation
- Container virtualization
- Para-virtualization
- Full virtualization
QEMU (Quick EMUlator)
QEMU is a processor emulator that uses dynamic translation: blocks of guest instructions are translated into host instructions at run time, and the translated blocks are cached for reuse.
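The caching idea behind dynamic translation can be sketched in a few lines. This is a toy illustration, not QEMU's actual design: the "guest ISA" of (op, arg) tuples is invented, and a real translator emits host machine code rather than Python closures.

```python
# Toy sketch of dynamic translation: translate a guest block once,
# cache the result, and reuse it on every later execution.

def translate_block(block):
    """Translate a list of guest instructions into one host function."""
    ops = []
    for op, arg in block:
        if op == "ADD":
            ops.append(lambda s, a=arg: s.__setitem__("acc", s["acc"] + a))
        elif op == "MUL":
            ops.append(lambda s, a=arg: s.__setitem__("acc", s["acc"] * a))
        else:
            raise ValueError(f"unknown guest op {op}")

    def host_fn(state):
        for fn in ops:
            fn(state)
    return host_fn

class TinyEmulator:
    def __init__(self):
        self.cache = {}   # translation cache: guest block address -> host function

    def run(self, block_addr, block, state):
        if block_addr not in self.cache:     # translate only on first sight...
            self.cache[block_addr] = translate_block(block)
        self.cache[block_addr](state)        # ...then execute the cached code

emu = TinyEmulator()
state = {"acc": 1}
hot_block = [("ADD", 4), ("MUL", 3)]   # guest code: acc = (acc + 4) * 3
for _ in range(2):                      # the second run hits the cache
    emu.run(0x400000, hot_block, state)
print(state["acc"])   # (1+4)*3 = 15, then (15+4)*3 = 57
```

The payoff is the same as in QEMU: translation cost is paid once per block, while hot loops run the cached host code repeatedly.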
Container virtualization
- User-space virtual machines
- All guests share the same filesystem tree and the same kernel
- Unprivileged VMs cannot mount drives or change network settings, providing an extra level of security
- Native speed: no emulation overhead
- Examples: OpenVZ, Virtuozzo, Solaris Containers, FreeBSD Jails, Linux-VServer
OpenVZ
Paravirtualization
- Does not try to emulate everything; the hypervisor works as a guard
- Safe instructions are passed directly to the CPU and devices
- Guests have some exposure to the hardware, giving better performance
- The guest OS must be slightly modified, but applications need no modification
- Examples: Xen, Sun Logical Domains
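The "slightly modified guest" idea can be sketched as explicit hypercalls: instead of executing a privileged instruction (which would fault), the modified guest calls into the hypervisor, which validates each request. All names here (the `Hypervisor` class, the hypercall numbers) are illustrative, not a real hypervisor API.

```python
# Minimal sketch of paravirtualization: the guest replaces privileged
# instructions with explicit, validated hypercalls into the hypervisor.

HYPERCALL_SET_TIMER = 1
HYPERCALL_CONSOLE_WRITE = 2

class Hypervisor:
    def __init__(self):
        self.timer_deadline = None
        self.console = []

    def hypercall(self, number, arg):
        # The hypervisor acts as a guard: every request is checked
        # before any (simulated) privileged state is touched.
        if number == HYPERCALL_SET_TIMER:
            if not isinstance(arg, int) or arg < 0:
                raise PermissionError("invalid timer deadline")
            self.timer_deadline = arg
        elif number == HYPERCALL_CONSOLE_WRITE:
            self.console.append(str(arg))
        else:
            raise PermissionError(f"unknown hypercall {number}")

# A paravirtualized guest calls the hypervisor directly rather than
# executing a privileged instruction and taking a fault.
hv = Hypervisor()
hv.hypercall(HYPERCALL_SET_TIMER, 1000)
hv.hypercall(HYPERCALL_CONSOLE_WRITE, "guest booted")
print(hv.timer_deadline, hv.console)
```

This is why paravirtualization performs well: the common case is a cheap direct call with a quick validity check, rather than full emulation of the instruction.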
Xen
Full virtualization
- Runs unmodified guests
- Simulates the BIOS and communicates with VMs through ACPI emulation, BIOS emulation, and sometimes custom drivers
- May use hardware-assisted virtualization (e.g., Intel VT)
Kernel-based Virtual Machine
Hypervisor-based virtualization
A small virtual machine monitor (known as a hypervisor or VMM) runs on top of the machine's hardware and provides two basic functions:
- It identifies, traps, and responds to protected or privileged CPU operations made by each virtual machine.
- It handles queuing, dispatching, and returning the results of hardware requests from the virtual machines.
Two types of hypervisor:
- Type 1 (native, bare metal): Xen, VMware ESXi, Hyper-V
- Type 2 (hosted): VirtualBox, Virtual PC, VMware Workstation
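The two hypervisor duties above (trap privileged operations, then queue and dispatch the resulting hardware requests) can be sketched as trap-and-emulate. The instruction names and the fault mechanism are simplified illustrations, not a real ISA.

```python
# Sketch of trap-and-emulate: privileged guest operations "fault" into
# the VMM, which emulates them and queues any resulting I/O.

PRIVILEGED = {"OUT", "HLT", "LGDT"}   # ops a deprivileged guest may not run

class TrapAndEmulateVMM:
    def __init__(self):
        self.io_log = []   # results of dispatched hardware requests

    def handle_fault(self, vm_id, op, arg):
        # Fault handler: emulate the privileged operation on the guest's behalf.
        if op == "OUT":
            self.io_log.append((vm_id, arg))   # queue/dispatch the I/O request
        # Other privileged ops would be emulated here as well.

    def run_guest(self, vm_id, instructions):
        for op, arg in instructions:
            if op in PRIVILEGED:
                # A privileged op traps into the VMM instead of executing.
                self.handle_fault(vm_id, op, arg)
            # Unprivileged ops run directly on the CPU (omitted in this sketch).

vmm = TrapAndEmulateVMM()
vmm.run_guest(0, [("MOV", 1), ("OUT", 0x3F8), ("ADD", 2)])
vmm.run_guest(1, [("OUT", 0x60)])
print(vmm.io_log)   # [(0, 1016), (1, 96)]
```

A Type 1 hypervisor runs this logic directly on the hardware; a Type 2 hypervisor runs it as a process inside a host OS, but the trap-and-dispatch structure is the same.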
IA32 Protection Rings
SW solution: guest ring deprivileging
- Run the guest OS above Ring 0 and have privileged instructions generate faults
- Run the VMM in Ring 0 as a collection of fault handlers
Top IA virtualization holes:
- Ring aliasing: running software at a privilege level other than the level for which it was written
- Non-trapping instructions
- Excessive faulting
- Interrupt virtualization issues
- Address-space compression
Complex software techniques:
- Source guest OS modifications
- Binary guest OS modifications
Virtualization of current IA CPUs requires complex software workarounds.
Xen
Reading assignment
History
- The Xen Project originated as a research project at the University of Cambridge; the first public release of Xen was made in 2003.
- The Xen Project was originally supported by XenSource Inc.
- Citrix Systems completed its acquisition of XenSource in 2007.
- Citrix has also used the Xen brand for some proprietary products unrelated to Xen, including "XenApp" and "XenDesktop".
- In 2013 the Xen Project was moved under the auspices of the Linux Foundation as a Collaborative Project.
Xen: introduction
- Paravirtualization: some guest OSes must be slightly modified
- Domains numbered 1 and above: guest OSes
Xen: CPU scheduling
- The guest OS runs at a lower privilege level than Xen
- The guest OS must register its exception (trap) handlers with Xen, and Xen validates each handler
- Page faults are handled differently (they require Xen intervention)
- System calls: no Xen intervention
- A lightweight event system handles hardware interrupts
Xen: virtual memory management
- Paging: the guest OS has direct read access to the hardware page tables; updates are batched and validated by the hypervisor
- TLB (translation lookaside buffer): a CPU cache of page-table entries; the page table maps virtual addresses to physical memory addresses
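The batched, validated update path can be sketched as follows. This is an illustrative model, not Xen's actual interface: the class and method names are invented, and a real hypervisor tracks frame ownership in hardware-backed structures.

```python
# Sketch of validated page-table updates: the guest reads its page table
# directly, but writes are submitted in batches and each update is checked
# against a frame-ownership table before being applied.

class MemoryHypervisor:
    def __init__(self, owner_of_frame):
        self.owner_of_frame = owner_of_frame   # machine frame -> owning domain
        self.page_table = {}                   # (domain, vpage) -> machine frame

    def mmu_update_batch(self, domain, updates):
        # Validate the whole batch before applying any of it.
        for vpage, frame in updates:
            if self.owner_of_frame.get(frame) != domain:
                raise PermissionError(
                    f"domain {domain} does not own frame {frame}")
        for vpage, frame in updates:
            self.page_table[(domain, vpage)] = frame

# Frames 10 and 11 belong to domain 1; frame 99 belongs to domain 0.
hv = MemoryHypervisor({10: 1, 11: 1, 99: 0})
hv.mmu_update_batch(1, [(0, 10), (1, 11)])     # accepted
try:
    hv.mmu_update_batch(1, [(2, 99)])          # rejected: not domain 1's frame
except PermissionError as err:
    print("rejected:", err)
print(hv.page_table[(1, 0)])   # 10
```

Batching matters for performance: validating and applying many updates per hypervisor crossing amortizes the cost of the trap.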
Xen: memory allocation
- When a guest OS is created, a fixed amount of physical memory is allocated (its reservation)
- The guest can claim additional memory from Xen when needed, and release memory back to Xen
- The allocated memory need not be contiguous:
  - "physical memory": the contiguous view of memory seen by the guest OS
  - "hardware memory": the real, possibly scattered, physical memory
Xen: device I/O
- Only Domain 0 has direct access to disks; other domains must use virtual block devices
- Requests travel over a shared I/O ring
- Requests may be reordered prior to being enqueued on the ring
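The I/O ring can be sketched as a fixed-size circular buffer with separate producer and consumer indices, one advanced by the guest front end and one by the Domain 0 back end. Field names are illustrative; a real ring lives in memory shared between the two domains.

```python
# Sketch of an I/O descriptor ring: a circular buffer with separate
# producer and consumer counters, as used between a Xen guest and Domain 0.

RING_SIZE = 8   # a power of two in real implementations

class IORing:
    def __init__(self):
        self.slots = [None] * RING_SIZE
        self.req_prod = 0   # advanced by the guest (front end)
        self.req_cons = 0   # advanced by the driver domain (back end)

    def enqueue(self, request):
        if self.req_prod - self.req_cons == RING_SIZE:
            raise BufferError("ring full")
        self.slots[self.req_prod % RING_SIZE] = request
        self.req_prod += 1   # publish only after the slot is written

    def dequeue(self):
        if self.req_cons == self.req_prod:
            return None      # ring empty
        request = self.slots[self.req_cons % RING_SIZE]
        self.req_cons += 1
        return request

ring = IORing()
ring.enqueue(("read", "block 7"))
ring.enqueue(("write", "block 3"))
print(ring.dequeue())   # ('read', 'block 7')
print(ring.dequeue())   # ('write', 'block 3')
```

Because each counter has a single writer, the two domains can exchange requests and responses without locks in the common case.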
Xen: network
- A virtual firewall-router is attached to all domains
- To send a packet, a domain enqueues a buffer descriptor onto the I/O ring
Partitioning resources between guest OSes
- Memory: preallocated physical memory
- Disk: quotas
- CPU and network: involve more complicated procedures
Domain 0
- Acts as the representative of the Xen hypervisor
- Provides bootstrap code for different types of VMs
- Creates/deletes virtual network interfaces and virtual block devices for other domains
System looks like
Xen architecture
Clustered Xen environment
Network flow in Xen
Linux bridge
- Older versions of Citrix XenServer (before v5.6 FP1) used a simple Linux bridge.
- Many hypervisor-based virtualization stacks also use the Linux bridge model, e.g., KVM with libvirt.
- All bridging configuration is done with 'brctl'.
- Provides simple Layer 2 switching functions:
  - Layer 1: a network hub, or repeater
  - Layer 2: a bridge
  - Layer 3: a router (a Layer 3 switch is typically optimized for Ethernet)
Xen network environment
- peth0: the port that connects to the physical network interface in the system
- vif0.0: the bridge port used by traffic to/from Domain 0
- vifX.0: the bridge port used by traffic to/from Domain X
VMware Infrastructure 3
- Provides a rich set of networking capabilities.
- Virtual switches are the key networking components: up to 248 virtual switches per ESX Server 3 host, providing the core Layer 2 forwarding engines.
- Physical Ethernet adapters (uplinks) serve as bridges between virtual and physical networks.
VMware vSphere's vDS
- A vNIC is logically connected to a dvPort (shown as black squares in the figure).
- Each dvPort is implemented by the proxy switch on the host where the VM runs.
- vSphere's vNetwork Distributed Switch (vDS) functions as a single switch across all associated hosts, centrally managed by vCenter.
- This lets network configurations span all member hosts and allows virtual machines to keep a consistent network configuration as they migrate across hosts.