1 NoHype: Virtualized Cloud Infrastructure without the Virtualization Eric Keller, Jakub Szefer, Jennifer Rexford, Ruby Lee (ISCA 2010 + follow up soon to be “in submission”) Princeton University

2 Virtualized Cloud Infrastructure
Run virtual machines on a hosted infrastructure
Benefits:
– Economies of scale
– Dynamic scaling (pay for what you use)

3 Without the Virtualization
Virtualization is used to share servers
– A software layer (the hypervisor) runs under each virtual machine
[Diagram: Guest VM1 and Guest VM2 (OS + Apps) on a hypervisor over the physical hardware]

4 Without the Virtualization
Virtualization is used to share servers
– A software layer runs under each virtual machine
Malicious software can run on the same server
– Attack the hypervisor
– Access/obstruct other VMs
[Diagram: two guest VMs sharing a hypervisor on the same physical hardware]

5 Are these vulnerabilities imagined?
No headlines… but that doesn't mean it's not real
– Not enticing enough to hackers yet? (small market size, lack of confidential data)

6 Are these vulnerabilities imagined?
No headlines… but that doesn't mean it's not real
– Not enticing enough to hackers yet? (small market size, lack of confidential data)
Large attack surface:
* 56 different exit reasons
* Tremendous interaction: modest load => 20,000 exits/sec; during boot => 600,000 exits/sec (only VM, dedicated device, etc.)
[Diagram: guest VMs on a hypervisor over the physical hardware]

7 Are these vulnerabilities imagined?
No headlines… but that doesn't mean it's not real
– Not enticing enough to hackers yet? (small market size, lack of confidential data)
Complex underlying code:
* ~100K lines of code in the hypervisor
* 600K+ lines of code in dom0
* Derived from an existing OS
Large attack surface:
* 56 different exit reasons
* Tremendous interaction: modest load => 20,000 exits/sec; during boot => 600,000 exits/sec

8 NoHype
NoHype removes the hypervisor
– There's nothing to attack
– A complete systems solution
– Still meets the needs of a virtualized cloud infrastructure
[Diagram: Guest VM1 and Guest VM2 run directly on the physical hardware, with no hypervisor]

9 Virtualization in the Cloud
Why does a cloud infrastructure use virtualization?
– To support dynamically starting/stopping VMs
– To allow servers to be shared (multi-tenancy)
Do not need the full power of modern hypervisors:
– Emulating diverse (potentially older) hardware
– Maximizing server consolidation

10 Roles of the Hypervisor
Isolating/emulating resources:
– CPU: scheduling virtual machines
– Memory: managing memory
– I/O: emulating I/O devices
Networking
Managing virtual machines
[Slide annotations on the roles: "Push to HW / Pre-allocation", "Remove", "Push to side"]
NoHype has a double meaning… "no hype"

11 Scheduling Virtual Machines (Today)
The scheduler is called each time the hypervisor runs (periodically, on I/O events, etc.)
– Chooses what to run next on a given core
– Balances load across cores
[Diagram: timeline of VMs interleaved with hypervisor invocations on timer and I/O switches]

12 Dedicate a Core to a Single VM (NoHype)
Ride the multi-core trend
– 1 core on a 128-core device is ~0.8% of the processor
Cloud computing is pay-per-use
– During high demand, spawn more VMs
– During low demand, kill some VMs
– Customers maximize each VM's work, which minimizes the opportunity for over-subscription
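The one-VM-per-core model above can be sketched as a trivial allocator: there is no scheduler at all, only a table of which core each VM owns for its lifetime. This is an illustrative sketch, not the paper's implementation; the class and method names are made up.

```python
# Minimal sketch of NoHype-style core allocation: each VM gets a
# dedicated core, so no scheduler ever multiplexes VMs on a core.

class CoreAllocator:
    def __init__(self, num_cores):
        self.free = set(range(num_cores))   # cores not yet dedicated
        self.owner = {}                     # core -> VM id

    def spawn_vm(self, vm_id):
        """Dedicate one free core to vm_id (one VM per core)."""
        if not self.free:
            raise RuntimeError("no free cores: cannot over-subscribe")
        core = self.free.pop()
        self.owner[core] = vm_id
        return core

    def kill_vm(self, vm_id):
        """Release the core when demand drops (pay-per-use)."""
        for core, vm in list(self.owner.items()):
            if vm == vm_id:
                del self.owner[core]
                self.free.add(core)

alloc = CoreAllocator(num_cores=4)
c0 = alloc.spawn_vm("guest_01")
c1 = alloc.spawn_vm("guest_02")
assert c0 != c1                 # no two VMs ever share a core
alloc.kill_vm("guest_01")       # low demand: core returns to the pool
```

Spawning more VMs under load and killing them when demand drops maps directly onto grabbing and releasing whole cores, which is why over-subscription never arises in this model.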

13 Managing Memory (Today)
Goal: system-wide optimal usage
– i.e., maximize server consolidation
The hypervisor controls the allocation of physical memory

14 Pre-allocate Memory (NoHype)
In cloud computing, customers are charged per unit
– e.g., a VM with 2GB of memory
Pre-allocate a fixed amount of memory
– Memory is fixed and guaranteed
– The guest VM manages its own physical memory (deciding which pages to swap to disk)
Processor support for enforcing:
– allocation and bus utilization
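The enforcement the slide attributes to the processor (e.g., via Intel EPT) amounts to a fixed per-VM range check on host-physical addresses. A toy sketch, with made-up VM names and base addresses:

```python
# Each VM is pre-allocated a fixed range of host-physical memory,
# sized to what the customer purchased; any access outside the VM's
# own range is rejected. Ranges below are illustrative.

PARTITIONS = {
    "guest_01": (0x0000_0000, 2 * 2**30),   # 2 GB, as purchased
    "guest_02": (0x8000_0000, 2 * 2**30),
}

def check_access(vm, host_phys_addr):
    """True iff the address falls inside vm's pre-allocated partition."""
    base, size = PARTITIONS[vm]
    return base <= host_phys_addr < base + size

assert check_access("guest_01", 0x1000)          # inside its own 2 GB
assert not check_access("guest_01", 0x8000_0000) # guest_02's memory: denied
```

Because the partition is fixed at creation time, the check needs no hypervisor at run time; the guest swaps its own pages within its range.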

15 Emulate I/O Devices (Today)
The guest sees virtual devices
– An access to a device's memory range traps to the hypervisor
– The hypervisor handles interrupts
– A privileged VM emulates devices and performs the I/O
[Diagram: guest VM traps to the hypervisor, which hypercalls into a privileged VM holding the real drivers and device emulation]

17 Dedicate Devices to a VM (NoHype)
In cloud computing, only networking and storage are needed
Static memory partitioning for enforcing access
– Processor (for accesses to the device), IOMMU (for accesses from the device)
[Diagram: each guest VM accesses its own device directly on the physical hardware]

18 Virtualize the Devices (NoHype)
A per-VM physical device doesn't scale
Multiple queues on the device
– Multiple memory ranges mapping to different queues
[Diagram: network card with MAC/PHY, classifier, and MUX over multiple queues, attached to the processor/chipset/memory via the peripheral bus]
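The "classify + MUX" step on such a multi-queue NIC can be sketched as steering incoming frames to a per-VM queue by destination MAC, with each queue backed by a memory range owned by exactly one VM. MAC addresses and the queue count here are illustrative, not from the paper:

```python
# Frames are demultiplexed in hardware onto per-VM queues; each queue's
# descriptor memory lives inside one VM's pre-allocated range.

QUEUE_BY_MAC = {
    "02:00:00:00:00:01": 0,   # Guest VM1's queue
    "02:00:00:00:00:02": 1,   # Guest VM2's queue
}

def classify(frame_dst_mac):
    """Return the hardware queue a frame is steered onto."""
    return QUEUE_BY_MAC[frame_dst_mac]

assert classify("02:00:00:00:00:01") == 0
```

Since the classifier runs on the card, no software switch or hypervisor sits on the data path.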

19 Networking (Today)
Ethernet switches connect servers
[Diagram: servers connected by an Ethernet switch]

20 Networking in a Virtualized Server (Today)
Software Ethernet switches connect VMs
[Diagram: a virtual switch in software inside the virtualized server]

21 Networking in a Virtualized Server (Today)
Software Ethernet switches connect VMs
[Diagram: Guest VM1 and Guest VM2 connected through the hypervisor]

22 Networking in a Virtualized Server (Today)
Software Ethernet switches connect VMs
[Diagram: Guest VM1 and Guest VM2 connected by a software switch in a privileged VM]

23 Do Networking in the Network (NoHype)
Co-located VMs communicate through software
– Performance penalty for VMs that are not co-located
– A special case in cloud computing
– An artifact of going through the hypervisor anyway
Instead: utilize the hardware switches in the network
– Modification to support hairpin turnaround
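The hairpin turnaround mentioned above can be sketched as a small change to switch forwarding: a standard switch never reflects a frame back out its ingress port, while a hairpin-capable one does, so two VMs behind the same server port can talk via the external switch. Port numbers and MAC labels are hypothetical:

```python
# Both VMs sit behind the same physical switch port (the server's link).
PORT_OF_MAC = {"vm1": 5, "vm2": 5}

def forward(src_mac, dst_mac, in_port, hairpin=True):
    """Return the egress port, or None if the frame is dropped.
    src_mac is unused in this sketch (no learning modeled)."""
    out_port = PORT_OF_MAC[dst_mac]
    if out_port == in_port and not hairpin:
        return None                  # classic switch: never reflect
    return out_port                  # hairpin: reflect back out port 5

assert forward("vm1", "vm2", 5, hairpin=False) is None
assert forward("vm1", "vm2", 5, hairpin=True) == 5
```

With hairpin enabled, VM-to-VM traffic on one server takes the same hardware path as traffic between servers, removing the software-switch special case.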

24 Removing the Hypervisor: Summary
Scheduling virtual machines
– One VM per core
Managing memory
– Pre-allocate memory with processor support
Emulating I/O devices
– Direct access to virtualized devices
Networking
– Utilize hardware Ethernet switches
Managing virtual machines
– Decouple management from operation

25 NoHype Double Meaning
Means no hypervisor; also means "no hype":
– Multi-core processors
– Extended Page Tables
– SR-IOV and Directed I/O (VT-d)
– Virtual Ethernet Port Aggregator (VEPA)

26 NoHype on Commodity Hardware
Goal: the semantics of today's virtualization
– xm create guest_01.cfg
– xm shutdown guest_01
Approach:
– Pre-allocate resources
– Use only virtualized I/O
– Short-circuit the discovery process
– Unwind indirection

27 Pre-allocate Resources
So a hypervisor doesn't need to manage them dynamically
CPU:
– Pin a VM to a core
– Give it complete control over that core (including the per-core timer and interrupt controller)
Memory:
– Utilize processor mechanisms to partition memory
– On Intel, EPT can be used for this

28 Use Only Virtualized I/O
So a hypervisor doesn't have to emulate devices
Network card: supports virtualization today
Disk: use network boot and iSCSI
[Diagram: Guest VM1's loader/OS boots from DHCP/gPXE servers and mounts storage from iSCSI servers, with the privileged VM and hypervisor present only at setup]

29 Short-circuit System Discovery
So a hypervisor doesn't have to respond to queries (at run time)
Allow the guest VM to do queries during boot-up
– Requires a temporary hypervisor
– Modify the guest OS to read this information during initialization (and save the results for later)
The cloud provider supplies the kernel
– For security purposes and functionality
[Diagram: the OS asks the temporary hypervisor: What devices are there? What are the processor's features? What is the clock frequency?]
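The "discover once, save for later" idea above can be sketched as a cache in the guest: during boot, a query traps to the temporary hypervisor; afterwards, every query is answered from the saved copy and no trap occurs. All names and values below are illustrative stand-ins:

```python
# Boot-time discovery with results cached so run-time queries never
# need a hypervisor to answer them.

_cache = {}

def discover(query, ask_hypervisor):
    """First call traps to the (temporary) hypervisor; later calls don't."""
    if query not in _cache:
        _cache[query] = ask_hypervisor(query)
    return _cache[query]

# Stand-in for the boot-time hypervisor's answers.
answers = {"clock_freq_mhz": 2400, "num_devices": 2}
traps = []
def hypervisor(q):
    traps.append(q)             # each call here would be a VM exit
    return answers[q]

discover("clock_freq_mhz", hypervisor)   # boot: one trap
discover("clock_freq_mhz", hypervisor)   # run time: served from cache
assert len(traps) == 1
```

After boot, the cache stands in for the hypervisor entirely, which is what lets NoHype block all hypervisor access later.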

30 Unwind Indirection
So a hypervisor doesn't have to do the mappings
– Send an IPI from core 0 to core 1 (actually core 2 to core 3)
– Interrupt vector 64 arrives at core 2 (actually vector 77 of Guest 2)
VMs can move; VMs can share
[Diagram: Guest 2's VCPU0 and VCPU1 on physical cores 2 and 3, alongside Guest 0's VCPU0]
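The indirection a hypervisor normally maintains, and NoHype unwinds, can be sketched as remapping tables from guest-visible core numbers and vectors to physical ones. The values follow the slide's example; the direction of the vector mapping and the function name are illustrative assumptions:

```python
# Guest 2 believes it runs on cores 0 and 1; physically it owns 2 and 3.
PHYS_CORE = {("guest2", 0): 2, ("guest2", 1): 3}
# Guest 2's vector 64 is delivered as physical vector 77 (per the slide).
PHYS_VECTOR = {("guest2", 64): 77}

def send_ipi(guest, src_vcpu, dst_vcpu, vector):
    """Translate a guest IPI into physical core and vector numbers,
    as a hypervisor would have to on every IPI."""
    return (PHYS_CORE[(guest, src_vcpu)],
            PHYS_CORE[(guest, dst_vcpu)],
            PHYS_VECTOR[(guest, vector)])

assert send_ipi("guest2", 0, 1, 64) == (2, 3, 77)
```

Unwinding means letting the guest use the physical identities directly, so these tables (and the trap to consult them) disappear; the cost is that VMs can no longer silently move or share cores.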

31 Bring It Together: Setup
xm (in the privileged VM, on Xen) creates the guest VM: e.g., pre-set the EPT, assign virtual devices
[Timeline: create → loader → kernel → customer code, split between VMX root and guest VM space]

32 Bring It Together: Network Boot
The guest's loader boots over the network from DHCP/gPXE servers
[Timeline: create → loader → kernel → customer code, split between VMX root and guest VM space]

33 Bring It Together: OS Boot-up
The kernel performs system discovery during boot
[Timeline: create → loader → kernel → customer code, split between VMX root and guest VM space]

34 Bring It Together: Switchover
A hypercall from the kernel, before any user code runs (the last command in initrd)
[Timeline: create → loader → kernel → customer code, split between VMX root and guest VM space]

35 Block All Hypervisor Access
Any VM exit kills the VM
[Timeline: create → loader → kernel → customer code, split between VMX root and guest VM space; iSCSI servers remain reachable]
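The post-switchover policy above is deliberately simple and can be sketched in a few lines: the now-inert hypervisor's only remaining exit handler kills the offending VM rather than emulating anything. The dictionary shape and exit-reason name are made up for illustration:

```python
# After switchover there is nothing left to emulate, so every VM exit,
# even a benign one, is treated as fatal to the exiting VM.

def handle_vm_exit(vm, exit_reason):
    """Post-switchover policy: no emulation; any exit kills the VM."""
    vm["state"] = "killed"
    vm["kill_reason"] = exit_reason

vm = {"name": "guest_01", "state": "running"}
handle_vm_exit(vm, "CPUID")     # any exit at all
assert vm["state"] == "killed"
```

This is what makes the earlier steps (pre-allocation, cached discovery, unwound indirection) necessary: a well-behaved guest must never have a reason to exit.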

36 Evaluation
Raw performance
Assess the main limitations on today's hardware:
– Ability to send IPIs
– Resource sharing (side channels)

37 Raw Performance
About a 1% performance improvement over Xen (VT-d and EPT alleviate the main bottlenecks)

38 IPI DoS Attack
Victims: SPEC (libquantum), Apache
– Less than 1% performance degradation
[Diagram: attacker VM sends IPIs to the victim VM's core]

39 Memory Side Channel Information
Can an attacker tell how loaded the victim is? (0%, 25%, 50%, 75%, 100%)

40 Next Steps
Assess needs for future processors
– e.g., the receiver should know the source of an IPI (and be able to mask it)
Assess OS modifications
– e.g., push configuration instead of discovery
Assess vulnerabilities from the outside
– e.g., the management channel a customer uses to start a VM

41 Conclusions
Trend towards hosted and shared infrastructures
A significant security issue threatens adoption
NoHype solves this by removing the hypervisor
The performance improvement is a side benefit

42 Questions?
Contact info:
ekeller@princeton.edu  http://www.princeton.edu/~ekeller
szefer@princeton.edu  http://www.princeton.edu/~szefer

