System-level Virtualization for HPC: Recent work on Loadable Hypervisor Modules. Systems Research Team, Computer Science and Mathematics Division, Oak Ridge National Laboratory.

ORNL Systems Research Team (SRT)
 Ferrol Aderholdt (TNTech/ORAU)
 Christian Engelmann
 Thomas Naughton
 Hong Ong
 Stephen Scott
 Anand Tikotekar
 Geoffroy Vallée

Why Virtualization?
 Decouple operating system / hardware
   Server consolidation (utilization)
   Development & testing
 Reliability
   High availability / fault tolerance
   Non-stop computing via VM migration
 Customization of execution environment
   Specialize for an application / user
   Environment consistency

Motivation
 Why perform dynamic customization?
   Tailor the execution environment
   Experiment at runtime
 Why perform dynamic customization in the hypervisor?
   Perform system-level adaptation
   Basis for future capabilities (e.g., QoS or FT)
   Dynamic loading avoids VM teardown (i.e., avoids recompile-reboot)
   Debugging / instrumentation (e.g., performance analysis)
   Add functionality only as needed

Overview
 Issue: the current Xen is very monolithic
   Customization is static, via rebuild & reboot
   A change requires shutting down active VMs & saving all their state
   This limits experimentation / investigation
 Plan: extend the Xen hypervisor to allow changes at runtime

Loadable Hypervisor Modules
 Mechanism for runtime changes to the VMM
 Modeled after Linux's Loadable Kernel Modules
   Maximize reuse to help speed development
   Maintain a consistent file format
 Relocatable ELF object with an additional segment
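The LHM container is a relocatable ELF object. As a rough illustration (not the ORNL code; the struct is a simplified header prefix), a loader can sanity-check a candidate file by verifying the ELF magic and the ET_REL type before attempting relocation:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Minimal prefix of the ELF header; matches the <elf.h> layout. */
#define EI_NIDENT 16
#define ET_REL 1

typedef struct {
    unsigned char e_ident[EI_NIDENT]; /* magic, class, data, version... */
    uint16_t e_type;                  /* ET_REL for relocatable objects */
    uint16_t e_machine;
    /* remaining header fields omitted for brevity */
} ElfHdrPrefix;

/* Return 1 if buf looks like a relocatable ELF object (the LHM container). */
int is_relocatable_elf(const unsigned char *buf, size_t len)
{
    const ElfHdrPrefix *h = (const ElfHdrPrefix *)buf;
    if (len < sizeof(*h))
        return 0;
    if (memcmp(h->e_ident, "\x7f" "ELF", 4) != 0)
        return 0;
    return h->e_type == ET_REL;
}
```

A real loader would go on to walk the section headers, including the additional segment the slide mentions.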

Xen Terminology
 Domain: term for an OS running on the VMM
   User domains (domU)
   Administrative domain (dom0)
 Hypercall: call from a domain to the VMM
   Analogous to a system call (user to kernel)
(Diagram: Xen is a Type-I hypervisor running directly on the hardware, hosting the Host OS (dom0) and guest OSes (domU).)

LHM: Changes to Host OS (dom0)
 Headers for LHM compilation
 Slightly modify module utilities (rmmod & lsmod)
 Add /proc entry to access addresses of Xen symbols
   Export VMM symbol info to dom0
 Modify modules subsystem to detect / support LHM
   Patch LHMs with VMM symbol info
   Make hypercall to map the LHM into the VMM

LHM: Changes to VMM (Xen)
 New hypercall – HYPERVISOR_do_lhm_op()
   Map (load) an LHM into the VMM
   Unload an LHM from the VMM
   List loaded LHMs
 LHM "mapping" / loading
   Allocated memory resources remain in the Host OS (dom0)
   Regions are mapped into the VMM & write-protected from the Host OS
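HYPERVISOR_do_lhm_op() multiplexes several operations through a single entry point, in the style of other Xen "op" hypercalls. A hedged userspace sketch of that dispatch pattern (the op names, structure, and return conventions are illustrative, not taken from the actual patch):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative op codes for a multiplexed do_lhm_op()-style hypercall. */
enum lhm_op_cmd { LHM_OP_MAP, LHM_OP_UNLOAD, LHM_OP_LIST };

struct lhm_op {
    enum lhm_op_cmd cmd;
    const char *name;   /* module name for map/unload */
};

static int loaded_count;  /* stand-in for the VMM's LHM list */

/* Dispatcher modeled on a multiplexed hypercall entry point. */
long do_lhm_op(struct lhm_op *op)
{
    switch (op->cmd) {
    case LHM_OP_MAP:
        loaded_count++;      /* would map dom0 pages into the VMM */
        return 0;
    case LHM_OP_UNLOAD:
        if (loaded_count == 0)
            return -1;
        loaded_count--;      /* would unmap and drop the list entry */
        return 0;
    case LHM_OP_LIST:
        return loaded_count; /* would copy list info back to dom0 */
    }
    return -1;
}
```

The single-hypercall-with-subcommands shape keeps the hypercall table small while leaving room for future LHM operations.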

Symbols and Locations
 Linux kernel info
   Standard feature of the Host OS (dom0)
   Provides kernel symbol names, types & addresses
   Exported via /proc/kallsyms
 Xen hypervisor info
   New feature of the Host OS (dom0)
   Provides hypervisor symbol names, types & addresses
   Exported via /proc/hallsyms
   New hypercall added to obtain the LHM-based info
(Diagram: /proc/kallsyms exposes the Linux symbol table from the dom0 kernel; /proc/hallsyms exposes the Xen symbol table from the VMM.)
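Both /proc/kallsyms and the new /proc/hallsyms export one symbol per line as "address type name". A small parser for that line format (a userspace sketch; the buffer sizes are arbitrary):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Parse one "address type name" line as emitted by /proc/kallsyms
 * (and, per the slides, the analogous /proc/hallsyms for Xen symbols).
 * Returns 0 on success, -1 on a malformed line. */
int parse_symline(const char *line, unsigned long *addr,
                  char *type, char *name, size_t namelen)
{
    char namebuf[128];
    if (sscanf(line, "%lx %c %127s", addr, type, namebuf) != 3)
        return -1;
    snprintf(name, namelen, "%s", namebuf);
    return 0;
}
```

dom0 tooling can scan these lines to build the symbol table used when patching an LHM's unresolved references.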

Example: Loading an LHM
 Compile the LHM
   Standard Linux build environment (kbuild)
 Load the LHM
   Standard Linux utilities (insmod)
 Enhanced Linux modules subsystem detects the LHM
   Sets the LHM flag in struct module
 Linux allocates space for the module (in dom0)
 Patches undefined symbols with Xen addresses
   Ex.: Xen's printk()
 Module is "hi-jacked"
   Hypercall invoked to map it into Xen

LHM load flow (diagram):
Host OS (dom0), user space:
 Build module (gcc)
 Load module (insmod), which calls init_module()
Host OS (dom0), kernel:
 Perform sys_init_module()
 Load module & fix up symbols
   Copy data; detect LHM; patch UNDEF symbols via hallsyms_lookup()
 Remove (skip) any Linux ties
   Skip: sysfs add; skip: module list add
 Look up Xen symbol info
 Make hypercall to "map" the module
 Write-protect the memory from Linux
VMM (xen):
 Perform hypercall LHM_map_module()
   Add the new LHM to the list
   If the LHM has an init(), call it
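The central step in this flow is patching the module's UNDEF symbols with addresses obtained from the Xen symbol table. A userspace analogue of that resolution step (table contents, addresses, and helper names are made up for illustration; only hallsyms_lookup appears in the slides):

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

struct sym { const char *name; unsigned long addr; };

/* Stand-in for the hypervisor symbol table behind hallsyms_lookup(). */
static const struct sym xen_symtab[] = {
    { "printk",        0x80112a40UL },  /* illustrative addresses */
    { "xmalloc_bytes", 0x80105c10UL },
};

/* Resolve one UNDEF symbol name to a Xen address; 0 means not found. */
unsigned long hallsyms_lookup_name(const char *name)
{
    for (size_t i = 0; i < sizeof(xen_symtab) / sizeof(xen_symtab[0]); i++)
        if (strcmp(xen_symtab[i].name, name) == 0)
            return xen_symtab[i].addr;
    return 0;
}

/* A relocation slot in the module image still waiting for an address. */
struct reloc { const char *name; unsigned long *slot; };

int patch_undef_symbols(struct reloc *relocs, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        unsigned long a = hallsyms_lookup_name(relocs[i].name);
        if (a == 0)
            return -1;          /* unresolved symbol: refuse to load */
        *relocs[i].slot = a;
    }
    return 0;
}
```

In the real subsystem this happens inside the modified module loader, before the hypercall maps the patched image into Xen.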

Mapping of LHM

Current Status
 Initial LHM prototype
   Based upon Xen & Linux (xen.hg)
   Supports x86-32 & x86-64
 Limited testing at this point
 See the SRT website

Conclusion
 Virtualization for HPC
 Dynamic modification of an HPC VMM
   Enables new capabilities (e.g., runtime instrumentation)
 Developed a new VMM mechanism
   Based on Linux modules
   Called Loadable Hypervisor Modules (LHM)

Future Work
 Continue LHM development
   Stabilize and release
 Continue first LHM use case
   Hypervisor instrumentation
   Help in debugging & performance analysis (e.g., traces, counters)
 Investigate other use cases
   Adapt VMM policies at runtime (e.g., resilience?)
   VM introspection

Questions?

This work was supported by the U.S. Department of Energy, under Contract DE-AC05-00OR22725.

Backup / Additional Slides
 Extra slides…

Hypervisor-level Dynamic Instrumentation
 First use case of LHM
 Instrument hypervisor code at runtime
   Instrument at function boundaries (except Type-4)
   Instrument without modifying the target source (except Type-4)
 No attempt is made to modify active functions (in use on the stack)
   i.e., only affects uses after the instrumentation point is inserted

Hypervisor-level Dynamic Instrumentation (2)
 Instrumentation terminology (Tamches, 2001)
   "Code splicing" – add functionality to existing code
   "Code replacement" – replace the functionality of code by overwriting a portion of it
 Locations for a Hypervisor Instrumentation Point (HIPT)
   Type-1: code replacement of an entire function
   Type-2: code splicing at the function entry
   Type-3: code splicing at the function exit
   Type-4: code splicing at a pre-defined location (label) in user-specified code
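Type-1 code replacement swaps an entire function for another. In the hypervisor this is done by overwriting instructions; the effect can be illustrated in userspace with a patchable call slot (a simplification, since no machine code is rewritten here, and all names are hypothetical):

```c
#include <assert.h>

/* Userspace analogue of Type-1 code replacement: calls go through a
 * patchable slot, so "instrumenting" means swapping the call target.
 * Real code splicing rewrites the function's first instructions instead. */
int target_orig(int x) { return x + 1; }

int calls_seen;                  /* instrumentation counter */

int target_instrumented(int x)
{
    calls_seen++;                /* added behavior... */
    return target_orig(x);       /* ...then the original semantics */
}

int (*target)(int) = target_orig;   /* the patchable call site */

void instr_install(void) { target = target_instrumented; }
void instr_remove(void)  { target = target_orig; }  /* full restoration */
```

This also shows why previously started calls are unaffected: only invocations made after the swap go through the instrumented path, matching the "only affects uses after insertion" rule above.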

Implementation Details
 Add new system call sys_instr() to the Host OS
   Sets up the instrumentation point structure
   Invokes the appropriate hypercall (add/del) with the structure info
 Add new hypercall do_instrument() to the VMM
   Invokes add or delete of an instrumentation point
   Maintains a list of active instrumentation points
 HIPT Add
   Replaces old_address with new_address
   Stores the clobbered instructions for full restoration at HIPT delete
 HIPT Delete
   Restores the original instructions at old_address (the instrumented point)
   Removes the item from the list
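The list of active instrumentation points must retain enough state to undo each patch. A sketch of that bookkeeping (struct fields, byte counts, and function names are hypothetical; the real list lives inside the VMM):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical record for one hypervisor instrumentation point (HIPT):
 * enough state to undo the patch at delete time. */
struct hipt {
    unsigned long old_address;   /* patched location */
    unsigned long new_address;   /* replacement target */
    unsigned char saved[16];     /* clobbered bytes, for restoration */
    struct hipt *next;
};

static struct hipt *hipt_list;

int hipt_add(unsigned long old_addr, unsigned long new_addr,
             const unsigned char *bytes)
{
    struct hipt *p = malloc(sizeof(*p));
    if (!p)
        return -1;
    p->old_address = old_addr;
    p->new_address = new_addr;
    memcpy(p->saved, bytes, sizeof(p->saved)); /* save before overwrite */
    p->next = hipt_list;
    hipt_list = p;                             /* would now patch old_addr */
    return 0;
}

int hipt_delete(unsigned long old_addr)
{
    for (struct hipt **pp = &hipt_list; *pp; pp = &(*pp)->next) {
        if ((*pp)->old_address == old_addr) {
            struct hipt *dead = *pp;           /* would restore saved[] */
            *pp = dead->next;
            free(dead);
            return 0;
        }
    }
    return -1;                                 /* no such point */
}
```

Keeping the saved bytes with the list entry is what makes "full restoration at HIPT delete" possible without consulting the original image again.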

Current Status
 Instrumentation support under active development
   Requires LHM support in Xen
 Architectures
   Intel x86-32
   Intel x86-64
 Future updates on the SRT website