Storage Virtualization
VMware, Inc.
© 2010 VMware Inc. All rights reserved

2 Agenda
 Introduction to Storage: Storage Basics, "Enterprise" Storage, Storage Management
 Storage Virtualization: Storage Virtualization in a Hypervisor, General Storage Virtualization Technologies, Current Industry Trends

3 Storage Basics – Simplistic View
 An IDE or SATA disk drive directly connected to a computer.
IDE – Integrated Drive Electronics
ATA – AT Attachment (PC/AT bus attachment, also Advanced Technology Attachment)
SATA – Serial ATA

4 Storage Basics – Low-end Storage
 Direct Attached Storage (DAS)
Typically not shared between hosts
Typically provides less device connectivity and more restrictive transmission-distance limitations
Typically used in small and entry-level solutions without stringent reliability requirements
Examples: IDE, SATA, SAS

5 Storage Basics – Enterprise Storage
 Computers connected through a switch fabric to a storage array.
(Diagram: hosts connected via a switch to a storage array.)

6 Storage Basics – Enterprise Storage
 Block-based SAN protocols, e.g. FC, iSCSI
Allow multiple physical machines to access the same storage across multiple paths
Disk arrays on SANs can provide reliable storage that is easily divided into arbitrary-sized logical disks (SCSI logical units)
Avoids the situation where a computer has enough CPU power for a workload but not enough disk
Makes it much easier to migrate VMs between hosts (no copying of large virtual disks; just copy VM memory contents)
Greatly enhances the flexibility provided by VMs

7 Storage Basics – Enterprise Storage
 File-based Network Attached Storage, e.g. NFS, CIFS
Many of the same benefits as SAN storage, but at a lower price point (with potential performance penalties)
SAN/NAS hybrids
Bandwidth scaling using parallel data-access paths (pNFS), Object-based Storage Devices (OSD), e.g. Panasas' cluster file system

8 Storage Management
 Storage platforms add significant functionality beyond just a bunch of disks (JBODs)
SCSI Logical Unit (LUN or LU) virtualization
Provides abstractions for the physical media that data is stored on, allowing easier management of the 'devices' seen by servers
RAID – provides hardware-failure recovery (striping, parity, mirroring, nested levels); a toy parity example follows
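As a concrete illustration of the parity idea behind RAID levels such as RAID 5, here is a minimal sketch, assuming one stripe across three data disks plus one parity disk (real arrays rotate parity across disks and operate on fixed-size stripe units):

```python
# Minimal sketch of RAID-5-style parity; illustrative assumptions only.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings column-wise to produce a parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe across three data disks
parity = xor_blocks(data)            # stored on the parity disk

# Rebuild a lost disk (say disk 1) from the survivors plus parity:
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

Because XOR is its own inverse, any single lost disk in the stripe can be rebuilt from the surviving disks plus parity.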

9 Storage Management (Contd.)
 Volume/Virtual Device Management
Provides further abstraction and capabilities for the devices exposed by the storage platform
Local replication – (split) mirror, clone, snapshot – provides point-in-time backup/restore points (a copy-on-write sketch follows)
Provisioning – thin, thick, pass-through
Policy-based device provisioning and mobility
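To make the snapshot and thin-provisioning ideas concrete, a minimal copy-on-write sketch, assuming a volume modeled as a block-number-to-data map (all names are hypothetical; real volume managers keep this mapping in on-disk metadata):

```python
# Toy thin-provisioned volume plus a copy-on-write snapshot.
class Volume:
    def __init__(self):
        self.blocks = {}                 # thin: absent block = unallocated

    def write(self, bno, data):
        self.blocks[bno] = data

    def read(self, bno):
        return self.blocks.get(bno, b"\x00" * 4)   # unallocated reads as zeros

class Snapshot:
    """Point-in-time view: saves old data only for blocks overwritten later."""
    def __init__(self, volume):
        self.volume = volume
        self.saved = {}

    def cow_write(self, bno, data):
        if bno not in self.saved:        # first overwrite: preserve old data
            self.saved[bno] = self.volume.read(bno)
        self.volume.write(bno, data)

    def read(self, bno):
        return self.saved.get(bno, self.volume.read(bno))

vol = Volume()
vol.write(0, b"old!")
snap = Snapshot(vol)
snap.cow_write(0, b"new!")
assert snap.read(0) == b"old!" and vol.read(0) == b"new!"
```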

10 Storage Management (Contd.)
 Disaster Recovery
Allows datacenters/servers to recover from catastrophic environmental/infrastructure failures
Progression: offline backups, online backups with snapshots, synchronous remote mirrors, asynchronous remote mirrors, CDP
 Continuous Data Protection (CDP)
Archives all changes to the protected storage, allowing information to be restored from any point in time
Dependent-write synchronization and integration with application-level coherency reduce the recovery point objective (RPO) to near zero

11 Storage Management (Contd.)
 Storage Platform Virtualization
Storage-platform- and fabric-based abstraction of storage platforms
Adds generic abstractions for heterogeneous array farms that allow many of the previous features

12 Virtualizing Storage Resources
 Store a VM's virtual disk as a file (a sparse-file sketch follows)
 I/O scheduling between multiple VMs
 Provide multipathing, snapshots
(Diagram: VM1.vmdk is the file backing the virtual disk for VM1; VMFS is VMware's SAN file system, an example cluster file system.)
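A minimal sketch of storing a virtual disk as a lazily allocated (sparse) host file, loosely in the spirit of grain-table formats such as VMDK; the layout and names here are illustrative assumptions, not the actual on-disk format:

```python
# Toy sparse virtual disk backed by one host file, assuming 4 KiB grains.
# The grain table maps guest grain number -> offset in the backing file;
# unwritten grains consume no space and read back as zeros.
import os

GRAIN = 4096

class SparseDisk:
    def __init__(self, path):
        self.f = open(path, "w+b")
        self.table = {}                   # guest grain -> file offset

    def write(self, grain_no, data):
        assert len(data) == GRAIN
        if grain_no not in self.table:    # allocate on first write
            self.f.seek(0, os.SEEK_END)
            self.table[grain_no] = self.f.tell()
        self.f.seek(self.table[grain_no])
        self.f.write(data)

    def read(self, grain_no):
        off = self.table.get(grain_no)
        if off is None:
            return b"\x00" * GRAIN        # unallocated grain
        self.f.seek(off)
        return self.f.read(GRAIN)

disk = SparseDisk("vm1.img")
disk.write(10, b"x" * GRAIN)
assert disk.read(10) == b"x" * GRAIN and disk.read(11) == b"\x00" * GRAIN
```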

13 Virtualizing Storage Resources (Implications)
 Several differences are introduced by the new layer
Virtualization isn't free
The fast path to commonly used devices is highly optimized
Still, the storage virtualization stack is longer than on native hardware
The extra features gained significantly outweigh the extra stack depth

14 Virtualizing Storage Resources (Implications)
 Guest is oblivious to the real hardware complexities
The complexity of different types of storage devices and transport protocols is hidden
Hypervisor can be a single, up-to-date place where the storage stack is well-maintained
No need to build drivers for every conceivable type of guest operating system
 Hypervisor provides a reliable connection
Guest doesn't have to worry about multipathing, path failover, or even device failover

15 Specialized Blocks (Redo Logs): Linked Clones Example
(Diagram: a common OS base disk on the physical disk is shared by linked clones; VM 1 (Alice) and VM 2 (Bob) each run a guest file system with Microsoft Office/outlook.exe layered over the same base via their own linked clone.)
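The read/write path of a linked clone can be sketched in a few lines, assuming each clone keeps a private redo log (delta) over a shared read-only base disk; this is an illustrative model, not VMware's implementation:

```python
# Sketch of the linked-clone read path: redo log first, shared base second.
class LinkedClone:
    def __init__(self, base):
        self.base = base          # shared base disk: block number -> data
        self.delta = {}           # this clone's private redo log

    def write(self, bno, data):
        self.delta[bno] = data    # writes never touch the shared base

    def read(self, bno):
        # The redo log wins; otherwise fall through to the common base.
        return self.delta.get(bno, self.base.get(bno, b"\x00"))

base = {0: b"common OS image block"}
alice, bob = LinkedClone(base), LinkedClone(base)
alice.write(0, b"alice's change")
assert bob.read(0) == b"common OS image block"   # Bob still sees the base
```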

16 Sharing Storage Resources
 Highly scalable cluster file system (e.g. VMFS)
 Concurrent accesses from multiple hosts
 No centralized control
 Essential for:
Live migration of VMs
High availability (VM restarts in case of host failure)

17 Proportional Sharing of Storage Resources
 Provide differentiated QoS for I/O resources
If we assign per-VM disk shares for a shared VMFS LUN, how do we provide proportional sharing of storage resources without centralized control? (A simplified control loop is sketched below.)
A. Gulati, I. Ahmad, and C. Waldspurger. PARDA: Proportional Allocation of Resources for Distributed Storage Access. In Proc. of FAST, Feb. 2009.
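In the spirit of PARDA's latency-driven flow control (the exact update rule and constants are in the paper; the ones below are illustrative assumptions), each host independently adjusts its window of outstanding I/Os from observed latency, so no central arbiter is needed:

```python
# Simplified flavor of PARDA-style distributed flow control: each host sizes
# its outstanding-I/O window from observed array latency.
def adjust_window(window, observed_latency, shares,
                  latency_threshold=25.0, gamma=0.2):
    """Shrink the window when the array is congested (latency above the
    threshold), grow it when idle; a host's shares set its equilibrium
    fraction of array throughput."""
    target = (latency_threshold / observed_latency) * window + shares
    new_window = (1 - gamma) * window + gamma * target
    return max(1.0, new_window)          # never starve the host completely

w = 32.0
for latency in [20, 30, 40, 40, 30]:     # ms, as measured by this host
    w = adjust_window(w, latency, shares=4)
    print(f"latency={latency}ms -> window={w:.1f}")
```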

18 Efficient Sharing of Storage Resources
 Deduplicate identical blocks in a cluster FS* (a content-hashing sketch follows)
 Efficient block layout for multi-VM workloads
 Virtualized storage power management
 Hierarchical storage
* A. Clements, I. Ahmad, M. Vilayannur and J. Li. Decentralized Deduplication in a SAN Cluster Filesystem. In Proc. of USENIX Annual Technical Conference, June 2009.
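The core of block deduplication is content addressing: hash each block and store one physical copy per distinct hash. A minimal in-memory sketch, assuming fixed-size blocks and SHA-256 (the cited system works out of band on a cluster file system; the names here are hypothetical):

```python
# Minimal sketch of content-addressed block deduplication.
import hashlib

store = {}      # hash digest -> the single stored copy of that block
file_map = {}   # (file name, block number) -> hash digest

def dedup_write(name, bno, block):
    digest = hashlib.sha256(block).hexdigest()
    store.setdefault(digest, block)       # identical blocks stored once
    file_map[(name, bno)] = digest

def dedup_read(name, bno):
    return store[file_map[(name, bno)]]

dedup_write("vm1.vmdk", 0, b"guest OS block")
dedup_write("vm2.vmdk", 7, b"guest OS block")   # duplicate across VMs
assert len(store) == 1                           # only one physical copy
```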

19 Intelligent Sharing of Storage Resources
 Open research questions
 Bridge the semantic gap of the SCSI/block interface
 Enhanced security against guest rootkits or viruses
 Virtual I/O speculation

20 Live Migration of VM Storage  State-of-the-art solution to perform live migration of virtual machine disk files  Across heterogeneous storage arrays with complete transaction integrity  No interruption in service for critical applications

21 Architecture for Storage in Virtualization

22 Contrast Virtualization Architectures
 Hosted System Virtualization
General-purpose OS in parent partition
All shared-device I/O traffic goes through the parent partition
 Bare-metal System Virtualization
Ultra-small, virtualization-centric kernel
Embedded driver model optimized for VMs
(Diagram: Xen/Viridian place drivers in Dom0 (Linux) or a parent VM (Windows); the hosted model routes VM I/O through a general-purpose OS; the bare-metal model embeds drivers in the hypervisor.)

23 Contrast Virtualization Architectures
 Passthrough Disks
Preserves existing (complex) SAN management
Each VM has dedicated LUN(s)
Provisioning a VM requires provisioning a LUN
 Clustered Storage
"Extra" storage virtualization layer
Storage independence and portability
Instant provisioning
(Diagram: passthrough maps each VM's guest OS directly to physical disks; clustered storage layers virtual disks on a clustered virtual volume over physical storage.)

24 Typical Operating System I/O Path
 Read contents of a file
Application opens a file and issues a read() syscall
File system maps the read request to a location within a volume
The LVM maps the logical location to a block on the physical mass storage device
The device driver issues the read of a block to the physical storage device
The block of data is returned up the stack to the application
(Diagram: application → read() syscall → file system → FS operation → logical volume manager → block operation → device driver → SCSI request → storage platform.)
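The layered path can be mirrored one function per layer; the mappings below are hypothetical stand-ins for real file-system and LVM metadata:

```python
# Sketch of the layered read path, one tiny function per layer.
def file_system_map(path, offset):
    """File + byte offset -> logical block within a volume."""
    return ("vol0", offset // 4096)

def lvm_map(volume, lblock):
    """Logical volume block -> physical device + physical block."""
    return ("/dev/sda", lblock + 2048)    # e.g. the volume starts at block 2048

def driver_read(device, pblock):
    """Where a real device driver would issue a SCSI READ for this block."""
    return f"data from {device} block {pblock}"

# The read() syscall path, top to bottom:
vol, lblock = file_system_map("/home/a/report.txt", offset=8192)
dev, pblock = lvm_map(vol, lblock)
print(driver_read(dev, pblock))
```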

25 Add in a Hypervisor (VMware ESX Example)
 Tracing a read request from the guest OS through the VMM to ESX
Guest OS device driver enqueues the SCSI read CDB (within an IOCB) to the HBA via a PCI I/O-space instruction or PCI memory-space reference
The emulated PCI-bus adapter in the virtual machine monitor traps the PCI I/O-space instruction or PCI memory-space reference
The emulated adapter parses the IOCB, retrieves the SCSI CDB, and remaps the IOCB's S/G list
The emulated adapter passes the SCSI CDB and remapped S/G list to ESX
(Diagram: guest OS device driver in the VM → SCSI command / PCI mem-I/O space reference → emulated PCI adapter → ESX.)

26 Virtualized I/O End-to-end (VMware ESX Example)
 Tracing a read request from the virtual machine to the storage platform
I/O issued by the virtual machine to the emulation layer
Emulation converts the request to the format used by ESX and issues a file system request
File system converts it to a block operation and issues the request to the logical device
Storage stack maps the request to a 'physical' device and issues it to the HBA
HBA initiates and completes the request, and the data traverses back up the stack
(Diagram: virtual machine → SCSI command → SCSI virtualization engine → FS operation → VMFS → block operation → logical volume manager / storage core → device driver → storage platform.)
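The extra indirection the hypervisor adds on top of the native path from the previous slide can be sketched as one more mapping layer; the offsets and names below are hypothetical:

```python
# Sketch of the hypervisor's extra layer: the guest's "physical" disk is
# really one file (e.g. VM1.vmdk) on a cluster file system such as VMFS.
def guest_block_to_vmdk_offset(guest_block):
    """Virtual-disk emulation: guest block -> byte offset inside the .vmdk."""
    return guest_block * 4096

def vmfs_map(vmdk_offset):
    """Host file system maps a file offset to a block on the logical device."""
    return 500_000 + vmdk_offset // 4096   # hypothetical file extent start

def multipath_map(lun_block):
    """Host storage stack picks a path/HBA for the physical I/O."""
    return ("HBA0", "LUN 3", lun_block)

# A single guest read traverses every layer before reaching the array:
print(multipath_map(vmfs_map(guest_block_to_vmdk_offset(42))))
```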

27 Virtualized I/O – Advanced Topics
 Potential performance optimizations
Accelerate guest code
Idealized virtual device with paravirtualized guest drivers
Accelerate with a variety of I/O assists
Intel VT-d & AMD IOMMU: DMA remapping for safe, direct device assignment
PCI-SIG SR-IOV: passthrough I/O
RDMA: accelerate VMotion, NFS, iSCSI
(Diagram: guest OS device driver → device emulation → hypervisor I/O stack → device driver.)

28 Differences in VMs
 Virtualized deployments
Large sets of physical machines consolidated
Diverse set of applications
 Workload characteristics
Different I/O patterns to the same volume
I/O from one app split across different volumes
Provisioning operations run alongside applications
 Hypervisor and the storage subsystem
Clustered file system locking
CPU and virtual-device emulation; CPU and memory affinity settings; new hardware-assist technology
 System setup can affect performance
Partition alignment affects performance
Raw Device Mapping vs. file system
Protocol conversion (e.g. SCSI "over" NFS)
Virtualization file systems are often optimized very differently; standard benchmarks are not always sufficient

29 Other Storage Virtualization Technologies
 Technologies
NPIV – allows multiple VMs to have unique identifiers while sharing a single physical port
Pass-through I/O
Dedicated I/O or device/driver domains
 Overall implications
Mobility
Performance

30 Technologies – NPIV
 ANSI T11 standard for multi-homed, fabric-attached N-ports
Enables end-to-end visibility of storage I/O on a per-VM basis
Facilitates per-VM storage QoS at the target
Improves WWN zoning effectiveness at the cost of increased SAN administration
Requires NPIV support in the FC driver, HBA, and switch hardware
V-port (virtualized N-port) identified by a unique node/port WWN pair
One V-port per VM as long as the VM is powered on
 Overall implications
V-port migrates with the VM
No significant performance cost or benefit

31 Technologies – Passthrough I/O
 The idealized I/O devices seen by the guest hide any special features of the underlying physical controllers
 Passthrough I/O: virtualization extension to PCIe from the standards body (PCI-SIG)
Virtualizes a single PCI device into multiple virtual PCI devices
Enables direct guest OS access to PCIe I/O or memory space
Improves performance by eliminating some of the virtualization overheads
However, live migration of VMs becomes harder since VMs are now tied to a particular type of hardware
Requires a host-platform IOMMU and PCI MSI/MSI-X
 Overall implications
Yields improvements in I/O efficiency
Complex support for migration of VMs using passthrough I/O

32 Technologies – I/O Domains
 Basics
Provides isolation via a dedicated address space (domain) for all devices
Can provide flexibility by leveraging existing device drivers, e.g. Xen uses Dom0 to perform all I/O on behalf of VMs
 Implications
Performance can suffer due to scheduling latency for the I/O domain
In this model, a VM issuing I/O doesn't result in the hypervisor immediately putting it on the wire; instead, the hypervisor wakes up the I/O domain and passes the request to it (sketched below)
I/O domains can become a bottleneck
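The extra scheduling hop can be illustrated with a queue handoff between two threads; this is a toy model under the assumption that the driver domain is a separately scheduled entity, not how any particular hypervisor is coded:

```python
# Toy illustration of why a driver domain adds latency: every I/O must hop
# through a second scheduled entity instead of going straight to the driver.
import queue
import threading

io_domain_q = queue.Queue()

def io_domain():
    """Dedicated domain that owns the device driver and issues the real I/O."""
    while True:
        req, done = io_domain_q.get()
        if req is None:                 # shutdown sentinel
            break
        done.set()                      # stand-in for completing the device I/O

t = threading.Thread(target=io_domain, daemon=True)
t.start()

# A VM's I/O is queued to the I/O domain, then waits for it to be scheduled:
done = threading.Event()
io_domain_q.put(("read block 7", done))
done.wait()                             # the extra scheduling hop = extra latency
io_domain_q.put((None, None))           # stop the I/O domain
```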

33 Conclusions
 Storage plays a critical role in virtualization
 The choice of architecture has implications for the complexity-versus-features tradeoff
 Many problems remain unsolved in this area of research