Queensland University of Technology (CRICOS No. 00213J). VMware as implemented by the ITS department, QUT. Scott Brewster, 7 December 2006.

Presentation transcript:


Note: IT services at QUT are provided primarily by the central ITS department, and additionally by the IT departments of various faculties and divisions. This presentation focuses on the VMware implementation managed by the central ITS department. (There are other VMware implementations at QUT managed by faculty IT departments.)

Overview
Why VMware?
VMware software
Physical hardware
–Host hardware
–Network hardware
–Storage hardware
Virtual machine configuration
Guest operating systems
Backup of virtual machines
VirtualCenter
Future directions

Why VMware?
Server consolidation through server virtualisation:
–Relocating instances of operating systems from multiple under-utilised physical servers to multiple virtual machines on a single physical server
–Test and development environments are key targets for virtualisation

VMware software
Timeframe:
Late-2005: Initial deployment: 6 hosts running ESX Server
Mid-2006:
–Installed ESX Server 3.0 on 8 new hosts
–Migrated virtual machines from the 6 original hosts: manually shut down and migrated the existing virtual machines one at a time from the old ESX Server hosts to the new ESX Server 3.0 hosts, leaving the original hosts empty of virtual machines. Unfortunately, this required virtual machine downtime!
–Re-installed ESX Server 3.0 on the original 6 hosts
Late-2006: Upgraded all hosts to a newer ESX Server release:
–Used VMotion to migrate all virtual machines off a given host prior to upgrading it. No virtual machine downtime required!
Now: Another 8 new hosts awaiting installation of ESX Server 3.0.1
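The VMotion-based approach above can be sketched as a rolling-upgrade loop. This is a hypothetical model, not QUT's actual tooling: host and VM names are invented, and the real migrations were driven through VirtualCenter rather than a script.

```python
# Hypothetical sketch of the rolling-upgrade pattern described above: with
# VMotion, each host is drained live before it is reinstalled, so no VM is
# ever powered off. All names here are invented for illustration.
placement = {"vm1": "esx1", "vm2": "esx1", "vm3": "esx2"}

def drain(host: str, target: str) -> None:
    """Live-migrate every VM off `host` (the VMotion step)."""
    for vm, h in placement.items():
        if h == host:
            placement[vm] = target

def rolling_upgrade(hosts: list) -> list:
    """Drain and upgrade hosts one at a time; returns the upgrade order."""
    order = []
    for i, host in enumerate(hosts):
        spare = hosts[(i + 1) % len(hosts)]  # park VMs on the next host
        drain(host, spare)
        order.append(host)                   # reinstall/upgrade happens here
    return order

print(rolling_upgrade(["esx1", "esx2"]))  # ['esx1', 'esx2']
```

The key property the sketch captures is that every VM always has a running host, which is why the late-2006 upgrade needed no downtime while the mid-2006 migration did.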

Physical hardware
The VMware implementation requires three key types of physical hardware: hosts, a network, and shared storage.
–Hosts: 22 Hewlett-Packard (HP) ProLiant-series servers
–Network: 1000 Mb/s Ethernet on Cisco and Nortel network infrastructure
–Storage: local boot disks, plus shared storage provided by a SAN consisting of HP storage arrays and fibre channel switches

Host hardware
22 physical hosts are dedicated to the VMware implementation:
–4  HP ProLiant DL380 G4: 2  3.4 GHz Intel Xeon CPUs, 5 GiB memory, 2  200 MiB/s fibre channel (200-M5-SN-I) ports, 4  1000 Mb/s Ethernet (1000BASE-T) ports
–10  HP ProLiant DL385 G1: 2  2.2 GHz AMD Opteron (dual-core) CPUs, 9 GiB memory, 2  400 MiB/s fibre channel (400-M5-SN-I) ports, 4  1000 Mb/s Ethernet (1000BASE-T) ports
–8  HP ProLiant BL465c G1: 2  2.6 GHz AMD Opteron (dual-core) CPUs, 14 GiB memory, 2  400 MiB/s fibre channel (400-M5-SN-I) ports, 4  1000 Mb/s Ethernet (1000BASE-T) ports
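As a quick sanity check, the aggregate capacity of the fleet above can be totalled from the slide's figures. A minimal sketch; the assumption that the Xeons present one core per socket and the Opterons two follows the slide's "dual core" wording, not a hardware inventory.

```python
# Totalling the host inventory from the slide above.
# Each entry: (count, sockets, cores_per_socket, mem_gib)
hosts = [
    (4,  2, 1, 5),    # DL380 G4: 2 x 3.4 GHz Xeon (assumed single-core), 5 GiB
    (10, 2, 2, 9),    # DL385 G1: 2 x dual-core Opteron, 9 GiB
    (8,  2, 2, 14),   # BL465c G1: 2 x dual-core Opteron, 14 GiB
]

total_hosts = sum(n for n, *_ in hosts)
total_cores = sum(n * sockets * cores for n, sockets, cores, _ in hosts)
total_mem   = sum(n * mem for n, *_, mem in hosts)

print(total_hosts, total_cores, total_mem)  # 22 hosts, 80 cores, 222 GiB
```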

Network hardware
Each host has 4  1000 Mb/s network connections:
1. IP subnet /25 for the service console
2. IP subnet /8 on a dedicated VLAN for VMotion
3. IP subnet /24 or /25 for use by virtual machines
4. An additional connection identical to (3) above, for redundancy
Some hosts have an extra 2  1000 Mb/s network connections:
5. IP subnet /24 or /24 for use by virtual machines
6. An additional connection identical to (5) above, for redundancy
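The mapping from a machine's address to one of these subnets can be illustrated with Python's `ipaddress` module. The prefixes below are invented placeholders (the slide omits the actual network addresses), so this is a sketch of the lookup, not QUT's addressing plan.

```python
# Hypothetical sketch: map an IP address to the port group whose subnet
# contains it. Subnet addresses are invented placeholders.
import ipaddress

port_groups = {
    "Service Console": ipaddress.ip_network("192.0.2.0/25"),
    "VMotion":         ipaddress.ip_network("10.0.0.0/8"),
    "VM Network A":    ipaddress.ip_network("198.51.100.0/24"),
}

def port_group_for(ip: str) -> str:
    """Return the port group whose subnet contains the given address."""
    addr = ipaddress.ip_address(ip)
    for name, net in port_groups.items():
        if addr in net:
            return name
    raise LookupError(ip + " is not on a known subnet")

print(port_group_for("10.1.2.3"))  # VMotion
```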

Network hardware
Now: external switch tagging (EST) mode. [Slide diagram: separate vSwitches and physical network connections for the service console, VMotion, and the virtual machine subnets.]

Network hardware
We currently need access to four IP subnets just for virtual machines, with access to even more subnets desired. The intention is to move to virtual switch tagging (VST) mode, which:
–Allows virtual machines to access any subnet
–Provides redundancy for all connections (including the service console and VMotion)
–Allows VMotion between more ESX Server hosts

Network hardware
Desired: virtual switch tagging (VST) mode. [Slide diagram: a single vSwitch with physical trunk connections carrying the service console, VMotion, and the virtual machine subnets as VLANs.]
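The motivation for VST can be illustrated with some rough arithmetic: under EST each subnet consumes physical NICs, while under VST subnets become VLAN tags on a shared trunk. Port-group names, VLAN IDs, and the one-plus-one redundancy assumption below are invented for illustration.

```python
# Hypothetical sketch of the EST vs VST NIC economics. VLAN IDs and port
# group names are invented; this is illustrative arithmetic only.
TRUNK_UPLINKS = 2            # a redundant pair of physical trunk connections

vst_port_groups = {          # port group -> VLAN tag on the trunk
    "Service Console": 10,
    "VMotion": 20,
    "VM Network A": 30,
    "VM Network B": 31,
    "VM Network C": 32,
    "VM Network D": 33,
}

# Under EST, reaching a subnet redundantly needs a NIC pair per subnet;
# under VST, every subnet shares the same trunked pair.
est_nics_needed = 2 * len(vst_port_groups)
vst_nics_needed = TRUNK_UPLINKS

print(est_nics_needed, vst_nics_needed)  # 12 2
```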

Storage hardware
Hosts boot from local disks:
–Local disks (all SCSI) are configured into a RAID-1 logical disk
–Our non-blade servers use an extra local disk as a hot spare
All other storage is shared and presented from a SAN:
–Hosts have dual 200 MiB/s (or 400 MiB/s for newer hosts) fibre channel connections to the SAN, one to each SAN fabric (QUT has two identical SAN fabrics for redundancy)
–HP storage arrays (EVA8000 in this case) present shared SAN LUNs to the hosts
–SAN LUNs for use by VMware are 500 GiB RAID-5 LUNs
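The 500 GiB RAID-5 LUN figure can be related to raw disk capacity with the classic RAID formulas. A minimal sketch: note the EVA8000 actually distributes its "vraid" protection across a whole disk group rather than binding a LUN to a fixed disk set, so this is textbook arithmetic, not the array's exact layout.

```python
# Classic RAID capacity arithmetic (not EVA8000-specific).
def raid5_usable(disks: int, disk_gib: float) -> float:
    """RAID-5: one disk's worth of capacity is consumed by parity."""
    if disks < 3:
        raise ValueError("RAID-5 needs at least 3 disks")
    return (disks - 1) * disk_gib

def raid1_usable(disks: int, disk_gib: float) -> float:
    """RAID-1 mirror: half the raw capacity is usable."""
    return disks * disk_gib / 2

# E.g. five (hypothetical) 146 GiB disks in RAID-5 leave 4 x 146 = 584 GiB
# usable, enough to carve a 500 GiB LUN with some headroom.
print(raid5_usable(5, 146))  # 584
```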

Storage
Each SAN LUN provides the backing for a single ESX datastore. (Datastores can span SAN LUNs, but we haven't tried this.) Each datastore is in turn formatted with the VMFS3 filesystem, and a virtual machine's virtual disks are backed by files in VMFS3 filesystems. We keep all of a virtual machine's virtual disks on the same datastore.
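The keep-all-disks-together rule above implies that placement must find a single datastore with room for a VM's total disk size, rather than scattering disks wherever space exists. A hypothetical sketch with invented datastore names and sizes:

```python
# Hypothetical sketch of the placement rule described above. Datastore
# names, capacities, and usage figures are invented for illustration.
datastores = {"ds01": 500, "ds02": 500}   # capacity in GiB
used = {"ds01": 420, "ds02": 130}         # space already allocated

def place_vm(disks_gib: list) -> str:
    """Pick one datastore that can hold every disk of the VM."""
    need = sum(disks_gib)
    for name in datastores:
        if datastores[name] - used[name] >= need:
            used[name] += need
            return name
    raise RuntimeError("no single datastore can hold all of this VM's disks")

# A VM with an 8 GiB boot disk and a 100 GiB data disk cannot fit in ds01's
# remaining 80 GiB, so it lands on ds02:
print(place_vm([8, 100]))  # ds02
```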

Virtual machine configuration
Currently hosting 64 virtual machines.
CPU:
–The majority of virtual machines are configured with a single virtual CPU
–Some are configured with dual virtual CPUs
Memory:
–The majority are configured with 512 MiB or less
–Some use 1 GiB or more
Network:
–All currently use a single virtual network interface
Storage:
–Most have a relatively small boot virtual disk with one or more large data virtual disks
–Some have a larger combined boot/data virtual disk

Guest operating systems
Red Hat Enterprise Linux 4:
–29 virtual machines run this OS
–Even physical host hardware cannot always keep up with the default system timer rate of 1000 clock interrupts/s, so a custom kernel is required to reduce this rate to 100 interrupts/s for virtual machines
–Each virtual machine is created manually by a system administrator
–The operating system is then installed using a network-based Kickstart process from the university's Red Hat Satellite; custom scripts add QUT-specific software and customisation
–The host is automatically registered for updates as part of the Kickstart process
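The timer problem is easy to quantify: each guest expects its own periodic clock interrupt, so the aggregate rate a host must deliver scales with the VM count. A minimal sketch of the arithmetic (the ten-VM example is illustrative, not the actual per-host density):

```python
# Why the 1000 Hz default hurts: every guest's timer must be emulated,
# so per-host interrupt load is VM count times the guest timer rate.
def host_interrupt_rate(vms: int, hz: int) -> int:
    """Aggregate timer interrupts/s a host must deliver to its guests."""
    return vms * hz

# Ten RHEL4 guests on one host, default 1000 Hz vs the custom 100 Hz kernel:
print(host_interrupt_rate(10, 1000))  # 10000 interrupts/s
print(host_interrupt_rate(10, 100))   # 1000 interrupts/s
```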

Installation of guest operating systems
Microsoft Windows Server 2003:
–35 virtual machines run this OS
–Clock interrupts already occur at fewer than 100 interrupts/s, so no customisation of the system timer is required
–Each virtual machine is created by cloning a virtual machine template that was previously installed manually from a Windows installation CD; the template is configured both to run Sysprep and to add the instance to the WSUS server for updates
–A system administrator then modifies the newly created virtual machine if extra disks, memory, etc. are required

Backup of virtual machines
No backup of the ESX Server hosts is made:
–Virtual machines are stored on the shared SAN LUNs and can be restarted from a different ESX Server host if an ESX host is lost
Each virtual machine is backed up traditionally using a network-backup agent:
–If a virtual machine is lost, it must be recreated and restored from tape
The shared SAN LUNs are not backed up:
–If a shared SAN LUN is lost, all virtual machines it contained must be recreated and restored from tape
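The recovery model above can be summarised as: a lost host costs no restores (its VMs restart elsewhere from shared storage), while a lost LUN means restoring every VM it backed from tape. A hypothetical sketch with an invented inventory:

```python
# Hypothetical sketch of the recovery scope implied by the slide above.
# VM names and datastore assignments are invented for illustration.
vm_datastore = {"web01": "ds01", "db01": "ds01", "app01": "ds02"}

def vms_to_restore(failure: str, target: str) -> list:
    """List the VMs that must be rebuilt and restored from tape."""
    if failure == "host":
        return []  # shared storage survives; VMs just restart elsewhere
    if failure == "lun":
        return sorted(vm for vm, ds in vm_datastore.items() if ds == target)
    raise ValueError("unknown failure type: " + failure)

print(vms_to_restore("host", "esx03"))  # []
print(vms_to_restore("lun", "ds01"))    # ['db01', 'web01']
```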

VirtualCenter
VirtualCenter version:
–Late-2005: Initial deployment used VirtualCenter
–Mid-2006: Fresh installation of VirtualCenter 2.0
–Late-2006: Upgrade to a newer VirtualCenter version
Client: only supported on Windows
–Linux users have to use a Terminal Services client to first connect to a Windows host
–Virtual consoles become unreliable when this is done: key-press and key-release events are delayed, causing unwanted repetition on virtual consoles
Server: only supported on Windows
–Installed on a physical host
License server:
–A dedicated license server runs on the same physical host as the VirtualCenter server
VirtualCenter database:
–An Oracle database running under Linux on a physical host
VMotion:
–Separately licensed at additional cost, but an essential tool in our experience
–Allows on-line migration of running virtual machines between physical hosts

Future directions
Review virtual machine backup:
–The current backup strategy does nothing to reduce the number of costly network-backup licenses required
–Network backups generate a lot of extra network traffic, which is undesirable for virtual machines
Configuration of resource pools:
–Currently little consideration is given to guaranteeing resources for virtual machines
–Appropriately configured resource pools should help
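The resource-pool idea can be sketched as simple admission control over reservations: a pool guarantees a floor of resources to its VMs and refuses guarantees beyond its capacity. Pool and VM names and all figures below are invented, and real ESX resource pools also handle shares and limits, which this sketch omits.

```python
# Hypothetical sketch of reservation-based admission control, the core of
# what resource pools would guarantee. All names and figures are invented.
class ResourcePool:
    def __init__(self, name: str, mem_mib: int):
        self.name = name
        self.capacity = mem_mib
        self.reserved = 0

    def reserve(self, vm: str, mem_mib: int) -> None:
        """Refuse any reservation the pool cannot actually honour."""
        if self.reserved + mem_mib > self.capacity:
            raise RuntimeError(
                "pool %s cannot guarantee %d MiB for %s" % (self.name, mem_mib, vm)
            )
        self.reserved += mem_mib

prod = ResourcePool("production", mem_mib=4096)
prod.reserve("web01", 512)
prod.reserve("db01", 1024)
print(prod.capacity - prod.reserved)  # 2560 MiB still available to guarantee
```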