Module 9: PS-M4110 Overview


Welcome to the Dell 12G version of the PowerEdge M1000e training course. This training provides an overview of the components and features of the Dell PowerEdge M1000e Blade Server enclosure.

Module Objectives:
- Describe the PS-M4110 solution
- Describe the features of the PS-M4110
- Install the PS-M4110 in the M1000e chassis
- Operate the drawer of the PS-M4110
- Describe the airflow features of the M1000e
- Identify whether the M1000e chassis has a version 1.1 midplane
- Describe how the PS-M4110 connects to the servers via the fabric modules

Introduction

PS Series Product Timeline
Start of solution innovation in 2003, with continuous advancements of inclusive software features since.
- Early releases: PS100, PS3000
- Later hardware: PS4000 Series, FS7500, PS-M4110
- Software milestones: Thin Provisioning, Auto-Snapshot Manager/VMware® Edition, ASM/ME for Hyper-V™, SAN Headquarters, VMware® vStorage, VMware® Site Recovery Manager, Sync Rep, IPSEC, SED, Data Center Bridging (DCB), Core File Capability

Dell PS-M4110 Blade Array
- EqualLogic iSCSI SAN for the Dell M1000e blade chassis
- 14 hot-plug 2.5" disk drives
- 10GbE controllers
- PS Series firmware / Group Manager
- M1000e CMC provides chassis management
- M1000e chassis provides power, cooling, integrated networking, and density
The PS-M4110 is a blade storage array managed by the M1000e CMC (Chassis Management Controller). The minimum CMC firmware version that supports the PS-M4110 is 4.11-A01.
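Because the PS-M4110 requires CMC firmware 4.11-A01 or later, it is worth verifying the CMC version before installing the array. Below is a minimal sketch using remote racadm from a management station; it assumes the racadm utility is installed locally, and the CMC address and credentials shown are placeholders for your environment:

```python
# Minimal sketch: query the CMC firmware version with remote racadm.
# Assumes racadm is installed on this host; the address and credentials
# below are placeholders, not values from this training module.
import subprocess

CMC_ADDR = "192.168.0.120"   # hypothetical CMC IP
CMC_USER = "root"
CMC_PASS = "calvin"          # replace with your CMC password

result = subprocess.run(
    ["racadm", "-r", CMC_ADDR, "-u", CMC_USER, "-p", CMC_PASS, "getsysinfo"],
    capture_output=True, text=True, check=True,
)

# getsysinfo includes a "Firmware Version" line in its CMC information
# section; print it so it can be compared against the 4.11-A01 minimum.
for line in result.stdout.splitlines():
    if "Firmware Version" in line:
        print(line.strip())
        break
```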

PS-M4110 Blade Array
- Installed in the M1000e blade chassis
- Two control modules
- Two 10GbE ports
- Configurable management port
- 14 hot-pluggable 2.5" disks
- 6Gb/s SAS drives
The M4110 blade array slides into two M1000e slots for power, cooling, and management. It connects directly to the M1000e midplane, so no cables are required.

Dell PS-M4110 and the PowerEdge M1000e 12G Enclosure
The Dell PowerEdge M1000e Modular Server Enclosure is a breakthrough in enterprise server architecture. The enclosure and its components spring from a revolutionary, ground-up design incorporating the latest advances in power, cooling, I/O, and management technologies. Each blade server is often dedicated to a single application. The blades are literally servers on a card, containing processors, memory, integrated network controllers, an optional Fibre Channel host bus adapter (HBA), and other input/output (I/O) ports.
The PowerEdge M1000e solution is designed for customers that require high performance, high availability, and manageability in the most rack-dense form factor, including:
- Corporate
- Public
- Small/Medium Business (SMB)
Modular blade servers are typically implemented in:
- Virtualization environments
- SAN applications (Exchange, database)
- High-performance cluster and grid environments
- Front-end applications (web apps/Citrix/terminal services)
- File sharing access
- Web page serving and caching
- SSL encryption of web communication
- Audio and video streaming
Like most clustering applications, blade servers can also be managed to include load balancing and failover capabilities. The PS-M4110 inserts into two M1000e slots; the M1000e chassis provides power, cooling, and management for the M4110 and the server blades.

PS-M4110 Blade Array in the M-Series Chassis Environment
- Two 10GbE iSCSI ports wired internally to the M-Series Ethernet fabrics
- Chassis provides power, cooling, and Ethernet switch configuration
- CMC provides chassis management, configuration, and monitoring
- 10GbE fabric selection: the M4110 defaults to the B fabric; it can be on the A fabric with a version 1.1 midplane
- Supports DCB (Data Center Bridging)
- M4110 performance is similar to the PS6100E
- The PS-M4110 can be installed in slots 1-2, 3-4, 5-6, 7-8, 9-10, 11-12, 13-14, 15-16, or 17-18
- RAID types 6, 6 Accelerated, 50, and 10 are all supported
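As a rough illustration of how the supported RAID types trade capacity for redundancy across the array's 14 drives, here is a hedged back-of-the-envelope estimate. The one-spare assumption and per-level arithmetic are simplifications for teaching purposes, not EqualLogic's exact allocation policy (the firmware reserves spares and metadata differently per RAID policy):

```python
# Back-of-the-envelope usable-capacity estimate for a 14-drive array.
# NOTE: simplified model; EqualLogic firmware reserves hot spares and
# metadata per RAID policy, so real usable capacity will differ.
DRIVES = 14
DRIVE_TB = 0.9  # hypothetical 900GB 2.5" SAS drives

def usable_tb(raid: str, drives: int = DRIVES, size_tb: float = DRIVE_TB) -> float:
    data = drives - 1  # assume one hot spare for simplicity
    if raid in ("6", "6 accelerated"):
        return (data - 2) * size_tb   # two parity drives
    if raid == "50":
        return (data - 2) * size_tb   # two RAID 5 sets, one parity drive each
    if raid == "10":
        return (data // 2) * size_tb  # mirrored pairs
    raise ValueError(f"unsupported RAID type: {raid}")

for level in ("6", "50", "10"):
    print(f"RAID {level}: ~{usable_tb(level):.1f} TB usable")
```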

The Power to Do More in Less Space: M1000e and the PS-M4110 EqualLogic Blade Array
A traditional deployment of 24 servers, 2 PS Series arrays, and 4 switches can be collapsed into a single 10U M1000e chassis holding 24 blade servers, 2 PS-M4110 arrays, and 4 blade switches.
Consider the following:
- Rack space saved: 22U
- Network cables not needed: 54
- Power cables not needed: for 4 switches and 24 servers
- Physical convergence = 3x space savings
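The 22U figure is consistent with common rack-mount form factors. As a hedged sanity check (the 1U server, 1U switch, and 2U array sizes below are assumptions for illustration, not figures from the slide):

```python
# Hedged sanity check of the slide's space-savings claim.
# Form factors are assumptions: 1U servers, 1U switches, 2U arrays.
rack_u = 24 * 1 + 4 * 1 + 2 * 2   # 24 servers + 4 switches + 2 arrays = 32U
blade_u = 10                       # one M1000e chassis

print(f"traditional: {rack_u}U, blade: {blade_u}U")
print(f"rack space saved: {rack_u - blade_u}U")    # 22U
print(f"space savings: ~{rack_u / blade_u:.1f}x")  # ~3x
```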

Installation and Drawer Operation

Installing the PS-M4110
- The PS-M4110 installs in the top slots using the rail guides on the top.
- The PS-M4110 installs in the bottom slots using the slot guides on the bottom.
- Slide the array into the available M1000e slots.
- Seat the array by sliding the handle firmly into place.
Before installing the PS-M4110 in an M1000e enclosure, note the following:
- Wear an electrostatic strap to prevent electrostatic damage.
- When shipped by itself, the PS-M4110 includes a retaining clip on the front to prevent the array drawer from sliding out of the array, and protective plastic covers on the back to protect the rear connectors from damage. You must remove the retaining clip and protective covers before installing the array in the M1000e enclosure. Optionally, you can also remove the protective caps covering the serial ports on the front. Save the clip and protective covers for future use.
After you have installed the PS-M4110 in the M1000e enclosure, you can verify proper installation by turning on power to the M1000e enclosure. If the PS-M4110 is properly installed, the Blade System Status LED on its front panel will light up shortly after the M1000e is powered on.

PS-M4110 Drawer
To open the array's inner drawer, push the array's front panel and release it quickly to unlatch the drawer. When the drawer is unlatched, a "Caution" label becomes visible.
Caution: The front panel is not designed as a handle and can break if treated roughly. When opening the array's inner drawer, do not pull on the front panel; grip and pull the drawer by its top, bottom, or sides.
The inner drawer can be opened while the member is operating to gain access to the hot-swap components.

PS-M4110 Drawer
When the PS-M4110 is out of the M1000e enclosure, the array's drawer cannot be opened unless its safety locking mechanism is released. A release button located on the side of the PS-M4110 releases the latch that secures the array's drawer to its outer housing; this prevents the drawer from opening accidentally during handling outside of the M1000e enclosure. To open the array drawer, press and hold the release button to manually unlock the safety latch.

PS-M4110 / M1000e

EqualLogic PS-M4110 Ecosystem
The PS-M4110 operates in the M1000e chassis alongside 11G or 12G PowerEdge blades, connected through Dell Force10 or PowerConnect switches.
PS-M4110 Components:
- Dual, hot-pluggable 10GbE controllers
- 4GB of memory per controller
- 1 x dedicated 10/100 management port, accessible through the CMC
- 6Gb/s SAS back end
- 14 x 2.5" drives
PS-M4110 Design:
- Drawer-in-drawer, double-wide, half-height storage blade
- Operates/interoperates with servers inside or outside the chassis
PS-M4110 Scalability:
- Up to 4 PS-M4110 arrays per blade chassis
- Up to 2 PS-M4110 arrays per EqualLogic group
- Scale up to 16 EqualLogic arrays per group by joining arrays outside the chassis
EqualLogic Host Software:
- Auto-Snapshot Manager – Microsoft®, VMware®, Linux®
- Multi-Path I/O
- PowerShell Tools
- Datastore Manager
- SAN HQ
EqualLogic Array Software:
- Peer storage architecture
- Advanced load balancing
- Snapshots, cloning, replication
- Thin provisioning
- Thin clones
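The scalability limits above lend themselves to a quick configuration check. Here is a minimal sketch that validates a proposed layout against the per-chassis and per-group limits stated on this slide; the function and its inputs are illustrative, not an EqualLogic tool:

```python
# Illustrative check of a proposed EqualLogic layout against the
# PS-M4110 limits stated on the slide.
MAX_M4110_PER_CHASSIS = 4
MAX_M4110_PER_GROUP = 2
MAX_ARRAYS_PER_GROUP = 16

def validate_layout(m4110_per_chassis: int, m4110_in_group: int,
                    external_arrays_in_group: int) -> list:
    """Return a list of violations (an empty list means the layout fits)."""
    problems = []
    if m4110_per_chassis > MAX_M4110_PER_CHASSIS:
        problems.append("more than 4 PS-M4110 arrays in one M1000e chassis")
    if m4110_in_group > MAX_M4110_PER_GROUP:
        problems.append("more than 2 PS-M4110 arrays in one EqualLogic group")
    if m4110_in_group + external_arrays_in_group > MAX_ARRAYS_PER_GROUP:
        problems.append("more than 16 arrays total in one EqualLogic group")
    return problems

# Example: 2 blade arrays plus 14 external arrays exactly fills a group.
print(validate_layout(m4110_per_chassis=2, m4110_in_group=2,
                      external_arrays_in_group=14) or "layout OK")
```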

PS-M4110 Configuration Option #1: Fabric B Switch (Default)
(Diagram: the PS-M4110 Fabric Interface Module, with CM0 active and CM1 passive, connects through the midplane to the Fabric B1 and B2 switches; Fabrics A and C serve the half-height blade servers' Ethernet LOMs and mezzanine cards, with external fabric connections at the rear.)
For increased availability, the Ethernet ports on both PS-M4110 control modules are automatically connected to each redundant M1000e I/O module (IOM) of the configured fabric, assuming both IOMs are installed. One port is active and one port is passive. For example, if a PS-M4110 is configured for Fabric B and both the B1 and B2 IOMs are installed, the Ethernet ports from each control module are connected to both the B1 and B2 IOMs. This provides a total of four potential Ethernet paths; however, only one path is active at any given time. In this example, if the B1 IOM fails, both the active and passive PS-M4110 ports automatically fail over to the B2 IOM.
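To make the four-path relationship concrete, here is a toy model of the path set and the failover rule described above. It is purely illustrative; actual path selection is handled by the array firmware:

```python
# Toy model of PS-M4110 Fabric B path selection and IOM failover.
# Purely illustrative; real path selection is done by array firmware.
from itertools import product

control_modules = ["CM0", "CM1"]
fabric_b_ioms = ["B1", "B2"]

# Each control module links to each redundant IOM: 2 x 2 = 4 paths.
paths = list(product(control_modules, fabric_b_ioms))
print(f"potential paths: {paths}")  # four potential Ethernet paths

def usable_paths(failed_iom=None):
    """Only paths through healthy IOMs remain usable after a failure."""
    return [(cm, iom) for cm, iom in paths if iom != failed_iom]

# If B1 fails, all remaining paths run through B2 (the failover target).
print(f"after B1 failure: {usable_paths('B1')}")
```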

PS-M4110 Configuration Option #2: Fabric A Switch
(Diagram: the PS-M4110 Fabric Interface Module, with CM0 and CM1, connects through the midplane to the Fabric A1 and A2 Ethernet switches; Fabrics B and C serve the half-height blade servers' mezzanine cards, with external fabric connections at the rear.)
To install the M4110 on the A fabric, you must have a version 1.1 midplane installed.

PS-M4110 Configuration Options #3/#4: Fabric A/B Pass-Through to External Switch Stack
(Diagram: the PS-M4110 Fabric Interface Module connects through the midplane to Fabric A1 and A2 pass-through modules, which link to an external Ethernet switch stack rather than to internal switch IOMs; Fabrics B and C serve the half-height blade servers' mezzanine cards.)
To install the M4110 on the A fabric, you must have a version 1.1 midplane installed.

PS-M4110 Configuration
The PS-M4110 provides a single active 10Gb Ethernet port that can be connected to one of two redundant fabrics (A or B) in the M1000e chassis. The diagram depicts the internal links between the FIOM (Fabric Interface Module) and the fabric IO modules.
When a customer configures Colossus (the PS-M4110) for Fabric B, the two FIOM 10G ports (one active and one passive) establish links with a Fabric B IOM: both ports link either to B1 or to B2 (shown in red in the diagram, where both the active and passive ports link to B1). The port from the active FIOM is active and the other is passive, but the user sees both ports as connected in the Fabric B1 switch configuration. If Fabric B1 were to fail, both the active and passive FIOM ports would automatically link to Fabric B2.
When configuring Colossus for Fabric B, the user cannot specify whether to use the B1 or B2 fabric; Colossus automatically chooses one. Consequently, when multiple Colossus arrays inside a chassis are configured for Fabric B, B1 and B2 must be stacked, or the two arrays may not be able to talk to one another.

M1000e I/O Module Placement Options: Single Fabric vs. Split Fabric
Put the same kind of module in the same fabric: for example, green modules in the B fabric and yellow modules in the C fabric. Do not mix green and yellow modules within the B fabric or within the C fabric; that would be considered a split fabric.

10G IOM Fabric Support
The PS-M4110 operates only with 10GbE I/O modules, and all M4110 connections are 10GbE. Supported 10G K (KR) options:
- Force10 MXL (Navasota)
- M8024-K 10Gb (Lavaca)
- Brocade M8428-K (Brazos)
- KR pass-through module with a top-of-rack (ToR) 8024F or ToR NX-5020
Other M-Series IOM fabrics are not supported for the PS-M4110:
- 10G XAUI
- 1GE
- FC
- InfiniBand
The I/O module must be K-based (or a 10GbE pass-through module) for the PS-M4110.

Switch Configurations
Stack the switches together, or create a LAG between them.
When using a PS-M4110 inside an M1000e enclosure, the IO modules must be interconnected (stacked or LAGed together). For example, if Fabric B is configured, the B1 and B2 IOMs must be stacked or LAGed together. The redundant fabric IOMs must be connected using interswitch links (stack interfaces or link aggregation groups (LAGs)), and the links must have sufficient bandwidth to handle the iSCSI traffic.
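A rough way to reason about "sufficient bandwidth" for the interswitch links: in the worst case, iSCSI traffic between a host port on one IOM and the active array port on the other IOM must cross the ISL. The sizing rule of thumb below is an assumption for illustration, not a Dell sizing formula:

```python
# Hedged ISL sizing sketch: assume the worst case, where all active
# array ports sit on the opposite IOM from the hosts driving them.
# This rule of thumb is illustrative, not an official Dell formula.
ARRAY_PORT_GBPS = 10   # each PS-M4110 has one active 10GbE port
ISL_LINK_GBPS = 10     # e.g., 10GbE links in a LAG

def isl_links_needed(num_arrays: int, headroom: float = 1.0) -> int:
    """Links needed so cross-IOM iSCSI traffic is not the bottleneck."""
    worst_case_gbps = num_arrays * ARRAY_PORT_GBPS * headroom
    return -(-int(worst_case_gbps) // ISL_LINK_GBPS)  # ceiling division

# Two blade arrays in the chassis -> at least two 10GbE ISL links.
print(isl_links_needed(num_arrays=2))
```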

Multiple M1000e Chassis
For multiple chassis, stacking should be done in two rings, a left-side ring and a right-side ring, with LAGs connected between the rings so that each port on the storage devices can communicate with any other port on the storage devices.
DCB support requires:
- A DCB-capable external switch (B8000)
- IOM: Navasota (Force10 MXL) only
- An Intel or Brocade CNA mezzanine card

Module Summary
Now that you have completed this module, you should be able to:
- Describe the PS-M4110 solution
- Describe the features of the PS-M4110
- Install the PS-M4110 in the M1000e chassis
- Operate the drawer of the PS-M4110
- Describe the airflow features of the M1000e
- Identify whether the M1000e chassis has a version 1.1 midplane
- Describe how the PS-M4110 connects to the servers via the fabric modules

Questions? (Nashua site: do a lab walk-through.)