
1 EMC OPENSTACK CLOUD SOLUTIONS
Reference architecture with Red Hat Enterprise Linux OpenStack Platform

2 IT As A Service Delivers Business Agility
Broker & Builder New Business Model New Technology Architecture New Operating Models and Roles Cost Efficiency CULTURE Open Source Agile Apps Big Data TECH BUSINESS DevOps Mobile Customer Data Speed. IT is being disrupted by changes in technology, business, and culture; to address them, IT has to move from traditional delivery models to a broker/provider model. That shift requires a hybrid IT model built on a cloud framework: adopt hybrid IT for capability and capacity, manage risk across the cloud supply chain, and maintain visibility, governance, and control across clouds.

3 OpenStack As An Enabler For Transformation
New Apps Application Fabric Data Fabric Metering Engine Service Catalog Orchestration Engine User Portal Policy Engine Dev-Ops New Roles Agile Processes PaaS. OpenStack lends itself nicely to 3rd Platform apps: developer-friendly APIs provide the capability to automate services for cost-effective operations, though new skill sets and roles are needed. OpenStack is an enabler for IT organizations to build a private cloud geared toward a software-defined model while providing the capability to move toward a DevOps model. EMC solutions and services deliver cloud management, the right applications and application architectures to run on the cloud, and DevOps advisory services. OpenStack suits applications that use APIs to ask for "sets of resources": it provides open, standard APIs that can be consumed programmatically, and the OpenStack controller can schedule and manage those resources optimally. The APIs also lend themselves to DevOps, where operators can automate infrastructure pre- and post-deployment using Puppet or Chef; OpenStack integrates well with configuration management and CI/CD tools. Service APIs SOFTWARE DEFINED DC TRANSFORMATION Cloud Software Platform: a foundation for SDDC enablement

4 OPERATIONAL EFFICIENCY
Why OpenStack? INNOVATE AND COMPETE OPERATIONAL EFFICIENCY OPEN PLATFORM CHOICE OF TECHNOLOGY COST SAVINGS Reasons customers give for why they want OpenStack (drive each to agility where natural in the talk track): Operational efficiency: OpenStack is designed for self-service; end users get to resources more quickly. Open platform: open source; avoids vendor lock-in and leverages an entire community. Choice: a large ecosystem of vendors to choose from to deploy OpenStack; hardware/software agnostic; choose your own storage, network, etc. Innovate and compete: more agile; designed for Platform 3; use-case changes come from the community and can head toward the new; compete on next-generation applications. Source: OpenStack User Survey, 2014

5 What Is OpenStack? Talk about how the OpenStack architecture is modular and flexible: you can pick the services you need. Delivers IaaS services: compute, networking, storage, and more. Flexible and modular architecture. Foundation for a software-defined datacenter. Analogous to the Linux kernel (very tunable). All services are exposed via APIs (infrastructure as code).
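To make the "infrastructure as code" point concrete, here is a minimal sketch (assuming the Juno-era command-line clients and credentials already sourced into the shell) showing that each service is driven through its own public API:
$ keystone service-list      # Identity: the catalogue of registered services and endpoints
$ nova list                  # Compute: instances owned by the current tenant
$ cinder list                # Block Storage: volumes and their status
$ glance image-list          # Image service: images available to boot from
$ neutron net-list           # Networking: tenant networks
Each of these calls is a thin wrapper over a REST API, so the same operations can be scripted or driven from configuration management and CI/CD tools.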

6 EXISTING APPLICATION INVENTORY & STRATEGY SOFTWARE DEFINED DATACENTER
Application Right-Fit: map the existing application inventory and strategy, plus new use cases (digital experience, real-time analytics), onto the software-defined datacenter. Re-write or replace for Platform 3.0 (3rd-generation apps and data platform); refactor or migrate Platform 2.0/2.5; leave in place or retire Platform 1.0. Platform 1: mainframe era with mainframe application workloads. Platform 2: client-server and virtualized x86 traditional app workloads; monolithic applications; scale-up workloads; applications expect a resilient infrastructure and the infrastructure provides resiliency; high degree of virtualization; IT operational processes largely unchanged. Platform 2.5: applications assume resilient infrastructure; IT process automation accelerates and automates IT processes; more agile IT; more agile DevOps focus. Platform 3: applications built on next-generation architecture for cloud, social, mobile, and big data; loosely coupled, small components; stateless execution modules; the application takes responsibility for resiliency and fault tolerance; assumes high data resiliency; workloads easily scale out; DevOps focus.

7 Platform Definition
Platform 2: monolithic applications; relational databases; kernel virtualization. Platform 3: components re-architected to be loosely coupled, elastic, and fault tolerant; No-SQL, in-memory, distributed data stores; kernel virtualization / containers.

8 Personas Enterprise Admin Cloud Admin Dev-Ops
Enterprise Infra Admin: responsible for managing and maintaining an IT infrastructure (in a private cloud); years of experience with Unix and Linux systems administration; manages IT infrastructure, hypervisors, and the cloud platform; interested in how to deal with planned and unplanned failures, system maintenance, and utilization. Needs: a clear and efficient catalogue to manage the infrastructure lifecycle; a great management interface for hardware resource utilization and quotas; good backups and a recovery plan; a user-friendly administrative GUI and a logically set out, well-explained command-line instruction set; well-documented APIs to integrate into other tools. Cloud Admin: needs a management interface for utilization, quotas, etc., and APIs to integrate into tools; cluster management (IT automation) covering configuration and package management (what gets installed and how), deployment, naming, and monitoring; concerned with planned and unplanned failures, utilization, and maintenance; may use tools such as Capistrano for deployment. Dev-Ops/Developer: proficient in administering Unix and Linux systems; a competent shell and Python programmer; an early adopter of Puppet; already using AWS for IaaS; primarily developing web applications for internal use; API driven; needs a catalogue or CLI for the initial deployment, with the rest done via API calls; wants a user-friendly GUI, a logically set out and explained command-line instruction set, and integration with a preferred CI tool; will integrate with CI/CD tools and is open to open source.

9 OpenStack Framework
Horizon dashboard, Swift object store, Glance image store, Nova compute, Cinder volume service, Keystone identity service, Heat orchestration, Ceilometer telemetry service, Trove database, Neutron networking, Sahara data processing. AWS analogues: S3, EC2, EBS, VPC, RDS, AMI, IAM, CloudFormation. Currently 14 integrated projects within OpenStack; all of these projects communicate via public APIs. Quite a few new projects are focused on management and operations. The services have behavioral compatibility with AWS.

10 EMC Integration with OpenStack
OpenStack Drivers Broad Portfolio Fit Your Environment Evolve With Your Cloud Software Defined Efficient Management Hyper Converged, S/W Defined Use Your Hardware Delivers On Speed And Space Flash Performance Reduce Deployment Costs File or Block Hybrid Low $ Per Transaction Any Workload Data Lake Scale-out File and Object System XtremIO ScaleIO ViPR Isilon

11 Technical Evidence: Solution
Reference architecture with Red Hat, Juno release

12 EMC + Red Hat Technical Evidence
Storage Arrays Certified & Validated Designs Partner Tools Integration Joint Services Cooperative Support EMC large services, solution focused. Partnered with Red Hat to provide validated reference designs. Integrated with the Red Hat tool set to enable better manageability. Joint service and support.

13 Red Hat Enterprise Linux OpenStack Platform
Red Hat's officially supported OpenStack distribution. Tightly integrated with Red Hat Enterprise Linux. Focus on code maturity, stability, and security. 3rd-party ecosystem of certified platforms. Product documentation and reference architectures. 3-year lifecycle and global support.

14 EMC Reference Architecture with Red Hat OpenStack
Red Hat: OpenStack deployment and management; core OpenStack and related projects; Cinder drivers; Fuel automated install; full-stack support; robust OpenStack distribution, best in class; recognized OpenStack training. EMC: best-in-class storage; wide storage portfolio; Cinder project leadership; EMC Cinder storage drivers; already a big contributor and becoming a larger contributor; leveraging the community and listening to our customers to continually improve and innovate.

15 Solution Components
Supported hardware: VNX, XtremIO, ScaleIO, with Cinder block drivers for iSCSI, FC, and SDC. Software: OpenStack Red Hat Juno release; KVM hypervisor in the RHEL 7.1 kernel; RHEL 7.1 as the cloud OS. Software tools: Red Hat OpenStack Platform Installer version 1.71; DM-Multipath (el7).

16 Logical Architecture

17 VNX with OpenStack Unified Block and File Storage system
OpenStack Cinder support since the Grizzly release. Supported drivers: FC and iSCSI. Supports all main volume operations. Supports Manila (File Share Service), although Manila is not part of the reference architecture testing. Versatility for a wide variety of use cases in an OpenStack environment.
Fully automated storage tiering support. VNX supports fully automated storage tiering (FAST), which requires the FAST license to be activated on the VNX. The OpenStack administrator can use the extra spec key storagetype:tiering to set the tiering policy of a volume, and the extra spec key fast_support=True to let the Block Storage scheduler find a volume back end that manages a VNX with the FAST license activated. The five supported values for storagetype:tiering are StartHighThenAuto (the default), Auto, HighestAvailable, LowestAvailable, and NoMovement. A tiering policy cannot be set for a deduplicated volume; the user can check the storage pool properties on the VNX to learn the tiering policy of a deduplicated volume. Example of creating volume types with a tiering policy:
$ cinder type-create "AutoTieringVolume"
$ cinder type-key "AutoTieringVolume" set storagetype:tiering=Auto fast_support=True
$ cinder type-create "ThinVolumeOnLowestAvailableTier"
$ cinder type-key "ThinVolumeOnLowestAvailableTier" set storagetype:provisioning=thin storagetype:tiering=LowestAvailable fast_support=True
FAST Cache support. VNX has a FAST Cache feature, which requires the FAST Cache license to be activated on the VNX. The OpenStack administrator can use the extra spec key fast_cache_enabled (True or False) to choose whether to create a volume on a back end that manages a pool with FAST Cache enabled. When creating a volume, if fast_cache_enabled is set to True in the volume type, the volume is created by a back end that manages a pool with FAST Cache enabled.
Storage group automatic deletion. For volume attachment, the driver keeps a storage group on the VNX for each compute node hosting the VM instances that consume VNX Block Storage, using the compute node's hostname as the storage group name. All volumes attached to VM instances on a compute node are put into the corresponding storage group. If destroy_empty_storage_group=True, the driver removes the empty storage group when its last volume is detached. For data safety, setting destroy_empty_storage_group=True is not recommended unless the VNX is exclusively managed by one Block Storage node, because a consistent lock_path is required for operation synchronization.
EMC storage-assisted volume migration. The EMC VNX direct driver supports storage-assisted volume migration. When the user starts a migration with cinder migrate --force-host-copy False volume_id host or cinder migrate volume_id host, Cinder tries to leverage the VNX's native volume migration functionality. Native volume migration is not triggered when migrating between back ends with different storage protocols (for example, FC and iSCSI) or when the volume is being migrated across arrays.
Initiator auto registration. If initiator_auto_registration=True, the driver automatically registers iSCSI initiators with all working iSCSI target ports on the VNX array during volume attachment (initiators that are already registered are skipped). If the user wants to register the initiators with only specific ports on the VNX, this functionality should be disabled.
Initiator auto deregistration. Enabling storage group automatic deletion is the precondition for this functionality. If initiator_auto_deregistration=True is set, the driver deregisters all the iSCSI initiators of the host after its storage group is deleted.
Read-only volumes. OpenStack supports read-only volumes. The following command sets a volume to read-only:
$ cinder readonly-mode-update volume True
After a volume is marked read-only, the driver forwards that information when a hypervisor attaches the volume, and the hypervisor has an implementation-specific way to make sure the volume is not written.
Multiple pools support. Normally one storage pool is configured for a Block Storage back end (a pool-based back end), so that only that storage pool is used by that back end. If storage_vnx_pool_name is not given in the configuration file, the driver allows the user to use the extra spec key storagetype:pool in the volume type to specify the storage pool for volume creation. If storagetype:pool is not specified in the volume type and storage_vnx_pool_name is not found in the configuration file, the driver randomly chooses a pool in which to create the volume. This kind of back end is called an array-based back end. Example configuration of an array-based back end:
san_ip =
#Directory path that contains the VNX security file. Make sure the security file is generated first
storage_vnx_security_file_dir = /etc/secfile/array1
storage_vnx_authentication_type = global
naviseccli_path = /opt/Navisphere/bin/naviseccli
default_timeout = 10
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
destroy_empty_storage_group = False
volume_backend_name = vnx_41
In this configuration, if the user wants to create a volume on a certain storage pool, a volume type with an extra spec specifying the storage pool should be created first; the user can then use this volume type to create the volume. Example of creating the volume type:
$ cinder type-create "HighPerf"
$ cinder type-key "HighPerf" set storagetype:pool=Pool_02_SASFLASH volume_backend_name=vnx_41
Multiple pools support is an experimental workaround that predates the pool-aware-cinder-scheduler blueprint. Enabling this feature is NOT recommended, since Juno already supports pool-aware-cinder-scheduler; a later driver update will introduce the driver-side changes that cooperate with it.
Volume number threshold. VNX has a limit on the maximum number of pool volumes that can be created in the system. When the limit is reached, no more pool volumes can be created even if there is enough remaining capacity in the storage pool. In other words, if the scheduler dispatches a volume creation request to a back end that has free capacity but has reached the limit, the back end will fail to create the volume. The default value of the option check_max_pool_luns_threshold is False. When check_max_pool_luns_threshold=True, the pool-based back end checks the limit and reports 0 free capacity to the scheduler if the limit is reached, so the scheduler can skip pool-based back ends that have run out of pool volume numbers.
FC SAN auto zoning. The EMC direct driver supports FC SAN auto zoning when ZoneManager is configured. Set zoning_mode to fabric in the back-end configuration section to enable this feature. For ZoneManager configuration, refer to the "Fibre Channel Zone Manager" section of the OpenStack documentation.
Multi-backend configuration example:
[DEFAULT]
enabled_backends = backendA, backendB
[backendA]
storage_vnx_pool_name = Pool_01_SAS
#Directory path that contains the VNX security file. Make sure the security file is generated first.
#Timeout in Minutes
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
initiator_auto_registration = True
[backendB]
storage_vnx_pool_name = Pool_02_SAS
san_ip =
san_login = username
san_password = password
[database]
max_pool_size = 20
max_overflow = 30
For more details on multi-backend, see the OpenStack Cloud Administration Guide.
Force delete volumes in storage groups. Some available volumes may remain in storage groups on the VNX array due to OpenStack timeout issues, but the VNX array does not allow the user to delete volumes that are still in storage groups. The option force_delete_lun_in_storagegroup allows the user to delete such available volumes. When force_delete_lun_in_storagegroup=True is set in the back-end section and the user tries to delete volumes that remain in storage groups on the VNX array, the driver moves the volumes out of their storage groups and then deletes them. The default value of force_delete_lun_in_storagegroup is False.

18 VNX : Reference Architecture
Block Storage service: Cinder. Supported protocols tested: iSCSI and FC. Multipathing must be installed and configured to ensure proper operation. Supports all main volume operations. Integrated into the OpenStack trunk. Unified hybrid storage for the mid-range: UNIFIED (all mixed workloads, all access protocols), HYBRID (optimized for flash, with the benefits of tiered storage), PRICE OPTIMIZED (lowest $/IO, lowest $/GB). Technology leadership: multicore optimized, designed for virtualization, unified file and block storage. The VNX Cinder driver capabilities and configuration options (FAST tiering, FAST Cache, storage group automatic deletion, storage-assisted migration, initiator registration and deregistration, read-only volumes, multiple pools, volume number threshold, FC SAN auto zoning, multi-backend, and force delete) are as described on the previous slide. Multipath requirements apply to VNX and XtremIO for both FC and iSCSI protocols: multipathing must be installed and configured on all OpenStack nodes to ensure proper operation of Cinder attach and detach commands with VNX and XtremIO Cinder back ends.
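As a hedged illustration of what the multipath prerequisite typically looks like on RHEL 7 nodes (package names, option placement, and values are assumptions; the authoritative steps are in the EMC/Red Hat reference architecture guide):
# On every compute and Block Storage node, install and enable DM-Multipath
$ sudo yum install -y device-mapper-multipath
$ sudo mpathconf --enable --with_multipathd y
# nova.conf on compute nodes ([libvirt] section): use multipath for iSCSI volume attachments
iscsi_use_multipath = True
# cinder.conf on the Block Storage node: use multipath for image/volume transfers
use_multipath_for_image_xfer = True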

19 XtremIO: OpenStack All-Flash Array
Unique scale-out array with in-memory data services. Up to 1,500,000 IOPS. Breakthrough workload acceleration, consolidation, and agility. Block Storage service: Cinder. Supported protocols: FC and iSCSI. Provides support for all main volume operations.

20 XtremIO: Reference Architecture
Block Storage Cinder support. Supported protocols certified: iSCSI and FC. Multipathing must be installed and configured to ensure proper operation. All main volume operations supported. Juno support in the OpenStack trunk. All-flash array ideal for high performance; scale-out architecture: scale storage resources together with the cloud infrastructure. Setting thin provisioning and multipathing parameters: to support thin provisioning and multipathing with the XtremIO array, the following parameters in the Nova and Cinder configuration files should be modified. Thin provisioning: all XtremIO volumes are thin provisioned; the default value of 20 should be maintained for the max_over_subscription_ratio parameter, and the use_cow_images parameter in the nova.conf file should be set to False as follows:
use_cow_images = false
Multipathing: the use_multipath_for_image_xfer parameter in the cinder.conf file should be set to True as follows:
use_multipath_for_image_xfer = true
Example XtremIO back-end configuration in cinder.conf:
[DEFAULT]
enabled_backends = XtremIO
[XtremIO]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
san_ip = XMS_IP
xtremio_cluster_name = Cluster01
san_login = XMS_USER
san_password = XMS_PASSWD
volume_backend_name = XtremIOAFA

21 ScaleIO – Block services
SDC Metadata Mgr (MDM) Applications (NOVA) SDS Cinder Volume GATEWAY Driver. Hyperscale, converged server SAN solution for commodity platforms; the software runs on physical and virtual hosts. Four key components: SDC, SDS, MDM, GATEWAY. The Cinder driver executes volume operations through the REST gateway to the back-end ScaleIO components; the Nova driver handles compute- and instance-related volume operations. Software solution that scales elastically. Data protection: replicates twice; supports erasure coding. The Cinder driver interfaces between ScaleIO and OpenStack and presents volumes to OpenStack as block devices available for storage. The ScaleIO driver executes volume operations by communicating with the back-end ScaleIO components through the ScaleIO REST gateway. Presentation layer: ScaleIO Data Client (SDC), a block device driver that exposes volumes to applications; the service must run to provide access to volumes (over TCP/IP). Data server: ScaleIO Data Server (SDS) abstracts the storage media, contributes to storage pools, and performs I/O operations. ScaleIO Metadata Manager (MDM): not in the data path; monitoring and configuration; holds the cluster-wide component mapping. Runs on commodity platforms.

22 ScaleIO – Reference Architecture
SDC Metadata Mgr (MDM) Applications (NOVA) SDS Cinder Volume GATEWAY Driver. Block Storage Cinder support. Supported protocols certified: FC and iSCSI. All main volume operations supported. Juno support available via EMC's Git repository. Software solution that scales elastically. Data protection: replicates twice; supports erasure coding. The Cinder driver interfaces between ScaleIO and OpenStack and presents volumes to OpenStack as block devices available for storage. The ScaleIO driver executes volume operations by communicating with the back-end ScaleIO components through the ScaleIO REST gateway. Presentation layer: ScaleIO Data Client (SDC), a block device driver that exposes volumes to applications; the service must run to provide access to volumes (over TCP/IP). Data server: ScaleIO Data Server (SDS) abstracts the storage media, contributes to storage pools, and performs I/O operations. ScaleIO Metadata Manager (MDM): not in the data path; monitoring and configuration; holds the cluster-wide component mapping. Runs on commodity platforms.

23 Certified Volume Operations
Certified volume operations for VNX, XtremIO, and ScaleIO: create, delete, and extend volumes; snapshot volumes and delete snapshots; list volumes and snapshots; attach and detach volumes; create a volume from a snapshot; copy an image to a volume and a volume to an image; clone a volume; create a volume on a specific back end; migrate a volume and retype a volume; create and delete consistency groups; create and delete consistency group snapshots.

24 Solution Architecture

25 EMC OpenStack Cloud Solution
CINDER FUNCTIONALITY

26 Cinder – Block Storage Service
Persistent block-level storage devices for use with OpenStack Compute instances. Manages the creation, attaching, and detaching of block devices to servers. Block storage volumes are fully integrated into OpenStack Compute and the dashboard, allowing cloud users to manage their own storage needs. Snapshots are supported and can be restored or used to create a new block storage volume. The OpenStack Block Storage service provides persistent block storage resources that OpenStack Compute instances can consume, including secondary attached storage similar to the Amazon Elastic Block Storage (EBS) offering. In addition, you can write images to a Block Storage device for Compute to use as a bootable persistent instance. The Block Storage service differs slightly from Amazon EBS: it does not provide a shared storage solution like NFS, and a device can be attached to only one instance at a time.
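As a quick illustration (a minimal sketch using the standard cinder CLI; the volume name and size are example values), a cloud user can drive the basic lifecycle from the command line as well as from the dashboard:
$ cinder create --display-name demo-vol 10     # create a 10 GB persistent volume
$ cinder list                                  # the volume appears with status "available"
$ cinder show demo-vol                         # details: size, type, attachments, bootable flag
$ cinder delete demo-vol                       # release the storage when it is no longer needed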

27 Cinder Capabilities
Volumes: allocated block storage resources that can be attached to instances as secondary storage or used as the root store to boot instances. Volumes are persistent R/W block storage devices, most commonly attached to the compute node through iSCSI. Snapshots: a read-only, point-in-time copy of a volume. The snapshot can be created from a volume that is currently in use (through the use of --force True) or in an available state, and can then be used to create a new volume through create-from-snapshot. Backups: an archived copy of a volume, stored in OpenStack Object Storage (Swift).

28 Cinder Capabilities : VOLUME
Attached to instances as secondary storage. Can be used as the root store to boot instances. Persistent R/W block storage. Manage the volume lifecycle: create, delete, and extend volumes; attach and detach volumes. Ability to create different volume types.
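A hedged sketch of those operations with the Juno-era CLI (instance and volume names are placeholders; extend requires the volume to be in the available state):
$ cinder extend data-vol 40                               # grow an existing volume to 40 GB
$ nova volume-attach MyInstance <volume-id> /dev/vdb      # attach it to a running instance as secondary storage
$ nova volume-detach MyInstance <volume-id>               # detach it again
$ cinder type-create "Gold"                               # define a volume type (example name) for later use with --volume-type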

29 Cinder Capabilities : Snapshots
A read-only, point-in-time copy of a volume. Create snapshots, delete snapshots, and make a volume out of a created snapshot. The cinder command-line interface also provides the tools for creating a volume backup. You can restore a volume from a backup as long as the backup's associated database information (or backup metadata) is intact in the Block Storage database. Run this command to create a backup of a volume:
$ cinder backup-create VOLUME
where VOLUME is the name or ID of the volume. This command also returns a backup ID; use this backup ID when restoring the volume:
$ cinder backup-restore BACKUP_ID
Alternatively, you can export and save the metadata of selected volume backups. Doing so precludes the need to back up the entire Block Storage database, which is useful if you need only a small subset of volumes to survive a catastrophic database failure. Because volume backups are dependent on the Block Storage database, you must also back up your Block Storage database regularly to ensure data recovery.
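A hedged example of the snapshot workflow (volume and snapshot names are placeholders; --force True allows snapshotting a volume that is attached and in use):
$ cinder snapshot-create --display-name snap1 --force True data-vol          # snapshot the volume, even while in use
$ cinder snapshot-list                                                       # list snapshots and their status
$ cinder create --snapshot-id <snapshot-id> --display-name restored-vol 20   # build a new volume from the snapshot (size >= snapshot size)
$ cinder snapshot-delete snap1                                               # remove the snapshot when no longer needed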

30 CINDER Capabilities - BACKUP
$ cinder backup-create "volume_id"
$ swift list
$ cinder backup-restore "BACKUP_ID"
$ cinder list
Backup operations are an admin task and are done via the CLI today. Backups go to Swift (object storage). Find the volume you want to back up, create the backup of the volume, verify the backup container, then restore the volume. Because volume backups are dependent on the Block Storage database, you must also back up your Block Storage database regularly to ensure data recovery. Export and import backup metadata: a volume backup can only be restored on the same Block Storage service, because restoring a volume from a backup requires metadata available in the database used by the Block Storage service. Note: for information about how to back up and restore a volume, see the section "Back up and restore volumes". You can, however, export the metadata of a volume backup. To do so, run this command as an OpenStack admin user (presumably after creating a volume backup):
$ cinder backup-export BACKUP_ID
where BACKUP_ID is the volume backup's ID. This command returns the backup's corresponding database information as encoded string metadata.

31 Consistency Groups
Today in Cinder, every operation happens at the volume level. Consistency Groups (CGs) enable data protection (snapshots and backups) and disaster recovery (remote replication). The consistency group function groups volumes of the same type into a CG so they can be snapshotted or backed up together, enables Cinder to leverage the volume replication features available in the storage back ends (drivers), and provides an orchestration layer above Cinder that understands which volumes should be grouped together. Workflow (a CLI sketch follows this description): 1. Create the CG first, then associate volumes with it at volume-create time; creating a volume directly within a CG is also possible and is what is being proposed. 2. Create a CG, specifying the volume type; only one volume type is allowed within a CG. 3. Create a snapshot of the CG: the Cinder API creates the cgsnapshot and individual snapshot entries in the database and sends the request to the Cinder volume node; the Cinder manager calls novaclient, which calls a new Nova admin API "quiesce" that uses the QEMU guest agent to freeze the guest filesystem; the Cinder manager calls the Cinder driver, which communicates with the back-end array to create a point-in-time consistent snapshot of the CG; the Cinder manager then calls the Nova admin API "unquiesce" to thaw the guest filesystem. A tool (nova-manage, cinder-manage, or similar) is needed to fix things up if Cinder goes down between quiesce and unquiesce; Nova will likely poll for updates and eventually time out and unfreeze the instance. 4. Create a backup of the CG: the Cinder backup API creates the cgbackup and individual backup entries in the database and sends the request to the Cinder volume node; the backup manager quiesces the guests via the Nova admin API, calls the Cinder driver and the backup driver, which communicate with the backup back end (Swift, Ceph, or other vendor-specific back ends) to create a point-in-time consistent backup of the CG, and then unquiesces the guests. If a CG is to be modified by adding or removing volumes, Cinder checks whether it already has cgsnapshots or cgbackups; if it does, the CG cannot be modified. Currently a volume can be backed up only when it is available; CG backups need support for backing up attached volumes, which requires driver work and may come later.
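A hedged sketch of how this looks from the CLI (assuming a Juno-level python-cinderclient and a back end such as VNX that advertises consistency group support; all names and IDs are placeholders):
$ cinder consisgroup-create --name db-cg HighPerf                  # create a CG bound to the "HighPerf" volume type
$ cinder create --consisgroup-id <cg-id> --volume-type HighPerf --display-name db-data 50
$ cinder create --consisgroup-id <cg-id> --volume-type HighPerf --display-name db-log 10
$ cinder cgsnapshot-create --name db-cg-snap <cg-id>               # point-in-time snapshot of every volume in the CG
$ cinder cgsnapshot-delete db-cg-snap
$ cinder consisgroup-delete --force <cg-id>                        # delete the group together with its volumes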

32 Consistency Group Caveats
Allow for snapshots of multiple volumes. Make sure the storage platform supports consistency groups (for example, VNX). Consistency groups can be set only via the CLI; there is no support in the portal yet. Certain operations are not permitted while a volume is in a consistency group: volume migration, volume retype, and volume deletion. A consistency group has to be deleted as a whole, with all its volumes, and the same applies to its volume snapshots.

33 High Availability High availability for Cinder
Deploy a multi-node OpenStack environment with HA. Cinder services can be installed on each controller to provide high availability in case of a controller reboot or loss. If a controller is lost, control plane functions are lost but the data plane keeps working. Controller-1 Controller-2 Database Message Q API Services Identity Image Blk Storage Dashboard

34 Projects and Quotas Admins have the capability to group tenants
Using projects, map the specific users who can access each project. Quotas can be set as operational limits and are enforced per tenant (project): the number of volumes, the number of volume gigabytes allowed per tenant, and the number of Block Storage snapshots allowed. Quotas are operational limits; for example, the number of gigabytes allowed per tenant can be controlled to ensure that a single tenant cannot consume all of the disk space. Quotas are currently enforced at the tenant (project) level rather than the user level.
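For example (a hedged sketch; the project ID and limits are placeholders), an admin can inspect and adjust Block Storage quotas per project:
$ cinder quota-show <project-id>                                         # current volume, gigabyte, and snapshot limits
$ cinder quota-update --volumes 20 --gigabytes 1000 --snapshots 40 <project-id>
$ cinder quota-usage <project-id>                                        # what the tenant has actually consumed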

35 MULTI-BACKEND SUPPORT
Configuration file: cinder.conf
enabled_backends = XtremIO, VNX
[XtremIO]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOIscsiDriver
volume_backend_name = xtremIO_40
[VNX]
storage_vnx_pool_name = Pool_01_SAS
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
volume_backend_name = vnx_41
Map the back ends to volume types:
$ cinder type-create "HighPerf"
$ cinder type-key "HighPerf" set volume_backend_name=xtremIO_40
$ cinder type-create "MedPerf"
$ cinder type-key "MedPerf" set volume_backend_name=vnx_41
(Diagram: cinder-volume routes requests for the High Perf and Med Perf types to the corresponding Cinder drivers.)

36 Logging - Cinder
Log files used by Block Storage: the log file of each Block Storage service is stored in the /var/log/cinder/ directory of its host. Most Block Storage errors are caused by incorrect volume configurations that result in volume creation failures. To resolve failures, review the logs: the cinder-api log (/var/log/cinder/api.log) and the cinder-volume log (/var/log/cinder/volume.log). Forward the logs to a syslog server by setting use_syslog=True and syslog_log_facility=LOG_LOCAL2. On the OpenStack controller and data plane nodes, local log files are pulled by rsyslog; Logstash and Elasticsearch index them and enable search and storage, with Kibana for visualization.
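A minimal sketch of the syslog forwarding piece (the log host address is a placeholder and rsyslog details vary by deployment):
# cinder.conf on each node that runs a Block Storage service
use_syslog = True
syslog_log_facility = LOG_LOCAL2
# forward the LOCAL2 facility from rsyslog to the central log host
$ echo 'local2.* @@loghost.example.com:514' | sudo tee /etc/rsyslog.d/10-cinder.conf
$ sudo systemctl restart rsyslog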

37 Monitoring - Ceilometer
Notification bus, volume notifications, agents, collectors, external systems. Volume stats: health, size, usage; thresholds for alarms. The data can be used by external systems for metering/chargeback and monitoring (for example, Logstash, Elasticsearch, Kibana, or Splunk).
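A hedged example of pulling those volume statistics and setting a threshold alarm with the Juno-era ceilometer CLI (meter names and values are illustrative and assume Ceilometer is collecting Cinder notifications):
$ ceilometer meter-list | grep volume            # discover the volume-related meters (e.g. volume, volume.size)
$ ceilometer statistics -m volume.size -p 3600   # hourly aggregates of provisioned volume capacity
$ ceilometer alarm-threshold-create --name vol-capacity-high --meter-name volume.size --statistic sum --comparison-operator gt --threshold 500 --period 600 --evaluation-periods 1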

38 Volume Type
$ cinder type-create "HighPerf"
$ cinder type-key "HighPerf" set storagetype:pool=Pool_02_SASFLASH volume_backend_name=vnx_41
$ cinder type-create "ThickVolume"
$ cinder type-create "ThinVolume"
$ cinder type-create "DeduplicatedVolume"
$ cinder type-create "CompressedVolume"
$ cinder type-key "ThickVolume" set storagetype:provisioning=thick
$ cinder type-key "ThinVolume" set storagetype:provisioning=thin
$ cinder type-key "DeduplicatedVolume" set storagetype:provisioning=deduplicated deduplication_support=True
$ cinder type-key "CompressedVolume" set storagetype:provisioning=compressed compression_support=True
If the user wants to create a volume on a certain storage pool, a volume type with an extra spec specifying the storage pool should be created first; the user can then use this volume type to create the volume.

39 Cinder Architecture Walk-Through
CINDER FUNCTIONALITY

40 Conceptual Architecture
Glance Cinder Neutron Nova Keystone Horizon Swift. Storage lifecycle: create volume, attach volume, snapshot volume; Heat orchestrates and directs services. The Dashboard ("Horizon") provides a web front end to the other OpenStack services. Compute ("Nova") stores and retrieves virtual disks ("images") and associated metadata in the Image service ("Glance"). Networking ("Neutron") provides virtual networking for Compute. Block Storage ("Cinder") provides storage volumes for Compute. The Image service ("Glance") can store the actual virtual disk files in the Object Store ("Swift"). All the services authenticate with Identity ("Keystone"). Ceilometer polls metering data from the services, Keystone provides authentication for Ceilometer, and volumes can be backed up into Swift.

41 Cinder Architectural Overview
Volume functions: create, extend, delete, attach, detach; volume types. Cinder Client, Cinder API, Cinder Scheduler, Cinder Volume, Cinder Backup, Cinder driver; REST, AMQP, SQL. Snapshot functions: create, delete, update, volume from snapshot. Cinder API: a WSGI app that authenticates and routes requests throughout the Block Storage service; it supports the OpenStack APIs. Cinder Scheduler: schedules and routes requests to the appropriate volume service; depending upon the configuration, this can be simple round-robin scheduling or more sophisticated through the use of the Filter Scheduler, which is the default and enables filters on things like capacity, availability zone, volume types, and custom filters. Cinder Volume: manages Block Storage devices, specifically the back-end devices themselves. Cinder Backup: provides a means to back up a Block Storage volume to OpenStack Object Storage. Backup functions: backup, restore, delete.

42 Cinder Architecture Building Blocks
Cinder API: a WSGI app that authenticates and routes requests throughout the Block Storage service; it supports the OpenStack APIs. Cinder Scheduler: schedules and routes requests to the appropriate volume service; depending upon the configuration, this can be simple round-robin scheduling or more sophisticated through the use of the Filter Scheduler, which is the default and enables filters on things like capacity, availability zone, volume types, and custom filters. Cinder Volume: manages Block Storage devices, specifically the back-end devices themselves. Cinder Backup: provides a means to back up a Block Storage volume to OpenStack Object Storage. Think of it as a toolkit to build private clouds.

43 Logical Flow
1. The Cinder client, in this case Horizon, makes a request to create a volume on the block storage. 2. The Cinder REST API processes the request and validates it, making sure that the correct credentials are provided. It then places the message on the Cinder message bus. 3. The Cinder volume process picks the request up from the message bus and sends it to the Cinder scheduler to determine which block-based storage to provision to, based on the capabilities asked for in the request. 4. The Cinder scheduler takes the message off the queue and generates a list of possible storage candidates, based on the capabilities required by the request, such as volume type, sizing, and so on. 5. The Cinder volume process reads the response from the scheduler, looks through the list, and invokes the correct storage driver, in this case the EMC VNX Cinder driver. 6. The VNX Cinder driver creates the requested storage volume, interacting with the storage subsystems; for the VNX this is a direct CLI call. 7. The Cinder volume driver gets the response back with connection information and puts it on the message queue. 8. The Cinder API process reads the response from the queue and responds to the client. 9. Finally, the Cinder client (in this case Horizon) gets the response informing it of the status of the creation request, i.e. the volume UUID.
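The same flow can be exercised end to end from the CLI (a minimal sketch; the volume type and name are example values from earlier slides), watching the status change as the request moves through the API, scheduler, and driver:
$ cinder create --volume-type HighPerf --display-name flow-demo 100   # request enters cinder-api and the message bus
$ cinder show flow-demo                                               # status moves from "creating" to "available" (or "error")
$ cinder list --display-name flow-demo                                # the UUID returned here is what Horizon reports back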

44 Authentication - Keystone
Provide credentials to authenticate to the system: admin and user credentials, plus the service credentials used by all services to talk to each other.
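A hedged sketch of what those credentials typically look like (the file name, password, and endpoint are placeholders; the Red Hat OpenStack Platform installer generates an equivalent keystonerc file):
# contents of keystonerc_admin (values are placeholders)
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:5000/v2.0/
$ source keystonerc_admin          # load the credentials into the shell
$ keystone token-get               # verify that authentication works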

45 Self Service Portal - Horizon

46 Volume Creation - Cinder
Group volumes based on performance and size; data volume or boot volume; the availability zone defaults to the Nova AZ if not specified.

47 Volume Types: volume type, size, availability zone

48 Managing the volumes Increase the volume size Delete the volumes
Create snapshots of volumes

49 Launching an instance- Nova
Initiate creation of an instance, based on flavor, instance count, image, and availability zone.
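A hedged CLI equivalent (flavor, image, and instance names are examples; counts use the Juno-era nova boot flags):
$ nova flavor-list                             # pick a flavor (CPU/RAM/disk profile)
$ nova image-list                              # pick an image to boot from
$ nova boot --flavor m1.small --image rhel7 --availability-zone nova --min-count 2 --max-count 2 web
$ nova list                                    # the new instances appear in BUILD and then ACTIVE status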

50 Attaching a volume to an instance

51 Snapshot: Create a Volume from a Snapshot

52 References
EMC Red Hat Reference Architecture Guide: canonical-openstack-ra.pdf
OpenStack Configuration/Design guide
Red Hat OpenStack Platform installer: US/Red_Hat_Enterprise_Linux_OpenStack_Platform/6/html/Installer_and_Foreman_Guide/


