ONE DOES NOT SIMPLY CREATE MULTIPLE BACKENDS IN CINDER
Walter Boring IV - HPE (hemna)
Jay Bryant - IBM (jungleboyj)
Sean McGinnis - Dell (smcginnis)
Kendall Nelson - IBM (diablo_rojo)

YOU HAVE TO LISTEN TO OUR PRESENTATION FIRST
Agenda
- What is Cinder?
- How Does Cinder Fit?
- Attaching a Volume: Examples
- Cinder Architecture
- Cinder Services
- cinder.conf
- Cinder Drivers
- Multiple Backends!!
- Volume Types
- Retype vs. Migration
- Future Plans
What is Cinder?
- Created in the Folsom release (~4 years ago), spun off from Nova volume
- Cinder manages block storage
  - Different from file-level storage - that's Manila
  - Different from object storage - that's Swift
- Focuses on:
  - Attaching volumes to VM instances
  - Booting from volumes
- Volumes have life cycles independent of VM instances

Speaker notes: Cinder was spun off from Nova Volume because it was becoming too big to stay within Nova. We now have a very active community with many active developers from many companies. Contrast with Swift and Manila: Swift is where you do backups - data you don't need quickly. Manila is more like Samba, a shared file system. Cinder is for persistent data that you need quickly; it is OpenStack's software-defined storage solution.
How Does Cinder Fit?
- Cinder provides all the commands and APIs needed to interact with vendors' storage backends
- Exposes vendors' storage hardware to the cloud
- Provides persistent storage to VMs
- Enables users to manage their storage: create, snapshot, backup, attach/detach

Speaker notes: This slide is pretty self-explanatory. Worth adding that Cinder can manage multiple backends from a combination of vendors, which makes it a control plane for combining all of your storage together.
Attaching a Volume, Ex. 1
(Diagram: a Nova compute node running KVM with an iSCSI initiator; the VM instance sees the volume as /dev/vda; Cinder exports an LVM volume as an iSCSI target. Legend distinguishes the persistent-volume control path from the data path.)
Note that iSCSI is just an example - several additional protocols are supported (e.g., FC, NFS).

Speaker notes: I think you have heard me talk through this before. Just explain the parts on the compute node and what happens on the control node.
Attaching a Volume, Ex. 2
(Diagram: same as Ex. 1, but the iSCSI target lives on an external storage controller rather than on the Cinder node.)
Note that iSCSI is just an example - several additional protocols are supported (e.g., FC, NFS).

Speaker notes: Same thing here, but using a storage appliance like a Storwize or XIV box, where the cinder-volume service talks out to the storage to manage it.
Cinder Architecture
(Diagram: a client talks REST to cinder-api; cinder-api, cinder-scheduler, cinder-backup, and one or more cinder-volume services, each with a driver, share a SQL DB; the cinder-volume drivers talk to the storage.)

Speaker notes: Talk briefly about the different components. Mention periodic tasks. The client is the command-line interface. The box is drawn around the components in the middle because those all generally run on the control node for the cloud.
Cinder Services: API and Scheduler
- API: REST interface to Cinder; generally runs on the control node
- Scheduler: takes requests from the API service and works with the volume services to satisfy them

Speaker notes: Both of these are processes that can be run active/active in an HA environment.
Cinder Services (cont.): Volume and Backup
- Volume: interacts with vendor storage backends (create, manage, export); can run on the control node or on some other node
- Backup: interface for backing volumes up to storage like Swift, TSM, etc.

Speaker notes: Can explain why you might run the volume service on a separate node from the control node. Note that a process per backend is created for the volume service. If you are using the default config, the volume service uses LVM local to the node where it is running. Talk about the challenges of all the data that can need to go through the volume service.
Cinder Client / OpenStack Client
- python-cinderclient is the command-line interface to Cinder: 'cinder <command>'
- Uses REST to communicate with the cinder-api service
- Generally runs on the control node
- OpenStack Client: all projects are moving toward using the OpenStack Client and deprecating the individual project CLIs

Speaker notes: KENDALL STARTS HERE. We are working on achieving parity with the OpenStack Client, since it still lacks many of the commands central to Cinder, e.g. consistency groups.
Cinder Client / API
- Volume create/delete/list/show
- Create from image/snapshot/volume
- Attach/detach (called from Nova)
- Snapshot create/delete/list/show
- Backup create/delete/list/show/restore/import/export
- Volume types create/update/delete
- QoS
- Quotas

Speaker notes: Users can create, delete, list, and show volumes, snapshots, and backups; backups can also be restored, imported, and exported. Volumes can be attached and detached through Nova. Volume types can be created, updated, and deleted (more on that later). Cinder supports basic quality-of-service and quota commands for volumes, snapshots, and backups; for quotas, the user can control how many volumes/snapshots/backups can be created.
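As a quick illustration of the commands above, a typical volume lifecycle with python-cinderclient looks roughly like this (volume, snapshot, and backup names are made up; this assumes a running cloud with credentials sourced):

```console
$ cinder create 1 --name vol1              # create a 1 GB volume
$ cinder list                              # list volumes
$ cinder show vol1                         # show details of one volume
$ cinder snapshot-create vol1 --name snap1 # point-in-time snapshot
$ cinder backup-create vol1 --name bak1    # backup (e.g., to Swift)
$ cinder delete vol1                       # delete the volume
```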
cinder.conf
- Used by all of Cinder's services
- Multi-backend support is enabled by a separate section for each driver
- enabled_backends sets which drivers you actually want running on your system, e.g.:
  enabled_backends = lvmdriver-1,lvmdriver-2,lvmdriver-3

Speaker notes: cinder.conf gets used by everything - all of the Cinder services. Multi-backend is enabled here, through a different section for each driver. enabled_backends sets which drivers you actually want running for your system (multiple LVMs, etc.); a driver won't work if it isn't in this list. Users can enable debug here for additional logging, and this is where the iSCSI helpers are set. Any changes here require a restart of the Cinder services to take effect; we are working on dynamic reconfiguration so that this isn't necessary. Things set in the DEFAULT section will override driver settings. Important to note that a backend will not work if it isn't in the enabled_backends list.
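To make the enabled_backends mechanism concrete, here is a small sketch (the backend names are made up, and Python's configparser stands in for Cinder's real option handling, which uses oslo.config): each name in the list maps to its own section, and a section that is not named in the list is simply ignored.

```python
import configparser

# A stripped-down, hypothetical cinder.conf.
CONF = """
[DEFAULT]
enabled_backends = lvmdriver-1,lvmdriver-2

[lvmdriver-1]
volume_backend_name = LVM_iSCSI

[lvmdriver-2]
volume_backend_name = LVM_iSCSI_2

[lvmdriver-3]
volume_backend_name = LVM_iSCSI_3
"""

parser = configparser.ConfigParser()
parser.read_string(CONF)

# Only the backends named here get a cinder-volume process.
enabled = [b.strip() for b in parser["DEFAULT"]["enabled_backends"].split(",")]
print(enabled)                   # ['lvmdriver-1', 'lvmdriver-2']
# lvmdriver-3 has a section in the file but is NOT enabled:
print("lvmdriver-3" in enabled)  # False
```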
cinder.conf (cont.)
- Set debug=True and verbose=True to get additional logging output
- By default Cinder's logs go to /var/log/cinder (when using devstack: /opt/stack/logs)
- When using iSCSI, make sure the right iscsi_helper is set: tgtadm for Ubuntu, lioadm for RHEL
- Any changes made to the cinder.conf file require the Cinder services to be restarted before they can take effect
Cinder Drivers (drivers marked "Reference" are the reference implementations for the architecture):
Block Device Driver (local), Blockbridge (iSCSI), CloudByte (iSCSI), Coho (NFS), Datera (iSCSI), Dell Equallogic (iSCSI), Dell Storage Center (iSCSI/FC), Disco (disco), DotHill (iSCSI/FC), DRBD (DRBD/iSCSI), EMC VMAX (iSCSI/FC), EMC VNX (iSCSI/FC), EMC XtremIO (iSCSI/FC), EMC ScaleIO (scaleio), Fujitsu ETERNUS (iSCSI/FC), GlusterFS (GlusterFS), HGST (NFS), Hitachi HBSD (iSCSI/FC), Hitachi HNAS (iSCSI/NFS), HPE 3PAR (iSCSI/FC), HPE LeftHand (iSCSI), HPE MSA (iSCSI/FC), HPE XP (FC), Huawei (iSCSI/FC), IBM DS8000 (FC), IBM Flashsystem (iSCSI/FC), IBM GPFS (GPFS), IBM Storwize SVC (iSCSI/FC), IBM XIV (iSCSI/FC), Infortrend (iSCSI/FC), Lenovo (iSCSI/FC), LVM (iSCSI) - Reference, NetApp ONTAP (iSCSI/NFS/FC), NetApp E-Series (iSCSI/FC), Nexenta (iSCSI/NFS), NFS (NFS) - Reference, Nimble Storage (iSCSI), Oracle ZFSSA (iSCSI/NFS), ProphetStor (iSCSI/FC), Pure Storage (iSCSI/FC), Quobyte (quobyte), RBD (Ceph) - Reference, Scality SOFS (scality), Sheepdog (sheepdog), SMBFS (SMB), SolidFire (iSCSI), Tegile (iSCSI/FC), Tintri (NFS), Violin (FC), Virtuozzo Storage (NFS), VMware (VMDK), Windows (SMB), X-IO Technologies (iSCSI/FC)

Speaker notes: We have somewhere around 70 drivers now, from a variety of companies, as of the end of Mitaka. The majority of the drivers are based off the reference drivers. Each driver supports one or more protocols; the two most common are Fibre Channel and iSCSI. LVM is the original reference; NFS is the reference for shared filesystems; RBD is the reference for distributed storage.
Ex. cinder.conf with Multiple Backends

Speaker notes: This is a stripped-down conf file; a default conf usually has a lot more in it, but the most important thing for enabling multiple backends is enabled_backends. You can see two different drivers set up here: the default LVM driver and a SAN-based driver, IBM's Storwize.
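The conf screenshot from this slide is not reproduced here; a minimal illustrative cinder.conf along the lines described might look like the following (backend names, IPs, and credentials are made up, and the exact driver class paths vary by release):

```ini
[DEFAULT]
enabled_backends = lvmdriver-1,storwize-1

[lvmdriver-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_helper = tgtadm
volume_backend_name = LVM_iSCSI

[storwize-1]
volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver
san_ip = 192.168.0.10
san_login = admin
san_password = secret
volume_backend_name = storwize
```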
Enabling Multiple Backends
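After editing enabled_backends, the Cinder services must be restarted; you can then confirm that one cinder-volume service per backend has registered, each listed as host@backend. A rough, illustrative session (hostname, backend names, and trimmed columns are made up; restart via whatever init system or devstack setup you use):

```console
$ sudo service cinder-volume restart   # or restart via your init system
$ cinder service-list
...
| cinder-volume | controller@lvmdriver-1 | nova | enabled | up | ... |
| cinder-volume | controller@storwize-1  | nova | enabled | up | ... |
```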
Volume Types
- Only admins can create volume types
- Control users' access to different storage
- A type can be associated with a particular backend
- A type can have a list of desired capabilities
- Users specify the volume type when they create a volume
Volume Types (cont.)
In Horizon you can:
- Create volume types
- List volume types
- Set keys for a type
- Show the keys set for a type
The cinder client exposes the same operations on the command line.
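The CLI equivalents might look like this (the type name "gold" and the backend name are illustrative):

```console
$ cinder type-create gold
$ cinder type-key gold set volume_backend_name=storwize
$ cinder type-list
$ cinder extra-specs-list
$ cinder create 1 --name vol1 --volume-type gold
```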
Volume Types (cont.)
(The following slides were screenshots of these volume-type commands in action; the images are not reproduced here.)
Retype Vs. Migration: CONFUSION
- Retype is used to change the settings of a volume on the same backend, e.g., Storwize SCSI disks to SSD disks on the same array; some retypes can happen without the data moving
- Migration is used to move a volume between two different backends, e.g., from LVM to Storwize
- Retypes may require a migration

Speaker notes: Sean here to the end.
Retype
Change volume types:
  cinder create 1 --name vol1 --volume-type dellsc1-nightly
  cinder retype vol1 dellsc1-hourly

Type dellsc1-nightly: extra_specs {volume_backend_name: sn12345, storagetype:replayprofile: nightly}
Type dellsc1-hourly:  extra_specs {volume_backend_name: sn12345, storagetype:replayprofile: hourly}
(Same volume_backend_name, so only the extra spec changes - no data movement needed.)
Retype with Migration
Change volume types:
  cinder create 1 --name vol1 --volume-type dellsc1-nightly
  cinder retype vol1 dellsc2-hourly        <- FAILS!

Type dellsc1-nightly: extra_specs {volume_backend_name: sn12345, storagetype:replayprofile: nightly}
Type dellsc2-hourly:  extra_specs {volume_backend_name: sn54321, storagetype:replayprofile: hourly}
(Different volume_backend_name, so a plain retype fails.)
Retype with Migration
Change volume types:
  cinder create 1 --name vol1 --volume-type dellsc1-nightly
  cinder retype vol1 dellsc2-hourly --migration-policy on-demand

Type dellsc1-nightly: extra_specs {volume_backend_name: sn12345, storagetype:replayprofile: nightly}
Type dellsc2-hourly:  extra_specs {volume_backend_name: sn54321, storagetype:replayprofile: hourly}
(With --migration-policy on-demand, the retype is allowed to migrate the data to the new backend.)
Migration
Migrating a volume to a new host:
  cinder create 1 --name vol1 --volume-type lvm1
  cinder migrate vol1 <host>

(Diagram: two volume-service hosts, Cinder1 with an LVM1 backend and Cinder2 with a VendorX backend exposing Pool1 and Pool2; the volume is moved from Cinder1 to Cinder2.)
Retype Vs. Migration (cont.)
(The following slides were screenshots and diagrams of the retype and migration flows; the images are not reproduced here.)
The Future!
DISCLAIMER: These are things currently being worked on, but they may not necessarily make it into Newton.
- Active/Active High Availability: two cinder-volume services managing the same backend storage
- Multi-attach: attach the same volume to more than one host/instance
- Scalable backup
- Containerizing Cinder services
- Capability reporting
Thank You!