docs: add settings for purestorage (#635)
Reviewed-by: Mohammed Naser <mnaser@vexxhost.com>
diff --git a/doc/source/deploy/cinder.rst b/doc/source/deploy/cinder.rst
index 41a0bc2..f09fa69 100644
--- a/doc/source/deploy/cinder.rst
+++ b/doc/source/deploy/cinder.rst
@@ -7,6 +7,11 @@
with different backend technologies, each of which might require specific
configuration steps.
+Cinder supports multiple backends, all of which are configured under
+``cinder_helm_values.conf.backends``. The documentation below explains how to
+configure a specific backend, but you can enable several at once by adding
+additional entries to the ``cinder_helm_values.conf.backends`` dictionary.
+
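+For example, two backends can live side by side as sibling keys of that
+dictionary, with each one also listed in ``enabled_backends``. The backend
+names below (``rbd1`` and ``purestorage``) are illustrative placeholders:
+
+.. code-block:: yaml
+
+   cinder_helm_values:
+     conf:
+       cinder:
+         DEFAULT:
+           enabled_backends: rbd1,purestorage
+       backends:
+         rbd1:
+           volume_driver: cinder.volume.drivers.rbd.RBDDriver
+         purestorage:
+           volume_driver: cinder.volume.drivers.pure.PureISCSIDriver
+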
********
Ceph RBD
********
@@ -88,6 +93,119 @@
not necessarily to use iSCSI as the storage protocol. In this case, the
PowerStore driver will use the storage protocol specified inside Cinder,
+************
+Pure Storage
+************
+
+Pure maintains a native Cinder driver that can be used to integrate with the
+Pure Storage FlashArray. To enable the Pure Storage driver for Cinder, you need
+to provide the necessary configuration settings in your Ansible inventory.
+
+In order to use Pure Storage, you'll need to have the following information
+available:
+
+Volume Driver (``volume_driver``)
+ Use ``cinder.volume.drivers.pure.PureISCSIDriver`` for iSCSI,
+ ``cinder.volume.drivers.pure.PureFCDriver`` for Fibre Channel, or
+ ``cinder.volume.drivers.pure.PureNVMEDriver`` for NVMe connectivity.
+
+ If using the NVMe driver, you must also set ``pure_nvme_transport``; the
+ supported values are ``roce`` and ``tcp``.
+
+Pure API Endpoint (``san_ip``)
+ The IP address of the Pure Storage array’s management interface or a domain name
+ that resolves to that IP address.
+
+Pure API Token (``pure_api_token``)
+ A token generated by the Pure Storage array that allows the Cinder driver to
+ authenticate with the array.
+
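+As a concrete sketch, an NVMe/TCP backend would combine the driver and
+transport settings as follows (the backend name and ``san_ip`` value are
+placeholders):
+
+.. code-block:: yaml
+
+   cinder_helm_values:
+     conf:
+       backends:
+         purestorage:
+           volume_backend_name: purestorage
+           volume_driver: cinder.volume.drivers.pure.PureNVMEDriver
+           pure_nvme_transport: tcp
+           san_ip: 192.0.2.10
+           pure_api_token: <FILL IN>
+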
+You can set any other configuration options specific to your environment; see
+the `Cinder Pure Storage documentation <https://docs.openstack.org/cinder/latest/configuration/block-storage/drivers/pure-storage-driver.html>`_
+for the full list of supported settings.
+
+.. code-block:: yaml
+
+ cinder_helm_values:
+ storage: pure
+ pod:
+ useHostNetwork:
+ volume: true
+ backup: true
+ security_context:
+ cinder_volume:
+ container:
+ cinder_volume:
+ readOnlyRootFilesystem: true
+ privileged: true
+ cinder_backup:
+ container:
+ cinder_backup:
+ privileged: true
+ dependencies:
+ static:
+ api:
+ jobs:
+ - cinder-db-sync
+ - cinder-ks-user
+ - cinder-ks-endpoints
+ - cinder-rabbit-init
+ backup:
+ jobs:
+ - cinder-db-sync
+ - cinder-ks-user
+ - cinder-ks-endpoints
+ - cinder-rabbit-init
+ scheduler:
+ jobs:
+ - cinder-db-sync
+ - cinder-ks-user
+ - cinder-ks-endpoints
+ - cinder-rabbit-init
+ volume:
+ jobs:
+ - cinder-db-sync
+ - cinder-ks-user
+ - cinder-ks-endpoints
+ - cinder-rabbit-init
+ volume_usage_audit:
+ jobs:
+ - cinder-db-sync
+ - cinder-ks-user
+ - cinder-ks-endpoints
+ - cinder-rabbit-init
+ conf:
+ enable_iscsi: true
+ cinder:
+ DEFAULT:
+ default_volume_type: purestorage
+ enabled_backends: purestorage
+ backends:
+ rbd1: null
+ purestorage:
+ volume_backend_name: purestorage
+ volume_driver: <FILL IN>
+ san_ip: <FILL IN>
+ pure_api_token: <FILL IN>
+ # pure_nvme_transport:
+ use_multipath_for_image_xfer: true
+ pure_eradicate_on_delete: true
+ manifests:
+ deployment_backup: false
+ job_backup_storage_init: false
+ job_storage_init: false
+
+ nova_helm_values:
+ conf:
+ enable_iscsi: true
+
+.. admonition:: About ``conf.enable_iscsi``
+ :class: info
+
+ The ``enable_iscsi`` setting is required to allow the Nova instances to
+ expose volumes by making the ``/dev`` devices available to the containers,
+ not necessarily to use iSCSI as the storage protocol. In this case, the
+ Cinder instances will use the volume driver specified in ``volume_driver``.
+
********
StorPool
********