docs: clean-up rbd persistent cache
diff --git a/docs/storage.md b/docs/storage.md
index 07d5f9e..73bade0 100644
--- a/docs/storage.md
+++ b/docs/storage.md
@@ -1,49 +1,74 @@
# Storage
-## Enable RBD persistent write-back cache
+## Built-in Ceph cluster
-There are frequently cases of environments that sustain write-heavy loads which can overwhelm the underlying storage OSDs. It's also possible that non-enterprise 3D NAND flash is in use with small memory buffers which can do fast upfront writes but unable to sustain those writes.
+### RBD persistent write-back cache
-In those environments, it's possible that a lot of slow operations will start to cumulate inside of the cluster, which will bubble up as performance issues inside the virtual machines. This can be highly impactful on workloads, increase the latency of the VMs and drop IOPs down to near zero.
+Environments that sustain write-heavy loads can overwhelm the underlying
+storage OSDs. It's also possible that non-enterprise 3D NAND flash with small
+memory buffers is in use, which can handle fast upfront writes but is unable
+to sustain them.
-There are environments that have local storage on all of the RBD clients (in this case, VMs) which can potentially be backed by battery backed hardware RAID with a local cache. This can significantly drive down the latency and increase write speeds since they are persisted onto the local system. The only fallback is that if the client crashes (or hypervisor in this case), the data won't easily be recoverable, however, this is a small risk to take in workloads replacing the storage infrastructure is not realistic and the stability of systems is relatively high.
+In those environments, slow operations can start to accumulate inside the
+cluster, which bubbles up as performance issues inside the virtual machines.
+This can be highly impactful on workloads, increasing the latency of the VMs
+and dropping IOPS down to near zero.
-### Configuring compute hosts
+There are environments that have local storage on all of the RBD clients (in
+this case, the VMs), which can potentially be backed by battery-backed hardware
+RAID with a local cache. This can significantly drive down latency and increase
+write speeds since writes are persisted onto the local system.
+
+!!! warning
+
+    The only drawback is that if the client crashes (or the hypervisor in this
+    case), the data won't easily be recoverable. However, this is a small risk
+    to take in workloads where replacing the storage infrastructure is not
+    realistic and the stability of the systems is relatively high.
+
+#### Configuration
+
+##### Compute hosts
-There are a few steps that needs to be done on the underlying operating system
-which runs the compute nodes. The steps bellow assumes that a device `/deb/<dev>`
-is already setup accordingly to the host's storage capabilities.
+There are a few steps that need to be done on the underlying operating system
+which runs the compute nodes. The steps below assume that a device `/dev/<dev>`
+is already set up according to the host's storage capabilities.
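+
+If you still need to identify which device to dedicate to the cache, `lsblk`
+can help (a suggested check only, not a required step):
+
+```shell
+# list block devices to pick the one that will back the cache
+lsblk
+```
+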
-1. Create filesystem for cache:
+1. Create filesystem for the cache:
- ```yaml
- mkfs.ext4 /dev/<dev>
+ ```shell
+ mkfs.ext4 /dev/<dev>
```
-2. Get fs uuid
- ```yaml
- blkid
+2. Grab the `UUID` for the filesystem; this will be used to automatically mount
+   the volume on boot even if the device name changes:
+
+ ```shell
+ blkid /dev/<dev>
```
-3. Add record to mount filesystem in `/etc/fstab`:
- ```yaml
- UUID="XXXXX" /var/lib/libvirt/rbd-cache ext4 defaults 0 1
+3. Add a record to `/etc/fstab` to automatically mount the filesystem on boot:
+
+ ```shell
+ UUID="<UUID>" /var/lib/libvirt/rbd-cache ext4 defaults 0 1
```
-4. Create folder for cache
- ```yaml
- mkdir /var/lib/libvirt/rbd-cache
+4. Create a folder for the RBD persistent write-back cache:
+
+ ```shell
+ mkdir /var/lib/libvirt/rbd-cache
```
-5. Mount the cache
- ```yaml
- mount /var/lib/libvirt/rbd-cache
- ```
+5. Mount the cache folder and verify that it's mounted:
-### Enabling rbd write-back cache
+ ```shell
+ mount /var/lib/libvirt/rbd-cache
+ ```
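+
+   One way to confirm the mount is active is with `findmnt` (any equivalent
+   check, such as `df -h`, works just as well):
+
+   ```shell
+   # confirm that the cache filesystem is mounted where libvirt expects it
+   findmnt /var/lib/libvirt/rbd-cache
+   ```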
-In order to be able to confirgure write-back cache, you need to override the following
-values for ceph provisioners to your Ansible inventory:
+##### Atmosphere
+
+In order to configure the write-back cache, you will need to override the
+following values for the Ceph provisioners in your Ansible inventory:
```yaml
openstack_helm_infra_ceph_provisioners_values:
@@ -56,22 +81,26 @@
rbd_persistent_cache_size: 30G
```
-### Verifying
+The above values will enable the persistent write-back cache for all RBD
+volumes with a 30 gigabyte cache size. The cache will be stored in
+`/var/lib/libvirt/rbd-cache`, the folder mounted on the compute host in the
+steps above.
-Once the configurations are applied to the ceph provisioners, once you create a virtual machine backed by ceph, you should be able to see a file for cache inside `/var/lib/libvirt/rbd-cache`:
+#### Verification
-```yaml
-/var/lib/libvirt# ls -l rbd-cache/
+After the Atmosphere configuration is applied and you create a virtual machine
+backed by Ceph, you should be able to see a file for the write-back cache
+inside `/var/lib/libvirt/rbd-cache`:
+
+```console
+# ls -l /var/lib/libvirt/rbd-cache/
total 344
-drwxrwxr-x 2 42424 syslog 16384 Nov 30 20:38 lost+found
-rw-r--r-- 1 42424 syslog 29999759360 Dec 1 17:37 rbd-pwl.cinder.volumes.a8eba89efc83.pool
```
- > **Note**
->
-> For existing vms to take advantage of write-back cache, you will need
-to hard reboot them.
+!!! note
+
+    For existing virtual machines to take advantage of the write-back cache,
+    you will need to hard reboot the virtual machine (or safely shut it down
+    and start it up again).
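+
+    As a rough example (the server name below is a placeholder), a hard reboot
+    can be issued through the OpenStack CLI:
+
+    ```shell
+    # hard reboot so the instance starts using the write-back cache
+    openstack server reboot --hard <server>
+    ```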
## External storage
@@ -89,7 +118,7 @@
that they are not inside a host group or otherwise individual attachments will
not work.
-### CSI
+#### CSI
-You'll need to enable the Kubernetes cluster to use the PowerStore driver by
-using adding the following YAML to your Ansible inventory:
+You'll need to enable the Kubernetes cluster to use the PowerStore driver by
+adding the following YAML to your Ansible inventory:
@@ -107,7 +136,7 @@
blockProtocol: <FILL IN> # FC or iSCSI
```
-### Glance
+#### Glance
Since Glance does not have a native PowerStore driver, you'll need to enable
the use of the Cinder driver by adding the following to your Ansible inventory:
@@ -129,7 +158,7 @@
avoid any issues with the PowerStore driver having to constantly download and
upload the images.
-### Cinder
+#### Cinder
You can enable the native PowerStore driver for Cinder with the following
configuration inside your Ansible inventory:
@@ -187,7 +216,7 @@
backup service. In the future, we'll update this sample configuration to use
the Cinder backup service.
-### Nova
+#### Nova
You can enable the native PowerStore driver for Cinder with the following
configuration inside your Ansible inventory:
@@ -198,7 +227,7 @@
enable_iscsi: true
```
-> **Note**
->
-> The PowerStore driver will use the storage protocol specified inside Cinder,
-> even if the above mentions `iscsi`.
+!!! note
+
+ The PowerStore driver will use the storage protocol specified inside Cinder,
+ even if the above mentions `iscsi`.