Remove rbd write-cache docs
After some usage of the RBD persistent write-back
cache feature, we came to the conclusion that it is
not yet mature enough to work with OpenStack.

The issue we currently see is that when a VM with
write-cache enabled is rebooted, it does not come
back and instead fails with the error captured in
https://paste.opendev.org/raw/817888/

We do not recommend enabling this feature for now.
diff --git a/docs/storage.md b/docs/storage.md
index 73bade0..6dc3f1e 100644
--- a/docs/storage.md
+++ b/docs/storage.md
@@ -2,106 +2,6 @@
## Built-in Ceph cluster
-### RBD persistent write-back cache
-
-There are frequent cases of environments sustaining write-heavy loads which
-can overwhelm the underlying storage OSDs. It's also possible that
-non-enterprise 3D NAND flash with small memory buffers is in use, which can do
-fast upfront writes but is unable to sustain those writes.
-
-In those environments, it's possible that a lot of slow operations will start
-to accumulate inside the cluster, which will bubble up as performance issues
-inside the virtual machines. This can be highly impactful on workloads,
-increasing the latency of the VMs and dropping IOPS down to near zero.
-
-There are environments that have local storage on all of the RBD clients (in
-this case, the compute hosts running the VMs), which can potentially be backed
-by battery-backed hardware RAID with a local cache. This can significantly
-drive down latency and increase write speeds, since writes are persisted onto
-the local system.
-
-!!! warning
-
-    The only drawback is that if the client (the hypervisor in this case)
-    crashes, the data won't easily be recoverable. However, this is a small
-    risk to take in workloads where replacing the storage infrastructure is
-    not realistic and the stability of the systems is relatively high.
-
-#### Configuration
-
-##### Compute hosts
-
-There are a few steps that need to be done on the underlying operating system
-which runs the compute nodes. The steps below assume that a device `/dev/<dev>`
-is already set up according to the host's storage capabilities; a consolidated
-sketch of the full sequence follows the list.
-
-1. Create a filesystem for the cache:
-
- ```shell
- mkfs.ext4 /dev/<dev>
- ```
-
-2. Grab the `UUID` of the filesystem; this will be used to automatically
-   mount the volume on boot even if the device name changes:
-
- ```shell
- blkid /dev/<dev>
- ```
-
-3. Add a record to `/etc/fstab` to automatically mount the filesystem:
-
- ```shell
-    UUID=<UUID> /var/lib/libvirt/rbd-cache ext4 defaults 0 1
- ```
-
-4. Create a folder for the RBD persistent write-back cache:
-
- ```shell
- mkdir /var/lib/libvirt/rbd-cache
- ```
-
-5. Mount the cache folder and verify that it's mounted:
-
- ```shell
-    mount /var/lib/libvirt/rbd-cache
-    findmnt /var/lib/libvirt/rbd-cache
- ```
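-
-As a convenience, here is a sketch combining the steps above into a single
-sequence. This is illustrative only and assumes a hypothetical cache device
-`/dev/<dev>`; substitute the real device on your host:
-
-```shell
-# Hypothetical device; replace with the actual cache device on the host.
-dev=/dev/<dev>
-
-# Create the filesystem and grab its UUID.
-mkfs.ext4 "${dev}"
-uuid=$(blkid -s UUID -o value "${dev}")
-
-# Create the mount point, persist the mount in /etc/fstab, and mount it.
-mkdir /var/lib/libvirt/rbd-cache
-echo "UUID=${uuid} /var/lib/libvirt/rbd-cache ext4 defaults 0 1" >> /etc/fstab
-mount /var/lib/libvirt/rbd-cache
-
-# Verify the cache folder is mounted.
-findmnt /var/lib/libvirt/rbd-cache
-```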
-
-##### Atmosphere
-
-In order to configure the write-back cache, you will need to override the
-following values for the Ceph provisioners in your Ansible inventory:
-
-```yaml
-openstack_helm_infra_ceph_provisioners_values:
- conf:
- ceph:
- global:
- rbd_plugins: pwl_cache
- rbd_persistent_cache_mode: ssd
- rbd_persistent_cache_path: /var/lib/libvirt/rbd-cache
- rbd_persistent_cache_size: 30G
-```
-
-The above values will enable the persistent write-back cache for all RBD
-volumes with a 30 gigabyte cache size. The cache will be stored in the folder
-`/var/lib/libvirt/rbd-cache`, which is mounted on the host's filesystem.
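-
-To confirm that the options reached the compute hosts, one quick check is to
-inspect the rendered Ceph client configuration. This assumes the chart writes
-it to the conventional `/etc/ceph/ceph.conf` path:
-
-```shell
-# Assumption: the provisioner chart renders the client configuration here.
-grep -E 'rbd_(plugins|persistent_cache)' /etc/ceph/ceph.conf
-```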
-
-#### Verification
-
-After the Atmosphere configuration is applied and you create a virtual machine
-backed by Ceph, you should be able to see a file for the write-back cache
-inside `/var/lib/libvirt/rbd-cache`:
-
-```console
-# ls -l /var/lib/libvirt/rbd-cache/
-total 344
--rw-r--r-- 1 42424 syslog 29999759360 Dec 1 17:37 rbd-pwl.cinder.volumes.a8eba89efc83.pool
-```
-
-!!! note
-
-    For existing virtual machines to take advantage of the write-back cache,
-    you will need to hard reboot the virtual machine (or safely shut it down
-    and start it up).
-
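-A hard reboot can be issued with the OpenStack client, for example:
-
-```shell
-openstack server reboot --hard <server>
-```
-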
## External storage
When using an external storage platform, it's important to disable Ceph