##########
Ceph Guide
##########

***************************************
Placement Groups (PGs) and Auto-scaling
***************************************

In Ceph, Placement Groups (PGs) are an important abstraction that helps
distribute objects across the cluster. Each PG can be thought of as a logical
collection of objects, and Ceph uses these PGs to assign data to the
appropriate OSDs (Object Storage Daemons). Proper management of PGs is
critical to the health and performance of your Ceph cluster.

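For example, you can see how an individual object maps to a PG, and which OSDs
that PG lives on, with the ``ceph osd map`` command (the pool and object names
below are placeholders):

.. code-block:: console

   $ ceph osd map <pool_name> <object_name>
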
The number of PGs must be carefully configured based on the size and layout of
your cluster: performance can be negatively impacted if you have too many or
too few placement groups.

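One way to review the PG count that each pool is currently using is to list
the pools in detail:

.. code-block:: console

   $ ceph osd pool ls detail
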
To learn more about placement groups and their role in Ceph, refer to the
`placement groups <https://docs.ceph.com/en/latest/rados/operations/placement-groups/>`_
documentation from the Ceph project.

The primary recommendations for a Ceph cluster are the following:

- Enable placement group auto-scaling
- Enable the Ceph balancer module to ensure data is evenly distributed across OSDs

The following sections provide guidance on how to enable these features in your
Ceph cluster.

Enabling PG Auto-scaling
========================

Ceph provides a built-in placement group auto-scaling module, which can
dynamically adjust the number of PGs based on cluster utilization. This is
particularly useful as it reduces the need for manual intervention when
scaling your cluster up or down.

To enable PG auto-scaling, execute the following command in your Ceph cluster:

.. code-block:: console

   $ ceph mgr module enable pg_autoscaler

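Once the autoscaler module is enabled, you can review the PG counts it
recommends (or has already applied) for each pool:

.. code-block:: console

   $ ceph osd pool autoscale-status
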
Auto-scaling is controlled on a per-pool basis, and you can optionally tell
the autoscaler the target size or the percentage of the cluster that you
expect a pool to occupy. For example, to enable auto-scaling for a specific
pool:

.. code-block:: console

   $ ceph osd pool set <pool_name> pg_autoscale_mode on

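If you have a rough idea of how large a pool will eventually become, you can
provide that hint by setting a target ratio or a target size on the pool (set
one or the other; the pool name and values below are only examples):

.. code-block:: console

   $ ceph osd pool set <pool_name> target_size_ratio 0.2
   $ ceph osd pool set <pool_name> target_size_bytes 100T
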
For more detailed instructions, refer to the `Autoscaling Placement Groups <https://docs.ceph.com/en/reef/rados/operations/placement-groups/#autoscaling-placement-groups>`_
documentation from the Ceph project.

Managing the Ceph Balancer
==========================

The Ceph balancer module redistributes data across OSDs in order to maintain
an even distribution of data in the cluster. This is especially important as
the cluster grows, new OSDs are added, or during recovery operations.

To enable the balancer, run:

.. code-block:: console

   $ ceph balancer on

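The balancer supports more than one optimization mode (for example ``upmap``
and ``crush-compat``). If you want to select a mode explicitly rather than
relying on the default, you can set it with:

.. code-block:: console

   $ ceph balancer mode upmap
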
You can check the current balancer status using:

.. code-block:: console

   $ ceph balancer status

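To see the effect of balancing, ``ceph osd df`` shows the utilization and PG
count of each OSD; these values should become more even over time as the
balancer does its work:

.. code-block:: console

   $ ceph osd df
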
For a more in-depth look at how the balancer works and how to configure it,
refer to the `Balancer module <https://docs.ceph.com/en/latest/rados/operations/balancer/>`_
documentation from the Ceph project.