##########
Ceph Guide
##########

***************************************
Placement Groups (PGs) and Auto-scaling
***************************************

In Ceph, Placement Groups (PGs) are an important abstraction that helps
distribute objects across the cluster. Each PG can be thought of as a logical
collection of objects, and Ceph uses these PGs to assign data to the
appropriate OSDs (Object Storage Daemons). Proper management of PGs is
critical to the health and performance of your Ceph cluster.

The number of PGs must be carefully configured based on the size and layout of
your cluster: cluster performance can be negatively impacted if there are too
many or too few placement groups.

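Each pool has a ``pg_num`` value that you can inspect, and change manually if
needed, although most clusters can simply rely on the autoscaler described
below. A quick sketch, where ``<pool_name>`` and ``128`` are placeholder
values to adapt to your environment:

.. code-block:: console

   $ ceph osd pool get <pool_name> pg_num      # show the current number of PGs for a pool
   $ ceph osd pool set <pool_name> pg_num 128  # manually set the PG count (powers of two are typical)
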
To learn more about placement groups and their role in Ceph, refer to the
`placement groups <https://docs.ceph.com/en/latest/rados/operations/placement-groups/>`_
documentation from the Ceph project.

The primary recommendations for a Ceph cluster are the following:

- Enable placement group auto-scaling
- Enable the Ceph balancer module to ensure data is evenly distributed across OSDs

The following sections provide guidance on how to enable these features in your
Ceph cluster.

Enabling PG Auto-scaling
========================

Ceph provides a built-in placement group auto-scaling module, which can
dynamically adjust the number of PGs based on cluster utilization. This is
particularly useful as it reduces the need for manual intervention when
scaling your cluster up or down.

To enable PG auto-scaling, execute the following command in your Ceph cluster:

.. code-block:: console

   $ ceph mgr module enable pg_autoscaler

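You can confirm the module is active and, optionally, make auto-scaling the
default for newly created pools. A minimal sketch, assuming a recent Ceph
release where the ``osd_pool_default_pg_autoscale_mode`` option is available:

.. code-block:: console

   $ ceph mgr module ls                                            # verify pg_autoscaler appears as enabled
   $ ceph config set global osd_pool_default_pg_autoscale_mode on  # default mode for new pools
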
Auto-scaling can also be managed on a per-pool basis, either by toggling the
autoscale mode for an individual pool or by setting the target size or the
percentage of the cluster you expect the pool to occupy. For example, to
enable auto-scaling for a specific pool:

.. code-block:: console

   $ ceph osd pool set <pool_name> pg_autoscale_mode on

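If you know roughly how much of the cluster a pool will eventually consume,
you can also give the autoscaler a hint so that it sizes the pool up front
instead of waiting for data to arrive. A minimal sketch, where ``<pool_name>``
and the ``0.2`` ratio are placeholder values to adapt to your environment:

.. code-block:: console

   $ ceph osd pool set <pool_name> target_size_ratio 0.2  # pool expected to use ~20% of the cluster
   $ ceph osd pool autoscale-status                       # review current and target PG counts per pool
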
For more detailed instructions, refer to the `Autoscaling Placement Groups <https://docs.ceph.com/en/reef/rados/operations/placement-groups/#autoscaling-placement-groups>`_
documentation from the Ceph project.

Managing the Ceph Balancer
==========================

The Ceph balancer redistributes data across OSDs to keep utilization even
throughout the cluster. This is especially important as the cluster grows,
when new OSDs are added, or during recovery operations.

To enable the balancer, run:

.. code-block:: console

   $ ceph balancer on

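The balancer supports more than one optimization mode. The ``upmap`` mode is
generally preferred because it moves individual PG mappings rather than
reweighting OSDs, but it requires that every client speaks Luminous or newer.
A sketch, assuming your clients meet that requirement (verify before raising
the minimum compatible client version):

.. code-block:: console

   $ ceph osd set-require-min-compat-client luminous  # upmap needs Luminous or newer clients
   $ ceph balancer mode upmap                         # optimize by adjusting individual PG mappings
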
You can check the current balancer status using:

.. code-block:: console

   $ ceph balancer status

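To judge whether balancing is having the desired effect, one simple check is
per-OSD utilization. The commands below are standard Ceph tooling, although
the exact output columns vary between releases:

.. code-block:: console

   $ ceph osd df         # per-OSD utilization and variance from the mean
   $ ceph balancer eval  # score for the current distribution (lower is better)
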
For a more in-depth look at how the balancer works and how to configure it,
refer to the `Balancer module <https://docs.ceph.com/en/latest/rados/operations/balancer/>`_
documentation from the Ceph project.