
Bitnami package for PostgreSQL

PostgreSQL (Postgres) is an open source object-relational database known for reliability and data integrity. ACID-compliant, it supports foreign keys, joins, views, triggers and stored procedures.

Overview of PostgreSQL

Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.

TL;DR

helm install my-release oci://registry-1.docker.io/bitnamicharts/postgresql

Looking to use PostgreSQL in production? Try VMware Tanzu Application Catalog, the enterprise edition of Bitnami Application Catalog.

Introduction

This chart bootstraps a PostgreSQL deployment on a Kubernetes cluster using the Helm package manager.

For HA, please see the postgresql-ha chart repository.

Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters.

Prerequisites

  • Kubernetes 1.23+
  • Helm 3.8.0+
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name my-release:

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/postgresql

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The command deploys PostgreSQL on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Configuration and installation details

Resource requests and limits

Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the resources value (check parameter table). Setting requests is essential for production workloads and these should be adapted to your specific use case.

To make this process easier, the chart contains the resourcesPreset value, which automatically sets the resources section according to different presets. Check these presets in the bitnami/common chart. However, using resourcesPreset is discouraged in production workloads as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
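
As a minimal sketch, explicit resources for the primary could be set as follows (the request and limit figures are illustrative only, not recommendations):

primary:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi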

Rolling VS Immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart updating its containers if a new version of the main container is available, or if significant changes or critical vulnerabilities exist.

Customizing primary and read replica services in a replicated configuration

At the top level, there is a service object which defines the services for both primary and readReplicas. For deeper customization, there are separate service objects under primary and readReplicas. These allow you to override the values in the top-level service object so that the primary and read replicas can use different service types and different clusterIPs / nodePorts. If you want both the primary and the read replicas to be of type NodePort, you will also need to set their nodePorts to different values to prevent a collision. The values that are deeper in the primary.service or readReplicas.service objects take precedence over the top-level service object.
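
As an illustrative sketch (the node port numbers are arbitrary placeholders), the following values expose the primary and the read replicas as NodePort services on different node ports:

primary:
  service:
    type: NodePort
    nodePorts:
      postgresql: "30432"
readReplicas:
  service:
    type: NodePort
    nodePorts:
      postgresql: "30433"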

Use a different PostgreSQL version

To modify the application version used in this chart, specify a different version of the image using the image.tag parameter and/or a different repository using the image.repository parameter.
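
For example, the following sketch pins a specific image tag (the tag shown is only illustrative; use a tag that actually exists in your registry):

helm install my-release \
  --set image.repository=REPOSITORY_NAME/postgresql \
  --set image.tag=16.2.0 \
  oci://REGISTRY_NAME/REPOSITORY_NAME/postgresql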

LDAP

LDAP support can be enabled in the chart by specifying the ldap.* parameters while creating a release. The following parameters should be configured to properly enable LDAP support in the chart:

  • ldap.enabled: Enable LDAP support. Defaults to false.
  • ldap.uri: LDAP URL beginning in the form ldap[s]://<hostname>:<port>. No defaults.
  • ldap.base: LDAP base DN. No defaults.
  • ldap.binddn: LDAP bind DN. No defaults.
  • ldap.bindpw: LDAP bind password. No defaults.
  • ldap.bslookup: LDAP base lookup. No defaults.
  • ldap.nss_initgroups_ignoreusers: LDAP ignored users. Defaults to root,nslcd.
  • ldap.scope: LDAP search scope. No defaults.
  • ldap.tls_reqcert: LDAP TLS check on server certificates. No defaults.

For example:

ldap.enabled="true"
ldap.uri="ldap://my_ldap_server"
ldap.base="dc=example\,dc=org"
ldap.binddn="cn=admin\,dc=example\,dc=org"
ldap.bindpw="admin"
ldap.bslookup="ou=group-ok\,dc=example\,dc=org"
ldap.nss_initgroups_ignoreusers="root\,nslcd"
ldap.scope="sub"
ldap.tls_reqcert="demand"

Next, log in to the PostgreSQL server using the psql client and add the PAM authenticated LDAP users.

Note: Parameters including commas must be escaped as shown in the above example.
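
Putting it together, a helm install command using these LDAP settings could look like the following sketch (the server address and credentials are placeholders; note the escaped commas):

helm install my-release \
  --set ldap.enabled=true \
  --set ldap.uri="ldap://my_ldap_server" \
  --set ldap.base="dc=example\,dc=org" \
  --set ldap.binddn="cn=admin\,dc=example\,dc=org" \
  --set ldap.bindpw="admin" \
  --set ldap.scope="sub" \
  --set ldap.tls_reqcert="demand" \
  oci://REGISTRY_NAME/REPOSITORY_NAME/postgresql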

postgresql.conf / pg_hba.conf files as configMap

This Helm chart also supports customizing the PostgreSQL configuration file. You can add additional PostgreSQL configuration parameters using the primary.extendedConfiguration/readReplicas.extendedConfiguration parameters as a string. Alternatively, to replace the entire default configuration use primary.configuration.

You can also add a custom pg_hba.conf using the primary.pgHbaConfiguration parameter.

In addition to these options, you can also set an external ConfigMap with all the configuration files. This is done by setting the primary.existingConfigmap parameter. Note that this will override the two previous options.
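
As a minimal sketch, a values file using these parameters might look like this (the specific settings below are illustrative only; remember that primary.pgHbaConfiguration replaces the whole pg_hba.conf):

primary:
  extendedConfiguration: |
    max_connections = 200
    shared_buffers = 256MB
  pgHbaConfiguration: |
    local all all trust
    host all all 0.0.0.0/0 md5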

Initialize a fresh instance

The Bitnami PostgreSQL image allows you to use your custom scripts to initialize a fresh instance. In order to execute the scripts, you can specify custom scripts using the primary.initdb.scripts parameter as a string.

In addition, you can also set an external ConfigMap with all the initialization scripts. This is done by setting the primary.initdb.scriptsConfigMap parameter. Note that this will override the two previous options. If your initialization scripts contain sensitive information such as credentials or passwords, you can use the primary.initdb.scriptsSecret parameter.

The allowed extensions are .sh, .sql and .sql.gz.
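
For example (a sketch only; the script name and SQL content are arbitrary), initialization scripts can be provided inline via primary.initdb.scripts:

primary:
  initdb:
    scripts:
      my_init_script.sql: |
        CREATE SCHEMA IF NOT EXISTS app;
        CREATE TABLE IF NOT EXISTS app.example (id serial PRIMARY KEY, name text);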

Securing traffic using TLS

TLS support can be enabled in the chart by specifying the tls. parameters while creating a release. The following parameters should be configured to properly enable the TLS support in the chart:

  • tls.enabled: Enable TLS support. Defaults to false
  • tls.certificatesSecret: Name of an existing secret that contains the certificates. No defaults.
  • tls.certFilename: Certificate filename. No defaults.
  • tls.certKeyFilename: Certificate key filename. No defaults.

For example:

  • First, create the secret with the certificate files:

    kubectl create secret generic certificates-tls-secret --from-file=./cert.crt --from-file=./cert.key --from-file=./ca.crt
    
  • Then, use the following parameters:

    volumePermissions.enabled=true
    tls.enabled=true
    tls.certificatesSecret="certificates-tls-secret"
    tls.certFilename="cert.crt"
    tls.certKeyFilename="cert.key"
    

    Note TLS and VolumePermissions: PostgreSQL requires certain permissions on sensitive files (such as certificate keys) to start up. Due to an ongoing issue regarding Kubernetes permissions and the use of containerSecurityContext.runAsUser, you must enable volumePermissions to ensure everything works as expected.
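
Combining the parameters above, a full install command could look like the following sketch (it reuses the certificates-tls-secret secret and filenames from the earlier example):

helm install my-release \
  --set volumePermissions.enabled=true \
  --set tls.enabled=true \
  --set tls.certificatesSecret="certificates-tls-secret" \
  --set tls.certFilename="cert.crt" \
  --set tls.certKeyFilename="cert.key" \
  oci://REGISTRY_NAME/REPOSITORY_NAME/postgresql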

Sidecars

If you need additional containers to run within the same pod as PostgreSQL (e.g. an additional metrics or logging exporter), you can do so via the sidecars config parameter. Simply define your container according to the Kubernetes container spec.

# For the PostgreSQL primary
primary:
  sidecars:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
    - name: portname
      containerPort: 1234
# For the PostgreSQL replicas
readReplicas:
  sidecars:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
    - name: portname
      containerPort: 1234

Metrics

The chart can optionally start a metrics exporter for Prometheus. The metrics endpoint (port 9187) is not exposed externally; it is expected that the metrics are collected from inside the Kubernetes cluster using something similar to what is described in the example Prometheus scrape configuration.

The exporter allows creating custom metrics from additional SQL queries. See the Chart's values.yaml for an example and consult the exporter's documentation for more details.
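
As a sketch, the following values enable the exporter and, assuming the Prometheus Operator CRDs are installed in the cluster, also create a ServiceMonitor for it (the scrape interval is just an example):

metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    interval: 30s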

Use of global variables

In more complex scenarios, we may have the following tree of dependencies

                     +--------------+
                     |              |
        +------------+   Chart 1    +-----------+
        |            |              |           |
        |            --------+------+           |
        |                    |                  |
        |                    |                  |
        |                    |                  |
        |                    |                  |
        v                    v                  v
+-------+------+    +--------+------+  +--------+------+
|              |    |               |  |               |
|  PostgreSQL  |    |  Sub-chart 1  |  |  Sub-chart 2  |
|              |    |               |  |               |
+--------------+    +---------------+  +---------------+

The three charts below depend on the parent chart Chart 1. However, subcharts 1 and 2 may need to connect to PostgreSQL as well. In order to do so, subcharts 1 and 2 need to know the PostgreSQL credentials, so one option could be to deploy Chart 1 with the following parameters:

postgresql.auth.username=testuser
subchart1.postgresql.auth.username=testuser
subchart2.postgresql.auth.username=testuser
postgresql.auth.password=testpass
subchart1.postgresql.auth.password=testpass
subchart2.postgresql.auth.password=testpass
postgresql.auth.database=testdb
subchart1.postgresql.auth.database=testdb
subchart2.postgresql.auth.database=testdb

If the number of dependent sub-charts increases, installing the chart with parameters can become increasingly difficult. An alternative would be to set the credentials using global variables as follows:

global.postgresql.auth.username=testuser
global.postgresql.auth.password=testpass
global.postgresql.auth.database=testdb

This way, the credentials will be available in all of the subcharts.
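
For example, the equivalent install command using global values would be the following sketch (the chart reference and credentials are placeholders; point it at your parent chart):

helm install my-release \
  --set global.postgresql.auth.username=testuser \
  --set global.postgresql.auth.password=testpass \
  --set global.postgresql.auth.database=testdb \
  <chart-1-reference>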

Backup and restore PostgreSQL deployments

To back up and restore Bitnami PostgreSQL Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool.

These are the steps you will usually follow to back up and restore your PostgreSQL cluster data:

  • Install Velero on the source and destination clusters.
  • Use Velero to back up the PersistentVolumes (PVs) used by the deployment on the source cluster.
  • Use Velero to restore the backed-up PVs on the destination cluster.
  • Create a new deployment on the destination cluster with the same chart, deployment name, credentials and other parameters as the original. This new deployment will use the restored PVs and hence the original data.

Refer to our detailed tutorial on backing up and restoring PostgreSQL deployments on Kubernetes for more information.
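
As an illustrative outline only (namespace, backup name and chart reference are placeholders; adapt the commands to your environment), the Velero-based steps might look like:

# Back up the release's namespace (including its PVs) on the source cluster
velero backup create postgresql-backup --include-namespaces my-namespace

# Restore the backup on the destination cluster
velero restore create --from-backup postgresql-backup

# Re-deploy the chart with the same release name, credentials and parameters
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/postgresql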

NetworkPolicy

To enable network policy for PostgreSQL, install a networking plugin that implements the Kubernetes NetworkPolicy spec, and set networkPolicy.enabled to true.

For Kubernetes v1.5 & v1.6, you must also turn on NetworkPolicy by setting the DefaultDeny namespace annotation. Note: this will enforce policy for all pods in the namespace:

kubectl annotate namespace default "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}"

With NetworkPolicy enabled, traffic will be limited to just port 5432.

For more precise policy, set networkPolicy.allowExternal=false. This will only allow pods with the generated client label to connect to PostgreSQL. This label will be displayed in the output of a successful install.
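
For example, a sketch using the per-component values listed in the parameter tables below:

helm install my-release \
  --set primary.networkPolicy.enabled=true \
  --set primary.networkPolicy.allowExternal=false \
  oci://REGISTRY_NAME/REPOSITORY_NAME/postgresql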

Differences between Bitnami PostgreSQL image and Docker Official image

  • The Docker Official PostgreSQL image does not support replication. If you pass any replication environment variable, this would be ignored. The only environment variables supported by the Docker Official image are POSTGRES_USER, POSTGRES_DB, POSTGRES_PASSWORD, POSTGRES_INITDB_ARGS, POSTGRES_INITDB_WALDIR and PGDATA. All the remaining environment variables are specific to the Bitnami PostgreSQL image.
  • The Bitnami PostgreSQL image is non-root by default. This requires that you run the pod with securityContext and updates the permissions of the volume with an initContainer. A key benefit of this configuration is that the pod follows security best practices and is prepared to run on Kubernetes distributions with hard security constraints like OpenShift.
  • For OpenShift up to 4.10, let OpenShift set the volume permissions, security context, runAsUser and fsGroup automatically, and disable the predefined settings of the Helm chart: primary.securityContext.enabled=false,primary.containerSecurityContext.enabled=false,volumePermissions.enabled=false,shmVolume.enabled=false
  • For OpenShift 4.11 and higher, let OpenShift set the runAsUser and fsGroup automatically. Configure the pod and container security context to restrictive defaults and disable the volume permissions setup: primary.podSecurityContext.fsGroup=null,primary.podSecurityContext.seccompProfile.type=RuntimeDefault,primary.containerSecurityContext.runAsUser=null,primary.containerSecurityContext.allowPrivilegeEscalation=false,primary.containerSecurityContext.runAsNonRoot=true,primary.containerSecurityContext.seccompProfile.type=RuntimeDefault,primary.containerSecurityContext.capabilities.drop=['ALL'],volumePermissions.enabled=false,shmVolume.enabled=false

Setting Pod's affinity

This chart allows you to set your custom affinity using the XXX.affinity parameter(s). Find more information about Pod affinity in the Kubernetes documentation.

As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the XXX.podAffinityPreset, XXX.podAntiAffinityPreset, or XXX.nodeAffinityPreset parameters.
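
For instance, a sketch using the presets for the primary statefulset (the node label key and value are placeholders; replace the primary prefix for other components):

primary:
  podAntiAffinityPreset: hard
  nodeAffinityPreset:
    type: soft
    key: topology.kubernetes.io/zone
    values:
      - us-east-1a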

Persistence

The Bitnami PostgreSQL image stores the PostgreSQL data and configurations at the /bitnami/postgresql path of the container.

Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube. See the Parameters section to configure the PVC or to disable persistence.

If the volume already contains data, synchronization to the standby nodes will fail for all commits; refer to the code in the container repository for details. If you need to keep that data, convert it to SQL and import it after the helm install has finished.

Parameters

Global parameters

NameDescriptionValue
global.imageRegistryGlobal Docker image registry""
global.imagePullSecretsGlobal Docker registry secret names as an array[]
global.storageClassGlobal StorageClass for Persistent Volume(s)""
global.postgresql.auth.postgresPasswordPassword for the "postgres" admin user (overrides auth.postgresPassword)""
global.postgresql.auth.usernameName for a custom user to create (overrides auth.username)""
global.postgresql.auth.passwordPassword for the custom user to create (overrides auth.password)""
global.postgresql.auth.databaseName for a custom database to create (overrides auth.database)""
global.postgresql.auth.existingSecretName of existing secret to use for PostgreSQL credentials (overrides auth.existingSecret).""
global.postgresql.auth.secretKeys.adminPasswordKeyName of key in existing secret to use for PostgreSQL credentials (overrides auth.secretKeys.adminPasswordKey). Only used when global.postgresql.auth.existingSecret is set.""
global.postgresql.auth.secretKeys.userPasswordKeyName of key in existing secret to use for PostgreSQL credentials (overrides auth.secretKeys.userPasswordKey). Only used when global.postgresql.auth.existingSecret is set.""
global.postgresql.auth.secretKeys.replicationPasswordKeyName of key in existing secret to use for PostgreSQL credentials (overrides auth.secretKeys.replicationPasswordKey). Only used when global.postgresql.auth.existingSecret is set.""
global.postgresql.service.ports.postgresqlPostgreSQL service port (overrides service.ports.postgresql)""
global.compatibility.openshift.adaptSecurityContextAdapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation)auto

Common parameters

NameDescriptionValue
kubeVersionOverride Kubernetes version""
nameOverrideString to partially override common.names.fullname template (will maintain the release name)""
fullnameOverrideString to fully override common.names.fullname template""
clusterDomainKubernetes Cluster Domaincluster.local
extraDeployArray of extra objects to deploy with the release (evaluated as a template)[]
commonLabelsAdd labels to all the deployed resources{}
commonAnnotationsAdd annotations to all the deployed resources{}
diagnosticMode.enabledEnable diagnostic mode (all probes will be disabled and the command will be overridden)false
diagnosticMode.commandCommand to override all containers in the statefulset["sleep"]
diagnosticMode.argsArgs to override all containers in the statefulset["infinity"]

PostgreSQL common parameters

NameDescriptionValue
image.registryPostgreSQL image registryREGISTRY_NAME
image.repositoryPostgreSQL image repositoryREPOSITORY_NAME/postgresql
image.digestPostgreSQL image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag""
image.pullPolicyPostgreSQL image pull policyIfNotPresent
image.pullSecretsSpecify image pull secrets[]
image.debugSpecify if debug values should be setfalse
auth.enablePostgresUserAssign a password to the "postgres" admin user. Otherwise, remote access will be blocked for this usertrue
auth.postgresPasswordPassword for the "postgres" admin user. Ignored if auth.existingSecret is provided""
auth.usernameName for a custom user to create""
auth.passwordPassword for the custom user to create. Ignored if auth.existingSecret is provided""
auth.databaseName for a custom database to create""
auth.replicationUsernameName of the replication userrepl_user
auth.replicationPasswordPassword for the replication user. Ignored if auth.existingSecret is provided""
auth.existingSecretName of existing secret to use for PostgreSQL credentials. auth.postgresPassword, auth.password, and auth.replicationPassword will be ignored and picked up from this secret. The secret might also contain the key ldap-password if LDAP is enabled. ldap.bind_password will be ignored and picked up from this secret in this case.""
auth.secretKeys.adminPasswordKeyName of key in existing secret to use for PostgreSQL credentials. Only used when auth.existingSecret is set.postgres-password
auth.secretKeys.userPasswordKeyName of key in existing secret to use for PostgreSQL credentials. Only used when auth.existingSecret is set.password
auth.secretKeys.replicationPasswordKeyName of key in existing secret to use for PostgreSQL credentials. Only used when auth.existingSecret is set.replication-password
auth.usePasswordFilesMount credentials as a files instead of using an environment variablefalse
architecturePostgreSQL architecture (standalone or replication)standalone
replication.synchronousCommitSet synchronous commit mode. Allowed values: on, remote_apply, remote_write, local and offoff
replication.numSynchronousReplicasNumber of replicas that will have synchronous replication. Note: Cannot be greater than readReplicas.replicaCount.0
replication.applicationNameCluster application name. Useful for advanced replication settingsmy_application
containerPorts.postgresqlPostgreSQL container port5432
audit.logHostnameLog client hostnamesfalse
audit.logConnectionsAdd client log-in operations to the log filefalse
audit.logDisconnectionsAdd client log-outs operations to the log filefalse
audit.pgAuditLogAdd operations to log using the pgAudit extension""
audit.pgAuditLogCatalogLog catalog using pgAuditoff
audit.clientMinMessagesMessage log level to share with the usererror
audit.logLinePrefixTemplate for log line prefix (default if not set)""
audit.logTimezoneTimezone for the log timestamps""
ldap.enabledEnable LDAP supportfalse
ldap.serverIP address or name of the LDAP server.""
ldap.portPort number on the LDAP server to connect to""
ldap.prefixString to prepend to the user name when forming the DN to bind""
ldap.suffixString to append to the user name when forming the DN to bind""
ldap.basednRoot DN to begin the search for the user in""
ldap.binddnDN of user to bind to LDAP""
ldap.bindpwPassword for the user to bind to LDAP""
ldap.searchAttributeAttribute to match against the user name in the search""
ldap.searchFilterThe search filter to use when doing search+bind authentication""
ldap.schemeSet to ldaps to use LDAPS""
ldap.tls.enabledSet to true to enable TLS encryptionfalse
ldap.uriLDAP URL beginning in the form ldap[s]://host[:port]/basedn. If provided, all the other LDAP parameters will be ignored.""
postgresqlDataDirPostgreSQL data dir folder/bitnami/postgresql/data
postgresqlSharedPreloadLibrariesShared preload libraries (comma-separated list)pgaudit
shmVolume.enabledEnable emptyDir volume for /dev/shm for PostgreSQL pod(s)true
shmVolume.sizeLimitSet this to enable a size limit on the shm tmpfs""
tls.enabledEnable TLS traffic supportfalse
tls.autoGeneratedGenerate automatically self-signed TLS certificatesfalse
tls.preferServerCiphersWhether to use the server's TLS cipher preferences rather than the client'strue
tls.certificatesSecretName of an existing secret that contains the certificates""
tls.certFilenameCertificate filename""
tls.certKeyFilenameCertificate key filename""
tls.certCAFilenameCA Certificate filename""
tls.crlFilenameFile containing a Certificate Revocation List""

PostgreSQL Primary parameters

NameDescriptionValue
primary.nameName of the primary database (eg primary, master, leader, ...)primary
primary.configurationPostgreSQL Primary main configuration to be injected as ConfigMap""
primary.pgHbaConfigurationPostgreSQL Primary client authentication configuration""
primary.existingConfigmapName of an existing ConfigMap with PostgreSQL Primary configuration""
primary.extendedConfigurationExtended PostgreSQL Primary configuration (appended to main or default configuration)""
primary.existingExtendedConfigmapName of an existing ConfigMap with PostgreSQL Primary extended configuration""
primary.initdb.argsPostgreSQL initdb extra arguments""
primary.initdb.postgresqlWalDirSpecify a custom location for the PostgreSQL transaction log""
primary.initdb.scriptsDictionary of initdb scripts{}
primary.initdb.scriptsConfigMapConfigMap with scripts to be run at first boot""
primary.initdb.scriptsSecretSecret with scripts to be run at first boot (in case it contains sensitive information)""
primary.initdb.userSpecify the PostgreSQL username to execute the initdb scripts""
primary.initdb.passwordSpecify the PostgreSQL password to execute the initdb scripts""
primary.standby.enabledWhether to enable current cluster's primary as standby server of another cluster or notfalse
primary.standby.primaryHostThe Host of replication primary in the other cluster""
primary.standby.primaryPortThe Port of replication primary in the other cluster""
primary.extraEnvVarsArray with extra environment variables to add to PostgreSQL Primary nodes[]
primary.extraEnvVarsCMName of existing ConfigMap containing extra env vars for PostgreSQL Primary nodes""
primary.extraEnvVarsSecretName of existing Secret containing extra env vars for PostgreSQL Primary nodes""
primary.commandOverride default container command (useful when using custom images)[]
primary.argsOverride default container args (useful when using custom images)[]
primary.livenessProbe.enabledEnable livenessProbe on PostgreSQL Primary containerstrue
primary.livenessProbe.initialDelaySecondsInitial delay seconds for livenessProbe30
primary.livenessProbe.periodSecondsPeriod seconds for livenessProbe10
primary.livenessProbe.timeoutSecondsTimeout seconds for livenessProbe5
primary.livenessProbe.failureThresholdFailure threshold for livenessProbe6
primary.livenessProbe.successThresholdSuccess threshold for livenessProbe1
primary.readinessProbe.enabledEnable readinessProbe on PostgreSQL Primary containerstrue
primary.readinessProbe.initialDelaySecondsInitial delay seconds for readinessProbe5
primary.readinessProbe.periodSecondsPeriod seconds for readinessProbe10
primary.readinessProbe.timeoutSecondsTimeout seconds for readinessProbe5
primary.readinessProbe.failureThresholdFailure threshold for readinessProbe6
primary.readinessProbe.successThresholdSuccess threshold for readinessProbe1
primary.startupProbe.enabledEnable startupProbe on PostgreSQL Primary containersfalse
primary.startupProbe.initialDelaySecondsInitial delay seconds for startupProbe30
primary.startupProbe.periodSecondsPeriod seconds for startupProbe10
primary.startupProbe.timeoutSecondsTimeout seconds for startupProbe1
primary.startupProbe.failureThresholdFailure threshold for startupProbe15
primary.startupProbe.successThresholdSuccess threshold for startupProbe1
primary.customLivenessProbeCustom livenessProbe that overrides the default one{}
primary.customReadinessProbeCustom readinessProbe that overrides the default one{}
primary.customStartupProbeCustom startupProbe that overrides the default one{}
primary.lifecycleHooksfor the PostgreSQL Primary container to automate configuration before or after startup{}
primary.resourcesPresetSet container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if primary.resources is set (primary.resources is recommended for production).nano
primary.resourcesSet container requests and limits for different resources like CPU or memory (essential for production workloads){}
primary.podSecurityContext.enabledEnable security contexttrue
primary.podSecurityContext.fsGroupChangePolicySet filesystem group change policyAlways
primary.podSecurityContext.sysctlsSet kernel settings using the sysctl interface[]
primary.podSecurityContext.supplementalGroupsSet filesystem extra groups[]
primary.podSecurityContext.fsGroupGroup ID for the pod1001
primary.containerSecurityContext.enabledEnabled containers' Security Contexttrue
primary.containerSecurityContext.seLinuxOptionsSet SELinux options in container{}
primary.containerSecurityContext.runAsUserSet containers' Security Context runAsUser1001
primary.containerSecurityContext.runAsGroupSet containers' Security Context runAsGroup1001
primary.containerSecurityContext.runAsNonRootSet container's Security Context runAsNonRoottrue
primary.containerSecurityContext.privilegedSet container's Security Context privilegedfalse
primary.containerSecurityContext.readOnlyRootFilesystemSet container's Security Context readOnlyRootFilesystemtrue
primary.containerSecurityContext.allowPrivilegeEscalationSet container's Security Context allowPrivilegeEscalationfalse
primary.containerSecurityContext.capabilities.dropList of capabilities to be dropped["ALL"]
primary.containerSecurityContext.seccompProfile.typeSet container's Security Context seccomp profileRuntimeDefault
primary.automountServiceAccountTokenMount Service Account token in podfalse
primary.hostAliasesPostgreSQL primary pods host aliases[]
primary.hostNetworkSpecify if host network should be enabled for PostgreSQL pod (postgresql primary)false
primary.hostIPCSpecify if host IPC should be enabled for PostgreSQL pod (postgresql primary)false
primary.labelsMap of labels to add to the statefulset (postgresql primary){}
primary.annotationsAnnotations for PostgreSQL primary pods{}
primary.podLabelsMap of labels to add to the pods (postgresql primary){}
primary.podAnnotationsMap of annotations to add to the pods (postgresql primary){}
primary.podAffinityPresetPostgreSQL primary pod affinity preset. Ignored if primary.affinity is set. Allowed values: soft or hard""
primary.podAntiAffinityPresetPostgreSQL primary pod anti-affinity preset. Ignored if primary.affinity is set. Allowed values: soft or hardsoft
primary.nodeAffinityPreset.typePostgreSQL primary node affinity preset type. Ignored if primary.affinity is set. Allowed values: soft or hard""
primary.nodeAffinityPreset.keyPostgreSQL primary node label key to match Ignored if primary.affinity is set.""
primary.nodeAffinityPreset.valuesPostgreSQL primary node label values to match. Ignored if primary.affinity is set.[]
primary.affinityAffinity for PostgreSQL primary pods assignment{}
primary.nodeSelectorNode labels for PostgreSQL primary pods assignment{}
primary.tolerationsTolerations for PostgreSQL primary pods assignment[]
primary.topologySpreadConstraintsTopology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template[]
primary.priorityClassNamePriority Class to use for each pod (postgresql primary)""
primary.schedulerNameUse an alternate scheduler, e.g. "stork".""
primary.terminationGracePeriodSecondsSeconds PostgreSQL primary pod needs to terminate gracefully""
primary.updateStrategy.typePostgreSQL Primary statefulset strategy typeRollingUpdate
primary.updateStrategy.rollingUpdatePostgreSQL Primary statefulset rolling update configuration parameters{}
primary.extraVolumeMountsOptionally specify extra list of additional volumeMounts for the PostgreSQL Primary container(s)[]
primary.extraVolumesOptionally specify extra list of additional volumes for the PostgreSQL Primary pod(s)[]
primary.sidecarsAdd additional sidecar containers to the PostgreSQL Primary pod(s)[]
primary.initContainersAdd additional init containers to the PostgreSQL Primary pod(s)[]
primary.pdb.createEnable/disable a Pod Disruption Budget creationtrue
primary.pdb.minAvailableMinimum number/percentage of pods that should remain scheduled""
primary.pdb.maxUnavailableMaximum number/percentage of pods that may be made unavailable. Defaults to 1 if both primary.pdb.minAvailable and primary.pdb.maxUnavailable are empty.""
primary.extraPodSpecOptionally specify extra PodSpec for the PostgreSQL Primary pod(s){}
primary.networkPolicy.enabledSpecifies whether a NetworkPolicy should be createdtrue
primary.networkPolicy.allowExternalDon't require server label for connectionstrue
primary.networkPolicy.allowExternalEgressAllow the pod to access any range of port and all destinations.true
primary.networkPolicy.extraIngressAdd extra ingress rules to the NetworkPolicy[]
primary.networkPolicy.extraEgressAdd extra egress rules to the NetworkPolicy[]
primary.networkPolicy.ingressNSMatchLabelsLabels to match to allow traffic from other namespaces{}
primary.networkPolicy.ingressNSPodMatchLabelsPod labels to match to allow traffic from other namespaces{}
primary.service.typeKubernetes Service typeClusterIP
primary.service.ports.postgresqlPostgreSQL service port5432
primary.service.nodePorts.postgresqlNode port for PostgreSQL""
primary.service.clusterIPStatic clusterIP or None for headless services""
primary.service.annotationsAnnotations for PostgreSQL primary service{}
primary.service.loadBalancerClassLoad balancer class if service type is LoadBalancer""
primary.service.loadBalancerIPLoad balancer IP if service type is LoadBalancer""
primary.service.externalTrafficPolicyEnable client source IP preservationCluster
primary.service.loadBalancerSourceRangesAddresses that are allowed when service is LoadBalancer[]
primary.service.extraPortsExtra ports to expose in the PostgreSQL primary service[]
primary.service.sessionAffinitySession Affinity for Kubernetes service, can be "None" or "ClientIP"None
primary.service.sessionAffinityConfigAdditional settings for the sessionAffinity{}
primary.service.headless.annotationsAdditional custom annotations for headless PostgreSQL primary service{}
primary.persistence.enabledEnable PostgreSQL Primary data persistence using PVCtrue
primary.persistence.volumeNameName to assign the volumedata
primary.persistence.existingClaimName of an existing PVC to use""
primary.persistence.mountPathThe path the volume will be mounted at/bitnami/postgresql
primary.persistence.subPathThe subdirectory of the volume to mount to""
primary.persistence.storageClassPVC Storage Class for PostgreSQL Primary data volume""
primary.persistence.accessModesPVC Access Mode for PostgreSQL volume["ReadWriteOnce"]
primary.persistence.sizePVC Storage Request for PostgreSQL volume8Gi
primary.persistence.annotationsAnnotations for the PVC{}
primary.persistence.labelsLabels for the PVC{}
primary.persistence.selectorSelector to match an existing Persistent Volume (this value is evaluated as a template){}
primary.persistence.dataSourceCustom PVC data source{}
primary.persistentVolumeClaimRetentionPolicy.enabledEnable Persistent volume retention policy for Primary Statefulsetfalse
primary.persistentVolumeClaimRetentionPolicy.whenScaledVolume retention behavior when the replica count of the StatefulSet is reducedRetain
primary.persistentVolumeClaimRetentionPolicy.whenDeletedVolume retention behavior that applies when the StatefulSet is deletedRetain

PostgreSQL read only replica parameters (only used when architecture is set to replication)

NameDescriptionValue
readReplicas.nameName of the read replicas database (eg secondary, slave, ...)read
readReplicas.replicaCountNumber of PostgreSQL read only replicas1
readReplicas.extendedConfigurationExtended PostgreSQL read only replicas configuration (appended to main or default configuration)""
readReplicas.extraEnvVarsArray with extra environment variables to add to PostgreSQL read only nodes[]
readReplicas.extraEnvVarsCMName of existing ConfigMap containing extra env vars for PostgreSQL read only nodes""
readReplicas.extraEnvVarsSecretName of existing Secret containing extra env vars for PostgreSQL read only nodes""
readReplicas.commandOverride default container command (useful when using custom images)[]
readReplicas.argsOverride default container args (useful when using custom images)[]
readReplicas.livenessProbe.enabledEnable livenessProbe on PostgreSQL read only containerstrue
readReplicas.livenessProbe.initialDelaySecondsInitial delay seconds for livenessProbe30
readReplicas.livenessProbe.periodSecondsPeriod seconds for livenessProbe10
readReplicas.livenessProbe.timeoutSecondsTimeout seconds for livenessProbe5
readReplicas.livenessProbe.failureThresholdFailure threshold for livenessProbe6
readReplicas.livenessProbe.successThresholdSuccess threshold for livenessProbe1
readReplicas.readinessProbe.enabledEnable readinessProbe on PostgreSQL read only containerstrue
readReplicas.readinessProbe.initialDelaySecondsInitial delay seconds for readinessProbe5
readReplicas.readinessProbe.periodSecondsPeriod seconds for readinessProbe10
readReplicas.readinessProbe.timeoutSecondsTimeout seconds for readinessProbe5
readReplicas.readinessProbe.failureThresholdFailure threshold for readinessProbe6
readReplicas.readinessProbe.successThresholdSuccess threshold for readinessProbe1
readReplicas.startupProbe.enabledEnable startupProbe on PostgreSQL read only containersfalse
readReplicas.startupProbe.initialDelaySecondsInitial delay seconds for startupProbe30
readReplicas.startupProbe.periodSecondsPeriod seconds for startupProbe10
readReplicas.startupProbe.timeoutSecondsTimeout seconds for startupProbe1
readReplicas.startupProbe.failureThresholdFailure threshold for startupProbe15
readReplicas.startupProbe.successThresholdSuccess threshold for startupProbe1
readReplicas.customLivenessProbeCustom livenessProbe that overrides the default one{}
readReplicas.customReadinessProbeCustom readinessProbe that overrides the default one{}
readReplicas.customStartupProbeCustom startupProbe that overrides the default one{}
readReplicas.lifecycleHooksfor the PostgreSQL read only container to automate configuration before or after startup{}
readReplicas.resourcesPresetSet container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if readReplicas.resources is set (readReplicas.resources is recommended for production).nano
readReplicas.resourcesSet container requests and limits for different resources like CPU or memory (essential for production workloads){}
readReplicas.podSecurityContext.enabledEnable security contexttrue
readReplicas.podSecurityContext.fsGroupChangePolicySet filesystem group change policyAlways
readReplicas.podSecurityContext.sysctlsSet kernel settings using the sysctl interface[]
readReplicas.podSecurityContext.supplementalGroupsSet filesystem extra groups[]
readReplicas.podSecurityContext.fsGroupGroup ID for the pod1001
readReplicas.containerSecurityContext.enabledEnabled containers' Security Contexttrue
readReplicas.containerSecurityContext.seLinuxOptionsSet SELinux options in container{}
readReplicas.containerSecurityContext.runAsUserSet containers' Security Context runAsUser1001
readReplicas.containerSecurityContext.runAsGroupSet containers' Security Context runAsGroup1001
readReplicas.containerSecurityContext.runAsNonRootSet container's Security Context runAsNonRoottrue
readReplicas.containerSecurityContext.privilegedSet container's Security Context privilegedfalse
readReplicas.containerSecurityContext.readOnlyRootFilesystemSet container's Security Context readOnlyRootFilesystemtrue
readReplicas.containerSecurityContext.allowPrivilegeEscalationSet container's Security Context allowPrivilegeEscalationfalse
readReplicas.containerSecurityContext.capabilities.dropList of capabilities to be dropped["ALL"]
readReplicas.containerSecurityContext.seccompProfile.typeSet container's Security Context seccomp profileRuntimeDefault
readReplicas.automountServiceAccountTokenMount Service Account token in podfalse
readReplicas.hostAliasesPostgreSQL read only pods host aliases[]
readReplicas.hostNetworkSpecify if host network should be enabled for PostgreSQL pod (PostgreSQL read only)false
readReplicas.hostIPCSpecify if host IPC should be enabled for PostgreSQL pod (postgresql primary)false
readReplicas.labelsMap of labels to add to the statefulset (PostgreSQL read only){}
readReplicas.annotationsAnnotations for PostgreSQL read only pods{}
readReplicas.podLabelsMap of labels to add to the pods (PostgreSQL read only){}
readReplicas.podAnnotationsMap of annotations to add to the pods (PostgreSQL read only){}
readReplicas.podAffinityPresetPostgreSQL read only pod affinity preset. Ignored if primary.affinity is set. Allowed values: soft or hard""
readReplicas.podAntiAffinityPresetPostgreSQL read only pod anti-affinity preset. Ignored if primary.affinity is set. Allowed values: soft or hardsoft
readReplicas.nodeAffinityPreset.typePostgreSQL read only node affinity preset type. Ignored if primary.affinity is set. Allowed values: soft or hard""
readReplicas.nodeAffinityPreset.keyPostgreSQL read only node label key to match Ignored if primary.affinity is set.""
readReplicas.nodeAffinityPreset.valuesPostgreSQL read only node label values to match. Ignored if primary.affinity is set.[]
readReplicas.affinityAffinity for PostgreSQL read only pods assignment{}
readReplicas.nodeSelectorNode labels for PostgreSQL read only pods assignment{}
readReplicas.tolerationsTolerations for PostgreSQL read only pods assignment[]
readReplicas.topologySpreadConstraintsTopology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template[]
readReplicas.priorityClassNamePriority Class to use for each pod (PostgreSQL read only)""
readReplicas.schedulerNameUse an alternate scheduler, e.g. "stork".""
readReplicas.terminationGracePeriodSecondsSeconds PostgreSQL read only pod needs to terminate gracefully""
readReplicas.updateStrategy.typePostgreSQL read only statefulset strategy typeRollingUpdate
readReplicas.updateStrategy.rollingUpdatePostgreSQL read only statefulset rolling update configuration parameters{}
readReplicas.extraVolumeMountsOptionally specify extra list of additional volumeMounts for the PostgreSQL read only container(s)[]
readReplicas.extraVolumesOptionally specify extra list of additional volumes for the PostgreSQL read only pod(s)[]
readReplicas.sidecarsAdd additional sidecar containers to the PostgreSQL read only pod(s)[]
readReplicas.initContainersAdd additional init containers to the PostgreSQL read only pod(s)[]
readReplicas.pdb.createEnable/disable a Pod Disruption Budget creationtrue
readReplicas.pdb.minAvailableMinimum number/percentage of pods that should remain scheduled""
readReplicas.pdb.maxUnavailableMaximum number/percentage of pods that may be made unavailable. Defaults to 1 if both readReplicas.pdb.minAvailable and readReplicas.pdb.maxUnavailable are empty.""
readReplicas.extraPodSpecOptionally specify extra PodSpec for the PostgreSQL read only pod(s){}
readReplicas.networkPolicy.enabledSpecifies whether a NetworkPolicy should be createdtrue
readReplicas.networkPolicy.allowExternalDon't require server label for connectionstrue
readReplicas.networkPolicy.allowExternalEgressAllow the pod to access any range of port and all destinations.true
readReplicas.networkPolicy.extraIngressAdd extra ingress rules to the NetworkPolicy[]
readReplicas.networkPolicy.extraEgressAdd extra egress rules to the NetworkPolicy[]
readReplicas.networkPolicy.ingressNSMatchLabelsLabels to match to allow traffic from other namespaces{}
readReplicas.networkPolicy.ingressNSPodMatchLabelsPod labels to match to allow traffic from other namespaces{}
readReplicas.service.typeKubernetes Service typeClusterIP
readReplicas.service.ports.postgresqlPostgreSQL service port5432
readReplicas.service.nodePorts.postgresqlNode port for PostgreSQL""
readReplicas.service.clusterIPStatic clusterIP or None for headless services""
readReplicas.service.annotationsAnnotations for PostgreSQL read only service{}
readReplicas.service.loadBalancerClassLoad balancer class if service type is LoadBalancer""
readReplicas.service.loadBalancerIPLoad balancer IP if service type is LoadBalancer""
readReplicas.service.externalTrafficPolicyEnable client source IP preservationCluster
readReplicas.service.loadBalancerSourceRangesAddresses that are allowed when service is LoadBalancer[]
readReplicas.service.extraPortsExtra ports to expose in the PostgreSQL read only service[]
readReplicas.service.sessionAffinitySession Affinity for Kubernetes service, can be "None" or "ClientIP"None
readReplicas.service.sessionAffinityConfigAdditional settings for the sessionAffinity{}
readReplicas.service.headless.annotationsAdditional custom annotations for headless PostgreSQL read only service{}
readReplicas.persistence.enabledEnable PostgreSQL read only data persistence using PVCtrue
readReplicas.persistence.existingClaimName of an existing PVC to use""
readReplicas.persistence.mountPathThe path the volume will be mounted at/bitnami/postgresql
readReplicas.persistence.subPathThe subdirectory of the volume to mount to""
readReplicas.persistence.storageClassPVC Storage Class for PostgreSQL read only data volume""
readReplicas.persistence.accessModesPVC Access Mode for PostgreSQL volume["ReadWriteOnce"]
readReplicas.persistence.sizePVC Storage Request for PostgreSQL volume8Gi
readReplicas.persistence.annotationsAnnotations for the PVC{}
readReplicas.persistence.labelsLabels for the PVC{}
readReplicas.persistence.selectorSelector to match an existing Persistent Volume (this value is evaluated as a template){}
readReplicas.persistence.dataSourceCustom PVC data source{}
readReplicas.persistentVolumeClaimRetentionPolicy.enabledEnable Persistent volume retention policy for read only Statefulsetfalse
readReplicas.persistentVolumeClaimRetentionPolicy.whenScaledVolume retention behavior when the replica count of the StatefulSet is reducedRetain
readReplicas.persistentVolumeClaimRetentionPolicy.whenDeletedVolume retention behavior that applies when the StatefulSet is deletedRetain

Backup parameters

NameDescriptionValue
backup.enabledEnable the logical dump of the database "regularly"false
backup.cronjob.scheduleSet the cronjob parameter schedule@daily
backup.cronjob.timeZoneSet the cronjob parameter timeZone""
backup.cronjob.concurrencyPolicySet the cronjob parameter concurrencyPolicyAllow
backup.cronjob.failedJobsHistoryLimitSet the cronjob parameter failedJobsHistoryLimit1
backup.cronjob.successfulJobsHistoryLimitSet the cronjob parameter successfulJobsHistoryLimit3
backup.cronjob.startingDeadlineSecondsSet the cronjob parameter startingDeadlineSeconds""
backup.cronjob.ttlSecondsAfterFinishedSet the cronjob parameter ttlSecondsAfterFinished""
backup.cronjob.restartPolicySet the cronjob parameter restartPolicyOnFailure
backup.cronjob.podSecurityContext.enabledEnable PodSecurityContext for CronJob/Backuptrue
backup.cronjob.podSecurityContext.fsGroupChangePolicySet filesystem group change policyAlways
backup.cronjob.podSecurityContext.sysctlsSet kernel settings using the sysctl interface[]
backup.cronjob.podSecurityContext.supplementalGroupsSet filesystem extra groups[]
backup.cronjob.podSecurityContext.fsGroupGroup ID for the CronJob1001
backup.cronjob.containerSecurityContext.enabledEnabled containers' Security Contexttrue
backup.cronjob.containerSecurityContext.seLinuxOptionsSet SELinux options in container{}
backup.cronjob.containerSecurityContext.runAsUserSet containers' Security Context runAsUser1001
backup.cronjob.containerSecurityContext.runAsGroupSet containers' Security Context runAsGroup1001
backup.cronjob.containerSecurityContext.runAsNonRootSet container's Security Context runAsNonRoottrue
backup.cronjob.containerSecurityContext.privilegedSet container's Security Context privilegedfalse
backup.cronjob.containerSecurityContext.readOnlyRootFilesystemSet container's Security Context readOnlyRootFilesystemtrue
backup.cronjob.containerSecurityContext.allowPrivilegeEscalationSet container's Security Context allowPrivilegeEscalationfalse
backup.cronjob.containerSecurityContext.capabilities.dropList of capabilities to be dropped["ALL"]
backup.cronjob.containerSecurityContext.seccompProfile.typeSet container's Security Context seccomp profileRuntimeDefault
backup.cronjob.commandSet backup container's command to run["/bin/sh","-c","pg_dumpall --clean --if-exists --load-via-partition-root --quote-all-identifiers --no-password --file=${PGDUMP_DIR}/pg_dumpall-$(date '+%Y-%m-%d-%H-%M').pgdump"]
backup.cronjob.labelsSet the cronjob labels{}
backup.cronjob.annotationsSet the cronjob annotations{}
backup.cronjob.nodeSelectorNode labels for PostgreSQL backup CronJob pod assignment{}
backup.cronjob.tolerationsTolerations for PostgreSQL backup CronJob pods assignment[]
backup.cronjob.resourcesPresetSet container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if backup.cronjob.resources is set (backup.cronjob.resources is recommended for production).nano
backup.cronjob.resourcesSet container requests and limits for different resources like CPU or memory{}
backup.cronjob.networkPolicy.enabledSpecifies whether a NetworkPolicy should be createdtrue
backup.cronjob.storage.enabledEnable using a PersistentVolumeClaim as backup data volumetrue
backup.cronjob.storage.existingClaimProvide an existing PersistentVolumeClaim (only when architecture=standalone)""
backup.cronjob.storage.resourcePolicySetting it to "keep" to avoid removing PVCs during a helm delete operation. Leaving it empty will delete PVCs after the chart deleted""
backup.cronjob.storage.storageClassPVC Storage Class for the backup data volume""
backup.cronjob.storage.accessModesPV Access Mode["ReadWriteOnce"]
backup.cronjob.storage.sizePVC Storage Request for the backup data volume8Gi
backup.cronjob.storage.annotationsPVC annotations{}
backup.cronjob.storage.mountPathPath to mount the volume at/backup/pgdump
backup.cronjob.storage.subPathSubdirectory of the volume to mount at""
backup.cronjob.storage.volumeClaimTemplates.selectorA label query over volumes to consider for binding (e.g. when using local volumes){}
backup.cronjob.extraVolumeMountsOptionally specify extra list of additional volumeMounts for the backup container[]
backup.cronjob.extraVolumesOptionally specify extra list of additional volumes for the backup container[]

Volume Permissions parameters

NameDescriptionValue
volumePermissions.enabledEnable init container that changes the owner and group of the persistent volumefalse
volumePermissions.image.registryInit container volume-permissions image registryREGISTRY_NAME
volumePermissions.image.repositoryInit container volume-permissions image repositoryREPOSITORY_NAME/os-shell
volumePermissions.image.digestInit container volume-permissions image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag""
volumePermissions.image.pullPolicyInit container volume-permissions image pull policyIfNotPresent
volumePermissions.image.pullSecretsInit container volume-permissions image pull secrets[]
volumePermissions.resourcesPresetSet container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if volumePermissions.resources is set (volumePermissions.resources is recommended for production).nano
volumePermissions.resourcesSet container requests and limits for different resources like CPU or memory (essential for production workloads){}
volumePermissions.containerSecurityContext.seLinuxOptionsSet SELinux options in container{}
volumePermissions.containerSecurityContext.runAsUserUser ID for the init container0
volumePermissions.containerSecurityContext.runAsGroupGroup ID for the init container0
volumePermissions.containerSecurityContext.runAsNonRootrunAsNonRoot for the init containerfalse
volumePermissions.containerSecurityContext.seccompProfile.typeseccompProfile.type for the init containerRuntimeDefault

Other Parameters

NameDescriptionValue
serviceBindings.enabledCreate secret for service binding (Experimental)false
serviceAccount.createEnable creation of ServiceAccount for PostgreSQL podtrue
serviceAccount.nameThe name of the ServiceAccount to use.""
serviceAccount.automountServiceAccountTokenAllows auto mount of ServiceAccountToken on the serviceAccount createdfalse
serviceAccount.annotationsAdditional custom annotations for the ServiceAccount{}
rbac.createCreate Role and RoleBinding (required for PSP to work)false
rbac.rulesCustom RBAC rules to set[]
psp.createWhether to create a PodSecurityPolicy. WARNING: PodSecurityPolicy is deprecated in Kubernetes v1.21 or later, unavailable in v1.25 or laterfalse

Metrics Parameters

NameDescriptionValue
metrics.enabledStart a prometheus exporterfalse
metrics.image.registryPostgreSQL Prometheus Exporter image registryREGISTRY_NAME
metrics.image.repositoryPostgreSQL Prometheus Exporter image repositoryREPOSITORY_NAME/postgres-exporter
metrics.image.digestPostgreSQL image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag""
metrics.image.pullPolicyPostgreSQL Prometheus Exporter image pull policyIfNotPresent
metrics.image.pullSecretsSpecify image pull secrets[]
metrics.collectorsControl enabled collectors{}
metrics.customMetricsDefine additional custom metrics{}
metrics.extraEnvVarsExtra environment variables to add to PostgreSQL Prometheus exporter[]
metrics.containerSecurityContext.enabledEnabled containers' Security Contexttrue
metrics.containerSecurityContext.seLinuxOptionsSet SELinux options in container{}
metrics.containerSecurityContext.runAsUserSet containers' Security Context runAsUser1001
metrics.containerSecurityContext.runAsGroupSet containers' Security Context runAsGroup1001
metrics.containerSecurityContext.runAsNonRootSet container's Security Context runAsNonRoottrue
metrics.containerSecurityContext.privilegedSet container's Security Context privilegedfalse
metrics.containerSecurityContext.readOnlyRootFilesystemSet container's Security Context readOnlyRootFilesystemtrue
metrics.containerSecurityContext.allowPrivilegeEscalationSet container's Security Context allowPrivilegeEscalationfalse
metrics.containerSecurityContext.capabilities.dropList of capabilities to be dropped["ALL"]
metrics.containerSecurityContext.seccompProfile.typeSet container's Security Context seccomp profileRuntimeDefault
| metrics.livenessProbe.enabled | Enable livenessProbe on PostgreSQL Prometheus exporter containers | true |
| metrics.livenessProbe.initialDelaySeconds | Initial delay seconds for livenessProbe | 5 |
| metrics.livenessProbe.periodSeconds | Period seconds for livenessProbe | 10 |
| metrics.livenessProbe.timeoutSeconds | Timeout seconds for livenessProbe | 5 |
| metrics.livenessProbe.failureThreshold | Failure threshold for livenessProbe | 6 |
| metrics.livenessProbe.successThreshold | Success threshold for livenessProbe | 1 |
| metrics.readinessProbe.enabled | Enable readinessProbe on PostgreSQL Prometheus exporter containers | true |
| metrics.readinessProbe.initialDelaySeconds | Initial delay seconds for readinessProbe | 5 |
| metrics.readinessProbe.periodSeconds | Period seconds for readinessProbe | 10 |
| metrics.readinessProbe.timeoutSeconds | Timeout seconds for readinessProbe | 5 |
| metrics.readinessProbe.failureThreshold | Failure threshold for readinessProbe | 6 |
| metrics.readinessProbe.successThreshold | Success threshold for readinessProbe | 1 |
| metrics.startupProbe.enabled | Enable startupProbe on PostgreSQL Prometheus exporter containers | false |
| metrics.startupProbe.initialDelaySeconds | Initial delay seconds for startupProbe | 10 |
| metrics.startupProbe.periodSeconds | Period seconds for startupProbe | 10 |
| metrics.startupProbe.timeoutSeconds | Timeout seconds for startupProbe | 1 |
| metrics.startupProbe.failureThreshold | Failure threshold for startupProbe | 15 |
| metrics.startupProbe.successThreshold | Success threshold for startupProbe | 1 |
| metrics.customLivenessProbe | Custom livenessProbe that overrides the default one | {} |
| metrics.customReadinessProbe | Custom readinessProbe that overrides the default one | {} |
| metrics.customStartupProbe | Custom startupProbe that overrides the default one | {} |
| metrics.containerPorts.metrics | PostgreSQL Prometheus exporter metrics container port | 9187 |
| metrics.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if metrics.resources is set (metrics.resources is recommended for production). | nano |
| metrics.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {} |
| metrics.service.ports.metrics | PostgreSQL Prometheus Exporter service port | 9187 |
| metrics.service.clusterIP | Static clusterIP or None for headless services | "" |
| metrics.service.sessionAffinity | Control where client requests go, to the same pod or round-robin | None |
| metrics.service.annotations | Annotations for Prometheus to auto-discover the metrics endpoint | {} |
| metrics.serviceMonitor.enabled | Create ServiceMonitor Resource for scraping metrics using Prometheus Operator | false |
| metrics.serviceMonitor.namespace | Namespace for the ServiceMonitor Resource (defaults to the Release Namespace) | "" |
| metrics.serviceMonitor.interval | Interval at which metrics should be scraped | "" |
| metrics.serviceMonitor.scrapeTimeout | Timeout after which the scrape is ended | "" |
| metrics.serviceMonitor.labels | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | {} |
| metrics.serviceMonitor.selector | Prometheus instance selector labels | {} |
| metrics.serviceMonitor.relabelings | RelabelConfigs to apply to samples before scraping | [] |
| metrics.serviceMonitor.metricRelabelings | MetricRelabelConfigs to apply to samples before ingestion | [] |
| metrics.serviceMonitor.honorLabels | Specify honorLabels parameter to add the scrape endpoint | false |
| metrics.serviceMonitor.jobLabel | The name of the label on the target service to use as the job name in Prometheus | "" |
| metrics.prometheusRule.enabled | Create a PrometheusRule for Prometheus Operator | false |
| metrics.prometheusRule.namespace | Namespace for the PrometheusRule Resource (defaults to the Release Namespace) | "" |
| metrics.prometheusRule.labels | Additional labels that can be used so PrometheusRule will be discovered by Prometheus | {} |
| metrics.prometheusRule.rules | PrometheusRule definitions | [] |
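
As an illustration, the snippet below enables the bundled Prometheus exporter together with a ServiceMonitor using the parameters listed above. It is a minimal sketch: it assumes the chart's metrics.enabled flag (documented earlier in the metrics section of the parameters table), that the Prometheus Operator CRDs are installed, and a placeholder release: kube-prometheus-stack label that must match your Prometheus instance's ServiceMonitor selector.

# values-metrics.yaml (illustrative only)
metrics:
  enabled: true                      # assumed flag from the metrics section of the parameters table
  resourcesPreset: micro             # or set metrics.resources explicitly for production
  serviceMonitor:
    enabled: true                    # requires the Prometheus Operator CRDs
    interval: 30s
    labels:
      release: kube-prometheus-stack # placeholder: must match your Prometheus ServiceMonitor selector

helm upgrade --install my-release -f values-metrics.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/postgresql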

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

helm install my-release \
    --set auth.postgresPassword=secretpassword \
    oci://REGISTRY_NAME/REPOSITORY_NAME/postgresql

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The above command sets the PostgreSQL postgres account password to secretpassword.

NOTE: Once this chart is deployed, it is not possible to change the application's access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application's built-in administrative tools if available. Warning: setting a password on a new installation will be ignored if a previous PostgreSQL release was deleted through the helm command. In that case, the old PVC still holds the old password, and setting a new one through Helm won't take effect. Deleting the persistent volumes (PVs) solves the issue. Refer to issue 2061 for more details.
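
As an illustration of the warning above, the commands below sketch one way to wipe the old PVC and re-deploy with a new password. They assume the release name postgresql, the default namespace, and the chart's standard app.kubernetes.io/instance label; adapt them to your deployment and back up your data first, since deleting the PVC destroys it.

# WARNING: this permanently deletes the data stored in the PVC
helm uninstall postgresql --namespace default
kubectl get pvc --namespace default -l app.kubernetes.io/instance=postgresql    # list the PVCs left behind
kubectl delete pvc --namespace default -l app.kubernetes.io/instance=postgresql
helm install postgresql oci://REGISTRY_NAME/REPOSITORY_NAME/postgresql --set auth.postgresPassword=newpassword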

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/postgresql

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts. Tip: You can use the default values.yaml
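
For reference, a minimal values.yaml could look like the sketch below. The auth.postgresPassword and architecture parameters appear elsewhere in this README; the password is a placeholder and, for real deployments, referencing an existing secret is usually preferable.

# values.yaml (minimal illustrative example)
auth:
  postgresPassword: secretpassword   # placeholder; prefer an existing Kubernetes secret in production
architecture: standalone             # or "replication" for primary + read replicas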

Troubleshooting

Find more information about how to deal with common errors related to Bitnami's Helm charts in this troubleshooting guide.

Upgrading

To 15.0.0

This major bump changes the following security defaults:

  • runAsGroup is changed from 0 to 1001
  • readOnlyRootFilesystem is set to true
  • resourcesPreset is changed from none to the minimum size working in our test suites (NOTE: resourcesPreset is not meant for production usage; set resources adapted to your use case instead).
  • global.compatibility.openshift.adaptSecurityContext is changed from disabled to auto.

This could potentially break any customization or init scripts used in your deployment. If this is the case, change the default values to the previous ones.
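
For example, a values override along the following lines could restore the previous defaults. This is only a sketch: the exact value paths (such as primary.containerSecurityContext) are assumptions based on the chart's parameter layout and should be checked against the parameters table.

primary:
  resourcesPreset: "none"
  containerSecurityContext:
    runAsGroup: 0                 # previous default
    readOnlyRootFilesystem: false # previous default
global:
  compatibility:
    openshift:
      adaptSecurityContext: disabled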

To 14.0.0

This major version adapts the NetworkPolicy objects to the most recent Bitnami standards. There is now a separate object for the primary and for the readReplicas, located in their corresponding sections, and it is enabled by default in order to comply with the best security standards.

Check the parameter section for the new value structure.
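
If the new default NetworkPolicy blocks traffic your clients rely on, it can be disabled per role with something like the following. The primary.networkPolicy and readReplicas.networkPolicy paths are assumptions that follow the per-section layout described above; verify them against the parameters table before use.

primary:
  networkPolicy:
    enabled: false   # consider restricting with ingress rules instead of disabling entirely
readReplicas:
  networkPolicy:
    enabled: false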

To 13.0.0

This major version changes the default PostgreSQL image from 15.x to 16.x. Follow the official instructions to upgrade to 16.x.

To 12.0.0

This major version changes the default PostgreSQL image from 14.x to 15.x. Follow the official instructions to upgrade to 15.x.

To 11.0.0

In this version the application version was bumped to the 14.x series. This major release also renames several values in this chart and adds missing features, in order to be in line with the rest of the assets in the Bitnami charts repository (an example mapping is shown after the list below).

  • replication.enabled parameter is deprecated in favor of architecture parameter that accepts two values: standalone and replication.
  • replication.singleService and replication.uniqueServices parameters are deprecated. When using replication, each statefulset (primary and read-only) has its own headless service and regular service, allowing you to connect to read-only replicas through the service (round-robin) or individually.
  • postgresqlPostgresPassword, postgresqlUsername, postgresqlPassword, postgresqlDatabase, replication.user, replication.password, and existingSecret parameters have been regrouped under the auth map. The auth map uses a new perspective to configure authentication, so please read carefully each sub-parameter description.
  • extraEnv has been deprecated in favor of primary.extraEnvVars and readReplicas.extraEnvVars.
  • postgresqlConfiguration, pgHbaConfiguration, configurationConfigMap, postgresqlExtendedConf, and extendedConfConfigMap have been deprecated in favor of primary.configuration, primary.pgHbaConfiguration, primary.existingConfigmap, primary.extendedConfiguration, and primary.existingExtendedConfigmap.
  • postgresqlInitdbArgs, postgresqlInitdbWalDir, initdbScripts, initdbScriptsConfigMap, initdbScriptsSecret, initdbUser and initdbPassword have been regrouped under the primary.initdb map.
  • postgresqlMaxConnections, postgresqlPostgresConnectionLimit, postgresqlDbUserConnectionLimit, postgresqlTcpKeepalivesInterval, postgresqlTcpKeepalivesIdle, postgresqlTcpKeepalivesCount, postgresqlStatementTimeout and postgresqlPghbaRemoveFilters parameters are deprecated. Use the corresponding section's extraEnvVars (for example, primary.extraEnvVars) instead.
  • primaryAsStandBy has been deprecated in favor of primary.standby.
  • securityContext and containerSecurityContext have been deprecated in favor of primary.podSecurityContext, primary.containerSecurityContext, readReplicas.podSecurityContext, and readReplicas.containerSecurityContext.
  • livenessProbe and readinessProbe maps have been deprecated in favor of primary.livenessProbe, primary.readinessProbe, readReplicas.livenessProbe and readReplicas.readinessProbe maps.
  • persistence map has been deprecated in favor of primary.persistence and readReplicas.persistence maps.
  • networkPolicy map has been completely refactored.
  • service map has been deprecated in favor of primary.service and readReplicas.service maps.
  • metrics.service.port has been regrouped under the metrics.service.ports map.
  • serviceAccount.enabled and serviceAccount.autoMount have been deprecated in favor of serviceAccount.create and serviceAccount.automountServiceAccountToken.
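
As an illustration of these renames, a 10.x values file and its 11.x equivalent could look like the sketch below. It only covers a few of the renamed values, and the auth sub-keys shown (username, password, database) are assumptions to be verified against the auth section of the parameters table.

# 10.x values (deprecated)
postgresqlUsername: myuser
postgresqlPassword: mypassword
postgresqlDatabase: mydatabase
replication:
  enabled: true

# 11.x equivalent
auth:
  username: myuser
  password: mypassword
  database: mydatabase
architecture: replication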

How to upgrade to version 11.0.0

To upgrade to 11.0.0 from 10.x, reuse the PVC(s) used to hold the PostgreSQL data in your previous release. To do so, follow the instructions below (the following example assumes that the release name is postgresql):

NOTE: Please create a backup of your database before running any of these actions.

  1. Obtain the credentials and the names of the PVCs used to hold the PostgreSQL data on your current release:
export POSTGRESQL_PASSWORD=$(kubectl get secret --namespace default postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
export POSTGRESQL_PVC=$(kubectl get pvc -l app.kubernetes.io/instance=postgresql,role=primary -o jsonpath="{.items[0].metadata.name}")
  2. Delete the PostgreSQL statefulset (notice the option --cascade=false) and secret:
kubectl delete statefulsets.apps postgresql-postgresql --namespace default --cascade=false
kubectl delete secret postgresql --namespace default
  3. Upgrade your release using the same PostgreSQL version:
CURRENT_VERSION=$(kubectl exec postgresql-postgresql-0 -- bash -c 'echo $BITNAMI_IMAGE_VERSION')
helm upgrade postgresql bitnami/postgresql \
  --set auth.postgresPassword=$POSTGRESQL_PASSWORD \
  --set primary.persistence.existingClaim=$POSTGRESQL_PVC \
  --set image.tag=$CURRENT_VERSION
  4. Delete the existing PostgreSQL pod so that the new statefulset creates a new one:
kubectl delete pod postgresql-postgresql-0
  5. Finally, you should see the lines below in the PostgreSQL container logs:
$ kubectl logs $(kubectl get pods -l app.kubernetes.io/instance=postgresql,app.kubernetes.io/name=postgresql,app.kubernetes.io/component=primary -o jsonpath="{.items[0].metadata.name}")
...
postgresql 08:05:12.59 INFO  ==> Deploying PostgreSQL with persisted data...
...

NOTE: the instructions above reuse the same PostgreSQL version you were using in your chart release. Otherwise, you will find an error such as the one below when upgrading, since the new chart major version also bumps the application version. To work around this issue you need to upgrade the database; please refer to the official PostgreSQL documentation for more information.

$ kubectl logs $(kubectl get pods -l app.kubernetes.io/instance=postgresql,app.kubernetes.io/name=postgresql,app.kubernetes.io/component=primary -o jsonpath="{.items[0].metadata.name}")
    ...
postgresql 08:10:14.72 INFO  ==> ** Starting PostgreSQL **
2022-02-01 08:10:14.734 GMT [1] FATAL:  database files are incompatible with server
2022-02-01 08:10:14.734 GMT [1] DETAIL:  The data directory was initialized by PostgreSQL version 11, which is not compatible with this version 14.1.

To 10.0.0

On November 13, 2020, Helm v2 support formally ended. This major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.

  • Previous versions of this Helm Chart use apiVersion: v1 (installable by both Helm 2 and 3), this Helm Chart was updated to apiVersion: v2 (installable by Helm 3 only). Here you can find more information about the apiVersion field.
  • Move dependency information from the requirements.yaml to the Chart.yaml
  • After running helm dependency update, a Chart.lock file is generated containing the same structure used in the previous requirements.lock
  • The different fields present in the Chart.yaml file have been ordered alphabetically in a homogeneous way for all Bitnami Helm Charts.
  • The term master has been replaced with primary and slave with readReplicas throughout the chart. Role names have changed from master and slave to primary and read.

Considerations when upgrading to this version

  • Upgrading to this version using Helm v2 is not supported, as this version no longer supports Helm v2.
  • If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the official Helm documentation about migrating from Helm v2 to v3 (see the sketch below).
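
For reference, migrating an existing Helm v2 release is typically done with the helm-2to3 plugin. The commands below are a sketch only; the plugin is not part of this chart, and the example assumes a release named postgresql.

helm plugin install https://github.com/helm/helm-2to3
helm 2to3 move config          # migrate Helm v2 configuration and repositories
helm 2to3 convert postgresql   # convert the Helm v2 release named "postgresql" to v3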

How to upgrade to version 10.0.0

To upgrade to 10.0.0 from 9.x, reuse the PVC(s) used to hold the PostgreSQL data in your previous release. To do so, follow the instructions below (the following example assumes that the release name is postgresql):

NOTE: Please create a backup of your database before running any of these actions.

  1. Obtain the credentials and the names of the PVCs used to hold the PostgreSQL data on your current release:
export POSTGRESQL_PASSWORD=$(kubectl get secret --namespace default postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
export POSTGRESQL_PVC=$(kubectl get pvc -l app.kubernetes.io/instance=postgresql,role=primary -o jsonpath="{.items[0].metadata.name}")
  2. Delete the PostgreSQL statefulset (notice the option --cascade=false):
kubectl delete statefulsets.apps postgresql-postgresql --namespace default --cascade=false
  3. Upgrade your release using the same PostgreSQL version:
helm upgrade postgresql bitnami/postgresql \
  --set postgresqlPassword=$POSTGRESQL_PASSWORD \
  --set persistence.existingClaim=$POSTGRESQL_PVC
  4. Delete the existing PostgreSQL pod so that the new statefulset creates a new one:
kubectl delete pod postgresql-postgresql-0
  5. Finally, you should see the lines below in the PostgreSQL container logs:
$ kubectl logs $(kubectl get pods -l app.kubernetes.io/instance=postgresql,app.kubernetes.io/name=postgresql,role=primary -o jsonpath="{.items[0].metadata.name}")
...
postgresql 08:05:12.59 INFO  ==> Deploying PostgreSQL with persisted data...
...

To 9.0.0

In this version the chart was adapted to follow the Helm standard labels.

  • Some immutable objects were modified to adopt Helm standard labels, introducing backward incompatibilities.

How to upgrade to version 9.0.0

To upgrade to 9.0.0 from 8.x, reuse the PVC(s) used to hold the PostgreSQL data in your previous release. To do so, follow the instructions below (the following example assumes that the release name is postgresql):

NOTE: Please create a backup of your database before running any of these actions.

  1. Obtain the credentials and the names of the PVCs used to hold the PostgreSQL data on your current release:
export POSTGRESQL_PASSWORD=$(kubectl get secret --namespace default postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
export POSTGRESQL_PVC=$(kubectl get pvc -l app=postgresql,role=master -o jsonpath="{.items[0].metadata.name}")
  2. Delete the PostgreSQL statefulset (notice the option --cascade=false):
kubectl delete statefulsets.apps postgresql-postgresql --namespace default --cascade=false
  3. Upgrade your release using the same PostgreSQL version:
helm upgrade postgresql bitnami/postgresql \
  --set postgresqlPassword=$POSTGRESQL_PASSWORD \
  --set persistence.existingClaim=$POSTGRESQL_PVC
  4. Delete the existing PostgreSQL pod so that the new statefulset creates a new one:
kubectl delete pod postgresql-postgresql-0
  5. Finally, you should see the lines below in the PostgreSQL container logs:
$ kubectl logs $(kubectl get pods -l app.kubernetes.io/instance=postgresql,app.kubernetes.io/name=postgresql,role=master -o jsonpath="{.items[0].metadata.name}")
...
postgresql 08:05:12.59 INFO  ==> Deploying PostgreSQL with persisted data...
...

License

Copyright © 2024 Broadcom. The term "Broadcom" refers to Broadcom Inc. and/or its subsidiaries.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.