
ingress-nginx

Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer

Version: 4.0.17 Type: application AppVersion: 1.1.1

To use it, add the ingressClassName: nginx spec field or the kubernetes.io/ingress.class: nginx annotation to your Ingress resources.

This chart bootstraps an ingress-nginx deployment on a Kubernetes cluster using the Helm package manager.

Prerequisites

  • Chart version 3.x.x: Kubernetes v1.16+
  • Chart version 4.x.x and above: Kubernetes v1.19+

Get Repo Info

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

Install Chart

Important: only Helm 3 is supported

helm install [RELEASE_NAME] ingress-nginx/ingress-nginx

The command deploys ingress-nginx on the Kubernetes cluster in the default configuration.

See configuration below.

See helm install for command documentation.

Uninstall Chart

helm uninstall [RELEASE_NAME]

This removes all the Kubernetes components associated with the chart and deletes the release.

See helm uninstall for command documentation.

Upgrading Chart

helm upgrade [RELEASE_NAME] [CHART] --install

See helm upgrade for command documentation.

Upgrading With Zero Downtime in Production

By default the ingress-nginx controller incurs service interruptions whenever its pods are restarted or redeployed. To avoid that, see the excellent blog post by Lindsay Landry from Codecademy: Kubernetes: Nginx and Zero Downtime in Production.

Migrating from stable/nginx-ingress

There are two main ways to migrate a release from the stable/nginx-ingress chart to the ingress-nginx/ingress-nginx chart:

  1. For Nginx Ingress controllers used for non-critical services, the easiest method is to uninstall the old release and install the new one.
  2. For critical services in production that require zero-downtime, you will want to:
    1. Install a second Ingress controller
    2. Redirect your DNS traffic from the old controller to the new controller
    3. Log traffic from both controllers during this changeover
    4. Uninstall the old controller once traffic has fully drained from it
    5. For details on all of these steps see Upgrading With Zero Downtime in Production
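The second-controller step above can be sketched as a values override; the class name nginx-new and its controllerValue are placeholders, not names from the chart docs:

```yaml
# Values for a second, parallel ingress-nginx release.
# "nginx-new" is a hypothetical class name -- choose your own.
controller:
  ingressClass: nginx-new
  ingressClassResource:
    name: nginx-new
    controllerValue: "k8s.io/ingress-nginx-new"
```

Installing this as a separate release keeps the old controller serving traffic while DNS is switched over.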

Note that there are some changed and upgraded configuration options between the two charts, described by Rimas Mocevicius from JFrog in the "Upgrading to ingress-nginx Helm chart" section of Migrating from Helm chart nginx-ingress to ingress-nginx. As the ingress-nginx/ingress-nginx chart continues to evolve, check the current differences by running the configuration commands on both charts.

Configuration

See Customizing the Chart Before Installing. To see all configurable options with detailed comments, visit the chart's values.yaml, or run these configuration commands:

helm show values ingress-nginx/ingress-nginx

PodDisruptionBudget

Note that the PodDisruptionBudget resource is only created if replicaCount is greater than one; with a single replica, a PDB would make it impossible to drain a node. See gh issue #7127 for more info.
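Accordingly, if you want the chart to render a PodDisruptionBudget, run more than one replica. A minimal values sketch:

```yaml
controller:
  replicaCount: 2   # a PDB is only rendered when replicaCount > 1
  minAvailable: 1   # keep at least one controller pod up during voluntary disruptions
```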

Prometheus Metrics

The Nginx ingress controller can export Prometheus metrics by setting controller.metrics.enabled to true.

You can add Prometheus annotations to the metrics service using controller.metrics.service.annotations. Alternatively, if you use the Prometheus Operator, you can enable ServiceMonitor creation by setting controller.metrics.serviceMonitor.enabled to true and setting controller.metrics.serviceMonitor.additionalLabels.release to "prometheus". The release=prometheus label must match the label selector configured in your Prometheus resource (see kubectl get servicemonitor prometheus-kube-prom-prometheus -oyaml -n prometheus).
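Put together, a values fragment for the Prometheus Operator setup described above might look like this (the release label value is an assumption based on a typical kube-prometheus install; it depends on how your Prometheus is labeled):

```yaml
controller:
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
      additionalLabels:
        # Must match the serviceMonitorSelector labels of your Prometheus
        release: prometheus
```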

ingress-nginx nginx_status page/stats server

Previous versions of this chart had a controller.stats.* configuration block, which is now obsolete due to the following changes in nginx ingress controller:

  • In 0.16.1, the vts (virtual host traffic status) dashboard was removed
  • In 0.23.0, the status page at port 18080 was replaced by a unix socket webserver available only on localhost. You can run curl --unix-socket /tmp/nginx-status-server.sock http://localhost/nginx_status inside the controller container to access it locally, or use the snippet from the nginx-ingress changelog to re-enable the http server

ExternalDNS Service Configuration

Add an ExternalDNS annotation to the LoadBalancer service:

controller:
  service:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: kubernetes-example.com.

AWS L7 ELB with SSL Termination

Annotate the controller as shown in the nginx-ingress l7 patch:

controller:
  service:
    targetPorts:
      http: http
      https: http
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:XX-XXXX-X:XXXXXXXXX:certificate/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'

AWS route53-mapper

To configure the LoadBalancer service with the route53-mapper addon, add the domainName annotation and dns label:

controller:
  service:
    labels:
      dns: "route53"
    annotations:
      domainName: "kubernetes-example.com"

Additional Internal Load Balancer

This setup is useful when you need both external and internal load balancers but don't want to have multiple ingress controllers and multiple ingress objects per application.

By default, the ingress object will point to the external load balancer address, but if correctly configured, you can make use of the internal one if the URL you are looking up resolves to the internal load balancer's URL.

You'll need to set both the following values:

  • controller.service.internal.enabled
  • controller.service.internal.annotations

If either of them is missing, the internal load balancer will not be deployed. For example, if you set controller.service.internal.enabled=true but provide no annotations, no internal load balancer is created.

controller.service.internal.annotations varies with the cloud service you're using.

Example for AWS:

controller:
  service:
    internal:
      enabled: true
      annotations:
        # Create internal ELB
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
        # Any other annotation can be declared here.

Example for GCE:

controller:
  service:
    internal:
      enabled: true
      annotations:
        # Create internal LB. More information: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
        # For GKE versions 1.17 and later
        networking.gke.io/load-balancer-type: "Internal"
        # For earlier versions
        # cloud.google.com/load-balancer-type: "Internal"

        # Any other annotation can be declared here.

Example for Azure:

controller:
  service:
    internal:
      enabled: true
      annotations:
        # Create internal LB
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
        # Any other annotation can be declared here.

Example for Oracle Cloud Infrastructure:

controller:
  service:
    internal:
      enabled: true
      annotations:
        # Create internal LB
        service.beta.kubernetes.io/oci-load-balancer-internal: "true"
        # Any other annotation can be declared here.

A use case for this scenario is a split-view DNS setup where the public zone CNAME records point to the external balancer URL while the private zone CNAME records point to the internal balancer URL. This way, you only need one Kubernetes Ingress object.

Optionally you can set controller.service.loadBalancerIP if you need a static IP for the resulting LoadBalancer.
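For example, to pin the external LoadBalancer to a pre-reserved address (the IP below is a documentation placeholder, not a real allocation):

```yaml
controller:
  service:
    loadBalancerIP: "203.0.113.10"  # placeholder; must be an address reserved with your cloud provider
```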

Ingress Admission Webhooks

With nginx-ingress-controller version 0.25+, the controller pod exposes an endpoint that integrates with the validatingwebhookconfiguration Kubernetes feature to prevent invalid Ingress resources from being added to the cluster. This feature is enabled by default since 0.31.0.

nginx-ingress-controller 0.25.* works only with Kubernetes 1.14+; 0.26 fixed this issue.
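If the webhook cannot run on your cluster version, it can be disabled via values, at the cost of losing up-front validation of Ingress resources:

```yaml
controller:
  admissionWebhooks:
    enabled: false
```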

Helm Error When Upgrading: spec.clusterIP: Invalid value: ""

If you are upgrading this chart from a version between 0.31.0 and 1.2.2 then you may get an error like this:

Error: UPGRADE FAILED: Service "?????-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable

Details of how and why are in this issue, but to resolve this you can set xxxx.service.omitClusterIP to true, where xxxx is the service referenced in the error.

As of version 1.26.0 of this chart, clusterIP: "" is no longer rendered when no clusterIP value is provided, so the spec.clusterIP: Invalid value: "": field is immutable error no longer occurs.

Requirements

Kubernetes: >=1.19.0-0

Values

Key | Type | Default | Description
commonLabels | object | {}
controller.addHeaders | object | {} | Will add custom headers before sending response traffic to the client according to: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#add-headers
controller.admissionWebhooks.annotations | object | {}
controller.admissionWebhooks.certificate | string | "/usr/local/certificates/cert"
controller.admissionWebhooks.createSecretJob.resources | object | {}
controller.admissionWebhooks.enabled | bool | true
controller.admissionWebhooks.existingPsp | string | "" | Use an existing PSP instead of creating one
controller.admissionWebhooks.failurePolicy | string | "Fail"
controller.admissionWebhooks.key | string | "/usr/local/certificates/key"
controller.admissionWebhooks.labels | object | {} | Labels to be added to admission webhooks
controller.admissionWebhooks.namespaceSelector | object | {}
controller.admissionWebhooks.objectSelector | object | {}
controller.admissionWebhooks.patch.enabled | bool | true
controller.admissionWebhooks.patch.image.digest | string | "sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660"
controller.admissionWebhooks.patch.image.image | string | "ingress-nginx/kube-webhook-certgen"
controller.admissionWebhooks.patch.image.pullPolicy | string | "IfNotPresent"
controller.admissionWebhooks.patch.image.registry | string | "k8s.gcr.io"
controller.admissionWebhooks.patch.image.tag | string | "v1.1.1"
controller.admissionWebhooks.patch.labels | object | {} | Labels to be added to patch job resources
controller.admissionWebhooks.patch.nodeSelector."kubernetes.io/os" | string | "linux"
controller.admissionWebhooks.patch.podAnnotations | object | {}
controller.admissionWebhooks.patch.priorityClassName | string | "" | Provide a priority class name to the webhook patching job
controller.admissionWebhooks.patch.runAsUser | int | 2000
controller.admissionWebhooks.patch.tolerations | list | []
controller.admissionWebhooks.patchWebhookJob.resources | object | {}
controller.admissionWebhooks.port | int | 8443
controller.admissionWebhooks.service.annotations | object | {}
controller.admissionWebhooks.service.externalIPs | list | []
controller.admissionWebhooks.service.loadBalancerSourceRanges | list | []
controller.admissionWebhooks.service.servicePort | int | 443
controller.admissionWebhooks.service.type | string | "ClusterIP"
controller.affinity | object | {} | Affinity and anti-affinity rules for server scheduling to nodes
controller.allowSnippetAnnotations | bool | true | This configuration defines if the Ingress Controller should allow users to set their own *-snippet annotations, otherwise this is forbidden / dropped when users add those annotations. Global snippets in ConfigMap are still respected
controller.annotations | object | {} | Annotations to be added to the controller Deployment or DaemonSet
controller.autoscaling.behavior | object | {}
controller.autoscaling.enabled | bool | false
controller.autoscaling.maxReplicas | int | 11
controller.autoscaling.minReplicas | int | 1
controller.autoscaling.targetCPUUtilizationPercentage | int | 50
controller.autoscaling.targetMemoryUtilizationPercentage | int | 50
controller.autoscalingTemplate | list | []
controller.config | object | {} | Will add custom configuration options to Nginx https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
controller.configAnnotations | object | {} | Annotations to be added to the controller config configuration configmap.
controller.configMapNamespace | string | "" | Allows customization of the configmap / nginx-configmap namespace; defaults to $(POD_NAMESPACE)
controller.containerName | string | "controller" | Configures the controller container name
controller.containerPort | object | {"http":80,"https":443} | Configures the ports that the nginx-controller listens on
controller.customTemplate.configMapKey | string | ""
controller.customTemplate.configMapName | string | ""
controller.dnsConfig | object | {} | Optionally customize the pod dnsConfig.
controller.dnsPolicy | string | "ClusterFirst" | Optionally change this to ClusterFirstWithHostNet in case you have 'hostNetwork: true'. By default, while using host network, name resolution uses the host's DNS. If you wish nginx-controller to keep resolving names inside the k8s network, use ClusterFirstWithHostNet.
controller.electionID | string | "ingress-controller-leader" | Election ID to use for status update
controller.enableMimalloc | bool | true | Enable mimalloc as a drop-in replacement for malloc.
controller.existingPsp | string | "" | Use an existing PSP instead of creating one
controller.extraArgs | object | {} | Additional command line arguments to pass to nginx-ingress-controller, e.g. to specify the default SSL certificate
controller.extraContainers | list | [] | Additional containers to be added to the controller pod. See https://github.com/lemonldap-ng-controller/lemonldap-ng-controller as example.
controller.extraEnvs | list | [] | Additional environment variables to set
controller.extraInitContainers | list | [] | Containers, which are run before the app containers are started.
controller.extraModules | list | []
controller.extraVolumeMounts | list | [] | Additional volumeMounts to the controller main container.
controller.extraVolumes | list | [] | Additional volumes to the controller pod.
controller.healthCheckHost | string | "" | Address to bind the health check endpoint. It is better to set this option to the internal node address if the ingress nginx controller is running in the hostNetwork: true mode.
controller.healthCheckPath | string | "/healthz" | Path of the health check endpoint. All requests received on the port defined by the healthz-port parameter are forwarded internally to this path.
controller.hostNetwork | bool | false | Required for use with CNI based kubernetes installations (such as ones set up by kubeadm), since CNI and hostport don't mix yet. Can be deprecated once https://github.com/kubernetes/kubernetes/issues/23920 is merged
controller.hostPort.enabled | bool | false | Enable 'hostPort' or not
controller.hostPort.ports.http | int | 80 | 'hostPort' http port
controller.hostPort.ports.https | int | 443 | 'hostPort' https port
controller.hostname | object | {} | Optionally customize the pod hostname.
controller.image.allowPrivilegeEscalation | bool | true
controller.image.digest | string | "sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de"
controller.image.image | string | "ingress-nginx/controller"
controller.image.pullPolicy | string | "IfNotPresent"
controller.image.registry | string | "k8s.gcr.io"
controller.image.runAsUser | int | 101
controller.image.tag | string | "v1.1.1"
controller.ingressClass | string | "nginx" | For backwards compatibility with the ingress.class annotation, use ingressClass. The algorithm is: first ingressClassName is considered; if not present, the controller looks for the ingress.class annotation
controller.ingressClassByName | bool | false | Process IngressClass per name (additionally as per spec.controller).
controller.ingressClassResource.controllerValue | string | "k8s.io/ingress-nginx" | Controller-value of the controller that is processing this ingressClass
controller.ingressClassResource.default | bool | false | Is this the default ingressClass for the cluster
controller.ingressClassResource.enabled | bool | true | Is this ingressClass enabled or not
controller.ingressClassResource.name | string | "nginx" | Name of the ingressClass
controller.ingressClassResource.parameters | object | {} | Parameters is a link to a custom resource containing additional configuration for the controller. This is optional if the controller does not require extra parameters.
controller.keda.apiVersion | string | "keda.sh/v1alpha1"
controller.keda.behavior | object | {}
controller.keda.cooldownPeriod | int | 300
controller.keda.enabled | bool | false
controller.keda.maxReplicas | int | 11
controller.keda.minReplicas | int | 1
controller.keda.pollingInterval | int | 30
controller.keda.restoreToOriginalReplicaCount | bool | false
controller.keda.scaledObject.annotations | object | {}
controller.keda.triggers | list | []
controller.kind | string | "Deployment" | Use a DaemonSet or Deployment
controller.labels | object | {} | Labels to be added to the controller Deployment or DaemonSet and other resources that do not have an option to specify labels
controller.lifecycle | object | {"preStop":{"exec":{"command":["/wait-shutdown"]}}} | Improve connection draining when the ingress controller pod is deleted using a lifecycle hook: With this new hook, we increased the default terminationGracePeriodSeconds from 30 seconds to 300, allowing the draining of connections for up to five minutes. If the active connections end before that, the pod will terminate gracefully at that time. To effectively take advantage of this feature, the Configmap worker-shutdown-timeout value is now 240s instead of 10s.
controller.livenessProbe.failureThreshold | int | 5
controller.livenessProbe.httpGet.path | string | "/healthz"
controller.livenessProbe.httpGet.port | int | 10254
controller.livenessProbe.httpGet.scheme | string | "HTTP"
controller.livenessProbe.initialDelaySeconds | int | 10
controller.livenessProbe.periodSeconds | int | 10
controller.livenessProbe.successThreshold | int | 1
controller.livenessProbe.timeoutSeconds | int | 1
controller.maxmindLicenseKey | string | "" | Maxmind license key to download GeoLite2 Databases.
controller.metrics.enabled | bool | false
controller.metrics.port | int | 10254
controller.metrics.prometheusRule.additionalLabels | object | {}
controller.metrics.prometheusRule.enabled | bool | false
controller.metrics.prometheusRule.rules | list | []
controller.metrics.service.annotations | object | {}
controller.metrics.service.externalIPs | list | [] | List of IP addresses at which the stats-exporter service is available
controller.metrics.service.loadBalancerSourceRanges | list | []
controller.metrics.service.servicePort | int | 10254
controller.metrics.service.type | string | "ClusterIP"
controller.metrics.serviceMonitor.additionalLabels | object | {}
controller.metrics.serviceMonitor.enabled | bool | false
controller.metrics.serviceMonitor.metricRelabelings | list | []
controller.metrics.serviceMonitor.namespace | string | ""
controller.metrics.serviceMonitor.namespaceSelector | object | {}
controller.metrics.serviceMonitor.relabelings | list | []
controller.metrics.serviceMonitor.scrapeInterval | string | "30s"
controller.metrics.serviceMonitor.targetLabels | list | []
controller.minAvailable | int | 1
controller.minReadySeconds | int | 0 | minReadySeconds to avoid killing pods before we are ready
controller.name | string | "controller"
controller.nodeSelector | object | {"kubernetes.io/os":"linux"} | Node labels for controller pod assignment
controller.podAnnotations | object | {} | Annotations to be added to controller pods
controller.podLabels | object | {} | Labels to add to the pod container metadata
controller.podSecurityContext | object | {} | Security Context policies for controller pods
controller.priorityClassName | string | ""
controller.proxySetHeaders | object | {} | Will add custom headers before sending traffic to backends according to https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/custom-headers
controller.publishService | object | {"enabled":true,"pathOverride":""} | Allows customization of the source of the IP address or FQDN to report in the ingress status field. By default, it reads the information provided by the service. If disabled, the status field reports the IP address of the node or nodes where an ingress controller pod is running.
controller.publishService.enabled | bool | true | Enable 'publishService' or not
controller.publishService.pathOverride | string | "" | Allows overriding of the publish service to bind to. Must be /<service_name>
controller.readinessProbe.failureThreshold | int | 3
controller.readinessProbe.httpGet.path | string | "/healthz"
controller.readinessProbe.httpGet.port | int | 10254
controller.readinessProbe.httpGet.scheme | string | "HTTP"
controller.readinessProbe.initialDelaySeconds | int | 10
controller.readinessProbe.periodSeconds | int | 10
controller.readinessProbe.successThreshold | int | 1
controller.readinessProbe.timeoutSeconds | int | 1
controller.replicaCount | int | 1
controller.reportNodeInternalIp | bool | false | Bare-metal considerations via the host network https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network Ingress status was blank because there is no Service exposing the NGINX Ingress controller in a configuration using the host network; the default --publish-service flag used in standard cloud setups does not apply
controller.resources.requests.cpu | string | "100m"
controller.resources.requests.memory | string | "90Mi"
controller.scope.enabled | bool | false | Enable 'scope' or not
controller.scope.namespace | string | "" | Namespace to limit the controller to; defaults to $(POD_NAMESPACE)
controller.scope.namespaceSelector | string | "" | When scope.enabled == false, instead of watching all namespaces, we watch only namespaces whose labels match the namespaceSelector. Format like foo=bar. Defaults to empty, meaning all namespaces are watched.
controller.service.annotations | object | {}
controller.service.appProtocol | bool | true | If enabled, adds an appProtocol option to the Kubernetes service. The appProtocol field replaces annotations that were used for setting a backend protocol, e.g. for AWS: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http. It allows choosing the protocol for each backend specified in the Kubernetes service. See the following GitHub issue for more details about the purpose: https://github.com/kubernetes/kubernetes/issues/40244. Will be ignored for Kubernetes versions older than 1.20
controller.service.enableHttp | bool | true
controller.service.enableHttps | bool | true
controller.service.enabled | bool | true
controller.service.external.enabled | bool | true
controller.service.externalIPs | list | [] | List of IP addresses at which the controller services are available
controller.service.internal.annotations | object | {} | Annotations are mandatory for the load balancer to come up. Varies with the cloud service.
controller.service.internal.enabled | bool | false | Enables an additional internal load balancer (besides the external one).
controller.service.internal.loadBalancerSourceRanges | list | [] | Restrict access for the LoadBalancer service. Defaults to 0.0.0.0/0.
controller.service.ipFamilies | list | ["IPv4"] | List of IP families (e.g. IPv4, IPv6) assigned to the service. This field is usually assigned automatically based on cluster configuration and the ipFamilyPolicy field.
controller.service.ipFamilyPolicy | string | "SingleStack" | Represents the dual-stack-ness requested or required by this Service. Possible values are SingleStack, PreferDualStack or RequireDualStack. The ipFamilies and clusterIPs fields depend on the value of this field.
controller.service.labels | object | {}
controller.service.loadBalancerSourceRanges | list | []
controller.service.nodePorts.http | string | ""
controller.service.nodePorts.https | string | ""
controller.service.nodePorts.tcp | object | {}
controller.service.nodePorts.udp | object | {}
controller.service.ports.http | int | 80
controller.service.ports.https | int | 443
controller.service.targetPorts.http | string | "http"
controller.service.targetPorts.https | string | "https"
controller.service.type | string | "LoadBalancer"
controller.sysctls | object | {} | See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for notes on enabling and using sysctls
controller.tcp.annotations | object | {} | Annotations to be added to the tcp config configmap
controller.tcp.configMapNamespace | string | "" | Allows customization of the tcp-services-configmap; defaults to $(POD_NAMESPACE)
controller.terminationGracePeriodSeconds | int | 300 | terminationGracePeriodSeconds to avoid killing pods before we are ready
controller.tolerations | list | [] | Node tolerations for server scheduling to nodes with taints
controller.topologySpreadConstraints | list | [] | Topology spread constraints rely on node labels to identify the topology domain(s) that each Node is in.
controller.udp.annotations | object | {} | Annotations to be added to the udp config configmap
controller.udp.configMapNamespace | string | "" | Allows customization of the udp-services-configmap; defaults to $(POD_NAMESPACE)
controller.updateStrategy | object | {} | The update strategy to apply to the Deployment or DaemonSet
controller.watchIngressWithoutClass | bool | false | Process Ingress objects without an ingressClass annotation/ingressClassName field. Overrides the value of the controller binary's --watch-ingress-without-class flag. Defaults to false
defaultBackend.affinity | object | {}
defaultBackend.autoscaling.annotations | object | {}
defaultBackend.autoscaling.enabled | bool | false
defaultBackend.autoscaling.maxReplicas | int | 2
defaultBackend.autoscaling.minReplicas | int | 1
defaultBackend.autoscaling.targetCPUUtilizationPercentage | int | 50
defaultBackend.autoscaling.targetMemoryUtilizationPercentage | int | 50
defaultBackend.containerSecurityContext | object | {} | Security Context policies for the controller main container. See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for notes on enabling and using sysctls
defaultBackend.enabled | bool | false
defaultBackend.existingPsp | string | "" | Use an existing PSP instead of creating one
defaultBackend.extraArgs | object | {}
defaultBackend.extraEnvs | list | [] | Additional environment variables to set for defaultBackend pods
defaultBackend.extraVolumeMounts | list | []
defaultBackend.extraVolumes | list | []
defaultBackend.image.allowPrivilegeEscalation | bool | false
defaultBackend.image.image | string | "defaultbackend-amd64"
defaultBackend.image.pullPolicy | string | "IfNotPresent"
defaultBackend.image.readOnlyRootFilesystem | bool | true
defaultBackend.image.registry | string | "k8s.gcr.io"
defaultBackend.image.runAsNonRoot | bool | true
defaultBackend.image.runAsUser | int | 65534
defaultBackend.image.tag | string | "1.5"
defaultBackend.labels | object | {} | Labels to be added to the default backend resources
defaultBackend.livenessProbe.failureThreshold | int | 3
defaultBackend.livenessProbe.initialDelaySeconds | int | 30
defaultBackend.livenessProbe.periodSeconds | int | 10
defaultBackend.livenessProbe.successThreshold | int | 1
defaultBackend.livenessProbe.timeoutSeconds | int | 5
defaultBackend.minAvailable | int | 1
defaultBackend.name | string | "defaultbackend"
defaultBackend.nodeSelector | object | {"kubernetes.io/os":"linux"} | Node labels for default backend pod assignment
defaultBackend.podAnnotations | object | {} | Annotations to be added to default backend pods
defaultBackend.podLabels | object | {} | Labels to add to the pod container metadata
defaultBackend.podSecurityContext | object | {} | Security Context policies for controller pods. See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for notes on enabling and using sysctls
defaultBackend.port | int | 8080
defaultBackend.priorityClassName | string | ""
defaultBackend.readinessProbe.failureThreshold | int | 6
defaultBackend.readinessProbe.initialDelaySeconds | int | 0
defaultBackend.readinessProbe.periodSeconds | int | 5
defaultBackend.readinessProbe.successThreshold | int | 1
defaultBackend.readinessProbe.timeoutSeconds | int | 5
defaultBackend.replicaCount | int | 1
defaultBackend.resources | object | {}
defaultBackend.service.annotations | object | {}
defaultBackend.service.externalIPs | list | [] | List of IP addresses at which the default backend service is available
defaultBackend.service.loadBalancerSourceRanges | list | []
defaultBackend.service.servicePort | int | 80
defaultBackend.service.type | string | "ClusterIP"
defaultBackend.serviceAccount.automountServiceAccountToken | bool | true
defaultBackend.serviceAccount.create | bool | true
defaultBackend.serviceAccount.name | string | ""
defaultBackend.tolerations | list | [] | Node tolerations for server scheduling to nodes with taints
dhParam | string | nil | A base64-encoded Diffie-Hellman parameter. This can be generated with: `openssl dhparam 4096 2> /dev/null`
imagePullSecrets | list | [] | Optional array of imagePullSecrets containing private registry credentials
podSecurityPolicy.enabled | bool | false
rbac.create | bool | true
rbac.scope | bool | false
revisionHistoryLimit | int | 10 | Rollback limit
serviceAccount.annotations | object | {} | Annotations for the controller service account
serviceAccount.automountServiceAccountToken | bool | true
serviceAccount.create | bool | true
serviceAccount.name | string | ""
tcp | object | {} | TCP service key:value pairs
udp | object | {} | UDP service key:value pairs