
CoreDNS

CoreDNS is a DNS server that chains plugins to provide DNS services.

TL;DR;

$ helm repo add coredns https://coredns.github.io/helm
$ helm --namespace=kube-system install coredns coredns/coredns

Introduction

This chart bootstraps a CoreDNS deployment on a Kubernetes cluster using the Helm package manager. The chart provides DNS services and can be deployed in multiple configurations to support the scenarios listed below:

  • CoreDNS as a cluster DNS service and a drop-in replacement for Kube/SkyDNS. This is the default mode: CoreDNS is deployed as a cluster service in the kube-system namespace. This mode is chosen by setting isClusterService to true.
  • CoreDNS as an external DNS service. In this mode CoreDNS is deployed like any other Kubernetes app, in a user-specified namespace. The CoreDNS service can be exposed outside the cluster by using either the NodePort or LoadBalancer service type (see the values sketch after this list). This mode is chosen by setting isClusterService to false.
  • CoreDNS as an external DNS provider for Kubernetes federation. This is a sub-case of the 'external dns service' mode that uses the etcd plugin as the CoreDNS backend. This deployment mode has a dependency on the etcd-operator chart, which needs to be pre-installed.
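
For example, a minimal override for the external DNS service mode might look like the following sketch. The isClusterService and serviceType keys come from the configuration table below; the file name and namespace are illustrative only.

# values-external.yaml -- hypothetical override for the external DNS service mode
isClusterService: false    # run as a regular Kubernetes app, not the cluster DNS
serviceType: LoadBalancer  # or NodePort, to expose the service outside the cluster

$ helm --namespace=dns install coredns coredns/coredns -f values-external.yaml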

Prerequisites

  • Kubernetes 1.10 or later

Installing the Chart

The chart can be installed as follows:

$ helm repo add coredns https://coredns.github.io/helm
$ helm --namespace=kube-system install coredns coredns/coredns

The command deploys CoreDNS on the Kubernetes cluster in the default configuration. The Configuration section lists various ways to override the default configuration during deployment.

Tip: List all releases using helm list --all-namespaces

Uninstalling the Chart

To uninstall/delete the coredns release:

$ helm uninstall coredns

The command removes all the Kubernetes components associated with the chart and deletes the release.

Configuration

| Parameter | Description | Default |
|-----------|-------------|---------|
| image.repository | The image repository to pull from | coredns/coredns |
| image.tag | The image tag to pull from | 1.9.3 |
| image.pullPolicy | Image pull policy | IfNotPresent |
| image.pullSecrets | Specify container image pull secrets | [] |
| replicaCount | Number of replicas | 1 |
| resources.limits.cpu | Container maximum CPU | 100m |
| resources.limits.memory | Container maximum memory | 128Mi |
| resources.requests.cpu | Container requested CPU | 100m |
| resources.requests.memory | Container requested memory | 128Mi |
| serviceType | Kubernetes Service type | ClusterIP |
| prometheus.service.enabled | Set this to true to create a Service for Prometheus metrics | false |
| prometheus.service.annotations | Annotations to add to the metrics Service | {prometheus.io/scrape: "true", prometheus.io/port: "9153"} |
| prometheus.monitor.enabled | Set this to true to create a ServiceMonitor for the Prometheus operator | false |
| prometheus.monitor.additionalLabels | Additional labels so the ServiceMonitor will be discovered by Prometheus | {} |
| prometheus.monitor.namespace | Selector for the namespaces the Endpoints objects are discovered from | "" |
| prometheus.monitor.interval | Scrape interval for polling the metrics endpoint (e.g. "30s") | "" |
| service.clusterIP | IP address to assign to the service | "" |
| service.loadBalancerIP | IP address to assign to the load balancer (if supported) | "" |
| service.externalIPs | External IP addresses | [] |
| service.externalTrafficPolicy | Enable client source IP preservation | [] |
| service.annotations | Annotations to add to the service | {} |
| serviceAccount.create | If true, create & use a ServiceAccount | false |
| serviceAccount.name | If not set & create is true, use the template fullname | |
| rbac.create | If true, create & use RBAC resources | true |
| rbac.pspEnable | Specifies whether a PodSecurityPolicy should be created | false |
| isClusterService | Specifies whether the chart should be deployed as a cluster service or a normal k8s app | true |
| priorityClassName | Name of the Priority Class to assign to pods | "" |
| servers | Configuration for CoreDNS and its plugins | See values.yaml |
| livenessProbe.enabled | Enable/disable the liveness probe | true |
| livenessProbe.initialDelaySeconds | Delay before the liveness probe is initiated | 60 |
| livenessProbe.periodSeconds | How often to perform the probe | 10 |
| livenessProbe.timeoutSeconds | When the probe times out | 5 |
| livenessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 5 |
| livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| readinessProbe.enabled | Enable/disable the readiness probe | true |
| readinessProbe.initialDelaySeconds | Delay before the readiness probe is initiated | 30 |
| readinessProbe.periodSeconds | How often to perform the probe | 10 |
| readinessProbe.timeoutSeconds | When the probe times out | 5 |
| readinessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 5 |
| readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| affinity | Affinity settings for pod assignment | {} |
| nodeSelector | Node labels for pod assignment | {} |
| tolerations | Tolerations for pod assignment | [] |
| zoneFiles | Configure custom zone files | [] |
| extraVolumes | Optional array of volumes to create | [] |
| extraVolumeMounts | Optional array of volumes to mount inside the CoreDNS container | [] |
| extraSecrets | Optional array of secrets to mount inside the CoreDNS container | [] |
| customLabels | Optional labels for Deployment(s), Pod, Service, ServiceMonitor objects | {} |
| customAnnotations | Optional annotations for Deployment(s), Pod, Service, ServiceMonitor objects | {} |
| rollingUpdate.maxUnavailable | Maximum number of unavailable replicas during a rolling update | 1 |
| rollingUpdate.maxSurge | Maximum number of pods created above the desired number of pods | 25% |
| podDisruptionBudget | Optional PodDisruptionBudget | {} |
| podAnnotations | Optional Pod-only annotations | {} |
| terminationGracePeriodSeconds | Optional duration in seconds the pod needs to terminate gracefully | 30 |
| preStopSleep | Definition of the Kubernetes preStop hook executed before Pod termination | {} |
| hpa.enabled | Enable the HPA autoscaler instead of the proportional one | false |
| hpa.minReplicas | HPA minimum number of CoreDNS replicas | 1 |
| hpa.maxReplicas | HPA maximum number of CoreDNS replicas | 2 |
| hpa.metrics | Metric definitions used by the HPA to scale up and down | {} |
| autoscaler.enabled | Optionally enable a cluster-proportional-autoscaler for CoreDNS | false |
| autoscaler.coresPerReplica | Number of cores in the cluster per CoreDNS replica | 256 |
| autoscaler.nodesPerReplica | Number of nodes in the cluster per CoreDNS replica | 16 |
| autoscaler.min | Min size of replicaCount | 0 |
| autoscaler.max | Max size of replicaCount | 0 (aka no max) |
| autoscaler.includeUnschedulableNodes | Whether replicas should scale based on the total number of nodes or only schedulable ones | false |
| autoscaler.preventSinglePointFailure | If true, does not allow single points of failure to form | true |
| autoscaler.customFlags | A list of custom flags to pass to the cluster-proportional-autoscaler | (no args) |
| autoscaler.image.repository | The image repository to pull the autoscaler from | k8s.gcr.io/cpa/cluster-proportional-autoscaler |
| autoscaler.image.tag | The image tag to pull the autoscaler from | 1.8.5 |
| autoscaler.image.pullPolicy | Image pull policy for the autoscaler | IfNotPresent |
| autoscaler.image.pullSecrets | Specify container image pull secrets | [] |
| autoscaler.priorityClassName | Optional priority class for the autoscaler pod; priorityClassName is used if not set | "" |
| autoscaler.affinity | Affinity settings for autoscaler pod assignment | {} |
| autoscaler.nodeSelector | Node labels for autoscaler pod assignment | {} |
| autoscaler.tolerations | Tolerations for autoscaler pod assignment | [] |
| autoscaler.resources.limits.cpu | Container maximum CPU for the cluster-proportional-autoscaler | 20m |
| autoscaler.resources.limits.memory | Container maximum memory for the cluster-proportional-autoscaler | 10Mi |
| autoscaler.resources.requests.cpu | Container requested CPU for the cluster-proportional-autoscaler | 20m |
| autoscaler.resources.requests.memory | Container requested memory for the cluster-proportional-autoscaler | 10Mi |
| autoscaler.configmap.annotations | Annotations to add to the autoscaler ConfigMap, e.g. to stop CI renaming them | {} |
| autoscaler.livenessProbe.enabled | Enable/disable the liveness probe | true |
| autoscaler.livenessProbe.initialDelaySeconds | Delay before the liveness probe is initiated | 10 |
| autoscaler.livenessProbe.periodSeconds | How often to perform the probe | 5 |
| autoscaler.livenessProbe.timeoutSeconds | When the probe times out | 5 |
| autoscaler.livenessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 3 |
| autoscaler.livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| deployment.enabled | Optionally disable the main deployment and its respective resources | true |
| deployment.name | Name of the deployment if deployment.enabled is true; otherwise the name of an existing deployment for the autoscaler or HPA to target | "" |
| deployment.annotations | Annotations to add to the main deployment | {} |
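
The servers parameter drives the generated Corefile, defining zones, the listening port, and the plugin chain. As a rough sketch of its shape, mirroring the layout of the chart's default values.yaml (consult the bundled file for the authoritative defaults):

servers:
- zones:
  - zone: .
  port: 53
  plugins:
  - name: errors
  - name: health
  - name: kubernetes
    parameters: cluster.local in-addr.arpa ip6.arpa
    configBlock: |-
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
  - name: prometheus
    parameters: 0.0.0.0:9153
  - name: forward
    parameters: . /etc/resolv.conf
  - name: cache
    parameters: 30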

See values.yaml for configuration notes. Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

$ helm install coredns \
  coredns/coredns \
  --set rbac.create=false

The above command disables automatic creation of RBAC rules.

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,

$ helm install coredns coredns/coredns -f values.yaml

Tip: You can use the default values.yaml
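
As a concrete sketch, a small override file (the name my-values.yaml is illustrative; both keys appear in the table above) could raise the replica count and adjust resources:

# my-values.yaml -- illustrative override file
replicaCount: 3
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 200m
    memory: 192Mi

$ helm install coredns coredns/coredns -f my-values.yaml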

Caveats

The chart automatically determines which protocols to listen on based on the protocols you define in your zones, which means a single port could end up serving both TCP and UDP. Some cloud environments, such as GCE or Azure Container Service, cannot create external load balancers that mix TCP and UDP protocols. When deploying CoreDNS with serviceType="LoadBalancer" on such cloud environments, make sure you do not attempt to use both protocols at the same time.
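
One way to keep the Service single-protocol is through the per-zone settings under servers. The sketch below assumes the chart's zone schema, in which a zone only gets an additional TCP listener when use_tcp is set; verify against the bundled values.yaml before relying on it.

servers:
- zones:
  - zone: .
    use_tcp: false   # assumption: keep this zone UDP-only on the Service
  port: 53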

Autoscaling

By setting autoscaler.enabled = true, a cluster-proportional-autoscaler will be deployed. This defaults to one CoreDNS replica for every 256 cores, or every 16 nodes, in the cluster. These can be changed with autoscaler.coresPerReplica and autoscaler.nodesPerReplica. When the cluster uses large nodes (with more cores), coresPerReplica should dominate; with small nodes, nodesPerReplica should dominate.

This also creates a ServiceAccount, ClusterRole, and ClusterRoleBinding for the autoscaler deployment.

replicaCount is ignored if this is enabled.
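
For reference, the proportional autoscaler's default linear mode computes, clamped between autoscaler.min and autoscaler.max:

replicas = max( ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica) )

So an illustrative 40-node cluster with 8 cores per node (320 cores total) and the default settings yields max(ceil(320/256), ceil(40/16)) = max(2, 3) = 3 replicas.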

By setting hpa.enabled = true, a Horizontal Pod Autoscaler is enabled for the CoreDNS deployment. It can scale the number of replicas based on metrics such as CPU utilization, memory utilization, or custom metrics.
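
A minimal sketch, assuming hpa.metrics is passed through as a standard autoscaling/v2 metric spec (the 70% target is illustrative):

hpa:
  enabled: true        # mutually exclusive with autoscaler.enabled
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70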