Configuration Guide
Overview
When deploying the acd-manager helm chart, a configuration file containing the chart values must
be supplied to Helm. The default values.yaml file can be found on the ISO in the chart’s directory.
Helm does not require that the complete file be supplied at install time, as any files supplied via the
--values flag will be merged with the defaults from the chart. This allows the operator to maintain
a much simpler configuration file containing only the modified values. Additionally, values may be
individually overridden by passing --set key=value to the Helm command. However, this is discouraged for
all but temporary cases, as the same arguments must be specified any time the chart is updated.
The default values.yaml file is located on the ISO under the subpath /helm/charts/acd-manager/values.yaml.
Since the ISO is mounted read-only, you must copy this file to a writable location to make changes. Helm
supports multiple --values arguments, in which case all files are merged left-to-right before being merged
with the chart defaults.
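For example, a minimal override file, here called my-values.yaml (the name is arbitrary), might contain only the values that differ from the chart defaults; the keys shown are documented in the configuration reference below:

  # my-values.yaml -- contains only the values that differ from the chart defaults
  manager:
    logLevel: debug
  gateway:
    service:
      type: NodePort

When this file is passed via --values, its keys are merged on top of the chart defaults and all other values remain unchanged.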
Applying the Configuration
After updating the configuration file, you must perform a helm upgrade for the changes to be propagated
to the cluster. Helm tracks the changes in each revision, and supports rolling back to previous configurations.
During the initial chart installation, the configuration values are supplied to Helm through the helm install
command, but to update an existing installation, the following command should be used instead.
helm upgrade acd-manager /mnt/esb3027/helm/charts/acd-manager --values /path/to/values.yaml
Note: Both the helm install and helm upgrade commands accept many of the same arguments, and a shortcut
exists, helm upgrade --install, which can be used in place of either to update an existing installation or to
deploy a new installation if one did not previously exist.
If the configuration update was unsuccessful, you can roll back to a previous revision using the following
command. Keep in mind that this will not change the values.yaml file on disk, so you must revert the changes
to that file manually or restore the file from a backup.
helm rollback acd-manager <revision_number>
You can view the current revision number of all installed charts with helm list --all
If you wish to temporarily change one or more values, for instance to increase the manager log level from “info”
to “debug”, you can do so with the --set flag.
helm upgrade acd-manager /mnt/esb3027/helm/charts/acd-manager --values /path/to/values.yaml --set manager.logLevel=debug
It is also possible to split values.yaml into multiple individual files, for instance to separate manager
and metrics values into two files using the following command. All files will be merged left to right by Helm.
Note, however, that doing this requires all values files to be supplied, in the same order, any time
a helm upgrade is performed in the future.
helm upgrade acd-manager /mnt/esb3027/helm/charts/acd-manager --values /path/to/values1.yaml --values /path/to/values2.yaml
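For instance, the two files might be split as in the following sketch; the keys shown are documented in the configuration reference below and the values are illustrative only:

  # values1.yaml -- manager-related overrides
  manager:
    logLevel: info
    replicaCount: 2

  # values2.yaml -- metrics-related overrides
  acd-metrics:
    enabled: true
    grafana:
      enabled: true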
Before applying a new configuration, it is recommended to perform a dry run to ensure that the templates
can be rendered properly. This does not guarantee that the templates will be accepted by Kubernetes, only
that the templates can be rendered using the supplied values. The rendered templates will be output
to the console.
helm upgrade ... --dry-run
In the event that the helm upgrade fails to produce the desired results, e.g. if the correct configuration
did not propagate to all required pods, performing a helm uninstall acd-manager followed by the original
helm install command will force all pods to be redeployed. This is service-affecting, however, and should only
be performed as a last resort, as all pods will be destroyed and recreated.
Configuration Reference
In this section, we break down the configuration file and look more in-depth into the options available.
Globals
The global section is a special-case section in Helm, intended for sharing global values between charts.
Most of the configuration properties here can be ignored, as they are intended as a means of globally
providing defaults that affect nested subcharts. The only necessary field here is the hosts configuration.
global:
  hosts:
    manager:
      - host: manager.local
    routers:
      - name: default
        address: 127.0.0.1
    edns_proxy: []
    geoip: []
| Key | Type | Description |
|---|---|---|
| global.hosts.manager | Array | List of external IP addresses or DNS hostnames for all nodes in the Manager cluster |
| global.hosts.routers | Array | List of ESB3024 AgileTV CDN Director instances |
| global.hosts.edns_proxy | Array | List of EDNS Proxy addresses |
| global.hosts.geoip | Array | List of GeoIP Proxy addresses |
The global.hosts.manager record contains a list of objects, each containing a single host field. The first
entry is used by several internal services to contact Zitadel for user authentication and authorization.
Since Zitadel, which provides these services, enforces CORS protections, this must exactly match the Origin
used to access Zitadel.
The global.hosts.routers record contains a list of objects, each with a name and an address field. The
name field is a unique identifier used in URLs to refer to the Director instance, and the address field
is the IP address or DNS name used to communicate with the Director node. Only Director instances running
outside of this cluster need to be specified here, as instances running in Kubernetes can use the cluster’s
auto-discovery system.
The global.hosts.edns_proxy record contains a list of objects each with an address and port field. This
list is currently unused.
The global.hosts.geoip record contains a list of objects each with an address and port field. This list
should refer to the GeoIP Proxies used by the Frontend GUI. Currently only one GeoIP proxy is supported.
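As a sketch, a fully populated global.hosts section might look like the following; every hostname, address, and port shown here is a placeholder rather than a default:

  global:
    hosts:
      manager:
        - host: manager.example.com   # must match the Origin used to access Zitadel
      routers:
        - name: router-1              # unique identifier used in URLs
          address: 192.0.2.10         # external Director instance
      edns_proxy: []                  # currently unused
      geoip:
        - address: 192.0.2.30         # GeoIP proxy used by the Frontend GUI
          port: 8080                  # placeholder port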
Common Parameters
This section contains common parameters that are namespaced to the acd-manager chart. These should be left at their default values under most circumstances.
| Key | Type | Description |
|---|---|---|
| kubeVersion | String | Override the Kubernetes version reported by .Capabilities |
| apiVersion | String | Override the Kubernetes API version reported by .Capabilities |
| nameOverride | String | Partially override common.names.name |
| fullnameOverride | String | Fully override common.names.name |
| namespaceOverride | String | Fully override common.names.namespace |
| commonLabels | Object | Labels to add to all deployed objects |
| commonAnnotations | Object | Annotations to add to all deployed objects |
| clusterDomain | String | Kubernetes cluster domain name |
| extraDeploy | Array | List of extra Kubernetes objects to deploy with the release |
| diagnosticMode.enabled | Boolean | Enable Diagnostic mode (All probes will be disabled and the command will be overridden) |
| diagnosticMode.command | Array | Override the command when diagnostic mode is enabled |
| diagnosticMode.args | Array | Override the command line arguments when diagnostic mode is enabled |
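For example, diagnostic mode could be enabled with a sketch like the one below; all probes are then disabled and the container command is overridden so the pod stays idle for inspection. The sleep/infinity command shown is illustrative, not necessarily the chart default:

  diagnosticMode:
    enabled: true
    command:
      - sleep        # keep the container running without starting the service
    args:
      - infinity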
Manager
This section represents the configuration options for the ACD Manager’s API server.
| Key | Type | Description |
|---|---|---|
| manager.image.registry | String | The docker registry |
| manager.image.repository | String | The docker repository |
| manager.image.tag | String | Override the image tag |
| manager.image.digest | String | Override a specific image digest |
| manager.image.pullPolicy | String | The image pull policy |
| manager.image.pullSecrets | Array | A list of secret names containing credentials for the configured image registry |
| manager.image.debug | Boolean | Enable debug mode for the containers |
| manager.logLevel | String | Set the log level used in the containers |
| manager.replicaCount | Number | Number of manager replicas to deploy. This value is ignored if the Horizontal Pod Autoscaler is enabled |
| manager.containerPorts.http | Number | Port number exposed by the container for HTTP traffic |
| manager.extraContainerPorts | Array | List of additional container ports to expose |
| manager.livenessProbe | Object | Configuration for the liveness probe on the manager container |
| manager.readinessProbe | Object | Configuration for the readiness probe on the manager container |
| manager.startupProbe | Object | Configuration for the startup probe on the manager container |
| manager.customLivenessProbe | Object | Override the default liveness probe |
| manager.customReadinessProbe | Object | Override the default readiness probe |
| manager.customStartupProbe | Object | Override the default startup probe |
| manager.resourcePreset | String | Set the manager resources according to one common preset |
| manager.resources | Object | Set request and limits for different resources like CPU or memory |
| manager.podSecurityContext | Object | Set the security context for the manager pods |
| manager.containerSecurityContext | Object | Set the security context for all containers inside the manager pods |
| manager.maxmindDbVolume | String | Name of a Kubernetes volume containing Maxmind GeoIP, ASN, and Anonymous IP databases |
| manager.existingConfigmap | String | Reserved for future use |
| manager.command | Array | Command executed inside the manager container |
| manager.args | Array | Arguments passed to the command |
| manager.automountServiceAccountToken | Boolean | Mount Service Account token in manager pods |
| manager.hostAliases | Array | Add additional entries to /etc/hosts in the pod |
| manager.deploymentAnnotations | Object | Annotations for the manager deployment |
| manager.podLabels | Object | Extra labels for manager pods |
| manager.podAnnotations | Object | Extra annotations for the manager pods |
| manager.podAffinityPreset | String | Allowed values soft or hard |
| manager.podAntiAffinityPreset | String | Allowed values soft or hard |
| manager.nodeAffinityPreset.type | String | Allowed values soft or hard |
| manager.nodeAffinityPreset.key | String | Node label key to match |
| manager.nodeAffinityPreset.values | Array | List of node labels to match |
| manager.affinity | Object | Override the affinity for pod assignments |
| manager.nodeSelector | Object | Node labels for manager pod assignments |
| manager.tolerations | Array | Tolerations for manager pod assignment |
| manager.updateStrategy.type | String | Can be set to RollingUpdate or Recreate |
| manager.priorityClassName | String | Manager pods’ priorityClassName |
| manager.topologySpreadConstraints | Array | Topology Spread Constraints for manager pod assignment spread across the cluster among failure-domains |
| manager.schedulerName | String | Name of the Kubernetes scheduler for manager pods |
| manager.terminationGracePeriodSeconds | Number | Seconds manager pods need to terminate gracefully |
| manager.lifecycleHooks | Object | Lifecycle Hooks for manager containers to automate configuration before or after startup |
| manager.extraEnvVars | Array | List of extra environment variables to add to the manager containers |
| manager.extraEnvVarsCM | Array | List of Config Maps containing extra environment variables to pass to the Manager pods |
| manager.extraEnvVarsSecret | Array | List of Secrets containing extra environment variables to pass to the Manager pods |
| manager.extraVolumes | Array | Optionally specify extra list of additional volumes for the manager pods |
| manager.extraVolumeMounts | Array | Optionally specify extra list of additional volume mounts for the manager pods |
| manager.sidecars | Array | Add additional sidecar containers to the manager pods |
| manager.initContainers | Array | Add additional init containers to the manager pods |
| manager.pdb.create | Boolean | Enable / disable a Pod Disruption Budget creation |
| manager.pdb.minAvailable | Number | Minimum number/percentage of pods that should remain scheduled |
| manager.pdb.maxUnavailable | Number | Maximum number/percentage of pods that may be made unavailable |
| manager.autoscaling.vpa | Object | Vertical Pod Autoscaler Configuration. Not used for self-hosted clusters |
| manager.autoscaling.hpa | Object | Horizontal Pod Autoscaler. Automatically scale the number of replicas based on resource utilization |
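As an illustration, a values override for the manager might combine several of the keys above; all values shown are examples rather than recommendations, and EXAMPLE_FLAG is a hypothetical environment variable:

  manager:
    replicaCount: 3          # ignored if the Horizontal Pod Autoscaler is enabled
    logLevel: info
    resourcePreset: medium   # see the Resource Configuration section below
    extraEnvVars:
      - name: EXAMPLE_FLAG   # hypothetical variable, for illustration only
        value: "true"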
Gateway
The parameters under the gateway namespace are mostly identical to those of the manager section above, but
apply to the NGINX Proxy Gateway service. The additional properties here are described in the following
table.
| Key | Type | Description |
|---|---|---|
| gateway.service.type | String | Service Type |
| gateway.service.ports.http | Number | The service port |
| gateway.service.nodePorts | Object | Allows configuring the exposed node port if the service.type is “NodePort” |
| gateway.service.clusterIP | String | Override the ClusterIP address if the service.type is “ClusterIP” |
| gateway.service.loadBalancerIP | String | Override the LoadBalancer IP address if the service.type is “LoadBalancer” |
| gateway.service.loadBalancerSourceRanges | Array | Source CIDRs for the LoadBalancer |
| gateway.service.externalTrafficPolicy | String | External Traffic Policy for the service |
| gateway.service.annotations | Object | Additional custom annotations for the gateway service |
| gateway.service.extraPorts | Array | Extra ports to expose in the gateway service. (Normally used with the sidecar value) |
| gateway.service.sessionAffinity | String | Control where client requests go, to the same pod or round-robin |
| gateway.service.sessionAffinityConfig | Object | Additional settings for the sessionAffinity |
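For example, exposing the gateway on a fixed node port might look like the following sketch; the port number is illustrative, and the http sub-key under nodePorts is assumed here to mirror service.ports.http:

  gateway:
    service:
      type: NodePort
      nodePorts:
        http: 30080   # placeholder port in the default NodePort range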
Selection Input
The parameters under the selectionInput namespace are mostly identical to those of the manager section above,
but apply to the selection input consumer service. The additional properties here are described in the
following table.
| Key | Type | Description |
|---|---|---|
| selectionInput.kafkaTopic | String | Name of the selection input Kafka topic |
Metrics Aggregator
The parameters under the metricsAggregator namespace are mostly identical to those of the manager section above,
but apply to the metrics aggregator service.
Traffic Exposure
These parameters determine how the various services are exposed over the network.
| Key | Type | Description |
|---|---|---|
| service.type | String | Service Type |
| service.ports.http | Number | The service port |
| service.nodePorts | Object | Allows configuring the exposed node port if the service.type is “NodePort” |
| service.clusterIP | String | Override the ClusterIP address if the service.type is “ClusterIP” |
| service.loadBalancerIP | String | Override the LoadBalancer IP address if the service.type is “LoadBalancer” |
| service.loadBalancerSourceRanges | Array | Source CIDRs for the LoadBalancer |
| service.externalTrafficPolicy | String | External Traffic Policy for the service |
| service.annotations | Object | Additional custom annotations for the manager service |
| service.extraPorts | Array | Extra ports to expose in the manager service. (Normally used with the sidecar value) |
| service.sessionAffinity | String | Control where client requests go, to the same pod or round-robin |
| service.sessionAffinityConfig | Object | Additional settings for the sessionAffinity |
| networkPolicy.enabled | Boolean | Specifies whether a NetworkPolicy should be created |
| networkPolicy.allowExternal | Boolean | Doesn’t require server labels for connections |
| networkPolicy.allowExternalEgress | Boolean | Allow the pod to access any range of port and all destinations |
| networkPolicy.allowExternalClientAccess | Boolean | Allow access from pods with client label set to “true” |
| networkPolicy.extraIngress | Array | Add extra ingress rules to the Network Policy |
| networkPolicy.extraEgress | Array | Add extra egress rules to the Network Policy |
| networkPolicy.ingressPodMatchLabels | Object | Labels to match to allow traffic from other pods. |
| networkPolicy.ingressNSMatchLabels | Object | Labels to match to allow traffic from other namespaces. |
| networkPolicy.ingressNSPodMatchLabels | Object | Pod labels to match to allow traffic from other namespaces. |
| ingress.enabled | Boolean | Enable the ingress record generation for the manager |
| ingress.pathType | String | Ingress Path Type |
| ingress.apiVersion | String | Force Ingress API version |
| ingress.hostname | String | Match HOST header for the ingress record |
| ingress.ingressClassName | String | Ingress Class that will be used to implement the Ingress |
| ingress.path | String | Default path for the Ingress record |
| ingress.annotations | Object | Additional annotations for the Ingress resource. |
| ingress.tls | Boolean | Enable TLS configuration for the host defined at ingress.hostname |
| ingress.selfSigned | Boolean | Create a TLS secret for this ingress record using self-signed certificates generated by Helm |
| ingress.extraHosts | Array | An array with additional hostnames to be covered by the Ingress record. |
| ingress.extraPaths | Array | An array of extra path entries to be covered by the Ingress record. |
| ingress.extraTls | Array | TLS configuration for additional hostnames to be covered with this Ingress record. |
| ingress.secrets | Array | Custom TLS certificates as secrets |
| ingress.extraRules | Array | Additional rules to be covered with this Ingress record. |
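As a sketch, enabling the ingress with TLS for a single hostname might look like this; the hostname and ingress class name are placeholders:

  ingress:
    enabled: true
    ingressClassName: nginx          # placeholder class name
    hostname: manager.example.com    # placeholder hostname
    path: /
    tls: true
    selfSigned: true                 # Helm generates a self-signed certificate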
Persistence
The following values control how persistent storage is used by the manager. Currently these have no effect, as the Manager does not use any persistent volume claims; however, they are documented here because the same properties are used in several subcharts to configure persistence.
| Key | Type | Description |
|---|---|---|
| persistence.enabled | Boolean | Enable persistence using Persistent Volume Claims |
| persistence.mountPath | String | Path where to mount the volume |
| persistence.subPath | String | The subdirectory of the volume to mount |
| persistence.storageClass | String | Storage class of backing Persistent Volume Claim |
| persistence.annotations | Object | Persistent Volume Claim annotations |
| persistence.accessModes | Array | Persistent Volume Access Modes |
| persistence.size | String | Size of the data volume |
| persistence.dataSource | Object | Custom PVC data source |
| persistence.existingClaim | String | The name of an existing PVC to use for persistence |
| persistence.selector | Object | Selector to match existing Persistent Volume for data PVC |
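For reference, a persistence block in a subchart that does use persistent volume claims typically follows this schema; the storage class and size below are placeholders, and as noted above this block has no effect on the Manager itself:

  persistence:
    enabled: true
    storageClass: standard    # placeholder storage class
    accessModes:
      - ReadWriteOnce
    size: 8Gi                 # placeholder size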
Other Values
The following are additional parameters for the chart.
| Key | Type | Description |
|---|---|---|
| defaultInitContainers | Object | Configuration for default init containers. |
| rbac.create | Boolean | Specifies whether Role-Based Access Control Resources should be created. |
| rbac.rules | Object | Custom RBAC rules to apply |
| serviceAccount.create | Boolean | Specifies whether a ServiceAccount should be created |
| serviceAccount.name | String | Override the ServiceAccount name. If not set, a name will be generated automatically. |
| serviceAccount.annotations | Object | Additional Service Account annotations (evaluated as a template) |
| serviceAccount.automountServiceAccountToken | Boolean | Automount the service account token for the service account. |
| metrics.enabled | Boolean | Enable the export of Prometheus metrics. Not currently implemented |
| metrics.serviceMonitor.enabled | Boolean | If true, creates a Prometheus Operator ServiceMonitor |
| metrics.serviceMonitor.namespace | String | Namespace in which Prometheus is running |
| metrics.serviceMonitor.annotations | Object | Additional custom annotations for the ServiceMonitor |
| metrics.serviceMonitor.labels | Object | Extra labels for the ServiceMonitor |
| metrics.serviceMonitor.jobLabel | String | The name of the label on the target service to use as the job name in Prometheus |
| metrics.serviceMonitor.honorLabels | Boolean | Chooses the metric’s labels on collisions with target labels |
| metrics.serviceMonitor.tlsConfig | Object | TLS configuration used for scrape endpoints used by Prometheus |
| metrics.serviceMonitor.interval | Number | Interval at which metrics should be scraped. |
| metrics.serviceMonitor.scrapeTimeout | Number | Timeout after which the scrape is ended. |
| metrics.serviceMonitor.metricRelabelings | Array | Specify additional relabeling of metrics. |
| metrics.serviceMonitor.relabelings | Array | Specify general relabeling |
| metrics.serviceMonitor.selector | Object | Prometheus instance selector labels |
Sub-components
Confd
| Key | Type | Description |
|---|---|---|
| confd.enabled | Boolean | Enable the embedded Confd instance |
| confd.service.ports.internal | Number | Port number to use for internal communication with the Confd TCP socket |
MIB Frontend
There are many additional properties that can be configured for the MIB Frontend service which are not
specified in the configuration file. The mib-frontend Helm chart follows the same basic template
as the acd-manager chart, so documenting them all here would be unnecessarily repetitive. Virtually every
property in this chart can be configured under the mib-frontend namespace and remain valid.
| Key | Type | Description |
|---|---|---|
| mib-frontend.enabled | Boolean | Enable the Configuration GUI |
| mib-frontend.frontend.resourcePreset | String | Use a preset resource configuration. |
| mib-frontend.frontend.resources | Object | Use custom resource configuration. |
| mib-frontend.frontend.autoscaling.hpa | Object | Horizontal Pod Autoscaler configuration for MIB Frontend component |
ACD Metrics
There are many additional properties that can be configured for the ACD metrics service which are not
specified in the configuration file. The acd-metrics Helm chart follows the same basic template
as the acd-manager chart, as do each of its subcharts, so documenting them all here would be
unnecessarily repetitive. Virtually any property in this chart can be configured under the acd-metrics
namespace and remain valid. For example, the resource preset for Grafana can be set via
acd-metrics.grafana.resourcePreset, as shown in the sketch after the table below.
| Key | Type | Description |
|---|---|---|
| acd-metrics.enabled | Boolean | Enable the ACD Metrics components |
| acd-metrics.telegraf.enabled | Boolean | Enable the Telegraf Database component |
| acd-metrics.prometheus.enabled | Boolean | Enable the Prometheus Service Instance |
| acd-metrics.grafana.enabled | Boolean | Enable the Grafana Service Instance |
| acd-metrics.victoria-metrics-single.enabled | Boolean | Enable Victoria Metrics Service instance |
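For instance, the Grafana example mentioned above might be expressed as the following sketch; the preset name is only an example:

  acd-metrics:
    enabled: true
    grafana:
      enabled: true
      resourcePreset: small   # see the Resource Configuration section below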
Zitadel
Zitadel does not follow the same template as many of the other services. Below is a list of Zitadel specific properties.
| Key | Type | Description |
|---|---|---|
| zitadel.enabled | Boolean | Enable the Zitadel instance |
| zitadel.replicaCount | Number | Number of replicas in the Zitadel deployment |
| zitadel.image.repository | String | The full name of the image registry and repository for the Zitadel container |
| zitadel.setupJob | Object | Configuration for the initial setup job to configure the database |
| zitadel.zitadel.masterkeySecretName | String | The name of an existing Kubernetes secret containing the Zitadel Masterkey |
| zitadel.zitadel.configmapConfig | Object | The Zitadel configuration. See Configuration Options in ZITADEL |
| zitadel.zitadel.configmapConfig.ExternalDomain | String | The external domain name or IP address to which all requests must be made. |
| zitadel.service | Object | Service configuration options for Zitadel |
| zitadel.ingress | Object | Traffic exposure parameters for Zitadel |
The zitadel.zitadel.configmapConfig.ExternalDomain MUST be configured with the same
value used as the first entry in global.hosts.manager. Cross-Origin Resource Sharing (CORS)
is enforced by Zitadel, and only the origin specified here will be allowed to
access Zitadel. The first entry in the global.hosts.manager array is used by
internal services, and if this does not match, authentication requests will not be accepted.
For example, if the global.hosts.manager entries look like this:
global:
  hosts:
    manager:
      - host: foo.example.com
      - host: bar.example.com
The Zitadel ExternalDomain must be set to foo.example.com, and all requests to Zitadel
must use foo.example.com, e.g. https://foo.example.com/ui/console. Requests made to
bar.example.com will result in HTTP 404 errors.
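Continuing the example, the corresponding Zitadel setting would look like the following sketch, where foo.example.com is the placeholder hostname from above:

  zitadel:
    zitadel:
      configmapConfig:
        ExternalDomain: foo.example.com   # must equal the first global.hosts.manager entry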
Redis and Kafka
Both the redis and kafka subcharts follow the same basic structure as the acd-manager
chart, and the configurable values in each are nearly identical. Documenting the configuration
of these charts here would be unnecessarily redundant. However, the operator may wish to
adjust the resource configuration for these charts at the following locations:
| Key | Type | Description |
|---|---|---|
| redis.master.resources | Object | Resource configuration for the Redis master instance |
| redis.replica.resources | Object | Resource configuration for the Redis read-only replica instances |
| redis.replica.replicaCount | Number | Number of Read-only Redis replica instances |
| kafka.controller.resources | Object | Resource configuration for the Kafka controller |
| kafka.controller.replicaCount | Number | Number of Kafka controller replica instances to deploy |
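As an example, the Redis and Kafka resources could be adjusted as follows; the limits and replica counts shown are placeholders rather than tuned recommendations:

  redis:
    master:
      resources:
        requests:
          cpu: 250m
          memory: 256Mi
        limits:
          cpu: 500m
          memory: 512Mi
    replica:
      replicaCount: 2
  kafka:
    controller:
      replicaCount: 3
      resources:
        requests:
          cpu: 500m
          memory: 1024Mi
        limits:
          cpu: 1000m
          memory: 2048Mi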
Resource Configuration
All resource configuration blocks follow the same basic schema which is defined here.
| Key | Type | Description |
|---|---|---|
| resources.limits.cpu | String | The maximum CPU which can be consumed before the Pod is terminated. |
| resources.limits.memory | String | The maximum amount of memory the pod may consume before being killed. |
| resources.limits.ephemeral-storage | String | The maximum amount of storage a pod may consume |
| resources.requests.cpu | String | The minimum available CPU cores for each Pod to be assigned to a node. |
| resources.requests.memory | String | The minimum available Free Memory on a node for a pod to be assigned. |
| resources.requests.ephemeral-storage | String | The minimum amount of storage a pod requires to be assigned to a node. |
CPU values are specified in units of 1/1000 of a CPU, e.g. “1000m” represents 1 core and “250m” is 1/4 of a core. Memory and storage values are specified with binary (IEC) suffixes, e.g. “250Mi” is 250 MiB and “3Gi” is 3 GiB.
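For example, a resources block following this schema might look like the sketch below; the values are placeholders:

  resources:
    requests:
      cpu: 250m                  # 1/4 of a core
      memory: 256Mi
      ephemeral-storage: 50Mi
    limits:
      cpu: 500m                  # 1/2 of a core
      memory: 512Mi
      ephemeral-storage: 2Gi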
Most services also include a resourcePreset value which is a simple String representing
some common configurations.
The presets are as follows:
| Preset | Request CPU | Request Memory | Request Storage | Limit CPU | Limit Memory | Limit Storage |
|---|---|---|---|---|---|---|
| nano | 100m | 128Mi | 50Mi | 150m | 192Mi | 2Gi |
| micro | 250m | 256Mi | 50Mi | 375m | 384Mi | 2Gi |
| small | 500m | 512Mi | 50Mi | 750m | 768Mi | 2Gi |
| medium | 500m | 1024Mi | 50Mi | 750m | 1536Mi | 2Gi |
| large | 1.0 | 2048Mi | 50Mi | 1.5 | 3072Mi | 2Gi |
| xlarge | 1.0 | 3072Mi | 50Mi | 3.0 | 6144Mi | 2Gi |
| 2xlarge | 1.0 | 3072Mi | 50Mi | 6.0 | 12288Mi | 2Gi |
When considering resource requests vs. limits, the request values should represent the minimum resource usage necessary to run the service, while the limits represent the maximum resources each pod in the deployment is allowed to consume. Requests and limits apply per pod, so a service using the “large” preset with 3 replicas needs a minimum of 3 full cores and 6 GiB of available memory to start, and may consume up to 4.5 cores and 9 GiB of memory across all nodes in the cluster.
Security Contexts
Most charts used in the deployment contain configuration for both Pod and Container security contexts. Below is additional information about the parameters therein.
| Key | Type | Description |
|---|---|---|
| podSecurityContext.enabled | Boolean | Enable the Pod Security Context |
| podSecurityContext.fsGroupChangePolicy | String | Set filesystem group change policy for the nodes |
| podSecurityContext.sysctls | Array | Set kernel settings using sysctl interface for the pods |
| podSecurityContext.supplementalGroups | Array | Set filesystem extra groups for the pods |
| podSecurityContext.fsGroup | Number | Set Filesystem Group ID for the pods |
| containerSecurityContext.enabled | Boolean | Enable the container security context |
| containerSecurityContext.seLinuxOptions | Object | Set SELinux options for each container in the Pod |
| containerSecurityContext.runAsUser | Number | Set runAsUser in the containers Security Context |
| containerSecurityContext.runAsGroup | Number | Set runAsGroup in the containers Security Context |
| containerSecurityContext.runAsNonRoot | Boolean | Set runAsNonRoot in the containers Security Context |
| containerSecurityContext.readOnlyRootFilesystem | Boolean | Set readOnlyRootFilesystem in the containers Security Context |
| containerSecurityContext.privileged | Boolean | Set privileged in the container Security Context |
| containerSecurityContext.allowPrivilegeEscalation | Boolean | Set allowPrivilegeEscalation in the container’s security context |
| containerSecurityContext.capabilities.drop | Array | List of capabilities to be dropped in the container |
| containerSecurityContext.seccompProfile.type | String | Set seccomp profile in the container |
Probe Configuration
Each pod uses health-check probes to determine its status. Three probe types are defined: startupProbe, readinessProbe, and livenessProbe. They all contain exactly the same configuration options; the only difference between the probe types is when they are executed.
Liveness Probe: Checks if the container is running. If this probe fails, Kubernetes restarts the container, assuming it is stuck or unhealthy.
Readiness Probe: Determines if the container is ready to accept traffic. If it fails, the container is removed from the service load balancer until it becomes ready again.
Startup Probe: Used during container startup to determine if the application has started successfully. It helps to prevent the liveness probe from killing a container that is still starting up.
The following table describes each of these properties:
| Property | Description |
|---|---|
| enabled | Determines whether the probe is active (true) or disabled (false). |
| initialDelaySeconds | Time in seconds to wait after the container starts before performing the first probe. |
| periodSeconds | How often (in seconds) to perform the probe. |
| timeoutSeconds | Number of seconds to wait for a probe response before considering it a failure. |
| failureThreshold | Number of consecutive failed probes before considering the container unhealthy (for liveness) or unavailable (for readiness). |
| successThreshold | Number of consecutive successful probes required to consider the container healthy or ready (usually 1). |
| httpGet | Specifies that the probe performs an HTTP GET request to check container health. |
| httpGet.path | The URL path to request during the HTTP GET probe. |
| httpGet.port | The port number or name where the HTTP GET request is sent. |
| exec | Specifies that the probe runs the specified command inside the container and expects a successful exit code to indicate health. |
| exec.command | An array of strings representing the command to run |
Only one of httpGet or exec may be specified in a single probe. These configurations are mutually exclusive.
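For example, an HTTP-based liveness probe following this schema might look like the sketch below; the path and port are placeholders, and the actual defaults are defined in the chart’s values.yaml:

  livenessProbe:
    enabled: true
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1
    httpGet:
      path: /healthz   # placeholder path
      port: http       # placeholder port name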