Configuration Guide

Overview

When deploying the acd-manager Helm chart, a configuration file containing the chart values must be supplied to Helm. The default values.yaml file can be found on the ISO in the chart’s directory. Helm does not require the complete file at install time: any files supplied via the --values flag are merged with the defaults from the chart, which allows the operator to maintain a much simpler configuration file containing only the modified values. Individual values may also be overridden by passing --set key=value to the Helm command; however, this is discouraged for all but temporary cases, as the same arguments must be specified every time the chart is updated.

The default values.yaml file is located on the ISO at /helm/charts/acd-manager/values.yaml. Since the ISO is mounted read-only, you must copy this file to a writable location before making changes. Helm supports multiple --values arguments; all supplied files are merged left to right before being merged with the chart defaults.
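
For example, assuming the ISO is mounted at /mnt/esb3027 as in the commands below, copy the defaults to a writable location:

cp /mnt/esb3027/helm/charts/acd-manager/values.yaml /path/to/values.yaml

The copy can then be trimmed down to only the values being changed. A minimal override file might contain nothing more than the hosts configuration (the hostname is illustrative):

global:
  hosts:
    manager:
      - host: manager.example.com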

Applying the Configuration

After updating the configuration file, you must perform a helm upgrade for the changes to propagate to the cluster. Helm tracks the changes in each revision and supports rolling back to previous configurations. During the initial chart installation, the configuration values are supplied to Helm through the helm install command; to update an existing installation, use the following command instead.

helm upgrade acd-manager /mnt/esb3027/helm/charts/acd-manager --values /path/to/values.yaml

Note: The helm install and helm upgrade commands take many of the same arguments, and the shortcut helm upgrade --install can be used in place of either: it updates an existing installation, or deploys a new one if none previously existed.
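
For example, the following single command works both for the first deployment and for subsequent configuration updates:

helm upgrade --install acd-manager /mnt/esb3027/helm/charts/acd-manager --values /path/to/values.yaml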

If the configuration update was unsuccessful, you can roll back to a previous revision using the following command. Keep in mind that this does not change the values.yaml file on disk, so you must revert the changes to that file manually or restore it from a backup.

helm rollback acd-manager <revision_number>

You can view the current revision number of every installed chart by running helm list --all.

If you wish to temporarily change one or more values, for instance to increase the manager log level from “info” to “debug”, you can do so with the --set flag.

helm upgrade acd-manager /mnt/esb3027/helm/charts/acd-manager --values /path/to/values.yaml --set manager.logLevel=debug

It is also possible to split values.yaml into multiple files, for instance to separate the manager and metrics values, using the following command. All files are merged left to right by Helm. Note, however, that all values files must then be supplied, in the same order, every time a helm upgrade is performed in the future.

helm upgrade acd-manager /mnt/esb3027/helm/charts/acd-manager --values /path/to/values1.yaml --values /path/to/values2.yaml

Before applying a new configuration, it is recommended to perform a dry run to ensure that the templates can be rendered properly. This does not guarantee that the manifests will be accepted by Kubernetes, only that the templates can be rendered using the supplied values. The rendered templates are printed to the console.

helm upgrade ... --dry-run
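
For example, combining the upgrade command shown earlier with --dry-run:

helm upgrade acd-manager /mnt/esb3027/helm/charts/acd-manager --values /path/to/values.yaml --dry-run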

In the event that a helm upgrade fails to produce the desired results, e.g. if the correct configuration did not propagate to all required pods, performing a helm uninstall acd-manager followed by the original helm install command will force all pods to be redeployed. This is service affecting, however, and should only be performed as a last resort, as all pods will be destroyed and recreated.

Configuration Reference

This section breaks down the configuration file and looks more closely at the available options.

Globals

The global section is a special case in Helm, intended for sharing values between charts. Most of the configuration properties here can be ignored, as they exist to globally provide defaults that affect nested subcharts. The only field that must be configured here is the hosts configuration.

global:
  hosts:
    manager:
      - host: manager.local
    routers:
      - name: default
        address: 127.0.0.1
    edns_proxy: []
    geoip: []
| Key | Type | Description |
| --- | --- | --- |
| global.hosts.manager | Array | List of external IP addresses or DNS hostnames for all nodes in the Manager cluster |
| global.hosts.routers | Array | List of ESB3024 AgileTV CDN Director instances |
| global.hosts.edns_proxy | Array | List of EDNS Proxy addresses |
| global.hosts.geoip | Array | List of GeoIP Proxy addresses |

The global.hosts.manager record contains a list of objects, each with a single host field. The first entry is used by several internal services to contact Zitadel for user authentication and authorization. Since Zitadel enforces CORS protections, this value must exactly match the Origin used to access Zitadel.

The global.hosts.routers record contains a list of objects, each with a name and address field. The name field is a unique identifier used in URLs to refer to the Director instance, and the address field is the IP address or DNS name used to communicate with the Director node. Only Director instances running outside of this cluster need to be specified here, as instances running in Kubernetes can utilize the cluster’s auto-discovery system.

The global.hosts.edns_proxy record contains a list of objects each with an address and port field. This list is currently unused.

The global.hosts.geoip record contains a list of objects each with an address and port field. This list should refer to the GeoIP Proxies used by the Frontend GUI. Currently only one GeoIP proxy is supported.
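
A populated example might look like the following; the hostnames, addresses, and port are illustrative:

global:
  hosts:
    manager:
      - host: manager.example.com
    routers:
      - name: director-1
        address: 192.0.2.10
    edns_proxy: []
    geoip:
      - address: 192.0.2.20
        port: 8080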

Common Parameters

This section contains common parameters that are namespaced to the acd-manager chart. These should be left at their default values under most circumstances.

| Key | Type | Description |
| --- | --- | --- |
| kubeVersion | String | Override the Kubernetes version reported by .Capabilities |
| apiVersion | String | Override the Kubernetes API version reported by .Capabilities |
| nameOverride | String | Partially override common.names.name |
| fullnameOverride | String | Fully override common.names.name |
| namespaceOverride | String | Fully override common.names.namespace |
| commonLabels | Object | Labels to add to all deployed objects |
| commonAnnotations | Object | Annotations to add to all deployed objects |
| clusterDomain | String | Kubernetes cluster domain name |
| extraDeploy | Array | List of extra Kubernetes objects to deploy with the release |
| diagnosticMode.enabled | Boolean | Enable diagnostic mode (all probes will be disabled and the command will be overridden) |
| diagnosticMode.command | Array | Override the command when diagnostic mode is enabled |
| diagnosticMode.args | Array | Override the command-line arguments when diagnostic mode is enabled |

Manager

This section represents the configuration options for the ACD Manager’s API server.

| Key | Type | Description |
| --- | --- | --- |
| manager.image.registry | String | The Docker registry |
| manager.image.repository | String | The Docker repository |
| manager.image.tag | String | Override the image tag |
| manager.image.digest | String | Override a specific image digest |
| manager.image.pullPolicy | String | The image pull policy |
| manager.image.pullSecrets | Array | A list of secret names containing credentials for the configured image registry |
| manager.image.debug | Boolean | Enable debug mode for the containers |
| manager.logLevel | String | Set the log level used in the containers |
| manager.replicaCount | Number | Number of manager replicas to deploy. This value is ignored if the Horizontal Pod Autoscaler is enabled |
| manager.containerPorts.http | Number | Port number exposed by the container for HTTP traffic |
| manager.extraContainerPorts | Array | List of additional container ports to expose |
| manager.livenessProbe | Object | Configuration for the liveness probe on the manager container |
| manager.readinessProbe | Object | Configuration for the readiness probe on the manager container |
| manager.startupProbe | Object | Configuration for the startup probe on the manager container |
| manager.customLivenessProbe | Object | Override the default liveness probe |
| manager.customReadinessProbe | Object | Override the default readiness probe |
| manager.customStartupProbe | Object | Override the default startup probe |
| manager.resourcePreset | String | Set the manager resources according to one common preset |
| manager.resources | Object | Set requests and limits for different resources like CPU or memory |
| manager.podSecurityContext | Object | Set the security context for the manager pods |
| manager.containerSecurityContext | Object | Set the security context for all containers inside the manager pods |
| manager.maxmindDbVolume | String | Name of a Kubernetes volume containing MaxMind GeoIP, ASN, and Anonymous IP databases |
| manager.existingConfigmap | String | Reserved for future use |
| manager.command | Array | Command executed inside the manager container |
| manager.args | Array | Arguments passed to the command |
| manager.automountServiceAccountToken | Boolean | Mount the Service Account token in manager pods |
| manager.hostAliases | Array | Add additional entries to /etc/hosts in the pod |
| manager.deploymentAnnotations | Object | Annotations for the manager deployment |
| manager.podLabels | Object | Extra labels for manager pods |
| manager.podAnnotations | Object | Extra annotations for the manager pods |
| manager.podAffinityPreset | String | Allowed values: soft or hard |
| manager.podAntiAffinityPreset | String | Allowed values: soft or hard |
| manager.nodeAffinityPreset.type | String | Allowed values: soft or hard |
| manager.nodeAffinityPreset.key | String | Node label key to match |
| manager.nodeAffinityPreset.values | Array | List of node labels to match |
| manager.affinity | Object | Override the affinity for pod assignments |
| manager.nodeSelector | Object | Node labels for manager pod assignments |
| manager.tolerations | Array | Tolerations for manager pod assignment |
| manager.updateStrategy.type | String | Can be set to RollingUpdate or Recreate |
| manager.priorityClassName | String | Manager pods’ priorityClassName |
| manager.topologySpreadConstraints | Array | Topology Spread Constraints for manager pod assignment spread across the cluster among failure domains |
| manager.schedulerName | String | Name of the Kubernetes scheduler for manager pods |
| manager.terminationGracePeriodSeconds | Number | Seconds manager pods need to terminate gracefully |
| manager.lifecycleHooks | Object | Lifecycle hooks for manager containers to automate configuration before or after startup |
| manager.extraEnvVars | Array | List of extra environment variables to add to the manager containers |
| manager.extraEnvVarsCM | Array | List of ConfigMaps containing extra environment variables to pass to the manager pods |
| manager.extraEnvVarsSecret | Array | List of Secrets containing extra environment variables to pass to the manager pods |
| manager.extraVolumes | Array | Optionally specify an extra list of additional volumes for the manager pods |
| manager.extraVolumeMounts | Array | Optionally specify an extra list of additional volume mounts for the manager pods |
| manager.sidecars | Array | Add additional sidecar containers to the manager pods |
| manager.initContainers | Array | Add additional init containers to the manager pods |
| manager.pdb.create | Boolean | Enable/disable Pod Disruption Budget creation |
| manager.pdb.minAvailable | Number | Minimum number/percentage of pods that should remain scheduled |
| manager.pdb.maxUnavailable | Number | Maximum number/percentage of pods that may be made unavailable |
| manager.autoscaling.vpa | Object | Vertical Pod Autoscaler configuration. Not used for self-hosted clusters |
| manager.autoscaling.hpa | Object | Horizontal Pod Autoscaler configuration. Automatically scales the number of replicas based on resource utilization |
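
For example, a values override raising the manager log level and replica count might look like the following (the values are illustrative):

manager:
  logLevel: debug
  replicaCount: 3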

Gateway

The parameters under the gateway namespace are mostly identical to those of the manager section above, but affect the NGINX proxy gateway service. The additional properties here are described in the following table.

| Key | Type | Description |
| --- | --- | --- |
| gateway.service.type | String | Service type |
| gateway.service.ports.http | Number | The service port |
| gateway.service.nodePorts | Object | Allows configuring the exposed node port if the service type is “NodePort” |
| gateway.service.clusterIP | String | Override the ClusterIP address if the service type is “ClusterIP” |
| gateway.service.loadBalancerIP | String | Override the LoadBalancer IP address if the service type is “LoadBalancer” |
| gateway.service.loadBalancerSourceRanges | Array | Source CIDRs for the LoadBalancer |
| gateway.service.externalTrafficPolicy | String | External traffic policy for the service |
| gateway.service.annotations | Object | Additional custom annotations for the gateway service |
| gateway.service.extraPorts | Array | Extra ports to expose in the gateway service (normally used with the sidecar value) |
| gateway.service.sessionAffinity | String | Controls whether client requests go to the same pod or round-robin |
| gateway.service.sessionAffinityConfig | Object | Additional settings for the sessionAffinity |
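
For example, to expose the gateway on a fixed node port; the port number is illustrative, and the nodePorts object is assumed to take an http key matching gateway.service.ports.http:

gateway:
  service:
    type: NodePort
    nodePorts:
      http: 30080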

Selection Input

The parameters under the selectionInput namespace are mostly identical to those of the manager section above, but affect the selection input consumer service. The additional properties here are described in the following table.

| Key | Type | Description |
| --- | --- | --- |
| selectionInput.kafkaTopic | String | Name of the selection input Kafka topic |

Metrics Aggregator

The parameters under the metricsAggregator namespace are mostly identical to those of the manager section above, but affect the metrics aggregator service.

Traffic Exposure

These parameters determine how the various services are exposed over the network.

| Key | Type | Description |
| --- | --- | --- |
| service.type | String | Service type |
| service.ports.http | Number | The service port |
| service.nodePorts | Object | Allows configuring the exposed node port if the service type is “NodePort” |
| service.clusterIP | String | Override the ClusterIP address if the service type is “ClusterIP” |
| service.loadBalancerIP | String | Override the LoadBalancer IP address if the service type is “LoadBalancer” |
| service.loadBalancerSourceRanges | Array | Source CIDRs for the LoadBalancer |
| service.externalTrafficPolicy | String | External traffic policy for the service |
| service.annotations | Object | Additional custom annotations for the manager service |
| service.extraPorts | Array | Extra ports to expose in the manager service (normally used with the sidecar value) |
| service.sessionAffinity | String | Controls whether client requests go to the same pod or round-robin |
| service.sessionAffinityConfig | Object | Additional settings for the sessionAffinity |
| networkPolicy.enabled | Boolean | Specifies whether a NetworkPolicy should be created |
| networkPolicy.allowExternal | Boolean | Doesn’t require server labels for connections |
| networkPolicy.allowExternalEgress | Boolean | Allow the pod to access any range of ports and all destinations |
| networkPolicy.allowExternalClientAccess | Boolean | Allow access from pods with the client label set to “true” |
| networkPolicy.extraIngress | Array | Add extra ingress rules to the NetworkPolicy |
| networkPolicy.extraEgress | Array | Add extra egress rules to the NetworkPolicy |
| networkPolicy.ingressPodMatchLabels | Object | Labels to match to allow traffic from other pods |
| networkPolicy.ingressNSMatchLabels | Object | Labels to match to allow traffic from other namespaces |
| networkPolicy.ingressNSPodMatchLabels | Object | Pod labels to match to allow traffic from other namespaces |
| ingress.enabled | Boolean | Enable ingress record generation for the manager |
| ingress.pathType | String | Ingress path type |
| ingress.apiVersion | String | Force the Ingress API version |
| ingress.hostname | String | Match the HOST header for the ingress record |
| ingress.ingressClassName | String | IngressClass that will be used to implement the Ingress |
| ingress.path | String | Default path for the ingress record |
| ingress.annotations | Object | Additional annotations for the Ingress resource |
| ingress.tls | Boolean | Enable TLS configuration for the host defined at ingress.hostname |
| ingress.selfSigned | Boolean | Create a TLS secret for this ingress record using self-signed certificates generated by Helm |
| ingress.extraHosts | Array | An array with additional hostnames to be covered by the ingress record |
| ingress.extraPaths | Array | An array of extra path entries to be covered by the ingress record |
| ingress.extraTls | Array | TLS configuration for additional hostnames to be covered by this ingress record |
| ingress.secrets | Array | Custom TLS certificates as secrets |
| ingress.extraRules | Array | Additional rules to be covered by this ingress record |
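
For example, to expose the manager through an Ingress with a Helm-generated self-signed certificate (the hostname is illustrative):

ingress:
  enabled: true
  hostname: manager.example.com
  tls: true
  selfSigned: true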

Persistence

The following values control how persistent storage is used by the manager. Currently they have no effect, as the Manager does not use any persistent volume claims; however, they are documented here because the same properties are used in several subcharts to configure persistence.

| Key | Type | Description |
| --- | --- | --- |
| persistence.enabled | Boolean | Enable persistence using Persistent Volume Claims |
| persistence.mountPath | String | Path at which to mount the volume |
| persistence.subPath | String | The subdirectory of the volume to mount |
| persistence.storageClass | String | Storage class of the backing Persistent Volume Claim |
| persistence.annotations | Object | Persistent Volume Claim annotations |
| persistence.accessModes | Array | Persistent Volume access modes |
| persistence.size | String | Size of the data volume |
| persistence.dataSource | Object | Custom PVC data source |
| persistence.existingClaim | String | The name of an existing PVC to use for persistence |
| persistence.selector | Object | Selector to match an existing Persistent Volume for the data PVC |

Other Values

The following are additional parameters for the chart.

| Key | Type | Description |
| --- | --- | --- |
| defaultInitContainers | Object | Configuration for default init containers |
| rbac.create | Boolean | Specifies whether Role-Based Access Control resources should be created |
| rbac.rules | Object | Custom RBAC rules to apply |
| serviceAccount.create | Boolean | Specifies whether a ServiceAccount should be created |
| serviceAccount.name | String | Override the ServiceAccount name. If not set, a name is generated automatically |
| serviceAccount.annotations | Object | Additional ServiceAccount annotations (evaluated as a template) |
| serviceAccount.automountServiceAccountToken | Boolean | Automount the service account token for the ServiceAccount |
| metrics.enabled | Boolean | Enable the export of Prometheus metrics. Not currently implemented |
| metrics.serviceMonitor.enabled | Boolean | If true, creates a Prometheus Operator ServiceMonitor |
| metrics.serviceMonitor.namespace | String | Namespace in which Prometheus is running |
| metrics.serviceMonitor.annotations | Object | Additional custom annotations for the ServiceMonitor |
| metrics.serviceMonitor.labels | Object | Extra labels for the ServiceMonitor |
| metrics.serviceMonitor.jobLabel | String | The name of the label on the target service to use as the job name in Prometheus |
| metrics.serviceMonitor.honorLabels | Boolean | Chooses the metric’s labels on collisions with target labels |
| metrics.serviceMonitor.tlsConfig | Object | TLS configuration for the scrape endpoints used by Prometheus |
| metrics.serviceMonitor.interval | Number | Interval at which metrics should be scraped |
| metrics.serviceMonitor.scrapeTimeout | Number | Timeout after which the scrape is ended |
| metrics.serviceMonitor.metricRelabelings | Array | Specify additional relabeling of metrics |
| metrics.serviceMonitor.relabelings | Array | Specify general relabeling |
| metrics.serviceMonitor.selector | Object | Prometheus instance selector labels |

Sub-components

Confd

| Key | Type | Description |
| --- | --- | --- |
| confd.enabled | Boolean | Enable the embedded Confd instance |
| confd.service.ports.internal | Number | Port number to use for internal communication with the Confd TCP socket |

MIB Frontend

There are many additional properties that can be configured for the MIB Frontend service which are not specified in the configuration file. The mib-frontend Helm chart follows the same basic template as the acd-manager chart, so documenting them all here would be unnecessarily repetitive. Virtually every property in this chart can be configured under the mib-frontend namespace and be valid.

| Key | Type | Description |
| --- | --- | --- |
| mib-frontend.enabled | Boolean | Enable the Configuration GUI |
| mib-frontend.frontend.resourcePreset | String | Use a preset resource configuration |
| mib-frontend.frontend.resources | Object | Use a custom resource configuration |
| mib-frontend.frontend.autoscaling.hpa | Object | Horizontal Pod Autoscaler configuration for the MIB Frontend component |

ACD Metrics

There are many additional properties that can be configured for the ACD Metrics service which are not specified in the configuration file. The acd-metrics Helm chart follows the same basic template as the acd-manager chart, as do each of its subcharts, so documenting them all here would be unnecessarily repetitive. Virtually any property in this chart can be configured under the acd-metrics namespace and be valid. For example, the resource preset for Grafana can be set via acd-metrics.grafana.resourcePreset.

| Key | Type | Description |
| --- | --- | --- |
| acd-metrics.enabled | Boolean | Enable the ACD Metrics components |
| acd-metrics.telegraf.enabled | Boolean | Enable the Telegraf database component |
| acd-metrics.prometheus.enabled | Boolean | Enable the Prometheus service instance |
| acd-metrics.grafana.enabled | Boolean | Enable the Grafana service instance |
| acd-metrics.victoria-metrics-single.enabled | Boolean | Enable the Victoria Metrics service instance |
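
Building on the acd-metrics.grafana.resourcePreset example above, the equivalent values-file form is (the preset name is illustrative):

acd-metrics:
  grafana:
    resourcePreset: small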

Zitadel

Zitadel does not follow the same template as many of the other services. Below is a list of Zitadel-specific properties.

| Key | Type | Description |
| --- | --- | --- |
| zitadel.enabled | Boolean | Enable the Zitadel instance |
| zitadel.replicaCount | Number | Number of replicas in the Zitadel deployment |
| zitadel.image.repository | String | The full name of the image registry and repository for the Zitadel container |
| zitadel.setupJob | Object | Configuration for the initial setup job that configures the database |
| zitadel.zitadel.masterkeySecretName | String | The name of an existing Kubernetes secret containing the Zitadel masterkey |
| zitadel.zitadel.configmapConfig | Object | The Zitadel configuration. See Configuration Options in ZITADEL |
| zitadel.zitadel.configmapConfig.ExternalDomain | String | The external domain name or IP address to which all requests must be made |
| zitadel.service | Object | Service configuration options for Zitadel |
| zitadel.ingress | Object | Traffic exposure parameters for Zitadel |

The zitadel.zitadel.configmapConfig.ExternalDomain value MUST be configured with the same value as the first entry in global.hosts.manager. Zitadel enforces Cross-Origin Resource Sharing (CORS), and only the origin specified here is allowed to access Zitadel. The first entry in the global.hosts.manager array is used by internal services, and if the two values do not match, authentication requests will not be accepted.

For example, if the global.hosts.manager entries look like this:

global:
  hosts:
    manager:
      - host: foo.example.com
      - host: bar.example.com

The Zitadel ExternalDomain must be set to foo.example.com, and all requests to Zitadel must use foo.example.com, e.g. https://foo.example.com/ui/console. Requests made to bar.example.com will result in HTTP 404 errors.
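
The corresponding Zitadel setting would then be:

zitadel:
  zitadel:
    configmapConfig:
      ExternalDomain: foo.example.com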

Redis and Kafka

Both the redis and kafka subcharts follow the same basic structure as the acd-manager chart, and the configurable values in each are nearly identical. Documenting the configuration of these charts here would be unnecessarily redundant. However, the operator may wish to adjust the resource configuration for these charts at the following locations:

| Key | Type | Description |
| --- | --- | --- |
| redis.master.resources | Object | Resource configuration for the Redis master instance |
| redis.replica.resources | Object | Resource configuration for the Redis read-only replica instances |
| redis.replica.replicaCount | Number | Number of read-only Redis replica instances |
| kafka.controller.resources | Object | Resource configuration for the Kafka controller |
| kafka.controller.replicaCount | Number | Number of Kafka controller replica instances to deploy |
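
For example, the Redis master resources are adjusted by nesting a resources block (following the schema described in the next section) under redis.master; the values shown are illustrative:

redis:
  master:
    resources:
      requests:
        cpu: 500m
        memory: 1Gi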

Resource Configuration

All resource configuration blocks follow the same basic schema, which is defined here.

| Key | Type | Description |
| --- | --- | --- |
| resources.limits.cpu | String | The maximum CPU each pod may consume; usage above this limit is throttled |
| resources.limits.memory | String | The maximum amount of memory the pod may consume before being killed |
| resources.limits.ephemeral-storage | String | The maximum amount of ephemeral storage a pod may consume |
| resources.requests.cpu | String | The minimum available CPU required for a pod to be assigned to a node |
| resources.requests.memory | String | The minimum available free memory on a node for a pod to be assigned |
| resources.requests.ephemeral-storage | String | The minimum amount of storage a pod requires to be assigned to a node |

CPU values are specified in units of 1/1000 of a CPU, e.g. “1000m” represents 1 core and “250m” is 1/4 of a core. Memory and storage values are specified with binary (IEC) suffixes, e.g. “250Mi” is 250 MiB and “3Gi” is 3 GiB.
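
A complete resources block following this schema might look like the following (the values are illustrative):

resources:
  requests:
    cpu: 250m
    memory: 256Mi
    ephemeral-storage: 50Mi
  limits:
    cpu: 500m
    memory: 512Mi
    ephemeral-storage: 2Gi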

Most services also include a resourcePreset value, a simple string that selects one of several common configurations.

The presets are as follows:

| Preset | Request CPU | Request Memory | Request Storage | Limit CPU | Limit Memory | Limit Storage |
| --- | --- | --- | --- | --- | --- | --- |
| nano | 100m | 128Mi | 50Mi | 150m | 192Mi | 2Gi |
| micro | 250m | 256Mi | 50Mi | 375m | 384Mi | 2Gi |
| small | 500m | 512Mi | 50Mi | 750m | 768Mi | 2Gi |
| medium | 500m | 1024Mi | 50Mi | 750m | 1536Mi | 2Gi |
| large | 1.0 | 2048Mi | 50Mi | 1.5 | 3072Mi | 2Gi |
| xlarge | 1.0 | 3072Mi | 50Mi | 3.0 | 6144Mi | 2Gi |
| 2xlarge | 1.0 | 3072Mi | 50Mi | 6.0 | 12288Mi | 2Gi |

When weighing resource requests against limits, the request values should represent the minimum resources necessary to run the service, while the limits represent the maximum resources each pod in the deployment is allowed to consume. Requests and limits apply per pod, so a service using the “large” preset with 3 replicas needs a minimum of 3 full cores and 6 GiB of available memory to start, and may consume up to 4.5 cores and 9 GiB of memory across all nodes in the cluster.

Security Contexts

Most charts used in the deployment contain configuration for both pod and container security contexts. Below is additional information about the parameters therein.

| Key | Type | Description |
| --- | --- | --- |
| podSecurityContext.enabled | Boolean | Enable the pod security context |
| podSecurityContext.fsGroupChangePolicy | String | Set the filesystem group change policy for the pods |
| podSecurityContext.sysctls | Array | Set kernel settings using the sysctl interface for the pods |
| podSecurityContext.supplementalGroups | Array | Set extra filesystem groups for the pods |
| podSecurityContext.fsGroup | Number | Set the filesystem group ID for the pods |
| containerSecurityContext.enabled | Boolean | Enable the container security context |
| containerSecurityContext.seLinuxOptions | Object | Set SELinux options for each container in the pod |
| containerSecurityContext.runAsUser | Number | Set runAsUser in the container’s security context |
| containerSecurityContext.runAsGroup | Number | Set runAsGroup in the container’s security context |
| containerSecurityContext.runAsNonRoot | Boolean | Set runAsNonRoot in the container’s security context |
| containerSecurityContext.readOnlyRootFilesystem | Boolean | Set readOnlyRootFilesystem in the container’s security context |
| containerSecurityContext.privileged | Boolean | Set privileged in the container’s security context |
| containerSecurityContext.allowPrivilegeEscalation | Boolean | Set allowPrivilegeEscalation in the container’s security context |
| containerSecurityContext.capabilities.drop | Array | List of capabilities to be dropped in the container |
| containerSecurityContext.seccompProfile.type | String | Set the seccomp profile type for the container |
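
As an illustration, a hardened container security context for the manager might look like the following; these values are illustrative overrides, not the chart defaults:

manager:
  containerSecurityContext:
    enabled: true
    runAsNonRoot: true
    readOnlyRootFilesystem: true
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL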

Probe Configuration

Each pod uses healthcheck probes to determine its readiness. Three probe types are defined: startupProbe, readinessProbe, and livenessProbe. They all contain exactly the same configuration options; the only difference between the probe types is when they are executed.

  • Liveness Probe: Checks if the container is running. If this probe fails, Kubernetes restarts the container, assuming it is stuck or unhealthy.

  • Readiness Probe: Determines if the container is ready to accept traffic. If it fails, the container is removed from the service load balancer until it becomes ready again.

  • Startup Probe: Used during container startup to determine if the application has started successfully. It helps to prevent the liveness probe from killing a container that is still starting up.

The following table describes each of these properties:

| Property | Description |
| --- | --- |
| enabled | Determines whether the probe is active (true) or disabled (false) |
| initialDelaySeconds | Time in seconds to wait after the container starts before performing the first probe |
| periodSeconds | How often (in seconds) to perform the probe |
| timeoutSeconds | Number of seconds to wait for a probe response before considering it a failure |
| failureThreshold | Number of consecutive failed probes before considering the container unhealthy (for liveness) or unavailable (for readiness) |
| successThreshold | Number of consecutive successful probes required to consider the container healthy or ready (usually 1) |
| httpGet | Specifies that the probe performs an HTTP GET request to check container health |
| httpGet.path | The URL path to request during the HTTP GET probe |
| httpGet.port | The port number or name to which the HTTP GET request is sent |
| exec | Specifies that the probe runs the specified command inside the container and expects a successful exit code to indicate health |
| exec.command | An array of strings representing the command to run |

Only one of httpGet or exec may be specified in a single probe. These configurations are mutually exclusive.
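
For example, a sketch of a readiness probe override using an HTTP check; the path and port values are illustrative:

manager:
  readinessProbe:
    enabled: true
    initialDelaySeconds: 10
    periodSeconds: 15
    timeoutSeconds: 5
    failureThreshold: 3
    successThreshold: 1
    httpGet:
      path: /healthz
      port: http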