Installation Guide
1 - Overview
Introduction
Installing the ESB3027 AgileTV CDN Manager for production requires a minimum of three nodes. More details about node roles and sizing can be found in the System Requirements Guide. Before beginning the installation, select one node as the primary “Server” node. This node will serve as the main installation point. Once additional Server nodes join the cluster, all Server nodes are considered equivalent, and cluster operations can be managed from any of them. The typical process involves installing the primary node as a Server, then adding more Server nodes to expand the cluster, followed by joining Agent nodes as needed to increase capacity.
Air-Gapped Environments
In air-gapped environments—those without direct Internet access—additional considerations are
required. First, on each node, the Operating System’s ISO must be mounted so that dnf can be
used to install essential packages included with the OS. Second, the “Extras” ISO from the
ESB3027 AgileTV CDN Manager must be mounted to provide access to container images for
third-party software that would otherwise be downloaded from public repositories. Details on
mounting this ISO and loading the included images are provided below.
Roles
All nodes in the cluster have one of two roles. Server nodes run the control-plane software necessary to manage the cluster and provide redundancy. Agent nodes do not run the control-plane software; instead, they are responsible for running the Pods that make up the applications. Jobs are distributed among agent nodes to enable horizontal scalability of workloads. However, agent nodes do not contribute to the cluster’s high availability. If an agent node fails, the Pods assigned to that node are automatically moved to another node, provided sufficient resources are available.
Control-plane only Server nodes
Both server nodes and agent nodes run workloads within the cluster. However, a special attribute called the “CriticalAddonsOnly” taint can be applied to server nodes. This taint prevents the node from scheduling workloads that are not part of the control plane. If the hardware allows, it is recommended to apply this taint to server nodes to separate their responsibilities. Doing so helps prevent misbehaving applications from negatively impacting the overall health of the cluster.
graph TD
    subgraph Cluster
        direction TB
        ServerNodes[Server Nodes]
        AgentNodes[Agent Nodes]
    end
    ServerNodes -->|Manage cluster and control plane| ControlPlane
    ServerNodes -->|Provide redundancy| Redundancy
    AgentNodes -->|Run application Pods| Pods
    Pods -->|Handle workload distribution| Workloads
    AgentNodes -->|Failover: Pods move if node fails| Pods
    ServerNodes -->|Can run Pods unless tainted with CriticalAddonsOnly| PodExecution
    Taint[CriticalAddonsOnly Taint] -->|Applied to server nodes to restrict workload| ServerNodes
For high availability, at least three nodes running the control plane are required, along with at least three nodes running workloads. These can be a combination of server and agent roles, provided that the control-plane nodes are sufficient. If a server node has the “CriticalAddonsOnly” taint applied, an additional agent node must be deployed to ensure workloads can run. For example, the cluster could consist of three untainted server nodes; or two untainted servers, one tainted server, and one agent; or three tainted servers and three agents, all while maintaining at least three control-plane nodes and three workload nodes.
The “CriticalAddonsOnly” taint can be applied to server nodes at any time after cluster installation. However, it only affects Pods scheduled in the future. Existing Pods that have already been assigned to a server node will remain there until they are recreated or rescheduled due to an external event.
kubectl taint nodes <node-name> CriticalAddonsOnly=true:NoSchedule
Where node-name is the hostname of the node to which the taint should be applied. Multiple node names
may be specified in the same command. This command should only be run from one of the server nodes.
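The applied taints can be verified afterward. A minimal sketch, assuming kubectl is available on a server node and that <node-name> is a placeholder for your hostname:

```shell
# Show the taints currently applied to a specific node
kubectl describe node <node-name> | grep Taints

# List every node together with its taints
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'

# Remove the taint again by appending a trailing minus sign
kubectl taint nodes <node-name> CriticalAddonsOnly=true:NoSchedule-
```

The trailing minus form is standard kubectl syntax for removing a previously applied taint.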
Next Steps
Continue reading the Installation Requirements.
2 - Requirements
SELinux
SELinux is fully supported provided it is enabled and set to “Enforcing” mode at the time of the initial cluster installation on all nodes. This is the default configuration for Red Hat Enterprise Linux and its derivatives, such as Oracle Linux and AlmaLinux. If the mode is set to “Enforcing” prior to install time, the necessary SELinux packages will be installed, and the cluster will be started with support for SELinux. For these reasons, enabling SELinux after the initial cluster installation is not supported.
Firewalld
Please see the Networking Guide for the current firewall recommendations.
Hardware
Refer to the System Requirements Guide for the current hardware, operating system, and network requirements.
Networking
A minimum of one network interface card must be present and configured as the default gateway on the node when the cluster is installed. If the node does not have an interface with a default route, one must be configured; even a black-hole route via a dummy interface will suffice. The K3s software requires a default route in order to auto-detect the node’s primary IP and for cluster routing to function properly. To add a dummy default route, do the following:
ip link add dummy0 type dummy
ip link set dummy0 up
ip addr add 203.0.113.254/31 dev dummy0
ip route add default via 203.0.113.255 dev dummy0 metric 1000
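Before installing, you can confirm that a usable default route is in place. A sketch of the verification, assuming a standard iproute2 environment:

```shell
# Confirm a default route exists; K3s uses it to auto-detect the node's primary IP
ip route show default

# Show which source address and interface the kernel would pick for outbound traffic
ip route get 1.1.1.1
```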
Special Considerations when using Multiple Network Interfaces
Special network configurations, such as using a non-default interface for
cluster communication, must be specified via the INSTALL_K3S_EXEC environment
variable, as shown below, before installing the cluster or joining nodes.
As an example, consider the case where the node contains two interfaces, bond0 and
bond1, where the default route exists through bond0, but where bond1 should be used
for cluster communication. In that case, ensure that the INSTALL_K3S_EXEC environment
variable is set as follows in the environment prior to installing or joining the cluster.
Assuming that bond1 has the local IP address 10.0.0.10:
export INSTALL_K3S_EXEC="<MODE> --node-ip 10.0.0.10 --flannel-iface=bond1"
Where MODE is either server or agent, depending on the role of the node. The
initial node used to create the cluster MUST be server; additional nodes use
the value matching their role.
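After a node joins with a custom interface, you can check that the intended address was picked up. A sketch, assuming kubectl runs on a server node:

```shell
# The INTERNAL-IP column should show the address passed via --node-ip (10.0.0.10 above)
kubectl get nodes -o wide
```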
DNS and Hostname Resolution
Kubernetes clusters use internal DNS to allow services and pods to communicate using predictable hostnames. Each service is assigned a DNS name in the format:
<service>.<namespace>.svc.cluster.local
These names are automatically resolvable from within the cluster using the
coredns service, which is deployed by default with K3s. This means that any
pod or service can refer to another by its internal DNS name without needing to
know its IP address.
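In-cluster name resolution can be exercised with a short-lived test pod. A sketch, assuming the cluster can pull (or has preloaded) a busybox image:

```shell
# Launch a throwaway pod and resolve the Kubernetes API service by its internal DNS name;
# the pod is removed automatically when the command exits
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup kubernetes.default.svc.cluster.local
```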
For DNS queries outside the cluster.local domain (for example, public
internet hostnames or your organization’s internal DNS), coredns forwards
requests to an upstream nameserver. By default, this is set to Google’s public
DNS servers (8.8.8.8 and 8.8.4.4).
However, if the primary installation node has custom nameservers configured in
/etc/resolv.conf, K3s will automatically configure coredns to use those as
the upstream resolvers instead. This allows the cluster to integrate with your
local or corporate DNS infrastructure if needed.
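The upstream resolvers that coredns ended up with can be inspected in its ConfigMap. A sketch, assuming the default K3s coredns deployment in the kube-system namespace:

```shell
# Print the coredns Corefile and look for the 'forward' directive,
# which lists the upstream nameservers in use
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A1 forward
```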
Note for multi-datacenter deployments: If your cluster spans multiple datacenters and each datacenter uses different DNS servers, the cluster will be configured to forward DNS requests to the servers specified in
/etc/resolv.conf on the node where the installation was performed. If nodes in other datacenters cannot reach those DNS servers (for example, if the servers are only routable within one datacenter), you must use the hostAliases mechanism (as described for air-gapped environments) to ensure proper DNS resolution for required services across all nodes.
Note for air-gapped environments: In air-gapped clusters, or any environment where no upstream DNS servers are available or configurable, external DNS queries will not resolve. In these cases, you can override DNS names for required services directly in the configuration using host overrides. This allows the cluster to function without external DNS resolution.
Example: To override hostnames in an air-gapped environment, add a
hostAliases section to your values.yaml configuration.
manager:
  hostAliases:
    - ip: "192.0.2.10"
      hostnames:
        - acd-manager
        - acd-manager.example.local
    - ip: "192.0.2.11"
      hostnames:
        - acd-manager-alt
        - acd-manager-alt.example.local
Next Steps
Continue reading the Installation Guide.
3 - Quick Start Guide
Lab Install Guide
This section describes a simplified installation process for customer acceptance testing in a single-node lab environment. Unlike the production Quick Start Guide (which assumes 3 or more nodes), the Lab Install Guide is intended for customers to perform acceptance testing prior to installing a production environment.
System Requirements:
- RHEL 8 or 9 (or equivalent) with at least a minimal installation
- 8-core CPU
- 16 GB RAM
- 128 GB available disk space in the /var partition
Step 1: Mount the ISO
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027
Step 2: Install the Base Cluster Software
/mnt/esb3027/install
Step 3: (Air-gapped only) Mount the Extras ISO and Load Images
mkdir -p /mnt/esb3027-extras
mount -o loop,ro esb3027-acd-manager-extras-X.Y.Z.iso /mnt/esb3027-extras
/mnt/esb3027-extras/load-images
Step 4: Deploy the Cluster Helm Chart
helm install --wait --timeout 10m acd-cluster /mnt/esb3027/helm/charts/acd-cluster
Step 5: Deploy the Manager Helm Chart
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml --timeout 10m
Step 6: Next Steps
See the Post Install Guide for post-installation steps and recommendations.
You can now access the manager and begin acceptance testing. For full configuration details, see the full Installation Guide.
Quick Start Guide
This section provides a concise, step-by-step summary for installing the ESB3027 AgileTV CDN Manager cluster in a production environment. The Quick Start Guide is intended for production deployments with three or more nodes, providing high availability and scalability. For full details, see the full Installation Guide.
Step 1: Mount the ISO
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027
Step 2: Install the Base Cluster Software
/mnt/esb3027/install
Step 3: (Air-gapped only) Mount the Extras ISO and Load Images
mkdir -p /mnt/esb3027-extras
mount -o loop,ro esb3027-acd-manager-extras-X.Y.Z.iso /mnt/esb3027-extras
/mnt/esb3027-extras/load-images
Step 4: Fetch the Node Token
cat /var/lib/rancher/k3s/server/node-token
Step 5: Join Additional Nodes
On each additional node, repeat Step 1, then run:
/mnt/esb3027/join-server https://<primary-server-ip>:6443 <node-token>
# or for agent nodes:
/mnt/esb3027/join-agent https://<primary-server-ip>:6443 <node-token>
Step 6: Deploy the Cluster Helm Chart
helm install --wait --timeout 10m acd-cluster /mnt/esb3027/helm/charts/acd-cluster
Step 7: Deploy the Manager Helm Chart
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml --timeout 10m
Step 8: Next Steps
See the Post Install Guide for post-installation steps and recommendations.
For configuration details and troubleshooting, see the full Installation Guide.
4 - Installation Guide
Installing the Primary Server Node
Mount the ESB3027 ISO
Start by mounting the core ESB3027 ISO on the system. There are no limitations on the exact
mountpoint used, but for this document, we will assume /mnt/esb3027.
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027
Run the installer
Run the install command to install the base cluster software.
/mnt/esb3027/install
(Air-gapped only) Mount the “Extras” ISO and Load Container Images
In an air-gapped environment, after running the installer, the “extras” image must be mounted. This image contains publicly available container images that would otherwise be downloaded from their source repositories.
mkdir -p /mnt/esb3027-extras
mount -o loop,ro esb3027-acd-manager-extras-X.Y.Z.iso /mnt/esb3027-extras
The public container images for third-party products such as Kafka, Redis, Zitadel, etc., need to be loaded into the container runtime. An embedded registry mirror is used to distribute these images to other nodes within the cluster, so this only needs to be performed on one machine.
/mnt/esb3027-extras/load-images
Fetch the primary node token
In order to join additional nodes into the cluster, a unique node token must be provided. This token is automatically generated on the primary node during the installation process. Retrieve this now, and take note of it for later use.
cat /var/lib/rancher/k3s/server/node-token
Join Additional Server Nodes
From each additional server node, mount the core ISO and join the cluster using the following commands.
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027
Obtain the node token from the primary server, as you will need to include it in the following command. You will also need the URL of the primary server to connect to.
/mnt/esb3027/join-server https://primary-server-ip:6443 abcdefg0123456...987654321
Where primary-server-ip is replaced with the IP address this node uses to reach the
primary server, and abcdef...321 is the contents of the node-token retrieved from the primary server.
Repeat the above steps on each additional Server node in the cluster.
Join Agent Nodes
From each additional agent node, mount the core ISO and join the cluster using the following commands.
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027
Obtain the node token from the primary server, as you will need to include it in the following command. You will also need the URL of the primary server to connect to.
/mnt/esb3027/join-agent https://primary-server-ip:6443 abcdefg0123456...987654321
Where primary-server-ip is replaced with the IP address this node uses to reach the
primary server, and abcdef...321 is the contents of the node-token retrieved from the primary server.
Repeat the above steps on each additional Agent node in the cluster.
Verify the state of the cluster
At this point, a generic Kubernetes cluster should have multiple nodes connected and be marked Ready. Verify this is the case by running the following from any one of the Server nodes.
kubectl get nodes
Each node in the cluster should be listed in the output with the status “Ready”, and the Server nodes should have “control-plane” in the listed Roles. If this is not the case, see the Troubleshooting Guide to help diagnose the problem.
Deploy the cluster helm chart
The acd-cluster helm chart, which is included on the core ISO, contains the clustering software which
is required for self-hosted clusters, but may be optional in Cloud deployments. Currently this consists
of a PostgreSQL database server, but additional components may be added in later releases.
helm install --wait --timeout 10m acd-cluster /mnt/esb3027/helm/charts/acd-cluster
Deploying the Manager chart
The acd-manager helm chart is used to deploy the acd-manager application as well as any of the
third-party services on which the chart depends. Installing this chart requires at least a minimal
configuration to be applied. To get started, either copy the default values.yaml file from the chart
directory /mnt/esb3027/helm/charts/acd-manager/values.yaml or copy the following minimal template to a
writable location such as the user’s home directory.
global:
  hosts:
    manager:
      - host: manager.local
    routers:
      - name: director-1
        address: 192.0.2.1
      - name: director-2
        address: 192.0.2.2
zitadel:
  zitadel:
    configmapConfig:
      ExternalDomain: manager.local
Where:
- manager.local is either the external IP or a resolvable DNS name used to access the manager’s cluster.
- All director instances should be listed in the global.hosts.routers section. The name field is used in URLs, and must consist of only alphanumeric characters or ‘.’, ‘-’, or ‘_’.
Further details on the available configuration options in the default values.yaml file can be found in
the Configuration Guide.
You must set at a minimum the following properties:
| Property | Type | Description | 
|---|---|---|
| global.hosts.manager | Array | List of external IP addresses or DNS hostnames for each node in the cluster | 
| global.hosts.routers | Array | List of name and address for each instance of ESB3024 AgileTV CDN Director | 
| zitadel.zitadel.configmapConfig.ExternalDomain | String | External DNS domain name or IP address of one manager node. This must match the first entry from global.hosts.manager | 
Note! The Zitadel ExternalDomain must match the hostname or IP address given in the first
global.hosts.manager entry, and MUST match the Origin used when accessing Zitadel. This is enforced by
CORS.
Hint: For non-air-gapped environments where no DNS entry for the manager exists, the third-party service
sslip.io may be used to provide a resolvable DNS name, which can be used for both the
global.hosts.manager and Zitadel ExternalDomain entries. Any IP address written as
W.X.Y.Z.sslip.io will resolve to the IP W.X.Y.Z.
Only the value used for Zitadel’s ExternalDomain may be used to access Zitadel due to CORS
restrictions. E.g. if that is set to “10.10.10.10.sslip.io”, then Zitadel must be accessed via the URL
https://10.10.10.10.sslip.io/ui/console. This must match the first entry in global.hosts.manager as
that entry will be used by internal services that need to interact with Zitadel, such as the frontend
GUI and the manager API services.
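To illustrate the sslip.io naming convention, using a hypothetical node IP of 10.10.10.10 (substitute your own address):

```shell
# Build the sslip.io hostname for a given node IP; public DNS resolves the name back to that IP
NODE_IP="10.10.10.10"
MANAGER_HOST="${NODE_IP}.sslip.io"
echo "${MANAGER_HOST}"   # prints 10.10.10.10.sslip.io
```

The resulting name would then be used for both global.hosts.manager and the Zitadel ExternalDomain.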
Importing TLS Certificates
By default, the manager will generate a self-signed TLS certificate for use with the cluster ingress.
In production environments, it is recommended to use a valid TLS certificate issued by a trusted Certificate Authority (CA).
To install the TLS certificate pair into the ingress controller, the certificate and key must be saved in a Kubernetes secret. The simplest way of doing this is to let Helm generate the secret by including the PEM formatted certificate and private key directly in the configuration values. Alternatively, the secret can be created manually and simply referenced by the configuration.
Option 1: Let Helm manage the secret
To have Helm automatically manage the secret based on the PEM formatted certificate and key, add a record
to ingress.secrets as described in the following snippet.
ingress:
  secrets:
    - name: <secret-name>
      key: |-
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----
      certificate: |-
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
Option 2: Manually creating the secret
To manually create the secret in Kubernetes, execute the following command, which will create a secret named “secret-name”:
kubectl create secret tls secret-name --cert=tls.crt --key=tls.key
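The resulting secret can be checked before wiring it into the ingress configuration. A sketch, assuming the secret name used above:

```shell
# Confirm the secret exists and has the TLS type expected by the ingress controller
kubectl get secret secret-name -o jsonpath='{.type}'   # expect kubernetes.io/tls

# Inspect the certificate stored in the secret without printing the private key
kubectl get secret secret-name -o jsonpath='{.data.tls\.crt}' | base64 -d | \
  openssl x509 -noout -subject -enddate
```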
Configure the Ingress
The ingress controllers must be configured with the name of the secret holding the certificate and key files. Additionally, the DNS hostname or IP address covered by the certificate, which must be used to access the ingress, must be set in the configuration.
ingress:
  hostname: <dns-hostname>
  tls: true
  secretName: <secret-name>
zitadel:
  ingress:
    tls:
      - hosts:
          - <dns-hostname>
        secretName: <secret-name>
confd:
  ingress:
    hostname: <dns-hostname>
    tls: true
    secretName: <secret-name>
mib-frontend:
  ingress:
    hostname: <dns-hostname>
    tls: true
    secretName: <secret-name>
- dns-hostname - A valid DNS hostname for the cluster which is covered by the certificate. For compatibility with Zitadel and CORS restrictions, this MUST be the same DNS hostname listed as the first entry in global.hosts.manager.
- secret-name - An arbitrary name used to identify the Kubernetes secret containing the TLS certificate and key. This has a maximum length limitation of 53 characters.
Loading Maxmind GeoIP databases
The Maxmind GeoIP databases are required if GeoIP lookups are to be performed by the manager. If this functionality is used, then Maxmind formatted GeoIP databases must be configured. The following databases are used by the manager.
- GeoIP2-City.mmdb - The City Database.
- GeoLite2-ASN.mmdb - The ASN Database.
- GeoIP2-Anonymous-IP.mmdb - The VPN and Anonymous IP database.
A helper utility has been provided on the ISO called generate-maxmind-volume that will prompt the user
for the locations of these 3 database files, and the name of a volume, which will be created in
Kubernetes. After running this command, set the manager.maxmindDbVolume property in the configuration
to the volume name.
To run the utility, use:
/mnt/esb3027/generate-maxmind-volume
Installing the Chart
Install the acd-manager helm chart using the following command: (This assumes the configuration is in
~/values.yaml)
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml --timeout 10m
By default, there is not expected to be much output from the helm install command itself. If you would
like to see more detailed information in real-time throughout the deployment process, you can add the
--debug flag to the command:
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml --timeout 10m --debug
Note: The --timeout 10m flag increases the default Helm timeout from 5 minutes to 10 minutes. This is recommended because the default may not be sufficient on slower hardware or in resource-constrained environments. You may need to adjust the timeout value further depending on your system’s performance or deployment conditions.
Monitor the chart rollout with the following command:
kubectl get pods
The output of which should look similar to the following:
NAME                                             READY   STATUS      RESTARTS   AGE
acd-cluster-postgresql-0                         1/1     Running     0          44h
acd-manager-6c85ddd747-5j5gt                     1/1     Running     0          43h
acd-manager-confd-558f49ffb5-n8dmr               1/1     Running     0          43h
acd-manager-gateway-7594479477-z4bbr             1/1     Running     0          43h
acd-manager-grafana-78c76d8c5-c2tl6              1/1     Running     0          43h
acd-manager-kafka-controller-0                   2/2     Running     0          43h
acd-manager-kafka-controller-1                   2/2     Running     0          43h
acd-manager-kafka-controller-2                   2/2     Running     0          43h
acd-manager-metrics-aggregator-f6ff99654-tjbfs   1/1     Running     0          43h
acd-manager-mib-frontend-67678c69df-tkklr        1/1     Running     0          43h
acd-manager-prometheus-alertmanager-0            1/1     Running     0          43h
acd-manager-prometheus-server-768f5d5c-q78xb     1/1     Running     0          43h
acd-manager-redis-master-0                       2/2     Running     0          43h
acd-manager-redis-replicas-0                     2/2     Running     0          43h
acd-manager-selection-input-844599bc4d-x7dct     1/1     Running     0          43h
acd-manager-telegraf-585dfc5ff8-n8m5c            1/1     Running     0          43h
acd-manager-victoria-metrics-single-server-0     1/1     Running     0          43h
acd-manager-zitadel-69b6546f8f-v9lkp             1/1     Running     0          43h
acd-manager-zitadel-69b6546f8f-wwcmx             1/1     Running     0          43h
acd-manager-zitadel-init-hnr5p                   0/1     Completed   0          43h
acd-manager-zitadel-setup-kjnwh                  0/2     Completed   0          43h
The output contains a “READY” column, which indicates the number of ready containers in the pod on the left and the number of requested containers on the right. Pods with status “Completed” are one-time jobs that have terminated successfully and can be ignored in this output. For “Running” pods, once all pods show the same number on both sides of the “READY” column, the rollout is complete.
If a Pod is marked “CrashLoopBackOff” or “Error”, either one of the containers in the pod has failed to deploy, or the container has terminated in an error state. See the Troubleshooting Guide to help diagnose the problem. The Kubernetes cluster will retry failed pod deployments several times, and the “RESTARTS” column shows how many times that has happened. If a pod restarts during the initial rollout, it may simply be that the state of the cluster was not yet what the pod expected, and this can be safely ignored. After the initial rollout has completed, the pods should stabilize; repeated restarts at that point may indicate that something is wrong. In that case, refer to the Troubleshooting Guide for more information.
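When a pod fails, the first diagnostic steps are usually its events and logs. A sketch, with <pod-name> taken from the kubectl get pods output:

```shell
# Show scheduling events and container status for a failing pod
kubectl describe pod <pod-name>

# Show the current container logs
kubectl logs <pod-name>

# Show logs from the previous (crashed) container instance, useful for CrashLoopBackOff
kubectl logs <pod-name> --previous
```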
Next Steps
For post-installation steps, see the Post Install Guide.
5 - Upgrade Guide
Upgrade Compatibility Matrix
The following table lists each release and the minimum previous version that supports a direct upgrade. Unless otherwise noted, production releases (even minor versions: 1.0.x, 1.2.x, 1.4.x, etc.) support direct upgrade from the previous two minor versions.
| Release | Minimum Compatible Version for Upgrade | Notes | 
|---|---|---|
| 1.4.1 | 1.4.0 | Direct upgrade from 1.2.1 is not supported. | 
| 1.4.0 | — | No direct upgrade from previous versions; requires clean install. | 
| 1.2.1 | — | No direct upgrade from previous versions; requires clean install. | 
Before You Begin
Backup your configuration and data:
Before starting the upgrade, back up your current values.yaml and any persistent data (such as
database volumes or important configuration files). For instructions on how to set up and take
backups of persistent volume claims (PVCs), see the Storage Guide. This ensures
you can recover if anything unexpected occurs during the upgrade process.
About the install script:
Running the install script from the new ISO is safe for upgrades. It is designed to be idempotent
and will not overwrite your existing configuration.
Downtime and rolling restarts:
During the upgrade, the cluster may experience brief downtime or rolling restarts as pods are updated. Plan accordingly if you have production workloads.
Upgrading the Cluster
Review Configuration Before Upgrading
Before performing the upgrade, carefully review your current configuration values against the
default values.yaml file provided on the ESB3027 ISO. This ensures that any new or changed
configuration options are accounted for, and helps prevent issues caused by outdated or missing
settings. The values.yaml file can be found at:
/mnt/esb3027/helm/charts/acd-manager/values.yaml
Compare your existing configuration (typically in ~/values.yaml or your deployment’s values
file) with the ISO’s values.yaml and update your configuration as needed before running the
upgrade commands below.
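A unified diff makes it easy to spot options that were added or changed between releases. A sketch, assuming your configuration lives in ~/values.yaml:

```shell
# Compare your deployed configuration against the defaults shipped on the ISO
diff -u ~/values.yaml /mnt/esb3027/helm/charts/acd-manager/values.yaml | less
```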
Mount the ESB3027 ISO
Start by mounting the core ESB3027 ISO on the system. There are no limitations on the exact
mountpoint used, but for this document, we will assume /mnt/esb3027.
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027
Run the installer
Run the install command to install the base cluster software.
/mnt/esb3027/install
(Air-gapped only) Mount the “Extras” ISO and Load Container Images
In an air-gapped environment, after running the installer, the “extras” image must be mounted. This image contains publicly available container images that would otherwise be downloaded from their source repositories.
mkdir -p /mnt/esb3027-extras
mount -o loop,ro esb3027-acd-manager-extras-X.Y.Z.iso /mnt/esb3027-extras
The public container images for third-party products such as Kafka, Redis, Zitadel, etc., need to be loaded into the container runtime. This only needs to be performed once per upgrade, on a single node. An embedded registry mirror is used to distribute these images to other nodes within the cluster.
/mnt/esb3027-extras/load-images
Upgrade the cluster helm chart
The acd-cluster helm chart, which is included on the core ISO, contains the clustering software
which is required for self-hosted clusters, but may be optional in Cloud deployments. Currently
this consists of a PostgreSQL database server, but additional components may be added in later
releases.
helm upgrade --wait --timeout 10m acd-cluster /mnt/esb3027/helm/charts/acd-cluster
Upgrade the Manager chart
Upgrade the acd-manager helm chart using the following command: (This assumes the configuration is
in ~/values.yaml)
helm upgrade acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml --timeout 10m
By default, there is not expected to be much output from the helm upgrade command itself. If you
would like to see more detailed information in real-time throughout the deployment process, you
can add the --debug flag to the command:
helm upgrade acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml --timeout 10m --debug
Note: The --timeout 10m flag increases the default Helm timeout from 5 minutes to 10 minutes. This is recommended because the default may not be sufficient on slower hardware or in resource-constrained environments. You may need to adjust the timeout value further depending on your system’s performance or deployment conditions.
Monitor the chart rollout with the following command:
kubectl get pods
The output of which should look similar to the following:
NAME                                             READY   STATUS      RESTARTS   AGE
acd-cluster-postgresql-0                         1/1     Running     0          44h
acd-manager-6c85ddd747-5j5gt                     1/1     Running     0          43h
acd-manager-confd-558f49ffb5-n8dmr               1/1     Running     0          43h
acd-manager-gateway-7594479477-z4bbr             1/1     Running     0          43h
acd-manager-grafana-78c76d8c5-c2tl6              1/1     Running     0          43h
acd-manager-kafka-controller-0                   2/2     Running     0          43h
acd-manager-kafka-controller-1                   2/2     Running     0          43h
acd-manager-kafka-controller-2                   2/2     Running     0          43h
acd-manager-metrics-aggregator-f6ff99654-tjbfs   1/1     Running     0          43h
acd-manager-mib-frontend-67678c69df-tkklr        1/1     Running     0          43h
acd-manager-prometheus-alertmanager-0            1/1     Running     0          43h
acd-manager-prometheus-server-768f5d5c-q78xb     1/1     Running     0          43h
acd-manager-redis-master-0                       2/2     Running     0          43h
acd-manager-redis-replicas-0                     2/2     Running     0          43h
acd-manager-selection-input-844599bc4d-x7dct     1/1     Running     0          43h
acd-manager-telegraf-585dfc5ff8-n8m5c            1/1     Running     0          43h
acd-manager-victoria-metrics-single-server-0     1/1     Running     0          43h
acd-manager-zitadel-69b6546f8f-v9lkp             1/1     Running     0          43h
acd-manager-zitadel-69b6546f8f-wwcmx             1/1     Running     0          43h
acd-manager-zitadel-init-hnr5p                   0/1     Completed   0          43h
acd-manager-zitadel-setup-kjnwh                  0/2     Completed   0          43h
Rollback Procedure
If you encounter issues after upgrading, you can use Helm’s rollback feature to revert the acd-cluster and acd-manager deployments to their previous working versions.
To see the revision history for a release:
helm history acd-cluster
helm history acd-manager
To roll back to the previous revision (or to a specific revision number):
helm rollback acd-cluster <REVISION>
helm rollback acd-manager <REVISION>
Replace <REVISION> with the desired revision number: 1 is the initial deployment,
2 the first upgrade, and so on. To roll back to the immediately previous revision,
omit the revision number entirely:
helm rollback acd-cluster
helm rollback acd-manager
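If you prefer to pass an explicit revision number, the previous revision can be extracted from the `helm history` table output. The following is a minimal sketch assuming Helm's default table format; the `previous_revision` helper is illustrative and not part of Helm or the product:

```shell
# Sketch: print the previous (next-to-latest) revision number from
# `helm history <release>` table output, for use with `helm rollback`.
# The helper name `previous_revision` is illustrative, not part of Helm.
previous_revision() {
  # Skip the header row; remember the latest and next-to-latest revisions.
  awk 'NR > 1 { prev = cur; cur = $1 } END { if (prev != "") print prev }'
}

# Usage against a live cluster (not executed here):
#   helm rollback acd-manager "$(helm history acd-manager | previous_revision)"
```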
After performing a rollback, monitor the pods to ensure the cluster returns to a healthy state:
kubectl get pods
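As a quick filter, pods that have not settled into a healthy state can be picked out of the `kubectl get pods` output. This is a sketch only; the `unhealthy_pods` helper is an illustration, not a shipped tool:

```shell
# Sketch: filter `kubectl get pods` output down to pods whose STATUS is
# neither Running nor Completed. The helper name is illustrative only.
unhealthy_pods() {
  # Column 3 of the default output is STATUS; skip the header row.
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { print $1 }'
}

# Usage against a live cluster (not executed here):
#   kubectl get pods | unhealthy_pods
```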
Refer to the Troubleshooting Guide if you encounter further issues.
Complete Cluster Replacement (Wipe and Reinstall)
If you need to upgrade between versions that do not support direct upgrade (as indicated in the upgrade compatibility matrix), you can perform a complete cluster replacement. This process will completely remove the existing cluster from all nodes and allow you to install the new version as if starting from scratch.
Warning: This method permanently deletes all cluster data, configuration, and persistent volumes from every node; in terms of data loss it is as destructive as reinstalling the operating system. No rollback is possible. Ensure you have taken full backups of any data you wish to preserve before proceeding.
Step 1: Remove the Existing Cluster
On each node in the cluster, run the built-in k3s scripts to stop all cluster components and then uninstall them:
sudo /usr/local/bin/k3s-killall.sh
sudo /usr/local/bin/k3s-uninstall.sh
Repeat this process on every node that was part of the cluster.
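If SSH access to all nodes is available, the wipe can be scripted. The sketch below uses hypothetical hostnames; substitute your actual node names. By default it performs a dry run that only prints the commands:

```shell
# Sketch: wipe the cluster from every node over SSH. The hostnames below
# are placeholders; substitute your actual node names. With DRY_RUN=1
# (the default) the commands are only printed; set DRY_RUN=0 to execute.
NODES="node1.example.com node2.example.com node3.example.com"
WIPE_CMD="sudo /usr/local/bin/k3s-killall.sh && sudo /usr/local/bin/k3s-uninstall.sh"

for node in $NODES; do
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "ssh $node \"$WIPE_CMD\""
  else
    ssh "$node" "$WIPE_CMD"
  fi
done
```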
Step 2: Install the New Version
After all nodes have been wiped, follow the standard installation procedure to deploy the new version of the cluster. See the Installation Guide for detailed steps.
This method allows you to jump between versions that lack a supported direct upgrade path without reinstalling or reconfiguring the operating system, but all cluster data and configuration will be lost.
6 - Post Installation Guide
After installing the cluster, there are a few steps that should be taken to complete the setup.
Create an Admin User
The ESB3027 AgileTV CDN Manager ships with a default user account, but this account is intended only for the initial login and for creating a real user. Authenticating to other services, such as the MIB Frontend Configuration GUI, may not work with this pre-provisioned account.
You will need the IP address or DNS name specified in the configuration as both the first manager host and the Zitadel External Domain.
global:
  hosts:
    manager:
      - host: manager.local
Using a web browser, connect to the following URL, replacing manager.local with the IP or DNS name
from the configuration above:
https://manager.local/ui/console
You must authenticate using the default credentials:
Username: admin@agiletv.dev
Password: Password1!
Zitadel will prompt you to set up Multi-Factor Authentication; you MUST skip this step for now, as MFA is not yet supported everywhere in the manager’s APIs.
On the menu bar at the top of the screen, click “Users” and proceed to create a New User. Enter the required information, and for now ensure the “Email Verified” and “Set Initial Password” boxes are checked. If “Email Verified” is left unchecked, Zitadel will attempt to send a confirmation email, but on a fresh installation no SMTP server details have been configured yet.
You should now be able to authenticate to the MIB Frontend GUI at https://manager.local/gui using
the credentials for the new user.
Configure an SMTP Server
Zitadel requires an SMTP server in order to send validation emails and to support user
communication such as password resets. If you have an SMTP server, configure it by logging
back into the Zitadel Web UI at https://manager.local/ui/console, clicking “Default Settings”
at the top of the page, and selecting the SMTP provider from the menu on the left. Once this
has been done, each newly created user account receives an email at its configured address
containing a verification link, which must be clicked before the account becomes valid.