Upgrade Guide
Upgrade Compatibility Matrix
The following table lists each release and the minimum previous version that supports a direct upgrade. Unless otherwise noted, production releases (even minor versions: 1.0.x, 1.2.x, 1.4.x, etc.) support direct upgrade from the previous two minor versions.
| Release | Minimum Compatible Version for Upgrade | Notes |
|---|---|---|
| 1.4.1 | 1.4.0 | Direct upgrade from 1.2.1 is not supported. |
| 1.4.0 | — | No direct upgrade from previous versions; requires clean install. |
| 1.2.1 | — | No direct upgrade from previous versions; requires clean install. |
Before You Begin
Back up your configuration and data:
Before starting the upgrade, back up your current values.yaml and any persistent data (such as
database volumes or important configuration files). For instructions on how to set up and take
backups of persistent volume claims (PVCs), see the Storage Guide. This ensures
you can recover if anything unexpected occurs during the upgrade process.
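As a minimal sketch of the pre-upgrade backup, assuming your configuration lives in ~/values.yaml (refer to the Storage Guide for PVC backup procedures):
# Keep a dated copy of the current configuration
cp ~/values.yaml ~/values.yaml.$(date +%Y%m%d).bak
# List the persistent volume claims that hold data worth backing up
kubectl get pvc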
About the install script:
Running the install script from the new ISO is safe for upgrades. It is designed to be idempotent
and will not overwrite your existing configuration.
Downtime and rolling restarts:
During the upgrade, the cluster may experience brief downtime or rolling restarts as pods are updated. Plan accordingly if you have production workloads.
Upgrading the Cluster
Review Configuration Before Upgrading
Before performing the upgrade, carefully review your current configuration values against the
default values.yaml file provided on the ESB3027 ISO. This ensures that any new or changed
configuration options are accounted for, and helps prevent issues caused by outdated or missing
settings. The values.yaml file can be found at:
/mnt/esb3027/helm/charts/acd-manager/values.yaml
Compare your existing configuration (typically in ~/values.yaml or your deployment’s values
file) with the ISO’s values.yaml and update your configuration as needed before running the
upgrade commands below.
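For example, assuming your existing configuration is in ~/values.yaml and the ISO has been mounted as described in the next step, a unified diff highlights any new or changed options:
diff -u ~/values.yaml /mnt/esb3027/helm/charts/acd-manager/values.yaml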
Mount the ESB3027 ISO
Start by mounting the core ESB3027 ISO on the system. There are no limitations on the exact
mountpoint used, but for this document, we will assume /mnt/esb3027.
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027
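To confirm the ISO mounted correctly before proceeding, list its contents; you should see the install script and the helm directory referenced in this guide:
ls /mnt/esb3027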
Run the installer
Run the install command to install the base cluster software.
/mnt/esb3027/install
(Air-gapped only) Mount the “Extras” ISO and Load Container Images
In an air-gapped environment, the “extras” ISO must be mounted after running the installer. This image contains publicly available container images that would otherwise be downloaded from their source repositories.
mkdir -p /mnt/esb3027-extras
mount -o loop,ro esb3027-acd-manager-extras-X.Y.Z.iso /mnt/esb3027-extras
The public container images for third-party products such as Kafka, Redis, and Zitadel must be loaded into the container runtime. This only needs to be performed once per upgrade, on a single node; an embedded registry mirror distributes these images to the other nodes in the cluster.
/mnt/esb3027-extras/load-images
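To spot-check that the images were loaded into the container runtime (a k3s cluster is assumed here, consistent with the k3s scripts referenced later in this guide), you can query the runtime directly:
sudo k3s crictl images | grep -Ei 'kafka|redis|zitadel'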
Upgrade the cluster helm chart
The acd-cluster helm chart, included on the core ISO, contains the clustering software that is required for self-hosted clusters but may be optional in Cloud deployments. Currently this consists of a PostgreSQL database server, but additional components may be added in later releases.
helm upgrade --wait --timeout 10m acd-cluster /mnt/esb3027/helm/charts/acd-cluster
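If you want to verify that the database is back up before continuing, you can watch the rollout. The StatefulSet name below is inferred from the pod listing shown later in this guide (acd-cluster-postgresql-0); confirm it matches your deployment before relying on it:
kubectl rollout status statefulset/acd-cluster-postgresql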
Upgrade the Manager chart
Upgrade the acd-manager helm chart using the following command (this assumes the configuration is in ~/values.yaml):
helm upgrade acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml --timeout 10m
By default, the helm upgrade command itself is not expected to produce much output. If you would like to see more detailed information in real time throughout the deployment process, add the --debug flag to the command:
helm upgrade acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml --timeout 10m --debug
Note: The --timeout 10m flag increases the default Helm timeout from 5 minutes to 10 minutes. This is recommended because the default may not be sufficient on slower hardware or in resource-constrained environments. You may need to adjust the timeout value further depending on your system’s performance or deployment conditions.
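For example, on particularly slow hardware you might extend the timeout further:
helm upgrade acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml --timeout 20m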
Monitor the chart rollout with the following command:
kubectl get pods
The output should look similar to the following:
NAME READY STATUS RESTARTS AGE
acd-cluster-postgresql-0 1/1 Running 0 44h
acd-manager-6c85ddd747-5j5gt 1/1 Running 0 43h
acd-manager-confd-558f49ffb5-n8dmr 1/1 Running 0 43h
acd-manager-gateway-7594479477-z4bbr 1/1 Running 0 43h
acd-manager-grafana-78c76d8c5-c2tl6 1/1 Running 0 43h
acd-manager-kafka-controller-0 2/2 Running 0 43h
acd-manager-kafka-controller-1 2/2 Running 0 43h
acd-manager-kafka-controller-2 2/2 Running 0 43h
acd-manager-metrics-aggregator-f6ff99654-tjbfs 1/1 Running 0 43h
acd-manager-mib-frontend-67678c69df-tkklr 1/1 Running 0 43h
acd-manager-prometheus-alertmanager-0 1/1 Running 0 43h
acd-manager-prometheus-server-768f5d5c-q78xb 1/1 Running 0 43h
acd-manager-redis-master-0 2/2 Running 0 43h
acd-manager-redis-replicas-0 2/2 Running 0 43h
acd-manager-selection-input-844599bc4d-x7dct 1/1 Running 0 43h
acd-manager-telegraf-585dfc5ff8-n8m5c 1/1 Running 0 43h
acd-manager-victoria-metrics-single-server-0 1/1 Running 0 43h
acd-manager-zitadel-69b6546f8f-v9lkp 1/1 Running 0 43h
acd-manager-zitadel-69b6546f8f-wwcmx 1/1 Running 0 43h
acd-manager-zitadel-init-hnr5p 0/1 Completed 0 43h
acd-manager-zitadel-setup-kjnwh 0/2 Completed 0 43h
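To follow the rollout continuously rather than re-running the command, you can add the watch flag and wait until all pods report Running or Completed:
kubectl get pods -w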
Rollback Procedure
If you encounter issues after upgrading, you can use Helm’s rollback feature to revert the acd-cluster and acd-manager deployments to their previous working versions.
To see the revision history for a release:
helm history acd-cluster
helm history acd-manager
To roll back to a particular revision:
helm rollback acd-cluster <REVISION>
helm rollback acd-manager <REVISION>
Replace <REVISION> with the desired revision number: 1 for the initial deployment, 2 for the first upgrade, and so on. To roll back to the immediately previous revision, omit the revision number:
helm rollback acd-cluster
helm rollback acd-manager
After performing a rollback, monitor the pods to ensure the cluster returns to a healthy state:
kubectl get pods
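To confirm which revision is now active, you can also inspect the releases directly:
helm status acd-manager
helm history acd-manager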
Refer to the Troubleshooting Guide if you encounter further issues.
Complete Cluster Replacement (Wipe and Reinstall)
If you need to upgrade between versions that do not support direct upgrade (as indicated in the upgrade compatibility matrix), you can perform a complete cluster replacement. This process will completely remove the existing cluster from all nodes and allow you to install the new version as if starting from scratch.
Warning: This method will permanently delete all cluster data, configuration, and persistent volumes from every node, leaving each machine as if the cluster had never been installed. There is no rollback possible. Ensure you have taken full backups of any data you wish to preserve before proceeding.
Step 1: Remove the Existing Cluster
On each node in the cluster, run the built-in k3s scripts to stop all cluster components and then uninstall them:
sudo /usr/local/bin/k3s-killall.sh
sudo /usr/local/bin/k3s-uninstall.sh
Repeat this process on every node that was part of the cluster.
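If you manage the nodes over SSH, a loop along these lines can save time (node1, node2, and node3 are hypothetical hostnames; substitute your own):
for node in node1 node2 node3; do
  ssh "$node" 'sudo /usr/local/bin/k3s-killall.sh && sudo /usr/local/bin/k3s-uninstall.sh'
done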
Step 2: Install the New Version
After all nodes have been wiped, follow the standard installation procedure to deploy the new version of the cluster. See the Installation Guide for detailed steps.
This method allows you to jump between versions that do not support direct upgrade without reinstalling or reconfiguring the operating system, but all cluster data and configuration will be lost.