Installation

Installation Procedure

Prerequisites

  • One or more physical or virtual machines running Red Hat Enterprise Linux 8 or 9, or a compatible operating system.
  • A minimum of 8 GB of available RAM.
  • A resolvable fully-qualified DNS hostname pointing to the node.
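
You can quickly verify the memory and hostname prerequisites from a shell on the node; the commands below are illustrative, and any equivalent tools work:

# Show total and available memory
free -h

# Print the fully-qualified hostname and confirm it resolves
hostname -f
getent hosts "$(hostname -f)"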

ESB3027 AgileTV CDN Manager 1.0 is only supported in a single-node self-hosted configuration.

For the 1.0 release of ESB3027 AgileTV CDN Manager, the following limitations apply:

  1. Only a single-node, self-hosted cluster configuration is supported.
  2. The release does not support firewalld, or SELinux in enforcing mode. firewalld must be disabled and SELinux set to permissive (or disabled) before starting:
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
setenforce 0
systemctl disable --now firewalld
  3. The installation depends on several binaries that are placed in the /usr/local/bin directory on the node. On RHEL-based systems, if a root shell is obtained using sudo, that directory may not be present in the PATH. Either configure sudo so that it does not exclude that directory from the PATH, or use su -, which does not exhibit this behavior (see the example after this list). You can verify that the directory is included in the PATH with the following command:
echo $PATH | grep /usr/local/bin
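
For example, on systems where sudo filters the PATH, the secure_path setting in the sudoers file controls which directories are retained; the value below is illustrative, and the exact default varies between distributions. You can also confirm the SELinux and firewalld state at the same time:

# Confirm SELinux and firewalld are in the required state
getenforce                      # should print Permissive or Disabled
systemctl is-enabled firewalld  # should print disabled (or masked)

# If /usr/local/bin is missing from the path, edit sudoers with visudo and
# ensure secure_path includes it, for example:
#   Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
visudo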

Installing the Manager

The following procedure assumes that the latest Manager ISO has been copied to the node.

  1. Mount the ESB3027 ISO. This example assumes the ISO is mounted on /mnt, but any mountpoint may be used. Substitute your actual mountpoint for /mnt in all subsequent commands.
mount -o loop,ro esb3027-acd-manager-1.0.0.iso /mnt
  2. Run the install.sh script from the ISO.
/mnt/install.sh
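
After the script completes, you can sanity-check that the installed binaries are reachable from the current shell. The exact set of tools placed in /usr/local/bin is not enumerated here, but helm and kubectl are both used later in this procedure:

# Should print paths for both binaries; if not, revisit the PATH note above
command -v helm kubectl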

Generating the Configuration

The following instructions should all be executed from the primary node unless otherwise specified.

Loading SSL Certificates (Optional)

If you wish to include valid SSL certificates for the Manager, you must first load them into a Kubernetes Secret. A helper script load-certificates.sh has been provided on the ISO to read the Certificate and Key files and create the Kubernetes Secret for you. If this is not performed, the Manager will generate and use a self-signed certificate automatically.

The load-certificates.sh script uses a simple wizard to prompt for the files containing the SSL certificate and the SSL private key, and for the name of the secret. The name can be any string containing alphanumeric characters and either - or _.

/mnt/load-certificates.sh

Enter the path to the SSL Certificate File: /root/example.com.cert
Enter the path to the SSL Key File: /root/example.com.key
Enter the name of the secret containing the SSL Certificate and Key (e.g. my-ssl-certs): example-com-ssl-key
secret/example-com-ssl-key created
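
You can confirm that the secret was created in the cluster (the name below matches the example above):

kubectl get secret example-com-ssl-key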

Generating the Zitadel Masterkey (Optional)

Zitadel is the user identity management software used to manage authentication across the system. Certain data in Zitadel, such as credential information, needs to be kept secure. Zitadel uses a 32-byte cryptographic key to protect this information at rest, and it is highly recommended that this key be unique to each environment.

If this step is not performed, a default master key is used instead. Using a hardcoded master key poses a significant security risk, as it may expose sensitive data to unauthorized access. It is highly recommended not to skip this step.

The load-zitadel-key.sh script generates a secure 32-byte random key and loads that key into a Kubernetes Secret for Zitadel to use. This step should only be performed once, as changing the key will render all encrypted credential information unrecoverable. For the secret name, any string containing alphanumeric characters and either - or _ is allowed.

/mnt/load-zitadel-key.sh

Enter the name of the secret containing the Zitadel Masterkey (e.g. my-zitadel-key): zitadel-key
secret/zitadel-key created
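
As before, you can confirm that the secret exists. Purely for illustration, a 32-character random key of the kind the script generates could be produced with OpenSSL; the script already does this for you, so there is no need to run such a command manually:

kubectl get secret zitadel-key

# Illustrative only: print a 32-character random key (16 random bytes, hex-encoded)
openssl rand -hex 16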

Generating the Configuration Template

The last step prior to installing the Manager software is to generate a values.yaml file, which Helm uses to provide configuration values for the deployment. This is done by running the configure.sh script on the ISO. The script prompts for a few pieces of information, including the “external domain”, which is the DNS-resolvable fully-qualified domain name that must be used to access the Manager software, and the optional names of the secrets generated in the previous sections. Ensure that you have sufficient privileges to write to the current directory.

/mnt/configure.sh

Enter the domain you want to use for Zitadel (e.g. zitadel.example.com): manager.example.com
Enter the name of the secret containing the SSL Certificate and Key (e.g. my-ssl-certs): example-com-ssl-key
Enter the name of the secret containing the Zitadel Masterkey (e.g. my-zitadel-key): zitadel-key
Wrote /root/values.yaml

After running this wizard, a values.yaml file is created in the current directory. Within this file there is a commented-out section containing the routers, gui, and geoip addresses. Optionally fill out this section before continuing so that it resembles the following structure.

NOTE: This file is YAML, and indentation and whitespace are part of the format. Before continuing, it is recommended that you validate the file's syntax, for example by pasting its contents into an online YAML validator such as yamllint.com or another trusted tool.

gateway:
  configMap:
    annotations: []
    routers:
      - name: router1
        address: 10.16.48.100
      - name: router2
        address: 10.16.48.101
    gui:
      host: 10.16.48.100
      port: 7001
    geoip:
      host: 10.16.48.100
      port: 5003
  ingress:
    tls:
      - hosts:
        - 10.16.48.140.sslip.io
        secretName: null
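
If you prefer a local check over an online tool, the syntax can be validated from the command line, assuming Python 3 with the PyYAML module is available on the node:

python3 -c 'import yaml; yaml.safe_load(open("values.yaml")); print("values.yaml: OK")'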

Deploying ESB3027 AgileTV CDN Manager

The ESB3027 software is deployed using Helm.

helm install acd-manager \
    --values values.yaml \
    --atomic \
    --timeout=10m \
    /mnt/charts/helm/acd-manager

The use of the --atomic and --timeout flags will cause Helm to wait up to 10 minutes for all pods to be in the Ready state. For example, a timeout might occur if there are insufficient resources available in the cluster or if a misconfiguration, such as incorrect environment variables or missing secrets, prevents a pod from starting. If the timeout is reached before all pods are Ready, the entire installation will be automatically rolled back.
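
Once the command returns successfully, you can inspect the release:

helm status acd-manager   # release status, revision, and chart notes
helm list                 # all releases in the current namespace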

Updating the Deployment

In order to update an existing Helm deployment, whether to modify configuration values or to upgrade to a newer software version, use the helm upgrade command. Its syntax is exactly the same as for helm install, and the same parameters used at install time must be provided. The helm upgrade command also accepts an --install flag which, if supplied, upgrades an existing deployment or installs a new one if none is already present.

helm upgrade acd-manager \
    /mnt/charts/helm/acd-manager \
    --values values.yaml \
    --atomic \
    --timeout 10m
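
Helm keeps a revision history for each release, which is useful when auditing upgrades or rolling back manually (the revision number below is an example):

helm history acd-manager       # list all revisions of the release
helm rollback acd-manager 1    # roll back to revision 1 if an upgrade misbehaves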

Verifying the Installation

Verify the Ready status of the Running pods with the following command.

$ kubectl get pods
NAME                                           READY   STATUS      RESTARTS      AGE
acd-manager-gateway-6d489c5c66-jb49d           1/1     Running     0             20h
acd-manager-gateway-test-connection            0/1     Completed   0             17h
acd-manager-kafka-controller-0                 1/1     Running     0             20h
acd-manager-redis-master-0                     1/1     Running     0             20h
acd-manager-rest-api-668d889b76-sn2v4          1/1     Running     0             20h
acd-manager-selection-input-5fc9f4df4c-qz5mw   1/1     Running     0             20h
acd-manager-zitadel-ccb5d9674-9qpn5            1/1     Running     0             20h
acd-manager-zitadel-init-6l8vg                 0/1     Completed   0             20h
acd-manager-zitadel-setup-bmbh9                0/2     Completed   0             20h
postgresql-0                                   1/1     Running     0             20h

Each pod in the output has a READY status. This represents the number of containers in the pod that are Ready, compared to the total number of containers in the pod. Pods marked as “Completed” are typically one-time jobs or initialization tasks that have run to completion and finished successfully.
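
Rather than polling kubectl get pods manually, you can wait for all Running pods to become Ready with a single command; the timeout value below is an example, and the field selector excludes Completed job pods, which never report Ready:

kubectl wait --for=condition=Ready pods --all \
    --field-selector=status.phase=Running --timeout=300s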