Air-Gapped Deployment

Installation procedures for air-gapped environments

Overview

This guide describes the installation of the AgileTV CDN Manager in air-gapped environments (no internet access). Air-gapped deployments require additional preparation compared to connected deployments.

Key differences from connected deployments:

  • Both Installation ISO and Extras ISO are required on all nodes
  • OS installation ISO must be mounted on all nodes for package access
  • Container images must be loaded from the Extras ISO on each node
  • Additional firewall considerations for OS package repositories

Prerequisites

Required ISOs

Before beginning installation, obtain the following:

ISO                   Filename                                   Purpose
Installation ISO      esb3027-acd-manager-X.Y.Z.iso              Kubernetes cluster and Manager application
Extras ISO            esb3027-acd-manager-extras-X.Y.Z.iso       Container images for air-gapped environments
OS Installation ISO   RHEL 9 or compatible clone                 Operating system packages (required on all nodes)

Hardware Requirements

Refer to the System Requirements Guide for hardware specifications.

  • Single-Node (Lab): Minimum 8 cores, 16 GiB RAM, 128 GiB disk
  • Multi-Node (Production): Minimum 3 server nodes for high availability

Network Configuration

Air-gapped environments may have internal network mirrors for OS packages. If no internal mirror exists, the OS installation ISO must be mounted on each node to provide packages during installation.

Ensure that required firewall ports are configured before installation. See the Networking Guide for complete firewall configuration requirements.
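As a minimal sketch, assuming firewalld is in use: the example below opens only the Kubernetes API port used by the join commands in Steps 5 and 6; the Networking Guide lists the full set of required ports.

# Open the Kubernetes API port (6443/tcp) between cluster nodes
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --reload
firewall-cmd --list-ports   # confirm the port is open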

SELinux

If SELinux is to be used, it must be set to “Enforcing” mode before running the installer script. The installer will configure appropriate SELinux policies automatically. SELinux cannot be enabled after installation.
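To check the current mode and switch to Enforcing before running the installer (standard RHEL tooling; a system that reports Disabled needs the config change plus a reboot, since setenforce cannot enable a disabled SELinux):

getenforce                  # prints Enforcing, Permissive, or Disabled
setenforce 1                # switch the running system to Enforcing
sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config   # persist across reboots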

Installation Steps

Step 1: Prepare All Nodes

On each node (primary server, additional servers, and agents):

  1. Mount the OS installation ISO:
mkdir -p /mnt/os
mount -o loop,ro /path/to/rhel-9.iso /mnt/os
  2. Configure local repository (if no internal mirror):
cat > /etc/yum.repos.d/local.repo <<EOF
[local]
name=Local OS Repository
baseurl=file:///mnt/os/BaseOS
enabled=1
gpgcheck=0
EOF

# Also configure AppStream if needed
cat >> /etc/yum.repos.d/local.repo <<EOF

[appstream]
name=AppStream Repository
baseurl=file:///mnt/os/AppStream
enabled=1
gpgcheck=0
EOF
  3. Verify repository is accessible:
dnf repolist
dnf makecache
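
Loop mounts do not persist across reboots. If a node may reboot before installation completes, an /etc/fstab entry keeps the repository available (using the example path from above):

echo '/path/to/rhel-9.iso  /mnt/os  iso9660  loop,ro  0 0' >> /etc/fstab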

Step 2: Prepare the Primary Server Node

Mount the installation ISOs on the primary server node:

# Mount Installation ISO
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027

# Mount Extras ISO
mkdir -p /mnt/esb3027-extras
mount -o loop,ro esb3027-acd-manager-extras-X.Y.Z.iso /mnt/esb3027-extras

Step 3: Install the Base Cluster on Primary Server

Run the installer to set up the K3s Kubernetes cluster:

/mnt/esb3027/install

This installs:

  • K3s Kubernetes distribution
  • Longhorn distributed storage
  • CloudNativePG operator for PostgreSQL
  • Base system dependencies

Important: After the installer completes, verify that all system pods in both namespaces are in the Running state before proceeding:

# Check kube-system namespace (Kubernetes core components)
kubectl get pods -n kube-system

# Check longhorn-system namespace (distributed storage)
kubectl get pods -n longhorn-system

All pods should show Running status. If any pods are still Pending or ContainerCreating, wait until they are ready. Proceeding with incomplete system pods can cause subsequent steps to fail in unpredictable ways.
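Instead of polling manually, kubectl can block until the pods report Ready (a five-minute timeout is shown; adjust for your hardware):

# --field-selector skips Completed job pods, which never reach Ready
kubectl wait --namespace kube-system --for=condition=Ready pod --all --field-selector=status.phase=Running --timeout=300s
kubectl wait --namespace longhorn-system --for=condition=Ready pod --all --field-selector=status.phase=Running --timeout=300s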

This verification confirms:

  • K3s cluster is operational
  • Longhorn distributed storage is running
  • CloudNativePG operator is deployed
  • All core components are healthy before continuing

Step 4: Retrieve the Node Token

Retrieve the node token for joining additional nodes:

cat /var/lib/rancher/k3s/server/node-token

Save this token for use on additional nodes. Also note the IP address of the primary server node.
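The token can be captured into a shell variable and the server's addresses listed in the same session (treat the token as a secret; NODE_TOKEN is just a convenience name used here):

NODE_TOKEN=$(cat /var/lib/rancher/k3s/server/node-token)
hostname -I    # lists the primary server's IP addresses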

Step 5: Join Additional Server Nodes (Multi-Node Only)

On each additional server node:

  1. Mount the OS ISO:
mkdir -p /mnt/os
mount -o loop,ro /path/to/rhel-9.iso /mnt/os

# Configure local repository
cat > /etc/yum.repos.d/local.repo <<EOF
[local]
name=Local OS Repository
baseurl=file:///mnt/os/BaseOS
enabled=1
gpgcheck=0

[appstream]
name=AppStream Repository
baseurl=file:///mnt/os/AppStream
enabled=1
gpgcheck=0
EOF

dnf makecache
  2. Mount the Installation ISOs:
# Mount Installation ISO
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027

# Mount Extras ISO
mkdir -p /mnt/esb3027-extras
mount -o loop,ro esb3027-acd-manager-extras-X.Y.Z.iso /mnt/esb3027-extras
  3. Join the cluster:

Run the join script:

/mnt/esb3027/join-server https://<primary-server-ip>:6443 <node-token>

Replace <primary-server-ip> with the IP address of the primary server and <node-token> with the token retrieved in Step 4.
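
Before joining, it can help to confirm the API endpoint is reachable from this node (10.0.0.10 is a hypothetical address; -k skips verification of the cluster's self-signed certificate):

curl -ks https://10.0.0.10:6443/ -o /dev/null -w '%{http_code}\n'
# any HTTP status (typically 401) confirms port 6443 is reachable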

  4. Verify the node joined successfully:
kubectl get nodes

Repeat for each server node. A minimum of 3 server nodes is required for high availability.

Step 6: Join Agent Nodes (Optional)

On each agent node:

  1. Mount the OS ISO:
mkdir -p /mnt/os
mount -o loop,ro /path/to/rhel-9.iso /mnt/os

# Configure local repository
cat > /etc/yum.repos.d/local.repo <<EOF
[local]
name=Local OS Repository
baseurl=file:///mnt/os/BaseOS
enabled=1
gpgcheck=0

[appstream]
name=AppStream Repository
baseurl=file:///mnt/os/AppStream
enabled=1
gpgcheck=0
EOF

dnf makecache
  2. Mount the Installation ISOs:
# Mount Installation ISO
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027

# Mount Extras ISO
mkdir -p /mnt/esb3027-extras
mount -o loop,ro esb3027-acd-manager-extras-X.Y.Z.iso /mnt/esb3027-extras
  3. Join the cluster as an agent:

Run the join script:

/mnt/esb3027/join-agent https://<primary-server-ip>:6443 <node-token>
  4. Verify the node joined successfully from an existing server node:
kubectl get nodes

Agent nodes provide additional workload capacity but do not participate in the control plane quorum.

Step 7: Load Container Images

On each node in the cluster:

/mnt/esb3027-extras/load-images

This script loads all container images from the Extras ISO into the local container runtime.

Important: This step must be performed on every node (primary server, additional servers, and agents) before deploying the Manager application.
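
If the nodes are reachable over SSH, a loop can run the script everywhere from one host (the hostnames below are hypothetical, borrowed from the Step 8 example output; the Extras ISO must already be mounted on each node):

for node in k3s-server-0 k3s-server-1 k3s-server-2 k3s-agent-1; do
  ssh root@"$node" /mnt/esb3027-extras/load-images
done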

Step 8: Verify Cluster Status

After all nodes are joined and images are loaded, verify the cluster is operational:

1. Verify all nodes are ready:

kubectl get nodes

Expected output:

NAME                 STATUS   ROLES                       AGE   VERSION
k3s-server-0         Ready    control-plane,etcd,master   5m    v1.33.4+k3s1
k3s-server-1         Ready    control-plane,etcd,master   3m    v1.33.4+k3s1
k3s-server-2         Ready    control-plane,etcd,master   2m    v1.33.4+k3s1
k3s-agent-1          Ready    <none>                      1m    v1.33.4+k3s1

2. Verify system pods in both namespaces are running:

# Check kube-system namespace (Kubernetes core components)
kubectl get pods -n kube-system

# Check longhorn-system namespace (distributed storage)
kubectl get pods -n longhorn-system

All pods should show Running status.

3. Verify container images are loaded:

crictl images | grep acd-manager

Step 9: Deploy the Manager Helm Chart

For complete instructions on deploying the CDN Manager Helm chart, including configuration file setup, MaxMind GeoIP database loading, TLS certificate configuration, deployment commands, and verification steps, see the Helm Chart Installation Guide.

This guide covers the common deployment steps that apply to all installation types. After completing the helm chart installation steps, proceed to Post-Installation below.
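As an illustrative sketch only (the chart path, release name, and namespace below are assumptions; the Helm Chart Installation Guide has the authoritative commands):

helm upgrade --install acd-manager /path/to/manager-chart \
  --namespace acd-manager --create-namespace \
  -f values.yaml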

Post-Installation

After installation completes, proceed to the Next Steps guide for:

  • Initial user configuration
  • Accessing the web interfaces
  • Configuring authentication
  • Setting up monitoring

Accessing the System

Refer to the Accessing the System section in the Getting Started guide for service URLs and default credentials.

Note: A self-signed SSL certificate is deployed by default. You will need to accept the certificate warning in your browser.

Updating MaxMind GeoIP Databases

If using GeoIP-based routing, load the MaxMind databases:

/mnt/esb3027/generate-maxmind-volume

The utility will prompt for the database file locations and volume name. Reference the volume in your values.yaml:

manager:
  maxmindDbVolume: maxmind-geoip-2026-04
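
A values change of this kind takes effect when the release is re-applied (release name and chart path are the same assumptions as in the Step 9 sketch):

helm upgrade acd-manager /path/to/manager-chart -f values.yaml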

See the Operations Guide for database update procedures.

Troubleshooting

Image Pull Errors

If pods fail with image pull errors:

  1. Verify the load-images script completed successfully on all nodes
  2. Check container runtime image list:
    crictl images | grep <image-name>
    
  3. Ensure image tags in Helm chart match tags on the Extras ISO
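
To see the exact error a failing pod reports, describe it and read the Events section (pod name and namespace are placeholders):

kubectl describe pod <pod-name> -n <namespace>
# Events will show ErrImagePull / ImagePullBackOff with the image reference that failed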

OS Package Errors

If the installer reports missing OS packages:

  1. Verify OS ISO is mounted on the affected node
  2. Check repository configuration:
    dnf repolist
    dnf info <package-name>
    
  3. Ensure the ISO matches the installed OS version
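
To confirm the ISO is actually mounted, check the mount point and repository tree directly:

findmnt /mnt/os        # should show an iso9660 loop mount
ls /mnt/os/BaseOS      # the BaseOS repository should be visible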

Longhorn Volume Issues

If Longhorn volumes fail to mount:

  1. Verify the load-images script completed successfully on all nodes
  2. Check Longhorn system pods:
    kubectl get pods -n longhorn-system
    
  3. Review Longhorn UI via port-forward:
    kubectl port-forward -n longhorn-system svc/longhorn-frontend 8080:80
    

Next Steps

After successful installation:

  1. Next Steps Guide - Post-installation configuration
  2. Configuration Guide - System configuration
  3. Operations Guide - Day-to-day operational procedures
  4. Troubleshooting Guide - Common issues and resolution