Single-Node Installation
Warning: Single-node deployments are for lab environments, acceptance testing, and demonstrations only. This configuration is not suitable for production workloads. For production deployments, see the Multi-Node Installation Guide, which requires a minimum of 3 server nodes for high availability.
Air-Gapped Deployment? This guide assumes internet connectivity. For air-gapped deployments, see the Air-Gapped Deployment Guide for additional requirements and procedures.
Overview
This guide describes the installation of the AgileTV CDN Manager on a single node. This configuration is intended for lab environments, acceptance testing, and demonstrations only. It is not suitable for production workloads.
Prerequisites
Hardware Requirements
Refer to the System Requirements Guide for hardware specifications. Single-node deployments require the “Single-Node (Lab)” configuration.
Operating System
Refer to the System Requirements Guide for supported operating systems.
Software Access
- Installation ISO: esb3027-acd-manager-X.Y.Z.iso
- Extras ISO (air-gapped only): esb3027-acd-manager-extras-X.Y.Z.iso
Network Configuration
Ensure that required firewall ports are configured before installation. See the Networking Guide for complete firewall configuration requirements.
SELinux
If SELinux is to be used, it must be set to “Enforcing” mode before running the installer script. The installer will configure appropriate SELinux policies automatically. SELinux cannot be enabled after installation.
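You can confirm the current SELinux mode before running the installer with the standard SELinux userland tools (these commands are generic RHEL-family examples, not product-specific):

```shell
# Check the current SELinux mode; the installer requires "Enforcing"
getenforce

# If the command prints "Permissive", switch to enforcing for the current boot
setenforce 1

# Make the setting persistent across reboots
sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config
```

Note that `setenforce` cannot move a system from Disabled to Enforcing; that transition requires editing `/etc/selinux/config` and rebooting.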
Installation Steps
Step 1: Mount the ISO
Create a mount point and mount the installation ISO:
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027
Replace X.Y.Z with the actual version number.
Step 2: Install the Base Cluster
Run the installer to set up the K3s Kubernetes cluster:
/mnt/esb3027/install
This installs:
- K3s Kubernetes distribution
- Longhorn distributed storage
- CloudNativePG operator for PostgreSQL
- Base system dependencies
The installer will configure the node as both a server and agent node.
Step 3: Verify Cluster Status
After the installer completes, verify that all components are operational before proceeding; this checkpoint confirms the base cluster is healthy before the Manager is deployed.
1. Verify the node is ready:
kubectl get nodes
Expected output:
NAME STATUS ROLES AGE VERSION
k3s-server Ready control-plane,etcd,master 2m v1.33.4+k3s1
2. Verify system pods in both namespaces are running:
# Check kube-system namespace (Kubernetes core components)
kubectl get pods -n kube-system
# Check longhorn-system namespace (distributed storage)
kubectl get pods -n longhorn-system
All pods should show Running status. If any pods are still Pending or ContainerCreating, wait until they are ready; proceeding with incomplete system pods can cause subsequent steps to fail in unpredictable ways.
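Rather than polling manually, you can block until every pod in both namespaces reports Ready using kubectl's `wait` subcommand (the 10-minute timeout is an arbitrary value chosen for slow lab hardware):

```shell
# Block until all pods in kube-system are Ready, or fail after 10 minutes
kubectl wait --for=condition=Ready pods --all -n kube-system --timeout=600s

# Same check for the Longhorn storage namespace
kubectl wait --for=condition=Ready pods --all -n longhorn-system --timeout=600s
```

One caveat: pods left behind by completed jobs never reach the Ready condition, so if `wait` times out on a pod whose status is Completed, the cluster may still be healthy; fall back to inspecting `kubectl get pods` output directly.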
This verification confirms:
- K3s cluster is operational
- Longhorn distributed storage is running
- CloudNativePG operator is deployed
- All core components are healthy before continuing
Step 4: Air-Gapped Deployments (If Applicable)
If deploying in an air-gapped environment, load container images from the extras ISO:
mkdir -p /mnt/esb3027-extras
mount -o loop,ro esb3027-acd-manager-extras-X.Y.Z.iso /mnt/esb3027-extras
/mnt/esb3027-extras/load-images
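After the load script finishes, you can confirm the images are visible to the container runtime. K3s bundles `crictl`, so a listing such as the following should show the freshly loaded images (the `acd` filter string is illustrative; adjust it to match the actual repository names):

```shell
# List all images known to the k3s container runtime
k3s crictl images

# Optionally filter for the product's images (repository name is illustrative)
k3s crictl images | grep -i acd
```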
Step 5: Deploy the Manager Helm Chart
For complete instructions on deploying the CDN Manager Helm chart, including configuration file setup, MaxMind GeoIP database loading, TLS certificate configuration, deployment commands, and verification steps, see the Helm Chart Installation Guide.
That guide covers the common deployment steps that apply to all installation types. After completing the Helm chart installation, return here and proceed to Post-Installation below.
Post-Installation
After installation completes, proceed to the Next Steps guide for:
- Initial user configuration
- Accessing the web interfaces
- Configuring authentication
- Setting up monitoring
Accessing the System
Refer to the Accessing the System section in the Getting Started guide for service URLs and default credentials.
Note: A self-signed SSL certificate is deployed by default. You will need to accept the certificate warning in your browser.
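To inspect the certificate presented by the server (for example, to confirm it is the expected self-signed certificate), `openssl s_client` can be used; `manager.local` below is a placeholder for your configured hostname:

```shell
# Show the certificate subject and issuer; for a self-signed certificate they match
echo | openssl s_client -connect manager.local:443 -servername manager.local 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```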
Troubleshooting
If pods fail to start:
- Check pod status: kubectl describe pod <pod-name>
- Review logs: kubectl logs <pod-name>
- Verify resources: kubectl top pods
See the Troubleshooting Guide for additional assistance.
Next Steps
After successful installation:
- Next Steps Guide - Post-installation configuration
- Configuration Guide - System configuration
- Operations Guide - Day-to-day operations
Appendix: Example Configuration
The following values.yaml provides a minimal working configuration for lab deployments:
# Minimal lab configuration for single-node deployment
global:
  hosts:
    manager:
      - host: manager.local
  routers:
    - name: default
      address: 127.0.0.1
# Single-node: Disable Kafka replication
kafka:
  replicaCount: 1
  controller:
    replicaCount: 1
Customization notes:
- Replace manager.local with your desired hostname
- The routers entry specifies CDN Director instances. The placeholder 127.0.0.1 may be used if a Director instance isn't available, or specify actual Director hostnames for production testing
- For air-gapped deployments, see Step 4: Air-Gapped Deployments
- For complete configuration options, see the Helm Chart Installation Guide
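As a sketch of how the values file above is typically applied (the chart path, release name, and namespace here are placeholders; the Helm Chart Installation Guide has the authoritative commands):

```shell
# Deploy the chart with the lab values; release name, chart path, and
# namespace are placeholders, not the product's actual names
helm install acd-manager ./esb3027-acd-manager \
  --namespace acd-manager --create-namespace \
  -f values.yaml

# Watch the release's pods come up
kubectl get pods -n acd-manager -w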