System Requirements Guide
Overview
This document specifies the hardware, operating system, and networking requirements for deploying the AgileTV CDN Manager (ESB3027). Requirements vary based on deployment type and node role within the cluster.
Cluster Sizing
Production Deployments
Production deployments require a minimum of three nodes to achieve high availability. The cluster architecture employs distinct node roles:
| Role | Description |
|---|---|
| Server Node (Control Plane Only) | Runs control plane components (etcd, Kubernetes API server) only; does not host application workloads; requires separate Agent nodes |
| Server Node (Combined) | Runs control plane components and hosts application workloads; default configuration |
| Agent Node | Executes application workloads only; does not participate in cluster quorum |
Server nodes can be deployed in either Control Plane Only or Combined role configurations. The choice depends on your deployment requirements:
- Control Plane Only: Dedicated control plane nodes with lower resource requirements; requires separate Agent nodes for workloads
- Combined: Server nodes run both control plane and workloads; minimum 3 nodes required for HA
Why Use Control Plane Only Nodes?
Dedicated Control Plane Only nodes provide several benefits for larger deployments:
- Resource Isolation: Control plane components (etcd, API server, scheduler) run on dedicated hardware without competing with application workloads for CPU and memory
- Stability: Application workload spikes or misbehaving pods cannot impact control plane performance
- Security: Smaller attack surface on control plane nodes; fewer containers and services running
- Predictable Performance: Control plane responsiveness remains consistent regardless of application load
- Flexible Sizing: Control Plane Only nodes can use lower-specification hardware (2 cores, 4 GiB) since they don’t run application workloads
For most small to medium deployments, Combined role servers are simpler and more cost-effective. Control Plane Only nodes are recommended for larger deployments with significant workload requirements or where control plane stability is critical.
High Availability Considerations
Production deployments require 3 nodes running control plane (etcd) and 3 nodes capable of running workloads. These can be the same nodes (Combined role) or separate nodes (CP-Only + Agent).
Node Role Combinations:
| Configuration | Control Plane Nodes | Workload Nodes | Total Nodes |
|---|---|---|---|
| All Combined | 3 Combined servers | 3 Combined servers | 3 |
| Separated | 3 CP-Only servers | 3 Agent nodes | 6 |
| Hybrid | 2 CP-Only + 1 Combined | 1 Combined + 2 Agent | 5 |
Any combination works as long as you have at least 3 control plane nodes and at least 3 workload-capable nodes.
Note: Regardless of the deployment configuration, a minimum of 3 nodes capable of running workloads is required for production deployments. This ensures both high availability and sufficient capacity for application pods.
For detailed fault tolerance information and data replication strategies, see the Architecture Guide.
Hardware Requirements
Single-Node Lab Deployment
Lab deployments are intended for acceptance testing, demonstrations, and development only. These configurations are not suitable for production workloads.
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 8 cores | 12 cores |
| Memory | 16 GiB | 24 GiB |
| Disk* | 128 GiB | 128 GiB |
Production Cluster - Server Node (Control Plane Only)
Server nodes dedicated to control plane functions have modest resource requirements:
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 2 cores | 4 cores |
| Memory | 4 GiB | 8 GiB |
| Disk* | 64 GiB | 128 GiB |
These nodes run only control plane components and require separate Agent nodes to run application workloads.
Production Cluster - Server Node (Control Plane + Workloads)
Combined role nodes require resources for both control plane and application workloads:
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 16 cores | 24 cores |
| Memory | 32 GiB | 48 GiB |
| Disk* | 256 GiB | 256 GiB |
Production Cluster - Agent Node
Agent nodes execute application workloads and require the following resources:
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 8 cores |
| Memory | 6 GiB | 16 GiB |
| Disk* | 64 GiB | 128 GiB |
Storage Notes
* Disk Space: All disk space values must be available in the /var/lib/longhorn partition. It is recommended that /var/lib/longhorn be a separate partition on a fast SSD for optimal performance, though SSD is not strictly required.

Longhorn Capacity: Longhorn storage requires an additional 30% capacity headroom for internal operations and scaling. If less than 30% of the total partition capacity is available, Longhorn may mark volumes as “full” and prevent further writes. Plan disk capacity accordingly.
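As a worked example of the 30% headroom rule, the sketch below estimates usable Longhorn volume capacity for a given partition size. The 256 GiB figure mirrors the Combined server minimum; substitute your own partition size.

```shell
#!/bin/sh
# Estimate usable Longhorn volume capacity under the 30% headroom rule.
PARTITION_GIB=256   # size of the /var/lib/longhorn partition (example value)

# Longhorn needs 30% of the partition free, so at most 70% is usable.
USABLE_GIB=$(( PARTITION_GIB * 70 / 100 ))

echo "Partition: ${PARTITION_GIB} GiB, usable for volumes: ~${USABLE_GIB} GiB"
# → Partition: 256 GiB, usable for volumes: ~179 GiB
```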
Storage Performance
For optimal performance, the following storage characteristics are recommended:
- Disk Type: SSD or NVMe storage for Longhorn volumes
- Filesystem: XFS or ext4 with default mount options
- Partition Layout: Dedicated /var/lib/longhorn partition for persistent storage
Virtual machines and bare-metal hardware are both supported. Nested virtualization (running multiple nodes under a single hypervisor) may impact performance and is not recommended for production deployments.
Operating System Requirements
Supported Operating Systems
The CDN Manager supports Red Hat Enterprise Linux and compatible distributions:
| Operating System | Status |
|---|---|
| Red Hat Enterprise Linux 9 | Supported |
| Red Hat Enterprise Linux 10 | Untested |
| Red Hat Enterprise Linux 8 | Not supported |
Compatible Clones
The following RHEL-compatible distributions are supported when major version requirements are satisfied:
- Oracle Linux 9
- AlmaLinux 9
- Rocky Linux 9
Air-Gapped Deployments
Important: For air-gapped deployments (no internet access), the OS installation ISO must be mounted on all nodes before running the installer or join commands. The installer needs to install one or more packages from the distribution’s repository.
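One common way to satisfy this on RHEL-family systems is to mount the ISO and define a local dnf repository pointing at it. This is a sketch under assumptions: the ISO path, mount point, and repository IDs are illustrative, and it assumes the standard RHEL 9 family media layout with BaseOS and AppStream directories.

```shell
# Mount the OS installation ISO (path and mount point are examples).
mkdir -p /mnt/iso
mount -o loop /path/to/rhel-9.iso /mnt/iso

# Define a local repository so packages resolve without internet access.
cat > /etc/yum.repos.d/local-iso.repo <<'EOF'
[local-baseos]
name=Local ISO - BaseOS
baseurl=file:///mnt/iso/BaseOS
enabled=1
gpgcheck=0

[local-appstream]
name=Local ISO - AppStream
baseurl=file:///mnt/iso/AppStream
enabled=1
gpgcheck=0
EOF
```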
Oracle Linux UEK Kernel
Note: For Oracle Linux 9.7 and later using the Unbreakable Enterprise Kernel (UEK), you must install the kernel-uek-modules-extra-netfilter-$(uname -r) package before running the installer:

```shell
# Mount OS ISO first (required for air-gapped)
mount -o loop /path/to/oracle-linux-9.iso /mnt/iso

# Install required kernel modules
dnf install kernel-uek-modules-extra-netfilter-$(uname -r)
```

This package provides netfilter kernel modules required by K3s and Longhorn.
SELinux
SELinux is supported when installed in “Enforcing” mode. The installation process will configure appropriate SELinux policies automatically.
Networking Requirements
Network Interface
Each cluster node must have at least one network interface card (NIC) with a default route configured. If the node lacks a pre-configured default route, one must be established prior to installation.
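A pre-flight sketch for checking, and if necessary establishing, the default route with iproute2. The gateway address (from the documentation range) and the interface name are hypothetical placeholders.

```shell
#!/bin/sh
# Verify that this node has a default route before installation.
if ip route show default | grep -q '^default'; then
    echo "Default route present"
else
    # Establish one (gateway and interface are placeholder examples).
    ip route add default via 192.0.2.1 dev eth0
fi
```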
Port Requirements
The cluster requires the following network connectivity:
| Category | Ports | Purpose |
|---|---|---|
| Inter-Node | 2379-2380, 6443, 8472/UDP, 10250, 5001, 9500, 8500 | etcd, API server, Flannel VXLAN, Kubelet, Spegel, Longhorn |
| External Access | 80, 443 | HTTP redirect and HTTPS ingress |
| Application (optional) | 6379, 8125 TCP/UDP, 9093, 9095 | Redis, Telegraf, Alertmanager, Kafka external |
Important: Complete port requirements, network ranges, and firewall configuration procedures are provided in the Networking Guide. Do not expose VictoriaMetrics (8428, 8429), Grafana (3000), or PostgreSQL (5432) directly—access these services only through the secure HTTPS ingress (port 443).
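As a hedged sketch of what the inter-node rules from the table above might look like with firewalld; the zone choice is an assumption, and the Networking Guide remains authoritative.

```shell
#!/bin/sh
# Open the inter-node ports listed above on each cluster node (firewalld).
for port in 2379-2380/tcp 6443/tcp 10250/tcp 5001/tcp 9500/tcp 8500/tcp 8472/udp; do
    firewall-cmd --permanent --zone=internal --add-port="${port}"
done
firewall-cmd --reload
```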
Resource Planning
Calculating Cluster Capacity
When planning cluster capacity, consider the following factors:
- Base Overhead: Kubernetes system components consume approximately 1-2 cores and 2-4 GiB memory per node
- Application Workloads: Refer to individual component resource requirements in the Architecture Guide
- Headroom: Maintain 20-30% resource headroom for workload spikes and automatic scaling
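The three factors above can be combined into a rough capacity estimate. The figures below assume minimum-spec Combined servers, the upper end of the stated per-node system overhead, and 25% headroom; all values are illustrative, not requirements.

```shell
#!/bin/sh
# Rough cluster capacity estimate for workload planning.
NODES=3            # Combined servers
CORES_PER_NODE=16  # minimum Combined spec
MEM_PER_NODE=32    # GiB, minimum Combined spec
OVERHEAD_CORES=2   # upper end of the 1-2 core system overhead
OVERHEAD_MEM=4     # GiB, upper end of the 2-4 GiB system overhead
HEADROOM_PCT=25    # keep 20-30% free for spikes and autoscaling

# Capacity left after Kubernetes system overhead on each node.
ALLOC_CORES=$(( NODES * (CORES_PER_NODE - OVERHEAD_CORES) ))
ALLOC_MEM=$(( NODES * (MEM_PER_NODE - OVERHEAD_MEM) ))

# Capacity actually plannable for workloads after headroom.
# Integer arithmetic truncates: 42 * 75 / 100 = 31.
PLAN_CORES=$(( ALLOC_CORES * (100 - HEADROOM_PCT) / 100 ))
PLAN_MEM=$(( ALLOC_MEM * (100 - HEADROOM_PCT) / 100 ))

echo "Plannable for workloads: ${PLAN_CORES} cores, ${PLAN_MEM} GiB"
```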
Scaling Considerations
The CDN Manager supports horizontal scaling for most components. The Horizontal Pod Autoscaler (HPA) can automatically adjust replica counts based on resource utilization. Detailed scaling guidance is available in the Architecture Guide.
Example Production Deployment
A minimal production deployment with 3 server nodes (Combined role) and 2 agent nodes, sized at the minimum per-node specifications, would require:
| Node Type | Count | CPU Total | Memory Total | Disk Total |
|---|---|---|---|---|
| Server (Combined) | 3 | 48 cores | 96 GiB | 768 GiB |
| Agent | 2 | 8 cores | 12 GiB | 128 GiB |
| Total | 5 | 56 cores | 108 GiB | 896 GiB |
This configuration provides:
- High availability (survives loss of 1 server node)
- Capacity for application workloads across all nodes
- Headroom for horizontal scaling
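The totals in the example table can be reproduced from the per-role minimum specifications:

```shell
#!/bin/sh
# Reproduce the example deployment totals from per-role minimums.
SERVERS=3; SERVER_CPU=16; SERVER_MEM=32; SERVER_DISK=256
AGENTS=2;  AGENT_CPU=4;   AGENT_MEM=6;   AGENT_DISK=64

TOTAL_CPU=$(( SERVERS * SERVER_CPU + AGENTS * AGENT_CPU ))
TOTAL_MEM=$(( SERVERS * SERVER_MEM + AGENTS * AGENT_MEM ))
TOTAL_DISK=$(( SERVERS * SERVER_DISK + AGENTS * AGENT_DISK ))

echo "${TOTAL_CPU} cores, ${TOTAL_MEM} GiB memory, ${TOTAL_DISK} GiB disk"
# → 56 cores, 108 GiB memory, 896 GiB disk
```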
Next Steps
After verifying system requirements:
- Review the Installation Guide for deployment procedures
- Consult the Networking Guide for firewall configuration
- Examine the Architecture Guide for component resource requirements