Installation and configuration
How to install and configure the system and its dependencies
A working knowledge of the technical basis of the Agile Live solution is beneficial before starting with this installation guide.
This guide will often point to subpages for the detailed description of steps. The major steps of this guide are:
- Hardware and operating system requirements
- Install, configure and start the System Controller
- The base platform components
- Install prerequisites
- Preparing the base system for installation
- Install Agile Live platform
- Configure and start the Agile Live components
- Install the configuration-and-monitoring GUI (optional)
- Install Prometheus and Grafana, and configure them (optional)
- Install and start control panel GUI examples
Warning
Some installation scripts described in this guide use sudo to install software. As always when using sudo, make sure you understand what a script does before running it!

1. Hardware and operating system requirements
The bulk of the platform runs on computer systems based on x86 CPUs and running the Ubuntu 22.04 operating system. This includes the following software components: the ingest, the production pipeline, the control panel integrations, and the system controller.
The remaining components, i.e. the configuration and monitoring GUI, the control panel GUI examples, and the Prometheus database with its Grafana frontend, are cross-platform and can run on the platform of your choice.
The platform does not have explicit hardware performance requirements, as it will utilize the performance of the system it is given, but suggestions for hardware specifications can be found here.
Agile Live is supported and tested on Ubuntu 22.04 LTS. Other versions may work but are not officially supported.
2. Install, configure and start the System Controller
The System Controller is the central backend that ties together and controls the different components of the base platform, and that provides the REST API that is used by the user to configure and monitor the system. When each Agile Live component, such as an Ingest or a Production Pipeline, starts up it connects to and registers with the System Controller, so it is a good idea to install and configure the System Controller first.
Instructions on how to install and configure it can be found here.
3. The base platform components
The base platform consists of the software components Ingests, Production Pipelines, and Control Panel interfaces. Each machine that is intended to act as an acquisition device for media shall run exactly one instance of the Ingest application, as it is responsible for interfacing with third-party software such as SDKs for SDI cards and NDI.
The production pipeline application instance (interchangeably called a rendering engine application instance in parts of this documentation) uses the resources of its host machine non-exclusively (including the GPU), so more than one instance may reside on a single host machine.
The control panel integrations preferably reside on a machine close to the actual control panel.
However, in a very minimal system these machines need not all be separate; in fact, everything above could run on a single computer if desired.
A guide to installing and configuring the Agile Live base platform can be found here.
4. Install the configuration-and-monitoring GUI (optional)
There is a basic configuration-and-monitoring GUI that is intended to be used as an easy way to get going with using the system. It uses the REST API to configure and monitor the system.
Instructions for how to install it can be found here, and a tutorial for how to use it can be found here.
5. Install Prometheus and Grafana, and configure them (optional)
To collect useful system metrics for stability and performance monitoring, we advise using Prometheus. For visualizing the metrics collected by Prometheus, you could use Grafana.
The System Controller receives a lot of internal metrics from the Agile Live components. These can be pulled by a Prometheus instance from the endpoint https://system-controller-host:port/metrics.
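As an illustration, a Prometheus scrape job for this endpoint might look like the following sketch. The job name and target are placeholders to adapt to your deployment, and `insecure_skip_verify` is only needed if the System Controller uses a self-signed certificate:

```yaml
scrape_configs:
  - job_name: "agile-live-system-controller"   # hypothetical job name
    scheme: https
    metrics_path: /metrics
    tls_config:
      insecure_skip_verify: true   # only for self-signed certificates
    static_configs:
      - targets: ["system-controller-host:8080"]   # replace with your host:port
```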
It is also possible to install external exporters for various hardware and OS metrics.
Follow this guide to install and run Prometheus exporters on the servers that run the Ingest and Production Pipeline software, and to install, configure and run the Prometheus database and the associated Grafana instance on the server dedicated to management and orchestration.
6. Install and start control panel GUI examples
To install the example control panel GUIs that are delivered as source code, follow this guide.
1 - Hardware examples
Agile Live hardware examples
The following hardware specifications serve as examples of what can be expected from certain hardware setups. The maximum number of inputs and outputs is highly dependent on the quality and resolution settings chosen when setting up a production, so these numbers should be seen as approximations of what can be expected from the hardware under some form of normal use.
Hardware example #1
| System Specifications | |
| --- | --- |
| CPU | Intel Core i7 12700K |
| RAM | Kingston FURY Beast 32GB (2x16GB) 3200MHz CL16 DDR4 |
| Graphics Card | NVIDIA RTX A2000 12GB |
| PSU | 750W PSU |
| Motherboard | ASUS Prime H670-Plus D4 DDR4 S-1700 ATX |
| SSD | Samsung 980 NVMe M.2 1TB SSD |
| Capture Card (Ingest only) | Blackmagic Design Decklink Duo 2 |
One such server is capable of fulfilling either of the two use-cases below:
- Ingest server
- With Proxy editing: ingesting 4 sources (encoded as 4 Low Delay streams, AVC 720p50 1 Mbps and 4 High Quality streams, HEVC 1080p50 25 Mbps).
- With Direct editing: ingesting 6 sources, (HEVC 1080p50 25 Mbps).
- Production server, running both a High Quality and a Low Delay pipeline with at least 10 connected sources (i.e. at least 20 incoming streams), with at least one 1080p50 multi-view output, one 1080p High Quality program output and one 720p Low Delay output.
Hardware example #2
| System Specifications | |
| --- | --- |
| CPU | Intel Core i7 12700K |
| RAM | Kingston FURY Beast 32GB (2x16GB) 6000MHz CL36 DDR5 |
| Graphics Card | NVIDIA RTX 4000 SFF Ada Generation 20GB |
| PSU | 750W PSU |
| Motherboard | ASUS ROG Maximus Z690 Hero DDR5 ATX |
| SSD | WD Blue SN570 M.2 1TB SSD |
| Capture Card (Ingest only) | Blackmagic Design Decklink Quad 2 |
One such server is capable of fulfilling either of the two use-cases below:
- Ingest server
- With Proxy editing: ingesting 6-8 sources (encoded as 6-8 Low Delay streams, AVC 720p50 1 Mbps and 6-8 High Quality streams, HEVC 1080p50 25 Mbps).
- With Direct editing: ingesting 8-10 sources (HEVC 1080p50 25 Mbps).
- Production server, running both the High Quality and the Low Delay pipeline with at least 20 connected sources (i.e. at least 40 incoming streams), with at least three 4K multi-view outputs, four 1080p High Quality outputs and four 720p Low Delay outputs.
2 - The System Controller
Installing and configuring the System Controller
The System Controller is the central backend that ties together and controls the different components of the base platform, and that provides the REST API that is used by the user to configure and monitor the system.
When each Agile Live component (such as an Ingest or a Production Pipeline) starts up, it connects to and registers with the System Controller. This means the System Controller must run in a network location that is reachable from all other components.
From release 6.0.0 and on, the System Controller is distributed as a Debian package, acl-system-controller.
To install the System Controller application, do the following:
sudo apt -y install ./acl-system-controller_<version>_amd64.deb
The System Controller binary will be installed in the directory /opt/agile_live/bin/ and its configuration file will be placed in /etc/opt/agile_live/.
Note
In releases 5.0.0 and earlier, the System Controller application was delivered in the release artifact agile-live-system-controller-<x.y.z>-<checksum>.zip. Unpack the file and place the contents in a desired location.
An example configuration file is shipped with this release, postfixed with .example. Remove this postfix to use the file.

The System Controller is configured using a settings file, environment variables, or a combination thereof. The settings file, named acl_sc_settings.json, is searched for in /etc/opt/agile_live and the current directory, or a search path can be given with a command-line flag when starting the application:
./acl-system-controller for the default search paths, or ./acl-system-controller -settings-path /path/to/directory/where/settings_file/exists
To override settings from the configuration file with environment variables, construct each variable name from the corresponding JSON object in the settings file by prepending ACL_SC and flattening the JSON path with _ as the delimiter, e.g.:
export ACL_SC_LOGGER_LEVEL=debug
export ACL_SC_PROTO=https
Starting with the basic configuration file in the release, edit a few settings inside it (see this guide for help with the security-related parameters):
- change the psk parameter to something random of your own
- change the salt to some long string of your own
- set the username and password to protect the REST API
- specify TLS certificate and key files
- set the host and port parameters to where the System Controller shall be reached

Also consider setting the file_name_prefix to set the log file location; otherwise it defaults to /tmp/.
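The exact schema is defined by the example file shipped with the release. As an illustrative sketch only, a minimal acl_sc_settings.json covering the parameters above might look like this; all values are placeholders, the nesting of the logger settings is assumed from the ACL_SC_LOGGER_LEVEL variable, and the TLS file parameters are omitted:

```json
{
  "psk": "<32-character-random-string>",
  "salt": "<long-random-string>",
  "username": "<rest-api-user>",
  "password": "<rest-api-password>",
  "host": "0.0.0.0",
  "port": 8080,
  "logger": {
    "level": "info",
    "file_name_prefix": "/var/log/agile_live/sc"
  }
}
```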
Warning
Since passwords will be inserted in this configuration file, it is recommended to set the file rights to allow reading only by the user that executes the System Controller.

From the installation directory, the System Controller is started with:
./acl-system-controller
If started successfully, it will display a log message showing the server endpoint.
The System Controller comes with built-in Swagger documentation that makes it easy to explore the API in a regular web browser. To access the Swagger REST API documentation hosted by the System Controller, open https://<IP>:<port>/swagger/ in your browser. The Swagger UI can be used to poll or edit the state of the actual system by using the “Try it out” buttons.
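For example, the Swagger URL can be composed from the host and port parameters in the settings file. The host and port below are placeholders:

```shell
# Hypothetical host/port; replace with the values from acl_sc_settings.json.
SC_HOST=10.10.10.10
SC_PORT=8080
SWAGGER_URL="https://${SC_HOST}:${SC_PORT}/swagger/"
echo "${SWAGGER_URL}"
# When the System Controller is reachable, open the URL in a browser, or
# fetch it with curl (-k accepts a self-signed certificate):
# curl -k "${SWAGGER_URL}"
```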
Running the System Controller as a service
It may be useful to run the System Controller as a service in Ubuntu, so that it starts automatically on boot and restarts if it terminates abnormally. In Ubuntu 22.04, systemd manages the services. Here follows an example of how a service can be configured, started and monitored.
First create a “service unit file”, named for example /etc/systemd/system/acl-system-controller.service
and put the following in it:
[Unit]
Description=Agile Live System Controller service
[Service]
User=<YOUR_USER>
Environment=<ANY_ENV_VARS_HERE>
ExecStart=/opt/agile_live/bin/acl-system-controller
Restart=always
[Install]
WantedBy=multi-user.target
Replace <YOUR_USER> with the user that will be executing the System Controller binary on the host machine. The Environment attribute is used if you want to set any parameters as environment variables.
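For instance, the environment variables shown earlier could be set in the unit file like this (the values are examples only):

```ini
[Service]
Environment=ACL_SC_LOGGER_LEVEL=debug
Environment=ACL_SC_PROTO=https
```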
After saving the service unit file, run:
sudo systemctl daemon-reload
sudo systemctl enable acl-system-controller.service
sudo systemctl start acl-system-controller
The service should now be started. The status of the service can be queried using:
sudo systemctl status acl-system-controller.service
3 - The base platform
Installation instructions for the base platform
This page describes the steps needed to install and configure the base platform.
Install Third-party packages
Agile Live has a few dependencies on third-party software that have to be installed separately first. Which ones are needed depends on which Agile Live components you will be running on the specific system.
A machine running the Ingest needs the following:
- CUDA Toolkit
- oneVPL
- Blackmagic Desktop Video (Optional, is needed when using Blackmagic DeckLink SDI cards)
- NDI SDK for Linux
- Base packages via ‘apt’
A machine running the Rendering Engine only needs a subset:
- CUDA Toolkit
- Base packages via ‘apt’
A machine running control panels needs yet fewer:
- Base packages via ‘apt’
Hint
For ease of use during the installation instructions below, it is suggested to copy the contents of each commands section into a shell script and run that script.

CUDA Toolkit
For video encoding and decoding on NVIDIA GPUs you need the NVIDIA CUDA Toolkit installed. It must be installed on Ingest machines even if they don’t have NVIDIA cards.
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt -y install cuda
You need to reboot the system after installation. Then check the installation by running nvidia-smi. If installing more dependencies, it is ok to proceed with those and reboot once after installing all parts.
oneVPL
All machines running the Ingest application need the Intel oneAPI Video Processing Library (oneVPL) installed. It is used for encoding and thumbnail processing on Intel GPUs and is required even for Ingests which will not use the Intel GPU for encoding. Install oneVPL by running:
sudo apt install -y gpg-agent wget
# Download and run oneVPL offline installer:
# https://www.intel.com/content/www/us/en/developer/articles/tool/oneapi-standalone-components.html#onevpl
VPL_INSTALLER=l_oneVPL_p_2023.1.0.43488_offline.sh
wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/e2f5aca0-c787-4e1d-a233-92a6b3c0c3f2/$VPL_INSTALLER
chmod +x $VPL_INSTALLER
sudo ./$VPL_INSTALLER -a -s --eula accept
# Install the latest general purpose GPU (GPGPU) run-time and development software packages
# https://dgpu-docs.intel.com/driver/client/overview.html
wget -qO - https://repositories.intel.com/graphics/intel-graphics.key | \
sudo gpg --dearmor --output /usr/share/keyrings/intel-graphics.gpg
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/graphics/ubuntu jammy arc' | \
sudo tee /etc/apt/sources.list.d/intel.gpu.jammy.list
sudo apt update
sudo apt install -y ffmpeg intel-media-va-driver-non-free libdrm2 libmfx-gen1.2 libva2
# Add VA environment variables
if [[ -z "${LIBVA_DRIVERS_PATH}" ]]; then
sudo sh -c "echo 'LIBVA_DRIVERS_PATH=/usr/lib/x86_64-linux-gnu/dri' >> /etc/environment"
fi
if [[ -z "${LIBVA_DRIVER_NAME}" ]]; then
sudo sh -c "echo 'LIBVA_DRIVER_NAME=iHD' >> /etc/environment"
fi
# Give users access to the GPU device for compute and render node workloads
# https://www.intel.com/content/www/us/en/develop/documentation/installation-guide-for-intel-oneapi-toolkits-linux/top/prerequisites/install-intel-gpu-drivers.html
sudo usermod -aG video,render ${USER}
# Set environment variables for development
echo "# oneAPI environment variables for development" >> ~/.bashrc
echo "source /opt/intel/oneapi/setvars.sh > /dev/null" >> ~/.bashrc
You need to reboot the system after installation. If installing more dependencies, it is ok to proceed with that and reboot once after installing all parts.
Support for Blackmagic Design SDI grabber cards
If the Ingest shall be able to use Blackmagic DeckLink SDI cards, the drivers need to be installed.
Download Desktop Video from https://www.blackmagicdesign.com/se/support/family/capture-and-playback.
Unpack and install (replace the versions in the commands below with the version you downloaded):
sudo apt install dctrl-tools dkms
tar -xvf Blackmagic_Desktop_Video_Linux_12.7.tar.gz
cd Blackmagic_Desktop_Video_Linux_12.7/deb/x86_64
sudo dpkg -i desktopvideo_12.7a4_amd64.deb
Support for NDI sources
For machines running Ingests, you need to install the NDI library and additional dependencies:
## Install library
sudo apt -y install avahi-daemon avahi-discover avahi-utils libnss-mdns mdns-scan
wget https://downloads.ndi.tv/SDK/NDI_SDK_Linux/Install_NDI_SDK_v5_Linux.tar.gz
tar -xvzf Install_NDI_SDK_v5_Linux.tar.gz
PAGER=skip-pager ./Install_NDI_SDK_v5_Linux.sh <<< y
cd NDI\ SDK\ for\ Linux/lib/x86_64-linux-gnu/
NDI_LIBRARY_FILE=$(ls libndi.so.5.*.*)
cd ../../..
sudo cp NDI\ SDK\ for\ Linux/lib/x86_64-linux-gnu/${NDI_LIBRARY_FILE} /usr/lib/x86_64-linux-gnu
sudo ln -fs /usr/lib/x86_64-linux-gnu/${NDI_LIBRARY_FILE} /usr/lib/x86_64-linux-gnu/libndi.so
sudo ln -fs /usr/lib/x86_64-linux-gnu/${NDI_LIBRARY_FILE} /usr/lib/x86_64-linux-gnu/libndi.so.5
sudo ldconfig
sudo cp NDI\ SDK\ for\ Linux/include/* /usr/include/.
Base packages installed via ‘apt’
When installing version 5.0.0 or earlier, please refer to this guide for this step.
All systems that run an Ingest, Rendering Engine or Control Panel need chrony as their NTP client.
The systemd-coredump package is also handy to have installed to be able to debug core dumps later on, but it is technically optional.
These packages can be installed with the apt package manager:
sudo apt -y update && sudo apt -y upgrade && sudo apt -y install chrony systemd-coredump
Preparing the base system for installation
To prepare the system for Agile Live, the NTP client must be configured correctly by following this guide, and some general Linux settings shall be configured by following this guide.
Also, CUDA should be prevented from automatically installing upgrades by following this guide.
Installing the Agile Live components
When installing version 5.0.0 or earlier, please refer to this guide for this step.
The following Agile Live .deb packages are distributed:
- acl-system-controller - Contains the System Controller application (explained in separate section)
- acl-libingest - Contains the library used by the Ingest application
- acl-ingest - Contains the Ingest application
- acl-libproductionpipeline - Contains the libraries used by the Rendering Engine application
- acl-renderingengine - Contains Agile Live’s default Rendering Engine application
- acl-libcontroldatasender - Contains the libraries used by Control Panel applications
- acl-controlpanels - Contains the generic Control Panel applications
- acl-dev - Contains the header files required for development of custom Control Panels and Rendering Engines
All binaries will be installed in the directory /opt/agile_live/bin. Libraries are installed in /opt/agile_live/lib. Header files are installed in /opt/agile_live/include.
Ingest
To install the Ingest application, install the following .deb-packages:
sudo apt -y install ./acl-ingest_<version>_amd64.deb ./acl-libingest_<version>_amd64.deb
Rendering engine
To install the Rendering Engine application, install the following .deb-packages:
sudo apt -y install ./acl-renderingengine_<version>_amd64.deb ./acl-libproductionpipeline_<version>_amd64.deb
Control panels
To install the generic Control Panel applications acl-manualcontrolpanel, acl-tcpcontrolpanel and acl-websocketcontrolpanel, install the following .deb-packages:
sudo apt -y install ./acl-controlpanels_<version>_amd64.deb ./acl-libcontroldatasender_<version>_amd64.deb
Dev
To install the acl-dev package containing the header files needed for building your
own applications, install the following .deb-package:
sudo apt -y install ./acl-dev_<version>_amd64.deb
Configuring and starting the Ingest(s)
The Ingest component is a binary that ingests raw video input sources such as SDI or NDI, then timestamps, processes and encodes the media and originates transport to the rest of the platform.
Settings for the Ingest binary are added as environment variables. There is one setting that must be set:
- ACL_SYSTEM_CONTROLLER_PSK shall be set to the same value as in the settings file of the System Controller

Also, there are some parameters with default values that you will most probably want to override:
- ACL_SYSTEM_CONTROLLER_IP and ACL_SYSTEM_CONTROLLER_PORT shall be set to the location where the System Controller is hosted
- ACL_INGEST_NAME is optional, but is useful as an aid to identify the Ingest in the REST interface if multiple instances are connected to the same System Controller

You may also want to set the log location by setting ACL_LOG_PATH. The log location defaults to /tmp/.
Hint
In case you forget the names of these environment variables or want to see their default values, run the component with the --help flag, e.g. ./acl-ingest --help.

As an example, from the installation directory, the Ingest is started with:
ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_INGEST_NAME="My Ingest" ./acl-ingest
If started successfully, it will display a log message showing that the Ingest is up and running.
Running the Ingest SW as a service
It may be useful to run the Ingest SW as a service in Ubuntu, so that it starts automatically on boot and restarts if it terminates abnormally. In Ubuntu 22.04, systemd manages the services. Here follows an example of how a service can be configured, started and monitored.
First create a “service unit file”, named for example /etc/systemd/system/acl-ingest.service
and put the following in it:
[Unit]
Description=Agile Live Ingest service
[Service]
User=<YOUR_USER>
Environment=ACL_SYSTEM_CONTROLLER_PSK=<YOUR_PSK>
ExecStart=/bin/bash -c '\
source /opt/intel/oneapi/setvars.sh; \
exec /opt/agile_live/bin/acl-ingest'
Restart=always
[Install]
WantedBy=multi-user.target
Replace <YOUR_USER> with the user that will be executing the Ingest binary on the host machine. The Environment attribute should be set in the same way as described above for manually starting the binary.
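For example, the same variables used when starting the Ingest manually can be added as additional Environment lines (the addresses and name below are placeholders):

```ini
[Service]
Environment=ACL_SYSTEM_CONTROLLER_IP=10.10.10.10
Environment=ACL_SYSTEM_CONTROLLER_PORT=8080
Environment=ACL_SYSTEM_CONTROLLER_PSK=<YOUR_PSK>
Environment="ACL_INGEST_NAME=My Ingest"
```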
After saving the service unit file, run:
sudo systemctl daemon-reload
sudo systemctl enable acl-ingest.service
sudo systemctl start acl-ingest
The service should now be started. The status of the service can be queried using:
sudo systemctl status acl-ingest.service
Configuring and starting the Production pipeline(s) including rendering engine
The production pipeline component and the rendering engine component are combined into one binary, named acl-renderingengine, which terminates transport of media streams from Ingests, then demuxes, decodes and aligns them according to timestamps and clock sync, separately mixes audio and video (including HTML-based graphics insertion) and finally encodes, muxes and originates transport of the finished mixed outputs as well as multiviews.
It is possible to run just one production pipeline when doing direct editing (the “traditional” way, in the stream), or to use two pipelines to set up a proxy editing workflow. In the latter case, one pipeline is designated for the Low Delay flow, which gives quick feedback on the editing commands, and one for the High Quality flow, which produces the final-quality stream to broadcast to the viewers. The Production Pipeline is simply called a “pipeline” in the REST API. The rendering engine is not managed through the REST API but rather through the control command API from the control panels.
Settings for the production pipeline binary are added as environment variables. There is one setting that must be set:
- ACL_SYSTEM_CONTROLLER_PSK shall be set to the same value as in the settings file of the System Controller

Also, there are some parameters with default values that you will most probably want to override:
- ACL_SYSTEM_CONTROLLER_IP and ACL_SYSTEM_CONTROLLER_PORT shall be set to the location where the System Controller is hosted
- ACL_RENDERING_ENGINE_NAME is needed to identify the different production pipelines in the REST API and shall thus be set uniquely and descriptively
- ACL_MY_PUBLIC_IP shall be set to the IP address or hostname where the Ingest shall find the production pipeline. If the production pipeline is behind a NAT or needs DNS resolving, setting this variable is necessary
- DISPLAY is needed, but may already be set by the operating system. If not (because the server is headless) it needs to be set.
Hint
If running a headless server and setting the DISPLAY environment variable, run an instance of Xvfb to provide the virtual “display” to point towards. E.g. start Xvfb by running Xvfb -ac :99 -screen 0 1280x1024x16, and when starting the Rendering Engine set DISPLAY=:99. It is recommended to start Xvfb as a service on headless machines.

You may also want to set the log location by setting ACL_LOG_PATH. The log location defaults to /tmp/.
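A minimal systemd unit for Xvfb on a headless machine could, as a sketch, look like this (unit name, display number and screen geometry as in the hint above; adapt as needed):

```ini
# /etc/systemd/system/xvfb.service (example)
[Unit]
Description=Virtual framebuffer X server for the Rendering Engine

[Service]
ExecStart=/usr/bin/Xvfb -ac :99 -screen 0 1280x1024x16
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable --now xvfb.service, and set DISPLAY=:99 when starting the Rendering Engine.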
Here is an example of how to start two instances (one for Low Delay flow and one for High Quality Flow) of the acl-renderingengine application and name them accordingly so that you and the GUI can tell them apart. First start the Low Delay instance by running:
ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_RENDERING_ENGINE_NAME="My LD Pipeline" ./acl-renderingengine
Then start the High Quality Rendering Engine in the same way:
ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_RENDERING_ENGINE_NAME="My HQ Pipeline" ./acl-renderingengine
Configuring and starting the generic Control Panels
TCP control panel
acl-tcpcontrolpanel is a control panel application that starts a TCP server on a configurable set of endpoints and relays control commands over a control connection if one has been set up. It is preferably run geographically very close to the control panel whose control commands it will relay, but can alternatively run close to the production pipeline that it is connected to. To start with, it is easiest to run it on the same machine as the associated production pipeline.
Settings for the TCP control panel binary are added as environment variables. There is one setting that must be set:
- ACL_SYSTEM_CONTROLLER_PSK shall be set to the same value as in the settings file of the System Controller

Also, there are some parameters with default values that you will most probably want to override:
- ACL_SYSTEM_CONTROLLER_IP and ACL_SYSTEM_CONTROLLER_PORT shall be set to the location where the System Controller is hosted
- ACL_CONTROL_PANEL_NAME can be set to make it easier to identify the control panel through the REST API
- ACL_TCP_ENDPOINT, from release 6.0.0 on, contains the IP address and port of the TCP endpoint to be opened, written as <ip>:<port>. Several clients can connect to the same listening port. In release 5.0.0 and earlier, instead use ACL_TCP_ENDPOINTS, a comma-separated list of endpoints to be opened, where each endpoint is written as <ip>:<port>

You may also want to set the log location by setting ACL_LOG_PATH. The log location defaults to /tmp/.
As an example, from the installation directory, the TCP control panel is started with:
ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_TCP_ENDPOINT=0.0.0.0:7000 ACL_CONTROL_PANEL_NAME="My TCP Control Panel" ./acl-tcpcontrolpanel
Websocket control panel
acl-websocketcontrolpanel is a control panel application that acts as a websocket server, and relays control commands that come in over websockets, typically opened by web-based GUIs.
The basic settings are the same as for the TCP control panel. But to specify where the websocket server shall open an endpoint, use this environment variable:
- ACL_WEBSOCKET_ENDPOINT is the endpoint to listen for incoming messages on, on the form <ip>:<port>

You may also want to set the log location by setting ACL_LOG_PATH. The log location defaults to /tmp/.
An example of how to start it is the following:
ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_WEBSOCKET_ENDPOINT=0.0.0.0:6999 ACL_CONTROL_PANEL_NAME="My Websocket Control Panel" ./acl-websocketcontrolpanel
Manual control panel
acl-manualcontrolpanel is a basic Control Panel application operated from the command line.
The basic settings are the same as for the other generic control panels, but it does not open any network endpoints for input; instead, the control commands are typed in the terminal. It can be started with:
ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_CONTROL_PANEL_NAME="My Manual Control Panel" ./acl-manualcontrolpanel
3.1 - NTP
Agile Live NTP instructions
To ensure that all ingest servers and production pipelines are in sync, a clock synchronization protocol called NTP is required. Historically this has been handled by a systemd service called timesyncd.
Chrony
Starting from version 4.0.0 of Agile Live, the required NTP client has switched from systemd-timesyncd to chrony. Chrony is installed by default in many Linux distributions, since it has a lot of advantages over systemd-timesyncd. Some of the advantages are listed below.
- Full NTP support (systemd-timesyncd only supports Simple NTP)
- Syncing against multiple servers
- More complex but more accurate
- Gradually adjusts the system time instead of hard steps
Installation
Install chrony with:
$ sudo apt install chrony
This will replace systemd-timesyncd if installed.
NTP servers
The default configuration for chrony uses a pool of 8 NTP servers to synchronize against. It is recommended to use these default servers hosted by Ubuntu. Verify that the configuration is valid by checking the configuration file after installing.
$ cat /etc/chrony/chrony.conf
...
pool ntp.ubuntu.com iburst maxsources 4
pool 0.ubuntu.pool.ntp.org iburst maxsources 1
pool 1.ubuntu.pool.ntp.org iburst maxsources 1
pool 2.ubuntu.pool.ntp.org iburst maxsources 2
...
The file should also contain the line below for the correct handling of leap seconds. It sets the leap second timezone used to derive the correct TAI time:
leapsectz right/UTC
Useful Commands
For a simple check to see that the system time is synchronized, run:
$ timedatectl
Local time: Wed 2023-03-08 08:23:37 CET
Universal time: Wed 2023-03-08 07:23:37 UTC
RTC time: Wed 2023-03-08 07:23:37
Time zone: Europe/Stockholm (CET, +0100)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
To get more details on how the system time is running compared to NTP, use the chrony client chronyc. The subcommand tracking shows information about the current system time and the selected time server.
$ chronyc tracking
Reference ID : C23ACA14 (sth1.ntp.netnod.se)
Stratum : 2
Ref time (UTC) : Wed Mar 08 07:23:37 2023
System time : 0.000662344 seconds slow of NTP time
Last offset : -0.000669114 seconds
RMS offset : 0.001053432 seconds
Frequency : 10.476 ppm slow
Residual freq : +0.001 ppm
Skew : 0.295 ppm
Root delay : 0.007340557 seconds
Root dispersion : 0.001045418 seconds
Update interval : 1028.7 seconds
Leap status : Normal
The sources subcommand gives a list of the other time servers that chrony uses.
$ chronyc sources -v
.-- Source mode '^' = server, '=' = peer, '#' = local clock.
/ .- Source state '*' = current best, '+' = combined, '-' = not combined,
| / 'x' = may be in error, '~' = too variable, '?' = unusable.
|| .- xxxx [ yyyy ] +/- zzzz
|| Reachability register (octal) -. | xxxx = adjusted offset,
|| Log2(Polling interval) --. | | yyyy = measured offset,
|| \ | | zzzz = estimated error.
|| | | \
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^- prod-ntp-5.ntp4.ps5.cano> 2 6 17 8 +59us[ -11us] +/- 23ms
^- pugot.canonical.com 2 6 17 7 -1881us[-1951us] +/- 48ms
^+ prod-ntp-4.ntp1.ps5.cano> 2 6 17 7 -1872us[-1942us] +/- 24ms
^+ prod-ntp-3.ntp1.ps5.cano> 2 6 17 6 +878us[ +808us] +/- 23ms
^- ntp4.flashdance.cx 2 6 17 7 -107us[ -176us] +/- 34ms
^+ time.cloudflare.com 3 6 17 8 -379us[ -449us] +/- 5230us
^* time.cloudflare.com 3 6 17 8 +247us[ +177us] +/- 5470us
^- ntp5.flashdance.cx 2 6 17 7 -1334us[-1404us] +/- 34ms
Leap seconds
Due to small variations in the Earth's rotation speed, a one-second time adjustment is occasionally introduced in UTC (Coordinated Universal Time). This is usually handled by stepping back the clock of the NTP server one second. Such hard steps can cause problems in a time-critical product like video production. To avoid this, the system uses a clock known as TAI (International Atomic Time), which does not introduce any leap seconds and is therefore slightly offset from UTC (37 seconds in 2023).
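To illustrate the relation between the two timescales, converting UTC to TAI is a fixed offset addition. This sketch hardcodes the 2023 offset of 37 seconds mentioned above; a real implementation would look the offset up in a leap-second table:

```python
from datetime import datetime, timedelta, timezone

TAI_UTC_OFFSET_2023 = 37  # seconds; TAI is ahead of UTC (as of 2023)

def utc_to_tai(utc_time: datetime) -> datetime:
    """Convert a UTC timestamp to TAI by adding the leap-second offset."""
    return utc_time + timedelta(seconds=TAI_UTC_OFFSET_2023)

utc = datetime(2023, 3, 8, 7, 23, 37, tzinfo=timezone.utc)
tai = utc_to_tai(utc)
print(tai.isoformat())  # 37 seconds ahead of the UTC input
```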
Leap smearing
Another technique for handling the introduction of a new leap second is called leap smearing. Instead of stepping back the clock in one big step, the leap second is introduced in many small microsecond increments over a longer period, usually 12 to 24 hours. This technique is used by most of the larger cloud providers. Most NTP servers that use leap smearing do not announce the current TAI offset and will therefore cause issues when used in combination with systems that use TAI time. It is therefore highly recommended to use Ubuntu's default NTP servers as described above, as none of these use leap smearing. Mixing leap-smearing and non-leap-smearing time servers will result in components in the system having clocks that are offset from each other by 37 seconds (as of 2023), as the ones using leap-smearing time servers without a TAI offset will set the TAI clock to UTC.
3.2 - Linux settings
Recommended changes to various Linux settings
UDP buffers
To improve performance when receiving and sending multiple streams, and to avoid overflowing the UDP buffers (which leads to receive buffer errors, visible with e.g. netstat -suna), it is recommended to adjust some UDP buffer settings on the host system.
The recommended buffer size is 16 MB, but different setups may need larger buffers, or may work fine with smaller ones. To change the buffer values, run:
sudo sysctl -w net.core.rmem_default=16000000
sudo sysctl -w net.core.rmem_max=16000000
sudo sysctl -w net.core.wmem_default=16000000
sudo sysctl -w net.core.wmem_max=16000000
To make the changes persistent, add the values to /etc/sysctl.conf:
net.core.rmem_default=16000000
net.core.rmem_max=16000000
net.core.wmem_default=16000000
net.core.wmem_max=16000000
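To verify that the settings took effect on a running host, you can compare the output of sysctl against the recommended value. A small sketch, assuming the sysctl output has been captured as text:

```python
RECOMMENDED = 16_000_000  # 16 MB, the recommended value above

def undersized_buffers(sysctl_output: str, minimum: int = RECOMMENDED) -> list:
    """Return the names of buffer settings below the recommended size.

    Expects lines in the sysctl output format: 'key = value'.
    """
    too_small = []
    for line in sysctl_output.strip().splitlines():
        key, _, value = (part.strip() for part in line.partition("="))
        if int(value) < minimum:
            too_small.append(key)
    return too_small

# Example output from: sysctl net.core.rmem_max net.core.wmem_max
sample = """\
net.core.rmem_max = 212992
net.core.wmem_max = 16000000
"""
print(undersized_buffers(sample))  # ['net.core.rmem_max']
```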
Core dump
On the rare occasion that a segmentation fault or similar error occurs, a core dump is generated by the operating system, recording the state of the application to aid troubleshooting.
By default these core dumps are handled by the apport service in Ubuntu and are not written to file.
To store them to file instead, we recommend installing the systemd core dump service.
sudo apt install systemd-coredump
Core dumps will now be compressed and stored in /var/lib/systemd/coredump.
The systemd core dump package also includes a helpful CLI tool for accessing core dumps.
List all core dumps:
$ coredumpctl list
TIME PID UID GID SIG COREFILE EXE
Thu 2022-10-20 17:24:16 CEST 985485 1007 1008 11 present /usr/bin/example
You can print some basic information from the core dump to send to the Agile Live developers to aid in debugging the crash:
$ coredumpctl info 985485
In some cases, the developers might want to see the entire core dump to understand the crash; in that case, save the core dump using:
$ coredumpctl dump 985485 --output=myfilename.coredump
Core dumps will not persist between reboots.
3.3 - Disabling unattended upgrades of CUDA toolkit
Recommended changes to the unattended upgrades
CUDA Driver/library mismatch
By default Ubuntu runs automatic upgrades of APT-installed packages every night, a process called unattended upgrades. This sometimes causes issues when the unattended upgrade process tries to update the CUDA toolkit: it is able to update the libraries, but cannot update the driver, as the driver is in use by some application. The result is that CUDA cannot be used because of the driver-library mismatch, and the Agile Live applications will fail to start.
If an Agile Live application fails to start with some CUDA error, the easiest way to see if this is caused by a driver-library mismatch is to run nvidia-smi in a terminal and check the output:
$ nvidia-smi
Failed to initialize NVML: Driver/library version mismatch
If this happens, just reboot the machine; the new CUDA driver will be loaded instead of the old one at startup. (It is sometimes possible to manually reload the CUDA driver without rebooting, but this can be troublesome, as you have to find all applications using the driver.)
One way to avoid these unpleasant surprises, where the applications suddenly can no longer start, without disabling unattended upgrades for other packages, is to blacklist the CUDA toolkit from the unattended upgrades process. Ubuntu will still update other packages, but the CUDA toolkit will require a manual upgrade, which can be done at a suitable time.
To blacklist the CUDA driver, open the file /etc/apt/apt.conf.d/50unattended-upgrades in a text editor and look for the Unattended-Upgrade::Package-Blacklist block. Add "cuda" and "(lib)?nvidia" to the list, like so:
// Python regular expressions, matching packages to exclude from upgrading
Unattended-Upgrade::Package-Blacklist {
    "cuda";
    "(lib)?nvidia";
};
This disables unattended upgrades for packages with names starting with cuda, libnvidia or nvidia.
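Since the blacklist entries are Python regular expressions matched against the beginning of package names, you can check in advance which packages an entry will cover. A quick sketch; the package names below are just illustrative examples:

```python
import re

# The two entries added to Unattended-Upgrade::Package-Blacklist above
blacklist = ["cuda", "(lib)?nvidia"]

def is_blacklisted(package: str) -> bool:
    """True if any blacklist regex matches the start of the package name."""
    return any(re.match(pattern, package) for pattern in blacklist)

for pkg in ["cuda-drivers", "libnvidia-compute-535", "nvidia-utils-535", "libssl3"]:
    print(pkg, is_blacklisted(pkg))
```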
Once you want to manually update the CUDA toolkit and the driver, run the following commands:
$ sudo apt update
$ sudo apt upgrade
$ sudo apt install cuda
This will likely trigger the driver-library mismatch if you run nvidia-smi; reboot the machine to load the new driver.
Sometimes a reboot is required before the cuda package can be installed, so if the upgrade does not work the first time, try a reboot, then apt install cuda, and then reboot again.
3.4 - Installing older releases
Installation instructions for parts of the base platform for older releases
This page describes a couple of steps in installing the base platform that are specific for releases 5.0.0 and older, and replaces these sections in the main documentation.
Base packages installed via ‘apt’
To make it possible to run the Ingest and Rendering Engine, some packages must be installed.
The following base packages can be installed with the apt package manager:
sudo apt -y update && sudo apt -y upgrade && sudo apt -y install libssl3 libfdk-aac2 libopus0 libcpprest2.10 libspdlog1 libfmt8 libsystemd0 chrony libcairo2 systemd-coredump
The systemd-coredump package in the list above is handy to have installed, to be able to debug core dumps later on.
Installing the Agile Live components
Extract the following release artifacts from the release files listed under Releases:
- Ingest application, consisting of the binary acl-ingest and the shared library libacl-ingest.so
- Production pipeline library libacl-productionpipeline.so and associated header files
- Rendering engine application acl-renderingengine
- Production control panel interface library libacl-controldatasender.so and associated header files
- Generic control panel applications acl-manualcontrolpanel, acl-tcpcontrolpanel and acl-websocketcontrolpanel
4 - Configuration and monitoring GUI
Installing the Configuration and monitoring GUI
The following guide shows how to run the configuration and monitoring GUI using Docker Compose. The OS-specific parts of the guide, how to install Docker and the AWS CLI, are written for Ubuntu 22.04.
Install Docker Engine
First install Docker on the machine where you want to run the GUI and the GUI database:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Login to Docker container registry
To access the Agile Docker container registry and pull the GUI image, you need to install and log in to the AWS CLI.
Install AWS CLI:
sudo apt-get install unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
Configure AWS access using the access key ID and secret access key that you should have received from Agile Content. Set the region to eu-north-1 and leave the output format as None (just hit enter):
aws configure --profile agile-live-gui
AWS Access Key ID [None]: AKIA****************
AWS Secret Access Key [None]: ****************************************
Default region name [None]: eu-north-1
Default output format [None]:
Log in to Agile's AWS Elastic Container Registry to be able to pull the container image 263637530387.dkr.ecr.eu-north-1.amazonaws.com/agile-live-gui. If your user is in the docker group, the sudo part can be skipped.
aws --profile agile-live-gui ecr get-login-password | sudo docker login --username AWS --password-stdin 263637530387.dkr.ecr.eu-north-1.amazonaws.com
MongoDB configuration
The GUI uses MongoDB to store information. When starting up the MongoDB Docker container it will create a database user that the GUI backend will use to communicate with the Mongo database.
Create the file mongo-init.js for the API user (replace <API_USER> and <API_PASSWORD> with a username and password of your choice):
productionsDb = db.getSiblingDB('agile-live-gui');
productionsDb.createUser({
  user: '<API_USER>',
  pwd: '<API_PASSWORD>',
  roles: [{ role: 'readWrite', db: 'agile-live-gui' }]
});
Create a docker-compose.yml file
Create a file docker-compose.yml to start the GUI backend and the MongoDB instance, with the following content:
version: '3.7'
services:
  agileui:
    image: 263637530387.dkr.ecr.eu-north-1.amazonaws.com/agile-live-gui:latest
    container_name: agileui
    environment:
      MONGODB_URI: mongodb://<API_USER>:<API_PASSWORD>@mongodb:27017/agile-live-gui
      AGILE_URL: https://<SYSTEM_CONTROLLER_IP>:<SYSTEM_CONTROLLER_PORT>
      AGILE_CREDENTIALS: <SYSTEM_CONTROLLER_USER>:<SYSTEM_CONTROLLER_PASSWORD>
      NODE_TLS_REJECT_UNAUTHORIZED: 1
      NEXTAUTH_SECRET: <INSERT_GENERATED_SECRET>
      NEXTAUTH_URL: http://<SERVER_IP>:<SERVER_PORT>
      BCRYPT_SALT_ROUNDS: 10
      UI_LANG: en
    ports:
      - <HOST_PORT>:3000
  mongodb:
    image: mongo:latest
    container_name: mongodb
    environment:
      MONGO_INITDB_ROOT_USERNAME: <MONGODB_USERNAME>
      MONGO_INITDB_ROOT_PASSWORD: <MONGODB_PASSWORD>
    ports:
      - 27017:27017
    volumes:
      - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
      - ./mongodb:/data/db
Details about some of the parameters:
- image - Use the latest tag to always start the latest version, or set :X.Y.Z to lock to a specific version.
- ports - Internally port 3000 is used; add a mapping from the HOST_PORT of your choice to access the GUI.
- extra_hosts - Adding "host.docker.internal:host-gateway" makes it possible for the Docker container to access the host machine via host.docker.internal.
- MONGODB_URI - The MongoDB connection string including credentials; API_USER and API_PASSWORD should match what you wrote in mongo-init.js.
- AGILE_URL - The URL to the Agile-live System Controller REST API.
- AGILE_CREDENTIALS - Credentials for the Agile-live System Controller REST API, used by the backend to talk to the Agile-live REST API.
- NODE_TLS_REJECT_UNAUTHORIZED - Set to 0 to disable SSL verification. This is useful when testing with self-signed certificates.
- NEXTAUTH_SECRET - The secret used to encrypt the JWT token; can be generated with openssl rand -base64 32.
- NEXTAUTH_URL - The base URL for the service, used for redirecting after login, e.g. http://my-host-machine-ip:3000.
- BCRYPT_SALT_ROUNDS - The number of salt rounds the bcrypt hashing function will perform; you probably don't want this higher than 13 (13 = ~1 hash/second).
- UI_LANG - Sets the language for the user interface (en|sv). Default is en.
- MONGODB_USERNAME - Select a username for the root user in MongoDB.
- MONGODB_PASSWORD - Select a password for the root user in MongoDB.
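One detail worth noting about MONGODB_URI: if the API password contains characters such as @, : or /, they must be percent-encoded in the connection string, or MongoDB will fail to parse it. A small helper sketch; the credentials below are placeholders:

```python
from urllib.parse import quote_plus

def build_mongodb_uri(user: str, password: str,
                      host: str = "mongodb", port: int = 27017,
                      database: str = "agile-live-gui") -> str:
    """Build a MongoDB connection string with percent-encoded credentials."""
    return (f"mongodb://{quote_plus(user)}:{quote_plus(password)}"
            f"@{host}:{port}/{database}")

print(build_mongodb_uri("api", "p@ss:word"))
# mongodb://api:p%40ss%3Aword@mongodb:27017/agile-live-gui
```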
Starting, stopping and signing in to the Agile-live GUI
Start the UI container by running Docker Compose in the same directory as the docker-compose.yml file:
sudo docker compose up -d
Once the containers are started you should be able to reach the GUI at the HOST_PORT that you selected in the docker-compose.yml file.
To log in to the GUI, use the username admin and type a password longer than 8 characters. This password will be stored in the database and required for any subsequent logins. For more information on how to reset passwords and add new users, read more here.
After signing in to the GUI, the status indicators in the bottom left should all be green; if not, there is probably a problem with your System Controller or database configuration in the docker-compose.yml file.
To stop the GUI and database containers, run:
sudo docker compose down
Installing support for VLC on the client machine, to open the Multiview directly from the GUI:
On Windows, the GUI supports redirecting the user via links to VLC. For this to work, the following open-source program needs to be installed: https://github.com/stefansundin/vlc-protocol/tree/main
This is not supported on Linux or macOS.
4.1 - Custom MongoDB installation
Instructions for installing MongoDB instead of running it in docker
This page contains instructions for making a manual installation of MongoDB, instead of running MongoDB in Docker as described here. In most cases we recommend running MongoDB in Docker, as it is easier to set up and manage.
Custom installation of MongoDB
Install the database software, MongoDB.
sudo apt-get install gnupg
curl -fsSL https://pgp.mongodb.com/server-6.0.asc | \
sudo gpg -o /usr/share/keyrings/mongodb-server-6.0.gpg \
--dearmor
echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-6.0.gpg ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list
sudo apt-get update
sudo apt-get install -y mongodb-org
Create an /etc/mongod.conf that contains the following:
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
#  engine:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo

security:
  authorization: enabled

#operationProfiling:
#replication:
#sharding:

## Enterprise-Only Options:
#auditLog:
#snmp:
Then start the database
sudo systemctl enable mongod
sudo systemctl start mongod
Create the file ~/mongosh-admin.js containing the following (replace <ADMIN_PASSWORD>):
use admin
db.createUser(
  {
    user: "admin",
    pwd: "<ADMIN_PASSWORD>",
    roles: [
      { role: "userAdminAnyDatabase", db: "admin" },
      { role: "readWriteAnyDatabase", db: "admin" }
    ]
  }
)
Create the admin user:
mongosh localhost < ~/mongosh-admin.js
Create the file ~/mongosh-api.js for the API user (replace <API_PASSWORD>):
productionsDb = db.getSiblingDB('agile-live-gui');
productionsDb.createUser({
  user: 'api',
  pwd: '<API_PASSWORD>',
  roles: [{ role: 'readWrite', db: 'agile-live-gui' }]
});
Create the API user:
mongosh -u admin -p <ADMIN_PASSWORD> < ~/mongosh-api.js
5 - Prometheus and Grafana for monitoring
Installing and running Prometheus exporters, Prometheus database and Grafana
To collect useful system metrics for stability and performance monitoring, we advise using Prometheus. For visualizing the metrics collected by Prometheus you can use Grafana.
The System Controller receives a lot of internal metrics from the Agile Live components. These can be pulled by a Prometheus instance from the endpoint https://system-controller-host:8080/metrics.
It is also possible to install external exporters for various hardware
and OS metrics.
Prometheus
Installation
Use this guide to install Prometheus:
https://prometheus.io/docs/prometheus/latest/installation/.
Configuration
Prometheus should be configured to scrape the System Controller with something like this in the prometheus.yml file:
scrape_configs:
  - job_name: 'system_controller_exporter'
    scrape_interval: 5s
    scheme: https
    tls_config:
      insecure_skip_verify: true
    static_configs:
      - targets: ['system-controller-host:8080']
External exporters
Node Exporter
Node Exporter is an exporter used for general hardware and OS
metrics, such as CPU load and memory usage.
Instructions for installation and configuration can be found here:
https://github.com/prometheus/node_exporter
Add a new scrape_config in prometheus.yml like so:
  - job_name: 'node_exporter'
    scrape_interval: 15s
    static_configs:
      - targets: ['node-exporter-host:9100']
DCGM Exporter
This exporter uses Nvidia DCGM to gather metrics from Nvidia GPUs, including encoder and decoder utilization.
More info and installation instructions to be found here: https://github.com/NVIDIA/dcgm-exporter
Add a new scrape_config in prometheus.yml like so:
  - job_name: 'dcgm_exporter'
    scrape_interval: 15s
    static_configs:
      - targets: ['dcgm-exporter-host:9400']
Grafana
Installation of Grafana is described here:
https://grafana.com/docs/grafana/latest/setup-grafana/installation/
As a start, the following Dashboards can be used to visualize the Node Exporter and DCGM Exporter data:
Example of running Node Exporter and DCGM Exporter with Docker Compose
To simplify the setup of the Node Exporter and DCGM Exporter on the machines you want to monitor, the following example Docker Compose file can be used.
First, after a normal installation of Docker and the Docker Compose plugin, the Nvidia Container Toolkit must be installed and configured to allow access to the Nvidia GPU from inside a Docker container:
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
Then the following example docker-compose.yml file can be used to start both the Node Exporter and the DCGM Exporter:
version: '3.8'
services:
  node_exporter:
    image: quay.io/prometheus/node-exporter:latest
    container_name: node_exporter
    command:
      - '--path.rootfs=/host'
    network_mode: host
    pid: host
    restart: unless-stopped
    volumes:
      - '/:/host:ro,rslave'
  dcgm_exporter:
    image: nvcr.io/nvidia/k8s/dcgm-exporter:3.3.3-3.3.1-ubuntu22.04
    container_name: dcgm_exporter
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [ gpu ]
    restart: unless-stopped
    environment:
      - DCGM_EXPORTER_NO_HOSTNAME=1
    cap_add:
      - SYS_ADMIN
    ports:
      - "9400:9400"
Start the Docker containers as usual with docker compose up -d.
To verify that the exporters work, you can use curl to access the metrics data: curl localhost:9100/metrics for the Node Exporter and curl localhost:9400/metrics for the DCGM Exporter. Note that the DCGM Exporter might take several seconds before the first metrics are collected, so the first requests might yield an empty response body.
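If you want to spot-check a metric without a full Prometheus setup, the text exposition format returned by /metrics is easy to parse for simple cases. A minimal sketch that ignores labels, histograms and escaping; the sample lines are illustrative:

```python
def parse_simple_metrics(text: str) -> dict:
    """Parse label-free samples from Prometheus text exposition format."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comment lines
            continue
        name, _, value = line.rpartition(" ")
        if "{" not in name:  # keep only label-free samples, for simplicity
            metrics[name] = float(value)
    return metrics

sample = """\
# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.52
node_memory_MemFree_bytes 1.6082944e+09
"""
print(parse_simple_metrics(sample))
```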
6 - Control panel GUI examples
Installing and configuring some Control panel GUI examples
The rendering engine, which contains the video mixer, HTML renderer, audio mixer, etc., is configured and controlled via a control command API over the control connections. Some source code examples of control panel integrations are provided, together with third-party software recommendations. All of these use the generic control panels as proxies when communicating with the rendering engine. However, it is recommended that integrators, when possible, write their own control panel integrations using the libraries and header files provided by the base layer of the platform, instead of relaying the commands via the generic control panels.
Streamdeck and Companion
A good starting point for a control panel to control the video (and audio) mixer is to use an Elgato Streamdeck in combination with the open-source software Bitfocus Companion.
The source code examples
Unzip the file agile-live-unofficial-tools-5.0.0.zip to find several source code examples of control panel integrations in various languages. Some communicate with the generic TCP control panel, some with the generic WebSocket control panel, and some directly with the REST API. The source code is free to reuse and adapt.
The audio control interface and GUI
An example audio control panel interface and GUI is provided as source code in Python 3. It attempts to find control panel hardware, either the Korg nanoKONTROL2 or the Behringer X-Touch Extender, via MIDI, and to communicate with it using the Mackie Control protocol. (Note that the Korg nanoKONTROL2 does not use the Mackie protocol by default. Refer to the nanoKONTROL2 user manual for how to set it up.)
The audio control panel uses the tkinter GUI library from the Python standard library, and also depends on the Python packages python_rtmidi and mido, which can be installed using pip3 install.
The chroma key control panels
There is a Python 3 + tkinter based GUI for configuring the chroma key ("green screen") functionality, which communicates with the generic TCP control panel.
There is also an HTML-based version that communicates with the generic WebSocket control panel.
The thumbnail viewer
The thumbnail viewer is an example web GUI that polls the REST API and visualizes all ingests and sources connected to a given System Controller. Thumbnails of all sources are shown and updated every 5 seconds.
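The polling pattern the thumbnail viewer uses can be sketched as follows. The endpoint path and response shape are hypothetical placeholders, and the fetch function is injected so the loop can be exercised without a running System Controller:

```python
import time
from typing import Callable

def poll_sources(fetch: Callable[[str], dict], iterations: int,
                 interval_s: float = 5.0,
                 url: str = "https://system-controller-host:8080/sources"):
    """Repeatedly fetch the source list and collect each snapshot.

    `fetch` is any function that takes a URL and returns parsed JSON;
    the 5 second default interval mirrors the thumbnail refresh rate.
    """
    snapshots = []
    for i in range(iterations):
        snapshots.append(fetch(url))
        if i < iterations - 1:
            time.sleep(interval_s)
    return snapshots

# Exercise the loop with a stub instead of a real REST API:
stub = lambda url: {"sources": [{"id": 1, "name": "camera-1"}]}
print(poll_sources(stub, iterations=2, interval_s=0.0))
```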