The base platform

Installation instructions for the base platform

This page describes the steps needed to install and configure the base platform.

Install Third-party packages

Agile Live has a few dependencies on third-party software that must be installed separately first. Which ones are needed depends on which Agile Live components you will run on the specific system.

A machine running the Ingest needs the following:

  • CUDA Toolkit
  • oneVPL
  • Blackmagic Desktop Video (optional, needed only when using Blackmagic DeckLink SDI cards)
  • NDI SDK for Linux
  • Base packages via ‘apt’

A machine running the Rendering Engine only needs a subset:

  • CUDA Toolkit
  • Base packages via ‘apt’

A machine running Control Panels needs only:

  • Base packages via ‘apt’

CUDA Toolkit

For video encoding and decoding on NVIDIA GPUs you need the NVIDIA CUDA Toolkit installed. It must be installed on Ingest machines even if they do not have NVIDIA cards.

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb

sudo apt update
sudo apt -y install cuda

You need to reboot the system after installation, then check the installation by running nvidia-smi. If you are installing more dependencies, it is fine to proceed with those and reboot once after all parts are installed.
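
A minimal check after the reboot could look like this (the exact nvidia-smi output depends on your GPU and driver version):

sudo reboot

# After the machine is back up, this should list the installed GPUs
# and the driver/CUDA versions without errors:
nvidia-smi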

oneVPL

All machines running the Ingest application need the Intel oneAPI Video Processing Library (oneVPL) installed. It is used for encoding and thumbnail processing on Intel GPUs and is required even for Ingests which will not use the Intel GPU for encoding. Install oneVPL by running:

sudo apt install -y gpg-agent wget

# Download and run oneVPL offline installer:
# https://www.intel.com/content/www/us/en/developer/articles/tool/oneapi-standalone-components.html#onevpl
VPL_INSTALLER=l_oneVPL_p_2023.1.0.43488_offline.sh
wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/e2f5aca0-c787-4e1d-a233-92a6b3c0c3f2/$VPL_INSTALLER
chmod +x $VPL_INSTALLER
sudo ./$VPL_INSTALLER -a -s --eula accept

# Install the latest general purpose GPU (GPGPU) run-time and development software packages
# https://dgpu-docs.intel.com/driver/client/overview.html

wget -qO - https://repositories.intel.com/graphics/intel-graphics.key | \
  sudo gpg --dearmor --output /usr/share/keyrings/intel-graphics.gpg
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/graphics/ubuntu jammy arc' | \
  sudo tee  /etc/apt/sources.list.d/intel.gpu.jammy.list
sudo apt update
sudo apt install -y ffmpeg intel-media-va-driver-non-free libdrm2 libmfx-gen1.2 libva2

# Add VA environment variables
if [[ -z "${LIBVA_DRIVERS_PATH}" ]]; then
  sudo sh -c "echo 'LIBVA_DRIVERS_PATH=/usr/lib/x86_64-linux-gnu/dri' >> /etc/environment"
fi
if [[ -z "${LIBVA_DRIVER_NAME}" ]]; then
  sudo sh -c "echo 'LIBVA_DRIVER_NAME=iHD' >> /etc/environment"
fi

# Give users access to the GPU device for compute and render node workloads
# https://www.intel.com/content/www/us/en/develop/documentation/installation-guide-for-intel-oneapi-toolkits-linux/top/prerequisites/install-intel-gpu-drivers.html
sudo usermod -aG video,render ${USER}

# Set environment variables for development
echo "# oneAPI environment variables for development" >> ~/.bashrc
echo "source /opt/intel/oneapi/setvars.sh > /dev/null" >> ~/.bashrc

You need to reboot the system after installation. If you are installing more dependencies, it is fine to proceed with those and reboot once after all parts are installed.
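
If you want to verify the VA-API setup after the reboot, one option is the vainfo tool; installing it is an extra, optional step not required by Agile Live, and the environment variables below are the same ones written to /etc/environment above:

# Optional check of the Intel media driver (vainfo is not installed by
# the steps above)
sudo apt install -y vainfo
LIBVA_DRIVERS_PATH=/usr/lib/x86_64-linux-gnu/dri LIBVA_DRIVER_NAME=iHD vainfo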

Support for Blackmagic Design SDI grabber cards

If the Ingest shall be able to use Blackmagic DeckLink SDI cards, the drivers need to be installed.

Download Desktop Video for Linux from https://www.blackmagicdesign.com/se/support/family/capture-and-playback.

Unpack and install (replace the version numbers in the commands below with the version you downloaded):

sudo apt install dctrl-tools dkms
tar -xvf Blackmagic_Desktop_Video_Linux_12.7.tar.gz
cd Blackmagic_Desktop_Video_Linux_12.7/deb/x86_64
sudo dpkg -i desktopvideo_12.7a4_amd64.deb
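
After installing and rebooting, you can check that the DeckLink kernel module was built and loaded; a quick sanity check, assuming the DKMS module is named as in current Desktop Video releases:

# The desktopvideo package builds its kernel module through DKMS
dkms status | grep -i blackmagic

# The module should be loaded after a reboot
lsmod | grep -i blackmagic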

Support for NDI sources

For machines running Ingests, you need to install the NDI library and additional dependencies:

## Install library
sudo apt -y install avahi-daemon avahi-discover avahi-utils libnss-mdns mdns-scan
wget https://downloads.ndi.tv/SDK/NDI_SDK_Linux/Install_NDI_SDK_v5_Linux.tar.gz
tar -xvzf Install_NDI_SDK_v5_Linux.tar.gz
PAGER=skip-pager ./Install_NDI_SDK_v5_Linux.sh <<< y
cd NDI\ SDK\ for\ Linux/lib/x86_64-linux-gnu/
NDI_LIBRARY_FILE=$(ls libndi.so.5.*.*)
cd ../../..
sudo cp NDI\ SDK\ for\ Linux/lib/x86_64-linux-gnu/${NDI_LIBRARY_FILE} /usr/lib/x86_64-linux-gnu
sudo ln -fs /usr/lib/x86_64-linux-gnu/${NDI_LIBRARY_FILE} /usr/lib/x86_64-linux-gnu/libndi.so
sudo ln -fs /usr/lib/x86_64-linux-gnu/${NDI_LIBRARY_FILE} /usr/lib/x86_64-linux-gnu/libndi.so.5
sudo ldconfig
sudo cp NDI\ SDK\ for\ Linux/include/* /usr/include/.
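
To confirm that the dynamic linker can find the NDI library and the symlinks created above, you can check the ldconfig cache:

# libndi.so, libndi.so.5 and the full versioned library should all be listed
ldconfig -p | grep libndi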

Base packages installed via ‘apt’

When installing version 5.0.0 or earlier, please refer to the Installing older releases section below for this step.

All systems that run an Ingest, Rendering Engine or Control Panel need chrony as an NTP client. The systemd-coredump package is also very handy for debugging core dumps later on, but is technically optional.

These packages can be installed with the apt package manager:

sudo apt -y update && sudo apt -y upgrade && sudo apt -y install chrony systemd-coredump

Preparing the base system for installation

To prepare the system for Agile Live, the NTP client must be configured correctly by following the NTP guide below, and some general Linux settings shall be configured by following the Linux settings guide. Also, CUDA should be prevented from automatically installing upgrades by following the guide on disabling unattended upgrades of the CUDA toolkit.

Installing the Agile Live components

When installing version 5.0.0 or earlier, please refer to the Installing older releases section below for this step.

The following Agile Live .deb packages are distributed:

  • acl-system-controller - Contains the System Controller application (explained in separate section)
  • acl-libingest - Contains the library used by the Ingest application
  • acl-ingest - Contains the Ingest application
  • acl-libproductionpipeline - Contains the libraries used by the Rendering Engine application
  • acl-renderingengine - Contains Agile Live’s default Rendering Engine application
  • acl-libcontroldatasender - Contains the libraries used by Control Panel applications
  • acl-controlpanels - Contains the generic Control Panel applications
  • acl-dev - Contains the header files required for development of custom Control Panels and Rendering Engines

All binaries will be installed in the directory /opt/agile_live/bin. Libraries are installed in /opt/agile_live/lib. Header files are installed in /opt/agile_live/include.
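
Since the binaries live in /opt/agile_live/bin, you may optionally add that directory to your PATH so the applications can be started without typing the full path; this is purely a convenience and not required by the packages:

# Optional: put the Agile Live binaries on the PATH for the current user
echo 'export PATH=$PATH:/opt/agile_live/bin' >> ~/.bashrc
source ~/.bashrc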

Ingest

To install the Ingest application, install the following .deb-packages:

sudo apt -y install ./acl-ingest_<version>_amd64.deb ./acl-libingest_<version>_amd64.deb

Rendering engine

To install the Rendering Engine application, install the following .deb-packages:

sudo apt -y install ./acl-renderingengine_<version>_amd64.deb ./acl-libproductionpipeline_<version>_amd64.deb

Control panels

To install the generic Control Panel applications acl-manualcontrolpanel, acl-tcpcontrolpanel and acl-websocketcontrolpanel, install the following .deb-packages:

sudo apt -y install ./acl-controlpanels_<version>_amd64.deb ./acl-libcontroldatasender_<version>_amd64.deb

Dev

To install the acl-dev package containing the header files needed for building your own applications, install the following .deb-package:

sudo apt -y install ./acl-dev_<version>_amd64.deb

Configuring and starting the Ingest(s)

The Ingest component is a binary that ingests raw video input sources such as SDI or NDI, then timestamps, processes and encodes the media and originates its transport to the rest of the platform.

Settings for the ingest binary are added as environment variables. There is one setting that must be set:

  • ACL_SYSTEM_CONTROLLER_PSK shall be set to the same as in the settings file of the System Controller

Also, there are some parameters that have default settings that you will most probably want to override:

  • ACL_SYSTEM_CONTROLLER_IP and ACL_SYSTEM_CONTROLLER_PORT shall be set to the location where the System Controller is hosted
  • ACL_INGEST_NAME is optional, but useful for identifying the Ingest in the REST interface when multiple instances are connected to the same System Controller

You may also want to set the log location by setting ACL_LOG_PATH. The log location defaults to /tmp/.

As an example, from the installation directory, the Ingest is started with:

ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_INGEST_NAME="My Ingest" ./acl-ingest

If started successfully, it will display a log message showing that the Ingest is up and running.

Running the Ingest SW as a service

It may be useful to run the Ingest software as a service in Ubuntu, so that it starts automatically on boot and restarts if it terminates abnormally. In Ubuntu 22.04, systemd manages the services. Below is an example of how such a service can be configured, started and monitored.

First create a “service unit file”, named for example /etc/systemd/system/acl-ingest.service and put the following in it:

[Unit]
Description=Agile Live Ingest service

[Service]
User=<YOUR_USER>
Environment=ACL_SYSTEM_CONTROLLER_PSK=<YOUR_PSK>
ExecStart=/bin/bash -c '\
source /opt/intel/oneapi/setvars.sh; \
exec /opt/agile_live/bin/acl-ingest'
Restart=always

[Install]
WantedBy=multi-user.target

Replace <YOUR_USER> with the user that will be executing the Ingest binary on the host machine. The Environment attribute should be set in the same way as described above for manually starting the binary.

After saving the service unit file, run:

sudo systemctl daemon-reload
sudo systemctl enable acl-ingest.service
sudo systemctl start acl-ingest

The service should now be started. The status of the service can be queried using:

sudo systemctl status acl-ingest.service
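
As the Ingest now runs under systemd, its console output can also be followed with journalctl (standard systemd tooling; this does not replace the log files written to ACL_LOG_PATH):

# Follow the Ingest service output live
sudo journalctl -u acl-ingest.service -f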

Configuring and starting the Production pipeline(s) including rendering engine

The Production Pipeline component and the Rendering Engine component are combined into one binary named acl-renderingengine. It terminates the transport of media streams from the Ingests, demuxes, decodes and aligns them according to timestamps and clock sync, mixes audio and video separately (including HTML-based graphics insertion), and finally encodes, muxes and originates transport of the finished mixed outputs as well as multiviews.

It is possible to run just one Production Pipeline when doing direct editing (the “traditional way”, in the stream), or to use two pipelines to set up a proxy editing workflow. In the latter case, one pipeline is designated for the Low Delay flow, which gives quick feedback on the editing commands, and one for the High Quality flow, which produces the final quality stream to broadcast to the viewers. The Production Pipeline is simply called a “pipeline” in the REST API. The Rendering Engine is not managed through the REST API but rather through the control command API from the Control Panels.

Settings for the production pipeline binary are added as environment variables. There is one setting that must be set:

  • ACL_SYSTEM_CONTROLLER_PSK shall be set to the same as in the settings file of the System Controller

Also, there are some parameters that have default settings that you will most probably want to override:

  • ACL_SYSTEM_CONTROLLER_IP and ACL_SYSTEM_CONTROLLER_PORT shall be set to the location where the System Controller is hosted
  • ACL_RENDERING_ENGINE_NAME is needed to identify the different production pipelines in the REST API and shall thus be set uniquely and descriptively
  • ACL_MY_PUBLIC_IP shall be set to the IP address or hostname where the ingest shall find the production pipeline. If the production pipeline is behind a NAT or needs DNS resolving, setting this variable is necessary
  • DISPLAY is needed but may already be set by the operating system. If not (because the server is headless) it needs to be set.

You may also want to set the log location by setting ACL_LOG_PATH. The log location defaults to /tmp/.

Here is an example of how to start two instances of the acl-renderingengine application (one for the Low Delay flow and one for the High Quality flow) and name them so that you and the GUI can tell them apart. First start the Low Delay instance by running:

ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_RENDERING_ENGINE_NAME="My LD Pipeline" ./acl-renderingengine

Then start the High Quality Rendering Engine in the same way:

ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_RENDERING_ENGINE_NAME="My HQ Pipeline" ./acl-renderingengine
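
Like the Ingest, the Rendering Engine can be run as a systemd service. Below is a minimal sketch of a unit file, for example /etc/systemd/system/acl-renderingengine.service, modelled on the Ingest service above; the values are placeholders, and DISPLAY may also need to be set as described earlier:

[Unit]
Description=Agile Live Rendering Engine service

[Service]
User=<YOUR_USER>
Environment=ACL_SYSTEM_CONTROLLER_PSK=<YOUR_PSK>
Environment=ACL_RENDERING_ENGINE_NAME=<YOUR_PIPELINE_NAME>
ExecStart=/opt/agile_live/bin/acl-renderingengine
Restart=always

[Install]
WantedBy=multi-user.target

Enable and start it in the same way as the Ingest service, using systemctl daemon-reload, enable and start.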

Configuring and starting the generic Control Panels

TCP control panel

acl-tcpcontrolpanel is a control panel application that starts a TCP server on a configurable set of endpoints and relays control commands over a control connection if one has been set up. It is preferably run geographically very close to the control panel whose control commands it will relay, but can alternatively run close to the production pipeline that it is connected to. To start with, it is easiest to run it on the same machine as the associated production pipeline.

Settings for the tcp control panel binary are added as environment variables. There is one setting that must be set:

  • ACL_SYSTEM_CONTROLLER_PSK shall be set to the same as in the settings file of the System Controller

Also, there are some parameters that have default settings that you will most probably want to override:

  • ACL_SYSTEM_CONTROLLER_IP and ACL_SYSTEM_CONTROLLER_PORT shall be set to the location where the System Controller is hosted
  • ACL_CONTROL_PANEL_NAME can be set to make it easier to identify the control panel through the REST API
  • ACL_TCP_ENDPOINT (from release 6.0.0 on) contains the IP address and port of the TCP endpoint to be opened, written as <ip>:<port>. Several clients can connect to the same listening port. In release 5.0.0 and earlier, instead use ACL_TCP_ENDPOINTS, a comma-separated list of endpoints to be opened, where each endpoint is written as <ip>:<port>

You may also want to set the log location by setting ACL_LOG_PATH. The log location defaults to /tmp/.

As an example, from the installation directory, the tcp control panel is started with:

ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_TCP_ENDPOINT=0.0.0.0:7000 ACL_CONTROL_PANEL_NAME="My TCP Control Panel" ./acl-tcpcontrolpanel
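
To quickly verify that the TCP control panel is listening on the configured endpoint, you can for example use netcat from another terminal; this only checks connectivity, the actual control commands are defined by the control command API:

# Should report that the connection to port 7000 succeeded
nc -vz 127.0.0.1 7000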

Websocket control panel

acl-websocketcontrolpanel is a control panel application that acts as a websocket server, and relays control commands that come in over websockets that are opened typically by web-based GUIs.

The basic settings are the same as for the TCP control panel, but to specify where the websocket server shall open an endpoint, use this environment variable:

  • ACL_WEBSOCKET_ENDPOINT is the endpoint to listen for incoming messages on, on the form <ip>:<port>

You may also want to set the log location by setting ACL_LOG_PATH. The log location defaults to /tmp/.

An example of how to start it is the following:

ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_WEBSOCKET_ENDPOINT=0.0.0.0:6999 ACL_CONTROL_PANEL_NAME="My Websocket Control Panel" ./acl-websocketcontrolpanel

Manual control panel

acl-manualcontrolpanel is a basic Control Panel application operated from the command line.

The basic settings are the same as for the other generic control panels, but it does not open any network endpoints for input; instead, the control commands are typed in the terminal. It can be started with:

ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_CONTROL_PANEL_NAME="My Manual Control Panel" ./acl-manualcontrolpanel

1 - NTP

Agile Live NTP instructions

To ensure that all Ingest servers and Production Pipelines are in sync, a clock synchronization protocol called NTP is required. Historically this has been handled by a systemd service called systemd-timesyncd.

Chrony

Starting from version 4.0.0 of Agile Live, the required NTP client has switched from systemd-timesyncd to chrony. Chrony is installed by default in many Linux distributions since it has a number of advantages over systemd-timesyncd, some of which are listed below.

  • Full NTP support (systemd-timesyncd only supports Simple NTP)
  • Syncing against multiple servers
  • More complex but more accurate
  • Gradually adjusts the system time instead of hard steps

Installation

Install chrony with:

$ sudo apt install chrony

This will replace systemd-timesyncd if installed.

NTP servers

The default configuration for chrony uses a pool of 8 NTP servers to synchronize against. It is recommended to use these default servers hosted by Ubuntu. Verify that the configuration is valid by checking the configuration file after installing.

$ cat /etc/chrony/chrony.conf
...
pool ntp.ubuntu.com        iburst maxsources 4
pool 0.ubuntu.pool.ntp.org iburst maxsources 1
pool 1.ubuntu.pool.ntp.org iburst maxsources 1
pool 2.ubuntu.pool.ntp.org iburst maxsources 2
...

The file should also contain the line below for the correct handling of leap seconds. It is used to set the leap second timezone to get the correct TAI time.

leapsectz right/UTC

Useful Commands

For a simple check to see that the system time is synchronized, run:

$ timedatectl
               Local time: Wed 2023-03-08 08:23:37 CET
           Universal time: Wed 2023-03-08 07:23:37 UTC
                 RTC time: Wed 2023-03-08 07:23:37
                Time zone: Europe/Stockholm (CET, +0100)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no

To get more details on how the system time is running compared to NTP, use the chrony client chronyc. The subcommand tracking shows information about the current system time and the selected time server used.

$ chronyc tracking
Reference ID    : C23ACA14 (sth1.ntp.netnod.se)
Stratum         : 2
Ref time (UTC)  : Wed Mar 08 07:23:37 2023
System time     : 0.000662344 seconds slow of NTP time
Last offset     : -0.000669114 seconds
RMS offset      : 0.001053432 seconds
Frequency       : 10.476 ppm slow
Residual freq   : +0.001 ppm
Skew            : 0.295 ppm
Root delay      : 0.007340557 seconds
Root dispersion : 0.001045418 seconds
Update interval : 1028.7 seconds
Leap status     : Normal

Use the sources subcommand to get a list of the other time servers that chrony uses.

$ chronyc sources -v

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current best, '+' = combined, '-' = not combined,
| /             'x' = may be in error, '~' = too variable, '?' = unusable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- prod-ntp-5.ntp4.ps5.cano>     2   6    17     8    +59us[  -11us] +/-   23ms
^- pugot.canonical.com           2   6    17     7  -1881us[-1951us] +/-   48ms
^+ prod-ntp-4.ntp1.ps5.cano>     2   6    17     7  -1872us[-1942us] +/-   24ms
^+ prod-ntp-3.ntp1.ps5.cano>     2   6    17     6   +878us[ +808us] +/-   23ms
^- ntp4.flashdance.cx            2   6    17     7   -107us[ -176us] +/-   34ms
^+ time.cloudflare.com           3   6    17     8   -379us[ -449us] +/- 5230us
^* time.cloudflare.com           3   6    17     8   +247us[ +177us] +/- 5470us
^- ntp5.flashdance.cx            2   6    17     7  -1334us[-1404us] +/-   34ms

Leap seconds

Due to small variations in the Earth’s rotation speed, a one-second time adjustment is occasionally introduced in UTC (Coordinated Universal Time). This is usually handled by stepping back the clock in the NTP server one second. Such hard steps could cause problems in a time-critical product like video production. To avoid this, the system uses a clock known as TAI (International Atomic Time), which does not introduce any leap seconds and is therefore slightly offset from UTC (37 seconds in 2023).

Leap smearing

Another technique for handling the introduction of a new leap second is called leap smearing. Instead of stepping back the clock in one big step, the leap second is introduced in many small microsecond-sized adjustments over a longer period, usually 12 to 24 hours. This technique is used by most of the larger cloud providers. Most NTP servers that use leap smearing do not announce the current TAI offset and will therefore cause issues if used in combination with systems that use TAI time. It is therefore highly recommended to use Ubuntu’s default NTP servers as described above, as none of them use leap smearing. Mixing leap-smearing and non-leap-smearing time servers will result in components in the system having clocks that are off from each other by 37 seconds (as of 2023), as the ones using leap-smearing time servers without a TAI offset will set the TAI clock to UTC.

2 - Linux settings

Recommended changes to various Linux settings

UDP buffers

To improve performance when receiving and sending multiple streams, and to avoid overflowing UDP buffers (which leads to receive buffer errors, identifiable using e.g. ‘netstat -suna’), it is recommended to adjust some UDP buffer settings on the host system.

The recommended buffer size is 16 MB, but different setups may need larger buffers or manage with smaller ones. To change the buffer values, run:

sudo sysctl -w net.core.rmem_default=16000000
sudo sysctl -w net.core.rmem_max=16000000
sudo sysctl -w net.core.wmem_default=16000000
sudo sysctl -w net.core.wmem_max=16000000

To make the changes persistent, edit or add the values in /etc/sysctl.conf:

net.core.rmem_default=16000000
net.core.rmem_max=16000000

net.core.wmem_default=16000000
net.core.wmem_max=16000000
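
The persisted values can be applied and checked without a reboot, and ‘netstat -suna’ (mentioned above) can be used to watch for receive buffer errors while streams are running:

# Apply the values from /etc/sysctl.conf and verify them
sudo sysctl -p
sysctl net.core.rmem_max net.core.wmem_max

# Look for increasing "receive buffer errors" under load
netstat -suna | grep -i errors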

Core dump

On the rare occasion that a segmentation fault or similar error occurs, a core dump is generated by the operating system to record the state of the application and aid troubleshooting. By default, these core dumps are handled by the ‘apport’ service in Ubuntu and are not written to file. To store them to file instead, we recommend installing the systemd core dump service.

sudo apt install systemd-coredump

Core dumps will now be compressed and stored in /var/lib/systemd/coredump.

The systemd core dump package also includes a helpful CLI tool for accessing core dumps.

List all core dumps:

$ coredumpctl list

TIME                            PID   UID   GID SIG COREFILE  EXE
Thu 2022-10-20 17:24:16 CEST 985485  1007  1008  11 present   /usr/bin/example

You can print some basic information from a core dump to send to the Agile Live developers to aid in debugging the crash:

$ coredumpctl info 985485

In some cases, the developers might want the entire core dump to understand the crash. In that case, save the core dump using:

$ coredumpctl dump 985485 --output=myfilename.coredump

Core dumps will not persist between reboots.

3 - Disabling unattended upgrades of CUDA toolkit

Recommended changes to the unattended upgrades

CUDA Driver/library mismatch

By default, Ubuntu runs automatic upgrades on APT-installed packages every night, a process called unattended upgrades. This sometimes causes issues when the unattended upgrades process tries to update the CUDA toolkit: it is able to update the libraries but cannot update the driver, as the driver is in use by some application. The result is that CUDA cannot be used because of the driver-library mismatch, and the Agile Live applications will fail to start. If an Agile Live application fails to start with a CUDA error, the easiest way to see whether a driver-library mismatch is the cause is to run nvidia-smi in a terminal and check the output:

$ nvidia-smi
Failed to initialize NVML: Driver/library version mismatch

If this happens, just reboot the machine and the new CUDA driver will be loaded at startup instead of the old one. (It is sometimes possible to manually reload the CUDA driver without rebooting, but this can be troublesome as you have to find all applications using the driver.)

How to disable unattended upgrades of CUDA toolkit

One way to avoid these unpleasant surprises, where the applications suddenly can no longer start, without disabling unattended upgrades for other packages, is to blacklist the CUDA toolkit from the unattended upgrades process. Ubuntu will then still update other packages, but the CUDA toolkit will require a manual upgrade, which can be done at a suitable time.

To blacklist the CUDA driver, open the file /etc/apt/apt.conf.d/50unattended-upgrades in a text editor and look for the Unattended-Upgrade::Package-Blacklist block. Add "cuda" and "(lib)?nvidia" to the list, like so:

// Python regular expressions, matching packages to exclude from upgrading
Unattended-Upgrade::Package-Blacklist {
    "cuda";
    "(lib)?nvidia";
};

This will disable unattended upgrades for packages with names starting with cuda, libnvidia or just nvidia.
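
To check that the blacklist has the intended effect, you can do a dry run of the unattended upgrades process (the unattended-upgrade command is part of Ubuntu’s unattended-upgrades package; the exact options may vary between versions):

# Dry run; blacklisted cuda/nvidia packages should be reported as skipped
sudo unattended-upgrade --dry-run --debug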

How to manually update CUDA toolkit

Once you want to manually update the CUDA toolkit and the driver, run the following commands:

$ sudo apt update
$ sudo apt upgrade
$ sudo apt install cuda

This will likely trigger the driver-library mismatch in case you run nvidia-smi. Then reboot the machine to reload the new driver.

Sometimes a reboot is required before the cuda package can be installed, so if the upgrade did not work the first time, try a reboot, then apt install cuda, and then reboot again.

4 - Installing older releases

Installation instructions parts of the base platform for older releases

This page describes a couple of steps in installing the base platform that are specific to releases 5.0.0 and older, and replaces the corresponding sections in the main documentation.

Base packages installed via ‘apt’

To make it possible to run the Ingest and Rendering Engine, some packages must be installed. The following base packages can be installed with the apt package manager:

sudo apt -y update && sudo apt -y upgrade && sudo apt -y install libssl3 libfdk-aac2 libopus0 libcpprest2.10 libspdlog1 libfmt8 libsystemd0 chrony libcairo2 systemd-coredump

The systemd-coredump package in the list above is very handy to have installed for debugging core dumps later on.

Installing the Agile Live components

Extract the following release artifacts from the release files listed under Releases:

  • Ingest application consisting of binary acl-ingest and shared library libacl-ingest.so
  • Production pipeline library libacl-productionpipeline.so and associated header files
  • Rendering engine application acl-renderingengine
  • Production control panel interface library libacl-controldatasender.so and associated header files
  • Generic control panel applications acl-manualcontrolpanel, acl-tcpcontrolpanel and acl-websocketcontrolpanel