This is the multi-page printable view of this section.
Agile Live
- 1: Overview
- 2: Introduction to Agile Live technology
- 3: Installation and configuration
- 3.1: Hardware examples
- 3.2: The System Controller
- 3.3: The base platform
- 3.3.1: NTP
- 3.3.2: Linux settings
- 3.3.3: Disabling unattended upgrades of CUDA toolkit
- 3.3.4: Installing older releases
- 3.4: Configuration and monitoring GUI
- 3.4.1: Custom MongoDB installation
- 3.5: Prometheus and Grafana for monitoring
- 3.6: Control panel GUI examples
- 4: Tutorials and How-To's
- 4.1: Getting Started
- 4.2: Using the configuration-and-monitoring GUI
- 4.3: Viewing the multiview and outputs
- 4.4: Creating a video mixer control panel in Companion
- 4.5: Using the audio control GUI
- 4.6: Setting up multi-view outputs using the REST API
- 4.7: Security in Agile Live
- 4.8: Statistics in Agile Live
- 4.9: Using Media Player metadata in HTML pages
- 5: Releases
- 5.1: Release 7.0.0
- 5.2: Release 6.0.0
- 5.3: Release 5.0.0
- 5.4: Release 4.0.0
- 5.5: Release 3.0.0
- 5.6: Release 2.0.0
- 5.7: Release 1.0.0
- 5.8: Upgrade instructions
- 6: Reference
- 6.1: Agile Live 7.0.0 developer reference
- 6.1.1: REST API v3
- 6.1.2: Agile Live Rendering Engine configuration documentation
- 6.1.3: Agile Live Rendering Engine command documentation
- 6.1.4: C++ SDK
- 6.1.4.1: Classes
- 6.1.4.1.1: AclLog::CommandLogFormatter
- 6.1.4.1.2: AclLog::FileLocationFormatterFlag
- 6.1.4.1.3: AclLog::ThreadNameFormatterFlag
- 6.1.4.1.4: AlignedAudioFrame
- 6.1.4.1.5: AlignedFrame
- 6.1.4.1.6: ControlDataCommon::ConnectionStatus
- 6.1.4.1.7: ControlDataCommon::Response
- 6.1.4.1.8: ControlDataCommon::StatusMessage
- 6.1.4.1.9: ControlDataSender
- 6.1.4.1.10: ControlDataSender::Settings
- 6.1.4.1.11: DeviceMemory
- 6.1.4.1.12: IControlDataReceiver
- 6.1.4.1.13: IControlDataReceiver::IncomingRequest
- 6.1.4.1.14: IControlDataReceiver::ReceiverResponse
- 6.1.4.1.15: IControlDataReceiver::Settings
- 6.1.4.1.16: IMediaStreamer
- 6.1.4.1.17: IMediaStreamer::Configuration
- 6.1.4.1.18: IMediaStreamer::Settings
- 6.1.4.1.19: IngestApplication
- 6.1.4.1.20: IngestApplication::Settings
- 6.1.4.1.21: ISystemControllerInterface
- 6.1.4.1.22: ISystemControllerInterface::Callbacks
- 6.1.4.1.23: ISystemControllerInterface::Response
- 6.1.4.1.24: MediaReceiver
- 6.1.4.1.25: MediaReceiver::NewStreamParameters
- 6.1.4.1.26: MediaReceiver::Settings
- 6.1.4.1.27: SystemControllerConnection
- 6.1.4.1.28: SystemControllerConnection::Settings
- 6.1.4.1.29: TimeCommon::TAIStatus
- 6.1.4.1.30: TimeCommon::TimeStructure
- 6.1.4.2: Files
- 6.1.4.2.1: include/AclLog.h
- 6.1.4.2.2: include/AlignedFrame.h
- 6.1.4.2.3: include/Base64.h
- 6.1.4.2.4: include/ControlDataCommon.h
- 6.1.4.2.5: include/ControlDataSender.h
- 6.1.4.2.6: include/DeviceMemory.h
- 6.1.4.2.7: include/IControlDataReceiver.h
- 6.1.4.2.8: include/IMediaStreamer.h
- 6.1.4.2.9: include/IngestApplication.h
- 6.1.4.2.10: include/IngestUtils.h
- 6.1.4.2.11: include/ISystemControllerInterface.h
- 6.1.4.2.12: include/MediaReceiver.h
- 6.1.4.2.13: include/PixelFormat.h
- 6.1.4.2.14: include/SystemControllerConnection.h
- 6.1.4.2.15: include/TimeCommon.h
- 6.1.4.2.16: include/UUIDUtils.h
- 6.1.4.3: Namespaces
- 6.1.4.3.1: ACL
- 6.1.4.3.2: AclLog
- 6.1.4.3.3: ControlDataCommon
- 6.1.4.3.4: IngestUtils
- 6.1.4.3.5: spdlog
- 6.1.4.3.6: TimeCommon
- 6.1.4.3.7: UUIDUtils
- 6.1.5: System controller config
- 6.2: Agile Live 6.0.0 developer reference
- 6.2.1: REST API v2
- 6.2.2: Agile Live Rendering Engine command documentation
- 6.2.3: C++ SDK
- 6.2.3.1: Classes
- 6.2.3.1.1: AclLog::CommandLogFormatter
- 6.2.3.1.2: AclLog::FileLocationFormatterFlag
- 6.2.3.1.3: AclLog::ThreadNameFormatterFlag
- 6.2.3.1.4: AlignedAudioFrame
- 6.2.3.1.5: AlignedFrame
- 6.2.3.1.6: ControlDataCommon::ConnectionStatus
- 6.2.3.1.7: ControlDataCommon::Response
- 6.2.3.1.8: ControlDataCommon::StatusMessage
- 6.2.3.1.9: ControlDataReceiver
- 6.2.3.1.10: ControlDataReceiver::IncomingRequest
- 6.2.3.1.11: ControlDataReceiver::ReceiverResponse
- 6.2.3.1.12: ControlDataReceiver::Settings
- 6.2.3.1.13: ControlDataSender
- 6.2.3.1.14: ControlDataSender::Settings
- 6.2.3.1.15: DeviceMemory
- 6.2.3.1.16: IngestApplication
- 6.2.3.1.17: IngestApplication::Settings
- 6.2.3.1.18: ISystemControllerInterface
- 6.2.3.1.19: ISystemControllerInterface::Callbacks
- 6.2.3.1.20: ISystemControllerInterface::Response
- 6.2.3.1.21: MediaReceiver
- 6.2.3.1.22: MediaReceiver::NewStreamParameters
- 6.2.3.1.23: MediaReceiver::Settings
- 6.2.3.1.24: MediaStreamer
- 6.2.3.1.25: MediaStreamer::Configuration
- 6.2.3.1.26: PipelineSystemControllerInterfaceFactory
- 6.2.3.1.27: SystemControllerConnection
- 6.2.3.1.28: SystemControllerConnection::Settings
- 6.2.3.1.29: TimeCommon::TAIStatus
- 6.2.3.1.30: TimeCommon::TimeStructure
- 6.2.3.2: Files
- 6.2.3.2.1: include/AclLog.h
- 6.2.3.2.2: include/AlignedFrame.h
- 6.2.3.2.3: include/Base64.h
- 6.2.3.2.4: include/ControlDataCommon.h
- 6.2.3.2.5: include/ControlDataReceiver.h
- 6.2.3.2.6: include/ControlDataSender.h
- 6.2.3.2.7: include/DeviceMemory.h
- 6.2.3.2.8: include/IngestApplication.h
- 6.2.3.2.9: include/IngestUtils.h
- 6.2.3.2.10: include/ISystemControllerInterface.h
- 6.2.3.2.11: include/MediaReceiver.h
- 6.2.3.2.12: include/MediaStreamer.h
- 6.2.3.2.13: include/PipelineSystemControllerInterfaceFactory.h
- 6.2.3.2.14: include/PixelFormat.h
- 6.2.3.2.15: include/SystemControllerConnection.h
- 6.2.3.2.16: include/TimeCommon.h
- 6.2.3.2.17: include/UUIDUtils.h
- 6.2.3.3: Namespaces
- 6.2.3.3.1: ACL
- 6.2.3.3.2: AclLog
- 6.2.3.3.3: ControlDataCommon
- 6.2.3.3.4: IngestUtils
- 6.2.3.3.5: spdlog
- 6.2.3.3.6: TimeCommon
- 6.2.3.3.7: UUIDUtils
- 6.2.4: System controller config
- 6.3: Agile Live 5.0.0 developer reference
- 6.3.1: REST API v2
- 6.3.2: Agile Live Rendering Engine command documentation
- 6.3.3: C++ SDK
- 6.3.3.1: Classes
- 6.3.3.1.1: AclLog::CommandLogFormatter
- 6.3.3.1.2: AclLog::FileLocationFormatterFlag
- 6.3.3.1.3: AclLog::ThreadNameFormatterFlag
- 6.3.3.1.4: AlignedAudioFrame
- 6.3.3.1.5: AlignedFrame
- 6.3.3.1.6: ControlDataCommon::ConnectionStatus
- 6.3.3.1.7: ControlDataCommon::Response
- 6.3.3.1.8: ControlDataCommon::StatusMessage
- 6.3.3.1.9: ControlDataReceiver
- 6.3.3.1.10: ControlDataReceiver::IncomingRequest
- 6.3.3.1.11: ControlDataReceiver::ReceiverResponse
- 6.3.3.1.12: ControlDataReceiver::Settings
- 6.3.3.1.13: ControlDataSender
- 6.3.3.1.14: ControlDataSender::Settings
- 6.3.3.1.15: DeviceMemory
- 6.3.3.1.16: IngestApplication
- 6.3.3.1.17: IngestApplication::Settings
- 6.3.3.1.18: ISystemControllerInterface
- 6.3.3.1.19: ISystemControllerInterface::Callbacks
- 6.3.3.1.20: ISystemControllerInterface::Response
- 6.3.3.1.21: MediaReceiver
- 6.3.3.1.22: MediaReceiver::NewStreamParameters
- 6.3.3.1.23: MediaReceiver::Settings
- 6.3.3.1.24: MediaStreamer
- 6.3.3.1.25: MediaStreamer::Configuration
- 6.3.3.1.26: PipelineSystemControllerInterfaceFactory
- 6.3.3.1.27: SystemControllerConnection
- 6.3.3.1.28: SystemControllerConnection::Settings
- 6.3.3.1.29: TimeCommon::TAIStatus
- 6.3.3.1.30: TimeCommon::TimeStructure
- 6.3.3.2: Files
- 6.3.3.2.1: include/AclLog.h
- 6.3.3.2.2: include/AlignedFrame.h
- 6.3.3.2.3: include/Base64.h
- 6.3.3.2.4: include/ControlDataCommon.h
- 6.3.3.2.5: include/ControlDataReceiver.h
- 6.3.3.2.6: include/ControlDataSender.h
- 6.3.3.2.7: include/DeviceMemory.h
- 6.3.3.2.8: include/IngestApplication.h
- 6.3.3.2.9: include/IngestUtils.h
- 6.3.3.2.10: include/ISystemControllerInterface.h
- 6.3.3.2.11: include/MediaReceiver.h
- 6.3.3.2.12: include/MediaStreamer.h
- 6.3.3.2.13: include/PipelineSystemControllerInterfaceFactory.h
- 6.3.3.2.14: include/SystemControllerConnection.h
- 6.3.3.2.15: include/TimeCommon.h
- 6.3.3.2.16: include/UUIDUtils.h
- 6.3.3.3: Namespaces
- 6.3.3.3.1: AclLog
- 6.3.3.3.2: ControlDataCommon
- 6.3.3.3.3: IngestUtils
- 6.3.3.3.4: spdlog
- 6.3.3.3.5: TimeCommon
- 6.3.3.3.6: UUIDUtils
- 6.3.4: System controller config
- 6.4: Agile Live 4.0.0 developer reference
- 6.4.1: REST API v2
- 6.4.2: Agile Live Rendering Engine command documentation
- 6.4.3: C++ SDK
- 6.4.3.1: Classes
- 6.4.3.1.1: AclLog::CommandLogFormatter
- 6.4.3.1.2: AlignedAudioFrame
- 6.4.3.1.3: AlignedFrame
- 6.4.3.1.4: ControlDataCommon::ConnectionStatus
- 6.4.3.1.5: ControlDataCommon::Response
- 6.4.3.1.6: ControlDataCommon::StatusMessage
- 6.4.3.1.7: ControlDataReceiver
- 6.4.3.1.8: ControlDataReceiver::IncomingRequest
- 6.4.3.1.9: ControlDataReceiver::ReceiverResponse
- 6.4.3.1.10: ControlDataReceiver::Settings
- 6.4.3.1.11: ControlDataSender
- 6.4.3.1.12: ControlDataSender::Settings
- 6.4.3.1.13: DeviceMemory
- 6.4.3.1.14: IngestApplication
- 6.4.3.1.15: IngestApplication::Settings
- 6.4.3.1.16: ISystemControllerInterface
- 6.4.3.1.17: ISystemControllerInterface::Callbacks
- 6.4.3.1.18: ISystemControllerInterface::Response
- 6.4.3.1.19: MediaReceiver
- 6.4.3.1.20: MediaReceiver::NewStreamParameters
- 6.4.3.1.21: MediaReceiver::Settings
- 6.4.3.1.22: MediaStreamer
- 6.4.3.1.23: MediaStreamer::Configuration
- 6.4.3.1.24: PipelineSystemControllerInterfaceFactory
- 6.4.3.1.25: SystemControllerConnection
- 6.4.3.1.26: SystemControllerConnection::Settings
- 6.4.3.1.27: TimeCommon::TAIStatus
- 6.4.3.1.28: TimeCommon::TimeStructure
- 6.4.3.2: Files
- 6.4.3.2.1: include/AclLog.h
- 6.4.3.2.2: include/AlignedFrame.h
- 6.4.3.2.3: include/Base64.h
- 6.4.3.2.4: include/ControlDataCommon.h
- 6.4.3.2.5: include/ControlDataReceiver.h
- 6.4.3.2.6: include/ControlDataSender.h
- 6.4.3.2.7: include/DeviceMemory.h
- 6.4.3.2.8: include/IngestApplication.h
- 6.4.3.2.9: include/IngestUtils.h
- 6.4.3.2.10: include/ISystemControllerInterface.h
- 6.4.3.2.11: include/MediaReceiver.h
- 6.4.3.2.12: include/MediaStreamer.h
- 6.4.3.2.13: include/PipelineSystemControllerInterfaceFactory.h
- 6.4.3.2.14: include/SystemControllerConnection.h
- 6.4.3.2.15: include/TimeCommon.h
- 6.4.3.2.16: include/UUIDUtils.h
- 6.4.3.3: Namespaces
- 6.4.3.3.1: AclLog
- 6.4.3.3.2: ControlDataCommon
- 6.4.3.3.3: IngestUtils
- 6.4.3.3.4: spdlog
- 6.4.3.3.5: TimeCommon
- 6.4.3.3.6: UUIDUtils
- 6.4.4: System controller config
- 6.5: Agile Live 3.0.0 developer reference
- 6.5.1: REST API v2
- 6.5.2: Agile Live Rendering Engine command documentation
- 6.5.3: C++ SDK
- 6.5.3.1: Classes
- 6.5.3.1.1: AlignedFrame
- 6.5.3.1.2: ControlDataCommon::ConnectionStatus
- 6.5.3.1.3: ControlDataCommon::Response
- 6.5.3.1.4: ControlDataCommon::StatusMessage
- 6.5.3.1.5: ControlDataReceiver
- 6.5.3.1.6: ControlDataReceiver::IncomingRequest
- 6.5.3.1.7: ControlDataReceiver::ReceiverResponse
- 6.5.3.1.8: ControlDataReceiver::Settings
- 6.5.3.1.9: ControlDataSender
- 6.5.3.1.10: ControlDataSender::Settings
- 6.5.3.1.11: DeviceMemory
- 6.5.3.1.12: IngestApplication
- 6.5.3.1.13: IngestApplication::Settings
- 6.5.3.1.14: ISystemControllerInterface
- 6.5.3.1.15: ISystemControllerInterface::Callbacks
- 6.5.3.1.16: ISystemControllerInterface::Response
- 6.5.3.1.17: MediaReceiver
- 6.5.3.1.18: MediaReceiver::NewStreamParameters
- 6.5.3.1.19: MediaReceiver::Settings
- 6.5.3.1.20: MediaStreamer
- 6.5.3.1.21: MediaStreamer::Configuration
- 6.5.3.1.22: PipelineSystemControllerInterfaceFactory
- 6.5.3.1.23: SystemControllerConnection
- 6.5.3.1.24: SystemControllerConnection::Settings
- 6.5.3.1.25: TimeCommon::TAIStatus
- 6.5.3.1.26: TimeCommon::TimeStructure
- 6.5.3.2: Files
- 6.5.3.2.1: include/AclLog.h
- 6.5.3.2.2: include/AlignedFrame.h
- 6.5.3.2.3: include/Base64.h
- 6.5.3.2.4: include/ControlDataCommon.h
- 6.5.3.2.5: include/ControlDataReceiver.h
- 6.5.3.2.6: include/ControlDataSender.h
- 6.5.3.2.7: include/DeviceMemory.h
- 6.5.3.2.8: include/IngestApplication.h
- 6.5.3.2.9: include/IngestUtils.h
- 6.5.3.2.10: include/ISystemControllerInterface.h
- 6.5.3.2.11: include/MediaReceiver.h
- 6.5.3.2.12: include/MediaStreamer.h
- 6.5.3.2.13: include/PipelineSystemControllerInterfaceFactory.h
- 6.5.3.2.14: include/SystemControllerConnection.h
- 6.5.3.2.15: include/TimeCommon.h
- 6.5.3.2.16: include/UUIDUtils.h
- 6.5.3.3: Namespaces
- 6.5.3.3.1: AclLog
- 6.5.3.3.2: ControlDataCommon
- 6.5.3.3.3: IngestUtils
- 6.5.3.3.4: TimeCommon
- 6.5.3.3.5: UUIDUtils
- 6.5.4: System controller config
- 6.6: Agile Live 2.0.0 developer reference
- 6.6.1: REST API v1
- 6.6.2: Agile Live Rendering Engine command documentation
- 6.6.3: C++ SDK
- 6.6.3.1: Classes
- 6.6.3.1.1: AlignedFrame
- 6.6.3.1.2: ControlDataCommon::ConnectionStatus
- 6.6.3.1.3: ControlDataCommon::Response
- 6.6.3.1.4: ControlDataCommon::StatusMessage
- 6.6.3.1.5: ControlDataReceiver
- 6.6.3.1.6: ControlDataReceiver::IncomingRequest
- 6.6.3.1.7: ControlDataReceiver::ReceiverResponse
- 6.6.3.1.8: ControlDataReceiver::Settings
- 6.6.3.1.9: ControlDataSender
- 6.6.3.1.10: ControlDataSender::Settings
- 6.6.3.1.11: DeviceMemory
- 6.6.3.1.12: IngestApplication
- 6.6.3.1.13: IngestApplication::Settings
- 6.6.3.1.14: ISystemControllerInterface
- 6.6.3.1.15: ISystemControllerInterface::Callbacks
- 6.6.3.1.16: ISystemControllerInterface::Response
- 6.6.3.1.17: MediaReceiver
- 6.6.3.1.18: MediaReceiver::NewStreamParameters
- 6.6.3.1.19: MediaReceiver::Settings
- 6.6.3.1.20: MediaStreamer
- 6.6.3.1.21: MediaStreamer::Configuration
- 6.6.3.1.22: PipelineSystemControllerInterfaceFactory
- 6.6.3.1.23: SystemControllerConnection
- 6.6.3.1.24: SystemControllerConnection::Settings
- 6.6.3.1.25: TimeCommon::TAIStatus
- 6.6.3.1.26: TimeCommon::TimeStructure
- 6.6.3.2: Files
- 6.6.3.2.1: include/AclLog.h
- 6.6.3.2.2: include/AlignedFrame.h
- 6.6.3.2.3: include/Base64.h
- 6.6.3.2.4: include/ControlDataCommon.h
- 6.6.3.2.5: include/ControlDataReceiver.h
- 6.6.3.2.6: include/ControlDataSender.h
- 6.6.3.2.7: include/DeviceMemory.h
- 6.6.3.2.8: include/IngestApplication.h
- 6.6.3.2.9: include/IngestUtils.h
- 6.6.3.2.10: include/ISystemControllerInterface.h
- 6.6.3.2.11: include/MediaReceiver.h
- 6.6.3.2.12: include/MediaStreamer.h
- 6.6.3.2.13: include/PipelineSystemControllerInterfaceFactory.h
- 6.6.3.2.14: include/SystemControllerConnection.h
- 6.6.3.2.15: include/TimeCommon.h
- 6.6.3.2.16: include/UUIDUtils.h
- 6.6.3.3: Namespaces
- 6.6.3.3.1: AclLog
- 6.6.3.3.2: ControlDataCommon
- 6.6.3.3.3: IngestUtils
- 6.6.3.3.4: TimeCommon
- 6.6.3.3.5: UUIDUtils
- 6.6.4: System controller config
- 6.7: Agile Live 1.0.0 developer reference
- 6.7.1: REST API v1
- 6.7.2: C++ SDK
- 6.7.2.1: Classes
- 6.7.2.1.1: AlignedData::DataFrame
- 6.7.2.1.2: ControlDataCommon::ConnectionStatus
- 6.7.2.1.3: ControlDataCommon::Response
- 6.7.2.1.4: ControlDataCommon::StatusMessage
- 6.7.2.1.5: ControlDataReceiver
- 6.7.2.1.6: ControlDataReceiver::IncomingRequest
- 6.7.2.1.7: ControlDataReceiver::ReceiverResponse
- 6.7.2.1.8: ControlDataReceiver::Settings
- 6.7.2.1.9: ControlDataSender
- 6.7.2.1.10: ControlDataSender::Settings
- 6.7.2.1.11: IngestApplication
- 6.7.2.1.12: IngestApplication::Settings
- 6.7.2.1.13: ISystemControllerInterface
- 6.7.2.1.14: ISystemControllerInterface::Callbacks
- 6.7.2.1.15: ISystemControllerInterface::Response
- 6.7.2.1.16: MediaReceiver
- 6.7.2.1.17: MediaReceiver::NewStreamParameters
- 6.7.2.1.18: MediaReceiver::Settings
- 6.7.2.1.19: MediaStreamer
- 6.7.2.1.20: MediaStreamer::Configuration
- 6.7.2.1.21: SystemControllerConnection
- 6.7.2.1.22: SystemControllerConnection::Settings
- 6.7.2.1.23: TimeCommon::TAIStatus
- 6.7.2.1.24: TimeCommon::TimeStructure
- 6.7.2.2: Files
- 6.7.2.2.1: include/AclLog.h
- 6.7.2.2.2: include/AlignedData.h
- 6.7.2.2.3: include/Base64.h
- 6.7.2.2.4: include/ControlDataCommon.h
- 6.7.2.2.5: include/ControlDataReceiver.h
- 6.7.2.2.6: include/ControlDataSender.h
- 6.7.2.2.7: include/IngestApplication.h
- 6.7.2.2.8: include/IngestUtils.h
- 6.7.2.2.9: include/ISystemControllerInterface.h
- 6.7.2.2.10: include/MediaReceiver.h
- 6.7.2.2.11: include/MediaStreamer.h
- 6.7.2.2.12: include/SystemControllerConnection.h
- 6.7.2.2.13: include/TimeCommon.h
- 6.7.2.2.14: include/UUIDUtils.h
- 6.7.2.3: Namespaces
- 6.7.2.3.1: AclLog
- 6.7.2.3.2: AlignedData
- 6.7.2.3.3: ControlDataCommon
- 6.7.2.3.4: IngestUtils
- 6.7.2.3.5: TimeCommon
- 6.7.2.3.6: UUIDUtils
- 6.7.3: System controller config
- 7: Troubleshooting
- 8: Credits
- 8.1: C++ SDK Credits
- 8.2: REST API Credits
1 - Overview
Agile Live is a software-based platform for production of media using standard, off-the-shelf hardware and datacenter building blocks.
The Agile Live system consists of multiple software components, distributed geographically between recording studios, remote production control rooms and datacenters.
All Agile Live components share the same understanding of the time ‘now’. The time reference used by all software components is TAI (Temps Atomique International), which provides a monotonic, steady time reference regardless of geographic location.
This makes it possible to compensate for differences in relative delay between the media streams. The synchronized timing makes the system unique compared to other solutions, which are usually bound to buffer levels and various queueing mechanisms.
The synchronization of the streams transported over networks with various delays makes it possible to build a distributed production environment, where the mixing of the video and audio sources can take place at a different location compared to the studio where they were captured.
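Because TAI has no leap seconds, it runs a fixed number of seconds ahead of UTC and never repeats or skips a second. A minimal Python sketch of the conversion follows; the 37-second offset is the TAI−UTC value in force since 2017, and a real implementation would need to track leap-second announcements rather than hard-code it:

```python
# Sketch: converting a UTC wall-clock time to TAI.
# ASSUMPTION: the TAI-UTC offset is 37 seconds (valid since 2017); a real
# implementation must follow leap-second announcements as they are issued.
from datetime import datetime, timedelta, timezone

TAI_UTC_OFFSET = timedelta(seconds=37)

def utc_to_tai(utc_time: datetime) -> datetime:
    """Return the TAI time corresponding to a UTC timestamp."""
    return utc_time + TAI_UTC_OFFSET

now_utc = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
now_tai = utc_to_tai(now_utc)
# TAI increases monotonically everywhere on Earth, which is what makes it
# usable as a single production-wide clock for timestamping media frames.
```

Since every component stamps frames against this one monotonic timeline, differences in network delay can be measured and compensated rather than hidden behind buffers.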
System overview
Media sources, such as video cameras and microphones, are connected to a computer called Ingest. In practice, this will usually be some kind of off-the-shelf desktop computer, potentially equipped with a frame grabber card to allow physical input interfaces such as SDI cables, but it could also be a smartphone with a dedicated app using the built-in camera.
The Ingest is responsible for receiving the media data from the source, encoding it, and transporting it over a network to the production environment in the cloud/datacenter.
From the datacenter the remote production team can get multi-view streams of the available media sources to base their production decisions on, as well as a low-delay feedback display showing the result of the production.
The production in the datacenter also outputs a high-quality program stream, which is broadcast to the viewers. Both the multi-views and the feedback display will usually have a lower quality than this program output. This keeps the end-to-end delay between what happens in front of the cameras in the studio and the output stream on the feedback display as short as possible.
This in turn allows for feedback loops back to the studio, for instance for video cameras that are remotely controlled by the production team based on what they see on the multi-views, or having additional feedback displays in the studio.
2 - Introduction to Agile Live technology
Agile Live is a software-based platform and solution for live media production that enables remote and distributed workflows in software running on standard, off-the-shelf hardware. It can be deployed on-premises or in private or public cloud.
The system is built with an ‘API first’ philosophy throughout. It is built in several stacked conceptual layers, and a customer may choose how many layers to include in their solution, depending on which customisations they wish to do themselves. It is implemented in a distributed manner with a number of different software binaries the customer is responsible for running on their own hardware and operating system. If using all layers, the full solution provides media capture, synchronised contribution, video & audio mixing including Digital Video Effects and HTML-based graphics overlay, various production control panel integrations and a fully customisable multiview, primary distribution and a management & orchestration API layer with a configuration & monitoring GUI on top of it.
The layers of the platform are the “Base Layer”, the “Rendering Engine & Generic Control Panels” layer and the “GUIs & Orchestration, Specific Control Panels” layer.
The base layer
The base layer of the platform can, in a very simplified way, be described as being responsible for transporting media of diverse types from many different locations to a central location in perfect sync, presenting it there to higher-layer software that processes it in some way, and distributing the end result. That piece of processing software, which is not part of the base layer, is referred to as a “rendering engine”. The base layer also carries control commands from remote control panels into the rendering engine, without any knowledge of the actual control command syntax.
The following image shows a schematic of the roles played by the base layer. The parts in blue are higher layer functionality.
The base layer is separated into three components based on their different roles in a distributed computing system:
- the Ingest component handles acquisition of the media sources and timestamping of media frames, followed by encoding, muxing and origination of reliable transport over unreliable networks using SRT or RIST
- the Production Pipeline component handles terminating all incoming media streams from ingests, demuxing and decoding, aligning all the sources according to timestamps and feeding them synchronously into the rendering engine. It also receives the resulting media frames from the rendering engine and encodes, muxes and streams out the media streams using MPEG-TS over UDP or SRT. It constructs a multiview from the aligned sources by fully customizable scaling and compositing. Finally, it receives control commands from the control panel component, aligns them in time with the media sources and feeds them into the rendering engine. NOTE that the rendering engine is NOT a part of the base layer, and must be supplied by a higher layer or by the implementer themselves.
- The control panel component is responsible for acquiring control command data (the internal syntax of which it has no knowledge), timestamping it, and sending it to the production pipeline component.
The base layer is delivered as libraries and header files, to be used to build binaries that make up the platform. The ingest component is for convenience also delivered as a precompiled binary named acl-ingest as there is very little customization to be done around it at the current moment.
The functionalities in the base layer are managed via a REST API. A separate software program called the System Controller acts as a REST service and communicates with the rest of the Base Layer software to control and monitor it. The REST API is Open API 2.0 compliant, and is documented here. Apart from the three components (ingest, production pipeline, controlpanel), the concepts of a Stream (transporting one source from one ingest to one production pipeline using one specific set of properties) and a controlconnection (transporting control commands from one control panel to one production pipeline or from one production pipeline to another) are central to the REST API.
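To make the Stream concept concrete, the sketch below shows how a client might compose a REST call that connects one source on an ingest to a production pipeline. The endpoint path and JSON field names here are purely illustrative assumptions, not the real schema; the actual resource layout is defined in the REST API reference:

```python
# Sketch of driving the System Controller REST API to set up a Stream
# (one source, one ingest, one production pipeline).
# ASSUMPTIONS: the URL path and JSON field names are hypothetical
# placeholders; consult the REST API v3 reference for the real schema.
import json

def build_create_stream_request(base_url, ingest_uuid, source_id, pipeline_uuid):
    """Compose the URL and JSON body for a hypothetical 'create stream' call."""
    url = f"{base_url}/api/v3/streams"
    body = {
        "ingest_uuid": ingest_uuid,      # which ingest hosts the source
        "source_id": source_id,          # which of its sources to transport
        "pipeline_uuid": pipeline_uuid,  # the receiving production pipeline
    }
    return url, json.dumps(body)

url, body = build_create_stream_request(
    "https://system-controller:8080", "ingest-1", 0, "pipeline-1")
# The request would then be POSTed to the System Controller, which relays
# the configuration to the ingest and production pipeline components.
```

A control connection would be created analogously, pairing a control panel (or pipeline) with a receiving production pipeline instead of pairing a source with a pipeline.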
The rendering engine and generic control panels layer
Though it is fully possible to use only the base layer and to build everything else yourself, a very capable version of a rendering engine is supplied. Our rendering engine implementation consists of a modular video mixer, an audio router and mixer, an interface towards 3rd party audio mixers, and an HTML renderer to overlay HTML-based graphics. It defines a control protocol for configuration and control which can be used by specific control panel implementations.
The rendering engine is delivered bundled with the production pipeline component as a binary named acl-renderingengine. This is because the production pipeline component from the base layer is only delivered as a library that the rendering engine implementation makes use of.
Communication with the rendering engine for configuration and control takes place over the control connections from control panels, as provided by the base layer. The control panels are constructed using the control panel library, but to assist in getting up and running fast, a few generic control panels are provided that act as proxies for control commands. There is one that sets up a TCP server and accepts control commands over connections to it, and another that acts as a websocket server for the same purpose. Finally, there is one that takes commands over stdin from a terminal session and forwards them as control commands.
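From a client's point of view, the TCP proxy panel is just a socket that accepts line-based commands. The self-contained sketch below stands up a local stand-in server so the round trip is runnable anywhere; the host, port, and command string are placeholders, and the real command syntax is defined by the Rendering Engine control protocol:

```python
# Sketch: pushing a text command to the generic TCP control panel proxy.
# ASSUMPTIONS: the command string "video_mixer cut" and the local stand-in
# server are illustrative only; the real proxy is the generic TCP control
# panel binary, and its command syntax comes from the Rendering Engine
# control documentation.
import socket
import threading

def serve_once(server_sock, received):
    """Accept one connection and record whatever the client sends."""
    conn, _ = server_sock.accept()
    received.append(conn.recv(1024))
    conn.close()

# Local stand-in for the TCP control panel listening on an ephemeral port.
server = socket.create_server(("127.0.0.1", 0))
received = []
worker = threading.Thread(target=serve_once, args=(server, received))
worker.start()

# What a control GUI (e.g. Bitfocus Companion) would do: open a TCP
# connection and write one command per line.
with socket.create_connection(server.getsockname()) as client:
    client.sendall(b"video_mixer cut\n")

worker.join()
server.close()
# The proxy would forward the received command into the control
# connection toward the production pipeline, timestamped on arrival.
```

The websocket and stdin variants behave the same way conceptually; only the transport used to hand the command to the proxy differs.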
The GUI and orchestration layer
On top of the REST API it is possible to build GUIs for configuring and monitoring the system. One default GUI is provided.
Prometheus-exporters can be configured to report on the details of the machines in the system, and there is also an exporter for the REST API statistics. The data in Prometheus can then be presented in Grafana.
Using the control command protocol defined by our rendering engine and either the generic control panels or custom control panel integrations, GUIs can be constructed to configure and control the components inside the rendering engine. For example, Bitfocus Companion can be configured to send control commands to the tcpcontrolpanel which will send them further into the system. The custom integration audio_control_gui can be used to interface with audio panels that support the Mackie Control protocol and to send control commands to the tcpcontrolpanel.
Direct editing mode vs proxy editing mode
The components of the Agile Live platform can be combined in many different ways. The two most common are the Direct Editing Mode and the Proxy Editing Mode. Direct editing mode is what has commonly been used for remote productions: the mixer operator edits directly in the flow that goes out to the viewer.
In the proxy editing mode, the operator works with a low delay but lower quality flow, makes mixing decisions, and then those decisions are replicated in a frame accurate manner on a high quality flow that is delayed in time.
There are good reasons for using the proxy editing mode. The mixer operator may be communicating via intercom with personnel at the venue, such as camera operators, or PTZ cameras may be operated remotely. In these cases it is important that the delay from the cameras, through the production pipeline, and out to the mixer operator is as short as possible. But low latency for this flow is directly at odds with high-quality video compression and very reliable transport. Therefore, the proxy editing mode uses both a low-delay workflow and a high-quality but slower workflow, so that both requirements can be satisfied. This technique is made possible by the synchronization functionality of the base layer platform.
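The frame-accurate replication described above follows directly from the shared timeline: a decision made on the low-delay flow carries a timestamp, and the delayed high-quality flow applies it to the frame bearing that same timestamp. A minimal sketch of that mapping, with all names and values illustrative:

```python
# Sketch: frame-accurate replication of a proxy-mode mixing decision.
# A command timestamped on the synchronized timeline maps to exactly one
# frame index, regardless of how far behind real time the high-quality
# flow runs. All names and values here are illustrative.

FRAME_DURATION_US = 20_000  # one frame at 50 fps, in microseconds

def frame_for_timestamp(command_ts_us, first_frame_ts_us):
    """Index of the frame that a timestamped command applies to."""
    return (command_ts_us - first_frame_ts_us) // FRAME_DURATION_US

# A cut decided on the low-delay flow at t = 1 s after the first frame
# lands on the same source frame in the delayed high-quality flow.
idx = frame_for_timestamp(1_000_000, 0)
```

Because both flows index frames against the same timestamps, the delay of the high-quality flow does not enter the calculation at all, which is what makes the replication frame accurate.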
3 - Installation and configuration
A working knowledge of the technical basis of the Agile Live solution is beneficial before starting with this installation guide.
This guide will often point to subpages for the detailed description of steps. The major steps of this guide are:
- Hardware and operating system requirements
- Install, configure and start the System Controller
- The base platform components
- Install prerequisites
- Preparing the base system for installation
- Install Agile Live platform
- Configure and start the Agile Live components
- Install the configuration-and-monitoring GUI (optional)
- Install Prometheus and Grafana, and configure them (optional)
- Install and start control panel GUI examples
Warning
Some installation scripts described in this guide use sudo to install software. As always when using sudo, make sure you understand what the scripts do before running them!
1. Hardware and operating system requirements
The bulk of the platform runs on computer systems based on x86 CPUs and running the Ubuntu 22.04 operating system. This includes the following software components: the ingest, the production pipeline, the control panel integrations, and the system controller. The remaining components, i.e. the configuration and monitoring GUI, the control panel GUI examples, and the Prometheus database and Grafana frontend, can be run on the platform of choice, as they are cross-platform.
The platform does not have explicit hardware performance requirements, as it will utilize the performance of the system it is given, but suggestions for hardware specifications can be found here.
Agile Live is supported and tested on Ubuntu 22.04 LTS. Other versions may work but are not officially supported.
2. Install, configure and start the System Controller
The System Controller is the central backend that ties together and controls the different components of the base platform, and that provides the REST API that is used by the user to configure and monitor the system. When each Agile Live component, such as an Ingest or a Production Pipeline, starts up it connects to and registers with the System Controller, so it is a good idea to install and configure the System Controller first.
Instructions on how to install and configure it can be found here.
3. The base platform components
The base platform consists of the software components Ingests, Production Pipelines, and Control Panel interfaces. Each machine that is intended to act as an acquisition device for media shall run exactly one instance of the Ingest application, as it is responsible for interfacing with third-party software such as SDKs for SDI cards and NDI.
The production pipeline application instance (interchangeably called a rendering engine application instance in parts of this documentation) uses the resources of its host machine in a non-exclusive way (including the GPU), so more than one instance may reside on a single host machine.
The control panel integrations preferably reside on a machine close to the actual control panel.
However, in a very minimal system not all of these need to be separate machines; in fact, all of the above could run on a single computer if desired.
A guide to installing and configuring the Agile Live base platform can be found here.
4. Install the configuration-and-monitoring GUI (optional)
There is a basic configuration-and-monitoring GUI that is intended to be used as an easy way to get going with using the system. It uses the REST API to configure and monitor the system.
Instructions for how to install it can be found here, and a tutorial for how to use it can be found here.
5. Install Prometheus and Grafana, and configure them (optional)
To collect useful system metrics for stability and performance monitoring, we advise using Prometheus. For visualizing the metrics collected by Prometheus, Grafana can be used.
The System Controller receives a lot of internal metrics from the Agile Live components. These can be pulled by a Prometheus instance from the endpoint https://system-controller-host:port/metrics.
It is also possible to install external exporters for various hardware and OS metrics.
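The metrics endpoint above can be scraped with a standard Prometheus job. A minimal sketch of the scrape configuration follows; the job name, hostname, and port are site-specific placeholders:

```yaml
# Example Prometheus scrape job for the System Controller metrics endpoint.
# ASSUMPTION: hostname, port, and TLS details are placeholders to be
# replaced with your deployment's values.
scrape_configs:
  - job_name: "agile-live-system-controller"
    scheme: https
    metrics_path: /metrics
    static_configs:
      - targets: ["system-controller-host:8080"]
```

External exporters (e.g. for host CPU, GPU, and network metrics) would be added as further jobs in the same `scrape_configs` list.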
To install and run Prometheus exporters on the servers that run the ingest and production pipeline software, and to install, configure, and run the Prometheus database and the associated Grafana instance on the server dedicated to management and orchestration, follow this guide.
6. Install and start control panel GUI examples
To install the example control panel GUIs that are delivered as source code, follow this guide.
3.1 - Hardware examples
The following hardware specifications serve as examples of what can be expected from certain hardware setups. The maximum number of inputs and outputs depends heavily on the quality and resolution settings chosen when setting up a production, so these numbers should be seen as approximations of what a given setup can deliver under normal use.
Hardware example #1
| System Specifications | |
|---|---|
| CPU | Intel Core i7 12700K |
| RAM | Kingston FURY Beast 32GB (2x16GB) 3200MHz CL16 DDR4 |
| Graphics Card | NVIDIA RTX A2000 12GB |
| PSU | 750W PSU |
| Motherboard | ASUS Prime H670-Plus D4 DDR4 S-1700 ATX |
| SSD | Samsung 980 NVMe M.2 1TB SSD |
| Capture Card (Ingest only) | Blackmagic Design Decklink Duo 2 |
One such server is capable of fulfilling either of the two use-cases below:
- Ingest server
- With Proxy editing: ingesting 4 sources (encoded as 4 Low Delay streams, AVC 720p50 1 Mbps and 4 High Quality streams, HEVC 1080p50 25 Mbps).
- With Direct editing: ingesting 6 sources, (HEVC 1080p50 25 Mbps).
- Production server, running both a High Quality and a Low Delay pipeline with at least 10 connected sources (i.e. at least 20 incoming streams), with at least one 1080p50 multi-view output, one 1080p High Quality program output and one 720p Low Delay output.
Hardware example #2
| System Specifications | |
| --- | --- |
| CPU | Intel Core i7 12700K |
| RAM | Kingston FURY Beast 32GB (2x16GB) 6000MHz CL36 DDR5 |
| Graphics Card | NVIDIA RTX 4000 SFF Ada Generation 20GB |
| PSU | 750W PSU |
| Motherboard | ASUS ROG Maximus Z690 Hero DDR5 ATX |
| SSD | WD Blue SN570 M.2 1TB SSD |
| Capture Card (Ingest only) | Blackmagic Design DeckLink Quad 2 |
One such server is capable of fulfilling either of the two use-cases below:
- Ingest server
- With Proxy editing: ingesting 6-8 sources (encoded as 6-8 Low Delay streams, AVC 720p50 1 Mbps and 6-8 High Quality streams, HEVC 1080p50 25 Mbps).
- With Direct editing: ingesting 8-10 sources (HEVC 1080p50 25 Mbps).
- Production server, running both the High Quality and the Low Delay pipeline with at least 20 connected sources (i.e. at least 40 incoming streams), with at least three 4K multi-view outputs, four 1080p High Quality outputs and four 720p Low Delay outputs.
3.2 - The System Controller
The System Controller is the central backend that ties together and controls the different components of the base platform, and that provides the REST API that is used by the user to configure and monitor the system.
When each Agile Live component (such as an Ingest or a Production Pipeline) starts up, it connects to and registers with the System Controller. This means the System Controller must run in a network location that is reachable from all other components.
From release 6.0.0 and on, the System Controller is distributed as a debian package, acl-system-controller.
To install the System Controller application, do the following:
sudo apt -y install ./acl-system-controller_<version>_amd64.deb
The System Controller binary will be installed in the directory /opt/agile_live/bin/ and its configuration file will be placed in /etc/opt/agile_live/.
Note
In releases 5.0.0 and earlier, the System Controller application was delivered in the release artifact agile-live-system-controller-<x.y.z>-<checksum>.zip. Unpack the file and place the contents in a desired location.
An example configuration file is shipped with this release, postfixed with .example. Remove this postfix to use the file.
The System Controller is configured using a settings file, environment variables, or a combination thereof. The settings file, named acl_sc_settings.json, is searched for in /etc/opt/agile_live and the current directory, or the search path can be given with a command-line flag when starting the application:
./acl-system-controller
for default search paths or ./acl-system-controller -settings-path /path/to/directory/where/settings_file/exists
To override settings from the configuration file with environment variables, construct each variable name by prepending ACL_SC and flattening the json path with _ as the delimiter, e.g.:
export ACL_SC_LOGGER_LEVEL=debug
export ACL_SC_PROTO=https
Starting with the basic configuration file in the release, edit a few settings inside it (see this guide for help with the security related parameters):
- change the psk parameter to something random of your own
- change the salt to some long string of your own
- set the username and password to protect the REST API
- specify TLS certificate and key files
- set the host and port parameters to where the System Controller shall be reached
Also consider setting the file_name_prefix to set the log file location, otherwise it defaults to /tmp/.
Warning
Since passwords will be inserted in this configuration file, it is recommended to set the file rights to only allow reading by the user that executes the System Controller.
From the installation directory, the System Controller is started with:
./acl-system-controller
If started successfully, it will display a log message showing the server endpoint.
The System Controller comes with built-in Swagger documentation that makes it easy to explore the API in a regular web browser. To access the Swagger REST API documentation hosted by the System Controller, open https://<IP>:<port>/swagger/
in your browser. The Swagger UI can be used to poll or edit the state of the actual system by using the “Try it out” buttons.
Running the System Controller as a service
It may be useful to run the System Controller as a service in Ubuntu, so that it starts automatically on boot and restarts if it terminates abnormally. In Ubuntu 22.04, systemd manages the services. Here follows an example of how a service can be configured, started and monitored.
First create a “service unit file”, named for example /etc/systemd/system/acl-system-controller.service, and put the following in it:
[Unit]
Description=Agile Live System Controller service
[Service]
User=<YOUR_USER>
Environment=<ANY_ENV_VARS_HERE>
ExecStart=/opt/agile_live/bin/acl-system-controller
Restart=always
[Install]
WantedBy=multi-user.target
Replace <YOUR_USER> with the user that will be executing the System Controller binary on the host machine. The Environment attribute is used if you want to set any parameters as environment variables.
After saving the service unit file, run:
sudo systemctl daemon-reload
sudo systemctl enable acl-system-controller.service
sudo systemctl start acl-system-controller
The service should now be started. The status of the service can be queried using:
sudo systemctl status acl-system-controller.service
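When running as a service, the log output also goes to the systemd journal, which can be inspected with the standard journalctl tool:

```shell
# Follow the service log live; drop -f to just print the stored entries.
sudo journalctl -u acl-system-controller.service -f
```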
3.3 - The base platform
This page describes the steps needed to install and configure the base platform.
Install Third-party packages
Agile Live has a few dependencies on third-party software that have to be installed separately first. Which ones are needed depends on which Agile Live component you will be running on the specific system.
A machine running the Ingest needs the following:
- CUDA Toolkit
- oneVPL
- Blackmagic Desktop Video (Optional, is needed when using Blackmagic DeckLink SDI cards)
- NDI SDK for Linux
- Base packages via ‘apt’
A machine running the Rendering Engine only needs a subset:
- CUDA Toolkit
- Base packages via ‘apt’
A machine running control panels needs yet fewer:
- Base packages via ‘apt’
Hint
For ease of use during the installation instructions below, it is suggested to copy the contents of each commands section, place them in a shell script, and run that shell script.
CUDA Toolkit
For video encoding and decoding on NVIDIA GPUs you need the NVIDIA CUDA Toolkit installed. It must be installed on Ingest machines even if they don't have NVIDIA cards.
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt -y install cuda
You need to reboot the system after installation. Then check the installation by running nvidia-smi. If installing more dependencies, it is ok to proceed with that and reboot once after installing all parts.
oneVPL
All machines running the Ingest application need the Intel oneAPI Video Processing Library (oneVPL) installed. It is used for encoding and thumbnail processing on Intel GPUs and is required even for Ingests which will not use the Intel GPU for encoding. Install oneVPL by running:
sudo apt install -y gpg-agent wget
# Download and run oneVPL offline installer:
# https://www.intel.com/content/www/us/en/developer/articles/tool/oneapi-standalone-components.html#onevpl
VPL_INSTALLER=l_oneVPL_p_2023.1.0.43488_offline.sh
wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/e2f5aca0-c787-4e1d-a233-92a6b3c0c3f2/$VPL_INSTALLER
chmod +x $VPL_INSTALLER
sudo ./$VPL_INSTALLER -a -s --eula accept
# Install the latest general purpose GPU (GPGPU) run-time and development software packages
# https://dgpu-docs.intel.com/driver/client/overview.html
wget -qO - https://repositories.intel.com/graphics/intel-graphics.key | \
sudo gpg --dearmor --output /usr/share/keyrings/intel-graphics.gpg
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/graphics/ubuntu jammy arc' | \
sudo tee /etc/apt/sources.list.d/intel.gpu.jammy.list
sudo apt update
sudo apt install -y ffmpeg intel-media-va-driver-non-free libdrm2 libmfx-gen1.2 libva2
# Add VA environment variables
if [[ -z "${LIBVA_DRIVERS_PATH}" ]]; then
sudo sh -c "echo 'LIBVA_DRIVERS_PATH=/usr/lib/x86_64-linux-gnu/dri' >> /etc/environment"
fi
if [[ -z "${LIBVA_DRIVER_NAME}" ]]; then
sudo sh -c "echo 'LIBVA_DRIVER_NAME=iHD' >> /etc/environment"
fi
# Give users access to the GPU device for compute and render node workloads
# https://www.intel.com/content/www/us/en/develop/documentation/installation-guide-for-intel-oneapi-toolkits-linux/top/prerequisites/install-intel-gpu-drivers.html
sudo usermod -aG video,render ${USER}
# Set environment variables for development
echo "# oneAPI environment variables for development" >> ~/.bashrc
echo "source /opt/intel/oneapi/setvars.sh > /dev/null" >> ~/.bashrc
You need to reboot the system after installation. If installing more dependencies, it is ok to proceed with that and reboot once after installing all parts.
Support for Blackmagic Design SDI grabber cards
If the Ingest shall be able to use Blackmagic DeckLink SDI cards, the drivers need to be installed.
Download https://www.blackmagicdesign.com/se/support/family/capture-and-playback (Desktop Video).
Unpack and install (replace the version in the commands below with the version you downloaded):
sudo apt install dctrl-tools dkms
tar -xvf Blackmagic_Desktop_Video_Linux_12.7.tar.gz
cd Blackmagic_Desktop_Video_Linux_12.7/deb/x86_64
sudo dpkg -i desktopvideo_12.7a4_amd64.deb
Support for NDI sources
For machines running Ingests, you need to install the NDI library and additional dependencies:
## Install library
sudo apt -y install avahi-daemon avahi-discover avahi-utils libnss-mdns mdns-scan
wget https://downloads.ndi.tv/SDK/NDI_SDK_Linux/Install_NDI_SDK_v5_Linux.tar.gz
tar -xvzf Install_NDI_SDK_v5_Linux.tar.gz
PAGER=skip-pager ./Install_NDI_SDK_v5_Linux.sh <<< y
cd NDI\ SDK\ for\ Linux/lib/x86_64-linux-gnu/
NDI_LIBRARY_FILE=$(ls libndi.so.5.*.*)
cd ../../..
sudo cp NDI\ SDK\ for\ Linux/lib/x86_64-linux-gnu/${NDI_LIBRARY_FILE} /usr/lib/x86_64-linux-gnu
sudo ln -fs /usr/lib/x86_64-linux-gnu/${NDI_LIBRARY_FILE} /usr/lib/x86_64-linux-gnu/libndi.so
sudo ln -fs /usr/lib/x86_64-linux-gnu/${NDI_LIBRARY_FILE} /usr/lib/x86_64-linux-gnu/libndi.so.5
sudo ldconfig
sudo cp NDI\ SDK\ for\ Linux/include/* /usr/include/.
Base packages installed via ‘apt’
When installing version 5.0.0 or earlier, please refer to this guide for this step.
All systems that run an Ingest, Rendering Engine or Control Panel need chrony as an NTP client. The systemd-coredump package is also really handy for debugging core dumps later on, but is technically optional.
These packages can be installed with the apt package manager:
sudo apt -y update && sudo apt -y upgrade && sudo apt -y install chrony systemd-coredump
Preparing the base system for installation
To prepare the system for Agile Live, the NTP client must be configured correctly by following this guide, and some general Linux settings shall be configured by following this guide. Also, CUDA should be prevented from automatically installing upgrades by following this guide.
Installing the Agile Live components
When installing version 5.0.0 or earlier, please refer to this guide for this step.
The following Agile Live .deb packages are distributed:
- acl-system-controller - Contains the System Controller application (explained in separate section)
- acl-libingest - Contains the library used by the Ingest application
- acl-ingest - Contains the Ingest application
- acl-libproductionpipeline - Contains the libraries used by the Rendering Engine application
- acl-renderingengine - Contains Agile Live’s default Rendering Engine application
- acl-libcontroldatasender - Contains the libraries used by Control Panel applications
- acl-controlpanels - Contains the generic Control Panel applications
- acl-dev - Contains the header files required for development of custom Control Panels and Rendering Engines
All binaries will be installed in the directory /opt/agile_live/bin
. Libraries are installed in /opt/agile_live/lib
. Header files are installed in /opt/agile_live/include
.
Ingest
To install the Ingest application, install the following .deb-packages:
sudo apt -y install ./acl-ingest_<version>_amd64.deb ./acl-libingest_<version>_amd64.deb
Rendering engine
To install the Rendering Engine application, install the following .deb-packages:
sudo apt -y install ./acl-renderingengine_<version>_amd64.deb ./acl-libproductionpipeline_<version>_amd64.deb
Control panels
To install the generic Control Panel applications acl-manualcontrolpanel, acl-tcpcontrolpanel and acl-websocketcontrolpanel, install the following .deb-packages:
sudo apt -y install ./acl-controlpanels_<version>_amd64.deb ./acl-libcontroldatasender_<version>_amd64.deb
Dev
To install the acl-dev package containing the header files needed for building your own applications, install the following .deb-package:
sudo apt -y install ./acl-dev_<version>_amd64.deb
Configuring and starting the Ingest(s)
The Ingest component is a binary which ingests raw video input sources such as SDI or NDI, then timestamps, processes and encodes them, and originates transport to the rest of the platform.
Settings for the ingest binary are added as environment variables. There is one setting that must be set:
- ACL_SYSTEM_CONTROLLER_PSK shall be set to the same value as in the settings file of the System Controller
Also, there are some parameters that have default settings that you will most probably want to override:
- ACL_SYSTEM_CONTROLLER_IP and ACL_SYSTEM_CONTROLLER_PORT shall be set to the location where the System Controller is hosted
- ACL_INGEST_NAME is optional but useful as an aid to identify the Ingest in the REST interface if multiple instances are connected to the same System Controller
You may also want to set the log location by setting ACL_LOG_PATH. The log location defaults to /tmp/.
Hint
In case you forget the names of these environment variables or want to see their default values, run the component with the --help flag, e.g. ./acl-ingest --help.
As an example, from the installation directory, the Ingest is started with:
ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_INGEST_NAME="My Ingest" ./acl-ingest
If started successfully, it will display a log message showing that the Ingest is up and running.
Running the Ingest SW as a service
It may be useful to run the Ingest SW as a service in Ubuntu, so that it starts automatically on boot and restarts if it terminates abnormally. In Ubuntu 22.04, systemd manages the services. Here follows an example of how a service can be configured, started and monitored.
First create a “service unit file”, named for example /etc/systemd/system/acl-ingest.service, and put the following in it:
[Unit]
Description=Agile Live Ingest service
[Service]
User=<YOUR_USER>
Environment=ACL_SYSTEM_CONTROLLER_PSK=<YOUR_PSK>
ExecStart=/bin/bash -c '\
source /opt/intel/oneapi/setvars.sh; \
exec /opt/agile_live/bin/acl-ingest'
Restart=always
[Install]
WantedBy=multi-user.target
Replace <YOUR_USER> with the user that will be executing the Ingest binary on the host machine. The Environment attribute should be set in the same way as described above for manually starting the binary.
After saving the service unit file, run:
sudo systemctl daemon-reload
sudo systemctl enable acl-ingest.service
sudo systemctl start acl-ingest
The service should now be started. The status of the service can be queried using:
sudo systemctl status acl-ingest.service
Configuring and starting the Production pipeline(s) including rendering engine
The production pipeline component and the rendering engine component are combined into one binary named acl-renderingengine, which terminates transport of media streams from ingests, demuxes, decodes, aligns according to timestamps and clock sync, separately mixes audio and video (including HTML-based graphics insertion), and finally encodes, muxes and originates transport of the finished mixed outputs as well as multiviews.
It is possible to run just one production pipeline when doing direct editing (the “traditional way”, in the stream), or to use two pipelines to set up a proxy editing workflow. In the latter case, one pipeline is designated for the Low Delay flow, which provides quick feedback on the editing commands, and one is designated for the High Quality flow, which produces the final quality stream to broadcast to the viewers. The Production Pipeline is simply called a “pipeline” in the REST API. The rendering engine is not managed through the REST API but rather through the control command API from the control panels.
Settings for the production pipeline binary are added as environment variables. There is one setting that must be set:
- ACL_SYSTEM_CONTROLLER_PSK shall be set to the same value as in the settings file of the System Controller
Also, there are some parameters that have default settings that you will most probably want to override:
- ACL_SYSTEM_CONTROLLER_IP and ACL_SYSTEM_CONTROLLER_PORT shall be set to the location where the System Controller is hosted
- ACL_RENDERING_ENGINE_NAME is needed to identify the different production pipelines in the REST API and shall thus be set uniquely and descriptively
- ACL_MY_PUBLIC_IP shall be set to the IP address or hostname where the ingest shall find the production pipeline. If the production pipeline is behind a NAT or needs DNS resolving, setting this variable is necessary
- DISPLAY is needed but may already be set by the operating system. If not (because the server is headless) it needs to be set.
Hint
If running a headless server and setting the DISPLAY environment variable, run an instance of Xvfb to provide the virtual “display” to point towards. E.g. start Xvfb by running Xvfb -ac :99 -screen 0 1280x1024x16, and when starting the Rendering Engine set DISPLAY=:99. It is recommended to start Xvfb as a service on headless machines.
You may also want to set the log location by setting ACL_LOG_PATH. The log location defaults to /tmp/.
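Following the same service pattern as elsewhere on this page, Xvfb can be run as a systemd service on headless machines. This is a sketch: the display number and screen geometry are the example values from the hint above, and the /usr/bin/Xvfb path assumes Ubuntu's xvfb package:

```
[Unit]
Description=Xvfb virtual display for the Rendering Engine
[Service]
ExecStart=/usr/bin/Xvfb -ac :99 -screen 0 1280x1024x16
Restart=always
[Install]
WantedBy=multi-user.target
```

Save it as e.g. /etc/systemd/system/xvfb.service, then enable and start it with systemctl in the same way as the other services described on this page.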
Here is an example of how to start two instances (one for the Low Delay flow and one for the High Quality flow) of the acl-renderingengine application and name them accordingly, so that you and the GUI can tell them apart. First start the Low Delay instance by running:
ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_RENDERING_ENGINE_NAME="My LD Pipeline" ./acl-renderingengine
Then start the High Quality Rendering Engine in the same way:
ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_RENDERING_ENGINE_NAME="My HQ Pipeline" ./acl-renderingengine
Configuring and starting the generic Control Panels
TCP control panel
acl-tcpcontrolpanel is a control panel application that starts a TCP server on a configurable set of endpoints and relays control commands over a control connection if one has been set up. It is preferably run geographically very close to the control panel whose control commands it will relay, but can alternatively run close to the production pipeline that it is connected to. To start with, it is easiest to run it on the same machine as the associated production pipeline.
Settings for the tcp control panel binary are added as environment variables. There is one setting that must be set:
- ACL_SYSTEM_CONTROLLER_PSK shall be set to the same value as in the settings file of the System Controller
Also, there are some parameters that have default settings that you will most probably want to override:
- ACL_SYSTEM_CONTROLLER_IP and ACL_SYSTEM_CONTROLLER_PORT shall be set to the location where the System Controller is hosted
- ACL_CONTROL_PANEL_NAME can be set to make it easier to identify the control panel through the REST API
- ACL_TCP_ENDPOINT, from release 6.0.0 on, contains the IP address and port of the TCP endpoint to be opened, written as <ip>:<port>. Several clients can connect to the same listening port. In release 5.0.0 and earlier, instead use ACL_TCP_ENDPOINTS, a comma-separated list of endpoints to be opened, where each endpoint is written as <ip>:<port>
You may also want to set the log location by setting ACL_LOG_PATH. The log location defaults to /tmp/.
As an example, from the installation directory, the tcp control panel is started with:
ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_TCP_ENDPOINT=0.0.0.0:7000 ACL_CONTROL_PANEL_NAME="My TCP Control Panel" ./acl-tcpcontrolpanel
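Once the TCP control panel is running and a control connection has been set up, any TCP client can relay commands through it. A sketch using netcat is shown below; the command string is purely a placeholder, as the actual command syntax is defined by the Agile Live Rendering Engine command documentation:

```shell
# "help" is a placeholder command string -- consult the Rendering Engine
# command documentation for commands your production actually understands.
printf 'help\n' | nc -q 1 127.0.0.1 7000
```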
Websocket control panel
acl-websocketcontrolpanel is a control panel application that acts as a websocket server, and relays control commands that come in over websockets, typically opened by web-based GUIs.
The basic settings are the same as for the tcp control panel. But to specify where the websocket server shall open an endpoint, use this environment variable:
- ACL_WEBSOCKET_ENDPOINT is the endpoint to listen for incoming messages on, on the form <ip>:<port>
You may also want to set the log location by setting ACL_LOG_PATH. The log location defaults to /tmp/.
An example of how to start it is the following:
ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_WEBSOCKET_ENDPOINT=0.0.0.0:6999 ACL_CONTROL_PANEL_NAME="My Websocket Control Panel" ./acl-websocketcontrolpanel
Manual control panel
acl-manualcontrolpanel is a basic Control Panel application operated from the command line.
The basic settings are the same as for the other generic control panels, but it does not open any network endpoints for input; instead the control commands are written in the terminal. It can be started with:
ACL_SYSTEM_CONTROLLER_IP="10.10.10.10" ACL_SYSTEM_CONTROLLER_PORT=8080 ACL_SYSTEM_CONTROLLER_PSK="FBCZeWB3kMj5me58jmcpmEd3bcPZbp4q" ACL_CONTROL_PANEL_NAME="My Manual Control Panel" ./acl-manualcontrolpanel
3.3.1 - NTP
To ensure that all ingest servers and production pipelines are in sync, a clock synchronization protocol called NTP is required. Historically this has been handled by a systemd service called timesyncd.
Chrony
Starting from version 4.0.0 of Agile Live, the required NTP client has switched from systemd-timesyncd to chrony. Chrony is installed by default in many Linux distributions since it has a lot of advantages over systemd-timesyncd. Some of the advantages are listed below.
- Full NTP support (systemd-timesyncd only supports Simple NTP)
- Syncing against multiple servers
- More complex but more accurate
- Gradually adjusts the system time instead of hard steps
Installation
Install chrony with:
$ sudo apt install chrony
This will replace systemd-timesyncd
if installed.
NTP servers
The default configuration for chrony
uses a pool of 8 NTP servers to
synchronize against. It is recommended to use these default servers hosted by Ubuntu.
Verify that the configuration is valid by checking the configuration file after
installing.
$ cat /etc/chrony/chrony.conf
...
pool ntp.ubuntu.com iburst maxsources 4
pool 0.ubuntu.pool.ntp.org iburst maxsources 1
pool 1.ubuntu.pool.ntp.org iburst maxsources 1
pool 2.ubuntu.pool.ntp.org iburst maxsources 2
...
The file should also contain the line below for the correct handling of leap seconds. It is used to set the leap second timezone to get the correct TAI time.
leapsectz right/UTC
Useful Commands
To get detailed information on how the system time is running compared to NTP, use the chrony client chronyc. The subcommand tracking shows information about the current system time and the selected time server used.
$ chronyc tracking
Reference ID    : C23ACA14 (sth1.ntp.netnod.se)
Stratum         : 2
Ref time (UTC)  : Wed Mar 08 07:23:37 2023
System time     : 0.000662344 seconds slow of NTP time
Last offset     : -0.000669114 seconds
RMS offset      : 0.001053432 seconds
Frequency       : 10.476 ppm slow
Residual freq   : +0.001 ppm
Skew            : 0.295 ppm
Root delay      : 0.007340557 seconds
Root dispersion : 0.001045418 seconds
Update interval : 1028.7 seconds
Leap status     : Normal
Use the sources subcommand to get a list of the other time servers that chrony uses.
$ chronyc sources -v
.-- Source mode '^' = server, '=' = peer, '#' = local clock.
/ .- Source state '*' = current best, '+' = combined, '-' = not combined,
| / 'x' = may be in error, '~' = too variable, '?' = unusable.
|| .- xxxx [ yyyy ] +/- zzzz
|| Reachability register (octal) -. | xxxx = adjusted offset,
|| Log2(Polling interval) --. | | yyyy = measured offset,
|| \ | | zzzz = estimated error.
|| | | \
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^- prod-ntp-5.ntp4.ps5.cano> 2 6 17 8 +59us[ -11us] +/- 23ms
^- pugot.canonical.com 2 6 17 7 -1881us[-1951us] +/- 48ms
^+ prod-ntp-4.ntp1.ps5.cano> 2 6 17 7 -1872us[-1942us] +/- 24ms
^+ prod-ntp-3.ntp1.ps5.cano> 2 6 17 6 +878us[ +808us] +/- 23ms
^- ntp4.flashdance.cx 2 6 17 7 -107us[ -176us] +/- 34ms
^+ time.cloudflare.com 3 6 17 8 -379us[ -449us] +/- 5230us
^* time.cloudflare.com 3 6 17 8 +247us[ +177us] +/- 5470us
^- ntp5.flashdance.cx 2 6 17 7 -1334us[-1404us] +/- 34ms
Leap seconds
Due to small variations in the Earth’s rotation speed a one second time adjustment is occasionally introduced in UTC (Coordinated Universal Time). This is usually handled by stepping back the clock in the NTP server one second. These hard steps could cause problems in such a time critical product as video production. To avoid this the system uses a clock known as TAI (International Atomic Time), which does not introduce any leap seconds, and is therefore slightly offset from UTC (37 seconds in 2023).
Leap smearing
Another technique for handling the introduction of a new leap second is called leap smearing. Instead of stepping back the clock in one big step, the leap second is introduced in many minor microsecond intervals over a longer period, usually 12 to 24 hours. This technique is used by most of the larger cloud providers. Most NTP servers that use leap smearing don’t announce the current TAI offset and will thereby cause issues if used in combination with systems that use TAI time. It is therefore highly recommended to use Ubuntu’s default NTP servers as described above, as none of these use leap smearing. Mixing leap smearing and non-leap smearing time servers will result in components in the system having clocks that are off from each other by 37 seconds (as of 2023), as the ones using leap smearing time servers without TAI offset will set the TAI clock to UTC.
3.3.2 - Linux settings
UDP buffers
To improve performance while receiving and sending multiple streams, and to not overflow UDP buffers (leading to receive buffer errors, as identified using e.g. ‘netstat -suna’), it is recommended to adjust some UDP buffer settings on the host system.
The recommended buffer size is 16 MB, but different setups may need larger buffers, or may manage with smaller ones. To change the buffer values, run:
sudo sysctl -w net.core.rmem_default=16000000
sudo sysctl -w net.core.rmem_max=16000000
sudo sysctl -w net.core.wmem_default=16000000
sudo sysctl -w net.core.wmem_max=16000000
To make the changes persistent, edit/add the values in /etc/sysctl.conf:
net.core.rmem_default=16000000
net.core.rmem_max=16000000
net.core.wmem_default=16000000
net.core.wmem_max=16000000
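After applying the settings, you can verify the active values and check for receive buffer errors (netstat is provided by the net-tools package):

```shell
# Print the active buffer sizes -- each should report 16000000 after the change.
sysctl net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max
# Look for receive buffer errors, which indicate the buffers are still too small.
netstat -suna | grep -i 'buffer errors'
```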
Core dump
On the rare occasion that a segmentation fault or a similar error should occur, a core dump is generated by the operating system to record the state of the application to aid in troubleshooting. By default these core dumps are handled by the ‘apport’ service in Ubuntu and not written to file. To store them to file instead we recommend installing the systemd core dump service.
sudo apt install systemd-coredump
Core dumps will now be compressed and stored in /var/lib/systemd/coredump.
The systemd core dump package also includes a helpful CLI tool for accessing core dumps.
List all core dumps:
$ coredumpctl list
TIME PID UID GID SIG COREFILE EXE
Thu 2022-10-20 17:24:16 CEST 985485 1007 1008 11 present /usr/bin/example
You can print some basic information from the core dump to send to the Agile Live developers, to aid in debugging the crash:
$ coredumpctl info 985485
In some cases, the developers might want to see the entire core dump to understand the crash; then save the core dump using:
$ coredumpctl dump 985485 --output=myfilename.coredump
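coredumpctl can also launch gdb directly on a stored dump, which is often the quickest way to get a backtrace locally (this assumes the gdb package is installed):

```shell
# Install gdb, then open the core dump in it; type "bt" at the gdb
# prompt to print a backtrace of the crashed thread.
sudo apt -y install gdb
coredumpctl debug 985485
```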
Core dumps will not persist between reboots.
3.3.3 - Disabling unattended upgrades of CUDA toolkit
CUDA Driver/library mismatch
By default, Ubuntu runs automatic upgrades on APT-installed packages every night, a process called unattended upgrades. This will sometimes cause issues when the unattended upgrade process tries to update the CUDA toolkit: it is able to update the libraries, but cannot update the driver, as it is in use by some application. The result is that the CUDA driver cannot be used because of the driver-library mismatch, and the Agile Live applications will fail to start.
In case an Agile Live application fails to start with some CUDA error, the easiest way to see if this is because of a driver-library mismatch is to run nvidia-smi in a terminal and check the output:
$ nvidia-smi
Failed to initialize NVML: Driver/library version mismatch
In case this happens, just reboot the machine and the new CUDA driver will be loaded instead of the old one at startup. (It is sometimes possible to manually reload the CUDA driver without rebooting, but this can be troublesome as you have to find all applications using the driver.)
How to disable unattended upgrades of CUDA toolkit
One way to get rid of these unpleasant surprises, where the applications suddenly can no longer start, without disabling unattended upgrades for other packages, is to blacklist the CUDA toolkit from the unattended upgrades process. Ubuntu will then still update other packages, but the CUDA toolkit will require a manual upgrade, which can be done at a suitable point in time.
To blacklist the CUDA driver, open the file /etc/apt/apt.conf.d/50unattended-upgrades in a text editor and look for the Unattended-Upgrade::Package-Blacklist block. Add "cuda" and "(lib)?nvidia" to the list, like so:
// Python regular expressions, matching packages to exclude from upgrading
Unattended-Upgrade::Package-Blacklist {
"cuda";
"(lib)?nvidia";
};
This will disable unattended upgrades for packages with names starting with cuda, libnvidia or just nvidia.
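Since the blacklist entries are regular expressions matched against package names, you can preview which installed packages they cover. This sketch uses dpkg-query and the equivalent grep patterns:

```shell
# List installed packages whose names match the blacklist patterns above.
dpkg-query -W -f '${Package}\n' | grep -E '^(cuda|(lib)?nvidia)' \
  || echo "no matching packages installed"
```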
How to manually update CUDA toolkit
Once you want to manually update the CUDA toolkit and the driver, run the following commands:
$ sudo apt update
$ sudo apt upgrade
$ sudo apt install cuda
This will likely trigger the driver-library mismatch in case you run nvidia-smi. Then reboot the machine to reload the new driver.
Sometimes a reboot is required before the cuda package can be installed, so in case the upgrade did not work the first time, try a reboot, then apt install cuda, and then reboot again.
3.3.4 - Installing older releases
This page describes a couple of steps in installing the base platform that are specific for releases 5.0.0 and older, and replaces these sections in the main documentation.
Base packages installed via ‘apt’
To make it possible to run the Ingest and Rendering Engine, some packages must be installed. The following base packages can be installed with the apt package manager:
sudo apt -y update && sudo apt -y upgrade && sudo apt -y install libssl3 libfdk-aac2 libopus0 libcpprest2.10 libspdlog1 libfmt8 libsystemd0 chrony libcairo2 systemd-coredump
The systemd-coredump package in the list above is handy to have installed, to be able to debug core dumps later on.
Installing the Agile Live components
Extract the following release artifacts from the release files listed under Releases:
- Ingest application, consisting of the binary acl-ingest and the shared library libacl-ingest.so
- Production pipeline library libacl-productionpipeline.so and associated header files
- Rendering engine application acl-renderingengine
- Production control panel interface library libacl-controldatasender.so and associated header files
- Generic control panel applications acl-manualcontrolpanel, acl-tcpcontrolpanel and acl-websocketcontrolpanel
3.4 - Configuration and monitoring GUI
The following guide shows how to run the configuration and monitoring GUI using Docker Compose. The OS-specific parts of the guide, such as how to install Docker and the AWS CLI, are written for Ubuntu 22.04.
Install Docker Engine
First install Docker on the machine where you want to run the GUI and the GUI database:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Login to Docker container registry
To access the Agile Docker container registry to pull the GUI image, you need to install and log in to the AWS CLI.
Install the AWS CLI:
sudo apt-get install unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
Configure AWS access using the access key ID and secret access key that you should have received from Agile Content. Set the region to eu-north-1 and leave the output format as None (just hit enter):
aws configure --profile agile-live-gui
AWS Access Key ID [None]: AKIA****************
AWS Secret Access Key [None]: ****************************************
Default region name [None]: eu-north-1
Default output format [None]:
Log in to Agile's AWS Elastic Container Registry to be able to pull the container image 263637530387.dkr.ecr.eu-north-1.amazonaws.com/agile-live-gui. If your user is in the docker group, the sudo part can be skipped.
aws --profile agile-live-gui ecr get-login-password | sudo docker login --username AWS --password-stdin 263637530387.dkr.ecr.eu-north-1.amazonaws.com
MongoDB configuration
The GUI uses MongoDB to store information. When starting up, the MongoDB Docker container will create a database user that the GUI backend uses to communicate with the Mongo database.
Create the file mongo-init.js for the API user (replace <API_USER> and <API_PASSWORD> with a username and password of your choice):
productionsDb = db.getSiblingDB('agile-live-gui');
productionsDb.createUser({
user: '<API_USER>',
pwd: '<API_PASSWORD>',
roles: [{ role: 'readWrite', db: 'agile-live-gui' }]
});
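To avoid hand-editing the placeholders, the file can also be generated with a random password, for example like this (a sketch; the username api and the use of openssl for password generation are just example choices):

```shell
# Generate a random API password and write mongo-init.js from a template
API_USER="api"
API_PASSWORD="$(openssl rand -hex 16)"

cat > mongo-init.js <<EOF
productionsDb = db.getSiblingDB('agile-live-gui');
productionsDb.createUser({
  user: '${API_USER}',
  pwd: '${API_PASSWORD}',
  roles: [{ role: 'readWrite', db: 'agile-live-gui' }]
});
EOF

# Keep the generated credentials - they are also needed in MONGODB_URI
echo "API user: ${API_USER}, password: ${API_PASSWORD}"
```

The same username and password must then be used in the MONGODB_URI variable in docker-compose.yml.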
Create a docker-compose.yml file
Create a file docker-compose.yml
to start the GUI backend and the MongoDB instance. Give the file the following content:
version: '3.7'
services:
agileui:
image: 263637530387.dkr.ecr.eu-north-1.amazonaws.com/agile-live-gui:latest
container_name: agileui
environment:
MONGODB_URI: mongodb://<API_USER>:<API_PASSWORD>@mongodb:27017/agile-live-gui
AGILE_URL: https://<SYSTEM_CONTROLLER_IP>:<SYSTEM_CONTROLLER_PORT>
AGILE_CREDENTIALS: <SYSTEM_CONTROLLER_USER>:<SYSTEM_CONTROLLER_PASSWORD>
NODE_TLS_REJECT_UNAUTHORIZED: 1
NEXTAUTH_SECRET: <INSERT_GENERATED_SECRET>
NEXTAUTH_URL: http://<SERVER_IP>:<SERVER_PORT>
BCRYPT_SALT_ROUNDS: 10
UI_LANG: en
ports:
- <HOST_PORT>:3000
mongodb:
image: mongo:latest
container_name: mongodb
environment:
MONGO_INITDB_ROOT_USERNAME: <MONGODB_USERNAME>
MONGO_INITDB_ROOT_PASSWORD: <MONGODB_PASSWORD>
ports:
- 27017:27017
volumes:
- ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
- ./mongodb:/data/db
Details about some of the parameters:
- image - Use the latest tag to always start the latest version, or set it to :X.Y.Z to lock on a specific version.
- ports - Internally port 3000 is used; open a mapping from your HOST_PORT of choice to access the GUI at.
- extra_hosts - Adding "host.docker.internal:host-gateway" makes it possible for the Docker container to access the host machine via host.docker.internal
- MONGODB_URI - The MongoDB connection string including credentials; API_USER and API_PASSWORD should match what you wrote in mongo-init.js
- AGILE_URL - The URL of the Agile-live System Controller REST API
- AGILE_CREDENTIALS - Credentials for the Agile-live System Controller REST API, used by the backend to talk to the Agile-live REST API
- NODE_TLS_REJECT_UNAUTHORIZED - Set to 0 to disable SSL verification. This is useful when testing with self-signed certificates
- NEXTAUTH_SECRET - The secret used to encrypt the JWT token; can be generated with openssl rand -base64 32
- NEXTAUTH_URL - The base URL for the service, used for redirecting after login, e.g. http://my-host-machine-ip:3000
- BCRYPT_SALT_ROUNDS - The number of salt rounds the bcrypt hashing function will perform; you probably don't want this higher than 13 (13 = ~1 hash/second)
- UI_LANG - Set the language for the user interface (en|sv). Default is en
- MONGODB_USERNAME - Select a username for a root user in MongoDB
- MONGODB_PASSWORD - Select a password for your root user in MongoDB
HTTPS support
The Agile-live GUI doesn't have native support for HTTPS. To secure the communication to it, you need to add something that can handle HTTPS termination in front of it, for example a load balancer on AWS or a container running nginx.
nginx example
This is an example nginx config and docker compose service definition that solves the task of redirecting from HTTP to HTTPS and also terminates the HTTPS connection and forwards the request to the Agile-live GUI. For more examples and documentation please visit the official nginx site.
Create an nginx server config in a directory called nginx alongside your docker-compose.yml file, and add the file nginx/nginx.conf:
server {
listen 80;
listen [::]:80;
server_name $hostname;
server_tokens off;
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 default_server ssl http2;
listen [::]:443 ssl http2;
server_name $hostname;
ssl_certificate /etc/nginx/ssl/live/agileui/cert.pem;
ssl_certificate_key /etc/nginx/ssl/live/agileui/key.pem;
location / {
proxy_pass http://agileui:3000;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Ssl on; # Optional
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Host $host;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
}
}
Store your certificate and key file next to the docker-compose.yml file (cert.pem and key.pem in this example).
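For testing without a CA-issued certificate, a self-signed pair can be generated with OpenSSL (a sketch; the CN agileui.example.com is a placeholder for your own hostname, and browsers will warn about self-signed certificates):

```shell
# Generate a self-signed certificate and key, valid for one year
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=agileui.example.com"
```

For production use, replace these with a certificate issued by a trusted CA.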
Update the docker-compose.yml file to also start the official nginx container, and point it to the nginx.conf, cert.pem and key.pem files via volume mounts.
version: '3.7'
services:
agileui:
image: 263637530387.dkr.ecr.eu-north-1.amazonaws.com/agile-live-gui:latest
container_name: agileui
environment:
MONGODB_URI: mongodb://<API_USER>:<API_PASSWORD>@mongodb:27017/agile-live-gui
AGILE_URL: https://<SYSTEM_CONTROLLER_IP>:<SYSTEM_CONTROLLER_PORT>
AGILE_CREDENTIALS: <SYSTEM_CONTROLLER_USER>:<SYSTEM_CONTROLLER_PASSWORD>
NODE_TLS_REJECT_UNAUTHORIZED: 1
NEXTAUTH_SECRET: <INSERT_GENERATED_SECRET>
NEXTAUTH_URL: https://<SERVER_IP>
BCRYPT_SALT_ROUNDS: 10
UI_LANG: en
ports:
- <HOST_PORT>:3000
webserver:
image: nginx:latest
ports:
- 80:80
- 443:443
restart: always
volumes:
- ./nginx/:/etc/nginx/conf.d/:ro
- ./cert.pem:/etc/nginx/ssl/live/agileui/cert.pem:ro
- ./key.pem:/etc/nginx/ssl/live/agileui/key.pem:ro
mongodb:
image: mongo:latest
container_name: mongodb
environment:
MONGO_INITDB_ROOT_USERNAME: <MONGODB_USERNAME>
MONGO_INITDB_ROOT_PASSWORD: <MONGODB_PASSWORD>
ports:
- 27017:27017
volumes:
- ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
- ./mongodb:/data/db
Note that the NEXTAUTH_URL parameter should now be the https address and not specify a port, as we use the default HTTP and HTTPS ports (without nginx we specified the internal GUI port, 3000).
Starting, stopping and signing in to the Agile-live GUI
Start the UI container by running Docker Compose in the same directory as the docker-compose.yml file:
sudo docker compose up -d
Once the containers are started you should be able to reach the GUI at the HOST_PORT that you selected in the docker-compose.yml file.
To log in to the GUI, use username admin and type a password longer than 8 characters. This password will be stored in the database and required for all subsequent logins. For more information on how to reset passwords and add new users, see the section on editing the database in the guide on using the configuration-and-monitoring GUI.
After signing in to the GUI, the status indicators in the bottom left should all be green; otherwise there is probably a problem with your System Controller or database configuration in the docker-compose.yml file.
To stop the GUI and database container run:
sudo docker compose down
Installing support for VLC on a client machine, to open the multiview directly from the GUI:
On Windows, the GUI supports redirecting the user via links to VLC. To make this work, the following open-source program needs to be installed: https://github.com/stefansundin/vlc-protocol/tree/main
This is not supported on Linux or Mac.
3.4.1 - Custom MongoDB installation
This page contains instructions for making a manual installation of MongoDB, instead of running MongoDB in Docker as described in the previous section. In most cases we recommend running MongoDB in Docker, as it is easier to set up and manage.
Custom installation of MongoDB
Install the database software, MongoDB.
sudo apt-get install gnupg
curl -fsSL https://pgp.mongodb.com/server-6.0.asc | \
sudo gpg -o /usr/share/keyrings/mongodb-server-6.0.gpg \
--dearmor
echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-6.0.gpg ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list
sudo apt-get update
sudo apt-get install -y mongodb-org
Create a /etc/mongod.conf that contains the following:
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# Where and how to store data.
storage:
dbPath: /var/lib/mongodb
# engine:
# wiredTiger:
# where to write logging data.
systemLog:
destination: file
logAppend: true
path: /var/log/mongodb/mongod.log
# network interfaces
net:
port: 27017
bindIp: 0.0.0.0
# how the process runs
processManagement:
timeZoneInfo: /usr/share/zoneinfo
security:
authorization: enabled
#operationProfiling:
#replication:
#sharding:
## Enterprise-Only Options:
#auditLog:
#snmp:
Then start the database
sudo systemctl enable mongod
sudo systemctl start mongod
Create the file ~/mongosh-admin.js containing (replace <ADMIN_PASSWORD>):
use admin
db.createUser(
{
user: "admin",
pwd: "<ADMIN_PASSWORD>",
roles: [
{ role: "userAdminAnyDatabase", db: "admin" },
{ role: "readWriteAnyDatabase", db: "admin" }
]
}
)
Create the admin user
mongosh localhost < ~/mongosh-admin.js
Create the file ~/mongosh-api.js for the API user (replace <API_PASSWORD>):
productionsDb = db.getSiblingDB('agile-live-gui');
productionsDb.createUser({
user: 'api',
pwd: '<API_PASSWORD>',
roles: [{ role: 'readWrite', db: 'agile-live-gui' }]
});
Create the API user
mongosh -u admin -p <ADMIN_PASSWORD> < ~/mongosh-api.js
3.5 - Prometheus and Grafana for monitoring
To collect useful system metrics for stability and performance monitoring, we recommend using Prometheus. For visualizing the metrics collected by Prometheus, you can use Grafana.
The System Controller receives a lot of internal metrics from the Agile Live components. These can be pulled by a Prometheus instance from the endpoint https://system-controller-host:8080/metrics.
It is also possible to install external exporters for various hardware and OS metrics.
Prometheus
Installation
Use this guide to install Prometheus: https://prometheus.io/docs/prometheus/latest/installation/.
Configuration
Prometheus should be configured to scrape the System Controller with something like this in the prometheus.yml file:
scrape_configs:
- job_name: 'system_controller_exporter'
scrape_interval: 5s
scheme: https
tls_config:
insecure_skip_verify: true
static_configs:
- targets: ['system-controller-host:8080']
External exporters
Node Exporter
Node Exporter is an exporter used for general hardware and OS metrics, such as CPU load and memory usage.
Instructions for installation and configuration can be found here: https://github.com/prometheus/node_exporter
Add a new scrape_config in prometheus.yml like so:
- job_name: 'node_exporter'
scrape_interval: 15s
static_configs:
- targets: ['node-exporter-host:9100']
DCGM Exporter
This exporter uses Nvidia DCGM to gather metrics from Nvidia GPUs, including encoder and decoder utilization.
More information and installation instructions can be found here: https://github.com/NVIDIA/dcgm-exporter
Add a new scrape_config in prometheus.yml like so:
- job_name: 'dcgm_exporter'
scrape_interval: 15s
static_configs:
- targets: ['dcgm-exporter-host:9400']
Grafana
Installation of Grafana is described here: https://grafana.com/docs/grafana/latest/setup-grafana/installation/
As a start, the following Dashboards can be used to visualize the Node Exporter and DCGM Exporter data:
Example of running Node Exporter and DCGM Exporter with Docker Compose
To simplify setup of the Node Exporter and DCGM Exporter on multiple machines to monitor, the following example Docker Compose file can be used. First, after a normal installation of Docker and the Docker Compose plugin, the Nvidia Container Toolkit must be installed and configured to allow access to the Nvidia GPU from inside a Docker container:
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
Then the following example docker-compose.yml file can be used to start both the Node Exporter and the DCGM Exporter:
version: '3.8'
services:
node_exporter:
image: quay.io/prometheus/node-exporter:latest
container_name: node_exporter
command:
- '--path.rootfs=/host'
network_mode: host
pid: host
restart: unless-stopped
volumes:
- '/:/host:ro,rslave'
dcgm_exporter:
image: nvcr.io/nvidia/k8s/dcgm-exporter:3.3.3-3.3.1-ubuntu22.04
container_name: dcgm_exporter
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [ gpu ]
restart: unless-stopped
environment:
- DCGM_EXPORTER_NO_HOSTNAME=1
cap_add:
- SYS_ADMIN
ports:
- "9400:9400"
Start the Docker containers as usual with docker compose up -d.
To verify that the exporters work, you can use curl to access the metrics data: curl localhost:9100/metrics for the Node Exporter and curl localhost:9400/metrics for the DCGM Exporter. Note that the DCGM Exporter might take several seconds before the first metrics are collected, so the first requests might yield an empty response body.
3.6 - Control panel GUI examples
The rendering engine, which contains the video mixer, HTML renderer, audio mixer, etc., is configured and controlled via a control command API over the control connections. Some source code examples of control panel integrations are provided, together with third-party software recommendations. All of these use the generic control panels as proxies when communicating with the rendering engine. However, integrators are recommended, when possible, to write their own control panel integrations using the libraries and header files provided by the base layer of the platform, instead of relaying commands via the generic control panels.
Streamdeck and Companion
A good first starting point for a control panel to control the video (and audio) mixer is to use an Elgato Streamdeck in combination with the open source software Bitfocus Companion.
The source code examples
Unzip the file agile-live-unofficial-tools-5.0.0.zip to find several source code examples of control panel integrations in various languages. Some communicate with the generic TCP control panel, some with the generic WebSocket control panel, and some directly with the REST API. The source code is free to reuse and adapt.
The audio control interface and GUI
An example audio control panel interface and GUI is provided as source code in Python 3. It attempts to find control panel hardware, either the Korg nanoKONTROL2 or the Behringer X-Touch Extender, via MIDI, and to communicate with it using the Mackie Control protocol. (Note that the Korg nanoKONTROL2 does not use the Mackie protocol by default. Refer to the nanoKONTROL2 user manual on how to set it up.)
The audio control panel uses the tkinter GUI library from the Python standard library, and also depends on the Python packages python_rtmidi and mido, which can be installed using pip3 install.
The chroma key control panels
There is a Python3+tkinter-based GUI for configuring the chroma key (“green screen”) functionality, which communicates with the generic tcp control panel.
There is also an HTML-based version that communicates with the generic websocket control panel.
The thumbnail viewer
The thumbnail viewer is an example web GUI to poll the REST API and visualize all ingests and sources that are connected to a given System Controller. Thumbnails of all sources are shown, and updated every 5 seconds.
4 - Tutorials and How-To's
4.1 - Getting Started
This is a tutorial that describes the steps needed to set up a full Agile Live system, everything from the Base Platform to the configuration-and-monitoring GUI and the control panel integrations. It relies on links to a number of separate sub-guides.
Prerequisites
A working knowledge of the technical basis of the Agile Live solution is beneficial when starting with this tutorial but not required. It is assumed that sufficient hardware has been provided, i.e.:
- at least one server to be used for ingesting media via an SDI capture card or as NDI via a network interface
- at least one server to be used for running the production pipeline(s) including the rendering engine that performs the video and audio mixing, as well as the generic control panels
- one server that will run all management and orchestration software, including the System Controller, the configuration-and-monitoring GUI, the Prometheus database and the associated Grafana instance
- at least one computer that will have control panel hardware connected to it, run the control panel integration software, and receive and display the multiview video stream and the monitor feed media stream
The servers may be in the field, in a datacenter, or in a private or public cloud. For cloud VMs, the requirement is that they have Nvidia GPUs with NVENC and NVDEC support. All major cloud providers offer VMs with Nvidia T4 GPUs.
In a very minimal system, not all of these machines need to be separate; in fact, everything above could run on a single computer if desired. Recommended hardware requirements for the servers running the ingest and production pipelines can be found here.
Installing all components
A guide to installing all the components of the system can be found here; start by following it.
Using the configuration-and-monitoring GUI to set up and monitor a production workflow
A guide on how to use the basic configuration-and-monitoring GUI to set up a working system can be found here.
Using the Grafana dashboard example
Viewing the multiview and outputs
The multiview and outputs (low delay monitor feed and high quality program out) are sent from the platform as MPEG TS streams in SRT or UDP. For viewing the streams, please refer to this guide.
Creating a video mixer control panel in Companion
A good starting point for a video mixer control panel is to use Bitfocus Companion with an Elgato Streamdeck device. Some guidelines for how to use those to build a video mixer control panel for Agile Live can be found here
Using the audio control GUI
There is an example audio control panel integration that also includes a GUI. A guide on how to use it can be found here
4.2 - Using the configuration-and-monitoring GUI
Using the configuration-and-monitoring GUI to set up and monitor a production workflow
When loading the configuration and monitoring GUI in a web browser the first screen you will encounter is the login screen. Log in using the credentials that you set when installing. After logging in you will see the home screen. Make sure that the bottom strip says that the System Controller is online and that the database is online.
Configure sources
On the home screen, click the Inventory Management button. This will take you to the inventory management page. On the left side there is a list of all sources currently in the system. It is possible to filter the list based on source type, location, and active state. For each source, it is possible (and recommended) to click the Edit button to open the source configuration editing page. It is recommended to set a descriptive name (e.g. "Front wide camera"), a type, and a location for the source. This view is also used to configure which embedded audio channels to transport up to the production pipelines. Click Save to save your changes, exit this page, and return to the inventory management page. Repeat the process for each source. Whenever a new source is added to the system, it is recommended to perform this step on it. Press the Home button to return to the home screen.
Create and start a production configuration
A production configuration is a description of how the system is configured for a specific production workflow, e.g. a TV program. On the home screen, click the Create New button to create a new production configuration. Give it a descriptive name, e.g. "Stockholm local news", and click the Create button. This will bring you to the configuration page for the new production configuration. Click Add Source to add a source to this production configuration. This will bring out a list of available sources on the left side of the screen. Again, it is possible to filter the sources the same way as on the inventory management page. Click Add for the sources that you want to have in your production configuration. The order of the sources is the same as the order they will be presented in the multiviewer. To change the order, drag and drop them. For each source, it is possible to set the UMD text by clicking the name above the thumbnail and editing it.
A preset is a collection of all the detailed variables of each part of the production configuration that will be set up. This includes detailed quality settings, network latency settings, etc. To choose a preset for your current production configuration, click Select Preset and select one of the pre-defined presets. Once that has been done, some settings can be edited by clicking the button with the cogwheel icon.
To run a production configuration, click the Start button. Once the production is started, click the Home button to return to the home screen.
Use the runtime monitoring
At the top of the screen there will now be a pane for each pipeline that is running, with information on where to find the multiview and output streams. For the production configuration that is currently running, there is a Runtime Monitoring button that is either green if all is well, or red if any error counters have increased in the last minute. Clicking the button takes you to the runtime monitoring screen.
The runtime monitoring screen is built as an expanding tree, where each level and instance has a status icon that is either green or red. This way it is very easy to trace an error through the tree down to the erroring counter. On the lowest level, the actual counters can have three colors. White means it is an informational counter that cannot indicate an error. Green color means that the counter is able to indicate an error but at the moment it is fine. Red color means that the counter is currently indicating an error. Such a counter is red if it has increased in the last 60 seconds. A red counter will revert to green after 60 seconds without increasing.
Editing the database
To add or edit production presets and new users, the Mongo database can be manually edited.
The GUI tool MongoDB Compass can be used to edit the database, but MongoDB Shell and other MongoDB tools will also work. Since you connect to the MongoDB via the network, Compass can be used on any computer with network access to the GUI server on port 27017.
After installing MongoDB Compass, connect to the database by pressing the "New Connection" button. In the URI field, exchange localhost for the hostname of the GUI server (unless you run Compass on the same server as the GUI, of course). Add agile-live-gui last in the URI field, to access the GUI database only. Then, under Advanced Connection Options, select Authentication and use the method Username/Password. Type the <API_USER> and <API_PASSWORD> you put in the mongo-init.js file when setting up the GUI server.
The URI field should then look something like:
mongodb://<API_USER>:<API_PASSWORD>@<HOSTNAME>:27017/agile-live-gui?authMechanism=DEFAULT
Add or edit production presets
Once connected you will see the databases to the left. The database agile-live-gui contains the collections used by the GUI. To add a new production preset, or modify an old one, open the presets collection under the agile-live-gui database. To create a new preset, the easiest way is to clone an old preset and make your changes to it. Hover your mouse over one of the presets you want to clone, and press the Clone document button on the top right of the preset. In the popup window you can make your edits to the new preset. Most important is to set a new name for it in the "name" field. This is the name that will be displayed in the GUI when selecting which preset to use. Under pipelines, the quality settings for the streams from the Ingests to the Pipelines can be set.
Add new users to the system
New users of the GUI can be added to the Mongo database as well. Open the users collection under the agile-live-gui database. Then add a new user by pressing the Clone document button on one of the existing users. In the popup window, set the username of the new user and remove the hashed password, leaving the password field empty (i.e. "password": ""). Press Insert to add the new user to the database. Then open the GUI in a web browser. On the sign-in page, enter the username of the new user and type a password to set it as the new password in the database. A new password must have at least 8 characters to be approved.
4.3 - Viewing the multiview and outputs
The multiview and outputs (low delay monitor feed and high quality program out) are sent from the platform as MPEG-TS streams over SRT or UDP. For viewing the streams, VLC can be used. For the multiview and low delay monitoring feeds, it is important that the viewer does not add a lot of buffering delay, so it needs to be configured accordingly. The setting network-caching can be used to set the buffering. It can be set in the VLC GUI settings, or on the command line by specifying --network-caching=140 when starting VLC, for example on Windows:
"C:\Program Files (x86)\VideoLAN\VLC\vlc.exe" --network-caching=140 srt://mydomain:9900
4.4 - Creating a video mixer control panel in Companion
Create buttons in Companion by using the action tcp-udp: Send Command to send the control commands that are listed here.
An example Companion config can be found here.
4.5 - Using the audio control GUI
The audio control GUI is written in Python and should run on all systems that have Python 3. However, since it uses tkinter, and the native support for tkinter on some platforms is not perfect, problems may occur. The application is verified to work on Ubuntu 22.04.
It is started with ./acl_audio_control.py
First connect to an open TCP port on an acl-tcpcontrolpanel by setting the correct IP/hostname and port under "Mixer network address". Once connected, use the "Add strip" and "Remove strip" buttons to add and remove volume strips. Then map the audio of each strip: select the "Slot" of the connected (video) source you want to take the audio from, then select in which channel of the source to find the audio (or the L channel in case of stereo; the R channel is assumed to be the next channel), and whether the strip should be mono or stereo. The name of the strip can be changed by clicking on it and editing it.
Once everything is set up, the configuration can be saved with the "Save" and "Save as…" buttons. The configuration is stored as a JSON file.
4.6 - Setting up multi-view outputs using the REST API
This is a tutorial that describes how to create and update multi-view outputs from a Pipeline using the REST API of the System Controller.
Prerequisites
Knowledge of how to set up a system and create a production using the REST API.
The multi-view generator
The Agile Live system includes a built-in multi-view generator that can be used with any Rendering Engine to produce multiple composited mosaic video streams of selected inputs with a given layout. The multi-view generator can technically be used in both the High Quality and Low Delay pipelines, but in the general case it is only used in the Low Delay pipeline.
Start a new multi-view output stream
To start a new multi-view output stream, use the [POST] /pipelines/{uuid}/multiviews/ endpoint in the REST API, where {uuid} is the UUID of the Pipeline component from which you want to create the multi-view output (generally the Low Delay pipeline). Then provide a JSON payload describing how the multi-view output should look. For instance:
{
"layout": {
"output_height": 1080,
"output_width": 1920,
"views": [
{
"width": 960,
"height": 540,
"x": 0,
"y": 0,
"input_slot": 1,
"label": "Camera 1"
},
... // More views here
]
},
"output": {
"format": "MPEG-TS-SRT",
"frame_rate_d": 1,
"frame_rate_n": 50,
"ip": "0.0.0.0",
"port": 4567,
"video_format": "AVC",
"video_kilobit_rate": 5000
}
}
The description contains two parts: the layout part, describing how the multi-view output will look, and the output part, describing the format the stream should be encoded to. Starting with the output part: this will generate an MPEG-TS-over-SRT stream, with SRT in server mode (IP address 0.0.0.0) on port 4567. The frame rate is set to 50 FPS and the encoding format is set to AVC (H.264) at 5000 kilobits/second. To play the stream, use ffplay or VLC:
ffplay srt://<ip-address>:4567
or in VLC, press Media, then Open network stream, and write srt://<ip-address>:4567 in the dialog that pops up. Note that the default latency in VLC is quite high (several seconds) compared to ffplay.
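As a sketch of how the request might be sent, the payload can be stored in a file, sanity-checked, and posted with curl. The host, port, credentials, pipeline UUID and exact URL prefix are placeholders that depend on your System Controller setup:

```shell
# Store a minimal one-view multi-view description in a file
cat > multiview.json <<'EOF'
{
  "layout": {
    "output_height": 1080,
    "output_width": 1920,
    "views": [
      { "width": 960, "height": 540, "x": 0, "y": 0,
        "input_slot": 1, "label": "Camera 1" }
    ]
  },
  "output": {
    "format": "MPEG-TS-SRT",
    "frame_rate_d": 1,
    "frame_rate_n": 50,
    "ip": "0.0.0.0",
    "port": 4567,
    "video_format": "AVC",
    "video_kilobit_rate": 5000
  }
}
EOF

# Sanity-check that the payload is valid JSON before posting it
python3 -m json.tool multiview.json > /dev/null && echo "payload OK"

# Post it to the System Controller (placeholders; -k skips certificate
# verification, which is only advisable with self-signed test certs)
# curl -k -u <USER>:<PASSWORD> -H "Content-Type: application/json" \
#      -d @multiview.json \
#      "https://<SYSTEM_CONTROLLER_IP>:<SYSTEM_CONTROLLER_PORT>/pipelines/<PIPELINE_UUID>/multiviews/"
```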
Multi-view layout
The layout part of the JSON payload contains a description of the multi-view. The parameters output_height and output_width define the resolution of the main frame in pixels, i.e. the resolution of the resulting video stream. Within this frame, multiple views can be placed. Each view can display any of the input sources connected to the pipeline, or any of the auxiliary feedback outputs from the Rendering Engine (more on that below). A view is defined by its width and height in pixels and by the x and y position of its top left corner, in pixels from the top left corner of the frame (see the illustration below). The content of the view is selected using the input_slot parameter, and an optional label to display under the view is provided using the label parameter. In case two views overlap, the one appearing first in the views array will be drawn on top of the other one. In case the aspect ratio of the view does not match the aspect ratio of the source, the video image will be resized, with its aspect ratio kept, to fit entirely inside the defined view. Areas that are not covered by the video will be filled with black.
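The scale-to-fit rule can be written out as a short calculation. A minimal sketch (Python; the function is illustrative, not part of the product) of the rectangle a source ends up occupying inside a view:

```python
def fit_rectangle(src_w, src_h, view_w, view_h):
    """Scale a source to fit entirely inside a view, keeping its aspect
    ratio. Returns (x, y, w, h) of the video inside the view; the
    remaining area is filled with black."""
    scale = min(view_w / src_w, view_h / src_h)
    w, h = round(src_w * scale), round(src_h * scale)
    # Center the scaled video inside the view.
    return ((view_w - w) // 2, (view_h - h) // 2, w, h)

# A 16:9 camera placed in a square 540x540 view gets letterboxed:
print(fit_rectangle(1920, 1080, 540, 540))  # (0, 118, 540, 304)
```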
The input_slot parameter can refer to any of the sources connected to an input slot in the Pipeline; just use the same input slot number as when the source was connected using the [POST] /streams endpoint. It can also refer to any of the auxiliary feedback outputs of the Rendering Engine, i.e. video streams created by the Rendering Engine itself. These feedback streams have their own input_slot numbers. To see which feedback streams the Rendering Engine provides, use the [GET] /pipelines/{uuid} endpoint with the UUID of the Pipeline. The result might look something like this:
{
  "uuid": "ab552c6c-4226-a9ec-66b8-65f98753beaa",
  "name": "My LD Pipeline",
  "type": "pipeline",
  "streams": [
    ...
  ],
  "feedback_streams": [
    {
      "input_slot": 1001,
      "name": "Program"
    },
    {
      "input_slot": 1002,
      "name": "Preview"
    }
  ],
  "multiviews": [
    ...
  ]
}
In this case the Rendering Engine provides two feedback streams to the multi-view generator: one called “Program” on input_slot 1001, which is the mixed output stream, and one called “Preview” on input_slot 1002, which shows what the Rendering Engine is currently previewing. The “Program” output is the same as the video stream you get from the Output component connected to the same pipeline.
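When building layouts programmatically, these feedback slots can be looked up from the response instead of being hard-coded. A small sketch (Python; the dict literal stands in for a parsed [GET] /pipelines/{uuid} response):

```python
# Subset of a parsed [GET] /pipelines/{uuid} response, as in the example above.
pipeline = {
    "feedback_streams": [
        {"input_slot": 1001, "name": "Program"},
        {"input_slot": 1002, "name": "Preview"},
    ],
}

# Map feedback stream names to input slots, so a multi-view layout can
# reference "Program"/"Preview" without hard-coding 1001/1002.
feedback_slots = {s["name"]: s["input_slot"] for s in pipeline["feedback_streams"]}
```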
Tally borders
Tally borders, i.e. colored borders around the views in the multi-view output that show which views are currently used in the program output (red frame) and the preview output (green frame), are automatically added to the multi-view outputs. Setting and removing the borders and choosing their colors is controlled by the Rendering Engine. The Agile Live Rendering Engine (acl-renderingengine) will also use borders on views containing the Program and Preview feedback outputs.
Update the layout
Once a multi-view output is created, it will show up when listing multi-view outputs using the [GET] /pipelines/{uuid}/multiviews endpoint, as well as in some of the other Pipeline-related endpoints. The layout of an existing multi-view stream can then be updated without interrupting the stream, using the [PUT] /pipelines/{uuid}/multiviews/{viewid} endpoint. The viewid parameter is the id of the multi-view output stream within that Pipeline component. This id is provided in the JSON blob returned with the HTTP response when creating the multi-view, and can also be retrieved using the [GET] /pipelines/{uuid} endpoint.
In the update request, an array of views, just like the one used when creating the multi-view output, is provided as a JSON payload. It is not possible to change the resolution of the main frame/stream or any of the values set in the output section when the stream was created, as that would interrupt the stream. If you want to alter any of these settings, stop the multi-view output and create a new one with the desired format.
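For instance, a control application could exchange the contents of two views while keeping the geometry, and send the result as the PUT payload. A sketch (Python; the helper is illustrative, not part of the API):

```python
def swap_slots(views, slot_a, slot_b):
    """Return a new views array with the input slots of two views
    exchanged, leaving positions, sizes and labels untouched."""
    mapping = {slot_a: slot_b, slot_b: slot_a}
    return [dict(v, input_slot=mapping.get(v["input_slot"], v["input_slot"]))
            for v in views]

views = [
    {"width": 960, "height": 540, "x": 0, "y": 0, "input_slot": 1, "label": "A"},
    {"width": 960, "height": 540, "x": 960, "y": 0, "input_slot": 2, "label": "B"},
]
updated = swap_slots(views, 1, 2)
# `updated` is the views array to send as the JSON payload of the PUT request.
```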
Multiple multi-view outputs
The Pipeline supports outputting multiple multi-view output streams. Each output stream can have its own unique layout and different labels on the views compared to the other multi-view outputs. The maximum number of multi-view output streams is limited by the hardware the Pipeline is running on.
Remove a multi-view output
A multi-view output stream can be closed by calling the [DELETE] /pipelines/{uuid}/multiviews/{viewid} endpoint with the id of the multi-view output to close.
4.7 - Security in Agile Live
This is a tutorial that describes how to set up and use encryption throughout the Agile Live system, as well as other security-related topics.
Connections in the system
There are several types of network connections in the Agile Live system that need encryption to protect them. This is especially important when running productions in a public cloud environment, or when transporting parts of a production’s data over the public internet. These connections are:
- The HTTP requests to the REST API of the System Controller
- The WebSocket connections between the components in the system and the System Controller
- The SRT/RIST streams transporting the video and audio from the Ingests to the Production Pipelines.
- The TCP control connections used to transport control commands from the Control Panel to the Production Pipeline, and to propagate the commands from the Low Delay pipeline to the High Quality pipeline.
The last two connection types above are always encrypted. Each stream/connection is set up with its own uniquely generated encryption key. The REST API and the WebSocket connections between the components and the System Controller, however, require some manual setup by the user.
Warning
Even if encryption of the SRT/RIST stream connections between Ingests and Pipelines and of the control connections is enabled automatically, when HTTPS is turned off in the System Controller the encryption keys are passed between the components over unencrypted links, where anyone can read them in clear text. This effectively means HTTPS must be enabled in the System Controller for the other connections to be considered secure as well.
Enable HTTPS in the System Controller
Making the HTTP requests of the REST API and the WebSocket connections between the components and the System Controller secure requires the following steps:
- Get a TLS certificate from a certificate authority
- Point out the cert.pem and key.pem provided by your certificate authority in the System Controller config file under https, attributes certificate_file and private_key_file. Also make sure enabled under https is set to true.
- Start the System Controller and make sure it prints that the API is served using https
- Try accessing the REST API using HTTPS to make sure it works as expected
WebSocket connections will automatically be encrypted when HTTPS is turned on.
In case a component trying to connect to the System Controller using HTTPS fails with the following error message:
Failed to connect to System Controller, message: set_fail_handler: 8: TLS handshake failed
this is likely because ACL_SYSTEM_CONTROLLER_IP is not set to the same hostname as the “Common Name” field in the certificate. Set ACL_SYSTEM_CONTROLLER_IP to the same hostname as in the System Controller’s certificate to make it work.
Self-signed certificates
Self-signed certificates can be used by pointing out the cert.pem and key.pem files as above. Components connecting to a System Controller with a self-signed certificate must also explicitly allow this by setting the environment variable ACL_INSECURE_HTTPS to true.
Note that it is recommended to always use certificates from a certificate authority. Setting ACL_INSECURE_HTTPS to true means that the components never verify the server certificate. A somewhat more secure way is to let the components trust a custom CA or a self-signed certificate. This can be done by copying the custom CA certificate file or self-signed certificate to a file readable by the component, and then setting the environment variable ACL_CUSTOM_CA_CERT_FILE to the full path of this certificate file.
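In summary, the two variants are selected with environment variables set before the component starts. A sketch (the certificate path below is a placeholder, not a path the product requires):

```shell
# Preferred: trust a specific custom CA or self-signed certificate.
export ACL_CUSTOM_CA_CERT_FILE=/path/to/my-ca-cert.pem

# Alternative for testing only: skip server certificate verification entirely.
export ACL_INSECURE_HTTPS=true
```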
Pre-shared key (PSK)
Components connecting to the System Controller must authenticate themselves to be accepted and connected. When a component connects to the System Controller, it presents a pre-shared key (PSK), which must match the PSK set in the System Controller for the component to be allowed to connect. The PSK is set in the System Controller’s config file using the psk attribute before it is started, and must be exactly 32 characters long. All components must then connect using the same PSK, by setting the environment variable ACL_SYSTEM_CONTROLLER_PSK before starting the application. If the user fails to set the PSK before starting the application, it will immediately quit and print an error message.
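As a sketch, a valid 32-character PSK can be generated and checked like this (Python; generating it randomly is just one option, any 32-character secret works):

```python
import secrets
import string

def generate_psk(length=32):
    """Generate a random PSK of the length the System Controller requires."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

psk = generate_psk()
assert len(psk) == 32  # the System Controller requires exactly 32 characters
# Put this value in the System Controller config file ("psk" attribute) and
# export it as ACL_SYSTEM_CONTROLLER_PSK for every component.
```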
Warning
In case HTTPS is not enabled in the System Controller, the PSK will be transported over an unencrypted network connection where anyone can read it in clear text!
Authentication in the REST API
The System Controller REST API is protected with Basic Authentication, which requires a username and password to access the API endpoints. To configure this, edit the client_auth part of the System Controller’s config file. Choose which username and password should grant access to the REST API and make sure enabled is set to true to actually activate the authentication check. The configured password is tested for strength on start; the System Controller will not start if the password is judged too easy to crack by brute force.
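Clients then send standard Basic Authentication credentials with every request. The sketch below (Python; the credentials are made up) shows the Authorization header that any HTTP client, e.g. curl -u, builds from them:

```python
import base64

def basic_auth_header(username, password):
    """Build the HTTP Basic Authentication header value for the REST API."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

# Hypothetical credentials; use the ones configured in the client_auth
# section of the System Controller's config file.
header = basic_auth_header("admin", "s3cret-and-long-enough")
```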
Warning
In case HTTPS is not enabled in the System Controller, the username and password will be transported over an unencrypted network connection where anyone can read them!
System Controller HTTP headers
The System Controller serves the REST API and sets a few headers in
the responses. Among these headers, the Content-Security-Policy
(CSP) can be fine-tuned according to your configuration needs.
The CSP header set in the distributed configuration file is:
Content-Security-Policy: frame-ancestors 'none'
This header plays a crucial role in enhancing security by controlling iframe (inline frame) embedding, specifically for this site. The frame-ancestors directive with the value 'none' signifies that no pages or sites are allowed to embed the current page as an iframe. This helps prevent attacks such as clickjacking by blocking the embedding of this site in others’ iframes.
4.8 - Statistics in Agile Live
This is a tutorial that explains in more detail how the statistics in Agile Live should be interpreted and how the system measures them.
Statistics
Timestamps and measurements
Let’s start by taking a closer look at how the system measures these statistics:
/ingests/{uuid}/streams/{stream_uuid} {
processing_time_audio,
processing_time_video,
audio_encode_duration,
video_encode_duration
}
/pipelines/{uuid}/streams/{stream_uuid} {
time_to_arrival_audio,
time_to_arrival_video,
time_to_ready_audio,
time_to_ready_video,
audio_decode_duration,
video_decode_duration
}
All media that is ingested will be assigned a timestamp as soon as the audio and video are captured; this is the capture timestamp. With this timestamp as a reference, the system calculates three more timestamps, which all measure the time passed since the capture timestamp was taken.
processing_time_audio & processing_time_video
After the capture step, the audio and video are converted into the format the user selected; this may involve resizing, deinterlacing and any color format conversion that is needed. The next step is to encode the audio and video. Once the encoding is done, a new timestamp is taken, which measures the difference between the capture time and the current time after the encoding is done. Note that these values may be negative (and in rare cases the following timestamps too), in cases where the system has shifted the capture timestamp into the future to handle drifting.
time_to_arrival_audio & time_to_arrival_video
When the Rendering Engine receives the EFP stream, and has received a full audio or video frame, a new timestamp is taken, which measures the difference between the capture time and the current time after a full frame was received. This duration also includes the network transport time, i.e. the maximum time that the network protocol will use for re-transmissions. This value can be configured with the max_network_latency_ms value when setting up the stream.
time_to_ready_audio & time_to_ready_video
The next step is to decode the compressed audio and video into raw data. When the media is decoded, a new timestamp is taken, which measures the difference between the capture time and the current time after the decoding of a frame is done. This is when a frame is ready to be delivered to the Rendering Engine. The alignment of the stream between Ingest and Pipeline must be larger than the time_to_ready_audio/video statistics, otherwise the frames will arrive too late and be dropped. This is a good value to check if you experience dropped frames, and potentially increase the alignment value if so. You can also experiment with lowering the bitrate or decreasing the max_network_latency_ms setting. If a frame is ready earlier than the alignment time, it will be queued until the system reaches the alignment time. For video frames there is also a queue of compressed frames before the actual decoding; this queue makes sure that a maximum of 10 decoded frames are kept in memory after the decoder. Because of this, the difference between these timestamps and the alignment value is normally never larger than the duration of 10 video frames.
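The relation between the alignment and the time_to_ready statistics boils down to a simple margin calculation. A sketch (Python; the numbers are made up, read the real values from the REST API):

```python
def alignment_headroom_ms(alignment_ms, time_to_ready_ms):
    """Margin the configured alignment leaves before frames arrive too
    late and get dropped. Negative headroom means dropped frames."""
    return alignment_ms - time_to_ready_ms

# Hypothetical values: 120 ms alignment, video ready 95 ms after capture.
headroom = alignment_headroom_ms(120, 95)
# 25 ms of margin; if this approaches zero, increase the alignment or
# lower the bitrate / max_network_latency_ms as suggested above.
```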
The system also tracks how long a frame is processed by the encoders and decoders with the following values:
audio_encode_duration & video_encode_duration
These metrics show how long audio and video frames take to pass through their respective encoders. This duration includes any delay in the encoder as well, measuring the time from when each frame is passed to the encoder until the encoder returns the compressed frame. For video this also includes the time to do any format conversions, resizing and de-interlacing. Note that B-frames will cause the metric to grow to more than one frame time, as in some cases more frames are needed before the previous ones can be output to the stream.
audio_decode_duration & video_decode_duration
These metrics show how long audio and video frames take to pass through their respective decoders. This duration includes any delay in the decoder as well, measuring the time from when each frame is passed to the decoder until the decoder returns the raw frame. Here too, B-frames will cause this time to be longer than one frame time, because the decoder has to wait for more frames before the decoding can start.
4.9 - Using Media Player metadata in HTML pages
This is a tutorial that describes how the playback state of currently active Media Players can be propagated and used in HTML pages within the production. The functionality is available from Agile Live version 6.0.0.
Subscribing to metadata
HTML pages loaded into the Rendering Engine can subscribe to metadata from Media Players located on other input slots of the Rendering Engine. The metadata can for instance be used to create graphics to aid the production crew.
In order for Media Player metadata to propagate to an HTML page, the page must first subscribe to metadata updates. This is done in the JavaScript code of the HTML page by calling the function window.aclInputSlotMetadataSubscribe() when the page has been loaded. After doing this, the global variable aclInputSlotMetadata will become available and be updated periodically.
In the simplest case:
<body onLoad="window.aclInputSlotMetadataSubscribe()">
Metadata structure
aclInputSlotMetadata is a JSON object with the Media Player information located in the “media_players” section, further organized by which input slot each Media Player is located at. An example with a media file being played at input slot number 2:
var aclInputSlotMetadata = {
  "media_players": {
    "2": {
      "file_name": "/mnt/media/ads/Clip32.mov",
      "is_paused": false,
      "is_looping": true,
      "current_time_ms": 400000,
      "start_time_ms": 300000,
      "input_duration_ms": 1800000,
      "duration_ms": 120000,
      "time_left_ms": 20000
    }
  }
}
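As a sketch of how a page might use these fields (JavaScript; the element id and the hard-coded slot "2" are illustrative), the snippet below formats time_left_ms as an on-screen countdown:

```javascript
// Format a millisecond count as M:SS for an on-screen countdown.
function formatTimeLeft(ms) {
  const totalSeconds = Math.max(0, Math.floor(ms / 1000));
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = totalSeconds % 60;
  return `${minutes}:${String(seconds).padStart(2, "0")}`;
}

// Read the metadata object that becomes available after calling
// window.aclInputSlotMetadataSubscribe(). Slot "2" matches the example above.
function updateCountdown() {
  const player = (aclInputSlotMetadata.media_players || {})["2"];
  if (player && !player.is_paused) {
    document.getElementById("countdown").textContent =
        formatTimeLeft(player.time_left_ms);
  }
}
```

A page would typically call updateCountdown on a timer, e.g. via setInterval, since the metadata variable is refreshed periodically.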
HTML example
The file “media-playback.html” located in the Agile Live examples ZIP file provides a full example of how to subscribe to, and use, the metadata from a Media Player. It can be loaded into the Rendering Engine to create a view that displays the metadata of the currently playing file. The HTML page accepts the Media Player’s input slot number as a query parameter. For example, if a Media Player is located at input slot number 4 and media-playback.html
is served at localhost:8899 (on the same machine as the Rendering Engine is running) the following commands can be used to load the page into a new input slot:
html create 5 1920 1080
html load http://localhost:8899/media-playback.html?slot=4
5 - Releases
5.1 - Release 7.0.0
Build date
2024-05-21
Release content
Product version
7.0.0
Release artifacts
Details | Filename | SHA-256 |
---|---|---|
Ingest binary and shared library | agile-live-ingest-7.0.0-809031f.tar | 7532b16204bcf4d8e018284bbfd93606106132a2307428fcee098269c0117d86 |
Rendering Engine binary and acl-libproductionpipeline shared library | agile-live-renderingengine-7.0.0-809031f.tar | a0e7ff5ef26dc6dc679d3db2fbdef65dd330baadce2e9ae0bb8abde88d09c1ca |
Generic control panels and acl-libcontroldatasender shared library | agile-live-controlpanels-7.0.0-809031f.tar | 188f2ed50e9bf6726e02dc0a7a61c0ab99337890ce806a35dc3472580770291f |
Include files | agile-live-dev-7.0.0-809031f.tar | 00e6b561cbf71275bbefb93d16f94cd7efb91f29204fff2823fd2a34982834ae |
System Controller | agile-live-systemcontroller-7.0.0-6c26469.tar | 52618467f6417d12e77f3c03b012aa6444443cb523b562309318e7cf74e14890 |
Changelog
- Add support for customizing the video and audio mixers using a config file passed to the
acl-renderingengine
application at startup
- A new endpoint
[PUT] /pipelines/{uuid}/reset
has been added to the REST API, which is used to make the Rendering Engine/Pipeline reset its runtime state back to what it was when the application started. This includes resetting audio strip mappings, resetting all parameters in all video graph nodes and closing all HTML browsers and media players - The REST API has been overhauled and improved, resulting in a version bump to
/api/v3
. Major changes include removal of most of the controlreceivers concept, as well as an improved format of the JSON responses of the control panel endpoints
- The UUIDs of control receivers have now been removed. Instead exactly one
ControlDataReceiver instance is connected to each Pipeline/MediaReceiver, meaning the Pipeline’s UUID is now used in place of the control receiver UUID when setting up control connections.
- The C++ SDK has changed when creating ControlDataReceiver and MediaStreamer instances. Now they are grabbed/created via the MediaReceiver class API.
- The ControlDataReceiver and MediaStreamers will now use their parent MediaReceiver WebSocket connection to the System Controller, reducing the overall number of network connections to the System Controller
- The EQ of the audio mixer has been reworked. It can now include up to 10 filters, where each filter can be configured to any of these types: lowpass, highpass, bandpass, lowshelf, highshelf, peak or notch. The audio quality of the already existing EQ filter types has been improved.
- The audio mixer (and audio router), video mixer, media players and HTML browsers can now be reset via the Control panel API.
- The quality of the deinterlacing algorithm of the Ingest has been improved
- The encoder of the Ingest now supports automatic reconfiguration when a format change occurs for the input source, without having to tear down and reconnect the stream via the REST API (Nvidia encoder only)
- The internal audio mixer has received a mute functionality, both for each input in an output bus, and for the master fader of each output bus.
- The internal audio mixer has a new automatic volume transition command, where a single command can be sent to fade the volume to a specified level over a set time
- Audio mixer outputs can now be set up as “post fader auxes”, to follow a certain audio mixer output, possibly removing some of the inputs from the aux mix (so called mix-minus)
- All Control Panel commands for the video mixer nodes are now prefixed with
video
to make them more coherent with the other commands, which have the target module as the first keyword. The old way of having just the video node name as the first keyword is still supported for backwards compatibility.
- The REST API calls to list the Pipelines, Ingests and Control panels with the expand flag set to false will now return the name of the component together with the UUID, for easier mapping between name and UUID.
- The negotiated SRT latency, (i.e. max of the peer’s requested SRT latencies) can now be seen in the REST API for both outputs and SRT sources
- The multithreading performance of MediaStreamers has been improved, allowing more outputs to be set up from Rendering Engines
- The Ingest now supports sending an 8-bit incoming SRT source as 10-bit HEVC to a production
- The System Controller now performs a version check on the major version of all components when they try to connect, and rejects those with another major version to prevent version mismatches
- Logging of important, successful events, such as stream/control connection setups and teardowns, has been improved
- Resolved an issue with outputs and multi-views where some SRT clients could cause the SRT connection to lock up and sometimes cause the Rendering Engine to freeze for up to two minutes
- SRT sources now have better drift detection and correction
- HTML browsers will no longer display an error message instead of the HTML page when the page fails to load, e.g. if the server was not reachable. Use the logs instead for debugging why an HTML page is not shown
- Fix a bug where the Prometheus metrics endpoint in the System Controller would get stuck, reporting the same old values over and over again in some cases.
- The Prometheus metrics endpoint now stops reporting values for old metrics, whether they are temporarily or permanently gone
- Fix an issue where the lost_frames and dropped_frames counters were wrongly incremented for NDI sources when grabbing thumbnails
- Fix an issue where the TCP ports could lock up in the TCP control panel
- The default settings for the compressor have changed to make it inactive by default
- Logging of new and closed connections in acl-websocketcontrolpanel has been improved
- The Ingest will now return an error when trying to close an SRT source that is in use by a stream
- Fix a bug where the Rendering Engine would hang in some cases when a media player was closed
- Control Commands for the Media and HTML modules are now logged to the control command log
- All applications now shut down gracefully when SIGHUP is sent to them
- Fix bug where the UUID for the output was set to the UUID of the Pipeline for output stream metrics in the Prometheus exporter
- Fix bug where the sides of portrait mode HTML browsers would have video artifacts
- Fix bug where the RTT for Outputs and Multi-views was always reported as 0
- Fix bug where the video mixer could malfunction if the resolution of the Rendering Engine was changed between two productions
Known limitations
Media Player
Videos will be stretched in case the video played does not have the same aspect ratio as the Rendering Engine is using.
GPU performance
The GPU utilization in the Ingest status message is currently not working. Use intel_gpu_top
and/or nvidia-smi
instead.
MPEG-TS over UDP
For Outputs and multi-view outputs it is currently not possible to bind to a specific IP and port with MPEG-TS over UDP, even if local_ip
and local_port
are exposed in the REST API.
Drivers
You need to install the CUDA Toolkit on Ingest machines, even if they don’t have Nvidia cards.
Security
No known issues
5.2 - Release 6.0.0
Build date
2024-01-29
Release status
Type: Sixth production ready release
Release content
Product version
6.0.0
Release artifacts
Details | Filename | SHA-256 |
---|---|---|
Ingest binary and shared library | agile-live-ingest-6.0.0-5d0d6e4.tar | cd21f5cf57a7ac5a4022d290c4badab87d768bafd8057c8142dc9397beb77086 |
Rendering Engine binary and acl-libproductionpipeline shared library | agile-live-renderingengine-6.0.0-5d0d6e4.tar | 7aa15accad3a254a0f8a842cc41046f51d5e7a2b586489596831eb6310bf53f2 |
Generic control panels and acl-libcontroldatasender shared library | agile-live-controlpanels-6.0.0-5d0d6e4.tar | 095ead70169afc0f4f5e2be00fc4507e787955cd4bfd93677a7e61779c0881e1 |
Include files | agile-live-dev-6.0.0-5d0d6e4.tar | 2686768b00e38bb02fd211da981c55e1399604edbaef317601ea11ae82d31cf1 |
System Controller | agile-live-systemcontroller-6.0.0-38a4fc7.tar | 86e58778cb5641ac1d7f2761ffabb3a69c51e24cbe2d41ed6b8df6093862f4b8 |
Warning
From this release the components will change from having their individual version numbers to having the same version number as the release (i.e. 6.0.0). This means the version of some components has changed to a lower number, while others have skipped a couple of major versions. In the end this makes it easier to verify that all components in a system are from the same release.
Changelog
- From this release Agile Live will be released as
.deb
packages instead of an archive of binaries and shared libs.
- To simplify compatibility verifications, all components inside Agile Live now have the same version number as the version of the release they belong to, in this release 6.0.0. This means some components have received an older version number compared to earlier releases and some components have skipped several major versions.
- The component -> System Controller communication has been changed from a request-response type of communication to a one-way communication, where the System Controller won’t send a response back and the component won’t expect a response back.
- More informative error messages from the System Controller in case requests fail
Ingest
- Support for ingesting SRT sources. Both caller and listener modes are supported, with or without passphrase. Currently the AVC and HEVC video codecs and AAC audio are supported. New SRT sources can be set up using the REST API.
- Counters for dropped_frames, lost_frames and duplicated_frames are now split into separate counters for audio and video, called dropped_audio_frames, dropped_video_frames, lost_audio_frames and so on
- The Ingest application now logs the enabled input interfaces more clearly and also outputs the API version of all interfaces supported on the machine.
Rendering Engine
- The Rendering Engine now supports playback of media files directly in the mixer itself. The player supports seeking as well as looping of the entire clip, or a section of the clip.
- The Rendering Engine can send information on the states of all media players to HTML pages. This way graphics can be shown for countdown until a clip ends, or to overlay such information to aid hosts in the studio.
- The generic control panel acl-tcpcontrolpanel now accepts multiple connections on a single port. As a result of this, the environment variable has changed name to ACL_TCP_ENDPOINT, without an S, and only accepts a single endpoint.
- Fix a crash when some invalid control commands were sent to the Rendering Engine.
Production Pipeline
- The visibility of the audio bars in the multi-views is now configurable per multi-view output.
Control Data Sender and Control Data Receiver
No changes
System Controller
- Fields are now correctly marked as “required” or “optional” in the OpenAPI specification served from the System Controller
- Fixed a bug where the metrics for retransmitted_packets and sent_bytes on the GET requests to multi-view did not increment
- Fix for CVE-2023-44487: DOS with HTTP/2
- The list of locations where the System Controller searches for the acl_sc_settings.json file has changed to just two: /etc/opt/agile_live followed by the current working directory.
- The public IP of Pipelines are now included in the REST API
Known limitations
Media Player
Videos will be stretched in case the video played does not have the same aspect ratio as the Rendering Engine is using.
GPU performance
The GPU utilization in the Ingest status message is currently not working. Use intel_gpu_top
and/or nvidia-smi
instead.
MPEG-TS over UDP
For Outputs and multi-view outputs it is currently not possible to bind to a specific IP and port with MPEG-TS over UDP, even if local_ip
and local_port
are exposed in the REST API.
Drivers
You need to install the CUDA Toolkit on Ingest machines, even if they don’t have Nvidia cards.
Security
No known issues
5.3 - Release 5.0.0
Build date
2023-10-09
Release status
Type: Fifth production ready release
Release content
Product version
5.0.0
Release artifacts
Details | Filename | SHA-256 |
---|---|---|
Ingest and Rendering Engine binaries, headers, and shared libraries | agile-live-5.0.0-9d0ae4b.zip | 08681e43e592b7d347752457be7fadfbb1f09d9ba6145714c004003fcc45f666 |
System Controller | agile-live-system-controller-5.0.0-1820bd9.zip | 826c99d8bc679ebe37762a4cfd408655807b19ac36419f826c9fbee02af1fe91 |
Library details
libacl-ingest.so
Version: 11.0.0
Library containing the core Ingest functionality used by the acl-ingest
executable.
libacl-productionpipeline.so
Version: 12.0.0
Library containing the MediaReceiver, MediaStreamer and ControlDataReceiver classes, used to implement a Rendering Engine. Also used by the acl-renderingengine
application
libacl-controldatasender.so
Version: 11.0.0
Library containing the ControlDataSender class, used to send control messages to a ControlDataReceiver component.
Changelog
- It is now possible to use hostnames instead of just IP addresses for the ACL_SYSTEM_CONTROLLER_IP environment variable.
- HTTPS and client authentication are now enabled by default in the system
- The new environment variable
ACL_CUSTOM_CA_CERT_FILE
makes it possible to set a custom CA certificate to use for verifying the system controller’s certificate. This can be used in cases when the CA certificate cannot be added to the list of OS trusted CA certificates. - SRT logs are now printed to the components’ log files and not just to the terminal
Ingest 11.0.0
- SRT connections to Production Pipelines will now automatically try to reconnect in case the network link goes down.
Rendering Engine 3.0.0
- The Rendering Engine now features a built-in HTML renderer using the Chromium web engine through the Chromium Embedded Framework. The HTML renderer is capable of opening multiple “browsers”, connected to input slots of the operator’s choice, rendering HTML graphics hosted by HTTP servers or even loading regular webpages.
- A new chroma key node was added to make green screen effects possible.
- The internal audio mixer now features a high pass and a low pass filter
Production Pipeline 12.0.0
- The cadence and speed/quality balance can now be set for the MediaStreamer and Multi view outputs
- MediaStreamer now supports multiple output streams (e.g. both SRT and UDP, or multiple SRTs and UDP outputs with different settings), as long as they use the same encoding settings.
- Performance of the Production Pipeline has been improved. The load on the NvDec chip is reduced by up to ~50% and the load on the CUDA cores reduced by up to ~60% in certain conditions.
- The Multi View outputs now show peak meter audio bars for all views, which makes solving audio problems much easier, as one can see both the audio of the incoming and outgoing video streams.
- The Random Access Indicator (RAI) is now correctly set in the MPEG-TS output streams
- Outputs with SRT in caller mode will now start even if the connection could not be made immediately. The SRT caller will keep trying to connect until it succeeds or is manually closed down, also in case the network link is broken.
Control Data Sender 11.0.0 and Control Data Receiver 12.0.0
- The release now features a new `acl-websocketcontrolpanel`, which works like the existing `acl-tcpcontrolpanel`, but allows multiple clients to connect to the control panel via WebSocket instead.
- It is now possible to use hostnames instead of just IP addresses when connecting a Control Data Sender or Receiver to another Control Data Receiver.
- Fixed a bug where it was not possible to disconnect a Control Data Sender/Control Panel from a Receiver and connect it to another Receiver on another port.
System Controller 5.0.0
- The REST API now correctly flags parameters with a default value in the Swagger documentation. These values do not need to be passed in the JSON blob with the REST API call. Parameters without a default value are correctly flagged as required in the documentation.
- The Outputs part of the REST API has been updated to support multiple output streams from the same output component. The REST API endpoints have been moved from `pipelines/{uuid}/outputs/{output_uuid}/stream` to `pipelines/{uuid}/outputs/{output_uuid}/streams` to reflect this. The POST request will return an integer id to identify the stream within the Output component. A new endpoint `[DELETE] pipelines/{uuid}/outputs/{output_uuid}/streams/{stream_id}` has been added to close a specific stream. All streams of an output can be closed with the old `[DELETE] pipelines/{uuid}/outputs/{output_uuid}/streams` endpoint.
- The System Controller now logs to file. The default log location is `/tmp`, with the log file prefixed with `acl-system-controller`.
- Requests to the REST API are now rate limited to prevent overloading the components and to mitigate DoS attacks.
- The IPBB cadence and speed/quality balance can now be specified for the encoder in both Output streams and Multiview output streams.
- Prometheus endpoint now stops reporting metrics for closed streams.
- Improved execution time of `[DELETE] streams/{stream_uuid}`.
- Authentication has been added to the Prometheus endpoint.
- The parameters `ip` and `port` in the `[GET] pipelines/{uuid}/outputs/{output_uuid}/streams` response have been corrected to `remote_ip` and `remote_port`. Their values are now reported correctly.
- Security of the System Controller has been improved in several ways.
- The Prometheus endpoint has been updated with some previously missing statistics.
- The missing `audio_mapping` parameter has been added to the `[GET] ingests/{uuid}/streams/{stream_uuid}` and `[GET] pipelines/{uuid}/streams/{stream_uuid}` endpoints.
- The `sent_bytes` parameter of the output statistics is now incremented as expected for both multi-view outputs and Output streams.
- Fixed a bug where the System Controller application would crash when started with a filename as the `--settings-path` parameter.
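The endpoint move described in the changelog above (from `.../stream` to `.../streams`, with per-stream deletion by integer id) can be sketched as a small URL-building helper. This is an illustrative sketch only: the base URL and UUID values are assumptions, and only the path layout comes from the release notes.

```python
# Sketch of the 5.0.0 multi-stream Output endpoints. Base URL and UUIDs
# below are hypothetical; only the path layout comes from the release notes.

def streams_url(base: str, pipeline_uuid: str, output_uuid: str) -> str:
    # New plural endpoint: POST here to create a stream, DELETE to close all
    return f"{base}/pipelines/{pipeline_uuid}/outputs/{output_uuid}/streams"

def stream_url(base: str, pipeline_uuid: str, output_uuid: str,
               stream_id: int) -> str:
    # New endpoint: DELETE here to close one specific stream by its id
    return f"{base}/pipelines/{pipeline_uuid}/outputs/{output_uuid}/streams/{stream_id}"

base = "https://controller.example:8080"  # assumed host and port
print(streams_url(base, "pipe-uuid", "out-uuid"))
print(stream_url(base, "pipe-uuid", "out-uuid", 3))
```

The integer id returned by the POST request is what identifies the stream in the per-stream DELETE path.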
Known limitations
GPU performance
The GPU utilization in the Ingest status message is currently not working. Use `intel_gpu_top` and/or `nvidia-smi` instead.
MPEG-TS over UDP
For Outputs and multi-view outputs it is currently not possible to bind to a specific IP and port with MPEG-TS over UDP, even if `local_ip` and `local_port` are exposed in the REST API.
Drivers
You need to install the CUDA Toolkit on Ingest machines, even if they don’t have Nvidia cards.
Security
No known issues
5.4 - Release 4.0.0
Build date
2023-06-16
Release status
Type: Fourth production ready release
Release content
Product version
4.0.0
Release artifacts
Details | Filename | SHA-256 |
---|---|---|
Ingest and Rendering Engine binaries, headers, and shared libraries | agile-live-4.0.0-d02f52d.zip | 9f95bfe78ab7d295268ad2de913ed18cda8b9407d977feb9afc7f5a738034a73 |
System Controller | agile-live-system-controller-4.0.0-745ecee.zip | f19331fffa499db7d57d10c99af09b78e94fec5459188d6df832a009b0b92947 |
Library details
libacl-ingest.so
Version: 10.0.0
Library containing the core Ingest functionality used by the acl-ingest
executable.
libacl-productionpipeline.so
Version: 11.0.0
Library containing the MediaReceiver, MediaStreamer and ControlDataReceiver classes, used to implement a Rendering Engine. Also used by the `acl-renderingengine` application.
libacl-controldatasender.so
Version: 9.0.0
Library containing the ControlDataSender class, used to send control messages to a ControlDataReceiver component.
Changelog
- The system now requires Chrony as the NTP client for time synchronization. See the NTP instructions for more details. The applications will not start if Chrony reports a drift greater than 5 ms.
- New environment variables `ACL_LOG_MAX_FILE_SIZE` and `ACL_LOG_MAX_LOG_ROTATIONS` control the size and number of the rotated log files for the applications.
- Fixed parsing of boolean environment variables.
Ingest 10.0.0
- Support for transporting up to 16 channels of audio to the production, where a subset of the channels can be selected.
- Added support for using Opus as audio codec in transport to the Production Pipeline. This codec has a lower delay compared to the existing AAC codec. The user can now select codec per connected stream.
- Improved handling of NDI sources.
- Handle cases where SDI sources deliver one audio sample too many or too few.
- The built-in MediaSourceGenerator now has a lower audio level of -20 LUFS.
Rendering Engine 2.0.0
- New audio mixer implementation with per-input-strip three-band parametric equalizer, compressor and panning.
- New automatic transition types: left and right wipe
- Add support for picture-in-picture effects. Currently the Rendering Engine supports two separate picture-in-picture effects with an additional graphics effect on top.
- All control commands executed by a rendering engine are now written to a log file in `/tmp`.
- Fixed a bug where disconnecting a source would leave the last frame frozen in the input slot with corrupted audio.
Production Pipeline 11.0.0
- It is now possible to include input slots where no stream has been connected yet to a multi-view output. The view will be empty until a source has been connected to that input slot, with the label visible.
- Views in the multi-view will remain after the corresponding input slot has been disconnected, showing a black frame with the label. The video will automatically reappear when another stream is connected to the same input slot.
- Fix bug where the multi-view output would stop sending video when none of the sources in the multi-view were connected.
- Add support for SRT in caller mode for multi-view outputs and MediaStreamer outputs.
- Add support for setting the SRT latency and an optional passphrase for the multi-view outputs and MediaStreamer outputs.
- The `MediaReceiver` API has a new method `removeCustomMultiViewSourceInput` to remove a custom feedback input earlier registered using `getCustomMultiViewSourceInput`.
- Improved the latency of the video decoder by ~1 frame.
Control Data Sender 9.0.0 and Control Data Receiver 10.0.0
- Fixed an application crash with `acl-tcpcontrolpanel` on connection failure.
- Updated the control command protocol for video commands: the first word now addresses the node in the graph to make the change to. This allows the same node to be used multiple times. See Rendering Engine commands for more details on the new protocol.
- Updated the API so that multiple, separate control commands can be sent in the same message, guaranteed to be delivered to the Rendering Engine at the same time point. The new method `sendMultiRequestToReceivers` is used to send such messages. The old API for sending a single control command per message is unchanged.
- Raised the maximum control message length to 65535 bytes.
System Controller 4.0.0
- Expose a number of new metrics from the components for system monitoring in the REST API and Prometheus endpoint, including average/min/max encode and decode durations per stream, frame counters for video and audio in multiple locations, statistics for outgoing output streams and round trip time and measured bandwidth on the contribution links.
- The POST `/streams` endpoint now requires an `audio_mapping` to know which audio channels to transport to the production. Use `"[[0,1]]"` for the same behavior as in previous versions (i.e. transport only channels 0 and 1, encoded as stereo).
- New counter `lost_frames` for Pipeline streams. It counts fully lost frames, not only broken frames as the `received_broken_frames` counter does.
- The counter `lost_packets` in the `GET /ingests/{uuid}/streams` endpoints has been removed in favor of the `lost_packets` and `dropped_packets` counters in the Pipeline.
- The counter `queuing_video_frames` has been renamed `video_frames_in_queue`.
- The REST API now correctly returns the selected port when automatic port selection is requested (by setting `local_port` to 0), instead of returning the port number as 0.
- The REST API now shows whether sources are active or inactive, instead of hiding inactive sources (a source is active if it is ready to be used, as opposed to an inactive source, which has been seen earlier but not any longer).
- Fixed a bug where the wrong port numbers of Control Connections would be shown in the REST API for port numbers automatically assigned by the OS.
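As a hedged illustration of the newly required `audio_mapping` parameter, the sketch below builds a `POST /streams` body. Only `audio_mapping` and its `"[[0,1]]"` stereo value come from the release notes; the other field names are hypothetical placeholders.

```python
import json

# Hypothetical POST /streams body; only audio_mapping is from the notes.
body = {
    "ingest_uuid": "00000000-0000-0000-0000-000000000000",  # assumed field
    "source_id": 1,                                         # assumed field
    # Transport only channels 0 and 1, encoded as stereo, matching the
    # behavior of previous releases:
    "audio_mapping": "[[0,1]]",
}

# audio_mapping is itself a JSON-encoded list of channel groups:
groups = json.loads(body["audio_mapping"])
print(groups)  # [[0, 1]] -> one group carrying channels 0 and 1 as a stereo pair
```

Each inner list selects a group of ingest channels to transport together, so a subset of the up to 16 ingested channels can be chosen.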
Known limitations
GPU performance
The GPU utilization in the Ingest status message is currently not working. Use `intel_gpu_top` and/or `nvidia-smi` instead.
MPEG-TS over UDP
For Outputs and multi-view outputs it is currently not possible to bind to a specific IP and port with MPEG-TS over UDP, even if `local_ip` and `local_port` are exposed in the REST API.
Drivers
You need to install the CUDA Toolkit on Ingest machines, even if they don’t have Nvidia cards.
Security issues
There is no rate limiting implemented in any of the components. Should the application be running in a network where ports are accessible to unauthorized users, there is a risk of a denial of service (DOS). The network should be configured with firewall rules closing out unauthorized users.
Encryption (HTTPS) is not enabled by default in acl-system-controller. This means that pre-shared keys (PSK) used for encrypted communication between components and the System Controller will be exchanged in the clear and could be revealed.
Client authentication is not enabled by default in acl-system-controller so anyone with access to HTTP on the machine hosting acl-system-controller will have access to the REST API.
Even when client authentication is enabled, there is no enforcement of the length or format of the password. Also note that HTTPS should be enabled when using client authentication since the credentials would otherwise be sent in clear text.
It is strongly recommended to enable both HTTPS and client authentication.
5.5 - Release 3.0.0
Build date
2023-01-23
Release status
Type: Third production ready release
Release content
Product version
3.0.0
Release artifacts
Details | Filename | SHA-256 |
---|---|---|
Ingest and Rendering Engine binaries, headers, and shared libraries | acl-3.0.0-d4de107.zip | 4743f2c8c6fe74b09d04a0210747276fac7ef9fa0145a7ee51ec54de4def84dc |
System Controller | acl-system-controller-3.0.0-10046c8.zip | 919f469ed773bb3ad65735d41dd0942e08b91bd23a317f7227ccdccbc2807437 |
Library details
libacl-ingest.so
Version: 9.0.0
Library containing the core Ingest functionality used by the acl-ingest
executable.
libacl-productionpipeline.so
Version: 10.0.0
Library containing the MediaReceiver, MediaStreamer and ControlDataReceiver classes, used to implement a Rendering Engine. Also used by the `acl-renderingengine` application.
libacl-controldatasender.so
Version: 9.0.0
Library containing the ControlDataSender class, used to send control messages to a ControlDataReceiver component.
Changelog
Ingest 9.0.0
- Ingesting interlaced sources is now supported both with the Intel and Nvidia encoders. De-interlacing is automatically applied to interlaced input, so no extra work is needed by the user of the REST API when setting up the stream compared to a progressive source. The implementation supports de-interlacing 50i sources into 50p.
- The Ingest now supports encoding JPEG thumbnails using Nvidia GPUs. For now, only the first Nvidia GPU will be used on multi-GPU systems.
- Earlier, when encoding HEVC with the Intel encoder, the setup would fail if any of the `fast`, `faster`, `good` or `better` speed/quality settings were used, as these are not supported by the encoder. Now these settings are automatically changed to the closest supported setting, i.e. `fast` and `good` are changed to `balanced`, `faster` to `fastest` and `better` to `best`.
- The Intel JPEG encoder now supports encoding JPEGs with a width that is not evenly divisible by 16. Also, requests for thumbnails with an odd number of pixels in width/height will automatically be encoded with one extra pixel in width/height compared to the request, instead of failing the request.
- The Intel JPEG encoder now supports encoding thumbnails of interlaced sources.
Rendering Engine 1.0.0
No new features since last release.
Production Pipeline 10.0.0
- The network endpoints of the incoming streams from the Ingests are now closed after the Ingest has connected.
Control Data Sender 9.0.0 and Control Data Receiver 10.0.0
- Support for connecting multiple ControlDataSenders (or Receivers) to a single ControlDataReceiver is now working again.
- Improved verification of the established connection between ControlData components.
- Message delay can now be set per incoming connection to a ControlDataReceiver.
System Controller 3.0.0
- New version of the REST API, v2, which is served from `/api/v2`. The API has been reorganized and cleaned up to make it easier to use.
- Streams and Control Connections now get their own unique UUIDs when created using the REST API. This UUID is then used to reference the connection when making changes to it and when closing it. The UUID is returned when the connection is created and is also shown when querying information from the components on both sides of the connection. The old `connection_id` concept has been completely removed.
- The endpoints related to Control Connections are now on the top level, and the same endpoints are used whether the connection originates from a ControlDataSender or a ControlDataReceiver. Both Control Connections and Streams are now closed using their UUID and the corresponding `DELETE` endpoints. Also the `PATCH` endpoints use the UUID of the connection to identify the connection to make changes to.
- The Output and Control Receiver endpoints have been moved in under the Pipeline endpoint to better show their tight connection. Querying the Pipeline endpoint will return its Outputs and Control Receivers.
- The parameter `id` used for an Ingest's sources has been renamed `source_id`.
- The parameters `resolution_x` and `resolution_y` have been renamed `width` and `height`.
- The `/components` endpoint has been removed in favor of the `/ingests`, `/pipelines` and `/controlpanels` endpoints.
- The thumbnail endpoint has a new parameter `encoder` where the user can explicitly request that a certain encoder be used for encoding the thumbnail. Current options are `intel_gpu`, `nvidia_gpu` and `auto`, where the last option makes the Ingest select the best encoder supported on the hardware the Ingest is running on.
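A small sketch of selecting a thumbnail encoder via the new `encoder` parameter. The three encoder values come from the release notes, while the helper and query-string shape are assumptions for illustration.

```python
from urllib.parse import urlencode

# Encoder options named in the release notes:
VALID_ENCODERS = {"intel_gpu", "nvidia_gpu", "auto"}

def thumbnail_query(encoder: str = "auto") -> str:
    # Hypothetical helper building the query string for the thumbnail
    # endpoint; rejects values the API does not document.
    if encoder not in VALID_ENCODERS:
        raise ValueError(f"unsupported encoder: {encoder}")
    return urlencode({"encoder": encoder})

print(thumbnail_query("nvidia_gpu"))  # encoder=nvidia_gpu
```

With `auto`, the Ingest itself picks the best encoder available on its hardware, so a client does not need to know which GPUs are installed.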
Known limitations
GPU performance
The GPU utilization in the Ingest status message is currently not working. Use `intel_gpu_top` and/or `nvidia-smi` instead.
Drivers
You need to install the CUDA Toolkit on Ingest machines, even if they don’t have Nvidia cards.
Audio channels
Only stereo audio is supported with two channels embedded per connected input.
Security issues
There is no rate limiting implemented in any of the components. Should the application be running in a network where ports are accessible to unauthorized users, there is a risk of a denial of service (DOS). The network should be configured with firewall rules closing out unauthorized users.
Encryption (HTTPS) is not enabled by default in acl-system-controller. This means that pre-shared keys (PSK) used for encrypted communication between components and the System Controller will be exchanged in the clear and could be revealed.
Client authentication is not enabled by default in acl-system-controller so anyone with access to HTTP on the machine hosting acl-system-controller will have access to the REST API.
Even when client authentication is enabled, there is no enforcement of the length or format of the password. Also note that HTTPS should be enabled when using client authentication since the credentials would otherwise be sent in clear text.
It is strongly recommended to enable both HTTPS and client authentication.
5.6 - Release 2.0.0
Build date
2022-11-17
Release status
Type: Second production ready release
Release content
Product version
2.0.0
Release artifacts
Details | Filename | SHA-256 |
---|---|---|
Ingest and Rendering Engine binaries, headers, and shared libraries | acl_2.0.0-4ae95d4.zip | dacc86efa0c2c08b28658e4218abd4ba58f24381a37eba93f0dbac10d9702e2a |
System Controller | acl_system_controller_2.0.0-85ac2cd.zip | 085465799bae8ceaa201ad462efedcdc19a2ea2f80e99c1332efc88ac0ce82dd |
Library details
libacl-ingest.so
Version: 8.0.0
Library containing the core Ingest functionality used by the acl-ingest
executable.
libacl-productionpipeline.so
Version: 9.0.0
Library containing the MediaReceiver, MediaStreamer and ControlDataReceiver classes, used to implement a Rendering Engine. Also used by the `acl-renderingengine` application.
libacl-controldatasender.so
Version: 8.0.0
Library containing the ControlDataSender class, used to send control messages to a ControlDataReceiver component.
Changelog
The supported OS has changed from Ubuntu 20.04 LTS to Ubuntu 22.04 LTS. Official support for Ubuntu 20.04 LTS is dropped from this release.
Ingest 8.0.0
- Changed to epoch aligned timestamps
- Improve duplicating and dropping of frames to cover up drifting cameras
- Expose counters for dropped, duplicated and lost frames per source in the REST API
- Fix a memory leak when generating thumbnails
- Interlaced sources are now shown in the REST API and correctly marked as interlaced
- Improved support for detecting connection and disconnection of SDI cables at runtime, as well as changes in the format of the sent SDI stream.
- Applications are now stopped by pressing `Ctrl-C` instead of pressing enter, to prevent accidental stopping.
- Thumbnails and source info are now cached for each source to improve response time in the REST API. This means that sources that are not connected to any production will only update their thumbnails and source info every ~5 seconds or so. Sources that are connected will update the info continuously.
- The field `active` for sources in the REST API was not working and has been removed in favor of a new field `in_use`, which indicates whether a source is currently being streamed somewhere or not.
- Log files now get a unique name by appending the date of creation to the filename.
- Log files now have the same log level as the terminal output.
- Appending the `--help` flag to the ingest executable will print information about the binary and list the environment variables.
- A stack trace is now printed in case the application crashes.
Rendering Engine 1.0.0
- First version of the Rendering Engine application
- Support for program, preview and cutting between them
- Support for fading between sources over a given time
- Support for graphics with key (alpha) by ingesting the fill and key (color and alpha) as two separate sources
- Support for fade to black
- Support for setting the audio volume per source
Production Pipeline 9.0.0
- Support for tally borders in the multi-view generator, with a new API for setting and clearing the borders from the Rendering Engine in the `MediaReceiver` class.
- Updated API for passing video memory to and from the Rendering Engine; it now uses RAII classes for automatic memory handling. See the upgrade guide for more details.
- The `MediaReceiver` now passes the `mRenderingTimestamp` to the Rendering Engine, i.e. the timestamp when the frame was delivered.
- Alignment is internally rounded to the nearest even frame time. Together with the changes in the Ingest, this results in the frames delivered to the Rendering Engine having rendering timestamps that are evenly divisible by the frame time.
- PTS in `MediaStreamer`s are now calculated based on the `mRenderingTimestamp`, instead of a newly taken timestamp when the frame was sent to the `MediaStreamer`. This makes the PTSes of the output streams equidistant, even if the Rendering Engine jitters slightly.
- New class `PipelineSystemControllerInterfaceFactory` to help Rendering Engine implementors create `SystemControllerConnection`s by reducing code duplication.
Control Data Sender 8.0.0 and Control Data Receiver 9.0.0
- The control channel between `ControlDataSender`s and `ControlDataReceiver`s is now encrypted by default. Read more about it in the security tutorial.
- The support for connecting multiple `ControlDataSender`s or `ControlDataReceiver`s to another `ControlDataReceiver` is temporarily disabled.
System Controller 2.0.0
- Added support for HTTPS communication with the REST API and encrypted connections between the components and the System Controller. Read about how to activate it in the security tutorial
- Components now automatically reconnect to the System Controller in case they lose connection to it. This means that the System Controller can be restarted during a broadcast and all components will automatically reconnect once it turns up again on the network. It also means that components can be started before the System Controller and get connected as soon as the System Controller is started.
- REST API now features basic authentication. Read more about it in the security tutorial
- Streams (of audio and video) between Ingests and Production Pipelines are now automatically encrypted when set up by the System controller
- Control connections between Control Panels and Production Pipelines are now automatically encrypted when set up by the System controller
- Fix a bug where the System Controller would not search for the config file in all the directories listed in the documentation. Also make the application search in the application’s directory and not only the current working directory.
- New endpoint `[PATCH] /pipelines/{uuid}/streamconnections/{connection_id}` to change the alignment of an already created stream during runtime. Note that changing the alignment will cause the audio and video of that stream to make a sudden jump and/or freeze for a moment, so doing this on stream connections that are on air is not recommended.
- The `[POST] /stream` endpoint now has improved error messages in case the System Controller fails to start the stream. The endpoint also closes the Pipeline's stream connection instead of leaving it open in case the Ingest failed to start the stream.
- The `[DELETE]` endpoints for the specific components have been removed in favor of the general `[DELETE] /components/{uuid}` endpoint.
- Fixed a bug where the System Controller would crash in some rare cases when a component was shut down at the same time as a REST request was using that component.
- Fixed some false-positive log outputs.
Known limitations
GPU performance
The GPU utilization in the Ingest status message is currently not working. Use `intel_gpu_top` and/or `nvidia-smi` instead.
Drivers
You need to install the CUDA Toolkit on Ingest machines, even if they don’t have Nvidia cards.
Number of Control Senders per Control Receiver
For now the system is limited to only connect one Control Sender (or Receiver) to another Control Receiver. This means that, for now, it is only possible to connect a single Control Panel to a production.
Audio channels
Only stereo audio is supported with two channels embedded per connected input.
Security issues
There is no rate limiting implemented in any of the components. Should the application be running in a network where ports are accessible to unauthorized users, there is a risk of a denial of service (DOS). The network should be configured with firewall rules closing out unauthorized users.
Encryption (HTTPS) is not enabled by default in acl-system-controller. This means that pre-shared keys (PSK) used for encrypted communication between components and the System Controller will be exchanged in the clear and could be revealed.
Client authentication is not enabled by default in acl-system-controller so anyone with access to HTTP on the machine hosting acl-system-controller will have access to the REST API.
Even when client authentication is enabled, there is no enforcement of the length or format of the password. Also note that HTTPS should be enabled when using client authentication since the credentials would otherwise be sent in clear text.
It is strongly recommended to enable both HTTPS and client authentication.
5.7 - Release 1.0.0
Build date
2022-06-23
Release status
Type: First production ready release
Release content
Product version
1.0.0
Release artifacts
Details | Filename | SHA-256 |
---|---|---|
Ingest binary, headers, and shared libraries | acl_v1.0.0-336067b-release.zip | f2beb75a0109403860d455bde14618e4e7a9bbf690b5a7ed79f743fb19224cef |
Ingest binary, Headers, and shared libraries, debug build | acl_v1.0.0-336067b-debug.zip | e555ea542b84f73008efa0ab3769835e456c31f499ef88d6a315bb0506592d04 |
System Controller | acl_system_controller_v1.0.0-0f2417f.zip | 82094afe07da10c81d96fab03babca1737ca48f57c2cde9584babebbe44db947 |
Library details
libacl-ingest.so
Version: 7.0.0
Library containing the IngestApplication class used to implement an Ingest component.
libacl-productionpipeline.so
Version: 8.0.0
Library containing the MediaReceiver, MediaStreamer and ControlDataReceiver classes, used to implement a Rendering Engine.
libacl-controldatasender.so
Version: 7.0.0
Library containing the ControlDataSender class, used to send control messages to a ControlDataReceiver component.
Changelog
Ingest 7.0.0
- New metrics are available for the Ingest streamconnections in the REST API that can be used to monitor the streams.
- Fixed a bug where the Ingest would crash with a `std::runtime_error` when grabbing a thumbnail on hardware without an Intel CPU with an integrated GPU.
- Improved error messages for encoders and sources.
- Missing fields in the REST API for the Ingests have been added, such as `max_network_latency_ms` for streamconnections.
- Logs are now automatically flushed every 5 seconds.
- The class `StaticIngestSystemController` has been removed; please use the `SystemControllerConnection` class and the System Controller instead.
Production Pipeline 8.0.0
- New metrics are available for the Pipeline streamconnections and Output in the REST API that can be used to monitor the streams.
- Fixed the AUD format in the MPEG-TS output of the Pipelines when using HEVC, which earlier caused clicks in the audio channel and a lot of `Invalid NAL unit 4, skipping.` errors when playing back the streams using ffmpeg.
- Support for labels (so-called "Under Monitor Display", UMD) in the multi-view outputs. The labels are set through the REST API when setting up multi-view outputs and can be changed at runtime.
- Missing fields in the REST API for the Pipelines have been added, such as `max_network_latency_ms`, `alignment_ms` and `convert_color_range` for streamconnections.
- Fixed DTS calculation for output streams in both the multi-view generator and Output components.
- The network connections between a `ControlDataReceiver` and other `ControlDataReceiver`s now use TCP instead of SRT, which means that messages are delivered as soon as they arrive and that no control messages will be dropped due to timeouts in the transport.
- Fixed a minor CUDA memory leak in the multi-view generator that caused a slow, constantly growing GPU memory usage.
- Fixed CUDA memory corruption sometimes occurring when stopping and quickly restarting a multi-view output.
- Nvidia Ampere GPUs now work with CUDA drivers older than version 510.
- Logs are now automatically flushed every 5 seconds.
- The classes `StaticMediaReceiverSystemController` and `StaticMediaStreamerSystemController` have been removed; please use the `SystemControllerConnection` class and the System Controller instead.
Control Data Sender 7.0.0 and Control Data Receiver 8.0.0
- The network connections between a `ControlDataSender` and `ControlDataReceiver`s now use TCP instead of SRT, which means that messages are delivered as soon as they arrive and that no control messages will be dropped due to timeouts in the transport.
- The classes `StaticControlDataSenderSystemController` and `StaticControlDataReceiverSystemController` have been removed; please use the `SystemControllerConnection` class and the System Controller instead.
System Controller 1.0.0
- Support for setting CORS and custom HTTP response headers in the configuration file.
- Improved validation of HTTP request payloads and improved error messages.
- New endpoint for deleting any type of component: `[DELETE] /component`.
- New built-in Prometheus exporter, accessible through the `[GET] /metrics` endpoint.
- The version of the running System Controller is now returned in each HTTP response in the `X-Binary-Version` header.
Known limitations
GPU performance
The GPU utilization in the Ingest status message is currently not working. Use `intel_gpu_top` and/or `nvidia-smi` instead.
Drivers
You need to install the CUDA Toolkit on Ingest machines, even if they don’t have Nvidia cards.
MPEG-TS output
The timestamps in the MPEG-TS output are not equidistant, which might confuse some playback tools. For now, play back the stream using a tool that handles this, such as ffmpeg.
Pre-shared key
The pre-shared key used to authorize components in the System Controller is currently hard-coded into the Ingest application. This means that the `psk` in the System Controller's example config file must be used; otherwise it is not possible to connect an Ingest to the system.
5.8 - Upgrade instructions
Upgrade to 7.0.0 from 6.0.0
System Controller
- All REST API calls must now use the `/api/v3` prefix.
- The UUID of the Control Receivers has been removed; use the UUID of the Pipeline instead in `[POST] /controlconnections`.
- All endpoints starting with `/pipelines/{uuid}/controlreceivers` have been removed. To get the statistics of the controlreceiver, see the `controlreceiver` part of the response from the `[GET] /pipelines/{uuid}?expand=true` request. In this response, `connected_to` has been renamed `outgoing_connections`, and the list of incoming connections inside `listening_interface` has been moved out of that object and named `incoming_connections`.
- The parameter `connected_to` in, among other endpoints, `[GET] /controlpanels/{uuid}` has been renamed `outgoing_connections`.
- The `status` JSON object in responses from controlreceivers, controlpanels and outputs has been removed, and all parameters inside it have been moved out.
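The renames above can be illustrated with a small migration helper that maps an old-style controlreceiver object to the 7.0.0 field names. The exact nesting of the pre-7.0.0 response, including the key name of the incoming-connection list inside `listening_interface`, is an assumption; only the renamed keys come from these notes.

```python
# Hypothetical migration from a 6.0.0-style controlreceiver JSON object to
# the 7.0.0 field names. The old-response shape is assumed for illustration.

def migrate_controlreceiver(old: dict) -> dict:
    new = dict(old)
    # connected_to has been renamed outgoing_connections
    new["outgoing_connections"] = new.pop("connected_to", [])
    # the incoming-connection list moves out of listening_interface
    listening = new.pop("listening_interface", {})
    new["incoming_connections"] = listening.get("connections", [])  # assumed old key
    return new

old = {
    "connected_to": [{"uuid": "sender-a"}],
    "listening_interface": {"port": 9000,
                            "connections": [{"uuid": "panel-b"}]},
}
new = migrate_controlreceiver(old)
print(sorted(new))
```

A client written against the 6.0.0 response can run such a shim until it is updated to read `outgoing_connections` and `incoming_connections` directly.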
C++ SDK
- The `ControlDataReceiver` and `MediaStreamer` classes have been removed from the SDK and replaced by the interface classes `IControlDataReceiver` and `IMediaStreamer`. Use `MediaReceiver::createMediaStreamerOutput` to create new `MediaStreamer` instances and `MediaReceiver::getControlDataReceiver` to get the `ControlDataReceiver` instance.
- `getGitRevision()` and `getBuildInfo()` have been removed from the SDK. Use `getVersion()` instead.
- `MediaReceiver::Settings` now has a new member `mResetCallback` that must be set to a callback.
- A new parameter `const ControlDataReceiver::Settings& receiverSettings` is required when calling `MediaReceiver::start`, to set up the internal `ControlDataReceiver`.
Upgrade to 6.0.0 from 5.0.0
System Controller
- In the endpoints `/ingests/{uuid}?expand=true`, `/ingests/{uuid}/sources` and `/ingest/{uuid}/sources/{source_id}`, three metrics have been split into audio and video variants: `acl_ingest_source_dropped_frames` has been split into `acl_ingest_source_dropped_audio_frames` and `acl_ingest_source_dropped_video_frames`; `acl_ingest_source_duplicated_frames` into `acl_ingest_source_duplicated_audio_frames` and `acl_ingest_source_duplicated_video_frames`; and `acl_ingest_source_lost_frames` into `acl_ingest_source_lost_audio_frames` and `acl_ingest_source_lost_video_frames`.
- If the entropy of the PSK set in the System Controller is too low, the System Controller won't start. Generate a new PSK and use that one.
- The System Controller no longer searches for the configuration file `acl_sc_settings.json` in the directories `/etc/acl_system_controller`, `/app`, `/service`, `/acl_system_controller` or `/opt/acl_system_controller`. Either put the file in the current working directory of the System Controller application, in the directory `/etc/opt/agile_live`, or explicitly point at the directory of the config file using the flag `--settings-path <PATH>` when starting the System Controller.
Rendering Engine
- For the acl-tcpcontrolpanel, the environment variable ACL_TCP_ENDPOINTS has been renamed to ACL_TCP_ENDPOINT and only accepts a single endpoint. This endpoint now accepts multiple connections.
C++ SDK
- AlignedFrame::PixelFormat has been replaced with ACL::PixelFormat, found in PixelFormat.h.
- ISystemControllerInterface::sendRequest has been replaced with ISystemControllerInterface::sendMessage; the System Controller will no longer send a response to messages. Since the connection is over TCP, a message that could be sent has also arrived at the System Controller, so there is no need for a response message to verify delivery.
Upgrade to 5.0.0 from 4.0.0
System Controller
- The endpoint /pipelines/{uuid}/outputs/{output_uuid}/stream has been moved to /pipelines/{uuid}/outputs/{output_uuid}/streams to indicate that multiple streams can be created for the same Output. The POST request takes the same parameters as before, but returns an ID of the newly created stream, which is globally unique together with the UUID of the Output. As of now, it is only possible to start multiple streams if the encoder settings match for all streams. Use the new endpoint [DELETE] /pipelines/{uuid}/outputs/{output_uuid}/streams/{stream_id} to stop a stream with a given ID.
- The response of [GET] /pipelines/{uuid}/outputs/{output_uuid}/streams differs from the old variant. Instead of a single active stream, it presents a list of active_streams.
- The [POST] /pipelines/{uuid}/outputs/{output_uuid}/stream and [POST] /pipelines/{uuid}/multiviews endpoints now support setting pic_mode and speed_quality_balance to control the quality of the encoding. They do not have to be explicitly specified in the REST API request, as they have default values (pic_mode_ip and balanced), but it might be desirable to set them manually.
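As a sketch of how a client might opt in to the new encoder-quality parameters: the pic_mode and speed_quality_balance field names come from the notes above, while the helper function and the port field in the body are our own placeholders, not part of the REST API.

```python
# Sketch: building a request body for the stream/multiview POST endpoints,
# adding the optional encoder-quality fields only when explicitly chosen.
# When omitted, the server defaults apply ("pic_mode_ip" and "balanced").

def with_encoder_quality(body, pic_mode=None, speed_quality_balance=None):
    """Return a copy of `body` with encoder-quality fields added when given."""
    result = dict(body)
    if pic_mode is not None:
        result["pic_mode"] = pic_mode
    if speed_quality_balance is not None:
        result["speed_quality_balance"] = speed_quality_balance
    return result

# "port" is a placeholder field for illustration only.
default_body = with_encoder_quality({"port": 9900})
tuned_body = with_encoder_quality({"port": 9900}, speed_quality_balance="quality")
```

Leaving the fields out of the request entirely, as in default_body, keeps the server-side defaults.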
ProductionPipeline
- MediaReceiver::start has changed signature. It now takes a CUcontext as the second parameter, which should be a CUDA context reference to the device to run the CUDA operations on.
- The MediaReceiver::Settings struct has a new parameter mAlignedFrameFreeStream, which should be a CUDA stream within the CUDA context passed to the MediaReceiver::start method, used as the stream to make the asynchronous free on after all the DeviceMemory pointers have been released.
- MediaStreamer::configure now takes a second parameter, a CUcontext, which is the CUDA context defining which device to run the MediaStreamer on.
- The PipelineSystemControllerInterfaceFactory constructor now takes an extra customCaCertFile parameter, pointing out an optional CA certificate file to use for validating certificates from the System Controller.
Upgrade to 4.0.0 from 3.0.0
System Controller
- The endpoint POST /streams now requires two additional parameters, audio_format and audio_mapping. The first selects whether the AAC-LC or the new Opus codec should be used for audio encoding. AAC-LC has slightly higher quality, but is slower to encode; Opus has lower latency and gives better quality at lower bitrates. The second parameter, audio_mapping, tells the Ingest which audio channels to pick from the source and whether they should be encoded as mono or stereo streams. (Stereo streams save bandwidth, but do not need to be used as stereo in the Rendering Engine.) For example, the mapping [4, [1, 2], 0] results in 4 channels in the Rendering Engine: the source channel with index 4 is output at index 0 and encoded as mono, source channels 1 and 2 are encoded as stereo and output at channels 1 and 2, and source channel 0 is encoded as mono and output at channel 3. The remaining input channels are ignored. To get the same behavior as in the previous release, pass AAC-LC and [[0,1]] respectively for audio_format and audio_mapping.
- The metric queueing_video_frames has been renamed to video_frames_in_queue.
- In POST /pipelines/{uuid}/outputs/{output_uuid}/stream and POST /pipelines/{uuid}/multiviews, the parameters ip and port have been removed and replaced by four parameters: local_ip, local_port, remote_ip and remote_port. In MPEG-TS over SRT mode, their usage depends on the SRT mode. In listener mode (srt_mode == listener), only local_ip and local_port are used, to set the address to listen on; set local_port to 0 to make the OS automatically pick an available port. In SRT caller mode, all four are used: the remote_* parameters define the remote address to connect to, and the local_* ones the local address to bind to. To get the old behavior, where the local bind address is chosen automatically, set local_ip to 0.0.0.0 and local_port to 0. For MPEG-TS over UDP mode, set the address to send the stream to using the remote_* parameters (the local_* variants must currently be present, but set to 0.0.0.0 and 0).
- The output of the GET /pipelines/{uuid}/outputs/{output_uuid}/stream and GET /pipelines/{uuid}/multiviews endpoints has a changed layout. See the REST API documentation for more details.
- The video_kilobit_rate parameter in GET /pipelines/{uuid}/streams/{stream_uuid} has been removed. Use the same parameter in the GET /ingests/{uuid}/streams/{stream_uuid} endpoint instead.
- The counter lost_packets in the GET /ingests/{uuid}/streams/{stream_uuid} endpoint has been removed in favor of the lost_packets and dropped_packets counters in the Pipeline.
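The audio_mapping semantics described above can be sketched in Python. The function is our own illustration of the mapping rules, not part of the API:

```python
def expand_audio_mapping(mapping):
    """Expand an audio_mapping value into (output_channel, source_channels, kind)
    tuples, following the rules described above: a plain integer entry is a
    mono source channel occupying one output channel, while a [left, right]
    pair is encoded as stereo and occupies two output channels."""
    expanded = []
    out = 0
    for entry in mapping:
        if isinstance(entry, list):       # stereo pair, two output channels
            expanded.append((out, entry, "stereo"))
            out += 2
        else:                             # single mono channel
            expanded.append((out, [entry], "mono"))
            out += 1
    return expanded, out

# The example from the notes: [4, [1, 2], 0] gives 4 output channels:
# source channel 4 as mono at output 0, channels 1+2 as stereo at
# outputs 1-2, and channel 0 as mono at output 3.
layout, total = expand_audio_mapping([4, [1, 2], 0])
```

Source channels not mentioned in the mapping are simply never visited by the loop, mirroring how the Ingest ignores them.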
ProductionPipeline
- The API of ControlDataReceiver::IncomingRequest has changed to carry an std::vector of messages (std::vector<uint8_t>), to allow multiple messages to be delivered at the same time point. Just loop through the messages and handle them as before.
- The settings mDebugVideo and mDebugAudio have been removed from MediaReceiver::Settings.
Upgrade to 3.0.0 from 2.0.0
System Controller
- The System Controller uses the new API prefix /api/v2, so all calls must be updated to this API version. The old API /api/v1 has been removed.
- All calls to the Control Receiver and Output endpoints must be updated, as these have been moved in under the /pipelines endpoint. See the REST API reference for details.
- When setting up Control Connections, the old [POST] /controlpanels/{uuid}/controlconnections and [POST] /controlreceivers/{uuid}/controlconnections endpoints have been replaced with a single endpoint, [POST] /controlconnections. The sending side component (whether a Control Panel or a Control Receiver) is passed as a parameter in the JSON HTTP body. The delete endpoints for controlconnections, formerly under Control Panel and Control Receiver, have been replaced with a single endpoint, [DELETE] /controlreceivers/{uuid}, which will close both sides of the connection.
- When closing streams, the [DELETE] /ingests/{uuid}/streamconnections/{connection_id} and [DELETE] /pipeline/{uuid}/streamconnections/{connection_id} endpoints have been replaced by a single endpoint, [DELETE] /streams/{uuid}, which will close down both sides of the Stream.
- The endpoints for getting information from either the Ingest or Pipeline end of a Stream have been moved from [GET] /ingests/{uuid}/streamconnections/{connection_id} and [GET] /pipeline/{uuid}/streamconnections/{connection_id} to [GET] /ingests/{uuid}/streams/{stream_uuid} and [GET] /pipeline/{uuid}/streams/{stream_uuid}, where stream_uuid is the system-wide unique identifier of the stream. It is the same UUID on both the Ingest and Pipeline side of the connection, in contrast to the old connection_id, which was different in the Ingest and the Pipeline.
- The endpoint for changing the alignment of a stream has been moved from [PATCH] /pipelines/{uuid}/streamconnections/{connection_id} to [PATCH] /streams/{uuid}.
- The parameter id referring to a source has been renamed to source_id in [POST] /streams and in responses from the Ingest when querying for information on Sources or Streams.
- The parameters resolution_x and resolution_y have been renamed to width and height in several endpoints, among others [POST] /streams and [POST] /ingests/{uuid}/sources/{source_id}/thumbnail.
- The thumbnails endpoint has been moved from [POST] /ingests/{uuid}/thumbnail to [POST] /ingests/{uuid}/sources/{source_id}/thumbnail, passing the source_id parameter in the path instead of in the JSON HTTP body.
- The new encoder parameter must be specified when requesting thumbnails using the [POST] /ingests/{uuid}/sources/{source_id}/thumbnail endpoint. Set it to auto unless a specific implementation is required.
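To illustrate the thumbnail endpoint changes above, here is a sketch of rewriting an old-style request description into the new form. The dictionaries and the function are our own illustration, not an SDK structure; only the field and path names come from the notes above.

```python
def migrate_thumbnail_request(old):
    """Sketch: rewrite an old-style thumbnail request (source_id and
    resolution_x/resolution_y in the JSON body) into the new form, where
    source_id moves into the path, resolution_* become width/height, and
    the new mandatory encoder parameter is set to "auto"."""
    body = dict(old["body"])
    source_id = body.pop("source_id")
    new_body = {
        "width": body.pop("resolution_x"),
        "height": body.pop("resolution_y"),
        "encoder": "auto",  # new mandatory parameter
    }
    new_body.update(body)  # keep any remaining fields as-is
    path = f"/ingests/{old['ingest_uuid']}/sources/{source_id}/thumbnail"
    return {"method": "POST", "path": path, "body": new_body}

old_request = {
    "ingest_uuid": "123e4567-e89b-12d3-a456-426614174000",
    "body": {"source_id": 3, "resolution_x": 640, "resolution_y": 360},
}
new_request = migrate_thumbnail_request(old_request)
```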
Upgrade to 2.0.0 from 1.0.0
System Controller
- Make sure to generate a new PSK (pre-shared key) for the acl_sc_settings.json file. It will be used to authorize components connecting to the System Controller. All components connecting to the System Controller must use the same PSK, set using the environment variable ACL_SYSTEM_CONTROLLER_PSK. If a component is started without the environment variable, it will log an error and stop. Read more about this in the Security tutorial.
ProductionPipeline
- The API of the former struct AlignedData::DataFrame has changed:
- The namespace AlignedData has been removed and the struct itself has been renamed to AlignedFrame. The enums CompressedVideoFormat and PixelFormat and the typedef AudioSampleRow have been moved inside the struct.
- The field mPTS has been removed and replaced by two new fields, mCaptureTimestamp and mRenderingTimestamp. The first is the TAI timestamp in microseconds since the UNIX epoch when the frame was captured by the Ingest, and the second is the timestamp when the frame was delivered to the Rendering Engine from the MediaReceiver. Both timestamps are rounded to the nearest even frame step since the UNIX epoch, which should make it easier to group frames with the same rendering time. The mRenderingTimestamp is also used by the OutputStreamer to calculate the timestamps for the outgoing streams, so in case the Rendering Engine creates new AlignedFrame instances, make sure this value is incremented correctly for each frame that is put to the OutputStreamer.
- The CUDA pointer mVideoDataCudaPtr has been removed in favor of an std::shared_ptr to an RAII class called DeviceMemory that keeps track of the CUDA memory pointer and releases the memory automatically once it is no longer used by anyone. This makes it possible for the Rendering Engine to copy the shared pointer and then discard the AlignedFrame it was delivered with, in order to, for instance, cache the video frame or handle video and audio data separately. Do remember that the DeviceMemory from the AlignedFrames delivered by the MediaReceiver is read-only and must never be changed by any CUDA kernel, as this memory is also used by the multi-view generator to generate the multi-view outputs. The Rendering Engine may copy the shared pointer and keep it as long as it wants to, but must never change the content of the DeviceMemory or write to the CUDA memory pointed to by it. AlignedFrames (and DeviceMemory instances made outside of an AlignedFrame) created by the Rendering Engine can of course be changed and used freely.
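The frame-step rounding of mCaptureTimestamp and mRenderingTimestamp can be illustrated with a quick calculation. This is our own sketch of the idea of rounding to a whole frame step; the 50 fps rate and the timestamps are made-up example values.

```python
def round_to_frame_step(timestamp_us, fps):
    """Round a microsecond timestamp to the nearest whole frame step since
    the UNIX epoch, as described for mCaptureTimestamp/mRenderingTimestamp."""
    step_us = 1_000_000 / fps          # e.g. 20_000 us per frame at 50 fps
    return round(round(timestamp_us / step_us) * step_us)

# Two frames captured a few hundred microseconds apart land on the same
# rounded timestamp, making them easy to group by rendering time:
a = round_to_frame_step(1_700_000_000_123_456, 50)
b = round_to_frame_step(1_700_000_000_123_900, 50)
```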
6 - Reference
6.1 - Agile Live 7.0.0 developer reference
6.1.1 - REST API v3
The following is a rendering of the OpenAPI for the Agile Live System Controller using Swagger. It does not have a backend server, so it cannot be run interactively.
This API is available from the server at the base URL /api/v3.
6.1.2 - Agile Live Rendering Engine configuration documentation
This page describes how to configure the Rendering Engine. This topic is closely related to this page on the control command protocol for the video and audio mixers.
Rendering Engine components
The Rendering Engine is an application that uses the Production Pipeline library in the base platform for transport, and adds to that media file playback, HTML rendering, a full video mixer and an audio router and audio mixer. The figure below shows a schematic of the different components and an example of how streams may transition through it.
HTML Renderers
Multiple HTML renderers can be instantiated in runtime by control panels, and in each an HTML page can be opened. Each HTML renderer produces a video stream, but no audio.
Media Players
Multiple media players can be instantiated in runtime by control panels, and in each a media file can be opened. Each media player produces a stream containing a video stream and all audio streams from the file.
Video Mixer
The Video mixer receives all video inputs into the system, i.e. streams from ingests, from HTML renderers and from media players. It outputs one or more named video output streams. The internal structure of the video mixer is defined at startup (as described in detail below), and the video mixer is controlled in runtime by control panels.
Audio Router
The audio router receives all audio streams into the system, i.e. streams from ingests and from media players. It outputs a set of streams, each either mono or a stereo pair, to the audio mixer. The mappings from input streams to output streams in the router are configured in runtime by control panels.
Audio Mixer
Note
The audio mixer can be either our internal audio mixer or an integration with an external third-party audio mixer. The rest of the documentation here is valid for the internal audio mixer.
The audio mixer takes a number of mono or stereo-pair streams as inputs, which we call input strips. It outputs a number of named output streams in stereo. The outputs of the audio mixer are defined at startup, and the audio mixer is controlled in runtime by control panels.
Combine outputs
The combination of video and audio outputs into full output streams from the rendering engine is configured at startup.
Configuring the Rendering Engine
Some aspects of the rendering engine can be configured statically at startup through the use of a configuration file in JSON format. Specifically, the video mixer node graph, the audio mixer outputs and the combination of video and audio outputs can be configured. As an example, here is such a JSON configuration file that we will refer to throughout this guide:
{
"version": "1.0",
"video": {
"nodes": {
"transition": {
"type": "transition"
},
"chroma_key_select": {
"type": "select"
},
"chroma_key": {
"type": "chroma_key"
},
"chroma_key_alpha_over": {
"type": "alpha_over"
},
"fade_to_black": {
"type": "fade_to_black"
},
"program": {
"type": "output"
},
"chroma_key_preview": {
"type": "output"
},
"preview": {
"type": "output"
}
},
"links": [
{
"from_node": "transition",
"from_socket": 0,
"to_node": "chroma_key_alpha_over",
"to_socket": 1
},
{
"from_node": "transition",
"from_socket": 1,
"to_node": "preview",
"to_socket": 0
},
{
"from_node": "chroma_key_select",
"from_socket": 0,
"to_node": "chroma_key",
"to_socket": 0
},
{
"from_node": "chroma_key",
"from_socket": 0,
"to_node": "chroma_key_alpha_over",
"to_socket": 0
},
{
"from_node": "chroma_key_alpha_over",
"from_socket": 0,
"to_node": "fade_to_black",
"to_socket": 0
},
{
"from_node": "fade_to_black",
"from_socket": 0,
"to_node": "program",
"to_socket": 0
},
{
"from_node": "chroma_key",
"from_socket": 0,
"to_node": "chroma_key_preview",
"to_socket": 0
}
]
},
"audio": {
"outputs": [
{
"name": "main",
"channels": 2
},
{
"name": "aux1",
"channels": 2,
"follows": "main"
},
{
"name": "aux2",
"channels": 2
}
]
},
"output_mapping": [
{
"name": "program",
"video_output": "program",
"audio_output": "main",
"feedback_input_slot": 1001,
"stream": true
},
{
"name": "preview",
"video_output": "preview",
"audio_output": "",
"feedback_input_slot": 1002,
"stream": false
},
{
"name": "aux1",
"video_output": "program",
"audio_output": "aux1",
"feedback_input_slot": 0,
"stream": true
},
{
"name": "chroma_key_preview",
"video_output": "chroma_key_preview",
"audio_output": "aux2",
"feedback_input_slot": 100,
"stream": true
}
]
}
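Before walking through each block of the file, here is a small sketch of sanity checks one could run on such a configuration. The helper is our own illustration (not part of the product); it encodes two rules stated in this guide: links may only reference declared nodes, and an input socket can have at most one connected link, while output sockets may fan out.

```python
import json

def check_video_links(config):
    """Sketch of basic consistency checks for the video block of a
    Rendering Engine configuration file."""
    nodes = config["video"]["nodes"]
    seen_inputs = set()
    for link in config["video"]["links"]:
        assert link["from_node"] in nodes, f"unknown node {link['from_node']}"
        assert link["to_node"] in nodes, f"unknown node {link['to_node']}"
        target = (link["to_node"], link["to_socket"])
        assert target not in seen_inputs, f"input socket {target} connected twice"
        seen_inputs.add(target)
    return len(seen_inputs)

# A minimal transition-to-outputs graph, as in the examples further down:
config = json.loads("""{
  "video": {
    "nodes": {"transition": {"type": "transition"},
              "program": {"type": "output"},
              "preview": {"type": "output"}},
    "links": [
      {"from_node": "transition", "from_socket": 0,
       "to_node": "program", "to_socket": 0},
      {"from_node": "transition", "from_socket": 1,
       "to_node": "preview", "to_socket": 0}
    ]
  }
}""")
```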
Video Mixer node graph
The video mixer is defined as a tree graph of nodes that work together in sequence to produce one or several video outputs. Each node performs a specific action, and has zero or more input sockets and zero or more output sockets. Links connect output sockets on one node to input sockets on other nodes. Each node is named and the name is used to control the node in runtime. The node tree configuration is specified in the video
section of the JSON file, which contains two parts: the list of nodes and the list of links connecting the nodes.
The following is a graphical representation of the video mixer node graph configuration in the JSON example file above, with two input nodes, three processing nodes and three output nodes, and links in between:
Video nodes block
The nodes object of the video block is a JSON object where each key is the unique name of a node. The name is used to refer to the node when operating the mixer later on, so choose names that make it easy to understand what each node is used for in the production.
The name can only contain lower case letters, digits, _
and -
. Space or other special characters are not allowed.
Each node is a JSON object with parameters defining the node properties. The parameter type defines which type of node it is. The supported types are listed below. They can be divided into three groups, depending on whether they provide input to the graph, provide output from the graph, or are processing nodes placed in the middle of the graph.
Input nodes
The input nodes take their input from the input slots of the Rendering Engine, which contain the latest frame for each connected source (connected cameras, HTML graphics, media players etc.). The nodes in this group do not have any input sockets, as they take their frames directly from the input slots of the Rendering Engine. Which slot to take the frames from is dynamically controlled during runtime.
Alpha combine node (alpha_combine
)
Input sockets: 0 (Sources are taken from the input slots of the mixer)
Output sockets: 1
The alpha combine node takes two inputs and combines the color from one of them with the alpha from the other. The node features multiple modes that can be set during runtime to either pick the alpha channel of the second video input, take any of its R, G or B channels, or average the RGB channels and use the result as alpha for the output. This node is useful when video with alpha is provided from SDI sources, where the alpha must be sent as a separate video feed and then combined with the fill/color source in the mixer.
Select node (select
)
Input sockets: 0 (Source is taken from the input slots of the mixer)
Output sockets: 1
The select node simply selects an input slot from the Rendering Engine and forwards that video stream.
Which input slot to forward is set during runtime.
The node is a variant of the transition node, but without support for transitions.
Transition node (transition
)
Input sockets: 0 (Sources are taken from the input slots of the mixer)
Output sockets: 2 (One with the program output and one with the preview output)
The transition node takes two video streams from the Rendering Engine’s input slots, one to use as the program and one
as the preview output stream.
These are output through the two output sockets.
During runtime this node can be used to make transitions such as wipes and fades between the selected program and
preview.
Output nodes
This group only contains a single node, used to mark a point where processed video can exit the graph and be used in output mappings.
Output node (output
)
Input sockets: 1
Output sockets: 0 (Output is sent out of the video mixer)
The output node marks an output point in the video mixer’s graph, where the video stream can be used by the output
mapping to be included in the Rendering Engine’s Output streams, or as streams to view in the multi-viewer.
The node takes a single input and makes it possible to use that video feed outside the video mixer.
Output nodes can be used to output the program and preview feeds of a video mixer, but also to mark auxiliary outputs, as in the example above, where chroma_key_preview is output so that the result of the chroma keying can be viewed without the effect being keyed onto the program output.
Processing nodes
The processing nodes take their input from another node’s output and outputs a result that is sent to another node’s input. They are therefore placed in the middle of the graph, after nodes from the input group and before output nodes.
Alpha over node (alpha_over
)
Input sockets: 2 (Index 0 for overlay video and index 1 for background video)
Output sockets: 1
The alpha over node composites the overlay video input on top of the background video input. The alpha of the overlay
video input is taken into consideration. During runtime, this node can be controlled to show or not to show the overlay,
and to fade the overlay in or out. This node is useful to composite things such as graphics or chroma keyed video onto a
background video.
Chroma key node (chroma_key
)
Input sockets: 1
Output sockets: 1
The chroma key node takes an input video stream and performs chroma keying on it based on parameters set during runtime.
The video output will have the alpha channel (and in some cases also the color channels) altered. The result of this
node can then be composited on top of a background using an Alpha over node.
Fade to black node (fade_to_black
)
Input sockets: 1
Output sockets: 1
The fade to black node takes a single input video stream and can fade that video stream to and from black.
This node is normally used as the last node before the main program output node, to be able to fade to and from black at
the beginning and end of the broadcast.
Transform node (transform
)
Input sockets: 1
Output sockets: 1
The transform node takes an input video stream and transforms it inside the visible canvas. This node can be configured during runtime to scale and move the input video. It is useful for picture-in-picture effects, or for moving a chroma key node output to the lower corner of the frame.
Video delay node (video_delay
)
Input sockets: 1
Output sockets: 1
The video delay node is used to delay the video by a given number of frames. The number of frames to delay is controlled during runtime. This node is useful whenever the need to dynamically delay a video stream arises, for example when an external audio mixer is used, which adds a delay of some frames.
Video links block
The links
array in the video
block is a list of links between video nodes. The video frames are fed in one direction
from node to node via these links.
An input socket on a node can only have one connected link.
The output sockets on a node can have multiple connected links.
Each block contains the following keys:
- from_node - The name of the node in the nodes object from which the link receives frames
- from_socket - The index of the output socket in the node from which this link originates
- to_node - The name of the node in the nodes object to which the link sends frames
- to_socket - The index of the input socket in the node to which this link connects
Some examples
The simplest video mixer node graph imaginable would be a select node feeding an output node. This mixer would only be able to select one of the inputs and output it unaltered, like a video router would do:
The JSON file section for this is:
"video": {
"nodes": {
"input_select": {
"type": "select"
},
"program": {
"type": "output"
}
},
"links": [
{
"from_node": "input_select",
"from_socket": 0,
"to_node": "program",
"to_socket": 0
}
]
}
A slightly more advanced node graph would be to use a transition node and two outputs, one for the program out and one for the preview:
The JSON file section for this is:
"video": {
"nodes": {
"transition": {
"type": "transition"
},
"program": {
"type": "output"
},
"preview": {
"type": "output"
}
},
"links": [
{
"from_node": "transition",
"from_socket": 0,
"to_node": "program",
"to_socket": 0
},
{
"from_node": "transition",
"from_socket": 1,
"to_node": "preview",
"to_socket": 0
}
]
}
Audio outputs
The audio block of the configuration file defines the properties of the audio mixer.
The JSON object contains a list called outputs
which lists the outputs of the audio mixer and the configuration of
each output.
Each output has these parameters:
- name - The unique name of this audio output, used to refer to it from the output_mapping block
- channels - The number of audio channels of this output
- follows - (Optional parameter) Identifies the audio output that this output is a post fader aux output for. If the output does not have this parameter set, it is considered a main output
The follows
parameter is used to set the output in “post fader aux send” mode.
This is used to create extra aux outputs from the audio mixer which are used for mix minus, i.e. where you want the
program output, but with some specific audio source(s) removed.
When follows
is set, the volume of that output will follow the volume of the main bus and scale that volume.
If the volume fader of the aux bus is up, it will send the same volume of audio of that strip as the main bus does.
So when the main bus volume is turned down, the same thing will happen automatically in the aux bus.
If an aux bus’ fader is down on the other hand, that strip will not contribute to the aux bus at all.
The aux bus volume faders can therefore be used to remove audio (or scale the volume up or down), but cannot be used to
add more audio strips compared to the main bus it is following.
The input routing and configuration of the strips is made via the control command API.
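The "post fader aux send" behavior can be sketched numerically. This is a simplification with linear fader gains of our own choosing; the real mixer's fader law is not specified on this page.

```python
def aux_send_gain(main_fader, aux_fader):
    """Sketch of a post fader aux send: the aux output follows the main bus
    fader and scales it with its own fader. When the main fader is down, the
    strip is silent on the aux too; the aux fader can remove or scale a
    strip, but never add one that the main bus does not carry."""
    return main_fader * aux_fader

# Strip at full volume on the main bus, aux fader halves it:
half = aux_send_gain(1.0, 0.5)
# Main fader pulled down: the aux bus goes silent as well:
silent = aux_send_gain(0.0, 1.0)
```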
Combine outputs
The output_mapping
block is used to define outputs of the entire Rendering Engine, by combining outputs from the video
and audio mixers and configuring where those should be sent.
The output mapping is an array of output mappings. Each mapping has the following parameters:
- name - The unique name of the output mapping. This will be displayed in the REST API, both for sources being streamed and for sources with feedback streams that can be included in the multi-view
- video_output - The name of the video mixer node of type output to get the video stream from. Leave empty, or omit the key, to create an audio-only output
- audio_output - The name of the audio output to get the audio stream from. Leave empty, or omit the key, to create a video-only output
- feedback_input_slot - The input slot to use to feed this output back to the multi-view. The REST API may then refer to this input slot to include this output in a multi-view view. The value must be >= 100, as input slots up to 99 are reserved for "regular" sources. Use 0 to disable feedback of this stream.
- stream - Boolean value telling whether the output should be streamable and visible as an output in the REST API. If set to false, the output will not show up as a Pipeline Output in the REST API.
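The feedback_input_slot rule above (0 disables feedback; any other value must be at least 100, since slots up to 99 are reserved for regular sources) can be captured in a small check. The helper is our own illustration, not part of the product:

```python
def validate_output_mapping(mapping):
    """Check one output_mapping entry against the slot rule described above."""
    slot = mapping["feedback_input_slot"]
    if slot != 0 and slot < 100:
        raise ValueError(
            f"{mapping['name']}: feedback_input_slot must be 0 or >= 100, got {slot}")
    return True

# Entries from the example configuration file above:
ok_program = validate_output_mapping({"name": "program", "feedback_input_slot": 1001})
ok_aux = validate_output_mapping({"name": "aux1", "feedback_input_slot": 0})
```

A value such as 42 would be rejected, since it collides with the reserved range for regular sources.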
The following is a graphical visualisation of the output mappings in the JSON example configuration file above:
6.1.3 - Agile Live Rendering Engine command documentation
This page describes the commands for controlling the video mixer, audio router, audio mixer, HTML renderers and media players in the Agile Live Rendering Engine. This topic is closely related to this page on how to configure the rendering engine at startup.
Command protocol
All commands to the Agile Live Rendering Engine are sent as human-readable strings and are listed below. The Rendering Engine expects no line termination, meaning that you do not need to append any new line (\n) or string termination (\0) characters.
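Since commands are plain strings with no terminator, a sender simply encodes the text as-is. A minimal sketch (the transport itself, e.g. the TCP control panel endpoint, is outside the scope of this page):

```python
def encode_command(command: str) -> bytes:
    """Encode a Rendering Engine command for sending: plain text with no
    trailing newline or NUL terminator, as described above."""
    return command.encode("utf-8")

payload = encode_command("video reset")
```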
Each subsystem of the rendering engine has its own set of commands, prefixed by the subsystem name. As can be seen in the image below, the prefixes of the different subsystems are:
- html for the HTML renderer instances
- media for the media playback instances
- video for the video mixer
- audio map for the audio router
- audio for the audio mixer (if using the built-in mixer)
Video mixer commands
Background
The video mixer is built as a tree graph of processing nodes, please see this page for further information on the video mixer node graph. The names of the nodes defined in the node graph are used as the first word in the control commands as the way to address the specific node. On this page we will specify the detailed protocol for each node type.
There are eight different node types:
- The Transition node is used to pick which input slots to use for the program and preview output. The node also supports making transitions between the program and preview.
- The Select node simply forwards one of the available sources to its output.
- The Alpha combine node is used to pick two input slots (or the same input slot), copy the color from one input and combine it with some information from the other input to form the alpha. The color is just copied from the color input frame, but there are several modes that can be used to produce the alpha channel in the output frame in different ways. This is known as "key & fill" in broadcasting.
- The Alpha over node is used to perform an "Alpha Over" operation, that is, to put the overlay video stream on top of the background video stream and let the background be seen through the overlay depending on the alpha of the overlay. The node also supports fading the graphics in and out by multiplying the alpha channel by a constant factor.
- The Transform node takes one input stream and transforms it (scaling and translating) to one output stream.
- The Chroma Key node takes one input stream and, with appropriate parameters set for the keying, removes areas with the key color from the incoming video stream, affecting both the alpha and color channels.
- The Fade to black node takes one input stream, which it can fade to or from black gradually, and then outputs that stream.
- The Output node has one input stream and will output that stream out of the video mixer, back to the Rendering Engine. It has no control commands.
To reset the runtime configuration of all video nodes to their default state, the following command is used:
video reset
- Reset the runtime configuration of all video mixer nodes
Transition
The transition type of node can be controlled with the following commands:
- video <node_name> cut <input_slot> - Cut (hot punch) the program output to the given input slot. This won't affect the preview output.
- video <node_name> preview <input_slot> - Preview the given input slot. This won't affect the program output.
- video <node_name> cut - Cut between the current program and preview outputs, effectively swapping their place.
- video <node_name> panic <input_slot> - Cut (hot punch) both the preview and program output to the given input slot instantly, for all rendering engines receiving this command, disregarding any timing differences!
Some transition commands last over a duration of time, for example wipes. These can be performed either automatically or manually. The automatic mode works by the operator first selecting the type of transition, for instance a fade, setting the preview to the input slot to fade to and then trigger the transition at the right time with a set duration for the transition. In manual mode the exact “position” of the transition is set by the control panel. This is used for implementing T-bars, where the T-bar repeatedly sends the current position of the bar. In the manual mode, the transition type is set before the transition begins, just as in the automatic mode. Note that an automatic transition will be overridden in case the transition “position” is manually set, by interrupting the automatic transition and jumping to the manually set position.
video <node_name> type <type>
- Set the transition type to use for transitions hereafter. Valid modes are:
fade
- Fade (mix) from one source to the next
wipe_left
- Wipe from one source to the next, with the wipe going from the right side of the screen to the left
wipe_right
- Wipe from one source to the next, with the wipe going from the left side of the screen to the right
video <node_name> auto <duration_ms>
- Perform an automatic transition from the current program output to the current preview output, lasting for <duration_ms> milliseconds. The currently set transition type is used.
video <node_name> factor <factor>
- Manually set the position/factor of the transition. <factor> should be between 0.0 and 1.0, where 0.0 is the value before the transition starts and 1.0 is the end of the transition. Note that setting this value to 1.0 will effectively swap the program and preview outputs, resetting the transition factor to 0.0 internally. Control panels using this command must take this into consideration to avoid a sudden jump in the transition once the physical control is moved again. Sending this command will interrupt any ongoing automatic transition.
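As an illustration, a control panel can compose these transition commands as plain strings before delivering them to the Rendering Engine. This is only a sketch: the node name `transition1` is a hypothetical example, and the transport (e.g. via the System Controller) is not shown.

```python
# Sketch: composing transition command strings. The node name "transition1"
# is a hypothetical example; delivery to the Rendering Engine is out of scope.

def auto_fade(node: str, preview_slot: int, duration_ms: int) -> list[str]:
    """Select a fade, put the target source on preview, then run an auto transition."""
    return [
        f"video {node} type fade",
        f"video {node} preview {preview_slot}",
        f"video {node} auto {duration_ms}",
    ]

def tbar_position(node: str, position: float) -> str:
    """Manual T-bar update; the factor is clamped to the valid 0.0-1.0 range."""
    factor = min(max(position, 0.0), 1.0)
    return f"video {node} factor {factor:.3f}"

print(auto_fade("transition1", 4, 500))
print(tbar_position("transition1", 0.42))
```

A T-bar implementation would call something like `tbar_position` repeatedly as the physical control moves, keeping in mind that reaching 1.0 swaps program and preview and resets the factor to 0.0.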
Select
The select type of node only has one control command:
video <node_name> cut <input_slot>
- Choose which input slot to forward. The change looks like a cut.
Alpha Combine
The alpha combine node type can be controlled with these commands:
video <node_name> color <input_slot>
- Set which input slot to copy the color data from in the Alpha combine node
video <node_name> alpha <input_slot>
- Set which input slot to use to create the alpha channel of the output frame in the Alpha combine node
video <node_name> mode <mode>
- Set how the alpha channel of the output frame will be produced. Valid modes are:
copy-r
- Copy the R-channel of the input “alpha frame” and use it as alpha for the output
copy-g
- Copy the G-channel of the input “alpha frame” and use it as alpha for the output
copy-b
- Copy the B-channel of the input “alpha frame” and use it as alpha for the output
copy-a
- Copy the A-channel of the input “alpha frame” and use it as alpha for the output
average-rgb
- Take the average of the R, G and B channels of the input “alpha frame” and use it as the alpha channel for the output
Alpha over
The alpha over node type can be controlled with these commands:
video <node_name> factor <factor>
- Manually set the transparency multiplier for the Alpha over node. The value <factor> should be a float between 0.0 and 1.0, where 0.0 means that the overlay is completely invisible and 1.0 means that the overlay is fully visible (where it should be visible according to the overlay image’s alpha channel). Default is 0.0.
video <node_name> fade to <duration_ms>
- Start an automatic fade from the current factor to a fully visible overlay over the duration of <duration_ms> milliseconds.
video <node_name> fade from <duration_ms>
- Start an automatic fade from the current factor to a fully invisible overlay over the duration of <duration_ms> milliseconds.
Transform
The transform node type can be controlled with these commands:
video <node_name> scale <factor>
- Set the scale of the output size, relative to the original size. Parameter <factor> must be > 0. Default is 1.0.
video <node_name> x <position>
- Set the X offset of the top left corner of the output, relative to the top left corner of the frame. Position is measured in percent of frame width, where 0.0 is the left side of the frame and 1.0 is the right side of the frame. Values outside the range [0, 1] are valid. Default is 0.0.
video <node_name> y <position>
- Set the Y offset of the top left corner of the output, relative to the top left corner of the frame. Position is measured in percent of frame height, where 0.0 is the top of the frame and 1.0 is the bottom of the frame. Values outside the range [0, 1] are valid. Default is 0.0.
Chroma Key
The chroma key node type can be controlled with these commands:
video <node_name> r <value>
- Set the red component of the chroma key color. Should be a float between 0.0 and 1.0.
video <node_name> g <value>
- Set the green component of the chroma key color. Should be a float between 0.0 and 1.0.
video <node_name> b <value>
- Set the blue component of the chroma key color. Should be a float between 0.0 and 1.0.
video <node_name> distance <value>
- Set the distance parameter, ranging from 0.0 to 1.0, telling how far from the key color a color can be and still be considered part of the key. Larger values will result in more colors being included; 0.0 means only the exact key color is considered.
video <node_name> falloff <value>
- Set the falloff parameter, ranging from 0.0 to 1.0, which makes edges smoother. A lower value means harder edges between the key color and the parts to keep; a higher value means a smoother falloff between removed and kept parts.
video <node_name> spill <value>
- Set the color spill reduction parameter, ranging from 0.0 to 1.0, which will desaturate the key color in the parts of the image that are left. Useful for hiding green rims around objects in front of a green screen etc. 0.0 means off, and a higher value means more colors further away from the chroma key color will be desaturated.
video <node_name> color_picker <x_factor> <y_factor> <square_size>
- Set the chroma key by sampling the pixels covered by the square area of size <square_size> x <square_size>. The position x,y is set by <x_factor> * width of video and <y_factor> * height of video. <x_factor> and <y_factor> should be floats between 0.0 and 1.0. The <square_size> should be an integer > 0.
video <node_name> alpha_as_video <value>
- Instead of writing an alpha mask, set the alpha value as the RGB value, creating a grayscale image of the mask. Value should be true or false.
video <node_name> show_key_color <value>
- Draw a 100x100 box in the upper left corner with the current chroma key (RGB) value. Value should be true or false.
video <node_name> show_color_picker <value>
- Draw a pink border around the area where the color picker will sample a new chroma key. Value should be true or false.
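To illustrate the color_picker factors, the following sketch converts a clicked pixel position into the `<x_factor>`/`<y_factor>` arguments. The node name `keyer1` is a hypothetical example; delivering the string to the Rendering Engine is out of scope.

```python
# Sketch: turning a clicked pixel into a color_picker command.
# "keyer1" is a hypothetical node name.

def color_picker_command(node: str, x_px: int, y_px: int,
                         width: int, height: int, square_size: int = 20) -> str:
    x_factor = x_px / width    # 0.0 = left edge, 1.0 = right edge
    y_factor = y_px / height   # 0.0 = top edge, 1.0 = bottom edge
    return f"video {node} color_picker {x_factor:.4f} {y_factor:.4f} {square_size}"

# A click at (960, 540) in a 1920x1080 frame samples around the frame center:
print(color_picker_command("keyer1", 960, 540, 1920, 1080))
# -> video keyer1 color_picker 0.5000 0.5000 20
```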
Note
For the true/false parameters above, TRUE, True, t, T and 1 will also be treated as true; any other value will be treated as false.
Fade to black
The fade to black node can be controlled via the following commands:
video <node_name> factor <factor>
- Manually set the blackness factor on a scale from 0.0 to 1.0, where 0.0 means not black and 1.0 means fully black output.
video <node_name> fade to <duration_ms>
- Start an automatic fade to a fully black output over the duration of <duration_ms> milliseconds.
video <node_name> fade from <duration_ms>
- Start an automatic fade to a fully visible (non-black) output over the duration of <duration_ms> milliseconds.
Audio Router commands
The audio router is used to route specific audio channels from an input slot of the Rendering Engine to an input channel strip on the audio mixer. By default, it has no audio routes set up. The router can be controlled with the following commands:
audio map <input_slot> <channel> mono <input_strip>
- Map <channel> from input slot <input_slot> as a mono channel into input strip <input_strip> of the audio mixer
audio map <input_slot> <left_channel> stereo <input_strip>
- Map <left_channel> and <left_channel> + 1 from input slot <input_slot> as a stereo pair into input strip <input_strip> of the audio mixer
audio map reset <input_strip>
- Remove the current mapping from <input_strip> of the audio mixer.
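A typical setup routes a few channels from the input slots into mixer strips. The sketch below composes such commands; the slot and strip numbers are hypothetical examples, and sending the strings to the Rendering Engine is out of scope.

```python
# Sketch: routing a stereo pair and a mono microphone into the audio mixer.
# Slot/strip numbers are hypothetical examples.

def route_stereo(slot: int, left_channel: int, strip: int) -> str:
    # Maps <left_channel> and <left_channel> + 1 as a stereo pair.
    return f"audio map {slot} {left_channel} stereo {strip}"

def route_mono(slot: int, channel: int, strip: int) -> str:
    return f"audio map {slot} {channel} mono {strip}"

commands = [
    route_stereo(slot=1, left_channel=0, strip=0),  # slot 1 embedded stereo -> strip 0
    route_mono(slot=2, channel=3, strip=1),         # slot 2, channel 3 -> strip 1
]
print(commands)
```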
To reset both the audio mixer and the audio router to their default state, including removal of all configured input strips, the following command can be used:
audio reset
- Reset both the audio mixer and audio router
Audio mixer commands
The audio mixer and audio router have the following data path:
In the example picture above, three sources are connected to the input slots and four mono or stereo audio streams are picked as input to the audio mixer. The audio mixer is completely separate from the video mixer, so switching the image in the video mixer will not change the audio mixer in any way. The internal audio mixer can be controlled using the following commands:
Pre-gain
To control the pre-gain of each input strip use:
audio gain <input_strip> <level_dB>
- Set the pre-gain level of a specific input strip in dB. Parameter <level_dB> should be a floating point number. Default is 0.0.
Low-pass filter
To control the low-pass filter of each input strip use:
audio lpf <input_strip> freq <value>
- Set the cutoff frequency for the low-pass filter in an input strip. <value> is the frequency in Hz as a floating point number. The frequency range is 20 Hz to 20 kHz. Setting a value of 20 kHz or higher will bypass the filter.
audio lpf <input_strip> q <value>
- Set the q-value for the low-pass filter in an input strip. The q-value controls the steepness of the cutoff curve, with a higher value being a steeper slope. The value is a floating point number ranging from 0.4 to 6.0. The default value is 0.7.
High-pass filter
To control the high-pass filter of each input strip use:
audio hpf <input_strip> freq <value>
- Set the cutoff frequency for the high-pass filter in an input strip. <value> is the frequency in Hz as a floating point number. The frequency range is 20 Hz to 20 kHz. Setting a value of 20 Hz or lower will bypass the filter.
audio hpf <input_strip> q <value>
- Set the q-value for the high-pass filter in an input strip. The q-value controls the steepness of the cutoff curve, with a higher value being a steeper slope. The value is a floating point number ranging from 0.4 to 6.0. The default value is 0.7.
Parametric equalizer
The parametric equalizer has ten bands, indexed from 0 to 9. By default, all bands are disabled (type “none”) with a gain of 0 dB, a Q value of 0.707, and a frequency of 1 kHz. To control the parametric equalizer of each input strip use:
audio eq <input_strip> <band> freq <value>
- Set the frequency of a specific band in the parametric equalizer. Here <band> is the zero-based index of the band. <value> is a floating point number with the frequency in Hz. For peak, notch, and band-pass filters this is the center frequency. For low-pass, high-pass, low-shelf, and high-shelf filters this is the corner frequency.
audio eq <input_strip> <band> gain <value>
- Set the gain of a specific band in the parametric equalizer. Here <band> is the zero-based index of the band. <value> is a floating point number with the gain in dB. The gain parameter only has an effect on peaking and shelving filters.
audio eq <input_strip> <band> q <value>
- Set the q-value of a specific band in the parametric equalizer to shape the falloff of the band. Here <band> is the zero-based index of the band. <value> is a floating point number with the q-value, where a higher value means a more pointy curve. The default value is 0.707.
audio eq <input_strip> <band> type <type>
- Set the filter type to use for a specific band in the parametric equalizer. Here <band> is the zero-based index of the band. The available values for <type> are:
none
: Bypass audio without any changes
lowpass
: Low-pass filter at the current frequency. Gain has no effect.
highpass
: High-pass filter at the current frequency. Gain has no effect.
bandpass
: Band-pass filter at the current frequency. Gain has no effect.
lowshelf
: Low-shelf filter. Audio frequencies below the currently set value are modified by the current gain value.
highshelf
: High-shelf filter. Audio frequencies above the currently set value are modified by the current gain value.
peak
: Peak filter. Frequencies around the currently set value are modified by the current gain value.
notch
: Notch filter. Frequencies around the currently set value are reduced greatly. Gain has no effect.
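Configuring a band thus takes four commands (type, freq, gain, q). The sketch below builds such a sequence; the strip and band numbers and the chosen frequencies are hypothetical examples, and sending the strings is out of scope.

```python
# Sketch: configuring two EQ bands on input strip 0: band 0 as a high-pass
# at 80 Hz and band 1 as a gentle 2 dB presence boost at 3 kHz.
# Strip/band numbers and frequencies are hypothetical examples.

def eq_band(strip: int, band: int, type_: str, freq_hz: float,
            gain_db: float = 0.0, q: float = 0.707) -> list[str]:
    return [
        f"audio eq {strip} {band} type {type_}",
        f"audio eq {strip} {band} freq {freq_hz}",
        f"audio eq {strip} {band} gain {gain_db}",
        f"audio eq {strip} {band} q {q}",
    ]

commands = eq_band(0, 0, "highpass", 80.0) + eq_band(0, 1, "peak", 3000.0, gain_db=2.0, q=1.0)
print(commands)
```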
Panning
To control the panning of each input strip use:
audio pan <input_strip> <factor>
- Pan the mono or stereo input of an input strip to the left or right. The range of <factor> is between -1.0 and +1.0, where 0.0 means center (no panning), negative values mean more to the left and positive values more to the right. Values outside this range will be clamped. A value of +/- 1.0 means that there will only be audio in the left/right channel.
Dynamic range compressor
To control the dynamic range compressor of each input strip use:
audio comp <input_strip> threshold <value>
- Set the threshold for activation of the compressor. The threshold is a negative dB value ranging from -30 dB to 0 dB. The volume of audio above the threshold value will be reduced (compressed). The default value is 0 dB, i.e. compression only happens if the audio signal is overloaded.
audio comp <input_strip> ratio <value>
- Set the compression ratio for audio exceeding the loudness threshold. The value is the numerator in the compression ratio <value>:1. For instance, if the value is set to 4, the compression ratio is 4:1 and volume overshoot above the threshold will be scaled down to 25 %. The ratio numerator is a floating point number in the range from 1.0 to 24.0, with 4.0 as the default value.
audio comp <input_strip> knee <value>
- Set the width of the soft knee in decibels. Instead of simply turning the compression completely on or off at the threshold, the knee defines a volume range in which the compression ratio follows a curve, the “knee”. The knee is a floating point number between 0 dB and 30 dB, with a default value of 2.5 dB.
audio comp <input_strip> attack <value>
- Set the attack time of the compressor in milliseconds. The attack time determines how long it takes to reach full compression after the threshold has been exceeded. The value is a floating point number in the range from 0.1 ms to 120 ms. The default value is 50 ms.
audio comp <input_strip> release <value>
- Set the release time of the compressor in milliseconds. The release time determines how long it takes to return to zero compression when the volume is below the compression threshold. The value is a floating point number in the range from 10 ms to 1000 ms. The default value is 200 ms.
audio comp <input_strip> gain <value>
- Set the make-up gain in decibels. Since the compression filter lowers the volume of louder audio sections, it can be desirable to increase the gain after the filtering. The gain value increases the audio volume by the specified number of decibels. The value is a floating point number in the range from 0 dB to 24 dB. The default value is 0 dB.
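The ratio arithmetic described above can be checked with a tiny static-gain model. This sketch deliberately ignores the soft knee and the attack/release timing; it only illustrates how the ratio scales overshoot above the threshold.

```python
# Sketch: static gain curve of a hard-knee compressor, ignoring the soft knee
# and attack/release timing, to illustrate the ratio parameter.

def compressed_level_db(input_db: float, threshold_db: float, ratio: float) -> float:
    if input_db <= threshold_db:
        return input_db                       # below threshold: unchanged
    overshoot = input_db - threshold_db       # dB above the threshold
    return threshold_db + overshoot / ratio   # overshoot scaled down by the ratio

# With a -10 dB threshold and a 4:1 ratio, an input at -2 dB overshoots by 8 dB;
# the overshoot is reduced to 25 %, i.e. 2 dB, giving -8 dB out:
print(compressed_level_db(-2.0, threshold_db=-10.0, ratio=4.0))  # -> -8.0
```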
Volume and master volume
To control the volume faders and master volume faders use:
audio volume <input_strip> <output> <level>
- Set the volume level of a specific input in proportion to its original strength for a given output. Parameter <level> should be a floating point number, where 0.0 means that the input strip is not mixed into the output at all and 1.0 means that the volume of the input strip is kept as is for the output. The signal can also be amplified by using values > 1.0. Default is 0.0.
audio mastervolume <output> <level>
- Set the master volume level of an output, before the audio is clamped and converted from floating point samples to 16-bit integer samples. This can be used to avoid clipping of the audio when mixing multiple input sources. Parameter <level> is a floating point number where 0.0 means that all audio output is off, 1.0 means that the volume is not changed, and any value above 1.0 means that the master volume is amplified. Default is 1.0.
audio volume <input_strip> <output> auto <level> <time_ms>
- Do an automatic transition of the volume from the current volume to the requested level over <time_ms> milliseconds. The volume will be linearly faded over time in the dB scale.
audio mastervolume <output> auto <level> <time_ms>
- Do an automatic transition of the master volume from the current volume to the requested level over <time_ms> milliseconds. The volume will be linearly faded over time in the dB scale.
audio volume <input_strip> <output> mute <value>
- Mute or unmute a specific input strip for a specific output bus. This operation will not change any gain or volume fader of the strip, i.e. unmuting will restore the same audio level as before mute was sent. Value should be a boolean, true or false. Use true to mute and false to unmute.
audio mastervolume <output> mute <value>
- Mute or unmute all audio in a specific output bus. This operation will not change any gain or volume fader of any strip, i.e. unmuting will restore the same audio levels as before mute was sent. Value should be a boolean, true or false. Use true to mute and false to unmute.
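As a small illustration, the sketch below composes a volume fade and a mute command. The strip and output numbers are hypothetical examples; transporting the strings to the Rendering Engine is out of scope.

```python
# Sketch: fading up one input strip and muting another on output bus 0.
# Strip/output numbers are hypothetical examples.

def fade_volume(strip: int, output: int, level: float, time_ms: int) -> str:
    return f"audio volume {strip} {output} auto {level} {time_ms}"

def mute(strip: int, output: int, muted: bool) -> str:
    return f"audio volume {strip} {output} mute {'true' if muted else 'false'}"

print(fade_volume(0, 0, 1.0, 2000))  # fade strip 0 to full volume over 2 s
print(mute(1, 0, True))              # mute strip 1 on output 0
```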
Reset all audio filters
To reset both the audio mixer and the audio router to their default state, including removal of all configured input strips, the following command can be used:
audio reset
- Reset both the audio mixer and audio router
HTML rendering
The Rendering Engine also features a built-in HTML renderer which uses the Chromium web browser engine to render HTML pages. The following commands can be used to create and control the HTML browser instances:
html create <input_slot> <width> <height>
- Create a new HTML browser instance with a canvas size of <width> x <height> pixels and output the rendered frames to the given input slot
html close <input_slot>
- Close the HTML browser instance on the given input slot
html load <input_slot> <url>
- Load a new URL in the HTML browser on the given input slot
html execute <input_slot> <java script>
- Execute JavaScript in the HTML browser on the given input slot. The JavaScript snippet may span multiple lines and may contain spaces.
html reset
- Close all open HTML browsers
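A typical overlay setup creates a browser, loads a page and then drives it with JavaScript. The sketch below composes such a sequence; the slot number and URL are hypothetical examples, and delivery of the strings is out of scope.

```python
# Sketch: creating a 1920x1080 HTML overlay on input slot 10, loading a page
# and updating it with JavaScript. Slot number and URL are hypothetical.

def html_setup(slot: int, width: int, height: int, url: str) -> list[str]:
    return [
        f"html create {slot} {width} {height}",
        f"html load {slot} {url}",
    ]

commands = html_setup(10, 1920, 1080, "https://example.com/overlay.html")
# JavaScript snippets may contain spaces and span multiple lines:
commands.append('html execute 10 document.body.style.background = "transparent";')
print(commands)
```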
Media player
The Rendering Engine can create media player instances to play video and audio files from the hard drive of the machine running the Rendering Engine. It is up to the user of the API to ensure the files are uploaded
to the machines running the Rendering Engine(s) before trying to run them. The media players use the FFmpeg library to demux and decode the media files, so most files supported by FFmpeg should work.
The media players each have three state parameters: start, duration and looping, to mark a section of the video to loop, or a last frame to stop at in case looping is set to false.
The following commands can be used to create and control the media player instances:
media create <input_slot> <filename>
- Create a new media player instance with the given media file and output the rendered frames to the given input slot.
media load <input_slot> <filename>
- Load another file into an existing media player on the given input slot. This will reset the start, duration and looping parameters of that media player.
media close <input_slot>
- Close the media player instance on the given input slot
media play <input_slot>
- Start/resume playing the currently loaded media file in the media player on the given input slot
media pause <input_slot>
- Pause the media player on the given input slot
media seek <input_slot> <time_ms>
- Seek the media player on the given input slot to a time point <time_ms> milliseconds from the start of the media file and pause the playback at the first frame. Use media play <input_slot> to resume the playback
media set_start <input_slot> <start_time_ms>
- Set the start time of the looping section of the media player on the given input slot to a time point <start_time_ms> milliseconds from the start of the media file. It will still be possible to seek before this time point and play the video from there.
media set_duration <input_slot> <duration_ms>
- Set the duration of the playing/looping section of the media player on the given input slot to <duration_ms> milliseconds from the <start_time_ms> of the media file. The duration will be clamped in case it spans past the end of the media file. A running media player will either stop at this point or loop the video, depending on the state of the looping parameter
media set_looping <input_slot> <true/false>
- Set whether the media should loop from the start time when the media player on the given input slot hits either the end of the looping section defined by the start time and duration, or the end of the media file. If set to false, the media player will stop at the end of the section or at the end of the file instead of looping from the start point.
media reset
- Close all open media players
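Putting the state parameters together, the sketch below composes a command sequence that loops a 5-second section starting 10 s into a file. The slot number and filename are hypothetical examples; the file must already exist on the machine running the Rendering Engine, and delivery of the strings is out of scope.

```python
# Sketch: playing a 5-second section of a media file on loop, starting 10 s
# into the file. Slot number and filename are hypothetical examples.

def looping_section(slot: int, filename: str, start_ms: int, duration_ms: int) -> list[str]:
    return [
        f"media create {slot} {filename}",
        f"media set_start {slot} {start_ms}",
        f"media set_duration {slot} {duration_ms}",
        f"media set_looping {slot} true",
        f"media seek {slot} {start_ms}",   # cue the first frame of the section
        f"media play {slot}",
    ]

print(looping_section(12, "/media/bumper.mp4", 10000, 5000))
```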
6.1.4 - C++ SDK
Agile Content Live C++ SDK reference.
6.1.4.1 - Classes
Classes
- namespace ACL
- namespace AclLog
A namespace for logging utilities.- class CommandLogFormatter
This class is used to format log entries for Rendering Engine commands. - class FileLocationFormatterFlag
A custom flag formatter which logs the source file location between a pair of “[]”, in case the location is provided with the log call. - class ThreadNameFormatterFlag
- class CommandLogFormatter
- struct AlignedAudioFrame
AlignedAudioFrame is a frame of interleaved floating point audio samples with a given number of channels. - struct AlignedFrame
A frame of aligned data that is passed to the rendering engine from the MediaReceiver. A DataFrame contains a time stamped frame of media, which might be video, audio and auxiliary data such as subtitles. A single DataFrame can contain one or multiple types of media. Which media types are included can be probed by nullptr-checking/size checking the data members. The struct has ownership of all data pointers included. The struct includes all logic for freeing the resources held by this struct and the user should therefore just make sure the struct itself is deallocated to ensure all resources are freed. - namespace ControlDataCommon
- struct ConnectionStatus
Connection status struct containing information about a connection event. - struct Response
A response from a ControlDataReceiver to a request. The UUID tells which receiver the response is sent from. - struct StatusMessage
A status message from a ControlDataReceiver. The UUID tells which receiver the message is sent from.
- struct ConnectionStatus
- class ControlDataSender
A ControlDataSender can send control signals to one or more receivers using a network connection. A single ControlDataSender can connect to multiple receivers, all identified by a UUID. The class is controlled using an ISystemControllerInterface; this interface is responsible for setting up connections to receivers. The ControlDataSender can send asynchronous requests to (all) the receivers and get a response back. Each response is identified with a request ID as well as the UUID of the responding receiver. The ControlDataSender can also receive status messages from the receivers.- struct Settings
Settings for a ControlDataSender.
- struct Settings
- class DeviceMemory
RAII class for a CUDA memory buffer. - class IControlDataReceiver
IControlDataReceiver is the interface class for the control data receiver. An IControlDataReceiver can receive messages from a sender or other IControlDataReceivers using a network connection. It can also connect to and forward the incoming request messages to other receivers. The connections to the sender and the other receivers are controlled by an ISystemControllerInterface instance. The ControlDataReceiver has a receiving or listening side, as well as a sending side. The listening side can listen to one single network port and have multiple ControlDataSenders and ControlDataReceivers connected to that port to receive requests from them. On the sending side of the ControlDataReceiver, it can be connected to the listening side of other ControlDataReceivers, used to forward all incoming messages to that receiver, as well as sending its own requests.- struct IncomingRequest
An incoming request to this ControlDataReceiver. - struct ReceiverResponse
A response message to a request. - struct Settings
Settings for a ControlDataReceiver.
- struct IncomingRequest
- class IMediaStreamer
IMediaStreamer is an interface class for MediaStreamers, that can take a single stream of uncompressed video and/or audio frames and encode and output it in some way. This output can either be a stream to a network or writing down the data to a file on the hard drive. This class is configured from two interfaces. The input configuration (input video resolution, frame rate, pixel format, number of audio channels…) is made through this C++ API. The output stream is then started from the System Controller. Any of these configurations can be made first. The actual stream to output will start once the first call to.- struct Configuration
The input configuration of the frames that will be sent to this MediaStreamer. The output stream configuration is made from the System controller via the ISystemControllerInterface. - struct Settings
Settings used when creating a new MediaStreamer.
- struct Configuration
- class ISystemControllerInterface
An ISystemControllerInterface is the interface between a component and the System controller controlling the component. The interface allows for two-way communication between the component and the system controller by means of sending requests and getting responses. Classes deriving from the ISystemControllerInterface should provide the component side implementation of the communication with the system controller. This interface can be inherited and implemented by developers to connect to custom system controllers, or to directly control a component programmatically, without the need for connecting to a remote server. - class IngestApplication
- struct Settings
- namespace IngestUtils
- class MediaReceiver
A MediaReceiver contains the logic for receiving, decoding and aligning incoming media sources from the Ingests. The aligned data is then delivered to the Rendering Engine, which is also responsible for setting up the MediaReceiver. The MediaReceiver has a built-in multi-view generator, which can create output streams containing composited subsets of the incoming video sources. This class is controlled using an ISystemControllerInterface provided when starting it.- struct NewStreamParameters
A struct containing information on the format of an incoming stream. - struct Settings
Settings for a MediaReceiver.
- struct NewStreamParameters
- class SystemControllerConnection
An implementation of the ISystemControllerInterface for a System controller residing in a remote server. The connection to the server uses a Websocket.- struct Settings
Settings for a SystemControllerConnection.
- struct Settings
- namespace TimeCommon
- struct TAIStatus
- struct TimeStructure
- namespace UUIDUtils
A namespace for UUID utility functions. - namespace spdlog
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.1.1 - AclLog::CommandLogFormatter
AclLog::CommandLogFormatter
This class is used to format log entries for Rendering Engine commands.
#include <AclLog.h>
Inherits from spdlog::formatter
Public Functions
Name | |
---|---|
CommandLogFormatter() | |
void | format(const spdlog::details::log_msg & msg, spdlog::memory_buf_t & dest) override |
std::unique_ptr< spdlog::formatter > | clone() const override |
Public Functions Documentation
function CommandLogFormatter
CommandLogFormatter()
function format
void format(
const spdlog::details::log_msg & msg,
spdlog::memory_buf_t & dest
) override
function clone
std::unique_ptr< spdlog::formatter > clone() const override
6.1.4.1.2 - AclLog::FileLocationFormatterFlag
AclLog::FileLocationFormatterFlag
A custom flag formatter which logs the source file location between a pair of “[]”, in case the location is provided with the log call.
#include <AclLog.h>
Inherits from spdlog::custom_flag_formatter
Public Functions
Name | |
---|---|
void | format(const spdlog::details::log_msg & msg, const std::tm & , spdlog::memory_buf_t & dest) override |
std::unique_ptr< custom_flag_formatter > | clone() const override |
Public Functions Documentation
function format
inline void format(
const spdlog::details::log_msg & msg,
const std::tm & ,
spdlog::memory_buf_t & dest
) override
function clone
inline std::unique_ptr< custom_flag_formatter > clone() const override
6.1.4.1.3 - AclLog::ThreadNameFormatterFlag
AclLog::ThreadNameFormatterFlag
Inherits from spdlog::custom_flag_formatter
Public Functions
Name | |
---|---|
void | format(const spdlog::details::log_msg & , const std::tm & , spdlog::memory_buf_t & dest) override |
std::unique_ptr< custom_flag_formatter > | clone() const override |
Public Functions Documentation
function format
inline void format(
const spdlog::details::log_msg & ,
const std::tm & ,
spdlog::memory_buf_t & dest
) override
function clone
inline std::unique_ptr< custom_flag_formatter > clone() const override
6.1.4.1.4 - AlignedAudioFrame
AlignedAudioFrame
AlignedAudioFrame is a frame of interleaved floating point audio samples with a given number of channels.
#include <AlignedFrame.h>
Public Attributes
Name | |
---|---|
std::vector< float > | mSamples |
uint8_t | mNumberOfChannels |
uint32_t | mNumberOfSamplesPerChannel |
Public Attributes Documentation
variable mSamples
std::vector< float > mSamples;
variable mNumberOfChannels
uint8_t mNumberOfChannels = 0;
variable mNumberOfSamplesPerChannel
uint32_t mNumberOfSamplesPerChannel = 0;
6.1.4.1.5 - AlignedFrame
AlignedFrame
A frame of aligned data that is passed to the rendering engine from the MediaReceiver. A DataFrame contains a time stamped frame of media, which might be video, audio and auxiliary data such as subtitles. A single DataFrame can contain one or multiple types of media. Which media types are included can be probed by nullptr-checking/size checking the data members. The struct has ownership of all data pointers included. The struct includes all logic for freeing the resources held by this struct and the user should therefore just make sure the struct itself is deallocated to ensure all resources are freed.
#include <AlignedFrame.h>
Public Functions
Name | |
---|---|
AlignedFrame() =default | |
~AlignedFrame() =default | |
AlignedFrame(AlignedFrame const & ) =delete | |
AlignedFrame & | operator=(AlignedFrame const & ) =delete |
std::shared_ptr< AlignedFrame > | makeShallowCopy() const Make a shallow copy of this AlignedFrame (video and audio pointers will point to the same video and audio memory buffers as in the frame copied from) |
Public Attributes
Name | |
---|---|
int64_t | mCaptureTimestamp |
int64_t | mRenderingTimestamp |
std::shared_ptr< DeviceMemory > | mVideoFrame |
PixelFormat | mPixelFormat |
uint32_t | mFrameRateN |
uint32_t | mFrameRateD |
uint32_t | mWidth |
uint32_t | mHeight |
AlignedAudioFramePtr | mAudioFrame |
uint32_t | mAudioSamplingFrequency |
Public Functions Documentation
function AlignedFrame
AlignedFrame() =default
function ~AlignedFrame
~AlignedFrame() =default
function AlignedFrame
AlignedFrame(
AlignedFrame const &
) =delete
function operator=
AlignedFrame & operator=(
AlignedFrame const &
) =delete
function makeShallowCopy
std::shared_ptr< AlignedFrame > makeShallowCopy() const
Make a shallow copy of this AlignedFrame (video and audio pointers will point to the same video and audio memory buffers as in the frame copied from)
Return: A pointer to a new frame, which is a shallow copy of the old one, pointing to the same video and audio memory buffers
Public Attributes Documentation
variable mCaptureTimestamp
int64_t mCaptureTimestamp = 0;
The TAI timestamp in microseconds since the TAI epoch when this frame was captured by the ingest
variable mRenderingTimestamp
int64_t mRenderingTimestamp = 0;
The TAI timestamp in microseconds since the TAI epoch when this frame should be delivered to the rendering engine
variable mVideoFrame
std::shared_ptr< DeviceMemory > mVideoFrame = nullptr;
variable mPixelFormat
PixelFormat mPixelFormat = PixelFormat::kUnknown;
variable mFrameRateN
uint32_t mFrameRateN = 0;
variable mFrameRateD
uint32_t mFrameRateD = 0;
variable mWidth
uint32_t mWidth = 0;
variable mHeight
uint32_t mHeight = 0;
variable mAudioFrame
AlignedAudioFramePtr mAudioFrame = nullptr;
variable mAudioSamplingFrequency
uint32_t mAudioSamplingFrequency = 0;
6.1.4.1.6 - ControlDataCommon::ConnectionStatus
ControlDataCommon::ConnectionStatus
Connection status struct containing information about a connection event.
#include <ControlDataCommon.h>
Public Functions
Name | |
---|---|
ConnectionStatus() =default | |
ConnectionStatus(ConnectionType mConnectionType, const std::string & mIp, uint16_t mPort) |
Public Attributes
Name | |
---|---|
ConnectionType | mConnectionType |
std::string | mIP |
uint16_t | mPort |
Public Functions Documentation
function ConnectionStatus
ConnectionStatus() =default
function ConnectionStatus
inline ConnectionStatus(
ConnectionType mConnectionType,
const std::string & mIp,
uint16_t mPort
)
Public Attributes Documentation
variable mConnectionType
ConnectionType mConnectionType = ConnectionType::DISCONNECTED;
variable mIP
std::string mIP;
variable mPort
uint16_t mPort = 0;
6.1.4.1.7 - ControlDataCommon::Response
ControlDataCommon::Response
A response from a ControlDataReceiver to a request. The UUID tells which receiver the response is sent from.
#include <ControlDataCommon.h>
Public Attributes
Name | |
---|---|
std::vector< uint8_t > | mMessage The actual message. |
uint64_t | mRequestId The ID of the request this is a response to. |
std::string | mFromUUID The UUID of the receiver this response was sent from. |
Public Attributes Documentation
variable mMessage
std::vector< uint8_t > mMessage;
The actual message.
variable mRequestId
uint64_t mRequestId = 0;
The ID of the request this is a response to.
variable mFromUUID
std::string mFromUUID;
The UUID of the receiver this response was sent from.
6.1.4.1.8 - ControlDataCommon::StatusMessage
ControlDataCommon::StatusMessage
A status message from a ControlDataReceiver. The UUID tells which receiver the message is sent from.
#include <ControlDataCommon.h>
Public Attributes
Name | |
---|---|
std::vector< uint8_t > | mMessage The actual message. |
std::string | mFromUUID The UUID of the receiver this message was sent from. |
Public Attributes Documentation
variable mMessage
std::vector< uint8_t > mMessage;
The actual message.
variable mFromUUID
std::string mFromUUID;
The UUID of the receiver this message was sent from.
6.1.4.1.9 - ControlDataSender
ControlDataSender
A ControlDataSender can send control signals to one or more receivers using a network connection. A single ControlDataSender can connect to multiple receivers, all identified by a UUID. The class is controlled using an ISystemControllerInterface; this interface is responsible for setting up connections to receivers. The ControlDataSender can send asynchronous requests to (all) the receivers and get a response back. Each response is identified with a request ID as well as the UUID of the responding receiver. The ControlDataSender can also receive status messages from the receivers.
#include <ControlDataSender.h>
Public Classes
Name | |
---|---|
struct | Settings Settings for a ControlDataSender. |
Public Functions
Name | |
---|---|
ControlDataSender() Default constructor, creates an empty object. | |
~ControlDataSender() Destructor. Will disconnect from the connected receivers and close the System controller connection. | |
bool | configure(const std::shared_ptr< ISystemControllerInterface > & controllerInterface, const Settings & settings) Configure this ControlDataSender and connect it to the System Controller. This method will fail in case the ISystemControllerInterface has already been connected to the controller by another component, as such interface can only be used by one component. |
bool | sendRequestToReceivers(const std::vector< uint8_t > & request, uint64_t & requestId) Send a request to all the connected ControlDataReceivers asynchronously. The responses will be sent to the response callback. |
bool | sendMultiRequestToReceivers(const std::vector< std::vector< uint8_t >> & request, uint64_t & requestId) Send a multi-message request to all the connected ControlDataReceivers asynchronously. The responses will be sent to the response callback. |
ControlDataSender(ControlDataSender const & ) =delete | |
ControlDataSender & | operator=(ControlDataSender const & ) =delete |
static std::string | getVersion() Get application version. |
Public Functions Documentation
function ControlDataSender
ControlDataSender()
Default constructor, creates an empty object.
function ~ControlDataSender
~ControlDataSender()
Destructor. Will disconnect from the connected receivers and close the System controller connection.
function configure
bool configure(
const std::shared_ptr< ISystemControllerInterface > & controllerInterface,
const Settings & settings
)
Configure this ControlDataSender and connect it to the System Controller. This method will fail in case the ISystemControllerInterface has already been connected to the controller by another component, as such interface can only be used by one component.
Parameters:
- controllerInterface The interface to the System controller, used for communicating with this ControlDataSender
- settings The Settings to use for this ControlDataSender
Return: True on success, false otherwise
function sendRequestToReceivers
bool sendRequestToReceivers(
const std::vector< uint8_t > & request,
uint64_t & requestId
)
Send a request to all the connected ControlDataReceivers asynchronously. The responses will be sent to the response callback.
Parameters:
- request The request message
- requestId The unique identifier of this request. Used to identify the async responses.
Return: True if the request was successfully sent, false otherwise
function sendMultiRequestToReceivers
bool sendMultiRequestToReceivers(
const std::vector< std::vector< uint8_t >> & request,
uint64_t & requestId
)
Send a multi-message request to all the connected ControlDataReceivers asynchronously. The responses will be sent to the response callback.
Parameters:
- request The request message
- requestId The unique identifier of this request. Used to identify the async responses.
Return: True if the request was successfully sent, false otherwise
function ControlDataSender
ControlDataSender(
ControlDataSender const &
) =delete
function operator=
ControlDataSender & operator=(
ControlDataSender const &
) =delete
function getVersion
static std::string getVersion()
Get application version.
Return: a string with the current version, e.g. “6.0.0-39-g60a35937”
6.1.4.1.10 - ControlDataSender::Settings
ControlDataSender::Settings
Settings for a ControlDataSender.
#include <ControlDataSender.h>
Public Attributes
Name | |
---|---|
std::function< void(const ControlDataCommon::ConnectionStatus &)> | mConnectionStatusCallback |
std::function< void(const ControlDataCommon::Response &)> | mResponseCallback |
std::function< void(const ControlDataCommon::StatusMessage &)> | mStatusMessageCallback |
Public Attributes Documentation
variable mConnectionStatusCallback
std::function< void(const ControlDataCommon::ConnectionStatus &)> mConnectionStatusCallback;
Callback for connection events.
variable mResponseCallback
std::function< void(const ControlDataCommon::Response &)> mResponseCallback;
Callback for responses to requests sent from this ControlDataSender.
variable mStatusMessageCallback
std::function< void(const ControlDataCommon::StatusMessage &)> mStatusMessageCallback;
Callback for status messages from the connected ControlDataReceivers.
6.1.4.1.11 - DeviceMemory
DeviceMemory
RAII class for a CUDA memory buffer.
#include <DeviceMemory.h>
Public Functions
Name | |
---|---|
DeviceMemory() =default Default constructor, creates an empty object, without allocating any memory on the device. | |
DeviceMemory(size_t numberOfBytes) Constructor allocating the required number of bytes. | |
DeviceMemory(size_t numberOfBytes, cudaStream_t cudaStream) Constructor allocating the required number of bytes by making an async allocation. The allocation will be put in the given CUDA stream. | |
DeviceMemory(void * deviceMemory) Constructor taking ownership of an already allocated CUDA memory pointer. This class will free the pointer once it goes out of scope. | |
bool | allocateMemory(size_t numberOfBytes) Allocates device memory. The memory allocated will automatically be freed by the destructor. |
bool | allocateMemoryAsync(size_t numberOfBytes, cudaStream_t cudaStream) Allocates device memory. The memory allocated will automatically be freed by the destructor. |
bool | reallocateMemory(size_t numberOfBytes) Reallocates device memory. Already existing memory allocation will be freed before the new allocation is made. In case this DeviceMemory has no earlier memory allocation, this method will just allocate new CUDA memory and return a success status. |
bool | reallocateMemoryAsync(size_t numberOfBytes, cudaStream_t cudaStream) Asynchronously reallocate device memory. Already existing memory allocation will be freed before the new allocation is made. In case this DeviceMemory has no earlier memory allocation, this method will just allocate new CUDA memory and return a success status. |
bool | allocateAndResetMemory(size_t numberOfBytes) Allocates device memory and resets all bytes to zeroes. The memory allocated will automatically be freed by the destructor. |
bool | allocateAndResetMemoryAsync(size_t numberOfBytes, cudaStream_t cudaStream) Allocates device memory and resets all bytes to zeroes. The memory allocated will automatically be freed by the destructor. |
bool | freeMemory() Free the device memory held by this class. Calling this when no memory is allocated is a no-op. |
bool | freeMemoryAsync(cudaStream_t cudaStream) Deallocate memory asynchronously, in a given CUDA stream. Calling this when no memory is allocated is a no-op. |
void | setFreeingCudaStream(cudaStream_t cudaStream) Set which CUDA stream to use for freeing this DeviceMemory. In case the DeviceMemory already holds a CUDA stream to use for freeing the memory, this will be overwritten. |
~DeviceMemory() Destructor, frees the internal CUDA memory. | |
template <typename T> T * | getDevicePointer() const |
size_t | getSize() const |
DeviceMemory(DeviceMemory && other) | |
DeviceMemory & | operator=(DeviceMemory && other) |
void | swap(DeviceMemory & other) |
DeviceMemory(DeviceMemory const & ) =delete DeviceMemory is not copyable. | |
DeviceMemory | operator=(DeviceMemory const & ) =delete |
Public Functions Documentation
function DeviceMemory
DeviceMemory() =default
Default constructor, creates an empty object, without allocating any memory on the device.
function DeviceMemory
explicit DeviceMemory(
size_t numberOfBytes
)
Constructor allocating the required number of bytes.
Parameters:
- numberOfBytes Number of bytes to allocate
Exceptions:
- std::runtime_error In case the allocation failed
function DeviceMemory
explicit DeviceMemory(
size_t numberOfBytes,
cudaStream_t cudaStream
)
Constructor allocating the required number of bytes by making an async allocation. The allocation will be put in the given CUDA stream.
Parameters:
- numberOfBytes Number of bytes to allocate
- cudaStream The CUDA stream to put the async allocation in. A reference to this stream will also be saved internally and used to asynchronously free the memory when the instance goes out of scope.
Exceptions:
- std::runtime_error In case the async allocation failed to be put in queue.
Note:
- The method will return as soon as the allocation is put in queue in the CUDA stream, i.e. before the actual allocation is made. Using this DeviceMemory is only valid as long as it is used in the same CUDA stream, or, in case another stream is used, only if that stream is first synchronized with respect to cudaStream.
- In case the freeMemoryAsync method is not explicitly called, the memory will be freed synchronously when this DeviceMemory instance goes out of scope, meaning that the entire GPU is synchronized, which will impact performance negatively.
function DeviceMemory
explicit DeviceMemory(
void * deviceMemory
)
Constructor taking ownership of an already allocated CUDA memory pointer. This class will free the pointer once it goes out of scope.
Parameters:
- deviceMemory CUDA memory pointer to take ownership over.
function allocateMemory
bool allocateMemory(
size_t numberOfBytes
)
Allocates device memory. The memory allocated will automatically be freed by the destructor.
Parameters:
- numberOfBytes Number of bytes to allocate
Return: True on success, false if there is already memory allocated by this instance, or if the CUDA malloc failed.
function allocateMemoryAsync
bool allocateMemoryAsync(
size_t numberOfBytes,
cudaStream_t cudaStream
)
Allocates device memory. The memory allocated will automatically be freed by the destructor.
Parameters:
- numberOfBytes Number of bytes to allocate
- cudaStream The CUDA stream to use for the allocation. A reference to this stream will also be saved internally and used to asynchronously free the memory when the instance goes out of scope.
Return: True on success, false if there is already memory allocated by this instance, or if the CUDA malloc failed.
function reallocateMemory
bool reallocateMemory(
size_t numberOfBytes
)
Reallocates device memory. Already existing memory allocation will be freed before the new allocation is made. In case this DeviceMemory has no earlier memory allocation, this method will just allocate new CUDA memory and return a success status.
Parameters:
- numberOfBytes Number of bytes to allocate in the new allocation
Return: True on success, false if CUDA free or CUDA malloc failed.
function reallocateMemoryAsync
bool reallocateMemoryAsync(
size_t numberOfBytes,
cudaStream_t cudaStream
)
Asynchronously reallocate device memory. Already existing memory allocation will be freed before the new allocation is made. In case this DeviceMemory has no earlier memory allocation, this method will just allocate new CUDA memory and return a success status.
Parameters:
- numberOfBytes Number of bytes to allocate in the new allocation
- cudaStream The CUDA stream to use for the allocation and freeing of memory. A reference to this stream will also be saved internally and used to asynchronously free the memory when this instance goes out of scope.
Return: True on success, false if CUDA free or CUDA malloc failed.
function allocateAndResetMemory
bool allocateAndResetMemory(
size_t numberOfBytes
)
Allocates device memory and resets all bytes to zeroes. The memory allocated will automatically be freed by the destructor.
Parameters:
- numberOfBytes Number of bytes to allocate
Return: True on success, false if there is already memory allocated by this instance, or if any of the CUDA operations failed.
function allocateAndResetMemoryAsync
bool allocateAndResetMemoryAsync(
size_t numberOfBytes,
cudaStream_t cudaStream
)
Allocates device memory and resets all bytes to zeroes. The memory allocated will automatically be freed by the destructor.
Parameters:
- numberOfBytes Number of bytes to allocate
- cudaStream The CUDA stream to use for the allocation and resetting. A reference to this stream will also be saved internally and used to asynchronously free the memory when the instance goes out of scope.
Return: True on success, false if there is already memory allocated by this instance, or if any of the CUDA operations failed.
function freeMemory
bool freeMemory()
Free the device memory held by this class. Calling this when no memory is allocated is a no-op.
Return: True in case the memory was successfully freed (or not allocated to begin with), false otherwise.
Note: This method will free the memory in a synchronous fashion, synchronizing the entire CUDA context and ignoring the internally saved CUDA stream reference in case one exists. For async freeing of the memory, use freeMemoryAsync instead.
function freeMemoryAsync
bool freeMemoryAsync(
cudaStream_t cudaStream
)
Deallocate memory asynchronously, in a given CUDA stream. Calling this when no memory is allocated is a no-op.
Parameters:
- cudaStream The CUDA stream to free the memory asynchronously in
Return: True in case the memory deallocation request was successfully put in queue in the CUDA stream.
Note:
- The method will return as soon as the deallocation is put in queue in the CUDA stream, i.e. before the actual deallocation is made.
- It is the programmer’s responsibility to ensure this DeviceMemory is not used in another CUDA stream before this method is called. In case it is used in another CUDA stream, sufficient synchronization must be made before calling this method (and a very good reason given for not freeing the memory in that CUDA stream instead)
function setFreeingCudaStream
void setFreeingCudaStream(
cudaStream_t cudaStream
)
Set which CUDA stream to use for freeing this DeviceMemory. In case the DeviceMemory already holds a CUDA stream to use for freeing the memory, this will be overwritten.
Parameters:
- cudaStream The new CUDA stream to use for freeing the memory when the destructor is called.
Note: It is the programmer’s responsibility to ensure this DeviceMemory is not used in another CUDA stream before this instance is destructed. In case it is used in another CUDA stream, sufficient synchronization must be made before setting this as the new CUDA stream to use when freeing the memory.
function ~DeviceMemory
~DeviceMemory()
Destructor, frees the internal CUDA memory.
function getDevicePointer
template <typename T =uint8_t>
inline T * getDevicePointer() const
Template Parameters:
- T The pointer type to return
Return: the CUDA memory pointer handled by this class. Nullptr in case no memory is allocated.
function getSize
size_t getSize() const
Return: The size of the CUDA memory allocation held by this class.
function DeviceMemory
DeviceMemory(
DeviceMemory && other
)
function operator=
DeviceMemory & operator=(
DeviceMemory && other
)
function swap
void swap(
DeviceMemory & other
)
function DeviceMemory
DeviceMemory(
DeviceMemory const &
) =delete
DeviceMemory is not copyable.
function operator=
DeviceMemory operator=(
DeviceMemory const &
) =delete
6.1.4.1.12 - IControlDataReceiver
IControlDataReceiver
IControlDataReceiver is the interface class for the control data receiver. An IControlDataReceiver can receive messages from a sender or other IControlDataReceivers using a network connection. It can also connect to and forward the incoming request messages to other receivers. The connections to the sender and the other receivers are controlled by an ISystemControllerInterface instance. The ControlDataReceiver has a receiving (or listening) side, as well as a sending side. The listening side can listen to one single network port and have multiple ControlDataSenders and ControlDataReceivers connected to that port to receive requests from them. On the sending side of the ControlDataReceiver, it can be connected to the listening side of other ControlDataReceivers, used to forward all incoming messages to that receiver, as well as sending its own requests.
#include <IControlDataReceiver.h>
Public Classes
Name | |
---|---|
struct | IncomingRequest An incoming request to this ControlDataReceiver. |
struct | ReceiverResponse A response message to a request. |
struct | Settings Settings for a ControlDataReceiver. |
Public Functions
Name | |
---|---|
virtual | ~IControlDataReceiver() =default Destructor. |
virtual bool | configure(const Settings & settings) =0 Configure this instance. |
virtual bool | sendStatusMessageToSender(const std::vector< uint8_t > & message) =0 Send a status message to the (directly or indirectly) connected ControlDataSender. In case this ControlDataReceiver has another ControlDataReceiver as sender, that receiver will forward the status message to the sender. |
virtual bool | sendRequestToReceivers(const std::vector< uint8_t > & request, uint64_t & requestId) =0 Send a request to the connected ControlDataReceivers asynchronously. This request will only be sent to ControlDataReceivers on the sending side of this receiver. In case a receiver is located between this ControlDataReceiver and the sender, neither of those will see this request. The response will be sent to the response callback asynchronously. |
virtual bool | sendMultiRequestToReceivers(const std::vector< std::vector< uint8_t >> & request, uint64_t & requestId) =0 Send a multi-message request to the connected ControlDataReceivers asynchronously. This request will only be sent to ControlDataReceivers on the sending side of this receiver. In case a receiver is located between this ControlDataReceiver and the sender, neither of those will see this request. The response will be sent to the response callback asynchronously. |
Detailed Description
class IControlDataReceiver;
IControlDataReceiver is the interface class for the control data receiver. An IControlDataReceiver can receive messages from a sender or other IControlDataReceivers using a network connection. It can also connect to and forward the incoming request messages to other receivers. The connections to the sender and the other receivers are controlled by an ISystemControllerInterface instance. The ControlDataReceiver has a receiving (or listening) side, as well as a sending side. The listening side can listen to one single network port and have multiple ControlDataSenders and ControlDataReceivers connected to that port to receive requests from them. On the sending side of the ControlDataReceiver, it can be connected to the listening side of other ControlDataReceivers, used to forward all incoming messages to that receiver, as well as sending its own requests.
Each ControlDataReceiver can be configured to have a certain message delay. This delay parameter is set when the System controller instructs the ControlDataReceiver to start listening for incoming connections from senders (or sending ControlDataReceiver). Each incoming request will then be delayed and delivered according to the parameter, compared to the send timestamp in the message. In case multiple receivers are chained, like sender->receiver1->receiver2, receiver1 will delay the incoming messages from the sender based on when they were sent from the sender. Furthermore, receiver2 will delay them compared to when they were sent from receiver1.
An IControlDataReceiver can send status messages back to the sender. In case of chained receivers, the message will be forwarded back to the sender. A user of the ControlDataReceiver can register callbacks to receive requests and status messages. There is also an optional “preview” callback that is useful in case the incoming messages are delayed (have a delay > 0). This callback will then be called as soon as the request message arrives, to allow the user to prepare for when the actual delayed request callback is called.
Public Functions Documentation
function ~IControlDataReceiver
virtual ~IControlDataReceiver() =default
Destructor.
function configure
virtual bool configure(
const Settings & settings
) =0
Configure this instance.
Parameters:
- settings The settings to use for this receiver
Return: True on success, false otherwise
function sendStatusMessageToSender
virtual bool sendStatusMessageToSender(
const std::vector< uint8_t > & message
) =0
Send a status message to the (directly or indirectly) connected ControlDataSender. In case this ControlDataReceiver has another ControlDataReceiver as sender, that receiver will forward the status message to the sender.
Parameters:
- message The status message
Return: True in case the message was sent successfully, false otherwise.
function sendRequestToReceivers
virtual bool sendRequestToReceivers(
const std::vector< uint8_t > & request,
uint64_t & requestId
) =0
Send a request to the connected ControlDataReceivers asynchronously. This request will only be sent to ControlDataReceivers on the sending side of this receiver. In case a receiver is located between this ControlDataReceiver and the sender, neither of those will see this request. The response will be sent to the response callback asynchronously.
Parameters:
- request The request message
- requestId The unique identifier of this request. Used to identify the async response.
Return: True if the request was successfully sent, false otherwise
function sendMultiRequestToReceivers
virtual bool sendMultiRequestToReceivers(
const std::vector< std::vector< uint8_t >> & request,
uint64_t & requestId
) =0
Send a multi-message request to the connected ControlDataReceivers asynchronously. This request will only be sent to ControlDataReceivers on the sending side of this receiver. In case a receiver is located between this ControlDataReceiver and the sender, neither of those will see this request. The response will be sent to the response callback asynchronously.
Parameters:
- request The request message
- requestId The unique identifier of this request. Used to identify the async response.
Return: True if the request was successfully sent, false otherwise
6.1.4.1.13 - IControlDataReceiver::IncomingRequest
IControlDataReceiver::IncomingRequest
An incoming request to this ControlDataReceiver.
#include <IControlDataReceiver.h>
Public Attributes
Name | |
---|---|
std::vector< std::vector< uint8_t > > | mMessages The actual messages. |
std::string | mSenderUUID UUID of the sender/forwarder that sent the request to this ControlDataReceiver. |
std::string | mRequesterUUID UUID of the requester (the original sender, creating the request) |
uint64_t | mRequestID The requester’s unique id of this request. |
int64_t | mSenderTimestampUs The TAI timestamp when this message was sent from the sender/forwarder in micro sec since TAI epoch. |
int64_t | mDeliveryTimestampUs The TAI timestamp when this message should be delivered, in micro sec since TAI epoch. |
Public Attributes Documentation
variable mMessages
std::vector< std::vector< uint8_t > > mMessages;
The actual messages.
variable mSenderUUID
std::string mSenderUUID;
UUID of the sender/forwarder that sent the request to this ControlDataReceiver.
variable mRequesterUUID
std::string mRequesterUUID;
UUID of the requester (the original sender, creating the request)
variable mRequestID
uint64_t mRequestID = 0;
The requester’s unique id of this request.
variable mSenderTimestampUs
int64_t mSenderTimestampUs = 0;
The TAI timestamp when this message was sent from the sender/forwarder in micro sec since TAI epoch.
variable mDeliveryTimestampUs
int64_t mDeliveryTimestampUs = 0;
The TAI timestamp when this message should be delivered, in micro sec since TAI epoch.
6.1.4.1.14 - IControlDataReceiver::ReceiverResponse
IControlDataReceiver::ReceiverResponse
A response message to a request.
#include <IControlDataReceiver.h>
Public Attributes
Name | |
---|---|
std::vector< uint8_t > | mMessage |
Public Attributes Documentation
variable mMessage
std::vector< uint8_t > mMessage;
6.1.4.1.15 - IControlDataReceiver::Settings
IControlDataReceiver::Settings
Settings for a ControlDataReceiver.
#include <IControlDataReceiver.h>
Public Attributes
Name | |
---|---|
std::string | mProductionPipelineUUID UUID of the Production Pipeline component this Receiver is a part of. |
std::function< void(const IncomingRequest &)> | mPreviewIncomingRequestCallback Callback called as soon as this ControlDataReceiver gets the request, as a “preview” of what requests will be delivered. |
std::function< ReceiverResponse(const IncomingRequest &)> | mIncomingRequestCallback The actual callback used when this ControlDataReceiver gets a request and wants a response back. |
std::function< void(const ControlDataCommon::ConnectionStatus &)> | mConnectionStatusCallback Callback for connection events. |
std::function< void(const ControlDataCommon::Response &)> | mResponseCallback Callback for responses to requests sent from this receiver. |
Public Attributes Documentation
variable mProductionPipelineUUID
std::string mProductionPipelineUUID;
UUID of the Production Pipeline component this Receiver is a part of.
variable mPreviewIncomingRequestCallback
std::function< void(const IncomingRequest &)> mPreviewIncomingRequestCallback;
Callback called as soon as this ControlDataReceiver gets the request, as a “preview” of what requests will be delivered.
variable mIncomingRequestCallback
std::function< ReceiverResponse(const IncomingRequest &)> mIncomingRequestCallback;
The actual callback used when this ControlDataReceiver gets a request and wants a response back. This callback will delay and deliver the request according to this receiver’s configured delay.
variable mConnectionStatusCallback
std::function< void(const ControlDataCommon::ConnectionStatus &)> mConnectionStatusCallback;
Callback for connection events.
variable mResponseCallback
std::function< void(const ControlDataCommon::Response &)> mResponseCallback;
Callback for responses to requests sent from this receiver.
6.1.4.1.16 - IMediaStreamer
IMediaStreamer
IMediaStreamer is an interface class for MediaStreamers, that can take a single stream of uncompressed video and/or audio frames and encode and output it in some way. This output can either be a stream to a network or writing the data down to a file on the hard drive. This class is configured from two interfaces. The input configuration (input video resolution, frame rate, pixel format, number of audio channels…) is made through this C++ API. The output stream is then started from the System Controller. Any of these configurations can be made first. The actual output stream will start once the first call to outputData is made.
#include <IMediaStreamer.h>
Public Classes
Name | |
---|---|
struct | Configuration The input configuration of the frames that will be sent to this MediaStreamer. The output stream configuration is made from the System controller via the ISystemControllerInterface. |
struct | Settings Settings used when creating a new MediaStreamer. |
Public Functions
Name | |
---|---|
virtual | ~IMediaStreamer() =default Destructor. |
virtual bool | configure(const std::string & uuid, const Settings & settings, CUcontext cudaContext) =0 Configure this MediaStreamer. This must be called before any call to the setInputFormatAndStart method. |
virtual bool | setInputFormatAndStart(const Configuration & configuration) =0 Set the input format of this MediaStreamer and start the streamer. The configure method must be called first. |
virtual bool | stopAndResetFormat() =0 Stop streaming and reset the format. A call to this method will stop any output streams set up by the ISystemControllerInterface and reset the input format set by the setInputFormatAndStart method. |
virtual bool | hasFormatAndIsRunning() const =0 |
virtual bool | hasOpenOutputStream() const =0 |
virtual bool | outputData(const AlignedFramePtr & frame) =0 Output data through this streamer. The AlignedFrame::mRenderingTimestamp of the frame will be used as PTS when encoding the uncompressed frame. |
Detailed Description
class IMediaStreamer;
IMediaStreamer is an interface class for MediaStreamers, that can take a single stream of uncompressed video and/or audio frames and encode and output it in some way. This output can either be a stream to a network or writing the data down to a file on the hard drive. This class is configured from two interfaces. The input configuration (input video resolution, frame rate, pixel format, number of audio channels…) is made through this C++ API. The output stream is then started from the System Controller. Any of these configurations can be made first. The actual output stream will start once the first call to outputData is made.
Public Functions Documentation
function ~IMediaStreamer
virtual ~IMediaStreamer() =default
Destructor.
function configure
virtual bool configure(
const std::string & uuid,
const Settings & settings,
CUcontext cudaContext
) =0
Configure this MediaStreamer. This must be called before any call to the setInputFormatAndStart method.
Parameters:
- uuid UUID of this MediaStreamer
- settings Settings for this MediaStreamer
- cudaContext The CUDA context to use for this MediaStreamer. The frames passed to this instance should be valid in this CUDA context. The streamer will use this context for preprocessing and encoding.
Return: True if the streamer was successfully configured
function setInputFormatAndStart
virtual bool setInputFormatAndStart(
const Configuration & configuration
) =0
Set the input format of this MediaStreamer and start the streamer. The configure method must be called before this method is called. This method must be called before any call to outputData. If the format should be reset, the stopAndResetFormat method should be called first, and then this method can be called again to set the new format.
Parameters:
- configuration The configuration with the format of the frames that will be sent to this MediaStreamer
Return: True if the streamer was successfully started, false otherwise
function stopAndResetFormat
virtual bool stopAndResetFormat() =0
Stop streaming and reset the format. A call to this method will stop any output streams set up by the ISystemControllerInterface and reset the input format set by the setInputFormatAndStart method. The connection to the ISystemControllerInterface will be kept.
Return: True if the stream was successfully stopped and the format reset, or if the format was not set before this method was called; false on error.
function hasFormatAndIsRunning
virtual bool hasFormatAndIsRunning() const =0
Return: True if the input format is set and the MediaStreamer is running, false otherwise
function hasOpenOutputStream
virtual bool hasOpenOutputStream() const =0
Return: True if the output stream of the MediaStreamer is currently open and outputting data, false otherwise. In case this returns false, all frames passed to outputData will be discarded without being encoded, as there is no stream to output them to. This method can be used to check whether a frame even needs to be produced by the rendering engine. Note, however, that the outputData method will still log the frames sent to it as received, even if they are not encoded while the output stream is closed.
function outputData
virtual bool outputData(
const AlignedFramePtr & frame
) =0
Output data through this streamer. The AlignedFrame::mRenderingTimestamp of the frame will be used as PTS when encoding the uncompressed frame.
Parameters:
- frame The data frame to output, with video data in CUDA memory
Return: True if the frame was accepted (but not necessarily streamed, in case the output stream has not been set up by the ISystemControllerInterface); false in case the frame did not match the configuration made in the setInputFormatAndStart method, or if the format has not been set by a call to that method.
Note: The DeviceMemory in the AlignedFrame passed to this method must not be in use by any CUDA stream. In case the memory has been used in kernels in another CUDA stream, make sure to first synchronize with the stream, before passing it over to the MediaStreamer.
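The call-order contract described above (configure, then setInputFormatAndStart, then outputData, with stopAndResetFormat allowing the format to be set again) can be sketched as a small state machine. This is an illustration only, not the SDK implementation; the class name and the simplified signatures are assumptions made for the sketch.

```cpp
#include <string>

// Minimal sketch of the documented IMediaStreamer call order:
//   configure() -> setInputFormatAndStart() -> outputData()* -> stopAndResetFormat()
// Not part of the SDK; encoding and streaming details are omitted.
class StreamerLifecycle {
public:
    bool configure(const std::string& uuid) {
        if (mConfigured) return false;  // configure only once
        mUuid = uuid;
        mConfigured = true;
        return true;
    }
    bool setInputFormatAndStart() {
        // configure() must have been called; a running streamer must be
        // stopped via stopAndResetFormat() before the format can be reset.
        if (!mConfigured || mRunning) return false;
        mRunning = true;
        return true;
    }
    bool outputData() {
        return mRunning;  // frames are rejected until the format is set
    }
    bool stopAndResetFormat() {
        mRunning = false;  // connection to the controller is kept
        return true;       // also succeeds if the format was never set
    }
    bool hasFormatAndIsRunning() const { return mRunning; }

private:
    std::string mUuid;
    bool mConfigured = false;
    bool mRunning = false;
};
```

After stopAndResetFormat, setInputFormatAndStart may be called again with a new format, mirroring the reset behaviour described in the documentation.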
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.1.17 - IMediaStreamer::Configuration
IMediaStreamer::Configuration
The input configuration of the frames that will be sent to this MediaStreamer. The output stream configuration is made from the System controller via the ISystemControllerInterface.
#include <IMediaStreamer.h>
Public Attributes
Name | |
---|---|
PixelFormat | mIncomingPixelFormat |
uint32_t | mWidth |
uint32_t | mHeight |
uint32_t | mFrameRateN |
uint32_t | mFrameRateD |
uint32_t | mAudioSampleRate |
uint32_t | mNumAudioChannels |
Public Attributes Documentation
variable mIncomingPixelFormat
PixelFormat mIncomingPixelFormat = PixelFormat::kUnknown;
variable mWidth
uint32_t mWidth = 0;
variable mHeight
uint32_t mHeight = 0;
variable mFrameRateN
uint32_t mFrameRateN = 0;
variable mFrameRateD
uint32_t mFrameRateD = 0;
variable mAudioSampleRate
uint32_t mAudioSampleRate = 0;
variable mNumAudioChannels
uint32_t mNumAudioChannels = 0;
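As a concrete illustration, the fields above might be filled in like this for 1080p50 video with 48 kHz stereo audio. The struct below is a local mirror of IMediaStreamer::Configuration for the sketch (the real one comes from `<IMediaStreamer.h>`), and only the two PixelFormat values mentioned elsewhere in this reference are included.

```cpp
#include <cstdint>

// Local stand-ins for the SDK types, mirroring the documented defaults.
enum class PixelFormat : uint32_t { kUnknown, kRgba64Le };

struct Configuration {
    PixelFormat mIncomingPixelFormat = PixelFormat::kUnknown;
    uint32_t mWidth = 0;
    uint32_t mHeight = 0;
    uint32_t mFrameRateN = 0;       // frame rate numerator
    uint32_t mFrameRateD = 0;       // frame rate denominator
    uint32_t mAudioSampleRate = 0;  // in Hz
    uint32_t mNumAudioChannels = 0;
};

// Example: 1920x1080 at 50/1 = 50 fps, 48 kHz stereo audio.
inline Configuration make1080p50StereoConfig() {
    Configuration config;
    config.mIncomingPixelFormat = PixelFormat::kRgba64Le;
    config.mWidth = 1920;
    config.mHeight = 1080;
    config.mFrameRateN = 50;
    config.mFrameRateD = 1;
    config.mAudioSampleRate = 48000;
    config.mNumAudioChannels = 2;
    return config;
}
```

The rational frame rate (numerator/denominator) also covers non-integer rates such as 60000/1001 for 59.94 fps.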
6.1.4.1.18 - IMediaStreamer::Settings
IMediaStreamer::Settings
Settings used when creating a new MediaStreamer.
#include <IMediaStreamer.h>
Public Attributes
Name | |
---|---|
std::string | mName |
Public Attributes Documentation
variable mName
std::string mName;
6.1.4.1.19 - IngestApplication
IngestApplication
Public Classes
Name | |
---|---|
struct | Settings |
Public Functions
Name | |
---|---|
IngestApplication() Constructor, creates an empty IngestApplication without starting it. | |
~IngestApplication() Destructor. | |
bool | start(const std::shared_ptr< ISystemControllerInterface > & controllerInterface, const Settings & settings) Start this IngestApplication given an interface to the System Controller and an UUID. |
bool | stop() Stop this IngestApplication. |
std::string | getVersion() Get application version. |
std::string | getLibraryVersions() Get the versions of the libraries available at runtime, among others, CUDA version, BMD and NDI versions. |
Public Functions Documentation
function IngestApplication
IngestApplication()
Constructor, creates an empty IngestApplication without starting it.
function ~IngestApplication
~IngestApplication()
Destructor.
function start
bool start(
const std::shared_ptr< ISystemControllerInterface > & controllerInterface,
const Settings & settings
)
Start this IngestApplication given an interface to the System Controller and an UUID.
Parameters:
- controllerInterface The interface for communication with the System Controller
- settings The settings for the IngestApplication
Return: True if the IngestApplication was successfully started, false otherwise
function stop
bool stop()
Stop this IngestApplication.
Return: True if the IngestApplication was successfully stopped, false otherwise
function getVersion
static std::string getVersion()
Get application version.
Return: a string with the current version, e.g. “6.0.0-39-g60a35937”
function getLibraryVersions
static std::string getLibraryVersions()
Get the versions of the libraries available at runtime, among others, CUDA version, BMD and NDI versions.
Return: a string with the currently found versions of the libraries used by this application
6.1.4.1.20 - IngestApplication::Settings
IngestApplication::Settings
6.1.4.1.21 - ISystemControllerInterface
ISystemControllerInterface
An ISystemControllerInterface is the interface between a component and the System controller controlling the component. The interface allows for two-way communication between the component and the system controller by means of sending requests and getting responses. Classes deriving from the ISystemControllerInterface should provide the component side implementation of the communication with the system controller. This interface can be inherited and implemented by developers to connect to custom system controllers, or to directly control a component programmatically, without the need for connecting to a remote server.
#include <ISystemControllerInterface.h>
Inherited by SystemControllerConnection
Public Classes
Name | |
---|---|
struct | Callbacks A struct containing the callbacks that need to be registered by the component using this interface. |
struct | Response A response to a request, consisting of a status code and an (optional) parameters JSON object. |
Public Types
Name | |
---|---|
enum class uint32_t | StatusCode { SUCCESS = 3001, TOO_MANY_REQUESTS = 3101, UUID_ALREADY_REGISTERED = 3201, FORMAT_ERROR = 3202, ALREADY_CONFIGURED = 3203, OUT_OF_RESOURCES = 3204, NOT_FOUND = 3205, INTERNAL_ERROR = 3206, CONNECTION_FAILED = 3207, TIMEOUT_EXCEEDED = 3208, KEY_MISMATCH = 3209, UNKNOWN_REQUEST = 3210, MALFORMED_REQUEST = 3211, ALREADY_IN_USE = 3212, VERSION_MISMATCH = 3213} Status codes used in JSON response messages for Websockets. These start at 3000, since the codes below 3000 are reserved by the WebSocket specification: https://datatracker.ietf.org/doc/html/rfc6455#section-7.4.1. |
Public Functions
Name | |
---|---|
virtual | ~ISystemControllerInterface() =default Virtual destructor. |
virtual std::optional< std::string > | sendMessage(const std::string & messageTitle, const nlohmann::json & parameters) =0 Send a message containing a JSON object to the controller. |
virtual bool | registerRequestCallback(const Callbacks & callbacks) =0 Register the callbacks to call for events in this class. |
virtual bool | connect() =0 Connect to the System controller. |
virtual bool | disconnect() =0 Disconnect from the System controller. |
virtual bool | isConnected() const =0 |
virtual std::string | getUUID() const =0 |
Public Types Documentation
enum StatusCode
Enumerator | Value | Description |
---|---|---|
SUCCESS | 3001 | 3000-3099 Info/Notifications |
TOO_MANY_REQUESTS | 3101 | 3100-3199 Warnings |
UUID_ALREADY_REGISTERED | 3201 | 3200-3299 Error |
FORMAT_ERROR | 3202 | |
ALREADY_CONFIGURED | 3203 | |
OUT_OF_RESOURCES | 3204 | |
NOT_FOUND | 3205 | |
INTERNAL_ERROR | 3206 | |
CONNECTION_FAILED | 3207 | |
TIMEOUT_EXCEEDED | 3208 | |
KEY_MISMATCH | 3209 | |
UNKNOWN_REQUEST | 3210 | |
MALFORMED_REQUEST | 3211 | |
ALREADY_IN_USE | 3212 | |
VERSION_MISMATCH | 3213 |
Status codes used in JSON response messages for Websockets. These start at 3000, since the codes below 3000 are reserved by the WebSocket specification: https://datatracker.ietf.org/doc/html/rfc6455#section-7.4.1.
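Because the codes are grouped by range (3000-3099 info/notifications, 3100-3199 warnings, 3200-3299 errors), a client can classify any status code by range without enumerating every value. The helper below is an illustration of that grouping, not part of the SDK API.

```cpp
#include <cstdint>

// Severity buckets implied by the StatusCode value ranges in the table above.
enum class Severity { kInfo, kWarning, kError, kUnknown };

inline Severity classifyStatusCode(uint32_t code) {
    if (code >= 3000 && code <= 3099) return Severity::kInfo;     // e.g. SUCCESS (3001)
    if (code >= 3100 && code <= 3199) return Severity::kWarning;  // e.g. TOO_MANY_REQUESTS (3101)
    if (code >= 3200 && code <= 3299) return Severity::kError;    // e.g. VERSION_MISMATCH (3213)
    return Severity::kUnknown;  // codes below 3000 belong to the WebSocket spec itself
}
```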
Public Functions Documentation
function ~ISystemControllerInterface
virtual ~ISystemControllerInterface() =default
Virtual destructor.
function sendMessage
virtual std::optional< std::string > sendMessage(
const std::string & messageTitle,
const nlohmann::json & parameters
) =0
Send a message containing a JSON object to the controller.
Parameters:
- messageTitle The title of the status type or request
- parameters The parameters part of the JSON message
Return: Optional containing an error message on error, else nullopt in case the message was successfully sent
Reimplemented by: SystemControllerConnection::sendMessage
function registerRequestCallback
virtual bool registerRequestCallback(
const Callbacks & callbacks
) =0
Register the callbacks to call for events in this class.
Parameters:
- callbacks The callbacks to use when events in this class happen
Return: True on successful registration, false if some callback is not set or if already connected
Reimplemented by: SystemControllerConnection::registerRequestCallback
function connect
virtual bool connect() =0
Connect to the System controller.
Return: True on successful connection, false on error or if already connected
Reimplemented by: SystemControllerConnection::connect
function disconnect
virtual bool disconnect() =0
Disconnect from the System controller.
Return: True on successful disconnection, false on error or if not connected
Reimplemented by: SystemControllerConnection::disconnect
function isConnected
virtual bool isConnected() const =0
Return: True if connected to the System controller, false otherwise
Reimplemented by: SystemControllerConnection::isConnected
function getUUID
virtual std::string getUUID() const =0
Return: The UUID of this interface to the System controller
Reimplemented by: SystemControllerConnection::getUUID
6.1.4.1.22 - ISystemControllerInterface::Callbacks
ISystemControllerInterface::Callbacks
A struct containing the callbacks that need to be registered by the component using this interface.
#include <ISystemControllerInterface.h>
Public Attributes
Name | |
---|---|
std::function< Response(const std::string &, const nlohmann::json &)> | mRequestCallback |
std::function< void(uint32_t, const std::string &, const std::error_code &)> | mConnectionClosedCallback |
Public Attributes Documentation
variable mRequestCallback
std::function< Response(const std::string &, const nlohmann::json &)> mRequestCallback;
variable mConnectionClosedCallback
std::function< void(uint32_t, const std::string &, const std::error_code &)> mConnectionClosedCallback;
6.1.4.1.23 - ISystemControllerInterface::Response
ISystemControllerInterface::Response
A response to a request, consisting of a status code and an (optional) parameters JSON object.
#include <ISystemControllerInterface.h>
Public Attributes
Name | |
---|---|
StatusCode | mCode |
nlohmann::json | mParameters |
Public Attributes Documentation
variable mCode
StatusCode mCode;
variable mParameters
nlohmann::json mParameters;
6.1.4.1.24 - MediaReceiver
MediaReceiver
A MediaReceiver contains the logic for receiving, decoding and aligning incoming media sources from the Ingests. The aligned data is then delivered to the Rendering Engine, which is also responsible for setting up the MediaReceiver. The MediaReceiver has a built-in multi-view generator, which can create output streams containing composited subsets of the incoming video sources. This class is controlled using an ISystemControllerInterface provided when starting it.
#include <MediaReceiver.h>
Public Classes
Name | |
---|---|
struct | NewStreamParameters A struct containing information on the format of an incoming stream. |
struct | Settings Settings for a MediaReceiver. |
Public Types
Name | |
---|---|
enum class uint32_t | TallyBorderColor { kNone, kRed, kGreen, kYellow} Available colors for tally border in multi view. |
Public Functions
Name | |
---|---|
MediaReceiver() Default constructor. | |
~MediaReceiver() Default destructor. | |
bool | start(const std::shared_ptr< ISystemControllerInterface > & controllerInterface, CUcontext cudaContext, const Settings & settings, const ControlDataReceiver::Settings & receiverSettings) Start the MediaReceiver. This method will call connect on the System controller interface and set up the callbacks from the interface to call internal methods. |
void | stop() Stop the MediaReceiver. |
std::function< void(const AlignedFramePtr &)> | getCustomMultiViewSourceInput(uint32_t inputSlot, const std::string & name ="") This method allows the Rendering Engine to provide custom input sources to the Multi-view generator to send video streams that can be added to the multi-views. This could for instance be used for adding a “preview” of the video stream the rendering engine is about to cut to. |
bool | removeCustomMultiViewSourceInput(uint32_t inputSlot) Remove a custom multi-view generator source input earlier registered using the getCustomMultiViewSourceInput method. |
std::shared_ptr< IMediaStreamer > | createMediaStreamerOutput(const MediaStreamer::Settings & settings) Create a new MediaStreamer instance to output data from this MediaReceiver. |
bool | removeMediaStreamerOutput(const std::string & uuid) Remove a MediaStreamer created via the createMediaStreamerOutput method, by its UUID. |
std::shared_ptr< IControlDataReceiver > | getControlDataReceiver() Get a pointer to the ControlDataReceiver instance of this MediaReceiver. A call to this method will always return the same instance. |
void | setTallyBorder(uint32_t inputSlot, TallyBorderColor color) Set tally border color in the multi-views for a specific input slot. |
void | clearTallyBorder(uint32_t inputSlot) Remove tally border in the multi-views for a specific input slot. |
void | clearAllTallyBorders() Remove all tally borders. |
MediaReceiver::TallyBorderColor | getTallyBorder(uint32_t inputSlot) const Get the tally border color for an input slot. |
MediaReceiver(MediaReceiver const & ) =delete MediaReceiver is neither copyable nor movable. | |
MediaReceiver(MediaReceiver && ) =delete | |
MediaReceiver & | operator=(MediaReceiver const & ) =delete |
MediaReceiver & | operator=(MediaReceiver && ) =delete |
std::string | getVersion() Get application version. |
std::string | getLibraryVersions() Get versions of the libraries available at runtime, among others, CUDA runtime and driver versions. |
Public Types Documentation
enum TallyBorderColor
Enumerator | Value | Description |
---|---|---|
kNone | ||
kRed | ||
kGreen | ||
kYellow |
Available colors for tally border in multi view.
Public Functions Documentation
function MediaReceiver
MediaReceiver()
Default constructor.
function ~MediaReceiver
~MediaReceiver()
Default destructor.
function start
bool start(
const std::shared_ptr< ISystemControllerInterface > & controllerInterface,
CUcontext cudaContext,
const Settings & settings,
const ControlDataReceiver::Settings & receiverSettings
)
Start the MediaReceiver. This method will call connect on the System controller interface and set up the callbacks from the interface to call internal methods.
Parameters:
- controllerInterface The ISystemControllerInterface to use for this MediaReceiver. The interface should be configured (such as setting the IP address and port of the System Controller, if a server-based System Controller is used) but not connected before being passed to this method. This method will internally set the callbacks before connecting to the controller. If the controller is already connected or if the controller is not configured, this method will return false. This class will take ownership of the smart pointer.
- cudaContext The CUDA context to use for this MediaReceiver. The frames will be delivered as CUDA pointers, valid in this context, and the multi-view generator will use this context for rendering and encoding.
- settings The settings to use for the MediaReceiver.
- receiverSettings The settings to use for the ControlDataReceiver.
Return: True if the MediaReceiver was started successfully, false otherwise.
function stop
void stop()
Stop the MediaReceiver.
function getCustomMultiViewSourceInput
std::function< void(const AlignedFramePtr &)> getCustomMultiViewSourceInput(
uint32_t inputSlot,
const std::string & name =""
)
This method allows the Rendering Engine to provide custom input sources to the Multi-view generator to send video streams that can be added to the multi-views. This could for instance be used for adding a “preview” of the video stream the rendering engine is about to cut to.
Parameters:
- inputSlot The input slot this source will be “plugged in” to. The custom input sources share the input slots with the streams connected from Ingests. This means that it is probably a good idea to use higher numbered slots for these custom inputs, such as numbers from 1000, so that the lower numbers, 1 and up, can be used by the connected video cameras, as the input slot number will also be used when cutting.
- name Optional human readable name of this stream, to be presented to the System Controller.
Precondition: The AlignedFrame sent to this function must have the same pixel format as this MediaReceiver is configured to deliver to the Rendering Engine, see MediaReceiver::Settings::mDecodedFormat.
Return: A function to where the Rendering Engine should send the frames. In case the requested inputSlot is already used by another custom input source, or by a stream from an ingest, the returned function will be nullptr.
Note: Make sure no CUDA stream will write to the DeviceMemory in the AlignedFrame passed to the function. Failing to do so will lead to undefined behavior. In case another CUDA stream has written to the DeviceMemory, make sure to synchronize with that stream before passing the AlignedFrame.
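The slot-reservation behaviour described above (one delivery function per input slot, nullptr when the slot is taken) can be sketched as a small registry. This is not the SDK implementation; the frame type and class name are local stand-ins for illustration.

```cpp
#include <cstdint>
#include <functional>
#include <map>

struct AlignedFramePtr {};  // stand-in for the SDK's AlignedFramePtr

// Sketch of how getCustomMultiViewSourceInput / removeCustomMultiViewSourceInput
// could manage per-slot delivery functions.
class MultiViewInputRegistry {
public:
    std::function<void(const AlignedFramePtr&)> getInput(uint32_t inputSlot) {
        if (mUsedSlots.count(inputSlot)) return nullptr;  // slot already in use
        mUsedSlots[inputSlot] = true;
        // The returned function is where the rendering engine pushes frames.
        return [this](const AlignedFramePtr&) { ++mFramesReceived; };
    }
    bool removeInput(uint32_t inputSlot) {
        return mUsedSlots.erase(inputSlot) > 0;  // false if nothing registered
    }
    uint32_t framesReceived() const { return mFramesReceived; }

private:
    std::map<uint32_t, bool> mUsedSlots;
    uint32_t mFramesReceived = 0;
};
```

As the documentation suggests, custom inputs would typically use high slot numbers (e.g. 1000 and up) to keep the low numbers free for camera streams from the Ingests.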
function removeCustomMultiViewSourceInput
bool removeCustomMultiViewSourceInput(
uint32_t inputSlot
)
Remove a custom multi-view generator source input earlier registered using the getCustomMultiViewSourceInput method.
Parameters:
- inputSlot The input slot the custom multi-view source input is connected to, that should be removed
Return: True if the custom multi-view source input was successfully removed, false in case there was no custom input registered for the given input slot, or in case of an internal error.
function createMediaStreamerOutput
std::shared_ptr< IMediaStreamer > createMediaStreamerOutput(
const MediaStreamer::Settings & settings
)
Create a new MediaStreamer instance to output data from this MediaReceiver.
Parameters:
- settings The settings for the new MediaStreamer
Return: A shared pointer to the MediaStreamer instance in case of a successful creation, otherwise nullptr
function removeMediaStreamerOutput
bool removeMediaStreamerOutput(
const std::string & uuid
)
Remove a MediaStreamer created via the createMediaStreamerOutput method, by its UUID.
Parameters:
- uuid The UUID of the MediaStreamer to remove
Return: True if the MediaStreamer was successfully removed, false in case no MediaStreamer with the given UUID was found.
function getControlDataReceiver
std::shared_ptr< IControlDataReceiver > getControlDataReceiver()
Get a pointer to the ControlDataReceiver instance of this MediaReceiver. A call to this method will always return the same instance.
Return: Pointer to the ControlDataReceiver instance, or nullptr if called before a successful call to start.
function setTallyBorder
void setTallyBorder(
uint32_t inputSlot,
TallyBorderColor color
)
Set tally border color in the multi-views for a specific input slot.
Parameters:
- inputSlot the input slot for a source
- color the color to set
function clearTallyBorder
void clearTallyBorder(
uint32_t inputSlot
)
Remove tally border in the multi-views for a specific input slot.
Parameters:
- inputSlot the input slot for a source
function clearAllTallyBorders
void clearAllTallyBorders()
Remove all tally borders.
function getTallyBorder
MediaReceiver::TallyBorderColor getTallyBorder(
uint32_t inputSlot
) const
Get the tally border color for an input slot.
Parameters:
- inputSlot the input slot to get color for
Return: the tally border color for the given input slot
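The tally methods above can be modeled as a map from input slot to color, where a missing entry means no border (kNone). The sketch below mirrors that behaviour for illustration only; it is not the SDK implementation.

```cpp
#include <cstdint>
#include <map>

// Mirrors MediaReceiver::TallyBorderColor from the reference above.
enum class TallyBorderColor : uint32_t { kNone, kRed, kGreen, kYellow };

// Sketch of the tally border state managed by setTallyBorder,
// clearTallyBorder, clearAllTallyBorders and getTallyBorder.
class TallyState {
public:
    void setTallyBorder(uint32_t inputSlot, TallyBorderColor color) {
        mBorders[inputSlot] = color;
    }
    void clearTallyBorder(uint32_t inputSlot) { mBorders.erase(inputSlot); }
    void clearAllTallyBorders() { mBorders.clear(); }
    TallyBorderColor getTallyBorder(uint32_t inputSlot) const {
        auto it = mBorders.find(inputSlot);
        return it == mBorders.end() ? TallyBorderColor::kNone : it->second;
    }

private:
    std::map<uint32_t, TallyBorderColor> mBorders;
};
```

A control panel would typically set kRed on the slot currently on program and kGreen on the slot on preview.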
function MediaReceiver
MediaReceiver(
MediaReceiver const &
) =delete
MediaReceiver is neither copyable nor movable.
function MediaReceiver
MediaReceiver(
MediaReceiver &&
) =delete
function operator=
MediaReceiver & operator=(
MediaReceiver const &
) =delete
function operator=
MediaReceiver & operator=(
MediaReceiver &&
) =delete
function getVersion
static std::string getVersion()
Get application version.
Return: a string with the current version, e.g. “6.0.0-39-g60a35937”
function getLibraryVersions
static std::string getLibraryVersions()
Get versions of the libraries available at runtime, among others, CUDA runtime and driver versions.
Return: a string with the currently found versions of the libraries used by this application
6.1.4.1.25 - MediaReceiver::NewStreamParameters
MediaReceiver::NewStreamParameters
A struct containing information on the format of an incoming stream.
#include <MediaReceiver.h>
Public Attributes
Name | |
---|---|
uint32_t | mVideoHeight |
uint32_t | mVideoWidth |
uint32_t | mFrameRateN |
uint32_t | mFrameRateD |
uint32_t | mAudioSampleRate |
Public Attributes Documentation
variable mVideoHeight
uint32_t mVideoHeight = 0;
variable mVideoWidth
uint32_t mVideoWidth = 0;
variable mFrameRateN
uint32_t mFrameRateN = 0;
variable mFrameRateD
uint32_t mFrameRateD = 1;
variable mAudioSampleRate
uint32_t mAudioSampleRate = 0;
6.1.4.1.26 - MediaReceiver::Settings
MediaReceiver::Settings
Settings for a MediaReceiver.
#include <MediaReceiver.h>
Public Attributes
Name | |
---|---|
PixelFormat | mDecodedFormat The pixel format delivered to the rendering engine. |
std::function< std::function< void(const AlignedFramePtr &)>(uint32_t inputSlot, const std::string &streamID, const NewStreamParameters &newStreamParameters)> | mNewConnectionCallback |
std::function< void(uint32_t inputSlot)> | mClosedConnectionCallback |
std::function< std::optional< std::string >()> | mResetCallback |
bool | mUseMultiViewer Set to true if this MediaReceiver should have a multi-view generator. |
bool | mDeliverOld Set to true if the media frames should be delivered even if the delivery time has already passed, otherwise they are discarded. |
CUstream | mAlignedFrameFreeStream The CUDA stream to use for asynchronously freeing the AlignedFrames' DeviceMemory. |
Public Attributes Documentation
variable mDecodedFormat
PixelFormat mDecodedFormat = PixelFormat::kRgba64Le;
The pixel format delivered to the rendering engine.
variable mNewConnectionCallback
std::function< std::function< void(const AlignedFramePtr &)>(uint32_t inputSlot, const std::string &streamID, const NewStreamParameters &newStreamParameters)> mNewConnectionCallback;
A callback called by the MediaReceiver whenever a new connection from an Ingest is set up. The callback should return a function to which the data will be delivered. In case the Rendering engine does not want to accept the pending incoming stream (due to an unsupported format, etc.) it can return nullptr.
Parameters:
- inputSlot can be seen as a virtual SDI input on a hardware mixer and should be used by the Rendering engine to identify sources. For example, if a source is connected to input slot 5, the button “Cut to camera 5” on the control panel ought to cut to this stream. The MediaReceiver is responsible for making sure only one stream can be connected to an input slot at a time. This callback might be called multiple times with the same input slot, however, in case the stream has been disconnected and the mClosedConnectionCallback has been called for that input slot earlier.
- streamID is the identification string of the video/audio source on the Ingest. A Rendering engine might have the same StreamID connected multiple times (for instance if two different alignments are used for the same source), so this should not be used as a unique identifier for the stream.
- newStreamParameters contains information on the pending incoming stream to allow the Rendering engine to decide whether it can receive it.
Note: The internal CUDA stream is synchronized before delivering the AlignedFrames, meaning the DeviceMemory in the AlignedFrame will always contain the written data; no further synchronization is needed before the rendering engine can read the data.
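A handler in the style of mNewConnectionCallback might look like the sketch below: it inspects the pending stream's parameters and returns nullptr to reject formats it cannot handle. The types here are local stand-ins mirroring the reference (the 50 fps acceptance rule is an arbitrary assumption for illustration), and the real callback also receives the stream ID.

```cpp
#include <cstdint>
#include <functional>

struct AlignedFramePtr {};  // stand-in for the SDK's AlignedFramePtr

// Local mirror of MediaReceiver::NewStreamParameters (subset).
struct NewStreamParameters {
    uint32_t mVideoHeight = 0;
    uint32_t mVideoWidth = 0;
    uint32_t mFrameRateN = 0;
    uint32_t mFrameRateD = 1;
};

// Accept only 50 fps streams; return nullptr to reject everything else.
std::function<void(const AlignedFramePtr&)> onNewConnection(
    uint32_t /*inputSlot*/, const NewStreamParameters& params) {
    if (params.mFrameRateD == 0 ||
        params.mFrameRateN / params.mFrameRateD != 50) {
        return nullptr;  // rejected: MediaReceiver will not deliver frames
    }
    return [](const AlignedFramePtr&) {
        // Deliver the aligned frame to the rendering engine here.
    };
}
```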
variable mClosedConnectionCallback
std::function< void(uint32_t inputSlot)> mClosedConnectionCallback;
Parameters:
- inputSlot The input slot that was disconnected
A callback called whenever a connection from an Ingest has been stopped. When receiving this callback, the callback function returned by the mNewConnectionCallback will not be called anymore and can be discarded if required by the application. After this function has been called, the input slot might be reused by another incoming stream after a call to the mNewConnectionCallback.
variable mResetCallback
std::function< std::optional< std::string >()> mResetCallback;
A callback called whenever the System Controller has requested that the Rendering Engine should be reset to its initial state (connections made via the mNewConnectionCallback should be kept, however).
Return: True on successful reset, false in case something unexpected happened
variable mUseMultiViewer
bool mUseMultiViewer = false;
Set to true if this MediaReceiver should have a multi-view generator.
variable mDeliverOld
bool mDeliverOld = false;
Set to true if the media frames should be delivered even if the delivery time has already passed, otherwise they are discarded.
variable mAlignedFrameFreeStream
CUstream mAlignedFrameFreeStream = nullptr;
The CUDA stream to use for asynchronously freeing the AlignedFrames’ DeviceMemory. Set this to the rendering engine’s CUDA stream, to ensure memory is not freed before the rendering engine is done using it asynchronously in its CUDA stream. Note that the MediaReceiver instance and all AlignedFrames delivered from it must be destroyed before the CUDA stream can be destroyed.
6.1.4.1.27 - SystemControllerConnection
SystemControllerConnection
An implementation of the ISystemControllerInterface for a System controller residing in a remote server. The connection to the server uses a Websocket.
#include <SystemControllerConnection.h>
Inherits from ISystemControllerInterface
Public Classes
Name | |
---|---|
struct | Settings Settings for a SystemControllerConnection. |
Public Types
Name | |
---|---|
enum class uint32_t | ComponentType { kIngest, kPipeline, kControlPanel} Enumeration of component types the component using this SystemControllerConnection can tell the System Controller to be seen as. |
Public Functions
Name | |
---|---|
SystemControllerConnection() | |
~SystemControllerConnection() override | |
bool | configure(const Settings & settings) Configure this connection. This method should be called before calling connect. |
virtual bool | connect() override Connect to the server using the settings set with the configure method. |
virtual bool | isConnected() const override |
virtual std::string | getUUID() const override |
virtual std::optional< std::string > | sendMessage(const std::string & messageTitle, const nlohmann::json & parameters) override Send a message containing a JSON object to the system controller server. |
virtual bool | disconnect() override Disconnect from the server. |
virtual bool | registerRequestCallback(const Callbacks & callbacks) override Register callbacks to call when getting requests from the server or the server connection is lost. |
SystemControllerConnection(SystemControllerConnection const & ) =delete | |
SystemControllerConnection(SystemControllerConnection && ) =delete | |
SystemControllerConnection & | operator=(SystemControllerConnection const & ) =delete |
SystemControllerConnection & | operator=(SystemControllerConnection && ) =delete |
Additional inherited members
Public Classes inherited from ISystemControllerInterface
Name | |
---|---|
struct | Callbacks A struct containing the callbacks that need to be registered by the component using this interface. |
struct | Response A response to a request, consisting of a status code and an (optional) parameters JSON object. |
Public Types inherited from ISystemControllerInterface
Name | |
---|---|
enum class uint32_t | StatusCode { SUCCESS, TOO_MANY_REQUESTS, UUID_ALREADY_REGISTERED, FORMAT_ERROR, ALREADY_CONFIGURED, OUT_OF_RESOURCES, NOT_FOUND, INTERNAL_ERROR, CONNECTION_FAILED, TIMEOUT_EXCEEDED, KEY_MISMATCH, UNKNOWN_REQUEST, MALFORMED_REQUEST, ALREADY_IN_USE, VERSION_MISMATCH} Status codes used in JSON response messages for Websockets. These start at 3000, since the codes below 3000 are reserved by the WebSocket specification: https://datatracker.ietf.org/doc/html/rfc6455#section-7.4.1. |
Public Functions inherited from ISystemControllerInterface
Name | |
---|---|
virtual | ~ISystemControllerInterface() =default Virtual destructor. |
Public Types Documentation
enum ComponentType
Enumerator | Value | Description |
---|---|---|
kIngest | ||
kPipeline | ||
kControlPanel |
Enumeration of component types the component using this SystemControllerConnection can tell the System Controller to be seen as.
Public Functions Documentation
function SystemControllerConnection
SystemControllerConnection()
function ~SystemControllerConnection
~SystemControllerConnection() override
function configure
bool configure(
const Settings & settings
)
Configure this connection. This method should be called before calling connect.
Parameters:
- settings The settings to use when connecting to the server
Return: True if successfully configured, false otherwise
function connect
virtual bool connect() override
Connect to the server using the settings set with the configure method.
Return: True if connection was successful, false otherwise
Reimplements: ISystemControllerInterface::connect
function isConnected
virtual bool isConnected() const override
Return: True if this class is connected to the server, false otherwise
Reimplements: ISystemControllerInterface::isConnected
function getUUID
virtual std::string getUUID() const override
Return: The UUID of this interface to the System controller
Reimplements: ISystemControllerInterface::getUUID
function sendMessage
virtual std::optional< std::string > sendMessage(
const std::string & messageTitle,
const nlohmann::json & parameters
) override
Send a message containing a JSON object to the system controller server.
Parameters:
- messageTitle The title of the status type or request
- parameters The parameters part of the JSON message
Return: Optional containing an error message on error, else nullopt in case the message was successfully sent
Reimplements: ISystemControllerInterface::sendMessage
function disconnect
virtual bool disconnect() override
Disconnect from the server.
Return: True if successfully disconnected, false on internal error
Reimplements: ISystemControllerInterface::disconnect
function registerRequestCallback
virtual bool registerRequestCallback(
const Callbacks & callbacks
) override
Register callbacks to call when getting requests from the server or the server connection is lost.
Parameters:
- callbacks The callbacks to set
Return: True if successfully registered, false if a callback is not set, or if already connected to the server
Reimplements: ISystemControllerInterface::registerRequestCallback
function SystemControllerConnection
SystemControllerConnection(
SystemControllerConnection const &
) =delete
function SystemControllerConnection
SystemControllerConnection(
SystemControllerConnection &&
) =delete
function operator=
SystemControllerConnection & operator=(
SystemControllerConnection const &
) =delete
function operator=
SystemControllerConnection & operator=(
SystemControllerConnection &&
) =delete
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.1.28 - SystemControllerConnection::Settings
SystemControllerConnection::Settings
Settings for a SystemControllerConnection.
#include <SystemControllerConnection.h>
Public Attributes
Name | |
---|---|
std::string | mSystemControllerIP |
uint16_t | mSystemControllerPort |
std::string | mSystemControllerPostfix |
std::string | mPSK |
std::string | mUUID |
ComponentType | mType |
std::string | mName |
std::string | mMyIP |
std::chrono::milliseconds | mConnectTimeout |
bool | mEnableHTTPS |
bool | mInsecureHTTPS |
std::string | mCustomCaCertFile |
Public Attributes Documentation
variable mSystemControllerIP
std::string mSystemControllerIP;
variable mSystemControllerPort
uint16_t mSystemControllerPort;
variable mSystemControllerPostfix
std::string mSystemControllerPostfix;
variable mPSK
std::string mPSK;
variable mUUID
std::string mUUID;
variable mType
ComponentType mType;
variable mName
std::string mName;
variable mMyIP
std::string mMyIP;
variable mConnectTimeout
std::chrono::milliseconds mConnectTimeout {3000};
variable mEnableHTTPS
bool mEnableHTTPS;
variable mInsecureHTTPS
bool mInsecureHTTPS;
variable mCustomCaCertFile
std::string mCustomCaCertFile;
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.1.29 - TimeCommon::TAIStatus
TimeCommon::TAIStatus
Public Attributes
Name | |
---|---|
StratumLevel | mStratum |
bool | mHasLock |
double | mTimeDiffS |
Public Attributes Documentation
variable mStratum
StratumLevel mStratum = StratumLevel::UnknownStratum;
variable mHasLock
bool mHasLock = false;
variable mTimeDiffS
double mTimeDiffS = 0.0;
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.1.30 - TimeCommon::TimeStructure
TimeCommon::TimeStructure
Public Attributes
Name | |
---|---|
uint64_t | t1 |
uint64_t | t2 |
uint64_t | t3 |
uint64_t | t4 |
uint64_t | token |
uint64_t | dummy1 |
uint64_t | dummy2 |
uint64_t | dummy3 |
Public Attributes Documentation
variable t1
uint64_t t1 = 0;
variable t2
uint64_t t2 = 0;
variable t3
uint64_t t3 = 0;
variable t4
uint64_t t4 = 0;
variable token
uint64_t token = 0;
variable dummy1
uint64_t dummy1 = 0;
variable dummy2
uint64_t dummy2 = 0;
variable dummy3
uint64_t dummy3 = 0;
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.2 - Files
Files
- dir build
- dir install
- dir include
- file include/AclLog.h
- file include/AlignedFrame.h
- file include/Base64.h
- file include/ControlDataCommon.h
- file include/ControlDataSender.h
- file include/DeviceMemory.h
- file include/IControlDataReceiver.h
- file include/IMediaStreamer.h
- file include/ISystemControllerInterface.h
- file include/IngestApplication.h
- file include/IngestUtils.h
- file include/MediaReceiver.h
- file include/PixelFormat.h
- file include/SystemControllerConnection.h
- file include/TimeCommon.h
- file include/UUIDUtils.h
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.2.1 - include/AclLog.h
include/AclLog.h
Namespaces
Name |
---|
AclLog A namespace for logging utilities. |
Classes
Name | |
---|---|
class | AclLog::CommandLogFormatter This class is used to format log entries for Rendering Engine commands. |
class | AclLog::ThreadNameFormatterFlag |
class | AclLog::FileLocationFormatterFlag A custom flag formatter which logs the source file location between a pair of "[]", in case the location is provided with the log call. |
Source code
// Copyright (c) 2022, Edgeware AB. All rights reserved.
#pragma once
#include <filesystem>
#include <string>
#include <spdlog/details/log_msg.h>
#include <spdlog/fmt/fmt.h>
#include <spdlog/formatter.h>
#include <spdlog/pattern_formatter.h>
#include <sys/prctl.h>
namespace AclLog {
enum class Level {
kTrace, // Detailed diagnostics (for development only)
kDebug, // Messages intended for debugging only
kInfo, // Messages about normal behavior (default log level)
kWarning, // Warnings (functionality intact)
kError, // Recoverable errors (functionality impaired)
kCritical, // Unrecoverable errors (application must stop)
kOff // Turns off all logging
};
void init(const std::string& name);
void initCommandLog(const std::string& name);
void setLevel(Level level);
void logCommand(const std::string& origin, const std::string& command);
AclLog::Level getLogLevel();
size_t getMaxFileSize();
size_t getMaxLogRotations();
std::filesystem::path getLogFileFullPath(const std::string& name);
class CommandLogFormatter : public spdlog::formatter {
public:
CommandLogFormatter();
void format(const spdlog::details::log_msg& msg, spdlog::memory_buf_t& dest) override;
[[nodiscard]] std::unique_ptr<spdlog::formatter> clone() const override;
private:
std::unique_ptr<spdlog::pattern_formatter> mCommandLogPattern;
};
inline std::string getThreadName() {
static const size_t RECOMMENDED_BUFFER_SIZE = 20;
static thread_local std::string name;
if (name.empty()) {
char buffer[RECOMMENDED_BUFFER_SIZE];
int retval = prctl(PR_GET_NAME, buffer);
if (retval == -1) {
throw spdlog::spdlog_ex("Failed to get thread name: ", errno);
}
name = std::string(buffer);
}
return name;
}
class ThreadNameFormatterFlag : public spdlog::custom_flag_formatter {
public:
void format(const spdlog::details::log_msg&, const std::tm&, spdlog::memory_buf_t& dest) override {
std::string threadName = getThreadName();
dest.append(threadName.data(), threadName.data() + threadName.size());
}
[[nodiscard]] std::unique_ptr<custom_flag_formatter> clone() const override {
return spdlog::details::make_unique<ThreadNameFormatterFlag>();
}
};
class FileLocationFormatterFlag : public spdlog::custom_flag_formatter {
public:
void format(const spdlog::details::log_msg& msg, const std::tm&, spdlog::memory_buf_t& dest) override {
if (!msg.source.empty()) {
using namespace spdlog::details;
dest.push_back('[');
const char* filename = short_filename_formatter<null_scoped_padder>::basename(msg.source.filename);
fmt_helper::append_string_view(filename, dest);
dest.push_back(':');
fmt_helper::append_int(msg.source.line, dest);
dest.push_back(']');
}
}
[[nodiscard]] std::unique_ptr<custom_flag_formatter> clone() const override {
return spdlog::details::make_unique<FileLocationFormatterFlag>();
}
};
} // namespace AclLog
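The getThreadName helper above relies on prctl(PR_GET_NAME) to read the calling thread's name on Linux. A minimal standalone sketch of the same mechanism (the function name currentThreadName is our own, not part of the SDK):

```cpp
#include <cassert>
#include <string>
#include <sys/prctl.h>

// Read the calling thread's name the same way AclLog::getThreadName does:
// prctl(PR_GET_NAME) fills a caller-provided buffer (the kernel caps thread
// names at 16 bytes including the terminating NUL).
std::string currentThreadName() {
    char buffer[20] = {0};
    if (prctl(PR_GET_NAME, buffer) == -1) {
        return {};  // The SDK helper throws spdlog_ex here instead
    }
    return std::string(buffer);
}
```

Note that prctl only operates on the calling thread, which is why the SDK caches the result in a thread_local string instead of querying on every log call.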
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.2.2 - include/AlignedFrame.h
include/AlignedFrame.h
Classes
Name | |
---|---|
struct | AlignedAudioFrame AlignedAudioFrame is a frame of interleaved floating point audio samples with a given number of channels. |
struct | AlignedFrame A frame of aligned data that is passed to the rendering engine from the MediaReceiver. An AlignedFrame contains a time stamped frame of media, which might be video, audio and auxiliary data such as subtitles. A single AlignedFrame can contain one or multiple types of media. Which media types are included can be probed by nullptr-checking/size checking the data members. The struct has ownership of all data pointers included. The struct includes all logic for freeing the resources held by this struct and the user should therefore just make sure the struct itself is deallocated to ensure all resources are freed. |
Types
Name | |
---|---|
using std::shared_ptr< AlignedAudioFrame > | AlignedAudioFramePtr |
using std::shared_ptr< const AlignedAudioFrame > | AlignedAudioFrameConstPtr |
using std::shared_ptr< AlignedFrame > | AlignedFramePtr |
Functions
Name | |
---|---|
AlignedAudioFramePtr | createAlignedAudioFrame(uint8_t numberOfChannels, uint32_t numberOfSamplesPerChannel) Create a new AlignedAudioFrame with a given number of samples allocated and all samples initialized to 0.0f. |
AlignedAudioFramePtr | extractChannel(AlignedAudioFrameConstPtr sourceFrame, uint8_t channelIndex) Extract one channel from an AlignedAudioFrame as a new AlignedAudioFrame. The new AlignedAudioFrame will share no data with the original frame, meaning the new frame can be edited without corrupting the old one. |
Types Documentation
using AlignedAudioFramePtr
using AlignedAudioFramePtr = std::shared_ptr<AlignedAudioFrame>;
using AlignedAudioFrameConstPtr
using AlignedAudioFrameConstPtr = std::shared_ptr<const AlignedAudioFrame>;
using AlignedFramePtr
using AlignedFramePtr = std::shared_ptr<AlignedFrame>;
Functions Documentation
function createAlignedAudioFrame
AlignedAudioFramePtr createAlignedAudioFrame(
uint8_t numberOfChannels,
uint32_t numberOfSamplesPerChannel
)
Create a new AlignedAudioFrame with a given number of samples allocated and all samples initialized to 0.0f.
Parameters:
- numberOfChannels Number of channels to allocate space for
- numberOfSamplesPerChannel Number of samples per channel to allocate space for
Return: Shared pointer to the new AudioFrame
function extractChannel
AlignedAudioFramePtr extractChannel(
AlignedAudioFrameConstPtr sourceFrame,
uint8_t channelIndex
)
Extract one channel from an AlignedAudioFrame as a new AlignedAudioFrame. The new AlignedAudioFrame will share no data with the original frame, meaning the new frame can be edited without corrupting the old one.
Parameters:
- sourceFrame The frame to extract a channel from
- channelIndex The zero-based channel index to extract
Return: Shared pointer to a new mono AlignedAudioFrame, or nullptr if channel is out of bounds
Source code
// Copyright (c) 2022, Edgeware AB. All rights reserved.
#pragma once
#include <array>
#include <iostream>
#include <memory>
#include <vector>
#include <cuda_runtime_api.h>
#include "DeviceMemory.h"
#include "PixelFormat.h"
struct AlignedAudioFrame {
// The samples interleaved
std::vector<float> mSamples;
uint8_t mNumberOfChannels = 0;
uint32_t mNumberOfSamplesPerChannel = 0;
};
using AlignedAudioFramePtr = std::shared_ptr<AlignedAudioFrame>;
using AlignedAudioFrameConstPtr = std::shared_ptr<const AlignedAudioFrame>;
AlignedAudioFramePtr createAlignedAudioFrame(uint8_t numberOfChannels, uint32_t numberOfSamplesPerChannel);
AlignedAudioFramePtr extractChannel(AlignedAudioFrameConstPtr sourceFrame, uint8_t channelIndex);
struct AlignedFrame {
AlignedFrame() = default;
~AlignedFrame() = default;
AlignedFrame(AlignedFrame const&) = delete; // Copy construct
AlignedFrame& operator=(AlignedFrame const&) = delete; // Copy assign
[[nodiscard]] std::shared_ptr<AlignedFrame> makeShallowCopy() const;
int64_t mCaptureTimestamp = 0;
int64_t mRenderingTimestamp = 0;
// Video
std::shared_ptr<DeviceMemory> mVideoFrame = nullptr;
PixelFormat mPixelFormat = PixelFormat::kUnknown;
uint32_t mFrameRateN = 0;
uint32_t mFrameRateD = 0;
uint32_t mWidth = 0;
uint32_t mHeight = 0;
// Audio
AlignedAudioFramePtr mAudioFrame = nullptr;
uint32_t mAudioSamplingFrequency = 0;
};
using AlignedFramePtr = std::shared_ptr<AlignedFrame>;
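To make the relationship between the interleaved mSamples buffer and extractChannel concrete, the following self-contained sketch re-implements the documented behavior. The real createAlignedAudioFrame and extractChannel ship in the SDK; extractChannelSketch here is a hypothetical stand-in for illustration only:

```cpp
#include <cstdint>
#include <memory>
#include <vector>

// Stand-in mirroring the AlignedAudioFrame struct from AlignedFrame.h.
struct AlignedAudioFrame {
    std::vector<float> mSamples;  // Interleaved, e.g. L R L R ... for stereo
    uint8_t mNumberOfChannels = 0;
    uint32_t mNumberOfSamplesPerChannel = 0;
};
using AlignedAudioFramePtr = std::shared_ptr<AlignedAudioFrame>;

// Illustrative equivalent of extractChannel: copy every
// mNumberOfChannels-th sample into a new mono frame that shares no data
// with the source, returning nullptr if the channel index is out of bounds.
AlignedAudioFramePtr extractChannelSketch(const AlignedAudioFramePtr& src,
                                          uint8_t channelIndex) {
    if (!src || channelIndex >= src->mNumberOfChannels) {
        return nullptr;
    }
    auto mono = std::make_shared<AlignedAudioFrame>();
    mono->mNumberOfChannels = 1;
    mono->mNumberOfSamplesPerChannel = src->mNumberOfSamplesPerChannel;
    mono->mSamples.reserve(src->mNumberOfSamplesPerChannel);
    for (uint32_t i = 0; i < src->mNumberOfSamplesPerChannel; ++i) {
        mono->mSamples.push_back(src->mSamples[i * src->mNumberOfChannels + channelIndex]);
    }
    return mono;
}
```

Because the mono copy owns its own sample vector, it can be edited without corrupting the source frame, matching the documented guarantee.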
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.2.3 - include/Base64.h
include/Base64.h
Functions
Name | |
---|---|
std::string | encodeBase64(const uint8_t * data, size_t size) Base64 encode some data. |
std::string | encodeBase64(const std::vector< uint8_t > & data) Base64 encode some data. |
std::vector< uint8_t > | decodeBase64(const std::string & data) Decode some Base64 encoded data. |
Functions Documentation
function encodeBase64
std::string encodeBase64(
const uint8_t * data,
size_t size
)
Base64 encode some data.
Parameters:
- data Pointer to the data to encode with Base64
- size The length of the data to encode with Base64
Return: The resulting Base64 encoded string
function encodeBase64
std::string encodeBase64(
const std::vector< uint8_t > & data
)
Base64 encode some data.
Parameters:
- data The data to encode with Base64
Return: The resulting Base64 encoded string
function decodeBase64
std::vector< uint8_t > decodeBase64(
const std::string & data
)
Decode some Base64 encoded data.
Parameters:
- data The Base64 encoded string to decode
Return: The resulting decoded data
Source code
// Copyright (c) 2022, Edgeware AB. All rights reserved.
#pragma once
#include <string>
#include <vector>
std::string encodeBase64(const uint8_t* data, size_t size);
std::string encodeBase64(const std::vector<uint8_t>& data);
std::vector<uint8_t> decodeBase64(const std::string& data);
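encodeBase64/decodeBase64 implement standard (RFC 4648) Base64. To show what encodeBase64 produces for a given buffer, here is a self-contained sketch with the same signature; encodeBase64Sketch is an illustrative re-implementation, not the SDK function:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>

// Illustrative stand-in for the SDK's encodeBase64: pack three input bytes
// into a 24-bit group, emit four 6-bit alphabet indices, pad with '='.
std::string encodeBase64Sketch(const uint8_t* data, size_t size) {
    static const char* kAlphabet =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    out.reserve(((size + 2) / 3) * 4);
    for (size_t i = 0; i < size; i += 3) {
        uint32_t chunk = static_cast<uint32_t>(data[i]) << 16;
        if (i + 1 < size) chunk |= static_cast<uint32_t>(data[i + 1]) << 8;
        if (i + 2 < size) chunk |= data[i + 2];
        out.push_back(kAlphabet[(chunk >> 18) & 0x3F]);
        out.push_back(kAlphabet[(chunk >> 12) & 0x3F]);
        out.push_back(i + 1 < size ? kAlphabet[(chunk >> 6) & 0x3F] : '=');
        out.push_back(i + 2 < size ? kAlphabet[chunk & 0x3F] : '=');
    }
    return out;
}
```

Encoding the bytes "Man" yields "TWFu"; shorter tails are padded with one or two '=' characters.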
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.2.4 - include/ControlDataCommon.h
include/ControlDataCommon.h
Namespaces
Name |
---|
ControlDataCommon |
Classes
Name | |
---|---|
struct | ControlDataCommon::ConnectionStatus Connection status struct containing information about a connection event. |
struct | ControlDataCommon::Response A response from a ControlDataReceiver to a request. The UUID tells which receiver the response is sent from. |
struct | ControlDataCommon::StatusMessage A status message from a ControlDataReceiver. The UUID tells which receiver the message is sent from. |
Source code
// Copyright (c) 2022, Edgeware AB. All rights reserved.
#pragma once
#include <cstdint>
#include <string>
#include <vector>
namespace ControlDataCommon {
enum class ConnectionType { CONNECTED, DISCONNECTED };
struct ConnectionStatus {
ConnectionStatus() = default;
ConnectionStatus(ConnectionType mConnectionType, const std::string& mIp, uint16_t mPort)
: mConnectionType(mConnectionType)
, mIP(mIp)
, mPort(mPort) {
}
ConnectionType mConnectionType = ConnectionType::DISCONNECTED;
std::string mIP;
uint16_t mPort = 0;
};
struct Response {
std::vector<uint8_t> mMessage;
uint64_t mRequestId = 0;
std::string mFromUUID;
};
struct StatusMessage {
std::vector<uint8_t> mMessage;
std::string mFromUUID;
};
} // namespace ControlDataCommon
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.2.5 - include/ControlDataSender.h
include/ControlDataSender.h
Classes
Name | |
---|---|
class | ControlDataSender A ControlDataSender can send control signals to one or more receivers using a network connection. A single ControlDataSender can connect to multiple receivers, all identified by a UUID. The class is controlled using an ISystemControllerInterface; this interface is responsible for setting up connections to receivers. The ControlDataSender can send asynchronous requests to (all) the receivers and get a response back. Each response is identified with a request ID as well as the UUID of the responding receiver. The ControlDataSender can also receive status messages from the receivers. |
struct | ControlDataSender::Settings Settings for a ControlDataSender. |
Source code
// Copyright (c) 2022, Edgeware AB. All rights reserved.
#pragma once
#include <functional>
#include <memory>
#include <vector>
#include <ISystemControllerInterface.h>
#include "ControlDataCommon.h"
class ControlDataSender final {
public:
struct Settings {
std::function<void(const ControlDataCommon::ConnectionStatus&)>
mConnectionStatusCallback; // Callback for receiver connection/disconnection events
std::function<void(const ControlDataCommon::Response&)>
mResponseCallback; // Callback for response messages from receivers
std::function<void(const ControlDataCommon::StatusMessage&)>
mStatusMessageCallback; // Callback for status messages from receivers
};
ControlDataSender();
~ControlDataSender();
bool configure(const std::shared_ptr<ISystemControllerInterface>& controllerInterface, const Settings& settings);
bool sendRequestToReceivers(const std::vector<uint8_t>& request, uint64_t& requestId);
bool sendMultiRequestToReceivers(const std::vector<std::vector<uint8_t>>& request, uint64_t& requestId);
static std::string getVersion();
// ControlDataSender is not copyable
ControlDataSender(ControlDataSender const&) = delete; // Copy construct
ControlDataSender& operator=(ControlDataSender const&) = delete; // Copy assign
private:
class Impl;
std::unique_ptr<Impl> pImpl;
};
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.2.6 - include/DeviceMemory.h
include/DeviceMemory.h
Classes
Name | |
---|---|
class | DeviceMemory RAII class for a CUDA memory buffer. |
Types
Name | |
---|---|
using std::shared_ptr< DeviceMemory > | DeviceMemoryPtr |
Types Documentation
using DeviceMemoryPtr
using DeviceMemoryPtr = std::shared_ptr<DeviceMemory>;
Source code
// Copyright (c) 2022, Edgeware AB. All rights reserved.
#pragma once
#include <cstddef>
#include <cstdint>
#include <memory>
#include <cuda_runtime.h>
class DeviceMemory {
public:
DeviceMemory() = default;
explicit DeviceMemory(size_t numberOfBytes);
explicit DeviceMemory(size_t numberOfBytes, cudaStream_t cudaStream);
explicit DeviceMemory(void* deviceMemory) noexcept;
bool allocateMemory(size_t numberOfBytes);
bool allocateMemoryAsync(size_t numberOfBytes, cudaStream_t cudaStream);
bool reallocateMemory(size_t numberOfBytes);
bool reallocateMemoryAsync(size_t numberOfBytes, cudaStream_t cudaStream);
bool allocateAndResetMemory(size_t numberOfBytes);
bool allocateAndResetMemoryAsync(size_t numberOfBytes, cudaStream_t cudaStream);
bool freeMemory();
bool freeMemoryAsync(cudaStream_t cudaStream);
void setFreeingCudaStream(cudaStream_t cudaStream);
~DeviceMemory();
template <typename T = uint8_t> [[nodiscard]] T* getDevicePointer() const {
return reinterpret_cast<T*>(dMemory);
}
[[nodiscard]] size_t getSize() const;
DeviceMemory(DeviceMemory&& other) noexcept;
DeviceMemory& operator=(DeviceMemory&& other) noexcept;
void swap(DeviceMemory& other) noexcept;
DeviceMemory(DeviceMemory const&) = delete;
DeviceMemory operator=(DeviceMemory const&) = delete;
private:
uint8_t* dMemory = nullptr;
size_t mAllocatedBytes = 0;
cudaStream_t mCudaStream = nullptr; // Stream to use for Async free
};
using DeviceMemoryPtr = std::shared_ptr<DeviceMemory>;
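DeviceMemory is move-only and frees its buffer in the destructor, following the usual RAII-plus-swap pattern. The sketch below mirrors that ownership model with host memory standing in for CUDA allocations; BufferSketch is hypothetical, and the real class calls cudaMalloc/cudaFree and can defer freeing to a CUDA stream:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <utility>

// Host-memory stand-in for DeviceMemory's RAII + swap-based move pattern.
class BufferSketch {
public:
    BufferSketch() = default;
    explicit BufferSketch(size_t numberOfBytes) { allocate(numberOfBytes); }
    ~BufferSketch() { free(); }  // RAII: owning object releases the buffer

    bool allocate(size_t numberOfBytes) {
        free();  // Drop any previous allocation first
        dMemory = static_cast<uint8_t*>(std::malloc(numberOfBytes));
        mAllocatedBytes = dMemory ? numberOfBytes : 0;
        return dMemory != nullptr;
    }
    bool free() {
        std::free(dMemory);
        dMemory = nullptr;
        mAllocatedBytes = 0;
        return true;
    }
    size_t getSize() const { return mAllocatedBytes; }

    // Move-only, like DeviceMemory: ownership transfers via swap, leaving
    // the moved-from object empty and safe to destroy.
    BufferSketch(BufferSketch&& other) noexcept { swap(other); }
    BufferSketch& operator=(BufferSketch&& other) noexcept {
        swap(other);
        return *this;
    }
    void swap(BufferSketch& other) noexcept {
        std::swap(dMemory, other.dMemory);
        std::swap(mAllocatedBytes, other.mAllocatedBytes);
    }
    BufferSketch(BufferSketch const&) = delete;
    BufferSketch& operator=(BufferSketch const&) = delete;

private:
    uint8_t* dMemory = nullptr;
    size_t mAllocatedBytes = 0;
};
```

Deleting the copy operations prevents two owners from double-freeing the same device pointer, which is why DeviceMemory is shared through DeviceMemoryPtr when multiple consumers need the buffer.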
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.2.7 - include/IControlDataReceiver.h
include/IControlDataReceiver.h
Classes
Name | |
---|---|
class | IControlDataReceiver IControlDataReceiver is the interface class for the control data receiver. An IControlDataReceiver can receive messages from a sender or other IControlDataReceivers using a network connection. It can also connect to and forward the incoming request messages to other receivers. The connections to the sender and the other receivers are controlled by an ISystemControllerInterface instance. The ControlDataReceiver has a receiving or listening side, as well as a sending side. The listening side can listen to one single network port and have multiple ControlDataSenders and ControlDataReceivers connected to that port to receive requests from them. On the sending side of the ControlDataReceiver, it can be connected to the listening side of other ControlDataReceivers, used to forward all incoming messages to that receiver, as well as sending its own requests. |
struct | IControlDataReceiver::IncomingRequest An incoming request to this ControlDataReceiver. |
struct | IControlDataReceiver::ReceiverResponse A response message to a request. |
struct | IControlDataReceiver::Settings Settings for a ControlDataReceiver. |
Source code
// Copyright (c) 2024, Edgeware AB. All rights reserved.
#pragma once
#include <memory>
#include <ISystemControllerInterface.h>
#include "ControlDataCommon.h"
class IControlDataReceiver {
public:
struct IncomingRequest {
std::vector<std::vector<uint8_t>> mMessages;
std::string mSenderUUID;
std::string mRequesterUUID;
uint64_t mRequestID = 0;
int64_t mSenderTimestampUs = 0;
int64_t mDeliveryTimestampUs = 0;
};
struct ReceiverResponse {
std::vector<uint8_t> mMessage;
};
struct Settings {
std::string mProductionPipelineUUID;
std::function<void(const IncomingRequest&)> mPreviewIncomingRequestCallback;
std::function<ReceiverResponse(const IncomingRequest&)> mIncomingRequestCallback;
std::function<void(const ControlDataCommon::ConnectionStatus&)>
mConnectionStatusCallback;
std::function<void(const ControlDataCommon::Response&)>
mResponseCallback;
};
virtual ~IControlDataReceiver() = default;
virtual bool configure(const Settings& settings) = 0;
virtual bool sendStatusMessageToSender(const std::vector<uint8_t>& message) = 0;
virtual bool sendRequestToReceivers(const std::vector<uint8_t>& request, uint64_t& requestId) = 0;
virtual bool sendMultiRequestToReceivers(const std::vector<std::vector<uint8_t>>& request, uint64_t& requestId) = 0;
};
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.2.8 - include/IMediaStreamer.h
include/IMediaStreamer.h
Classes
Name | |
---|---|
class | IMediaStreamer IMediaStreamer is an interface class for MediaStreamers, which take a single stream of uncompressed video and/or audio frames and encode and output it in some way, either as a stream to a network or as data written to a file on the hard drive. This class is configured from two interfaces. The input configuration (input video resolution, frame rate, pixel format, number of audio channels…) is made through this C++ API. The output stream is then started from the System Controller. Either configuration can be made first; the actual output stream starts once both are in place. |
struct | IMediaStreamer::Settings Settings used when creating a new MediaStreamer. |
struct | IMediaStreamer::Configuration The input configuration of the frames that will be sent to this MediaStreamer. The output stream configuration is made from the System controller via the ISystemControllerInterface. |
Source code
// Copyright (c) 2024, Edgeware AB. All rights reserved.
#pragma once
#include <memory>
#include <ISystemControllerInterface.h>
#include <cuda.h>
#include "AlignedFrame.h"
class IMediaStreamer {
public:
struct Settings {
std::string mName; // Human-readable name, presented in the REST API
};
struct Configuration {
// Video
PixelFormat mIncomingPixelFormat = PixelFormat::kUnknown;
uint32_t mWidth = 0; // Width of the incoming video frames in pixels
uint32_t mHeight = 0; // Height of the incoming video frames in pixels
uint32_t mFrameRateN = 0; // Frame rate numerator of the incoming video frames
uint32_t mFrameRateD = 0; // Frame rate denominator of the incoming video frames
// Audio
uint32_t mAudioSampleRate = 0; // Audio sample rate of the incoming frames in Hz
uint32_t mNumAudioChannels = 0; // Number of audio channels in the incoming frames
};
virtual ~IMediaStreamer() = default;
virtual bool configure(const std::string& uuid, const Settings& settings, CUcontext cudaContext) = 0;
virtual bool setInputFormatAndStart(const Configuration& configuration) = 0;
virtual bool stopAndResetFormat() = 0;
[[nodiscard]] virtual bool hasFormatAndIsRunning() const = 0;
[[nodiscard]] virtual bool hasOpenOutputStream() const = 0;
virtual bool outputData(const AlignedFramePtr& frame) = 0;
};
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.2.9 - include/IngestApplication.h
include/IngestApplication.h
Classes
Name | |
---|---|
class | IngestApplication |
struct | IngestApplication::Settings |
Source code
// Copyright (c) 2022, Edgeware AB. All rights reserved.
#pragma once
#include <memory>
#include "ISystemControllerInterface.h"
class IngestApplication {
public:
struct Settings {};
IngestApplication();
~IngestApplication();
bool start(const std::shared_ptr<ISystemControllerInterface>& controllerInterface, const Settings& settings);
bool stop();
static std::string getVersion();
static std::string getLibraryVersions();
private:
class Impl;
std::unique_ptr<Impl> pImpl;
};
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.2.10 - include/IngestUtils.h
include/IngestUtils.h
Namespaces
Name |
---|
IngestUtils |
Source code
// Copyright (c) 2022, Edgeware AB. All rights reserved.
#pragma once
namespace IngestUtils {
bool isRunningWithRootPrivileges();
} // namespace IngestUtils
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.2.11 - include/ISystemControllerInterface.h
include/ISystemControllerInterface.h
Classes
Name | |
---|---|
class | ISystemControllerInterface An ISystemControllerInterface is the interface between a component and the System controller controlling the component. The interface allows for two-way communication between the component and the system controller by means of sending requests and getting responses. Classes deriving from the ISystemControllerInterface should provide the component side implementation of the communication with the system controller. This interface can be inherited and implemented by developers to connect to custom system controllers, or to directly control a component programmatically, without the need for connecting to a remote server. |
struct | ISystemControllerInterface::Response A response to a request, consists of a status code and an (optional) parameters JSON object. |
struct | ISystemControllerInterface::Callbacks A struct containing the callbacks that needs to be registered by the component using this interface. |
Source code
// Copyright (c) 2022, Edgeware AB. All rights reserved.
#pragma once
#include <functional>
#include <json.hpp>
#include <optional>
#include <string>
class ISystemControllerInterface {
public:
enum class StatusCode : uint32_t {
SUCCESS = 3001, // Accept / Success
TOO_MANY_REQUESTS = 3101, // Too many requests, try again later
UUID_ALREADY_REGISTERED = 3201, // UUID is already registered
FORMAT_ERROR = 3202, // Message formatting error
ALREADY_CONFIGURED = 3203, // The requested thing to configure is already configured
OUT_OF_RESOURCES = 3204, // Out of resources (CPU/GPU close to max utilization, all available slots used, etc.)
NOT_FOUND = 3205, // The requested thing was not found
INTERNAL_ERROR = 3206, // Internal error when trying to serve the request
CONNECTION_FAILED = 3207, // Connection failure
TIMEOUT_EXCEEDED = 3208, // Timeout exceeded
KEY_MISMATCH = 3209, // Key mismatch (might be a timeout, 3007 in the future)
UNKNOWN_REQUEST = 3210, // The name of the request was not known
MALFORMED_REQUEST = 3211, // The request is not correctly formatted
ALREADY_IN_USE = 3212, // The requested resource is already in use
VERSION_MISMATCH = 3213, // The version of the request is not supported
// None, yet
};
struct Response {
StatusCode mCode;
nlohmann::json mParameters; // Can be empty
};
struct Callbacks {
std::function<Response(const std::string&, const nlohmann::json&)>
mRequestCallback; // Callback called when then controller has sent a request
std::function<void(uint32_t, const std::string&, const std::error_code&)>
mConnectionClosedCallback; // Callback called when the connection to the controller is closed
};
virtual ~ISystemControllerInterface() = default;
virtual std::optional<std::string> sendMessage(const std::string& messageTitle,
const nlohmann::json& parameters) = 0;
virtual bool registerRequestCallback(const Callbacks& callbacks) = 0;
virtual bool connect() = 0;
virtual bool disconnect() = 0;
[[nodiscard]] virtual bool isConnected() const = 0;
[[nodiscard]] virtual std::string getUUID() const = 0;
};
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.2.12 - include/MediaReceiver.h
include/MediaReceiver.h
Classes
Name | |
---|---|
class | MediaReceiver A MediaReceiver contains the logic for receiving, decoding and aligning incoming media sources from the Ingests. The aligned data is then delivered to the Rendering Engine which is also responsible for setting up the MediaReceiver. The MediaReceiver has a built-in multi-view generator, which can create output streams containing composited subsets of the incoming video sources. This class is controlled using an ISystemControllerInterface provided when starting it. |
struct | MediaReceiver::NewStreamParameters A struct containing information on the format of an incoming stream. |
struct | MediaReceiver::Settings Settings for a MediaReceiver. |
Source code
// Copyright (c) 2022, Edgeware AB. All rights reserved.
#pragma once
#include <memory>
#include <utility>
#include <cuda.h>
#include "AlignedFrame.h"
#include "ControlDataReceiver.h"
#include "ISystemControllerInterface.h"
#include "MediaStreamer.h"
class MediaReceiver {
public:
struct NewStreamParameters {
uint32_t mVideoHeight = 0; // Height of the video in pixels. 0 if the stream does not contain any video
uint32_t mVideoWidth = 0; // Width of the video in pixels. 0 if the stream does not contain any video
uint32_t mFrameRateN = 0; // Frame rate numerator
uint32_t mFrameRateD = 1; // Frame rate denominator
uint32_t mAudioSampleRate = 0; // Sample rate of the audio in Hz. 0 if the stream does not contain any audio
};
struct Settings {
PixelFormat mDecodedFormat = PixelFormat::kRgba64Le;
std::function<std::function<void(const AlignedFramePtr&)>(uint32_t inputSlot,
const std::string& streamID,
const NewStreamParameters& newStreamParameters)>
mNewConnectionCallback;
std::function<void(uint32_t inputSlot)> mClosedConnectionCallback;
std::function<std::optional<std::string>()> mResetCallback;
bool mUseMultiViewer = false;
bool mDeliverOld = false;
CUstream mAlignedFrameFreeStream = nullptr;
};
enum class TallyBorderColor : uint32_t { kNone, kRed, kGreen, kYellow };
MediaReceiver();
~MediaReceiver();
bool start(const std::shared_ptr<ISystemControllerInterface>& controllerInterface,
CUcontext cudaContext,
const Settings& settings,
const ControlDataReceiver::Settings& receiverSettings);
void stop();
std::function<void(const AlignedFramePtr&)> getCustomMultiViewSourceInput(uint32_t inputSlot,
const std::string& name = "");
bool removeCustomMultiViewSourceInput(uint32_t inputSlot);
std::shared_ptr<IMediaStreamer> createMediaStreamerOutput(const MediaStreamer::Settings& settings);
bool removeMediaStreamerOutput(const std::string& uuid);
std::shared_ptr<IControlDataReceiver> getControlDataReceiver();
void setTallyBorder(uint32_t inputSlot, TallyBorderColor color);
void clearTallyBorder(uint32_t inputSlot);
void clearAllTallyBorders();
[[nodiscard]] MediaReceiver::TallyBorderColor getTallyBorder(uint32_t inputSlot) const;
MediaReceiver(MediaReceiver const&) = delete; // Copy construct
MediaReceiver(MediaReceiver&&) = delete; // Move construct
MediaReceiver& operator=(MediaReceiver const&) = delete; // Copy assign
MediaReceiver& operator=(MediaReceiver&&) = delete; // Move assign
static std::string getVersion();
static std::string getLibraryVersions();
private:
class Impl;
std::unique_ptr<Impl> pImpl;
};
Updated on 2024-04-08 at 15:15:27 +0200
6.1.4.2.13 - include/PixelFormat.h
include/PixelFormat.h
Namespaces
Name |
---|
ACL |
Types
Name | |
---|---|
enum uint32_t | PixelFormat { kUnknown = 0, kNv12 = ACL::makeFourCC('N', 'V', '1', '2'), kUyvy = ACL::makeFourCC('U', 'Y', 'V', 'Y'), kP010 = ACL::makeFourCC('P', '0', '1', '0'), kP016 = ACL::makeFourCC('P', '0', '1', '6'), kP210 = ACL::makeFourCC('P', '2', '1', '0'), kP216 = ACL::makeFourCC('P', '2', '1', '6'), kRgba = ACL::makeFourCC('R', 'G', 'B', 'A'), kV210 = ACL::makeFourCC('V', '2', '1', '0'), kRgba64Le = ACL::makeFourCC('R', 'G', 'B', 64) } Enumeration of FourCC formats. |
Functions
Name | |
---|---|
std::ostream & | operator<<(std::ostream & stream, PixelFormat pixelFormat) Add the string representation of a PixelFormat to the output stream. |
Types Documentation
enum PixelFormat
Enumerator | Value | Description |
---|---|---|
kUnknown | 0 | |
kNv12 | ACL::makeFourCC('N', 'V', '1', '2') | |
kUyvy | ACL::makeFourCC('U', 'Y', 'V', 'Y') | |
kP010 | ACL::makeFourCC('P', '0', '1', '0') | |
kP016 | ACL::makeFourCC('P', '0', '1', '6') | |
kP210 | ACL::makeFourCC('P', '2', '1', '0') | |
kP216 | ACL::makeFourCC('P', '2', '1', '6') | |
kRgba | ACL::makeFourCC('R', 'G', 'B', 'A') | |
kV210 | ACL::makeFourCC('V', '2', '1', '0') | |
kRgba64Le | ACL::makeFourCC('R', 'G', 'B', 64) | |
Enumeration of FourCC formats.
Functions Documentation
function operator<<
std::ostream & operator<<(
std::ostream & stream,
PixelFormat pixelFormat
)
Add the string representation of a PixelFormat to the output stream.
Parameters:
- pixelFormat The PixelFormat to get as a string
Return: The ostream with the string representation of the PixelFormat appended
Source code
// Copyright (c) 2023, Edgeware AB. All rights reserved.
#pragma once
#include <cstdint>
#include <ostream>
namespace ACL {
constexpr uint32_t makeFourCC(char a, char b, char c, char d) {
return static_cast<uint32_t>(a) + (static_cast<uint32_t>(b) << 8) + (static_cast<uint32_t>(c) << 16) +
(static_cast<uint32_t>(d) << 24);
}
} // namespace ACL
enum PixelFormat : uint32_t {
kUnknown = 0, // Unknown format
kNv12 = ACL::makeFourCC('N', 'V', '1', '2'), // 4:2:0 format with Y plane followed by UVUVUV.. plane
kUyvy = ACL::makeFourCC('U', 'Y', 'V', 'Y'), // 4:2:2 format with packed UYVY macropixels
kP010 = ACL::makeFourCC('P', '0', '1', '0'), // 4:2:0 P016 with data in the 10 MSB
kP016 = ACL::makeFourCC('P', '0', '1', '6'), // 4:2:0 format with 16 bit words, Y plane followed by UVUVUV.. plane
kP210 = ACL::makeFourCC('P', '2', '1', '0'), // 4:2:2 P216 with data in the 10 MSB
kP216 = ACL::makeFourCC('P', '2', '1', '6'), // 4:2:2 format with 16 bit words, Y plane followed by UVUVUV.. plane
kRgba =
ACL::makeFourCC('R', 'G', 'B', 'A'), // 4:4:4:4 RGB format, 8 bit per channel, ordered RGBARGBARG.. in memory
kV210 = ACL::makeFourCC('V', '2', '1', '0'), // 4:2:2 packed format with 10 bit per component
kRgba64Le =
ACL::makeFourCC('R', 'G', 'B', 64) // 16 bit per component, ordered as RGBA in memory with little endian words.
};
std::ostream& operator<<(std::ostream& stream, PixelFormat pixelFormat);
6.1.4.2.14 - include/SystemControllerConnection.h
include/SystemControllerConnection.h
Classes
Name | |
---|---|
class | SystemControllerConnection An implementation of the ISystemControllerInterface for a System Controller residing on a remote server. The connection to the server uses a WebSocket. |
struct | SystemControllerConnection::Settings Settings for a SystemControllerConnection. |
Source code
// Copyright (c) 2022, Edgeware AB. All rights reserved.
#pragma once
#include <chrono>
#include "ISystemControllerInterface.h"
#include "json.hpp"
class SystemControllerConnection final : public ISystemControllerInterface {
public:
enum class ComponentType : uint32_t {
kIngest,
kPipeline,
kControlPanel,
};
struct Settings {
std::string mSystemControllerIP; // IP of the server
uint16_t mSystemControllerPort; // Port of the server
std::string mSystemControllerPostfix; // Postfix of the address that the backend uses if any
std::string mPSK; // The pre shared key used for authorization with the system controller server
std::string mUUID; // The UUID of the device using this library
ComponentType mType; // The component type of the component using this SystemControllerConnection
std::string mName; // The component name (optional)
std::string mMyIP; // The external IP of the system the component is running on. Will be sent to the system
// controller server in the announce message (optional)
std::chrono::milliseconds mConnectTimeout{
3000}; // Max time to wait on an announcement response from the server during connection
bool mEnableHTTPS; // Enable encrypted communication between the system controller and a component
bool mInsecureHTTPS; // Disable verification of the TLS certificate, if requested
std::string mCustomCaCertFile; // Custom CA certificate
};
SystemControllerConnection();
~SystemControllerConnection() override;
bool configure(const Settings& settings);
bool connect() override;
[[nodiscard]] bool isConnected() const override;
[[nodiscard]] std::string getUUID() const override;
std::optional<std::string> sendMessage(const std::string& messageTitle, const nlohmann::json& parameters) override;
bool disconnect() override;
bool registerRequestCallback(const Callbacks& callbacks) override;
SystemControllerConnection(SystemControllerConnection const&) = delete; // Copy construct
SystemControllerConnection(SystemControllerConnection&&) = delete; // Move construct
SystemControllerConnection& operator=(SystemControllerConnection const&) = delete; // Copy assign
SystemControllerConnection& operator=(SystemControllerConnection&&) = delete; // Move assign
private:
class Impl;
std::unique_ptr<Impl> pImpl;
};
6.1.4.2.15 - include/TimeCommon.h
include/TimeCommon.h
Namespaces
Name |
---|
TimeCommon |
Classes
Name | |
---|---|
struct | TimeCommon::TAIStatus |
struct | TimeCommon::TimeStructure |
Source code
// Copyright (c) 2022, Edgeware AB. All rights reserved.
#pragma once
#include <chrono>
#include <cstdint>
#include <fstream>
#include <sstream>
#include "expected.hpp"
namespace TimeCommon {
enum class StratumLevel {
UnknownStratum,
stratum0,
stratum1,
stratum2,
stratum3,
stratum4,
};
struct TAIStatus {
StratumLevel mStratum = StratumLevel::UnknownStratum;
bool mHasLock = false;
double mTimeDiffS = 0.0; // Time diff vs NTP in seconds, +/- value means slow/faster than NTP time
};
// Little endian
// Minimum size packet is 64 - bytes
struct TimeStructure {
uint64_t t1 = 0; // 8-bytes / Total 8-bytes == Client time T1
uint64_t t2 = 0; // 8-bytes / Total 16-bytes == Server
uint64_t t3 = 0; // 8-bytes / Total 24-bytes
uint64_t t4 = 0; // 8-bytes / Total 32-bytes
uint64_t token = 0; // 8-bytes / Total 40-bytes == t1 ^ key
uint64_t dummy1 = 0; // 8-bytes / Total 48-bytes == for future use
uint64_t dummy2 = 0; // 8-bytes / Total 56-bytes == for future use
uint64_t dummy3 = 0; // 8-bytes / Total 64-bytes == for future use
};
uint64_t getMonotonicClockMicro();
tl::expected<TimeCommon::TAIStatus, std::string> getStatus();
int64_t getTAIMicro();
std::string taiMicroToString(int64_t taiTimestamp);
} // namespace TimeCommon
6.1.4.2.16 - include/UUIDUtils.h
include/UUIDUtils.h
Namespaces
Name |
---|
UUIDUtils A namespace for UUID utility functions. |
Source code
// Copyright (c) 2022, Edgeware AB. All rights reserved.
#pragma once
#include <string>
namespace UUIDUtils {
std::string generateRandomUUID();
bool isValidUUIDString(const std::string& uuid);
} // namespace UUIDUtils
6.1.4.3 - Namespaces
Namespaces
- namespace ACL
- namespace AclLog
  A namespace for logging utilities.
- namespace ControlDataCommon
- namespace IngestUtils
- namespace TimeCommon
- namespace UUIDUtils
  A namespace for UUID utility functions.
- namespace spdlog
6.1.4.3.1 - ACL
ACL
Functions
Name | |
---|---|
constexpr uint32_t | makeFourCC(char a, char b, char c, char d) Helper function to create a FourCC code out of four characters. |
Functions Documentation
function makeFourCC
constexpr uint32_t makeFourCC(
char a,
char b,
char c,
char d
)
Helper function to create a FourCC code out of four characters.
Return: The characters a-d encoded as a 32 bit FourCC code.
6.1.4.3.2 - AclLog
AclLog
A namespace for logging utilities.
Classes
Name | |
---|---|
class | AclLog::CommandLogFormatter This class is used to format log entries for Rendering Engine commands. |
class | AclLog::ThreadNameFormatterFlag |
class | AclLog::FileLocationFormatterFlag A custom flag formatter which logs the source file location between a pair of "[]", in case the location is provided with the log call. |
Types
Name | |
---|---|
enum class | Level { kTrace, kDebug, kInfo, kWarning, kError, kCritical, kOff} Log levels. |
Functions
Name | |
---|---|
void | init(const std::string & name) Initialize logging. |
void | initCommandLog(const std::string & name) Init the Rendering Engine command log. |
void | setLevel(Level level) Set global logging level. |
void | logCommand(const std::string & origin, const std::string & command) Log a command to the command log. |
AclLog::Level | getLogLevel() Internal helper function for getting the setting for the log level. |
size_t | getMaxFileSize() Internal helper function for getting the setting for the maximum size for a log file. |
size_t | getMaxLogRotations() Internal helper function for getting the setting for the maximum number of rotated log files. |
std::filesystem::path | getLogFileFullPath(const std::string & name) Internal helper function for constructing the full path name for the log file. |
std::string | getThreadName() |
Types Documentation
enum Level
Enumerator | Value | Description |
---|---|---|
kTrace | ||
kDebug | ||
kInfo | ||
kWarning | ||
kError | ||
kCritical | ||
kOff |
Log levels.
Functions Documentation
function init
void init(
const std::string & name
)
Initialize logging.
Parameters:
- name The name of the component (used for log prefix and filename)
By default two sinks are created. The first sink writes messages to the console. The second sink attempts to create a log file in the path set by the environment variable ACL_LOG_PATH (’/tmp’ if unset). The name of the log file is ’name-
function initCommandLog
void initCommandLog(
const std::string & name
)
Init the Rendering Engine command log.
Parameters:
- name The name of the Rendering Engine (used for filename)
function setLevel
void setLevel(
Level level
)
Set global logging level.
Parameters:
- level Logging level to set
function logCommand
void logCommand(
const std::string & origin,
const std::string & command
)
Log a command to the command log.
Parameters:
- origin The origin of the command to log, usually the UUID of the control panel sending the command
- command The command to log
function getLogLevel
AclLog::Level getLogLevel()
Internal helper function for getting the setting for the log level.
Return: The wanted log level fetched from an environment variable if set, or else a default value
function getMaxFileSize
size_t getMaxFileSize()
Internal helper function for getting the setting for the maximum size for a log file.
Return: The wanted maximum size of the log file fetched from an environment variable if set, or else a default value
function getMaxLogRotations
size_t getMaxLogRotations()
Internal helper function for getting the setting for the maximum number of rotated log files.
Return: The wanted number of log files to rotate fetched from an environment variable if set, or else a default value
function getLogFileFullPath
std::filesystem::path getLogFileFullPath(
const std::string & name
)
Internal helper function for constructing the full path name for the log file.
Parameters:
- name The name of the component
Return: The full path to the log file
function getThreadName
inline std::string getThreadName()
6.1.4.3.3 - ControlDataCommon
ControlDataCommon
Classes
Name | |
---|---|
struct | ControlDataCommon::ConnectionStatus Connection status struct containing information about a connection event. |
struct | ControlDataCommon::Response A response from a ControlDataReceiver to a request. The UUID tells which receiver the response is sent from. |
struct | ControlDataCommon::StatusMessage A status message from a ControlDataReceiver. The UUID tells which receiver the message is sent from. |
Types
Name | |
---|---|
enum class | ConnectionType { CONNECTED, DISCONNECTED} The connection types. |
Types Documentation
enum ConnectionType
Enumerator | Value | Description |
---|---|---|
CONNECTED | ||
DISCONNECTED |
The connection types.
6.1.4.3.4 - IngestUtils
IngestUtils
Functions
Name | |
---|---|
bool | isRunningWithRootPrivileges() |
Functions Documentation
function isRunningWithRootPrivileges
bool isRunningWithRootPrivileges()
Return: true if this application is executing with root privileges, false otherwise
6.1.4.3.5 - spdlog
spdlog
6.1.4.3.6 - TimeCommon
TimeCommon
Classes
Name | |
---|---|
struct | TimeCommon::TAIStatus |
struct | TimeCommon::TimeStructure |
Types
Name | |
---|---|
enum class | StratumLevel { UnknownStratum, stratum0, stratum1, stratum2, stratum3, stratum4} |
Functions
Name | |
---|---|
uint64_t | getMonotonicClockMicro() Get current time since epoch. |
tl::expected< TimeCommon::TAIStatus, std::string > | getStatus() Get TAI status. |
int64_t | getTAIMicro() Get current TAI time. |
std::string | taiMicroToString(int64_t taiTimestamp) Converts the input TAI timestamp to a human readable string. Timestamp is converted to local time including leap seconds. |
Types Documentation
enum StratumLevel
Enumerator | Value | Description |
---|---|---|
UnknownStratum | ||
stratum0 | ||
stratum1 | ||
stratum2 | ||
stratum3 | ||
stratum4 |
Functions Documentation
function getMonotonicClockMicro
uint64_t getMonotonicClockMicro()
Get current time since epoch.
Return: Return current time since epoch in microseconds
function getStatus
tl::expected< TimeCommon::TAIStatus, std::string > getStatus()
Get TAI status.
Return: Expected with the TAI status if successful or an error string in case something went wrong
function getTAIMicro
int64_t getTAIMicro()
Get current TAI time.
Return: Return current TAI time in microseconds
function taiMicroToString
std::string taiMicroToString(
int64_t taiTimestamp
)
Converts the input TAI timestamp to a human readable string. Timestamp is converted to local time including leap seconds.
Parameters:
- taiTimestamp A TAI timestamp with microseconds resolution
Return: Return a human readable timestamp
6.1.4.3.7 - UUIDUtils
UUIDUtils
A namespace for UUID utility functions.
Functions
Name | |
---|---|
std::string | generateRandomUUID() Generates a completely randomized UUID string. |
bool | isValidUUIDString(const std::string & uuid) Checks if a string is a valid UUID string. |
Functions Documentation
function generateRandomUUID
std::string generateRandomUUID()
Generates a completely randomized UUID string.
Return: A random generated UUID string
function isValidUUIDString
bool isValidUUIDString(
const std::string & uuid
)
Checks if a string is a valid UUID string.
Parameters:
- uuid The string to check
Return: True if uuid is a valid UUID string, otherwise false
6.1.5 - System controller config
This page describes the configuration settings possible to set via the acl_sc_settings.json
file.
Expanded Config name | Config name | Description | Default value |
---|---|---|---|
psk | psk | The Pre-shared key used to authorize components to connect to the System Controller. Must be 32 characters long | "" |
security.salt | salt | Salt string for the generation of tokens and encryption keys | “Some random string” |
security.n | n | Parameter for token and encryption key generation. N is the CPU/memory cost parameter; it must be a power of 2 greater than 1. 1 | 65536 |
security.r | r | Parameter for token and encryption key generation. R is the block size factor. The constraint R*P < 2^30 must be satisfied. 1 | 8 |
security.p | p | Parameter for token and encryption key generation. P is the parallelization parameter. The constraint R*P < 2^30 must be satisfied; if it is exceeded, an error is returned. 1 | 1 |
security.keylen | keylen | The length of the generated tokens | 32 |
client_auth.enabled | enabled | Switch to enable basic authentication for clients in the REST API | true |
client_auth.username | username | The username that will grant access to the REST API | “admin” |
client_auth.password | password | Password that will grant access to the REST API | “changeme” |
https.enabled | enabled | Switch to enable encryption of the REST API as well as the connections between the System Controller and the connected components | true |
https.certificate_file | certificate_file | Path to the certificate file. In pem format | "" |
https.private_key_file | private_key_file | Path to the private key file. In pem format | "" |
logger.level | level | The lowest level at which the logger produces output; available levels in ascending order: debug , info , warn , error | info |
logger.file_name_prefix | file_name_prefix | The prefix of the log filename. A unique timecode and “.log” will automatically be appended to this prefix. The prefix can contain both the path and the prefix of the log file. | "" |
site.port | port | Port on which the service is accessible | 8080 |
site.host | host | Hostname on which the service is accessible | localhost |
rate_limit.requests_per_sec | RateLimit.RequestsPerSec | The average number of requests that are handled per second, before requests are queued up. (Number of tokens added to the token bucket per second 2) | 300 |
rate_limit.burst_limit | RateLimit.BurstLimit | The number of requests that can be handled in a short burst before rate limiting kicks in (Maximum number of tokens in the token bucket 2). | 10 |
response_timeout | response_timeout | The maximum time of a request between the System Controller and a component before the request is timed out | 5000ms |
cors.allowed_origins | allowed_origins | Comma-separated list of origin addresses that are allowed | ["*"] |
cors.allowed_methods | allowed_methods | Comma-separated list of HTTP methods that the service accepts | [“GET”, “POST”, “PUT”, “PATCH”, “DELETE”] |
cors.allowed_headers | allowed_headers | Comma-separated list of headers that are allowed | ["*"] |
cors.exposed_headers | exposed_headers | Comma-separated list of headers that are allowed for exposure | [""] |
cors.allow_credentials | allow_credentials | Allow XHR requests to send credentials such as cookies | false |
cors.max_age | max_age | How long a preflight response is cached before a new request must be made | 300 |
custom_headers.[N].key | key | Custom headers, the key of the header, e.g.: Cache-Control | None |
custom_headers.[N].value | value | Custom headers, the value to the key of the header, e.g.: no-cache | None |
allow_any_version | allow_any_version | Allow components of any version to connect. When set to false, components with another major version than the System Controller will be rejected | false |
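Assembled from the table above, a minimal acl_sc_settings.json might look like this. The values and file paths are illustrative, and the nesting is inferred from the dotted "Expanded Config name" column (note the psk must be exactly 32 characters):

```json
{
  "psk": "0123456789abcdef0123456789abcdef",
  "security": {
    "salt": "Some random string",
    "n": 65536,
    "r": 8,
    "p": 1,
    "keylen": 32
  },
  "client_auth": {
    "enabled": true,
    "username": "admin",
    "password": "changeme"
  },
  "https": {
    "enabled": true,
    "certificate_file": "/etc/acl/cert.pem",
    "private_key_file": "/etc/acl/key.pem"
  },
  "logger": {
    "level": "info",
    "file_name_prefix": "/var/log/acl/sc-"
  },
  "site": {
    "port": 8080,
    "host": "localhost"
  },
  "allow_any_version": false
}
```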
6.2 - Agile Live 6.0.0 developer reference
6.2.1 - REST API v2
The following is a rendering of the OpenAPI for the Agile Live System Controller using Swagger. It does not have a backend server, so it cannot be run interactively.
This API is available from the server at the base URL /api/v2
.
6.2.2 - Agile Live Rendering Engine command documentation
This page describes all the commands available in the Agile Live Rendering Engine.
Command protocol
All commands to the Agile Live Rendering Engine are sent as human-readable strings and are listed below. The Rendering Engine expects no line termination, meaning that you do not need to append any newline (\n) or string termination (\0) characters.
Video commands
Background
The data path of the video mixer in the Rendering Engine looks as follows:
There are eight different node types used in this mixer.
- The Transition node is used to pick which input slots to use for the program and preview output. The node also supports making transitions between the program and preview.
- The Select node simply forwards one of the available sources to its output.
- The Transform node takes one input stream and transforms it (scaling and translating) to one output stream.
- The Alpha combine node is used to pick two input slots (or the same input slot) to copy the color from one input and combine it with some information from the other input to form the alpha. The color is simply copied from the color input frame, but there are several modes that can be used to produce the alpha channel in the output frame in different ways. This is known as "key & fill" in broadcasting.
- The Alpha over node is used to perform an "alpha over" operation, that is, to put the overlay video stream on top of the background video stream and let the background be seen through the overlay depending on the alpha of the overlay. The node also supports fading the graphics in and out by multiplying the alpha channel by a constant factor.
- The Chroma key node takes one input stream and, given appropriate keying parameters, removes areas with the key color from the incoming video stream, affecting both the alpha and color channels.
- The Fade to black node takes one input stream, which it can fade to or from black gradually, and then outputs that stream.
- The Output node has one input stream and outputs that stream from the video mixer, back to the Rendering Engine.
There are six different flows into this mixer. There is the one starting with the Transition node, which represents the main program that is being produced. Overlaid on top of that is one key+fill graphics overlay (starting with an Alpha combine node, which can combine two video inputs with key from one and fill from another, or both from the same source); then there is a chroma key overlay, which can be used to add green-screen effects (starting with a Select node); then there are two instances of picture-in-picture flows (also starting with Select nodes); and on top of that another graphics overlay, called HTML graphics in the illustration, which can be used with any source where the key and fill come from the same source.
The overlay flows are inserted into the main program using Alpha over nodes.
Each video command in the Rendering Engine command protocol targets one of these nodes.
Transitions
The main flow of the video mixer can be controlled with the following commands:
- transition cut <input_slot> - Cut (hot punch) the program output to the given input slot. This won't affect the preview output.
- transition preview <input_slot> - Preview the given input slot. This won't affect the program output.
- transition cut - Cut between the current program and preview outputs, effectively swapping their place.
- transition panic <input_slot> - Cut (hot punch) both the preview and program output to the given input slot.
Some transition commands last over a duration of time, for example wipes. These can be performed either automatically or manually. The automatic mode works by the operator first selecting the type of transition, for instance a fade, setting the preview to the input slot to fade to, and then triggering the transition at the right time with a set duration. In manual mode the exact "position" of the transition is set by the control panel. This is used for implementing T-bars, where the T-bar repeatedly sends the current position of the bar. In manual mode, the transition type is set before the transition begins, just as in automatic mode. Note that an automatic transition will be overridden if the transition "position" is set manually, interrupting the automatic transition and jumping to the manually set position.
- transition type <type> - Set the transition type to use for transitions hereafter. Valid modes are:
  - fade - Fade (mix) from one source to the next
  - wipe_left - Wipe from one source to the next, going from the right side of the screen to the left
  - wipe_right - Wipe from one source to the next, going from the left side of the screen to the right
- transition auto <duration_ms> - Perform an automatic transition from the current program output to the current preview output, lasting for <duration_ms> milliseconds. The currently set transition type is used.
- transition factor <factor> - Manually set the position/factor of the transition. <factor> should be between 0.0 and 1.0, where 0.0 is the value before the transition starts and 1.0 is the end of the transition. Note that setting this value to 1.0 will effectively swap place of program and preview output, resetting the transition factor to 0.0 internally. Control panels using this command must take this into consideration to not cause a sudden jump in the transition once the physical control is moved again. Sending this command will interrupt any ongoing automatic transition.
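For example, a control panel performing a one-second fade from input slot 1 to input slot 2 could send the following command sequence (slot numbers and duration are illustrative):

```
transition cut 1
transition preview 2
transition type fade
transition auto 1000
```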
Key and fill
The key and fill graphics flow will be inserted before the picture-in-picture flows, and will therefore be rendered under them. The graphics flow can be controlled with these commands:
- alpha_combine color <input_slot> - Set which input slot to copy the color data from in the Alpha combine node
- alpha_combine alpha <input_slot> - Set which input slot to use to create the alpha channel of the output frame in the Alpha combine node
- alpha_combine mode <mode> - Set which way the alpha channel of the output frame will be produced. Valid modes are:
  - copy-r - Copy the R-channel of the input "alpha frame" and use it as alpha for the output
  - copy-g - Copy the G-channel of the input "alpha frame" and use it as alpha for the output
  - copy-b - Copy the B-channel of the input "alpha frame" and use it as alpha for the output
  - copy-a - Copy the A-channel of the input "alpha frame" and use it as alpha for the output
  - average-rgb - Take the average of the R, G and B channels of the input "alpha frame" and use it as the alpha channel for the output
- alpha_over factor <factor> - Manually set the transparency multiplier for the Alpha over node. The value <factor> should be a float between 0.0 and 1.0, where 0.0 means that the overlay is completely invisible and 1.0 means that the overlay is fully visible (where it should be visible according to the overlay image's alpha channel). Default is 0.0.
- alpha_over fade to <duration_ms> - Start an automatic fade from the current factor to a fully visible overlay over the duration of <duration_ms> milliseconds.
- alpha_over fade from <duration_ms> - Start an automatic fade from the current factor to a fully invisible overlay over the duration of <duration_ms> milliseconds.
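A typical key-and-fill setup taking fill color from one slot, the key from another, and then fading the overlay in could look like this (slot numbers and duration are illustrative):

```
alpha_combine color 3
alpha_combine alpha 4
alpha_combine mode copy-r
alpha_over fade to 400
```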
Picture-in-Picture
The two picture-in-picture boxes can be controlled with these commands:
- pip1_select cut <input_slot> - Choose which input slot to forward. Changing looks like a cut.
- pip1_transform scale <factor> - Set the scale of the picture-in-picture box, relative to the original size. Parameter factor must be > 0. Default is 1.0.
- pip1_transform x <position> - Set the X offset of the top left corner of the picture-in-picture box, relative to the top left corner of the frame. Position is measured in percent of frame width, where 0.0 is the left side of the frame and 1.0 is the right side of the frame. Values outside of the range 0.0, 1.0 are valid. Default is 0.0.
- pip1_transform y <position> - Set the Y offset of the top left corner of the picture-in-picture box, relative to the top left corner of the frame. Position is measured in percent of frame height, where 0.0 is the top of the frame and 1.0 is the bottom of the frame. Values outside of the range 0.0, 1.0 are valid. Default is 0.0.
- pip1_alpha_over factor <factor> - Manually set the transparency multiplier for the picture-in-picture 1 Alpha over node. The value <factor> should be a float between 0.0 and 1.0, where 0.0 means that the picture-in-picture box is completely invisible and 1.0 means that the box is fully visible. Default is 0.0.
- pip1_alpha_over fade to <duration_ms> - Start an automatic fade from the current factor to a fully visible picture-in-picture box over the duration of <duration_ms> milliseconds.
- pip1_alpha_over fade from <duration_ms> - Start an automatic fade from the current factor to a fully invisible picture-in-picture box over the duration of <duration_ms> milliseconds.

For the second picture-in-picture flow, replace pip1 with pip2.
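For instance, to place input slot 4 as a quarter-size box near the top right corner of the frame and fade it in (all values are illustrative):

```
pip1_select cut 4
pip1_transform scale 0.25
pip1_transform x 0.7
pip1_transform y 0.05
pip1_alpha_over fade to 300
```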
Chroma Key
The nodes of the chroma key branch of the graph can be controlled with these commands:
- chroma_key_select cut <input_slot> - Choose which input slot to forward. Changing looks like a cut.
- chroma_key r <value> - Set the red component of the chroma key color. Should be a float between 0.0 and 1.0.
- chroma_key g <value> - Set the green component of the chroma key color. Should be a float between 0.0 and 1.0.
- chroma_key b <value> - Set the blue component of the chroma key color. Should be a float between 0.0 and 1.0.
- chroma_key distance <value> - Set the distance parameter, ranging from 0.0 to 1.0, telling how far from the key color a color can be and still be considered part of the key. Larger values result in more colors being included; 0.0 means only the exact key color is considered.
- chroma_key falloff <value> - Set the falloff parameter, ranging from 0.0 to 1.0, which makes edges smoother. A lower value means harder edges between the key color and the parts to keep; a higher value means a smoother falloff between removed and kept parts.
- chroma_key spill <value> - Set the color spill reduction parameter, ranging from 0.0 to 1.0, which desaturates the key color in the parts of the image that are left. Useful for hiding green rims around objects in front of a green screen, etc. 0.0 means off, and a higher value means colors further away from the chroma key color will also be desaturated.
- chroma_key color_picker <x_factor> <y_factor> <square_size> - Set the chroma key by sampling the pixels covered by the square area of size <square_size> x <square_size>. The position x,y is given by <x_factor> * width of video and <y_factor> * height of video. <x_factor> and <y_factor> should be floats between 0.0 and 1.0. <square_size> should be an integer > 0.
- chroma_key alpha_as_video <value> - Instead of writing an alpha mask, set the alpha value as the RGB value, creating a grayscale image of the mask. Value should be 0.0 or 1.0.
- chroma_key show_key_color <value> - Draw a 100x100 box in the upper left corner with the current chroma key (RGB) value. Value should be 0.0 or 1.0.
- chroma_key show_color_picker <value> - Draw a pink border around the area where the color picker will sample a new chroma key. Value should be 0.0 or 1.0.
- chroma_key_transform scale <factor> - Set the scale of the chroma key output, relative to the original size. Parameter factor must be > 0.0. Default is 1.0.
- chroma_key_transform x <position> - Set the X offset of the top left corner of the chroma key output, relative to the top left corner of the frame. Position is measured in percent of frame width, where 0.0 is the left side of the frame and 1.0 is the right side of the frame. Values outside of the range 0.0, 1.0 are valid. Default is 0.0.
- chroma_key_transform y <position> - Set the Y offset of the top left corner of the chroma key output, relative to the top left corner of the frame. Position is measured in percent of frame height, where 0.0 is the top of the frame and 1.0 is the bottom of the frame. Values outside of the range 0.0, 1.0 are valid. Default is 0.0.
- chroma_key_alpha_over factor <factor> - Manually set the transparency multiplier for the chroma key Alpha over node. The value <factor> should be a float between 0.0 and 1.0, where 0.0 means that the chroma key output is completely invisible and 1.0 means that it is fully visible. Default is 0.0.
- chroma_key_alpha_over fade to <duration_ms> - Start an automatic fade from the current factor to a fully visible chroma key output over the duration of <duration_ms> milliseconds.
- chroma_key_alpha_over fade from <duration_ms> - Start an automatic fade from the current factor to a fully invisible chroma key output over the duration of <duration_ms> milliseconds.
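A typical green-screen setup that samples the key color from the center of the frame and then fades the keyed output in could look like this (all values are illustrative):

```
chroma_key_select cut 3
chroma_key color_picker 0.5 0.5 20
chroma_key distance 0.3
chroma_key falloff 0.1
chroma_key spill 0.2
chroma_key_alpha_over fade to 500
```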
Graphics overlay (HTML)
The overlay graphics flow is marked with HTML in the illustration. It is inserted on top of both the key-and-fill and picture-in-picture flows. It can be controlled with these commands:
- html_select cut <input_slot> - Choose which input slot to use for the graphics. It must contain both fill and key (such as the HTML graphics); otherwise, no parts of the layer will be transparent. Changing looks like a cut.
- html_alpha_over factor <factor> - Manually set the transparency multiplier for the graphics overlay Alpha over node. The value <factor> should be a float between 0.0 and 1.0, where 0.0 means that the graphics is completely invisible and 1.0 means that it is fully visible. Default is 0.0.
- html_alpha_over fade to <duration_ms> - Start an automatic fade from the current factor to fully visible graphics over the duration of <duration_ms> milliseconds.
- html_alpha_over fade from <duration_ms> - Start an automatic fade from the current factor to fully invisible graphics over the duration of <duration_ms> milliseconds.
Fade to black
The fade to black node can be controlled via the following commands:
fade_to_black factor <factor>
- Manually set the blackness factor on a scale from 0.0 to 1.0, where 0.0 means not black and 1.0 means fully black output.
fade_to_black fade to <duration_ms>
- Start an automatic fade to a fully black output over a duration of <duration_ms> milliseconds.
fade_to_black fade from <duration_ms>
- Start an automatic fade back to a fully visible (non-black) output over a duration of <duration_ms> milliseconds.
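A typical use is fading the program output down and back up, for instance over 800 ms (an illustrative duration):

```
fade_to_black fade to 800
fade_to_black fade from 800
```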
Audio Router commands
The audio router is used to route specific audio channels from an input slot of the Rendering Engine to an input channel strip on the audio mixer. By default, it has no audio routes set up. The router can be controlled with the following commands:
audio map <input_slot> <channel> mono <input_strip>
- Map <channel> from input slot <input_slot> as a mono channel into input strip <input_strip> of the audio mixer.
audio map <input_slot> <left_channel> stereo <input_strip>
- Map <left_channel> and <left_channel> + 1 from input slot <input_slot> as a stereo pair into input strip <input_strip> of the audio mixer.
audio map reset <input_strip>
- Remove the current mapping from <input_strip> of the audio mixer.
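As a sketch (slot, channel and strip numbers are hypothetical), the following routes channels 0 and 1 of input slot 1 as a stereo pair into strip 0, routes channel 2 of input slot 2 as mono into strip 1, and then removes the second route again:

```
audio map 1 0 stereo 0
audio map 2 2 mono 1
audio map reset 1
```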
Audio commands
The audio mixer and audio router have the following data path:
In the example picture above, three sources are connected to the input slots and four mono or stereo audio streams are picked as input to the audio mixer. The audio mixer is completely separate from the video mixer, so switching the image in the video mixer will not change the audio mixer in any way. The internal audio mixer can be controlled using the following commands:
To control the pre-gain of each input strip use:
audio gain <input_strip> <level_dB>
- Set the pre-gain level of a specific input strip in dB. Parameter <level_dB> should be a floating point number. Default is 0.0.
To control the low-pass filter of each input strip use:
audio lpf <input_strip> freq <value>
- Set the cutoff frequency for the low-pass filter in an input strip. <value> is the frequency in Hz as a floating point number. The frequency range is 20 Hz to 20 kHz. Setting a value of 20 kHz or higher will bypass the filter.
audio lpf <input_strip> q <value>
- Set the q-value for the low-pass filter in an input strip. The q-value controls the steepness of the cutoff curve, with a higher value giving a steeper slope. The value is a floating point number ranging from 0.4 to 6.0. The default value is 0.7.
To control the high-pass filter of each input strip use:
audio hpf <input_strip> freq <value>
- Set the cutoff frequency for the high-pass filter in an input strip. <value> is the frequency in Hz as a floating point number. The frequency range is 20 Hz to 20 kHz. Setting a value of 20 Hz or lower will bypass the filter.
audio hpf <input_strip> q <value>
- Set the q-value for the high-pass filter in an input strip. The q-value controls the steepness of the cutoff curve, with a higher value giving a steeper slope. The value is a floating point number ranging from 0.4 to 6.0. The default value is 0.7.
To control the parametric equalizer of each input strip use (the equalizer has three bands):
audio eq <input_strip> <band> freq <value>
- Set the frequency of a specific band in the parametric equalizer. Here <band> is a zero-based index of the band, where 0 is the band with the lowest frequency. <value> is a floating point number with the frequency in Hz.
audio eq <input_strip> <band> gain <value>
- Set the gain of a specific band in the parametric equalizer. Here <band> is a zero-based index of the band, where 0 is the band with the lowest frequency. <value> is a floating point number with the gain in dB.
audio eq <input_strip> <band> q <value>
- Set the q-value of a specific band in the parametric equalizer to shape the falloff of the band. Here <band> is a zero-based index of the band, where 0 is the band with the lowest frequency. <value> is a floating point number with the q-value, where a higher value means a more pointed curve. Default value is 0.7.
audio eq <input_strip> <band> type <type>
- Set the type of falloff to use for a specific band in the parametric equalizer. Here <band> is a zero-based index of the band, where 0 is the band with the lowest frequency. <type> is one of low, band or high, where low and high are low- and high-shelf filters and band is a band-pass filter. By default, the first band is low shelf, the last band is high shelf and any bands in between are band pass.
To control the panning of each input strip use:
audio pan <input_strip> <factor>
- Pan the mono or stereo input of an input strip to the left or right. The range of <factor> is -1.0 to +1.0, where 0.0 means center (no panning), negative values mean more to the left and positive values mean more to the right. Values outside this range will be clamped. A value of +/- 1.0 means that there will only be audio in the left/right channel.
To control the dynamic range compressor of each input strip use:
audio comp <input_strip> threshold <value>
- Set the threshold for activation of the compressor. The threshold is a negative dB value ranging from -30 dB to 0 dB. The volume of audio above the threshold will be reduced (compressed). The default value is 0 dB, i.e. compression only occurs if the audio signal is overloaded.
audio comp <input_strip> ratio <value>
- Set the compression ratio for audio exceeding the loudness threshold. The value is the numerator in the compression ratio <value>:1. For instance, if the value is set to 4, the compression ratio is 4:1 and volume overshoot above the threshold will be scaled down to 25 %. The ratio numerator is a floating point number in the range 1.0 to 24.0, with 4.0 as the default value.
audio comp <input_strip> knee <value>
- Set the width of the soft knee in decibels. Instead of turning the compression completely on or off at the threshold, the knee defines a volume range in which the compression ratio follows a curve, the "knee". The knee is a floating point number between 0 dB and 30 dB, with a default value of 2.5 dB.
audio comp <input_strip> attack <value>
- Set the attack time of the compressor in milliseconds. The attack time determines how long it takes to reach full compression after the threshold has been exceeded. The value is a floating point number in the range 0.1 ms to 120 ms. The default value is 50 ms.
audio comp <input_strip> release <value>
- Set the release time of the compressor in milliseconds. The release time determines how long it takes to return to zero compression when the volume falls below the compression threshold. The value is a floating point number in the range 10 ms to 1000 ms. The default value is 200 ms.
audio comp <input_strip> gain <value>
- Set the make-up gain in decibels. Since the compressor lowers the volume of louder audio sections, it can be desirable to increase the gain after the filtering. The gain value increases the audio volume by the specified number of decibels. The value is a floating point number in the range 0 dB to 24 dB. The default value is 0 dB.
To control the volume faders and master volume faders use:
audio volume <input_strip> <output> <level>
- Set the volume level of a specific input strip in proportion to its original strength for a given output. Parameter <level> should be a floating point number, where 0.0 means that the input strip is not mixed into the output at all and 1.0 means that the volume of the input strip is kept as is for the output. The signal can also be amplified by using values > 1.0. Default is 0.0.
audio mastervolume <output> <level>
- Set the master volume level of an output, before the audio is clamped and converted from floating point samples to 16-bit integer samples. This can be used to avoid clipping when mixing multiple input sources. Parameter <level> is a floating point number, where 0.0 means that all audio output is off, 1.0 means that the volume is unchanged and any value above 1.0 amplifies the master volume. Default is 1.0.
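Putting several of these commands together, a hypothetical setup for input strip 0 feeding output 0 might look like the following (all indices and values are illustrative, not recommended defaults):

```
audio gain 0 3.0
audio hpf 0 freq 80
audio eq 0 0 gain -2.0
audio comp 0 threshold -18
audio comp 0 ratio 4
audio volume 0 0 1.0
audio mastervolume 0 0.9
```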
HTML rendering
The Rendering Engine also features a built-in HTML renderer which uses the Chromium web browser engine to render HTML pages. The following commands can be used to create and control the HTML browser instances:
html create <input_slot> <width> <height>
- Create a new HTML browser instance with a canvas size of <width> x <height> pixels and output the rendered frames to the given input slot.
html close <input_slot>
- Close the HTML browser instance on the given input slot.
html load <input_slot> <url>
- Load a new URL in the HTML browser on the given input slot.
html execute <input_slot> <java script>
- Execute JavaScript in the HTML browser on the given input slot. The JavaScript snippet may span multiple lines and may contain spaces.
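A sketch of a browser instance lifecycle (the slot number, URL and the showLowerThird function are hypothetical; the page itself would have to define such a function):

```
html create 5 1920 1080
html load 5 https://example.com/overlay.html
html execute 5 showLowerThird('Jane Doe')
html close 5
```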
Media player
The Rendering Engine can create media player instances to play video and audio files from the hard drive of the machine running the Rendering Engine. It is up to the user of the API to ensure the files are uploaded to the machines running the Rendering Engine(s) before trying to play them. The media players use the FFmpeg library to demux and decode the media files, so most files supported by FFmpeg should work.
The media players each have three state parameters: start, duration and looping, which mark a section of the video to loop, or a last frame to stop at in case looping is set to false.
The following commands can be used to create and control the media player instances:
media create <input_slot> <filename>
- Create a new media player instance with the given media file and output the rendered frames to the given input slot.
media load <input_slot> <filename>
- Load another file into an existing media player on the given input slot. This will reset the start, duration and looping parameters of that media player.
media close <input_slot>
- Close the media player instance on the given input slot.
media play <input_slot>
- Start/resume playing the currently loaded media file in the media player on the given input slot.
media pause <input_slot>
- Pause the media player on the given input slot.
media seek <input_slot> <time_ms>
- Seek the media player on the given input slot to a time point <time_ms> milliseconds from the start of the media file and pause the playback at the first frame. Use media play <input_slot> to resume the playback.
media set_start <input_slot> <start_time_ms>
- Set the start time of the looping section of the media player on the given input slot to a time point <start_time_ms> milliseconds from the start of the media file. It is still possible to seek before this time point and play the video from there.
media set_duration <input_slot> <duration_ms>
- Set the duration of the playing/looping section of the media player on the given input slot to <duration_ms> milliseconds from the <start_time_ms> of the media file. The duration will be clamped if it spans past the end of the media file. A running media player will either stop at this point or loop the video, depending on the state of the looping parameter.
media set_looping <input_slot> <true/false>
- Set whether the media should loop from the start time when the media player on the given input slot hits either the end of the looping section defined by the start time and duration, or the end of the media file. If set to false, the media player will stop at the end of the section or at the end of the file instead of looping.
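A sketch of looping an eight-second section of a clip starting two seconds in (slot number and file path are hypothetical; the file must already exist on the Rendering Engine machine):

```
media create 6 /media/bumper.mp4
media set_start 6 2000
media set_duration 6 8000
media set_looping 6 true
media play 6
```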
6.2.3 - C++ SDK
Agile Content Live C++ SDK reference.
6.2.3.1 - Classes
Classes
- namespace ACL
- namespace AclLog
A namespace for logging utilities.
  - class CommandLogFormatter
  This class is used to format log entries for Rendering Engine commands.
  - class FileLocationFormatterFlag
  A custom flag formatter which logs the source file location between a pair of “[]”, in case the location is provided with the log call.
  - class ThreadNameFormatterFlag
- struct AlignedAudioFrame
AlignedAudioFrame is a frame of interleaved floating point audio samples with a given number of channels.
- struct AlignedFrame
A frame of aligned data that is passed to the rendering engine from the MediaReceiver. A DataFrame contains a time stamped frame of media, which might be video, audio and auxiliary data such as subtitles. A single DataFrame can contain one or multiple types of media. Which media types are included can be probed by nullptr-checking/size-checking the data members. The struct has ownership of all data pointers included. The struct includes all logic for freeing the resources it holds; the user therefore only needs to make sure the struct itself is deallocated to ensure all resources are freed.
- namespace ControlDataCommon
  - struct ConnectionStatus
  Connection status struct containing information about a connection event.
  - struct Response
  A response from a ControlDataReceiver to a request. The UUID tells which receiver the response is sent from.
  - struct StatusMessage
  A status message from a ControlDataReceiver. The UUID tells which receiver the message is sent from.
- class ControlDataReceiver
A ControlDataReceiver can receive messages from a sender or another ControlDataReceiver using a network connection. It can also connect to and forward the incoming request messages to other receivers. The connections to the sender and the other receivers are controlled by an ISystemControllerInterface instance. The ControlDataReceiver has a receiving or listening side, as well as a sending side. The listening side can listen to one single network port and have multiple ControlDataSenders and ControlDataReceivers connected to that port to receive requests from them. The sending side can be connected to the listening side of other ControlDataReceivers, to forward all incoming messages to those receivers, as well as to send its own requests.
  - struct IncomingRequest
  An incoming request to this ControlDataReceiver.
  - struct ReceiverResponse
  A response message to a request.
  - struct Settings
  Settings for a ControlDataReceiver.
- class ControlDataSender
A ControlDataSender can send control signals to one or more receivers using a network connection. A single ControlDataSender can connect to multiple receivers, all identified by a UUID. The class is controlled using an ISystemControllerInterface; this interface is responsible for setting up connections to receivers. The ControlDataSender can send asynchronous requests to (all) the receivers and get a response back. Each response is identified with a request ID as well as the UUID of the responding receiver. The ControlDataSender can also receive status messages from the receivers.
  - struct Settings
  Settings for a ControlDataSender.
- class DeviceMemory
RAII class for a CUDA memory buffer.
- class ISystemControllerInterface
An ISystemControllerInterface is the interface between a component and the System controller controlling the component. The interface allows for two-way communication between the component and the system controller by means of sending requests and getting responses. Classes deriving from the ISystemControllerInterface should provide the component-side implementation of the communication with the system controller. This interface can be inherited and implemented by developers to connect to custom system controllers, or to directly control a component programmatically, without the need for connecting to a remote server. An ISystemControllerInterface can have a parent-child relationship with other interfaces, where the children are aware of which interface is their parent. This is useful for components that provide extra features for another component, but where the components are still to be considered two separate entities with separate connections to the System controller.
- class IngestApplication
  - struct Settings
- namespace IngestUtils
- class MediaReceiver
A MediaReceiver contains the logic for receiving, decoding and aligning incoming media sources from the Ingests. The aligned data is then delivered to the Rendering Engine, which is also responsible for setting up the MediaReceiver. The MediaReceiver has a built-in multi-view generator, which can create output streams containing composited subsets of the incoming video sources. This class is controlled using an ISystemControllerInterface provided when starting it.
  - struct NewStreamParameters
  A struct containing information on the format of an incoming stream.
  - struct Settings
  Settings for a MediaReceiver.
- class MediaStreamer
MediaStreamer is a class that can take a single stream of uncompressed video and/or audio frames, encode it and output it in some way to some interface. This interface can either be a stream to a network or writing the data to a file on the hard drive. This class is configured from two interfaces. The input configuration (input video resolution, frame rate, pixel format, number of audio channels…) is made through this C++ API. The output stream is then started from the System Controller. Either of these configurations can be made first. The actual stream to output will start once the first call to.
  - struct Configuration
  The input configuration of the frames that will be sent to this MediaStreamer. The output stream configuration is made from the System controller via the ISystemControllerInterface.
- class PipelineSystemControllerInterfaceFactory
- class SystemControllerConnection
An implementation of the ISystemControllerInterface for a System controller residing in a remote server. The connection to the server uses a Websocket.
  - struct Settings
  Settings for a SystemControllerConnection.
- namespace TimeCommon
  - struct TAIStatus
  - struct TimeStructure
- namespace UUIDUtils
A namespace for UUID utility functions.
- namespace spdlog
Updated on 2024-01-25 at 12:02:05 +0100
6.2.3.1.1 - AclLog::CommandLogFormatter
AclLog::CommandLogFormatter
This class is used to format log entries for Rendering Engine commands.
#include <AclLog.h>
Inherits from spdlog::formatter
Public Functions
Name | |
---|---|
CommandLogFormatter() | |
void | format(const spdlog::details::log_msg & msg, spdlog::memory_buf_t & dest) override |
std::unique_ptr< spdlog::formatter > | clone() const override |
Public Functions Documentation
function CommandLogFormatter
CommandLogFormatter()
function format
void format(
const spdlog::details::log_msg & msg,
spdlog::memory_buf_t & dest
) override
function clone
std::unique_ptr< spdlog::formatter > clone() const override
6.2.3.1.2 - AclLog::FileLocationFormatterFlag
AclLog::FileLocationFormatterFlag
A custom flag formatter which logs the source file location between a pair of “[]”, in case the location is provided with the log call.
#include <AclLog.h>
Inherits from spdlog::custom_flag_formatter
Public Functions
Name | |
---|---|
void | format(const spdlog::details::log_msg & msg, const std::tm & , spdlog::memory_buf_t & dest) override |
std::unique_ptr< custom_flag_formatter > | clone() const override |
Public Functions Documentation
function format
inline void format(
const spdlog::details::log_msg & msg,
const std::tm & ,
spdlog::memory_buf_t & dest
) override
function clone
inline std::unique_ptr< custom_flag_formatter > clone() const override
6.2.3.1.3 - AclLog::ThreadNameFormatterFlag
AclLog::ThreadNameFormatterFlag
Inherits from spdlog::custom_flag_formatter
Public Functions
Name | |
---|---|
void | format(const spdlog::details::log_msg & , const std::tm & , spdlog::memory_buf_t & dest) override |
std::unique_ptr< custom_flag_formatter > | clone() const override |
Public Functions Documentation
function format
inline void format(
const spdlog::details::log_msg & ,
const std::tm & ,
spdlog::memory_buf_t & dest
) override
function clone
inline std::unique_ptr< custom_flag_formatter > clone() const override
6.2.3.1.4 - AlignedAudioFrame
AlignedAudioFrame
AlignedAudioFrame is a frame of interleaved floating point audio samples with a given number of channels.
#include <AlignedFrame.h>
Public Attributes
Name | |
---|---|
std::vector< float > | mSamples |
uint8_t | mNumberOfChannels |
uint32_t | mNumberOfSamplesPerChannel |
Public Attributes Documentation
variable mSamples
std::vector< float > mSamples;
variable mNumberOfChannels
uint8_t mNumberOfChannels = 0;
variable mNumberOfSamplesPerChannel
uint32_t mNumberOfSamplesPerChannel = 0;
6.2.3.1.5 - AlignedFrame
AlignedFrame
A frame of aligned data that is passed to the rendering engine from the MediaReceiver. A DataFrame contains a time stamped frame of media, which might be video, audio and auxiliary data such as subtitles. A single DataFrame can contain one or multiple types of media. Which media types are included can be probed by nullptr-checking/size checking the data members. The struct has ownership of all data pointers included. The struct includes all logic for freeing the resources held by this struct and the user should therefore just make sure the struct itself is deallocated to ensure all resources are freed.
#include <AlignedFrame.h>
Public Functions
Name | |
---|---|
AlignedFrame() =default | |
~AlignedFrame() =default | |
AlignedFrame(AlignedFrame const & ) =delete | |
AlignedFrame & | operator=(AlignedFrame const & ) =delete |
std::shared_ptr< AlignedFrame > | makeShallowCopy() const Make a shallow copy of this AlignedFrame (video and audio pointers will point to the same video and audio memory buffers as in the frame copied from) |
Public Attributes
Name | |
---|---|
int64_t | mCaptureTimestamp |
int64_t | mRenderingTimestamp |
std::shared_ptr< DeviceMemory > | mVideoFrame |
PixelFormat | mPixelFormat |
uint32_t | mFrameRateN |
uint32_t | mFrameRateD |
uint32_t | mWidth |
uint32_t | mHeight |
AlignedAudioFramePtr | mAudioFrame |
uint32_t | mAudioSamplingFrequency |
Public Functions Documentation
function AlignedFrame
AlignedFrame() =default
function ~AlignedFrame
~AlignedFrame() =default
function AlignedFrame
AlignedFrame(
AlignedFrame const &
) =delete
function operator=
AlignedFrame & operator=(
AlignedFrame const &
) =delete
function makeShallowCopy
std::shared_ptr< AlignedFrame > makeShallowCopy() const
Make a shallow copy of this AlignedFrame (video and audio pointers will point to the same video and audio memory buffers as in the frame copied from)
Return: A pointer to a new frame, which is a shallow copy of the old one, pointing to the same video and audio memory buffers
Public Attributes Documentation
variable mCaptureTimestamp
int64_t mCaptureTimestamp = 0;
The TAI timestamp in microseconds since the TAI epoch when this frame was captured by the ingest
variable mRenderingTimestamp
int64_t mRenderingTimestamp = 0;
The TAI timestamp in microseconds since the TAI epoch when this frame should be delivered to the rendering engine
variable mVideoFrame
std::shared_ptr< DeviceMemory > mVideoFrame = nullptr;
variable mPixelFormat
PixelFormat mPixelFormat = PixelFormat::kUnknown;
variable mFrameRateN
uint32_t mFrameRateN = 0;
variable mFrameRateD
uint32_t mFrameRateD = 0;
variable mWidth
uint32_t mWidth = 0;
variable mHeight
uint32_t mHeight = 0;
variable mAudioFrame
AlignedAudioFramePtr mAudioFrame = nullptr;
variable mAudioSamplingFrequency
uint32_t mAudioSamplingFrequency = 0;
6.2.3.1.6 - ControlDataCommon::ConnectionStatus
ControlDataCommon::ConnectionStatus
Connection status struct containing information about a connection event.
#include <ControlDataCommon.h>
Public Functions
Name | |
---|---|
ConnectionStatus() =default | |
ConnectionStatus(ConnectionType mConnectionType, const std::string & mIp, uint16_t mPort) |
Public Attributes
Name | |
---|---|
ConnectionType | mConnectionType |
std::string | mIP |
uint16_t | mPort |
Public Functions Documentation
function ConnectionStatus
ConnectionStatus() =default
function ConnectionStatus
inline ConnectionStatus(
ConnectionType mConnectionType,
const std::string & mIp,
uint16_t mPort
)
Public Attributes Documentation
variable mConnectionType
ConnectionType mConnectionType = ConnectionType::DISCONNECTED;
variable mIP
std::string mIP;
variable mPort
uint16_t mPort = 0;
6.2.3.1.7 - ControlDataCommon::Response
ControlDataCommon::Response
A response from a ControlDataReceiver to a request. The UUID tells which receiver the response is sent from.
#include <ControlDataCommon.h>
Public Attributes
Name | |
---|---|
std::vector< uint8_t > | mMessage The actual message. |
uint64_t | mRequestId The ID of the request this is a response to. |
std::string | mFromUUID The UUID of the receiver this response is sent from. |
Public Attributes Documentation
variable mMessage
std::vector< uint8_t > mMessage;
The actual message.
variable mRequestId
uint64_t mRequestId = 0;
The ID of the request this is a response to.
variable mFromUUID
std::string mFromUUID;
The UUID of the receiver this response is sent from.
6.2.3.1.8 - ControlDataCommon::StatusMessage
ControlDataCommon::StatusMessage
A status message from a ControlDataReceiver. The UUID tells which receiver the message is sent from.
#include <ControlDataCommon.h>
Public Attributes
Name | |
---|---|
std::vector< uint8_t > | mMessage The actual message. |
std::string | mFromUUID The UUID of the receiver this message is sent from. |
Public Attributes Documentation
variable mMessage
std::vector< uint8_t > mMessage;
The actual message.
variable mFromUUID
std::string mFromUUID;
The UUID of the receiver this message is sent from.
6.2.3.1.9 - ControlDataReceiver
ControlDataReceiver
A ControlDataReceiver can receive messages from a sender or another ControlDataReceiver using a network connection. It can also connect to and forward the incoming request messages to other receivers. The connections to the sender and the other receivers are controlled by an ISystemControllerInterface instance. The ControlDataReceiver has a receiving
or listening
side, as well as a sending
side. The listening side can listen to one single network port and have multiple ControlDataSenders and ControlDataReceivers connected to that port to receive requests from them. On the sending side of the ControlDataReceiver, it can be connected to the listening side of other ControlDataReceivers, used to forward all incoming messages to that receiver, as well as sending its own requests. More…
#include <ControlDataReceiver.h>
Public Classes
Name | |
---|---|
struct | IncomingRequest An incoming request to this ControlDataReceiver. |
struct | ReceiverResponse A response message to a request. |
struct | Settings Settings for a ControlDataReceiver. |
Public Functions
Name | |
---|---|
ControlDataReceiver() Default constructor, creates an empty object. | |
~ControlDataReceiver() Destructor. Will disconnect from the connected receivers and close the System controller connection. | |
bool | configure(const std::shared_ptr< ISystemControllerInterface > & controllerInterface, const Settings & settings) Configure this ControlDataReceiver and connect it to the System Controller. This method will fail in case the ISystemControllerInterface has already been connected to the controller by another component, as such interface can only be used by one component. |
bool | sendStatusMessageToSender(const std::vector< uint8_t > & message) Send a status message to the (directly or indirectly) connected ControlDataSender. In case this ControlDataReceiver has another ControlDataReceiver as sender, that receiver will forward the status message to the sender. |
bool | sendRequestToReceivers(const std::vector< uint8_t > & request, uint64_t & requestId) Send a request to the connected ControlDataReceivers asynchronously. This request will only be sent to ControlDataReceivers on the sending side of this receiver. In case a receiver is located between this ControlDataReceiver and the sender, neither of those will see this request. The response will be sent to the response callback asynchronously. |
bool | sendMultiRequestToReceivers(const std::vector< std::vector< uint8_t >> & request, uint64_t & requestId) Send a multi-message request to the connected ControlDataReceivers asynchronously. This request will only be sent to ControlDataReceivers on the sending side of this receiver. In case a receiver is located between this ControlDataReceiver and the sender, neither of those will see this request. The response will be sent to the response callback asynchronously. |
ControlDataReceiver(ControlDataReceiver const & ) =delete | |
ControlDataReceiver & | operator=(ControlDataReceiver const & ) =delete |
std::string | getVersion() Get application version. |
std::string | getBuildInfo() Get application build information. |
Detailed Description
class ControlDataReceiver;
A ControlDataReceiver can receive messages from a sender or another ControlDataReceiver using a network connection. It can also connect to and forward the incoming request messages to other receivers. The connections to the sender and the other receivers are controlled by an ISystemControllerInterface instance. The ControlDataReceiver has a receiving
or listening
side, as well as a sending
side. The listening side can listen to one single network port and have multiple ControlDataSenders and ControlDataReceivers connected to that port to receive requests from them. On the sending side of the ControlDataReceiver, it can be connected to the listening side of other ControlDataReceivers, used to forward all incoming messages to that receiver, as well as sending its own requests.
Each ControlDataReceiver can be configured to have a certain message delay. This delay parameter is set when the System controller instructs the ControlDataReceiver to start listening for incoming connections from senders (or sending ControlDataReceiver). Each incoming request will then be delayed and delivered according to the parameter, compared to the send timestamp in the message. In case multiple receivers are chained, like sender->receiver1->receiver2, receiver1 will delay the incoming messages from the sender based on when they were sent from the sender. Furthermore, receiver2 will delay them compared to when they were sent from receiver1.
A ControlDataReceiver can send status messages back to the sender. In case of chained receivers, the message will be forwarded back to the sender. A user of the ControlDataReceiver can register callbacks to receive requests and status messages. There is also an optional “preview” callback that is useful in case the incoming messages are delayed (have a delay > 0). This callback will then be called as soon as the request message arrives, to allow the user to prepare for when the actual delayed request callback is called.
Public Functions Documentation
function ControlDataReceiver
ControlDataReceiver()
Default constructor, creates an empty object.
function ~ControlDataReceiver
~ControlDataReceiver()
Destructor. Will disconnect from the connected receivers and close the System controller connection.
function configure
bool configure(
const std::shared_ptr< ISystemControllerInterface > & controllerInterface,
const Settings & settings
)
Configure this ControlDataReceiver and connect it to the System Controller. This method will fail in case the ISystemControllerInterface has already been connected to the controller by another component, as such interface can only be used by one component.
Parameters:
- controllerInterface The interface to the System controller, used for communicating with this ControlDataReceiver
- settings The settings to use for this ControlDataReceiver
Return: True on success, false otherwise
function sendStatusMessageToSender
bool sendStatusMessageToSender(
const std::vector< uint8_t > & message
)
Send a status message to the (directly or indirectly) connected ControlDataSender. In case this ControlDataReceiver has another ControlDataReceiver as sender, that receiver will forward the status message to the sender.
Parameters:
- message The status message
Return: True in case the message was sent successfully, false otherwise.
function sendRequestToReceivers
bool sendRequestToReceivers(
const std::vector< uint8_t > & request,
uint64_t & requestId
)
Send a request to the connected ControlDataReceivers asynchronously. This request will only be sent to ControlDataReceivers on the sending side of this receiver. In case a receiver is located between this ControlDataReceiver and the sender, neither of those will see this request. The response will be sent to the response callback asynchronously.
Parameters:
- request The request message
- requestId The unique identifier of this request. Used to identify the async response.
Return: True if the request was successfully sent, false otherwise
function sendMultiRequestToReceivers
bool sendMultiRequestToReceivers(
const std::vector< std::vector< uint8_t >> & request,
uint64_t & requestId
)
Send a multi-message request to the connected ControlDataReceivers asynchronously. This request will only be sent to ControlDataReceivers on the sending side of this receiver. In case a receiver is located between this ControlDataReceiver and the sender, neither of those will see this request. The response will be sent to the response callback asynchronously.
Parameters:
- request The request message
- requestId The unique identifier of this request. Used to identify the async response.
Return: True if the request was successfully sent, false otherwise
function ControlDataReceiver
ControlDataReceiver(
ControlDataReceiver const &
) =delete
function operator=
ControlDataReceiver & operator=(
ControlDataReceiver const &
) =delete
function getVersion
static std::string getVersion()
Get application version.
Return: a string with the current version, e.g. “1.0.0”
function getBuildInfo
static std::string getBuildInfo()
Get application build information.
Return: a string with the current build information such as git hashes of all direct dependencies
Updated on 2024-01-25 at 12:02:05 +0100
6.2.3.1.10 - ControlDataReceiver::IncomingRequest
ControlDataReceiver::IncomingRequest
An incoming request to this ControlDataReceiver.
#include <ControlDataReceiver.h>
Public Attributes
Name | |
---|---|
std::vector< std::vector< uint8_t > > | mMessages The actual messages. |
std::string | mSenderUUID UUID of the sender/forwarder that sent the request to this ControlDataReceiver. |
std::string | mRequesterUUID UUID of the requester (the original sender, creating the request) |
uint64_t | mRequestID The requester’s unique id of this request. |
int64_t | mSenderTimestampUs The TAI timestamp when this message was sent from the sender/forwarder in micro sec since TAI epoch. |
int64_t | mDeliveryTimestampUs The TAI timestamp when this message was delivered to this ControlDataReceiver, in micro sec since TAI epoch. |
Public Attributes Documentation
variable mMessages
std::vector< std::vector< uint8_t > > mMessages;
The actual messages.
variable mSenderUUID
std::string mSenderUUID;
UUID of the sender/forwarder that sent the request to this ControlDataReceiver.
variable mRequesterUUID
std::string mRequesterUUID;
UUID of the requester (the original sender, creating the request)
variable mRequestID
uint64_t mRequestID = 0;
The requester’s unique id of this request.
variable mSenderTimestampUs
int64_t mSenderTimestampUs = 0;
The TAI timestamp when this message was sent from the sender/forwarder in micro sec since TAI epoch.
variable mDeliveryTimestampUs
int64_t mDeliveryTimestampUs = 0;
The TAI timestamp when this message was delivered to this ControlDataReceiver, in micro sec since TAI epoch.
6.2.3.1.11 - ControlDataReceiver::ReceiverResponse
ControlDataReceiver::ReceiverResponse
A response message to a request.
#include <ControlDataReceiver.h>
Public Attributes
Name | |
---|---|
std::vector< uint8_t > | mMessage |
Public Attributes Documentation
variable mMessage
std::vector< uint8_t > mMessage;
6.2.3.1.12 - ControlDataReceiver::Settings
ControlDataReceiver::Settings
Settings for a ControlDataReceiver.
#include <ControlDataReceiver.h>
Public Attributes
Name | |
---|---|
std::function< void(const IncomingRequest &)> | mPreviewIncomingRequestCallback |
std::function< ReceiverResponse(const IncomingRequest &)> | mIncomingRequestCallback |
std::function< void(const ControlDataCommon::ConnectionStatus &)> | mConnectionStatusCallback Callback for connection events. |
std::function< void(const ControlDataCommon::Response &)> | mResponseCallback |
Public Attributes Documentation
variable mPreviewIncomingRequestCallback
std::function< void(const IncomingRequest &)> mPreviewIncomingRequestCallback;
Callback called as soon as this ControlDataReceiver gets the request, as a “preview” of what requests will be delivered.
variable mIncomingRequestCallback
std::function< ReceiverResponse(const IncomingRequest &)> mIncomingRequestCallback;
The actual callback used when this ControlDataReceiver gets a request and wants a response back. This callback will delay and deliver the request according to this receiver’s configured delay.
variable mConnectionStatusCallback
std::function< void(const ControlDataCommon::ConnectionStatus &)> mConnectionStatusCallback;
Callback for connection events.
variable mResponseCallback
std::function< void(const ControlDataCommon::Response &)> mResponseCallback;
6.2.3.1.13 - ControlDataSender
ControlDataSender
A ControlDataSender can send control signals to one or more receivers using a network connection. A single ControlDataSender can connect to multiple receivers, all identified by a UUID. The class is controlled using an ISystemControllerInterface; this interface is responsible for setting up connections to receivers. The ControlDataSender can send asynchronous requests to (all) the receivers and get a response back. Each response is identified with a request ID as well as the UUID of the responding receiver. The ControlDataSender can also receive status messages from the receivers.
#include <ControlDataSender.h>
Public Classes
Name | |
---|---|
struct | Settings Settings for a ControlDataSender. |
Public Functions
Name | |
---|---|
ControlDataSender() Default constructor, creates an empty object. | |
~ControlDataSender() Destructor. Will disconnect from the connected receivers and close the System controller connection. | |
bool | configure(const std::shared_ptr< ISystemControllerInterface > & controllerInterface, const Settings & settings) Configure this ControlDataSender and connect it to the System Controller. This method will fail in case the ISystemControllerInterface has already been connected to the controller by another component, as such interface can only be used by one component. |
bool | sendRequestToReceivers(const std::vector< uint8_t > & request, uint64_t & requestId) Send a request to all the connected ControlDataReceivers asynchronously. The responses will be sent to the response callback. |
bool | sendMultiRequestToReceivers(const std::vector< std::vector< uint8_t >> & request, uint64_t & requestId) Send a multi-message request to all the connected ControlDataReceivers asynchronously. The responses will be sent to the response callback. |
ControlDataSender(ControlDataSender const & ) =delete | |
ControlDataSender & | operator=(ControlDataSender const & ) =delete |
std::string | getVersion() Get application version. |
std::string | getGitRevision() Get git revision which application was built on. |
std::string | getBuildInfo() Get application build information. |
Public Functions Documentation
function ControlDataSender
ControlDataSender()
Default constructor, creates an empty object.
function ~ControlDataSender
~ControlDataSender()
Destructor. Will disconnect from the connected receivers and close the System controller connection.
function configure
bool configure(
const std::shared_ptr< ISystemControllerInterface > & controllerInterface,
const Settings & settings
)
Configure this ControlDataSender and connect it to the System Controller. This method will fail in case the ISystemControllerInterface has already been connected to the controller by another component, as such interface can only be used by one component.
Parameters:
- controllerInterface The interface to the System controller, used for communicating with this ControlDataSender
- settings The Settings to use for this ControlDataSender
Return: True on success, false otherwise
function sendRequestToReceivers
bool sendRequestToReceivers(
const std::vector< uint8_t > & request,
uint64_t & requestId
)
Send a request to all the connected ControlDataReceivers asynchronously. The responses will be sent to the response callback.
Parameters:
- request The request message
- requestId The unique identifier of this request. Used to identify the async responses.
Return: True if the request was successfully sent, false otherwise
function sendMultiRequestToReceivers
bool sendMultiRequestToReceivers(
const std::vector< std::vector< uint8_t >> & request,
uint64_t & requestId
)
Send a multi-message request to all the connected ControlDataReceivers asynchronously. The responses will be sent to the response callback.
Parameters:
- request The request message
- requestId The unique identifier of this request. Used to identify the async responses.
Return: True if the request was successfully sent, false otherwise
function ControlDataSender
ControlDataSender(
ControlDataSender const &
) =delete
function operator=
ControlDataSender & operator=(
ControlDataSender const &
) =delete
function getVersion
static std::string getVersion()
Get application version.
Return: a string with the current version, e.g. “1.0.0”
function getGitRevision
static std::string getGitRevision()
Get git revision which application was built on.
Return: a string with the current git revision, e.g. “v2.1.0-231-gcd012ab”, followed by “-dirty” in case there are changes made to the Git repo.
function getBuildInfo
static std::string getBuildInfo()
Get application build information.
Return: a string with the current build information such as git hashes of all direct dependencies
6.2.3.1.14 - ControlDataSender::Settings
ControlDataSender::Settings
Settings for a ControlDataSender.
#include <ControlDataSender.h>
Public Attributes
Name | |
---|---|
std::function< void(const ControlDataCommon::ConnectionStatus &)> | mConnectionStatusCallback |
std::function< void(const ControlDataCommon::Response &)> | mResponseCallback |
std::function< void(const ControlDataCommon::StatusMessage &)> | mStatusMessageCallback |
Public Attributes Documentation
variable mConnectionStatusCallback
std::function< void(const ControlDataCommon::ConnectionStatus &)> mConnectionStatusCallback;
variable mResponseCallback
std::function< void(const ControlDataCommon::Response &)> mResponseCallback;
variable mStatusMessageCallback
std::function< void(const ControlDataCommon::StatusMessage &)> mStatusMessageCallback;
6.2.3.1.15 - DeviceMemory
DeviceMemory
RAII class for a CUDA memory buffer.
#include <DeviceMemory.h>
Public Functions
Name | |
---|---|
DeviceMemory() =default Default constructor, creates an empty object, without allocating any memory on the device. | |
DeviceMemory(size_t numberOfBytes) Constructor allocating the required number of bytes. | |
DeviceMemory(size_t numberOfBytes, cudaStream_t cudaStream) Constructor allocating the required number of bytes by making an async allocation. The allocation will be put in the given CUDA stream. | |
DeviceMemory(void * deviceMemory) Constructor taking ownership of an already allocated CUDA memory pointer. This class will free the pointer once it goes out of scope. | |
bool | allocateMemory(size_t numberOfBytes) Allocates device memory. The memory allocated will automatically be freed by the destructor. |
bool | allocateMemoryAsync(size_t numberOfBytes, cudaStream_t cudaStream) Allocates device memory. The memory allocated will automatically be freed by the destructor. |
bool | reallocateMemory(size_t numberOfBytes) Reallocates device memory. Already existing memory allocation will be freed before the new allocation is made. In case this DeviceMemory has no earlier memory allocation, this method will just allocate new CUDA memory and return a success status. |
bool | reallocateMemoryAsync(size_t numberOfBytes, cudaStream_t cudaStream) Asynchronously reallocate device memory. Already existing memory allocation will be freed before the new allocation is made. In case this DeviceMemory has no earlier memory allocation, this method will just allocate new CUDA memory and return a success status. |
bool | allocateAndResetMemory(size_t numberOfBytes) Allocates device memory and resets all bytes to zeroes. The memory allocated will automatically be freed by the destructor. |
bool | allocateAndResetMemoryAsync(size_t numberOfBytes, cudaStream_t cudaStream) Allocates device memory and resets all bytes to zeroes. The memory allocated will automatically be freed by the destructor. |
bool | freeMemory() Free the device memory held by this class. Calling this when no memory is allocated is a no-op. |
bool | freeMemoryAsync(cudaStream_t cudaStream) Deallocate memory asynchronously, in a given CUDA stream. Calling this when no memory is allocated is a no-op. |
void | setFreeingCudaStream(cudaStream_t cudaStream) Set which CUDA stream to use for freeing this DeviceMemory. In case the DeviceMemory already holds a CUDA stream to use for freeing the memory, this will be overwritten. |
~DeviceMemory() Destructor, frees the internal CUDA memory. | |
template <typename T =uint8_t> T * | getDevicePointer() const |
size_t | getSize() const |
DeviceMemory(DeviceMemory && other) | |
DeviceMemory & | operator=(DeviceMemory && other) |
void | swap(DeviceMemory & other) |
DeviceMemory(DeviceMemory const & ) =delete DeviceMemory is not copyable. | |
DeviceMemory | operator=(DeviceMemory const & ) =delete |
Public Functions Documentation
function DeviceMemory
DeviceMemory() =default
Default constructor, creates an empty object, without allocating any memory on the device.
function DeviceMemory
explicit DeviceMemory(
size_t numberOfBytes
)
Constructor allocating the required number of bytes.
Parameters:
- numberOfBytes Number of bytes to allocate
Exceptions:
- std::runtime_error In case the allocation failed
function DeviceMemory
explicit DeviceMemory(
size_t numberOfBytes,
cudaStream_t cudaStream
)
Constructor allocating the required number of bytes by making an async allocation. The allocation will be put in the given CUDA stream.
Parameters:
- numberOfBytes Number of bytes to allocate
- cudaStream The CUDA stream to put the async allocation in. A reference to this stream will also be saved internally to be used to asynchronously free the memory when the instance goes out of scope.
Exceptions:
- std::runtime_error In case the async allocation failed to be put in queue.
Note:
- The method will return as soon as the allocation is put in queue in the CUDA stream, i.e. before the actual allocation is made. Using this DeviceMemory is only valid as long as it is used in the same CUDA stream, or in case another stream is used, only if that stream is synchronized first with respect to cudaStream.
- In case the freeMemoryAsync method is not explicitly called, the memory will be freed synchronously when this DeviceMemory instance goes out of scope, meaning that the entire GPU is synchronized, which will impact performance negatively.
function DeviceMemory
explicit DeviceMemory(
void * deviceMemory
)
Constructor taking ownership of an already allocated CUDA memory pointer. This class will free the pointer once it goes out of scope.
Parameters:
- deviceMemory CUDA memory pointer to take ownership over.
function allocateMemory
bool allocateMemory(
size_t numberOfBytes
)
Allocates device memory. The memory allocated will automatically be freed by the destructor.
Parameters:
- numberOfBytes Number of bytes to allocate
Return: True on success, false if there is already memory allocated by this instance, or if the CUDA malloc failed.
function allocateMemoryAsync
bool allocateMemoryAsync(
size_t numberOfBytes,
cudaStream_t cudaStream
)
Allocates device memory. The memory allocated will automatically be freed by the destructor.
Parameters:
- numberOfBytes Number of bytes to allocate
- cudaStream The CUDA stream to use for the allocation. A reference to this stream will also be saved internally to be used to asynchronously free the memory when the instance goes out of scope.
Return: True on success, false if there is already memory allocated by this instance, or if the CUDA malloc failed.
function reallocateMemory
bool reallocateMemory(
size_t numberOfBytes
)
Reallocates device memory. Already existing memory allocation will be freed before the new allocation is made. In case this DeviceMemory has no earlier memory allocation, this method will just allocate new CUDA memory and return a success status.
Parameters:
- numberOfBytes Number of bytes to allocate in the new allocation
Return: True on success, false if CUDA free or CUDA malloc failed.
function reallocateMemoryAsync
bool reallocateMemoryAsync(
size_t numberOfBytes,
cudaStream_t cudaStream
)
Asynchronously reallocate device memory. Already existing memory allocation will be freed before the new allocation is made. In case this DeviceMemory has no earlier memory allocation, this method will just allocate new CUDA memory and return a success status.
Parameters:
- numberOfBytes Number of bytes to allocate in the new allocation
- cudaStream The CUDA stream to use for the allocation and freeing of memory. A reference to this stream will also be saved internally to be used to asynchronously free the memory when this instance goes out of scope.
Return: True on success, false if CUDA free or CUDA malloc failed.
function allocateAndResetMemory
bool allocateAndResetMemory(
size_t numberOfBytes
)
Allocates device memory and resets all bytes to zeroes. The memory allocated will automatically be freed by the destructor.
Parameters:
- numberOfBytes Number of bytes to allocate
Return: True on success, false if there is already memory allocated by this instance, or if any of the CUDA operations failed.
function allocateAndResetMemoryAsync
bool allocateAndResetMemoryAsync(
size_t numberOfBytes,
cudaStream_t cudaStream
)
Allocates device memory and resets all bytes to zeroes. The memory allocated will automatically be freed by the destructor.
Parameters:
- numberOfBytes Number of bytes to allocate
- cudaStream The CUDA stream to use for the allocation and resetting. A reference to this stream will also be saved internally to be used to asynchronously free the memory when the instance goes out of scope.
Return: True on success, false if there is already memory allocated by this instance, or if any of the CUDA operations failed.
function freeMemory
bool freeMemory()
Free the device memory held by this class. Calling this when no memory is allocated is a no-op.
Return: True in case the memory was successfully freed (or not allocated to begin with), false otherwise.
Note: This method will free the memory in a synchronous fashion, synchronizing the entire CUDA context and ignoring the internally saved CUDA stream reference in case one exists. For async freeing of the memory, use freeMemoryAsync instead.
function freeMemoryAsync
bool freeMemoryAsync(
cudaStream_t cudaStream
)
Deallocate memory asynchronously, in a given CUDA stream. Calling this when no memory is allocated is a no-op.
Parameters:
- cudaStream The CUDA stream to free the memory asynchronously in
Return: True in case the memory deallocation request was successfully put in queue in the CUDA stream.
Note:
- The method will return as soon as the deallocation is put in queue in the CUDA stream, i.e. before the actual deallocation is made.
- It is the programmer’s responsibility to ensure this DeviceMemory is not used in another CUDA stream before this method is called. In case it is used in another CUDA stream, sufficient synchronization must be made before calling this method (and a very good reason given for not freeing the memory in that CUDA stream instead)
function setFreeingCudaStream
void setFreeingCudaStream(
cudaStream_t cudaStream
)
Set which CUDA stream to use for freeing this DeviceMemory. In case the DeviceMemory already holds a CUDA stream to use for freeing the memory, this will be overwritten.
Parameters:
- cudaStream The new CUDA stream to use for freeing the memory when the destructor is called.
Note: It is the programmer’s responsibility to ensure this DeviceMemory is not used in another CUDA stream before this instance is destructed. In case it is used in another CUDA stream, sufficient synchronization must be made before setting this as the new CUDA stream to use when freeing the memory.
function ~DeviceMemory
~DeviceMemory()
Destructor, frees the internal CUDA memory.
function getDevicePointer
template <typename T =uint8_t>
inline T * getDevicePointer() const
Template Parameters:
- T The pointer type to return
Return: the CUDA memory pointer handled by this class. Nullptr in case no memory is allocated.
function getSize
size_t getSize() const
Return: The size of the CUDA memory allocation held by this class.
function DeviceMemory
DeviceMemory(
DeviceMemory && other
)
function operator=
DeviceMemory & operator=(
DeviceMemory && other
)
function swap
void swap(
DeviceMemory & other
)
function DeviceMemory
DeviceMemory(
DeviceMemory const &
) =delete
DeviceMemory is not copyable.
function operator=
DeviceMemory operator=(
DeviceMemory const &
) =delete
6.2.3.1.16 - IngestApplication
IngestApplication
Public Classes
Name | |
---|---|
struct | Settings |
Public Functions
Name | |
---|---|
IngestApplication() Constructor, creates an empty IngestApplication without starting it. | |
~IngestApplication() Destructor. | |
bool | start(const std::shared_ptr< ISystemControllerInterface > & controllerInterface, const Settings & settings) Start this IngestApplication given an interface to the System Controller and a UUID. |
bool | stop() Stop this IngestApplication. |
std::string | getVersion() Get application version. |
std::string | getGitRevision() Get git revision which application was built on. |
std::string | getBuildInfo() Get application build information. |
std::string | getLibraryVersions() Get the versions of the libraries available at runtime, among others, CUDA version, BMD and NDI versions. |
Public Functions Documentation
function IngestApplication
IngestApplication()
Constructor, creates an empty IngestApplication without starting it.