Agile Content Delivery
- 1: Components
- 1.1: ESB3024 Router
- 1.1.1: Getting Started
- 1.1.2: Installation
- 1.1.2.1: Installing a 1.16 release
- 1.1.2.1.1: Configuration changes between 1.14 and 1.16
- 1.1.2.2: Installing a 1.14 release
- 1.1.2.3: Installing a 1.12 release
- 1.1.2.4: Installing release 1.10.x
- 1.1.2.5: Installing release 1.8.0
- 1.1.2.6: Installing release 1.6.0
- 1.1.3: Firewall
- 1.1.4: API Overview
- 1.1.5: Configuration
- 1.1.5.1: WebUI Configuration
- 1.1.5.2: Confd and Confcli
- 1.1.5.3: Session Groups and Classification
- 1.1.5.4: Accounts
- 1.1.5.5: Advanced features
- 1.1.5.5.1: Content popularity
- 1.1.5.5.2: Consistent Hashing
- 1.1.5.5.3: Security token verification
- 1.1.5.5.4: Subnets API
- 1.1.5.5.5: Lua Features
- 1.1.5.5.5.1: Built-in Lua Functions
- 1.1.5.5.5.2: Global Lua Tables
- 1.1.5.5.5.3: Request Translation Function
- 1.1.5.5.5.4: Session Translation Function
- 1.1.5.5.5.5: Host Request Translation Function
- 1.1.5.5.5.6: Response Translation Function
- 1.1.5.6: Trusted proxies
- 1.1.5.7: Confd Auto Upgrade Tool
- 1.1.6: Operations
- 1.1.6.1: Services
- 1.1.7: Convoy Bridge
- 1.1.8: Monitoring
- 1.1.8.1: Access logging
- 1.1.8.2: System troubleshooting
- 1.1.8.3: Scraping data with Prometheus
- 1.1.8.4: Visualizing data with Grafana
- 1.1.8.4.1: Managing Grafana
- 1.1.8.4.2: Grafana Dashboards
- 1.1.8.5: Alarms and Alerting
- 1.1.8.6: Monitoring multiple routers
- 1.1.8.7: Routing Rule Evaluation Metrics
- 1.1.8.8: Metrics
- 1.1.8.8.1: Internal Metrics
- 1.1.9: Releases
- 1.1.9.1: Release esb3024-1.16.0
- 1.1.9.2: Release esb3024-1.14.2
- 1.1.9.3: Release esb3024-1.14.0
- 1.1.9.4: Release esb3024-1.12.1
- 1.1.9.5: Release esb3024-1.12.0
- 1.1.9.6: Release esb3024-1.10.2
- 1.1.9.7: Release esb3024-1.10.1
- 1.1.9.8: Release esb3024-1.10.0
- 1.1.9.9: Release esb3024-1.8.0
- 1.1.9.10: Release esb3024-1.6.0
- 1.1.9.11: Release esb3024-1.4.0
- 1.1.9.12: Release acd-router-1.2.3
- 1.1.9.13: Release acd-router-1.2.0
- 1.1.9.14: Release acd-router-1.0.0
- 1.1.10: Glossary
- 1.2: ESB3032 ACD Aggregator
- 1.2.1: Getting Started
- 1.2.2: Releases
- 1.2.2.1: Release esb3032-0.2.0
- 1.2.2.2: Release esb3032-1.0.0
- 1.2.2.3: Release esb3032-1.2.1
- 1.2.2.4: Release esb3032-1.4.0
- 1.3: ACD Cache
- 1.4: ESB3013 BGP Sniffer
- 1.5: Edgeware CDN Management System
- 1.6: ESB3008 Edgeware CDN Request Router
- 2: Introduction
- 3: Solutions
- 3.1: Cache hardware metrics: monitoring and routing
- 3.2: Private CDN Offload Routing with DNS
- 3.3: Monitor ACD with Prometheus, Grafana and Alert Manager
- 4: Use Cases
- 4.1: Route on GeoIP/ASN
- 4.2: Route on Subnet
- 4.3: Route on Content Type
- 4.4: Route on Content Popularity
- 4.5: Route on Selection Input
- 4.6: Adapt to multi-CDN
- 4.7: How to use ACD router for EDNS routing
- 4.8: How to use ESB3024 Router with Edgeware CDN Request Router
- 4.9: How to use ESB3024 Router with CoreDNS
- 4.10: How to use the Director as a replacement for Edgeware CDN Request Router
- 4.11: Use In-Stream Sessions
- 5: Ready for Production?
- 5.1: Setup for Redundancy
- 5.2: HTTPS Certificates
1 - Components
1.1 - ESB3024 Router
1.1.1 - Getting Started
The Director is a versatile network service that redirects incoming HTTP(S) requests to the optimal host or Content Delivery Network (CDN) by evaluating various request properties against a set of rules. Although requests can be generic, the primary focus centers around audio-video content delivery. The rule engine allows users to construct routing configurations from predefined blocks, enabling the creation of intricate routing logic. This modular approach allows users to tailor and streamline the content delivery process to meet their specific needs. The Director’s flexible rule engine takes into account factors such as geographical location, server load, content type, and other metadata from external sources to intelligently route incoming requests. It supports dynamic adjustments to seamlessly adapt to changing network conditions, ensuring efficient and reliable content delivery. The Director improves the overall user experience by delivering content from the most suitable and responsive sources, thereby reducing latency and enhancing performance.
Requirements
Hardware
The Director is designed to be installed and operated on commodity hardware, ensuring accessibility for a broad range of users. The minimum hardware specifications are as follows:
- CPU: x86-64 AMD or Intel with at least 2 cores.
- Memory: At least 2 GB free at runtime.
Operating System Compatibility
The Director is officially supported on Red Hat Enterprise Linux 8 or 9 or any compatible operating system. In order to run the service, a minimum CPU architecture of x86-64-v2 is required. This can be determined by running the following command. If supported, it will be listed as “(supported)” in the output.
/usr/lib64/ld-linux-x86-64.so.2 --help | grep x86-64-v2
External Internet access is necessary during the installation process for the installer to download and install additional dependencies. This ensures a seamless setup and optimal functionality of the Director on Red Hat Enterprise Linux 8 or 9. It’s worth noting that, due to the unique workings of the DNF package manager in Red Hat Enterprise Linux with rolling package streams, an air-gapped installation process is not available.
Firewall Recommendations
See Firewall.
Installation
See Installation.
Operations
See Operations.
Configuration Process
Once the router is operational, it requires a valid configuration before it can route incoming requests.
There are currently three methods available for configuring the router, each catering to different levels of complexity. The first is a Web UI, suitable for the most common use-cases, providing an intuitive interface for configuration. The second involves utilizing a confd REST service, complemented by an optional command line tool, confcli, suitable for all but the most advanced scenarios. The third method involves leveraging an internal REST API, ideal for the most intricate cases where using confd proves to be less flexible. It’s essential to note that as the configuration method advances through these levels, both flexibility and complexity increase, providing users with tailored options based on their specific needs and expertise.
API Key Management
Regardless of the method used to configure the system, a unique API key is
crucial for safeguarding the router’s configuration and preventing unauthorized
access to the API. This key must be supplied when interacting with the API.
During the router software installation, an automatically generated API key is
created and can be located on the installed system at /opt/edgeware/acd/router/cache/rest-api-key.json. The structure of this file is as follows:
{"api_key": "abc123"}
When accessing the internal configuration API, the key must be included in the X-API-Key header of the request, as shown below:
curl -v -k -H "X-API-Key: abc123" https://<router-host.example>:5001/v2/configuration
Modifications to the authentication key and behavior can be made through the /v2/rest_api_key endpoint. To change the key, a PUT request with a JSON body of the same structure can be sent to the endpoint:
curl -v -k -X PUT -T new-key.json -H "X-API-Key: abc123" \
-H "Content-Type: application/json" https://<router-host.example>:5001/v2/rest_api_key
Additionally, key authentication can be disabled completely by sending a DELETE
request to the endpoint:
curl -v -k -X DELETE -H "X-API-Key: abc123" \
https://<router-host.example>:5001/v2/rest_api_key
In the event of a lost or forgotten authentication key, it can always be
retrieved at /opt/edgeware/acd/router/cache/rest-api-key.json
on the
machine running the router. It is critical to emphasize that the API key should
remain private to prevent unauthorized access to the internal API, as it grants
full access to the router’s configuration.
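For scripted access, the key can be read from that file and passed in the header. The snippet below is a minimal sketch, assuming the jq tool is installed and the API is reachable on localhost:
# Read the generated API key from the cache file (assumes jq is installed)
API_KEY=$(jq -r '.api_key' /opt/edgeware/acd/router/cache/rest-api-key.json)
# Fetch the current configuration, supplying the key in the X-API-Key header
curl -k -H "X-API-Key: ${API_KEY}" https://localhost:5001/v2/configuration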
Configuration Basics
Upon completing the installation process and configuring the API keys, the subsequent section will provide guidance on configuring the router to route all incoming requests to a single host. For straightforward CDN Offload use cases, there is a web based user interface described here.
For further details on configuring the router using confd and confcli, please consult the Confd documentation.
The initial step involves defining the target host group. In this illustration, a single group named all will be established, comprising two hosts.
$ confcli services.routing.hostGroups -w
Running wizard for resource 'hostGroups'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
hostGroups : [
hostGroup can be one of
1: dns
2: host
3: redirecting
Choose element index or name: host
Adding a 'host' element
hostGroup : {
name (default: ): all
type (default: host):
httpPort (default: 80):
httpsPort (default: 443):
hosts : [
host : {
name (default: ): host1.example.com
hostname (default: ): host1.example.com
ipv6_address (default: ):
}
Add another 'host' element to array 'hosts'? [y/N]: y
host : {
name (default: ): host2.example.com
hostname (default: ): host2.example.com
ipv6_address (default: ):
}
Add another 'host' element to array 'hosts'? [y/N]: n
]
}
Add another 'hostGroup' element to array 'hostGroups'? [y/N]: n
]
Generated config:
{
"hostGroups": [
{
"name": "all",
"type": "host",
"httpPort": 80,
"httpsPort": 443,
"hosts": [
{
"name": "host1.example.com",
"hostname": "host1.example.com",
"ipv6_address": ""
},
{
"name": "host2.example.com",
"hostname": "host2.example.com",
"ipv6_address": ""
}
]
}
]
}
Merge and apply the config? [y/n]:
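After answering 'y', the applied host group can be read back to confirm the result, for example:
# Print the currently configured host groups
confcli services.routing.hostGroups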
After defining the host group, the next step is to establish a rule that directs incoming requests to the designated hosts. In this example, a single rule named random will be created, ensuring that all incoming requests are consistently routed to the previously defined hosts.
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: random
Adding a 'random' element
rule : {
name (default: ): random
type (default: random):
targets : [
target (default: ): host1.example.com
Add another 'target' element to array 'targets'? [y/N]: y
target (default: ): host2.example.com
Add another 'target' element to array 'targets'? [y/N]: n
]
}
Add another 'rule' element to array 'rules'? [y/N]: n
]
Generated config:
{
"rules": [
{
"name": "random",
"type": "random",
"targets": [
"host1.example.com",
"host2.example.com"
]
}
]
}
Merge and apply the config? [y/n]:
The last essential step involves instructing the router on which rule should serve as the entry point into the routing tree. In this example, we designate the rule random as the entrypoint for the routing process.
$ confcli services.routing.entrypoint random
services.routing.entrypoint = 'random'
Once this configuration is defined, all incoming requests will initiate their traversal through the routing rules, starting with the rule named random. This rule is designed to consistently match for every incoming request, effectively load balancing evenly between host1.example.com and host2.example.com on port 80 or 443, depending on whether the initial request was made using HTTP or HTTPS.
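A quick way to verify the behaviour is to send a test request to the router and inspect the redirect it returns. This is only a sketch: the listening port and request path are assumptions, and the hostnames are the placeholder values used above.
# Send a test request and show the redirect target chosen by the router
curl -s -o /dev/null -D - http://<router-host.example>/content/index.m3u8 | grep -i '^location:'
# Expect a Location header pointing at host1.example.com or host2.example.com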
Integration with Convoy
The router is equipped with the capability to synchronize specific configuration metadata with a separate Convoy installation through the integrated convoy-bridge service. However, this service necessitates additional setup and configuration; comprehensive details on the process can be found here.
Additional Resources
Additional documentation resources are included with the Director and can be accessed at the following directory: /opt/edgeware/acd/documentation/. This directory contains supplementary materials to provide users with comprehensive information and guidance for optimizing their experience with the Director.
Ready for Production
Once the Director software is completely installed and configured, there are a few additional considerations before moving to a full production environment. See the section Ready for Production for additional information.
1.1.2 - Installation
1.1.2.1 - Installing a 1.16 release
To install ESB3024 Router, one first needs to copy the installation ISO image
to the target node where the router will be run. Due to the way the
installer operates, it is necessary that the host is reachable by
password-less SSH from itself for the user account that will perform the
installation, and that this user has sudo access.
Prerequisites:
Ensure that the current user has sudo access.
sudo -l
If the above command fails, you may need to add the user to the /etc/sudoers file.
Ensure that the installer has password-less SSH access to localhost.
If using the root user, the PermitRootLogin property of the /etc/ssh/sshd_config file must be set to ‘yes’.
The local host key must also be included in the .ssh/authorized_keys file of the user running the installer. That can be done by issuing the following as the intended user:
mkdir -m 0700 -p ~/.ssh
ssh-keyscan localhost >> ~/.ssh/authorized_keys
Note! The ssh-keyscan utility will result in the key fingerprint being output on the console. As a security best-practice it is recommended to verify that this host key matches the machine’s true SSH host key. As an alternative to this ssh-keyscan approach, establishing an SSH connection to localhost and accepting the host key will have the same result.
Disable SELinux.
The Security-Enhanced Linux Project (SELinux) is designed to add an additional layer of security to the operating system by enforcing a set of rules on processes. Unfortunately out of the box the default configuration is not compatible with the way the installer operates. Before proceeding with the installation, it is recommended to disable SELinux. It can be re-enabled after the installation completes, if desired, but will require manual configuration. Refer to the Red Hat Customer Portal for details.
To check if SELinux is enabled:
getenforce
This will result in one of 3 states, “Enforcing”, “Permissive” or “Disabled”. If the state is “Enforcing” use the following to disable SELinux. Either “Permissive” or “Disabled” is required to continue.
setenforce 0
This disables SELinux, but does not make the change persistent across reboots. To do that, edit the /etc/selinux/config file and set the SELINUX property to disabled.
It is recommended to reboot the computer after changing SELinux modes, but the changes should take effect immediately.
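The checks above can be run in one pass before starting the installer. The following is a minimal sketch that only reports status and changes nothing:
# Verify sudo access, password-less SSH to localhost and the SELinux mode
sudo -n -l > /dev/null 2>&1 && echo "sudo: OK" || echo "sudo: not available"
ssh -o BatchMode=yes localhost true && echo "ssh localhost: OK" || echo "ssh localhost: not password-less"
echo "SELinux mode: $(getenforce)"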
Assuming the installation ISO image is in the current working directory,
the following steps need to be executed either by the root user or with sudo.
Mount the installation ISO image under /mnt/acd.
Note: The mount-point may be any accessible path, but /mnt/acd will be used throughout this document.
mkdir -p /mnt/acd
mount esb3024-acd-router-1.16.0.iso /mnt/acd
Run the installer script.
/mnt/acd/installer
Upgrade from an earlier ESB3024 Router release
The following steps can be taken to upgrade the router from a 1.10 or later release to 1.16.0. If upgrading from an earlier release it is recommended to first upgrade to 1.10.1 and then to upgrade to 1.16.0.
The upgrade procedure for the router is performed by taking a backup of the configuration, installing the new release of the router, and applying the saved configuration.
With the router running, save a backup of the configuration.
The exact procedure to accomplish this depends on the current method of configuration, e.g. if confd is used, then the configuration should be extracted from confd, but if the REST API is used directly, then the configuration must be saved by fetching the current configuration snapshot using the REST API.
Extracting the configuration using confd is the recommended approach where available.
confcli | tee config_backup.json
To extract the configuration from the REST API, the following may be used instead. Depending on the version of the router used, an API key may be required to fetch from the REST API.
curl --insecure https://localhost:5001/v2/configuration \
  | tee config_backup.json
If the API key is required, it can be found in the file /opt/edgeware/acd/router/cache/rest-api-key.json and can be passed to the API by setting the value of the X-API-Key header.
curl --insecure -H "X-API-Key: 1234abcd" \
  https://localhost:5001/v2/configuration \
  | tee config_backup.json
Mount the new installation ISO under /mnt/acd.
Note: The mount-point may be any accessible path, but /mnt/acd will be used throughout this document.
mkdir -p /mnt/acd
mount esb3024-acd-router-1.16.0.iso /mnt/acd
Stop the router and all associated services.
Before upgrading the router it needs to be stopped, which can be done by typing this:
systemctl stop 'acd-*'
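To confirm that everything has stopped before continuing, the matching units can be listed; a small sketch:
# List any acd-* units that are still active; the list should be empty
systemctl list-units --state=active 'acd-*'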
Run the installer script.
/mnt/acd/installer
Migrate the configuration.
Note that this step only applies if the router is configured using confd. If it is configured using the REST API, this step is not necessary.
The confd configuration used in the previous versions is not directly compatible with 1.16, and may need to be converted. If this is not done, the configuration will not be valid and it will not be possible to make configuration changes.
The acd-confd-migration tool will automatically apply any necessary schema migrations. Further details about this tool can be found at Confd Auto Upgrade Tool.
The tool takes as input the old configuration file, either by reading the file directly or by reading from standard input, applies any necessary migrations between the two specified versions, and outputs a new configuration to standard output which is suitable for being applied to the upgraded system. While the tool has the ability to migrate between multiple versions at a time, the earliest supported version is 1.10.1.
The example below shows how to upgrade from 1.10.2. If upgrading from 1.14.0, --from 1.10.2 should be replaced with --from 1.14.0.
The command line required to run the tool is different depending on which esb3024 release it is run on. On 1.16.0 it is run like this:
cat config_backup.json | \
  podman run -i --rm \
  images.edgeware.tv/acd-confd-migration:1.16.0 \
  --in - --from 1.10.2 --to 1.16.0 \
  | tee config_upgraded.json
After running the above command, apply the new configuration to confd by running cat config_upgraded.json | confcli -i.
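To confirm that the migrated configuration was accepted, the same kind of check used for earlier upgrades can be reused; if confcli prints no ERROR lines, the migration succeeded:
# The output should start with the configuration itself, with no ERROR lines
confcli | head -n5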
Troubleshooting
If there is a problem running the installer, additional debug information can be output by adding -v, -vv or -vvv to the installer command; the more “v” characters, the more detailed the output.
1.1.2.1.1 - Configuration changes between 1.14 and 1.16
Confd configuration changes
The changes to the confd configuration between versions 1.14 and 1.16 are listed below.
Added region to the GeoIP classifier
Classifiers of type geoip now have a region property.
Added integration.routing.gui configuration
There is now an integration.routing.gui section which will be used by the GUI.
Added services.routing.accounts configuration
The services.routing.accounts list has been added to the configuration.
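The new list can be inspected or populated with confcli in the usual way; the account schema itself is described in the Accounts section, so this is just a sketch of the commands:
# Show the (initially empty) accounts list
confcli services.routing.accounts
# Add accounts interactively using the wizard
confcli services.routing.accounts -w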
1.1.2.2 - Installing a 1.14 release
To install ESB3024 Router, one first needs to copy the installation ISO image
to the target node where the router will be run. Due to the way the
installer operates, it is necessary that the host is reachable by
password-less SSH from itself for the user account that will perform the
installation, and that this user has sudo access.
Prerequisites:
Ensure that the current user has sudo access.
sudo -l
If the above command fails, you may need to add the user to the /etc/sudoers file.
Ensure that the installer has password-less SSH access to localhost.
If using the root user, the PermitRootLogin property of the /etc/ssh/sshd_config file must be set to ‘yes’.
The local host key must also be included in the .ssh/authorized_keys file of the user running the installer. That can be done by issuing the following as the intended user:
mkdir -m 0700 -p ~/.ssh
ssh-keyscan localhost >> ~/.ssh/authorized_keys
Note! The ssh-keyscan utility will result in the key fingerprint being output on the console. As a security best-practice it is recommended to verify that this host key matches the machine’s true SSH host key. As an alternative to this ssh-keyscan approach, establishing an SSH connection to localhost and accepting the host key will have the same result.
Disable SELinux.
The Security-Enhanced Linux Project (SELinux) is designed to add an additional layer of security to the operating system by enforcing a set of rules on processes. Unfortunately out of the box the default configuration is not compatible with the way the installer operates. Before proceeding with the installation, it is recommended to disable SELinux. It can be re-enabled after the installation completes, if desired, but will require manual configuration. Refer to the Red Hat Customer Portal for details.
To check if SELinux is enabled:
getenforce
This will result in one of 3 states, “Enforcing”, “Permissive” or “Disabled”. If the state is “Enforcing” use the following to disable SELinux. Either “Permissive” or “Disabled” is required to continue.
setenforce 0
This disables SELinux, but does not make the change persistent across reboots. To do that, edit the /etc/selinux/config file and set the SELINUX property to disabled.
It is recommended to reboot the computer after changing SELinux modes, but the changes should take effect immediately.
Assuming the installation ISO image is in the current working directory,
the following steps need to be executed either by the root user or with sudo.
Mount the installation ISO image under /mnt/acd.
Note: The mount-point may be any accessible path, but /mnt/acd will be used throughout this document.
mkdir -p /mnt/acd
mount esb3024-acd-router-1.14.0.iso /mnt/acd
Run the installer script.
/mnt/acd/installer
Upgrade from an earlier ESB3024 Router release
The following steps can be used to upgrade the router from a 1.10 or 1.12 release to 1.14.0. If upgrading from an earlier release, it is recommended to perform the upgrade in multiple steps; for instance, when upgrading from release 1.8.0 to 1.14.0, first upgrade to 1.10.1 and then to 1.14.0.
The upgrade procedure for the router is performed by taking a backup of the configuration, installing the new release of the router, and applying the saved configuration.
With the router running, save a backup of the configuration.
The exact procedure to accomplish this depends on the current method of configuration, e.g. if confd is used, then the configuration should be extracted from confd, but if the REST API is used directly, then the configuration must be saved by fetching the current configuration snapshot using the REST API.
Extracting the configuration using confd is the recommended approach where available.
confcli | tee config_backup.json
To extract the configuration from the REST API, the following may be used instead. Depending on the version of the router used, an API key may be required to fetch from the REST API.
curl --insecure https://localhost:5001/v2/configuration \
  | tee config_backup.json
If the API key is required, it can be found in the file /opt/edgeware/acd/router/cache/rest-api-key.json and can be passed to the API by setting the value of the X-API-Key header.
curl --insecure -H "X-API-Key: 1234abcd" \
  https://localhost:5001/v2/configuration \
  | tee config_backup.json
Mount the new installation ISO under /mnt/acd.
Note: The mount-point may be any accessible path, but /mnt/acd will be used throughout this document.
mkdir -p /mnt/acd
mount esb3024-acd-router-1.14.0.iso /mnt/acd
Stop the router and all associated services.
Before upgrading the router it needs to be stopped, which can be done by typing this:
systemctl stop 'acd-*'
Run the installer script.
/mnt/acd/installer
Migrate the configuration.
Note that this step only applies if the router is configured using confd. If it is configured using the REST API, this step is not necessary.
The confd configuration used in the previous versions is not directly compatible with 1.14, and may need to be converted. If this is not done, the configuration will not be valid and it will not be possible to make configuration changes.
The acd-confd-migration tool will automatically apply any necessary schema migrations. Further details about this tool can be found at Confd Auto Upgrade Tool.
The tool takes as input the old configuration file, either by reading the file directly or by reading from standard input, applies any necessary migrations between the two specified versions, and outputs a new configuration to standard output which is suitable for being applied to the upgraded system. While the tool has the ability to migrate between multiple versions at a time, the earliest supported version is 1.10.1.
The example below shows how to upgrade from 1.10.2. If upgrading from 1.12.0, --from 1.10.2 should be replaced with --from 1.12.0.
The command line required to run the tool is different depending on which esb3024 release it is run on. On 1.14.0 it is run like this:
cat config_backup.json | \
  podman run -i --rm \
  images.edgeware.tv/acd-confd-migration:1.14.0 \
  --in - --from 1.10.2 --to 1.14.0 \
  | tee config_upgraded.json
After running the above command, apply the new configuration to confd by running cat config_upgraded.json | confcli -i.
Troubleshooting
If there is a problem running the installer, additional debug information can be output by adding -v, -vv or -vvv to the installer command; the more “v” characters, the more detailed the output.
1.1.2.2.1 - Configuration changes between 1.12.1 and 1.14
Confd configuration changes
The changes to the confd configuration between versions 1.12.1 and 1.14.x are listed below.
Renamed services.routing.settings.allowedProxies
The configuration setting services.routing.settings.allowedProxies has been renamed to services.routing.settings.trustedProxies.
Added services.routing.tuning.general.restApiMaxBodySize
This parameter configures the maximum body size for the REST API. It mainly applies to the configuration, which sometimes has a large payload size.
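The parameter can be read and changed with confcli like any other tuning value. A sketch, where the numeric value and its unit are assumptions to be checked against the confd help string:
# Inspect the current REST API body size limit
confcli services.routing.tuning.general.restApiMaxBodySize
# Raise the limit to accommodate a large configuration payload (value is illustrative)
confcli services.routing.tuning.general.restApiMaxBodySize 33554432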
1.1.2.3 - Installing a 1.12 release
To install ESB3024 Router, one first needs to copy the installation ISO image
to the target node where the router will be run. Due to the way the
installer operates, it is necessary that the host is reachable by
password-less SSH from itself for the user account that will perform the
installation, and that this user has sudo access.
Prerequisites:
Ensure that the current user has sudo access.
sudo -l
If the above command fails, you may need to add the user to the /etc/sudoers file.
Ensure that the installer has password-less SSH access to localhost.
If using the root user, the PermitRootLogin property of the /etc/ssh/sshd_config file must be set to ‘yes’.
The local host key must also be included in the .ssh/authorized_keys file of the user running the installer. That can be done by issuing the following as the intended user:
mkdir -m 0700 -p ~/.ssh
ssh-keyscan localhost >> ~/.ssh/authorized_keys
Note! The ssh-keyscan utility will result in the key fingerprint being output on the console. As a security best-practice it is recommended to verify that this host key matches the machine’s true SSH host key. As an alternative to this ssh-keyscan approach, establishing an SSH connection to localhost and accepting the host key will have the same result.
Disable SELinux.
The Security-Enhanced Linux Project (SELinux) is designed to add an additional layer of security to the operating system by enforcing a set of rules on processes. Unfortunately out of the box the default configuration is not compatible with the way the installer operates. Before proceeding with the installation, it is recommended to disable SELinux. It can be re-enabled after the installation completes, if desired, but will require manual configuration. Refer to the Red Hat Customer Portal for details.
To check if SELinux is enabled:
getenforce
This will result in one of 3 states, “Enforcing”, “Permissive” or “Disabled”. If the state is “Enforcing” use the following to disable SELinux. Either “Permissive” or “Disabled” is required to continue.
setenforce 0
This disables SELinux, but does not make the change persistent across reboots. To do that, edit the /etc/selinux/config file and set the SELINUX property to disabled.
It is recommended to reboot the computer after changing SELinux modes, but the changes should take effect immediately.
Assuming the installation ISO image is in the current working directory,
the following steps need to be executed either by the root user or with sudo.
Mount the installation ISO image under /mnt/acd.
Note: The mount-point may be any accessible path, but /mnt/acd will be used throughout this document.
mkdir -p /mnt/acd
mount esb3024-acd-router-esb3024-1.12.1.iso /mnt/acd
Run the installer script.
/mnt/acd/installer
Upgrade from ESB3024 Router release 1.10
The following steps can be used to upgrade the router from a 1.10 release to 1.12.0 or 1.12.1. If upgrading from an earlier release it is recommended to perform the upgrade in multiple steps; for instance when upgrading from release 1.8.0 to 1.12.1, it is recommended to first upgrade to 1.10.1 or 1.10.2 and then to 1.12.1.
The upgrade procedure for the router is performed by taking a backup of the configuration, installing the new release of the router, and applying the saved configuration.
With the router running, save a backup of the configuration.
The exact procedure to accomplish this depends on the current method of configuration, e.g. if confd is used, then the configuration should be extracted from confd, but if the REST API is used directly, then the configuration must be saved by fetching the current configuration snapshot using the REST API.
Extracting the configuration using confd is the recommended approach where available.
confcli | tee config_backup.json
To extract the configuration from the REST API, the following may be used instead. Depending on the version of the router used, an API key may be required to fetch from the REST API.
curl --insecure https://localhost:5001/v2/configuration \
  | tee config_backup.json
If the API key is required, it can be found in the file /opt/edgeware/acd/router/cache/rest-api-key.json and can be passed to the API by setting the value of the X-API-Key header.
curl --insecure -H "X-API-Key: 1234abcd" \
  https://localhost:5001/v2/configuration \
  | tee config_backup.json
Mount the new installation ISO under /mnt/acd.
Note: The mount-point may be any accessible path, but /mnt/acd will be used throughout this document.
mkdir -p /mnt/acd
mount esb3024-acd-router-esb3024-1.12.1.iso /mnt/acd
Stop the router and all associated services.
Before upgrading the router it needs to be stopped, which can be done by typing this:
systemctl stop 'acd-*'
Run the installer script.
/mnt/acd/installer
Migrate the configuration.
Note that this step only applies if the router is configured using confd. If it is configured using the REST API, this step is not necessary.
The confd configuration used in the 1.10 versions is not directly compatible with 1.12, and may need to be converted. If this is not done, the configuration will not be valid and it will not be possible to make configuration changes.
To help with migrating the configuration, a new tool has been included in the 1.12.0 release, which will automatically apply any necessary schema migrations. Further details about this tool can be found at Confd Auto Upgrade.
The confd-auto-upgrade tool takes as input the old configuration file, either by reading the file directly or by reading from standard input, applies any necessary migrations between the two specified versions, and outputs a new configuration to standard output which is suitable for being applied to the upgraded system. While the tool has the ability to migrate between multiple versions at a time, the earliest supported version is 1.10.1.
The example below shows how to upgrade from 1.10.2. If upgrading from 1.10.1, --from 1.10.2 should be replaced with --from 1.10.1.
The command line required to run the tool is different if it is run on esb3024-1.12.0 or esb3024-1.12.1. On 1.12.1 it is run like this:
cat config_backup.json | \
  podman run -i --rm \
  images.edgeware.tv/auto-upgrade-esb3024-1.12.1-master:20240702T151205Z-f1b53a98f \
  --in - --from 1.10.2 --to 1.12.1 \
  | tee config_upgraded.json
On esb3024-1.12.0 it is run like this:
cat config_backup.json | \
  podman run -i --rm \
  images.edgeware.tv/auto-upgrade-esb3024-1.12.0:20240619T154952Z-2b72f7400 \
  --in - --from 1.10.2 --to 1.12.0 \
  | tee config_upgraded.json
After running the above command, apply the new configuration to confd by running cat config_upgraded.json | confcli -i.
Troubleshooting
If there is a problem running the installer, additional debug information can be output by adding -v, -vv or -vvv to the installer command; the more “v” characters, the more detailed the output.
1.1.2.3.1 - Configuration changes between 1.10.2 and 1.12
Confd configuration changes
Below are the major changes to the confd configuration between versions 1.10.2 and 1.12.0/1.12.1. Note that there are no configuration changes between versions 1.12.0 and 1.12.1, so the differences apply to both.
Added services.routing.translationFunctions.hostRequest
A new translation function has been added which will allow custom Lua code to modify requests to backend hosts before they are sent.
Added services.routing.translationFunctions.session
A new translation function has been added which will allow custom Lua code to be executed after the router has made the routing decision but before generating the redirect URL.
An example use case would be for enabling instream sessions, which can be done by setting this value to return set_session_type('instream').
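As a sketch of how this might be set with confcli (the quoting of the Lua snippet is an assumption; the Session Translation Function section describes the exact semantics):
# Set the session translation function to enable in-stream sessions
confcli services.routing.translationFunctions.session "return set_session_type('instream')"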
Removed services.routing.settings.managedSessions configuration
This configuration is no longer used.
Added services.routing.tuning.general.maxActiveManagedSessions tuning parameter.
This parameter configures the maximum number of active managed sessions.
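It can be inspected or adjusted in the same way as other tuning values; the number below is purely illustrative:
# Show and then change the managed session limit (illustrative value)
confcli services.routing.tuning.general.maxActiveManagedSessions
confcli services.routing.tuning.general.maxActiveManagedSessions 50000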
1.1.2.4 - Installing release 1.10.x
To install ESB3024 Router, one first needs to copy the installation ISO image
to the target node where the router will be run. Due to the way the
installer operates, it is necessary that the host is reachable by
password-less SSH from itself for the user account that will perform the
installation, and that this user has sudo access.
Prerequisites:
Ensure that the current user has sudo access.
sudo -l
If the above command fails, you may need to add the user to the /etc/sudoers file.
Ensure that the installer has password-less SSH access to localhost.
If using the root user, the PermitRootLogin property of the /etc/ssh/sshd_config file must be set to ‘yes’.
The local host key must also be included in the .ssh/authorized_keys file of the user running the installer. That can be done by issuing the following as the intended user:
mkdir -m 0700 -p ~/.ssh
ssh-keyscan localhost >> ~/.ssh/authorized_keys
Note! The ssh-keyscan utility will result in the key fingerprint being output on the console. As a security best-practice it is recommended to verify that this host key matches the machine’s true SSH host key. As an alternative to this ssh-keyscan approach, establishing an SSH connection to localhost and accepting the host key will have the same result.
Disable SELinux.
The Security-Enhanced Linux Project (SELinux) is designed to add an additional layer of security to the operating system by enforcing a set of rules on processes. Unfortunately out of the box the default configuration is not compatible with the way the installer operates. Before proceeding with the installation, it is recommended to disable SELinux. It can be re-enabled after the installation completes, if desired, but will require manual configuration. Refer to the Red Hat Customer Portal for details.
To check if SELinux is enabled:
getenforce
This will result in one of 3 states, “Enforcing”, “Permissive” or “Disabled”. If the state is “Enforcing” use the following to disable SELinux. Either “Permissive” or “Disabled” is required to continue.
setenforce 0
This disables SELinux, but does not make the change persistent across reboots. To do that, edit the /etc/selinux/config file and set the SELINUX property to disabled.
It is recommended to reboot the computer after changing SELinux modes, but the changes should take effect immediately.
Assuming the installation ISO image is in the current working directory,
the following steps need to be executed either by the root user or with sudo.
Mount the installation ISO image under /mnt/acd.
Note: The mount-point may be any accessible path, but /mnt/acd will be used throughout this document.
mkdir -p /mnt/acd
mount esb3024-acd-router-esb3024-1.10.1.iso /mnt/acd
Run the installer script.
/mnt/acd/installer
Upgrade from ESB3024 Router release 1.8.0
The following steps can be used to upgrade the router from release 1.8.0 to 1.10.x. If upgrading from an earlier release it is recommended to perform the upgrade in multiple steps; for instance when upgrading from release 1.6.0 to 1.10.x, it is recommended to first upgrade to 1.8.0 and then to 1.10.x.
The upgrade procedure for the router is performed by taking a backup of the configuration, installing the new release of the router, and applying the saved configuration.
With the router running, save a backup of the configuration.
The exact procedure to accomplish this depends on the current method of configuration, e.g. if confd is used, then the configuration should be extracted from confd, but if the REST API is used directly, then the configuration must be saved by fetching the current configuration snapshot using the REST API.
Extracting the configuration using confd is the recommended approach where available.
confcli | tee config_backup.json
To extract the configuration from the REST API, the following may be used instead. Depending on the version of the router used, an API key may be required to fetch from the REST API.
curl --insecure https://localhost:5001/v2/configuration \
  | tee config_backup.json
If the API key is required, it can be found in the file /opt/edgeware/acd/router/cache/rest-api-key.json and can be passed to the API by setting the value of the X-API-Key header.
curl --insecure -H "X-API-Key: 1234abcd" \
  https://localhost:5001/v2/configuration \
  | tee config_backup.json
Mount the new installation ISO under /mnt/acd.
Note: The mount-point may be any accessible path, but /mnt/acd will be used throughout this document.
mkdir -p /mnt/acd
mount esb3024-acd-router-esb3024-1.10.1.iso /mnt/acd
Stop the router and all associated services.
Before upgrading the router it needs to be stopped, which can be done by typing this:
systemctl stop 'acd-*'
Run the installer script.
/mnt/acd/installer
Migrate the configuration.
Note that this step only applies if the router is configured using confd. If it is configured using the REST API, this step is not necessary.
The confd configuration used in version 1.8.0 is not directly compatible with 1.10.x, and may need to be converted. If this is not done, the configuration will not be valid and it will not be possible to make configuration changes.
To determine if the configuration needs to be converted, confcli can be run like below. If it prints error messages, the configuration needs to be converted. If no error messages are printed, the configuration is valid and no further updates are necessary.
confcli | head -n5
[2024-04-02 14:48:37,155] [ERROR] Missing configuration key /integration
[2024-04-02 14:48:37,162] [ERROR] Missing configuration key /services/routing/settings/qoeTracking
[2024-04-02 14:48:37,222] [ERROR] Missing configuration key /services/routing/hostGroups/convoy-rr/hosts/convoy-rr-1/healthChecks
[2024-04-02 14:48:37,222] [ERROR] Missing configuration key /services/routing/hostGroups/convoy-rr/hosts/convoy-rr-2/healthChecks
[2024-04-02 14:48:37,242] [ERROR] Missing configuration key /services/routing/hostGroups/e-dns/hosts/linton-dns-1/healthChecks
{
  "integration": {
    "convoy": {
      "bridge": {
        "accounts": {
If error messages are printed, the configuration needs to be converted. If the configuration was saved in the file config_backup.json, the conversion can be done by typing this at the command line:
sed -E -e '/"hosts":/,/]/ s/([[:space:]]+)("hostname":.*)/\1\2\n\1"healthChecks": [],/' -e '/"apiKey":/ d' config_backup.json | \
  curl -s -X PUT -T - -H 'Content-Type: application/json' http://localhost:5000/config/__active/
systemctl restart acd-confd
This adds empty healthChecks sections to all hosts and removes the apiKey configuration. After that, acd-confd is restarted. See Configuration changes between 1.8.0 and 1.10.x for more details about the configuration changes.
Migrating configuration to esb3024-1.10.2
When upgrading to version 1.10.2, an extra step is required to migrate the consistent hashing configuration. This step is necessary both when upgrading from an earlier 1.10 release and when upgrading from older versions. It is only needed if consistent hashing was configured in the previous version.
To determine if consistent hashing was configured, execute the following command:
confcli | head -n2
[2024-05-31 09:43:55,932] [ERROR] Missing configuration key /services/routing/rules/constantine/hashAlgorithm
{
  "integration": {
If an error message about a missing configuration key appears, the configuration must be migrated. If no such error message appears, this step should be skipped.
To migrate the configuration, execute the following command at the command line:
curl -s http://localhost:5000/config/__active/ | \
  sed -E 's/(.*)("type":.*"consistentHashing")(,?)/\1\2,\n\1"hashAlgorithm": "MD5"\3/' | \
  curl -s -X PUT -T - -H 'Content-Type: application/json' http://localhost:5000/config/__active/
This command will read the current configuration, add the hashAlgorithm configuration key, and write back the updated configuration.
Remove the Account Monitor container
configuration key, and write back the updated configuration.Remove the Account Monitor container
Older versions of the router installed the Account Monitor tool. This was removed in release 1.8.0, but if it is still present and unused, it can be removed by typing:
podman rm account-monitor
Remove the confd-transformer.lua file
After installing or upgrading to 1.10.x, ensure that the confd-transformer.lua script located in the /opt/edgeware/acd/router/lib/standard_lua directory is removed.
This file contains deprecated Lua language definitions which will override newer versions of those functions already present in the ACD Router’s Lua Standard Library. When upgrading beyond 1.10.2, the installer will automatically remove this file; however, for this particular release it requires manual intervention.
rm -f /opt/edgeware/acd/router/lib/standard_lua/confd-transformer.lua
After removing this file, it will be necessary to restart the router to flush the definitions from the router’s memory:
systemctl restart acd-router
Troubleshooting
If there is a problem running the installer, additional debug information can be output by adding -v, -vv or -vvv to the installer command; the more “v” characters, the more detailed the output.
1.1.2.4.1 - Configuration changes between 1.8.0 and 1.10.x
Confd configuration changes
The major changes to the confd configuration between version 1.8.0 and 1.10.x are listed below.
Added integration.convoy section
An integration.convoy section has been added to the configuration. It is currently used for configuring the Convoy Bridge service.
Removed services.routing.apiKey configuration
The services.routing.apiKey configuration key has been removed. This was an obsolete way of giving the configuration access to the router. The key has to be removed from the configuration when upgrading, otherwise the configuration will not be accepted.
Added services.routing.settings.qoeTracking
A services.routing.settings.qoeTracking section has been added to the configuration.
Added healthChecks sections to the hosts
The hosts in the hostGroup entries have been extended with a healthChecks key, which is a list of functions that determine if a host is in good health.
For example, a redirecting host might look like this after the configuration has been updated:
{
"services": {
"routing": {
"hostGroups": [
{
"name": "convoy-rr",
"type": "redirecting",
"httpPort": 80,
"httpsPort": 443,
"forwardHostHeader": true,
"hosts": [
{
"name": "convoy-rr-1",
"hostname": "convoy-rr-1",
"ipv6_address": "",
"healthChecks": [
"health_check('convoy-rr-1')"
]
}
]
}
],
Added hashAlgorithm to the consistentHashing rule
In esb3024-1.10.2 the consistentHashing routing rule has been extended with a hashAlgorithm key, which can have the values MD5, SDBM and Murmur. The default value is MD5.
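As a sketch, the algorithm of an existing consistentHashing rule could be changed through confcli; the rule name below is hypothetical, and addressing list elements by name is an assumption based on the configuration keys shown earlier:
# Switch a consistentHashing rule (hypothetical name "hashing") from the default MD5 to Murmur
confcli services.routing.rules.hashing.hashAlgorithm Murmur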
1.1.2.5 - Installing release 1.8.0
To install ESB3024 Router, one first needs to copy the installation ISO image
to the target node where the router will be run. Due to the way the
installer operates, it is necessary that the host is reachable by
password-less SSH from itself for the user account that will perform the
installation, and that this user has sudo access.
Prerequisites:
Ensure that the current user has sudo access.
sudo -l
If the above command fails, you may need to add the user to the /etc/sudoers file.
Ensure that the installer has password-less SSH access to localhost.
If using the root user, the PermitRootLogin property of the /etc/ssh/sshd_config file must be set to ‘yes’.
The local host key must also be included in the .ssh/authorized_keys file of the user running the installer. That can be done by issuing the following as the intended user:
mkdir -m 0700 -p ~/.ssh
ssh-keyscan localhost >> ~/.ssh/authorized_keys
Note! The ssh-keyscan utility will result in the key fingerprint being output on the console. As a security best-practice it is recommended to verify that this host key matches the machine’s true SSH host key. As an alternative to this ssh-keyscan approach, establishing an SSH connection to localhost and accepting the host key will have the same result.
Disable SELinux.
The Security-Enhanced Linux Project (SELinux) is designed to add an additional layer of security to the operating system by enforcing a set of rules on processes. Unfortunately out of the box the default configuration is not compatible with the way the installer operates. Before proceeding with the installation, it is recommended to disable SELinux. It can be re-enabled after the installation completes, if desired, but will require manual configuration. Refer to the Red Hat Customer Portal for details.
To check if SELinux is enabled:
getenforce
This will result in one of 3 states, “Enforcing”, “Permissive” or “Disabled”. If the state is “Enforcing” use the following to disable SELinux. Either “Permissive” or “Disabled” is required to continue.
setenforce 0
It is recommended to reboot the computer after changing SELinux modes, but the changes should take effect immediately.
Assuming the installation ISO image is in the current working directory,
the following steps need to be executed either by the root user or with sudo.
Mount the installation ISO image under /mnt/acd.
Note: The mount-point may be any accessible path, but /mnt/acd will be used throughout this document.
mkdir -p /mnt/acd
mount esb3024-acd-router-esb3024-1.8.0.iso /mnt/acd
Run the installer script.
/mnt/acd/installer
Upgrade
The upgrade procedure for the router is performed by taking a backup of the configuration, installing the new version of the router, and applying the saved configuration.
With the router running, save a backup of the configuration.
The exact procedure to accomplish this depends on the current method of configuration, e.g. if confd is used, then the configuration should be extracted from confd, but if the REST API is used directly, then the configuration must be saved by fetching the current configuration snapshot using the REST API.
Extracting the configuration using confd is the recommended approach where available.
confcli | tee config_backup.json
To extract the configuration from the REST API, the following may be used instead. Depending on the version of the router used, an API key may be required to fetch from the REST API.
curl --insecure https://localhost:5001/v2/configuration \
  | tee config_backup.json
If the API key is required, it can be found in the file /opt/edgeware/acd/router/cache/rest-api-key.json and can be passed to the API by setting the value of the X-API-Key header.
curl --insecure -H "X-API-Key: 1234abcd" \
  https://localhost:5001/v2/configuration \
  | tee config_backup.json
Mount the new installation ISO under /mnt/acd.
Note: The mount-point may be any accessible path, but /mnt/acd will be used throughout this document.
mkdir -p /mnt/acd
mount esb3024-acd-router-esb3024-1.8.0.iso /mnt/acd
Stop the router and all associated services.
Before upgrading the router it needs to be stopped, which can be done by typing this:
systemctl stop 'acd-*'
Run the installer script.
/mnt/acd/installer
Migrate the configuration.
Note that this step only applies if the router is configured using confd. If it is configured using the REST API, this step is not necessary.
The confd configuration used in version 1.6.0 is not directly compatible with 1.8.0, and may need to have a few minor manual updates in order to be valid. If this is not done, the configuration will not be valid and it will not be possible to make configuration changes.
To determine if the configuration needs to be manually updated, confcli can be run like below. If it prints error messages, the configuration needs to be updated. If no error messages are printed, the configuration is valid and no further updates are necessary.
confcli services.routing | head
[2024-02-01 19:05:10,769] [ERROR] Missing configuration key /services/routing/hostGroups/convoy-rr/forwardHostHeader
[2024-02-01 19:05:10,779] [ERROR] Missing configuration key /services/routing/hostGroups/e-dns/forwardHostHeader
[2024-02-01 19:05:10,861] [ERROR] 'forwardHostHeader'
If error messages are printed, a forwardHostHeader configuration needs to be added to the hostGroups configuration. This can be done by running this at the command line:
curl -s http://localhost:5000/config/__active/ | \
  sed -E 's/([[:space:]]+)"type": "(host|redirecting|dns)"(,?)/\1"type": "\2",\n\1"forwardHostHeader": false\3/' | \
  curl -s -X PUT -T - -H 'Content-Type: application/json' http://localhost:5000/config/__active/
This reads the active configuration from the router, adds the “forwardHostHeader” configuration to all host groups, and then sends the updated configuration back to the router.
See Configuration changes between 1.6.0 and 1.8.0 for more details about the configuration changes.
Remove the Account Monitor container
Previous versions of the router installed the Account Monitor tool. This is no longer included, but since the previous version installed, there will be a stopped Account Monitor container. If it is not used, the container can be removed by typing:
podman rm account-monitor
Troubleshooting
If there is a problem running the installer, additional debug information can be output by adding -v, -vv or -vvv to the installer command; the more “v” characters, the more detailed the output.
1.1.2.5.1 - Configuration changes between 1.6.0 and 1.8.0
Confd configuration changes
Below are some of the configuration changes between version 1.6.0 and 1.8.0. The list only contains the changes that might affect an already existing configuration; entirely new items are not listed. Normally nothing needs to be done about this since they will be upgraded automatically, but they are listed here for reference.
Added enabled to contentPopularity
An enabled key has been added to services.routing.settings.contentPopularity. After the key has been added, the configuration looks like this:
{
"services": {
"routing": {
"settings": {
"contentPopularity": {
"enabled": true,
"algorithm": "score_based",
"sessionGroupNames": []
},
...
Added selectionInputItemLimit to tuning
A selectionInputItemLimit key has been added to services.routing.tuning.general. After the key has been added, the configuration looks like this:
{
"services": {
"routing": {
"tuning": {
"general": {
...
"selectionInputItemLimit": 10000,
...
Added forwardHostHeader to hostGroups
All three hostGroup types (host, redirecting and dns) have been extended with a forwardHostHeader key. For example, a redirecting host might look like this after the change:
{
"services": {
"routing": {
"hostGroups": [
{
"name": "convoy-rr",
"type": "redirecting",
"httpPort": 80,
"httpsPort": 443,
"forwardHostHeader": true,
"hosts": [
{
"name": "convoy-rr-1",
"hostname": "convoy-rr-1",
"ipv6_address": ""
}
]
}
],
...
REST API configuration changes
The following items have been added to the REST API configuration. They do not need to be manually updated; the router will add the new keys with default values. Note that this is not a complete list of all changes; it only contains the changes that will be automatically added when upgrading the router.
If the router is configured via confd and confcli, these changes will be applied by them. This section is only relevant if the router is configured via the v2/configuration API.
- Added the session_translation_function key.
- Added the tuning.selection_input_item_limit key.
1.1.2.6 - Installing release 1.6.0
To install ESB3024 Router, one first needs to copy the installation ISO image
to the target node where the router will be run. Due to the way the
installer operates, it is necessary that the host is reachable by
password-less SSH from itself for the user account that will perform the
installation, and that this user has sudo access.
Prerequisites:
Ensure that the current user has sudo access.
sudo -l
If the above command fails, you may need to add the user to the /etc/sudoers file.
Ensure that the installer has password-less SSH access to localhost.
If using the root user, the PermitRootLogin property of the /etc/ssh/sshd_config file must be set to ‘yes’.
The local host key must also be included in the .ssh/authorized_keys file of the user running the installer. That can be done by issuing the following as the intended user:
mkdir -m 0700 -p ~/.ssh
ssh-keyscan localhost >> ~/.ssh/authorized_keys
Note! The ssh-keyscan utility will result in the key fingerprint being output on the console. As a security best-practice it is recommended to verify that this host key matches the machine’s true SSH host key. As an alternative to this ssh-keyscan approach, establishing an SSH connection to localhost and accepting the host key will have the same result.
Disable SELinux.
The Security-Enhanced Linux Project (SELinux) is designed to add an additional layer of security to the operating system by enforcing a set of rules on processes. Unfortunately out of the box the default configuration is not compatible with the way the installer operates. Before proceeding with the installation, it is recommended to disable SELinux. It can be re-enabled after the installation completes, if desired, but will require manual configuration. Refer to the Red Hat Customer Portal for details.
To check if SELinux is enabled:
getenforce
This will result in one of 3 states, “Enforcing”, “Permissive” or “Disabled”. If the state is “Enforcing” use the following to disable SELinux. Either “Permissive” or “Disabled” is required to continue.
setenforce 0
It is recommended to reboot the computer after changing SELinux modes, but the changes should take effect immediately.
Assuming the installation ISO image is in the current working directory,
the following steps need to be executed either by the root user or with sudo.
Mount the installation ISO image under /mnt/acd.
Note: The mount-point may be any accessible path, but /mnt/acd will be used throughout this document.
mkdir -p /mnt/acd
mount esb3024-acd-router-esb3024-1.6.0.iso /mnt/acd
Run the installer script.
/mnt/acd/installer
Upgrade
The upgrade procedure for the router is performed by taking a backup of the configuration, installing the new version of the router, and applying the saved configuration.
With the router running, save a backup of the configuration.
The exact procedure to accomplish this depends on the current method of configuration, e.g. if confd is used, then the configuration should be extracted from confd, but if the REST API is used directly, then the configuration must be saved by fetching the current configuration snapshot using the REST API.
Extracting the configuration using confd is the recommended approach where available.
confcli | tee config_backup.json
To extract the configuration from the REST API, the following may be used instead. Depending on the version of the router used, an API key may be required to fetch from the REST API.
curl --insecure https://localhost:5001/v2/configuration \
  | tee config_backup.json
If the API key is required, it can be found in the file /opt/edgeware/acd/router/cache/rest-api-key.json and can be passed to the API by setting the value of the X-API-Key header.
curl --insecure -H "X-API-Key: 1234abcd" \
  https://localhost:5001/v2/configuration \
  | tee config_backup.json
- Mount the new installation ISO under /mnt/acd.
  Note: The mount-point may be any accessible path, but /mnt/acd will be used throughout this document.
  mkdir -p /mnt/acd
  mount esb3024-acd-router-1.2.0.iso /mnt/acd
- Stop the router and all associated services.
  Before upgrading the router it needs to be stopped, which can be done by typing this:
  systemctl stop 'acd-*'
- Run the installer script.
  /mnt/acd/installer
- Migrate the configuration.
  Note that this step only applies if the router is configured using confd. If it is configured using the REST API, this step is not necessary.
  See Configuration changes between 1.4.0 and 1.6.0 for instructions on how to migrate the configuration to release 1.6.0.
Troubleshooting
If there is a problem running the installer, additional debug information can be output by adding -v, -vv or -vvv to the installer command; the more “v” characters, the more detailed the output.
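For example, to run the installer with a moderate amount of extra output:
/mnt/acd/installer -vv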
1.1.2.6.1 - Configuration changes between 1.4.0 and 1.6.0
confd configuration
The confd configuration used in version 1.4.0 is not directly compatible with 1.6.0 and needs a few minor updates in order to be valid. Without these updates the configuration will be invalid and it will not be possible to make configuration changes. Running confcli will print error messages and an empty default configuration.
$ confcli services.routing.
[2023-12-12 16:08:07,120] [ERROR] Missing configuration key /services/routing/translationFunctions
[2023-12-12 16:08:07,122] [ERROR] Missing configuration key /services/routing/settings/instream/dashManifestRewrite/sessionGroupNames
[2023-12-12 16:08:07,122] [ERROR] Missing configuration key /services/routing/settings/instream/hlsManifestRewrite/sessionGroupNames
[2023-12-12 16:08:07,123] [ERROR] Missing configuration key /services/routing/settings/managedSessions
[2023-12-12 16:08:07,123] [ERROR] Missing configuration key /services/routing/tuning/target/recentDurationMilliseconds
{
"routing": {
"apiKey": "",
"settings": {
"allowedProxies": [],
"contentPopularity": {
"algorithm": "score_based",
"sessionGroupNames": []
},
"extendedContentIdentifier": {
...
The first thing that needs to be done is to rename the keys sessionGroupIds
to
sessionGroupNames
. If the configuration was backed up to the file
config_backup.json
before upgrading, the keys can be renamed and the
updated configuration can be applied by typing this:
sed 's/"sessionGroupIds"/"sessionGroupNames"/' config_backup.json | confcli -i
[2023-12-19 12:33:17,725] [ERROR] Missing configuration key /services/routing/translationFunctions
[2023-12-19 12:33:17,726] [ERROR] Missing configuration key /services/routing/settings/instream/dashManifestRewrite/sessionGroupNames
[2023-12-19 12:33:17,727] [ERROR] Missing configuration key /services/routing/settings/instream/hlsManifestRewrite/sessionGroupNames
[2023-12-19 12:33:17,727] [ERROR] Missing configuration key /services/routing/settings/managedSessions
[2023-12-19 12:33:17,727] [ERROR] Missing configuration key /services/routing/tuning/target/recentDurationMilliseconds
The configuration has not yet been converted, so the error messages are still
printed. The configuration will be converted when the acd-confd
service is
restarted.
systemctl restart acd-confd
This concludes the conversion of the configuration and the router is ready to be used.
Configuration changes
All configuration changes between versions 1.4.0 and 1.6.0 are listed below. Normally nothing needs to be done about them, since they are applied automatically, but they are listed here for reference.
Added translationFunctions
block
services.routing.translationFunctions
has been added. It can be added as a
map with two empty strings as values, to make the top of the configuration look
like this:
{
"services": {
"routing": {
"translationFunctions": {
"request": "",
"response": ""
},
...
Renamed sessionGroupIds
to sessionGroupNames
The keys
services.routing.settings.instream.dashManifestRewrite.sessionGroupIds
and
services.routing.settings.instream.hlsManifestRewrite.sessionGroupIds
have
been renamed to
services.routing.settings.instream.dashManifestRewrite.sessionGroupNames
and
services.routing.settings.instream.hlsManifestRewrite.sessionGroupNames
respectively. Any session group IDs need to be manually converted to session
group names.
After the conversion, the head of the configuration file might look like this:
{
"services": {
"routing": {
"apiKey": "",
"settings": {
"allowedProxies": [],
"contentPopularity": {
"algorithm": "score_based",
"sessionGroupNames": []
},
"extendedContentIdentifier": {
"enabled": false,
"includedQueryParams": []
},
"instream": {
"dashManifestRewrite": {
"enabled": false,
"sessionGroupNames": []
},
"hlsManifestRewrite": {
"enabled": false,
"sessionGroupNames": []
},
"reversedFilenameComparison": false
},
...
Added managedSessions
block
A services.routing.settings.managedSessions
block has been added. After
adding the block, the configuration might look like this:
{
"services": {
"routing": {
"apiKey": "",
"settings": {
"allowedProxies": [],
"contentPopularity": {
"algorithm": "score_based",
"sessionGroupNames": []
},
...
"managedSessions": {
"fraction": 0.0,
"maxActive": 100000,
"sessionTypes": []
},
"usageLog": {
"enabled": false,
"logInterval": 3600000
}
},
...
Added recentDurationMilliseconds
A services.routing.tuning.target.recentDurationMilliseconds
key has been added
to the configuration file, with a default value of 500. After adding the key,
the configuration might look like this:
{
"services": {
"routing": {
"apiKey": "",
...
"tuning": {
"target": {
...
"recentDurationMilliseconds": 500,
...
Storing the updated configuration
After all these changes have been made to the configuration file, it can be applied to the router using confcli.
confcli will still display error messages because the stored configuration is not yet valid. The messages will no longer appear once the valid configuration has been applied.
$ confcli -i < updated_config.json
[2023-12-12 18:52:05,500] [ERROR] Missing configuration key /services/routing/translationFunctions
[2023-12-12 18:52:05,502] [ERROR] Missing configuration key /services/routing/settings/instream/dashManifestRewrite/sessionGroupNames
[2023-12-12 18:52:05,502] [ERROR] Missing configuration key /services/routing/settings/instream/hlsManifestRewrite/sessionGroupNames
[2023-12-12 18:52:05,503] [ERROR] Missing configuration key /services/routing/settings/managedSessions
[2023-12-12 18:52:05,511] [ERROR] Missing configuration key /services/routing/tuning/target/recentDurationMilliseconds
Raw configuration
The following changes have been made to the raw configuration. If the router is
configured via confd
and confcli
, these changes will be applied by them.
This section is only relevant if the router is configured via the
v2/configuration
API.
Simple changes
The following keys were added or removed. They do not need to be updated manually; the router will add the new keys with default values.
- Removed the tuning.repeated_session_start_threshold_seconds key.
- Removed the lua_paths key.
- Added the tuning.target_recent_duration_milliseconds key.
EDNS proxy changes
If the router has been configured to use an EDNS server, the following has to be changed for the configuration to work.
The hosts.proxy_address key has been renamed to hosts.proxy_url and now accepts a port that is used when connecting to the proxy.
The cdns.http_port and cdns.https_port keys now configure the port used for connecting to the EDNS server; previously they configured the port used for connecting to the proxy.
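As an illustration only, a saved raw configuration backup could be checked and updated along these lines before being re-applied. The file name config_backup.json follows the upgrade steps above; note that the port handling still needs to be reviewed manually, since hosts.proxy_url now includes the proxy port:
# Locate the old key in the backup
grep -n '"proxy_address"' config_backup.json
# Rename it to the new key; remember to add the proxy port to the resulting URL value
sed 's/"proxy_address"/"proxy_url"/' config_backup.json > config_updated.json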
1.1.3 - Firewall
For security reasons, the ESB3024 Installer does not automatically configure the local firewall to allow incoming traffic. It is the responsibility of the operations person to ensure that the system is protected from external access by placing it behind a suitable firewall solution. The following table describes the set of ports required for operation of the router.
Application | Port | Protocol | Direction | Source | Description |
---|---|---|---|---|---|
Prometheus Alert Manager | 9093 | TCP | IN | internal | Monitoring Services |
Confd | 5000 | TCP | IN | internal | Configuration Services |
Router | 80 | TCP | IN | public | Incoming HTTP Requests |
Router | 443 | TCP | IN | public | Incoming HTTPS Requests |
Router | 5001 | TCP | IN | localhost | Access to router’s REST API |
Router | 8000 | TCP | IN | localhost | Internal monitoring port |
EDNS-Proxy | 8888 | TCP | IN | localhost | Proxy EDNS Requests |
Grafana | 3000 | TCP | IN | internal | Monitoring Services |
Grafana-Loki | 3100 | TCP | IN | internal | Log monitoring daemon |
Prometheus | 9090 | TCP | IN | internal | Monitoring Service |
The “Direction” column represents the direction in which the connection is established.
- IN - The connection originates from an outside server.
- OUT - The connection is established from the host to an external server.
Once a connection is established through the firewall, bidirectional traffic must be allowed using the established connection.
For the “Source” column, the following terms are used.
- internal - Any host or network which is allowed to monitor or operate the system.
- public - Any host or subnet that can access the router. This includes any customer network that will be making routing requests.
- localhost - Access can be limited to local connections only.
- any - All traffic from any source or to any destination.
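As a non-authoritative sketch, the public-facing ports could be opened with firewalld as shown below, assuming the host uses firewalld and that the public and internal zones correspond to the sources described above; adapt this to the firewall solution actually in use:
firewall-cmd --permanent --zone=public --add-service=http      # Router, port 80
firewall-cmd --permanent --zone=public --add-service=https     # Router, port 443
firewall-cmd --permanent --zone=internal --add-port=9090/tcp   # Prometheus
firewall-cmd --permanent --zone=internal --add-port=3000/tcp   # Grafana
firewall-cmd --reload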
Additional Ports
Convoy bridge integration
The optional convoy-bridge service needs the ability to access the Convoy MariaDB service, which by default runs on port 3306 on all of the Convoy Management servers. To allow this integration to run, port 3306/tcp must be allowed from the router to the configured Convoy Management node.
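A hedged sketch of how the corresponding rule might look with firewalld on the Convoy Management node, followed by a connectivity check from the router host; 192.0.2.10 and convoy-mgmt.example are placeholder values:
# On the Convoy Management node: allow MariaDB access from the router's IP
firewall-cmd --permanent \
    --add-rich-rule='rule family="ipv4" source address="192.0.2.10" port port="3306" protocol="tcp" accept'
firewall-cmd --reload
# On the router host: verify that the port is reachable
nc -vz convoy-mgmt.example 3306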
1.1.4 - API Overview
ESB3024 Router provides two different types of APIs:
- A content request API that is used by video clients to ask for content, normally using port 80 for HTTP and port 443 for HTTPS.
- A set of REST APIs used by administrators to configure and monitor the router installation, using port 5001 over HTTPS by default.
The content API won’t be described further in this document, since it’s a simple HTTP interface serving content as regular files or redirect responses.
Raw configuration – /v2/configuration
Used to check and update the raw configuration of ESB3024 Router. Note that this API is considered an implementation detail and is not documented further.
REQUEST Method | Content-Type | RESPONSE Result | Status Code | Content-Type |
---|---|---|---|---|
GET | <N/A> | Success | 200 OK | application/json |
PUT | application/json | Success | 204 No Content | <N/A> |
PUT | application/json | Failure | 400 Bad Request | application/json 1 |
Validate Configuration – /v2/validate_configuration
Used to determine if a JSON payload is correctly formatted without actually applying its configuration. A successful return status does not guarantee that the applied configuration will work; it only validates the JSON structure.
REQUEST Method | Content-Type | RESPONSE Result | Status Code | Content-Type |
---|---|---|---|---|
PUT | application/json | Success | 204 No Content | <N/A> |
PUT | application/json | Failure | 400 Bad Request | application/json 1 |
Example request
When an expected field is missing from the payload, the validation will show which one and return an appropriate error message in its payload:
$ curl -i -X PUT \
-d '{"routing": {"log_level": 3}}' \
-H "Content-Type: application/json" \
https://router.example:5001/v2/validate_configuration
HTTP/1.1 400 Bad Request
Access-Control-Allow-Origin: *
Content-Length: 132
Content-Type: application/json
X-Service-Identity: router.example-5fc78d
"Configuration validation: Configuration parsing failed. \
Exception: [json.exception.out_of_range.403] (/routing) key 'id' not found"
Selection Input – /v1/selection_input
The selection input API can be used to inject external key:value data into the routing engine, making the data available when making routing decisions. An arbitrary JSON structure can be pushed to the endpoint. When performing GET or DELETE requests, specific selection input values can be accessed or deleted by including a path in the request. Note that not specifying a path will select all selection input values.
One use case for selection input is to provide data on cache availability. For example, if you send {"edge-streamer-2-online": true} to the selection input API, you can create a routing condition eq('edge-streamer-2-online', true) to ensure that no traffic gets routed to the streamer when it is offline. Note that sending the same key to the selection input API again will overwrite the previous value.
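For instance, the availability flag from the example above could be updated (and thereby overwritten) like this; router.example is the same placeholder hostname used in the other examples:
$ curl -i -X PUT \
    -d '{"edge-streamer-2-online": false}' \
    -H "Content-Type: application/json" \
    https://router.example:5001/v1/selection_input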
There is a configurable limit to how many key:value items can be injected into the router; see the tuning parameter:
$ confcli services.routing.tuning.general.selectionInputItemLimit
{
"selectionInputItemLimit": 10000
}
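The limit can be changed with confcli in the usual way; 20000 is just an example value:
$ confcli services.routing.tuning.general.selectionInputItemLimit 20000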
REQUEST Method | Content-Type | RESPONSE Result | Status Code | Content-Type |
---|---|---|---|---|
PUT | application/json | Success | 204 No Content | <N/A> |
PUT | application/json | Failure | 400 Bad Request | application/json |
GET | <N/A> | Success | 200 OK | application/json |
DELETE | <N/A> | Success | 204 No Content | <N/A> |
DELETE | <N/A> | Failure | 404 Not Found | <N/A> |
Example successful request (PUT
)
$ curl -i -X PUT \
-d '{"host1_bitrate": 13000, "host1_capacity": 50000}' \
-H "Content-Type: application/json" \
https://router.example:5001/v1/selection_input
HTTP/1.1 204 No Content
Access-Control-Allow-Origin: *
Content-Length: 0
X-Service-Identity: router.example-5fc78d
Example unsuccessful request (PUT
)
$ curl -i -X PUT \
-d '{"cdn-status": {"session-count": 12345, "load-percent" 98}}' \
-H "Content-Type: application/json" \
https://router.example:5001/v1/selection_input
HTTP/1.1 400 Bad Request
Access-Control-Allow-Origin: *
Content-Length: 169
Content-Type: application/json
X-Service-Identity: router.example-5fc78d
{
"error": "[json.exception.parse_error.101] parse error at line 1, column 57: \
syntax error while parsing object separator - \
unexpected number literal; expected ':'"
}
Example successful request (GET
)
curl -i https://router.example:5001/v1/selection_input
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 129
Content-Type: application/json
X-Service-Identity: router.example-5fc78d
{
"host1_bitrate": 13000,
"host1_capacity": 50000
}
Example successful specific value request (GET
)
curl -i https://router.example:5001/v1/selection_input/path/to/value
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 129
Content-Type: application/json
X-Service-Identity: router.example-5fc78d
1
Example successful request (DELETE
)
curl -i -X DELETE https://router.example:5001/v1/selection_input
HTTP/1.1 204 OK
Access-Control-Allow-Origin: *
Content-Length: 129
X-Service-Identity: router.example-5fc78d
Example successful specific value request (DELETE
)
curl -i -X DELETE https://router.example:5001/v1/selection_input/value/to/delete
HTTP/1.1 204 OK
Access-Control-Allow-Origin: *
Content-Length: 129
X-Service-Identity: router.example-5fc78d
Example unsuccessful request (DELETE
)
curl -i -X DELETE https://router.example:5001/v1/selection_input/non/existent/value
HTTP/1.1 404 Not Found
Access-Control-Allow-Origin: *
Content-Length: 129
X-Service-Identity: router.example-5fc78d
Subnets – /v1/subnets
An API for managing named subnets that can be used for routing and block lists. See Subnets for more details.
PUT
requests inject key value pairs with the form {<subnet>: <value>}
, where
<subnet>
is a valid CIDR string, into ACD, e.g.:
$ curl -i -X PUT \
-d '{"255.255.255.255/24": "area1", "1.2.3.4/24": "area2"}' \
-H "Content-Type: application/json" \
https://router.example:5001/v1/subnets
HTTP/1.1 204 No Content
Access-Control-Allow-Origin: *
Content-Length: 0
X-Service-Identity: router.example-5fc78d
GET
requests are used to fetch injected subnets, e.g.:
# Fetch all injected subnets
$ curl -i https://router.example:5001/v1/subnets
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 411
Content-Type: application/json
X-Service-Identity: router.example-5fc78d
{
"1.2.3.4/16": "area2",
"1.2.3.4/24": "area1",
"1.2.3.4/8": "area3",
"255.255.255.255/16": "area2",
"255.255.255.255/24": "area1",
"255.255.255.255/8": "area3",
"2a02:2e02:9bc0::/16": "area8",
"2a02:2e02:9bc0::/32": "area7",
"2a02:2e02:9bc0::/48": "area6",
"2a02:2e02:9de0::/44": "combined_area",
"2a02:2e02:ada0::/44": "combined_area",
"5.5.0.4/8": "area5",
"90.90.1.3/16": "area4"
}
DELETE
requests are used to delete injected subnets, e.g.:
# Delete all injected subnets
$ curl -i https://router.example:5001/v1/subnets -X DELETE
HTTP/1.1 204 No Content
Access-Control-Allow-Origin: *
Content-Length: 0
X-Service-Identity: router.example-5fc78d
Both GET
and DELETE
requests can be specified with the paths /byKey/
and
/byValue/
to filter which subnets to GET
or DELETE
.
# Fetch subnet with the CIDR string 1.2.3.4/8 if it exists
$ curl -i https://router.example:5001/v1/subnets/byKey/1.2.3.4/8
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 26
Content-Type: application/json
X-Service-Identity: router.example-5fc78d
{
"1.2.3.4/8": "area3"
}
# Fetch all subnets whose CIDR string begins with the IP 1.2.3.4
$ curl -i https://router.example:5001/v1/subnets/byKey/1.2.3.4
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 76
Content-Type: application/json
X-Service-Identity: router.example-5fc78d
{
"1.2.3.4/16": "area2",
"1.2.3.4/24": "area1",
"1.2.3.4/8": "area3"
}
# Fetch all subnets whose value equals 'area1'
$ curl -i https://router.example:5001/v1/subnets/byValue/area1
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 60
Content-Type: application/json
X-Service-Identity: router.example-5fc78d
{
"1.2.3.4/24": "area1",
"255.255.255.255/24": "area1"
}
# Delete subnet with the CIDR string 1.2.3.4/8 if it exists
$ curl -i -X DELETE https://router.example:5001/v1/subnets/byKey/1.2.3.4/8
HTTP/1.1 204 No Content
Access-Control-Allow-Origin: *
Content-Length: 0
X-Service-Identity: router.example-5fc78d
# Delete all subnets whose CIDR string begins with the IP 1.2.3.4
$ curl -i -X DELETE https://router.example:5001/v1/subnets/byKey/1.2.3.4
HTTP/1.1 204 No Content
Access-Control-Allow-Origin: *
Content-Length: 0
X-Service-Identity: router.example-5fc78d
# Delete all subnets whose value equals 'area1'
$ curl -i -X DELETE https://router.example:5001/v1/subnets/byValue/area1
HTTP/1.1 204 No Content
Access-Control-Allow-Origin: *
Content-Length: 0
X-Service-Identity: router.example-5fc78d
REQUEST Method | Content-Type | RESPONSE Result | Status Code | Content-Type |
---|---|---|---|---|
PUT | application/json | Success | 204 No Content | <N/A> |
PUT | application/json | Failure | 400 Bad Request | application/json |
GET | <N/A> | Success | 200 OK | application/json |
GET | <N/A> | Failure | 400 Bad Request | application/json |
DELETE | <N/A> | Success | 204 No Content | application/json |
DELETE | <N/A> | Failure | 400 Bad Request | application/json |
Subrunner Resource Usage – /v1/usage
Used to monitor the load on subrunners, the processes performing those tasks that are possible to run in parallel.
REQUEST Method | Content-Type | RESPONSE Result | Status Code | Content-Type |
---|---|---|---|---|
GET | <N/A> | Success | 200 OK | application/json |
Example request
$ curl -i https://router.example:5001/v1/usage
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 1234
Content-Type: application/json
X-Service-Identity: router.example-5fc78d
{
"total_usage": {
"content": {
"lru": 0,
"newest": "-",
"oldest": "-",
"total": 0
},
"sessions": 0,
"subrunner_usage": {
[...]
}
},
"usage_per_subrunner": [
{
"subrunner_usage": {
[...]
}
},
[...]
]
}
Metrics – /m1/v1/metrics
An interface intended to be scraped by Prometheus. It is possible to scrape it manually to see current values, but doing so will reset some counters and cause actual Prometheus data to become faulty.
REQUEST Method | Content-Type | RESPONSE Result | Status Code | Content-Type |
---|---|---|---|---|
GET | <N/A> | Success | 200 OK | text/plain |
Example request
$ curl -i https://router.example:5001/m1/v1/metrics
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 1234
Content-Type: text/plain
X-Service-Identity: router.example-5fc78d
# TYPE num_configuration_changes counter
num_configuration_changes 12
# TYPE num_log_errors_total counter
num_log_errors_total 0
# TYPE num_log_warnings_total counter
num_log_warnings_total{category=""} 123
# TYPE num_log_warnings_total counter
num_log_warnings_total{category="cdn"} 0
# TYPE num_log_warnings_total counter
num_log_warnings_total{category="content"} 0
# TYPE num_log_warnings_total counter
num_log_warnings_total{category="generic"} 10
# TYPE num_log_warnings_total counter
num_log_warnings_total{category="repeated_session"} 0
# TYPE num_ssl_errors_total counter
[...]
Node Visit Counters – /v1/node_visits
Used to gather statistics about the number of visits to each node in the routing tree. The returned value is a JSON object containing node ID names and their corresponding counter values.
REQUEST Method | Content-Type | RESPONSE Result | Status Code | Content-Type |
---|---|---|---|---|
GET | <N/A> | Success | 200 OK | application/json |
See Routing Rule Evaluation Metrics for more details.
Example request
$ curl -i https://router.example:5001/v1/node_visits
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 73
Content-Type: application/json
X-Service-Identity: router.example-5fc78d
{
"cache1.tv": "99900",
"offload": "100"
"routingtable": "100000"
}
Node Visit Graph – /v1/node_visits_graph
Creates a GraphML representation of the node visitation data that can be rendered into an image to make it easier to understand the data.
REQUEST Method | Content-Type | RESPONSE Result | Status Code | Content-Type |
---|---|---|---|---|
GET | <N/A> | Success | 200 OK | application/xml |
See Routing Rule Evaluation Metrics for more details.
Example request
> curl -i -k https://router.example:5001/v1/node_visits_graph
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 731
Content-Type: application/xml
X-Service-Identity: router.example-5fc78d
<?xml version="1.0"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd">
<key id="visits" for="node" attr.name="visits" attr.type="string" />
<graph id="G" edgedefault="directed">
<node id="routingtable">
<data key="visits">100000</data>
</node>
<node id="cache1.tv">
<data key="visits">99900</data>
</node>
<node id="offload">
<data key="visits">100</data>
</node>
<edge id="e0" source="routingtable" target="cache1.tv" />
<edge id="e1" source="routingtable" target="offload" />
</graph>
</graphml>
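One possible way to turn the GraphML output into an image locally, assuming the graphviz package (which provides graphml2gv and dot) is installed; any GraphML-capable viewer works equally well:
$ curl -k -o node_visits.graphml https://router.example:5001/v1/node_visits_graph
$ graphml2gv node_visits.graphml | dot -Tpng -o node_visits.png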
Session list - /v1/sessions
Used to list the sessions currently tracked by the router along with their properties.
REQUEST Method | Content-Type | RESPONSE Result | Status Code | Content-Type |
---|---|---|---|---|
GET | <N/A> | Success | 200 OK | application/json |
Example request
$ curl -k -i https://router.example:5001/v1/sessions
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 12345
Content-Type: application/json
X-Service-Identity: router.example-5fc78d
{
"sessions": [
{
"age_seconds": 103,
"cdn": "edgeware",
"cdn_is_redirecting": false,
"client_ip": "1.2.3.4",
"host": "cdn.example:80",
"id": "router.example-5fc78d-00000001",
"idle_seconds": 103,
"last_request_time": "2022-12-02T14:05:05Z",
"latest_request_path": "/__cl/s:storage1/__c/v/f/0/5/v_sintel3v_f05a05f07d352e891d79863131ef4df7/__op/hls-default/__f/index.m3u8",
"no_of_requests": 1,
"requested_bytes": 0,
"requests_redirected": 0,
"requests_served": 0,
"session_groups": [
"all"
],
"session_groups_generation": 2,
"session_path": "/__cl/s:storage1/__c/v/f/0/5/v_sintel3v_f05a05f07d352e891d79863131ef4df7/__op/hls-default/__f/index.m3u8",
"start_time": "2022-12-02T14:05:05Z",
"type": "instream",
"user_agent": "libmpv"
},
[...]
]
}
Session details - /v1/sessions/<id: str>
Used to get details about a specific session from the above session list. The
id
part of the URL corresponds to the id
field in one of the
returned session entries in the above response.
REQUEST Method | Content-Type | RESPONSE Result | Status Code | Content-Type |
---|---|---|---|---|
GET | <N/A> | Success | 200 OK | application/json |
GET | <N/A> | Failure | 404 Not Found | application/json |
Example request
$ curl -k -i https://router.example:5001/v1/sessions/router.example-5fc78d-00000001
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 763
Content-Type: application/json
X-Service-Identity: router.example-5fc78d
{
"age_seconds": 183,
"cdn": "edgeware",
"cdn_is_redirecting": false,
"client_ip": "1.2.3.4",
"host": "cdn.example:80",
"id": "router.example-5fc78d-00000001",
"idle_seconds": 183,
"last_request_time": "2022-12-02T14:05:05Z",
"latest_request_path": "/__cl/s:storage1/__c/v/f/0/5/v_sintel3v_f05a05f07d352e891d79863131ef4df7/__op/hls-default/__f/index.m3u8",
"no_of_requests": 1,
"requested_bytes": 0,
"requests_redirected": 0,
"requests_served": 0,
"session_groups": [
"all"
],
"session_groups_generation": 2,
"session_path": "/__cl/s:storage1/__c/v/f/0/5/v_sintel3v_f05a05f07d352e891d79863131ef4df7/__op/hls-default/__f/index.m3u8",
"start_time": "2022-12-02T14:05:05Z",
"type": "instream",
"user_agent": "libmpv"
}
Content List - /v1/content
REQUEST Method | Content-Type | RESPONSE Result | Status Code | Content-Type |
---|---|---|---|---|
GET | <N/A> | Success | 200 OK | application/json |
Example request
$ curl -k -i https://router.example:5001/v1/content
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 572
Content-Type: application/json
X-Service-Identity: router.example-5fc78d
{
"content": [
[
"/__cl/s:storage1/__c/v/f/0/5/v_sintel3v_f05a05f07d352e891d79863131ef4df7/__op/hls-default/__f/index.m3u8",
{
"cached_count": 0,
"content_requested": false,
"content_set": false,
"expiration_time": "2022-12-02T14:05:05Z",
"key": "/__cl/s:storage1/__c/v/f/0/5/v_sintel3v_f05a05f07d352e891d79863131ef4df7/__op/hls-default/__f/index.m3u8",
"listeners": 0,
"manifest": "",
"request_count": 4,
"state": "HLS:MANIFEST-PENDING",
"wait_count": 0
}
]
]
}
Lua scripts – /v1/lua/<path str>.lua
Used to upload, retrieve and delete custom named Lua scripts on the router.
Global functions in uploaded scripts automatically become available to Lua code
in the configuration (which effectively may be viewed as hooks). Upload a script by PUTing an application/x-lua payload to the endpoint, and retrieve it by GETing the endpoint without a payload.
REQUEST Method | Content-Type | RESPONSE Result | Status Code | Content-Type |
---|---|---|---|---|
PUT | application/x-lua | Success | 204 No Content | <N/A> |
PUT | application/x-lua | Failure | 400 Bad Request | application/json |
GET | <N/A> | Success | 200 OK | application/x-lua |
GET | <N/A> | Failure | 404 Not Found | application/json |
DELETE | <N/A> | Success | 204 No Content | <N/A> |
DELETE | <N/A> | Failure | 400 Bad Request | application/json |
DELETE | <N/A> | Failure | 404 Not Found | application/json |
Example request (PUT
)
Save a Lua script under the name advanced_functions/f1.lua
:
$ curl -i -X PUT \
-d 'function fun1() return 1 end' \
-H "Content-Type: application/x-lua" \
https://router.example:5001/v1/lua/advanced_functions/f1.lua
HTTP/1.1 204 Successfully saved Lua file
Access-Control-Allow-Origin: *
Content-Length: 0
X-Service-Identity: router.example-5fc78d
Example request (PUT
, from file)
Upload an entire Lua file under the name advanced_functions/f1.lua
:
First put your code in a file.
$ cat f1.lua
function fun1()
return 1
end
Then upload it using the --data-binary flag to preserve newlines:
$ curl -i -X PUT \
--data-binary @f1.lua \
-H "Content-Type: application/x-lua" \
https://router.example:5001/v1/lua/advanced_functions/f1.lua
HTTP/1.1 204 Successfully saved Lua file
Access-Control-Allow-Origin: *
Content-Length: 0
X-Service-Identity: router.example-5fc78d
Example request (GET
)
Request the Lua script named advanced_functions/f1.lua
using a GET
request:
$ curl -i https://router.example:5001/v1/lua/advanced_functions/f1.lua
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 28
Content-Type: application/x-lua
X-Service-Identity: router.example-5fc78d
function fun1() return 1 end
Example request (DELETE
)
Delete the Lua script named advanced_functions/f1.lua
using a DELETE
request:
$ curl -i -X DELETE \
https://router.example:5001/v1/lua/advanced_functions/f1.lua
HTTP/1.1 204 Successfully removed Lua file
Access-Control-Allow-Origin: *
Content-Length: 0
X-Service-Identity: router.example-5fc78d
List Lua scripts – /v1/lua
Used to list previously uploaded custom Lua scripts on the router, retrieving their respective paths and file checksums.
REQUEST Method | Content-Type | RESPONSE Result | Status Code | Content-Type |
---|---|---|---|---|
GET | <N/A> | Success | 200 OK | application/json |
Example request
$ curl -k -i https://router.example:5001/v1/lua
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 108
Content-Type: application/json
X-Service-Identity: router.example-5fc78d
[
{
"file_checksum": "d41d8cd98f00b204e9800998ecf8427e",
"path": "advanced_functions/f1.lua"
}
]
Debug a Lua expression – /v1/lua/debug
Used to debug an arbitrary Lua expression on the router in a “sandbox” (with no visible side effects to the state of the router), and inspect the result.
The Lua expression in the body is evaluated inside an isolated copy of the
internal Lua environment including selection input. The stdout
field of the
resulting JSON body is populated with a concatenation of every string provided
as argument to the Lua print()
function during the course of evaluation.
Upon a successful evaluation, as indicated by the success
flag,
return.value
and return.lua_type_name
capture the resulting Lua value.
Otherwise, if evaluation was aborted (e.g. due to a Lua exception), error_msg
reflects any error description arising from the Lua environment.
REQUEST Method | Content-Type | RESPONSE Result | Status Code | Content-Type |
---|---|---|---|---|
POST | application/x-lua | Success | 200 OK | application/json |
Example successful request
$ curl -i -X POST \
-d 'fun1()' \
-H "Content-Type: application/x-lua" \
https://router.example:5001/v1/lua/debug
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 123
Content-Type: application/json
X-Service-Identity: router.example-5fc78d
{
"error_msg": "",
"return": {
"lua_type_name": "number",
"value": 1.0
},
"stdout": "",
"success": true
}
Example unsuccessful request
(attempt to invoke unknown function)
$ curl -i -X POST \
-d 'fun5()' \
-H "Content-Type: application/x-lua" \
https://router.example:5001/v1/lua/debug
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 123
Content-Type: application/json
X-Service-Identity: router.example-5fc78d
{
"error_msg": "[string \"function f0() ...\"]:2: attempt to call global 'fun5' (a nil value)",
"return": {
"lua_type_name": "",
"value": null
},
"stdout": "",
"success": false
}
1.1.5 - Configuration
1.1.5.1 - WebUI Configuration
The web based user interface is installed as a separate component and can be used to configure many common use cases. After navigating to the UI, a login screen will be presented.
Enter your credentials and log in. In the top left corner is a menu for selecting which section of the configuration to change. The configuration that will be active on the router is added in the Routing Workflow view. However, basic elements such as classification rules and routing targets must be added first. Hence, the following main steps are required to produce a proper configuration:
- Create classifiers serving as basic elements to create session groups.
- Create session groups which, using the classifiers, tag requests/clients of the incoming traffic for later use in the routing logic.
- Define offload rules.
- Define rules to control behavior of internal traffic.
- Define backup rules to be used if the routing targets in the above step are unavailable.
- Finally, create the desired routing workflow using the elements defined in the previous steps.
A simplified concrete example of the above steps could be:
- Create two classifiers “smartphone” and “off-net”.
- Create a session group “mobile off-net”.
- Offload off-net traffic from mobile phones to a public CDN.
- Route other traffic to a private CDN.
- If the private CDN has an outage, use the public CDN for all traffic.
Hence, to start with, define the classifiers you will need. Those are based on information in the incoming request, optionally in combination with GeoIP databases or subnet information configured via the Subnet API. Here we show how to set up a GeoIP classifier. Note that the Director ships with a compatible snapshot of the GeoIP database, but for a production system a licensed and updated database is required.
Click the plus sign indicated in the picture above to create a new GeoIP classifier. You will be presented with the following view:
Here you can enter the geographical data on which to match, or check the “Inverted” check box to match anything except the entered geographical data.
The other kinds of classifiers are configured in a similar way.
After having added all the classifiers you need, it is time to create the session groups. Those are named filters that group incoming requests, typically video playback sessions in a video streaming CDN, and are defined with the help of the classifiers. For example, a session group “off-net mobile devices” could be composed of the classifiers “off-net traffic” and “mobile devices”.
Open the Session Groups view from the menu and hit the plus sign to add a new session group.
Define the new sessions groups by combining the previously created classifiers. It is often convenient to define an “All” session group that matches any incoming request.
Next go the “CDN Offload” view:
Here you define conditions for CDN offload. Each row defines a rule for offloading a specified session group. The rule makes use of the Selection Input API. This is an integration API that provides a way to supply additional data for use in the routing decision. Common examples are current bitrates or availability status. The selection input variables to use must be defined in the “Selection Input Types” view in the “Administration” section of the menu:
Reach out to the solution engineers from Agile Content in order to perform this integration in the best way. If no external data is required, so that the offload rule can be based solely on session groups, this is not necessary and the condition field can be set to “Always” or “Disabled”.
When clicking the plus sign to add a new CDN Offload rule, the following view is presented:
The selection input rule is phrased in terms of a variable being above or below a threshold, but a state such as “available”, taking the values 0 or 1, can also be expressed by, for instance, checking whether “available” is below 1.
Moving on, if an incoming request is not offloaded, it will be handled by the Primary CDN section of the routing configuration.
Add all hosts in your primary CDN, together with a weight. A row in this table will be selected by random weighted load balancing. If each weight is the same, each row will be selected with the same probability. Another example would be three rows with weights 100, 100 and 200 which would randomly balance 50% of the load on the last row and the remaining load on the first two rows, i.e. 25% on each of the first and second row. If a Primary CDN host is unavailable, that host will not take part in the random selection.
If all hosts are unavailable, as a final resort the routing evaluation will go to the final Backup CDN step:
Here you can define what to do when all else fails. If not all requests are covered, for example by using an “All” session group, any uncovered request will fail with 403 Forbidden.
Now you have defined the basic elements and it is time to define the routing workflow. Select “Routing Workflow” from the menu, as pictured below. Here you can combine the elements previously created to achieve the desired routing behavior.
When everything seems correct, open the “Publish Routing” view from the menu:
Hit “Publish All Changes” and verify that you get a successful result.
1.1.5.2 - Confd and Confcli
Using confcli to set up routing rules
Configuration of a complex routing tree can be difficult. The command line interface tool called confcli has been developed to make it simpler. It combines building blocks, representing simple routing decisions, into complex routing trees capable of satisfying almost any routing requirements.
These blocks are translated into an ESB3024 Router configuration which is automatically sent to the router, overwriting existing routing rules, CDN list and host list.
Installation and Usage
The confcli
tools are installed alongside ESB3024 Router, on the same host,
and the confcli
command line tool itself is made available on the host machine.
Simply type confcli
in a shell on the host to see the current routing
configuration:
$ confcli
{
"services": {
"routing": {
"settings": {
"trustedProxies": [],
"contentPopularity": {
"algorithm": "score_based",
"sessionGroupNames": []
},
"extendedContentIdentifier": {
"enabled": false,
"includedQueryParams": []
},
"instream": {
"dashManifestRewrite": {
"enabled": false,
"sessionGroupNames": []
},
"hlsManifestRewrite": {
"enabled": false,
"sessionGroupNames": []
},
"reversedFilenameComparison": false
},
"usageLog": {
"enabled": false,
"logInterval": 3600000
}
},
"tuning": {
"content": {
"cacheSizeFullManifests": 1000,
"cacheSizeLightManifests": 10000,
"lightCacheTimeMilliseconds": 86400000,
"liveCacheTimeMilliseconds": 100,
"vodCacheTimeMilliseconds": 10000
},
"general": {
"accessLog": false,
"coutFlushRateMilliseconds": 1000,
"cpuLoadWindowSize": 10,
"eagerCdnSwitching": false,
"httpPipeliningEnable": false,
"logLevel": 3,
"maxConnectionsPerHost": 5,
"overloadThreshold": 32,
"readyThreshold": 8,
"redirectingCdnManifestDownloadRetries": 2,
"repeatedSessionStartThresholdSeconds": 30,
"selectionInputMetricsTimeoutSeconds": 30
},
"session": {
"idleDeactivateTimeoutMilliseconds": 20000,
"idleDeleteTimeoutMilliseconds": 1800000
},
"target": {
"responseTimeoutSeconds": 5,
"retryConnectTimeoutSeconds": 2,
"retryResponseTimeoutSeconds": 2,
"connectTimeoutSeconds": 5,
"maxIdleTimeSeconds": 30,
"requestAttempts": 3
}
},
"sessionGroups": [],
"classifiers": [],
"hostGroups": [],
"rules": [],
"entrypoint": "",
"applyConfig": true
}
}
}
The CLI tool can be used to modify, add and delete values by providing it with
the “path” to the object to change. The path is constructed by joining the field
names leading up to the value with a period between each name, e.g. the path to
the entrypoint
is services.routing.entrypoint
since entrypoint
is nested
under the routing
object, which in turn is under the services
root object.
Lists use an index number in place of a field name, where 0 indicates the very
first element in the list, 1 the second element and so on.
If the list contains objects which have a field with the name name
, the
index number can be replaced by the unique name of the object of interest.
Tab completion is supported by confcli. Pressing tab once will complete as far as possible, and pressing tab twice will list all available alternatives at the path constructed so far.
Display the values at a specific path:
$ confcli services.routing.hostGroups
{
"hostGroups": [
{
"name": "internal",
"type": "redirecting",
"httpPort": 80,
"httpsPort": 443,
"hosts": [
{
"name": "rr1",
"hostname": "rr1.example.com",
"ipv6_address": ""
}
]
},
{
"name": "external",
"type": "host",
"httpPort": 80,
"httpsPort": 443,
"hosts": [
{
"name": "offload-streamer1",
"hostname": "streamer1.example.com",
"ipv6_address": ""
},
{
"name": "offload-streamer2",
"hostname": "streamer2.example.com",
"ipv6_address": ""
}
]
}
]
}
Display the values in a specific list index:
$ confcli services.routing.hostGroups.1
{
"1": {
"name": "external",
"type": "host",
"httpPort": 80,
"httpsPort": 443,
"hosts": [
{
"name": "offload-streamer1",
"hostname": "streamer1.example.com",
"ipv6_address": ""
},
{
"name": "offload-streamer2",
"hostname": "streamer2.example.com",
"ipv6_address": ""
}
]
}
}
Display the values in a specific list index using the object’s name:
$ confcli services.routing.hostGroups.1.hosts.offload-streamer2
{
"offload-streamer2": {
"name": "offload-streamer2",
"hostname": "streamer2.example.com",
"ipv6_address": ""
}
}
Modify a single value:
confcli services.routing.hostGroups.1.hosts.offload-streamer2.hostname new-streamer.example.com
services.routing.hostGroups.1.hosts.offload-streamer2.hostname = 'new-streamer.example.com'
Delete an entry:
$ confcli services.routing.sessionGroups.Apple.classifiers.
{
"classifiers": [
"Apple",
""
]
}
$ confcli services.routing.sessionGroups.Apple.classifiers.1 -d
http://localhost:5000/config/__active/services/routing/sessionGroups/Apple/classifiers/1 reset to default/deleted
$ confcli services.routing.sessionGroups.Apple.classifiers.
{
"classifiers": [
"Apple"
]
}
Adding new values in objects and lists is done using a wizard by invoking
confcli with a path and the -w
argument. This will be shown extensively in
the examples further down in this document rather than here.
If you have a JSON file with previously generated confcli configuration output, it can be applied to a system by typing confcli -i <file path>.
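For example, the current configuration can be exported to a file and later re-applied like this:
$ confcli | tee routing_backup.json
$ confcli -i routing_backup.json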
CDNs and Hosts
Configuration using confcli has no real concept of CDNs; instead it has groups of hosts that share some common settings, such as HTTP(S) port and whether they return a redirection URL, serve content directly or perform a DNS lookup. Of these three variants, the former two share the same parameters, while the DNS variant is slightly different.
Each host belongs to a host group and may itself be an entire CDN using a single public hostname or a single streamer server, all depending on the needs of the user.
Host Health
When creating a host in the confd configuration, you have the option to define a list of health check functions. Each health check function must return true for a host to be selected. This means that the host will only be considered available if all the defined health check functions evaluate to true. If any of the health check functions return false, the host will be considered unavailable and will not be selected for routing. All health check functions are detailed in the section Built-in Lua functions.
$ confcli services.routing.hostGroups -w
Running wizard for resource 'hostGroups'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
hostGroups : [
hostGroup can be one of
1: dns
2: host
3: redirecting
Choose element index or name: redirecting
Adding a 'redirecting' element
hostGroup : {
name (default: ): edgeware
type (default: redirecting): ⏎
httpPort (default: 80): ⏎
httpsPort (default: 443): ⏎
hosts : [
host : {
name (default: ): rr1
hostname (default: ): convoy-rr1.example.com
ipv6_address (default: ): ⏎
healthChecks : [
healthCheck (default: always()): health_check()
Add another 'healthCheck' element to array 'healthChecks'? [y/N]: n
]
}
Add another 'host' element to array 'hosts'? [y/N]: y
host : {
name (default: ): rr2
hostname (default: ): convoy-rr2.example.com
ipv6_address (default: ): ⏎
healthChecks : [
healthCheck (default: always()): ⏎
Add another 'healthCheck' element to array 'healthChecks'? [y/N]: n
]
}
Add another 'host' element to array 'hosts'? [y/N]: ⏎
]
}
Add another 'hostGroup' element to array 'hostGroups'? [y/N]: ⏎
]
Generated config:
{
"hostGroups": [
{
"name": "edgeware",
"type": "redirecting",
"httpPort": 80,
"httpsPort": 443,
"hosts": [
{
"name": "rr1",
"hostname": "convoy-rr1.example.com",
"ipv6_address": "",
"healthChecks": [
"health_check()"
]
},
{
"name": "rr2",
"hostname": "convoy-rr2.example.com",
"ipv6_address": "",
"healthChecks": [
"always()"
]
}
]
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.hostGroups -w
Running wizard for resource 'hostGroups'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
hostGroups : [
hostGroup can be one of
1: dns
2: host
3: redirecting
Choose element index or name: dns
Adding a 'dns' element
hostGroup : {
name (default: ): external-dns
type (default: dns): ⏎
hosts : [
host : {
name (default: ): dns-host
hostname (default: ): dns.example.com
ipv6_address (default: ): ⏎
healthChecks : [
healthCheck (default: always()): ⏎
Add another 'healthCheck' element to array 'healthChecks'? [y/N]: n
]
}
Add another 'host' element to array 'hosts'? [y/N]: ⏎
]
}
Add another 'hostGroup' element to array 'hostGroups'? [y/N]: ⏎
]
Generated config:
{
"hostGroups": [
{
"name": "external-dns",
"type": "dns",
"hosts": [
{
"name": "dns-host",
"hostname": "dns.example.com",
"ipv6_address": "",
"healthChecks": [
"always()"
]
}
]
}
]
}
Merge and apply the config? [y/n]: y
Rule Blocks
The routing configuration using confcli
is done using a combination of logical
building blocks, or rules. Each block evaluates the incoming request in some way
and sends it on to one or more sub-blocks. If the block is of the host
type
described above, the client is sent to that host and the evaluation is done.
Existing blocks
Currently supported blocks are:
- allow: Incoming requests, for which a given rule function matches, are immediately sent to the provided onMatch target.
- consistentHashing: Splits incoming requests randomly between preferred hosts, determined by the proprietary consistent hashing algorithm. The number of hosts to split between is controlled by the spreadFactor.
- contentPopularity: Splits incoming requests into two sub-blocks depending on how popular the requested content is.
- deny: Incoming requests, for which a given rule function matches, are immediately denied, and all non-matching requests are sent to the onMiss target.
- firstMatch: Incoming requests are matched by an ordered series of rules, where the request will be handled by the first rule for which the condition evaluates to true.
- random: Splits incoming requests randomly and equally between a list of target sub-blocks. Useful for simple load balancing.
- split: Splits incoming requests between two sub-blocks depending on how the request is evaluated by a provided function. Can be used for sending clients to different hosts depending on e.g. geographical location or client hardware type.
- weighted: Randomly splits incoming requests between a list of target sub-blocks, weighted according to each target’s associated weight rule. A higher weight means a higher portion of requests will be routed to a sub-block. Rules can be used to decide whether or not to pick a target.
- rawGroup: Contains a raw ESB3024 Router configuration routing tree node, to be inserted as is in the generated configuration. This is only meant to be used in the rare cases when it’s impossible to construct the required routing behavior in any other way.
- rawHost: A host reference for use as an endpoint in rawGroup trees.
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: allow
Adding a 'allow' element
rule : {
name (default: ): allow
type (default: allow): ⏎
condition (default: ): customFunction()
onMatch (default: ): rr1
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "content",
"type": "contentPopularity",
"condition": "customFunction()",
"onMatch": "rr1"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: consistentHashing
Adding a 'consistentHashing' element
rule : {
name (default: ): consistentHashingRule
type (default: consistentHashing):
spreadFactor (default: 1): 2
hashAlgorithm (default: MD5):
targets : [
target : {
target (default: ): rr1
enabled (default: True):
}
Add another 'target' element to array 'targets'? [y/N]: y
target : {
target (default: ): rr2
enabled (default: True):
}
Add another 'target' element to array 'targets'? [y/N]: y
target : {
target (default: ): rr3
enabled (default: True):
}
Add another 'target' element to array 'targets'? [y/N]: n
]
}
Add another 'rule' element to array 'rules'? [y/N]: n
]
Generated config:
{
"rules": [
{
"name": "consistentHashingRule",
"type": "consistentHashing",
"spreadFactor": 2,
"hashAlgorithm": "MD5",
"targets": [
{
"target": "rr1",
"enabled": true
},
{
"target": "rr2",
"enabled": true
},
{
"target": "rr3",
"enabled": true
}
]
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: contentPopularity
Adding a 'contentPopularity' element
rule : {
name (default: ): content
type (default: contentPopularity): ⏎
contentPopularityCutoff (default: 10): 20
onPopular (default: ): rr1
onUnpopular (default: ): rr2
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "content",
"type": "contentPopularity",
"contentPopularityCutoff": 20.0,
"onPopular": "rr1",
"onUnpopular": "rr2"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: deny
Adding a 'deny' element
rule : {
name (default: ): deny
type (default: deny): ⏎
condition (default: ): customFunction()
onMiss (default: ): rr1
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "content",
"type": "contentPopularity",
"condition": "customFunction()",
"onMiss": "rr1"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: firstMatch
Adding a 'firstMatch' element
rule : {
name (default: ): firstMatch
type (default: firstMatch): ⏎
targets : [
target : {
onMatch (default: ): rr1
rule (default: ): customFunction()
}
Add another 'target' element to array 'targets'? [y/N]: y
target : {
onMatch (default: ): rr2
rule (default: ): otherCustomFunction()
}
Add another 'target' element to array 'targets'? [y/N]: n
]
}
Add another 'rule' element to array 'rules'? [y/N]: n
]
Generated config:
{
"rules": [
{
"name": "firstMatch",
"type": "firstMatch",
"targets": [
{
"onMatch": "rr1",
"condition": "customFunction()"
},
{
"onMatch": "rr2",
"condition": "otherCustomFunction()"
}
]
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: random
Adding a 'random' element
rule : {
name (default: ): random
type (default: random): ⏎
targets : [
target (default: ): rr1
Add another 'target' element to array 'targets'? [y/N]: y
target (default: ): rr2
Add another 'target' element to array 'targets'? [y/N]: ⏎
]
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "random",
"type": "random",
"targets": [
"rr1",
"rr2"
]
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: split
Adding a 'split' element
rule : {
name (default: ): split
type (default: split): ⏎
condition (default: ): custom_function()
onMatch (default: ): rr2
onMiss (default: ): rr1
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "split",
"type": "split",
"condition": "custom_function()",
"onMatch": "rr2",
"onMiss": "rr1"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.rules. -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: weighted
Adding a 'weighted' element
rule : {
name (default: ): weight
type (default: weighted): ⏎
targets : [
target : {
target (default: ): rr1
weight (default: 100): ⏎
condition (default: always()): always()
}
Add another 'target' element to array 'targets'? [y/N]: y
target : {
target (default: ): rr2
weight (default: 100): si('rr2-input-weight')
condition (default: always()): gt('rr2-bandwidth', 1000000)
}
Add another 'target' element to array 'targets'? [y/N]: y
target : {
target (default: ): rr2
weight (default: 100): custom_func()
condition (default: always()): always()
}
Add another 'target' element to array 'targets'? [y/N]: ⏎
]
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "weight",
"type": "weighted",
"targets": [
{
"target": "rr1",
"weight": "100",
"condition": "always()"
},
{
"target": "rr2",
"weight": "si('rr2-input-weight')",
"condition": "gt('rr2-bandwith', 1000000)"
},
{
"target": "rr2",
"weight": "custom_func()",
"condition": "always()"
}
]
}
]
}
Merge and apply the config? [y/n]: y
>> First add a raw host block that refers to a regular host
$ confcli services.routing.rules. -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: rawHost
Adding a 'rawHost' element
rule : {
name (default: ): raw-host
type (default: rawHost): ⏎
hostId (default: ): rr1
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "raw-host",
"type": "rawHost",
"hostId": "rr1"
}
]
}
Merge and apply the config? [y/n]: y
>> And then add a rule using the host node
$ confcli services.routing.rules. -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: rawGroup
Adding a 'rawGroup' element
rule : {
name (default: ): raw-node
type (default: rawGroup): ⏎
memberOrder (default: sequential): ⏎
members : [
member : {
target (default: ): raw-host
weightFunction (default: ): return 1
}
Add another 'member' element to array 'members'? [y/N]: ⏎
]
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "raw-node",
"type": "rawGroup",
"memberOrder": "sequential",
"members": [
{
"target": "raw-host",
"weightFunction": "return 1"
}
]
}
]
}
Merge and apply the config? [y/n]: y
Rule Language
Some blocks, such as the split and firstMatch types, have a rule field that contains a small function written in a very simple programming language. This field is used to filter incoming client requests in order to determine how the rule block should react.
In the case of a split
block, the rule is evaluated and if it is true the
client is sent to the onMatch
part of the block, otherwise it is sent to the
onMiss
part for further evaluation.
In the case of a firstMatch
block, the rule for each target will be evaluated
top to bottom in order until either a rule evaluates to true or the list is
exhausted. If a rule evaluates to true, the client will be sent to the onMatch
part of the block, otherwise the next target in the list will be tried. If all
targets have been exhausted, then the entire rule evaluation will fail, and the
routing tree will be restarted with the firstMatch
block effectively removed.
Example of Boolean Functions
Let’s say we have an ESB3024 Router set up with a session group that matches Apple
devices (named “Apple”). To route all Apple devices to a specific streamer one
would simply create a split
block with the following rule:
in_session_group('Apple')
In order to make more complex rules it’s possible to combine several checks like
this in the same rule. Let’s extend the hypothetical ESB3024 Router above with a
configured subnet with all IP addresses in Europe (named “Europe”). To make a
rule that accepts any clients using an Apple device and living outside of
Europe, but only as long as the reported load on the streamer (as indicated by
the selection input variable
“europe_load_mbps”) is less than 1000 megabits per second one could make an
offload
block with the following rule (without linebreaks):
in_session_group('Apple')
and not in_subnet('Europe')
and lt('europe_load_mbps', 1000)
In this example in_session_group('Apple')
will be true if the client belongs
to the session group named ‘Apple’. The function call in_subnet('Europe')
is
true if the client’s IP belongs to the subnet named ‘Europe’, but the word not
in front of it reverses the value so the entire section ends up being false if
the client is in Europe. Finally lt('europe_load_mbps', 1000)
is true if
there is a selection input variable named “europe_load_mbps” and its value is
less than 1000.
Since the three parts are conjoined with the and
keyword they must all
be true for the entire rule to match. If the keyword or
had been used
instead it would have been enough for any of the parts to be true for the
rule to match.
Example of Numeric Functions
A hypothetical CDN has two streamers with different capacities; Host_1 has roughly twice the capacity of Host_2. A simple random load balancing would put undue stress on the second host since it would receive as much traffic as the more capable Host_1.
This can be solved by using a weighted
random distribution rule block with
suitable rules for the two hosts:
{
"targets": [
{
"target": "Host_1",
"condition": "always()",
"weight": "100"
},
{
"target": "Host_2",
"condition": "always()",
"weight": "50"
}
]
}
resulting in Host_1 receiving twice as many requests as Host_2, since its weight is double that of Host_2.
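Assuming the weighted block picks each target with probability proportional to its weight, Host_1 would be chosen for roughly 100 / (100 + 50) ≈ 67 % of the requests and Host_2 for roughly 50 / 150 ≈ 33 %, i.e. the intended 2:1 ratio.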
If the CDN is capable of reporting the free capacity of the hosts, for example by writing to a selection input variable for each host, it’s easy to write a more intelligent load balancing rule by making the weights correspond to the amount of capacity left on each host:
{
"targets": [
{
"target": "Host_1",
"condition": "always()",
"weight": "si('free_capacity_host_1')"
},
{
"target": "Host_2",
"condition": "always()",
"weight": "si('free_capacity_host_2')"
}
]
}
It is also possible to write custom Lua functions that return suitable weights, perhaps taking the host as an argument:
{
"targets": [
{
"target": "Host_1",
"condition": "always()",
"weight": "intelligent_weight_function('Host_1')"
},
{
"target": "Host_2",
"condition": "always()",
"weight": "intelligent_weight_function('Host_2')"
}
]
}
These different weight rules can of course be combined in the same rule block, with one target having a hard coded number, another using a dynamically updated selection input variable and yet another having a custom-built function.
Due to limitations in the random number generator used to distribute requests, it’s better to use somewhat large values, around 100–1000 or so, than to use small values near 0.
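As a hedged illustration of the custom-function variant above, the sketch below shows what such a weight function might look like in Lua. It assumes the same selection input variables as the earlier example (free_capacity_host_1, free_capacity_host_2) and that the host name is passed in as in intelligent_weight_function('Host_1'); the naming scheme and the clamping are purely illustrative choices, not the router's own implementation.

-- Hypothetical custom weight function. It assumes selection input variables
-- named free_capacity_host_1, free_capacity_host_2 and so on, as in the
-- example above; adapt the naming to your own data.
function intelligent_weight_function(host)
    -- si() returns 0 if the variable is undefined or negative.
    local free_capacity = si('free_capacity_' .. string.lower(host))
    if free_capacity <= 0 then
        -- Keep a small non-zero weight so the host is not excluded entirely.
        return 1
    end
    -- Clamp to the recommended 100-1000 weight range and return an integer.
    return math.floor(math.min(1000, math.max(100, free_capacity)))
end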
Built-in Functions
The following built-in functions are available when writing rules:
- in_session_group(str name) : True if session belongs to session group <name>
- in_all_session_groups(str sg_name, ...) : True if session belongs to all specified session groups
- in_any_session_group(str sg_name, ...) : True if session belongs to any specified session group
- in_subnet(str subnet_name) : True if client IP belongs to the named subnet
- gt(str si_var, number value) : True if selection_inputs[si_var] > value
- gt(str si_var1, str si_var2) : True if selection_inputs[si_var1] > selection_inputs[si_var2]
- ge(str si_var, number value) : True if selection_inputs[si_var] >= value
- ge(str si_var1, str si_var2) : True if selection_inputs[si_var1] >= selection_inputs[si_var2]
- lt(str si_var, number value) : True if selection_inputs[si_var] < value
- lt(str si_var1, str si_var2) : True if selection_inputs[si_var1] < selection_inputs[si_var2]
- le(str si_var, number value) : True if selection_inputs[si_var] <= value
- le(str si_var1, str si_var2) : True if selection_inputs[si_var1] <= selection_inputs[si_var2]
- eq(str si_var, number value) : True if selection_inputs[si_var] == value
- eq(str si_var1, str si_var2) : True if selection_inputs[si_var1] == selection_inputs[si_var2]
- neq(str si_var, number value) : True if selection_inputs[si_var] != value
- neq(str si_var1, str si_var2) : True if selection_inputs[si_var1] != selection_inputs[si_var2]
- si(str si_var) : Returns the value of selection_inputs[si_var] if it is defined and non-negative, otherwise it returns 0
- always() : Returns true, useful when creating weighted rule blocks
- never() : Returns false, opposite of always()
These functions, as well as custom functions written in Lua and uploaded to the ESB3024 Router, can be combined to make suitably precise rules.
Combining Multiple Boolean Functions
In order to make the rule language easy to work with, it is fairly restricted and simple. One restriction is that it is only possible to chain multiple function results together using either the and keyword or the or keyword, but not a combination of both conjunctions.
Statements joined with and or or keywords are evaluated one by one, starting with the left-most statement and moving right. As soon as the end result of the entire expression is certain, the evaluation ends. This means that evaluation ends at the first false statement in an and expression, since a single false component means the entire expression must also be false. It also means that evaluation ends at the first true statement in an or expression, since only one component needs to be true for the entire statement to be true as well. This is known as short-circuit or lazy evaluation.
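A practical consequence of short-circuit evaluation is that cheap checks are best placed before expensive ones. In the hedged example below, expensive_custom_check() is a hypothetical, costly custom Lua function; because evaluation stops as soon as the outcome is certain, it is only called for clients that actually are in the 'Europe' subnet:

in_subnet('Europe') and expensive_custom_check()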
Custom Functions
It is possible to write arbitrarily complex Lua functions that take many parameters or calculations into consideration when evaluating an incoming client request. Write such functions so that they return only non-negative integer values, upload them to the router, and they can then be used from the rule language. Simply call them like any of the built-in functions listed above, using strings and numbers as arguments if necessary, and their result will be used to determine the routing path.
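For illustration only, the following hedged Lua sketch shows such a custom function. It combines two selection input variables through the built-in si() function and returns 1 (treated as true) or 0 (treated as false); the variable names and the threshold are assumptions made for this example, not values the router defines.

-- Hypothetical custom function: returns 1 (true) while the combined load
-- reported through two selection input variables stays below a threshold,
-- otherwise 0 (false). Variable names and threshold are example values.
function region_has_capacity()
    local total_load = si('europe_load_mbps') + si('offload_load_mbps')
    if total_load < 2000 then
        return 1
    end
    return 0
end

Once uploaded, it can be called from a rule just like a built-in function, for example in_session_group('Apple') and region_has_capacity().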
Formal Syntax
The full syntax of the language can be described in just a few lines of BNF grammar:
<rule> := <weight_rule> | <match_rule> | <value_rule>
<weight_rule> := "if" <compound_predicate> "then" <weight> "else" <weight>
<match_rule> := <compound_predicate>
<value_rule> := <weight>
<compound_predicate> := <logical_predicate> |
<logical_predicate> ["and" <logical_predicate> ...] |
<logical_predicate> ["or" <logical_predicate> ...]
<logical_predicate> := ["not"] <predicate>
<predicate> := <function_name> "(" ")" |
<function_name> "(" <argument> ["," <argument> ...] ")"
<function_name> := <letter> [<function_name_tail> ...]
<function_name_tail> := empty | <letter> | <digit> | "_"
<argument> := <string> | <number>
<weight> := integer | <predicate>
<number> := float | integer
<string> := "'" [<letter> | <digit> | <symbol> ...] "'"
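To make the grammar concrete, here is one hedged example of each rule form, built only from functions used earlier in this document (the numeric values are arbitrary); the first line is a weight rule, the second a match rule and the third a value rule:

if in_subnet('Europe') and lt('europe_load_mbps', 1000) then 100 else 10
in_session_group('Apple') and not in_subnet('Europe')
si('free_capacity_host_1')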
Building a Routing Configuration
This example sets up an entire routing configuration for a system with an ESB3008 Request Router, two streamers and the "Apple devices outside of Europe" example used earlier in this document. Any clients not matching the criteria will be sent to an offload CDN with two streamers in a simple, uniformly randomized load balancing setup.
Set up Session Group
First make a classifier and a session group that uses it:
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: userAgent
Adding a 'userAgent' element
classifier : {
name (default: ): Apple
type (default: userAgent): ⏎
inverted (default: False): ⏎
patternType (default: stringMatch): ⏎
pattern (default: ): *apple*
}
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
Generated config:
{
"classifiers": [
{
"name": "Apple",
"type": "userAgent",
"inverted": false,
"patternType": "stringMatch",
"pattern": "*apple*"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.sessionGroups -w
Running wizard for resource 'sessionGroups'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
sessionGroups : [
sessionGroup : {
name (default: ): Apple
classifiers : [
classifier (default: ): Apple
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
}
Add another 'sessionGroup' element to array 'sessionGroups'? [y/N]: ⏎
]
Generated config:
{
"sessionGroups": [
{
"name": "Apple",
"classifiers": [
"Apple"
]
}
]
}
Merge and apply the config? [y/n]: y
Set up Hosts
Create two host groups and add a Request Router to the first and two streamers to the second, which will be used for offload:
$ confcli services.routing.hostGroups -w
Running wizard for resource 'hostGroups'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
hostGroups : [
hostGroup can be one of
1: dns
2: host
3: redirecting
Choose element index or name: redirecting
Adding a 'redirecting' element
hostGroup : {
name (default: ): internal
type (default: redirecting): ⏎
httpPort (default: 80): ⏎
httpsPort (default: 443): ⏎
hosts : [
host : {
name (default: ): rr1
hostname (default: ): rr1.example.com
ipv6_address (default: ): ⏎
}
Add another 'host' element to array 'hosts'? [y/N]: ⏎
]
}
Add another 'hostGroup' element to array 'hostGroups'? [y/N]: y
hostGroup can be one of
1: dns
2: host
3: redirecting
Choose element index or name: host
Adding a 'host' element
hostGroup : {
name (default: ): external
type (default: host): ⏎
httpPort (default: 80): ⏎
httpsPort (default: 443): ⏎
hosts : [
host : {
name (default: ): offload-streamer1
hostname (default: ): streamer1.example.com
ipv6_address (default: ): ⏎
}
Add another 'host' element to array 'hosts'? [y/N]: y
host : {
name (default: ): offload-streamer2
hostname (default: ): streamer2.example.com
ipv6_address (default: ): ⏎
}
Add another 'host' element to array 'hosts'? [y/N]: ⏎
]
}
Add another 'hostGroup' element to array 'hostGroups'? [y/N]: ⏎
]
Generated config:
{
"hostGroups": [
{
"name": "internal",
"type": "redirecting",
"httpPort": 80,
"httpsPort": 443,
"hosts": [
{
"name": "rr1",
"hostname": "rr1.example.com",
"ipv6_address": ""
}
]
},
{
"name": "external",
"type": "host",
"httpPort": 80,
"httpsPort": 443,
"hosts": [
{
"name": "offload-streamer1",
"hostname": "streamer1.example.com",
"ipv6_address": ""
},
{
"name": "offload-streamer2",
"hostname": "streamer2.example.com",
"ipv6_address": ""
}
]
}
]
}
Merge and apply the config? [y/n]: y
Create Load Balancing and Offload Block
Add both offload streamers as targets in a random rule block:
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: random
Adding a 'random' element
rule : {
name (default: ): balancer
type (default: random): ⏎
targets : [
target (default: ): offload-streamer1
Add another 'target' element to array 'targets'? [y/N]: y
target (default: ): offload-streamer2
Add another 'target' element to array 'targets'? [y/N]: ⏎
]
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "balancer",
"type": "random",
"targets": [
"offload-streamer1",
"offload-streamer2"
]
}
]
}
Merge and apply the config? [y/n]: y
Then create a split block with the request router and the load balanced CDN as targets:
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: split
Adding a 'split' element
rule : {
name (default: ): offload
type (default: split): ⏎
rule (default: ): in_session_group('Apple') and not in_subnet('Europe') and lt('europe_load_mbps', 1000)
onMatch (default: ): rr1
onMiss (default: ): balancer
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "offload",
"type": "split",
"condition": "in_session_group('Apple') and not in_subnet('Europe') and lt('europe_load_mbps', 1000)",
"onMatch": "rr1",
"onMiss": "balancer"
}
]
}
Merge and apply the config? [y/n]: y
The last step required is to set the entrypoint of the routing tree so the router knows where to start evaluating:
$ confcli services.routing.entrypoint offload
services.routing.entrypoint = 'offload'
Evaluate
Now all the rules have been set up and the router has been reconfigured. The translated configuration can be read from the router's configuration API:
$ curl -k https://router-host:5001/v2/configuration 2> /dev/null | jq .routing
{
"id": "offload",
"member_order": "sequential",
"members": [
{
"host_id": "rr1",
"id": "offload.rr1",
"weight_function": "return ((in_session_group('Apple') ~= 0) and
(in_subnet('Europe') == 0) and
(lt('europe_load_mbps', 1000) ~= 0) and 1) or 0 "
},
{
"id": "offload.balancer",
"member_order": "weighted",
"members": [
{
"host_id": "offload-streamer1",
"id": "offload.balancer.offload-streamer1",
"weight_function": "return 100"
},
{
"host_id": "offload-streamer2",
"id": "offload.balancer.offload-streamer2",
"weight_function": "return 100"
}
],
"weight_function": "return 1"
}
],
"weight_function": "return 100"
}
Note that the configuration language code has been translated into its Lua equivalent.
1.1.5.3 - Session Groups and Classification
ESB3024 Router provides a flexible classification engine, allowing the assignment of clients into session groups that can then be used to base routing decisions on.
Session Classification
In order to perform routing it is necessary to classify incoming sessions according to the relevant parameters. This is done through session groups and their associated classifiers.
There are different ways of classifying a request:
- Strings with wildcards: Simple case-insensitive string pattern with support for adding asterisks ('*') in order to match any value at that point in the pattern.
- Strings with regular expressions: A complex string matching pattern capable of matching more complicated strings than the simple wildcard matching type.

Valid string matching sources are content_url_path, content_url_query_params, hostname and user_agent, examples of which will be shown below.

- GeoIP: Based on the geographic location of the client, supporting wildcard matching. Geographic location data is provided by MaxMind. The possible values to match with are any combinations of: Continent, Country, Cities, ASN.
- Anonymous IP: Classifies clients using an anonymous IP. Database of anonymous IPs is provided by MaxMind.
- IP range: Based on whether a client's IP belongs to any of the listed IP ranges or not.
- Subnet: Tests if a client's IP belongs to a named subnet, see Subnets for more details.
- ASN ID list: Checks to see if a client's IP belongs to any of the specified ASN IDs.
- Random: Randomly classifies clients according to a given probability. The classifier is deterministic, meaning that a session will always get the same classification, even if evaluated multiple times.
A session group may have more than one classifier. If it does, all the classifiers must match the incoming client request for it to belong to the session group. It is also possible for a request to belong to multiple session groups, or to none.
To send certain clients to a specific host you first need to create a suitable
classifier using confcli
in wizard mode. The wizard will guide you through the
process of creating a new entry, asking you what value to input for each field
and helping you by telling you what inputs are allowed for restricted fields
such as the string comparison source mentioned above:
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: geoip
Adding a 'geoip' element
classifier : {
name (default: ): sweden_matcher
type (default: geoip): ⏎
inverted (default: False): ⏎
continent (default: ): ⏎
country (default: ): sweden
cities : [
city (default: ): ⏎
Add another 'city' element to array 'cities'? [y/N]: ⏎
]
asn (default: ): ⏎
}
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
Generated config:
{
"classifiers": [
{
"name": "sweden_matcher",
"type": "geoip",
"inverted": false,
"continent": "",
"country": "sweden",
"cities": [
""
],
"asn": ""
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: ipranges
Adding a 'ipranges' element
classifier : {
name (default: ): company_matcher
type (default: ipranges): ⏎
inverted (default: False): ⏎
ipranges : [
iprange (default: ): 90.128.0.0/12
Add another 'iprange' element to array 'ipranges'? [y/N]: ⏎
]
}
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
Generated config:
{
"classifiers": [
{
"name": "company_matcher",
"type": "ipranges",
"inverted": false,
"ipranges": [
"90.128.0.0/12"
]
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: stringMatcher
Adding a 'stringMatcher' element
classifier : {
name (default: ): apple_matcher
type (default: stringMatcher): ⏎
inverted (default: False): ⏎
source (default: content_url_path): user_agent
pattern (default: ): *apple*
}
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
Generated config:
{
"classifiers": [
{
"name": "apple_matcher",
"type": "stringMatcher",
"inverted": false,
"source": "user_agent",
"pattern": "*apple*"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: regexMatcher
Adding a 'regexMatcher' element
classifier : {
name (default: ): content_matcher
type (default: regexMatcher): ⏎
inverted (default: False): ⏎
source (default: content_url_path): ⏎
pattern (default: ): .*/(live|news_channel)/.*m3u8
}
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
Generated config:
{
"classifiers": [
{
"name": "content_matcher",
"type": "regexMatcher",
"inverted": false,
"source": "content_url_path",
"pattern": ".*/(live|news_channel)/.*m3u8"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: subnet
Adding a 'subnet' element
classifier : {
name (default: ): company_matcher
type (default: subnet): ⏎
inverted (default: False): ⏎
pattern (default: ): company
}
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
Generated config:
{
"classifiers": [
{
"name": "company_matcher",
"type": "subnet",
"inverted": false,
"pattern": "company"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: hostName
Adding a 'hostName' element
classifier : {
name (default: ): host_name_classifier
type (default: hostName): ⏎
inverted (default: False): ⏎
patternType (default: stringMatch): ⏎
pattern (default: ): *live.example*
}
Add another 'classifier' element to array 'classifiers'? [y/N]: n
]
Generated config:
{
"classifiers": [
{
"name": "host_name_classifier",
"type": "hostName",
"inverted": false,
"patternType": "stringMatch",
"pattern": "*live.example*"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: contentUrlPath
Adding a 'contentUrlPath' element
classifier : {
name (default: ): vod_matcher
type (default: contentUrlPath): ⏎
inverted (default: False): ⏎
patternType (default: stringMatch): ⏎
pattern (default: ): *vod*
}
Add another 'classifier' element to array 'classifiers'? [y/N]: n
]
Generated config:
{
"classifiers": [
{
"name": "vod_matcher",
"type": "contentUrlPath",
"inverted": false,
"patternType": "stringMatch",
"pattern": "*vod*"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: contentUrlQueryParameters
Adding a 'contentUrlQueryParameters' element
classifier : {
name (default: ): bitrate_matcher
type (default: contentUrlQueryParameters): ⏎
inverted (default: False): ⏎
patternType (default: stringMatch): regex
pattern (default: ): .*bitrate=100000.*
}
Add another 'classifier' element to array 'classifiers'? [y/N]: n
]
Generated config:
{
"classifiers": [
{
"name": "bitrate_matcher",
"type": "contentUrlQueryParameters",
"inverted": false,
"patternType": "regex",
"pattern": ".*bitrate=100000.*"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: userAgent
Adding a 'userAgent' element
classifier : {
name (default: ): iphone_matcher
type (default: userAgent): ⏎
inverted (default: False): ⏎
patternType (default: stringMatch): regex
pattern (default: ): i(P|p)hone
}
Add another 'classifier' element to array 'classifiers'? [y/N]: n
]
Generated config:
{
"classifiers": [
{
"name": "iphone_matcher",
"type": "userAgent",
"inverted": false,
"patternType": "regex",
"pattern": "i(P|p)hone"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: asnIds
Adding a 'asnIds' element
classifier : {
name (default: ): asn_matcher
type (default: asnIds): ⏎
inverted (default: False): ⏎
asnIds <The list of ASN IDs to accept. (default: [])>: [
asnId: 1
Add another 'asnId' element to array 'asnIds'? [y/N]: y
asnId: 2
Add another 'asnId' element to array 'asnIds'? [y/N]: y
asnId: 3
Add another 'asnId' element to array 'asnIds'? [y/N]: ⏎
]
}
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
Generated config:
{
"classifiers": [
{
"name": "asn_matcher",
"type": "asnIds",
"inverted": false,
"asnIds": [
1,
2,
3
]
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: random
Adding a 'random' element
classifier <A classifier randomly applying to clients based on the provided probability. (default: OrderedDict())>: {
name (default: ): random_matcher
type (default: random):
probability (default: 0.5): 0.7
}
Add another 'classifier' element to array 'classifiers'? [y/N]: n
]
Generated config:
{
"classifiers": [
{
"name": "random_matcher",
"type": "random",
"probability": 0.7
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: anonymousIp
Adding a 'anonymousIp' element
classifier : {
name (default: ): anon_ip_matcher
type (default: anonymousIp):
inverted (default: False):
}
Add another 'classifier' element to array 'classifiers'? [y/N]: n
]
Generated config:
{
"classifiers": [
{
"name": "anon_ip_matcher",
"type": "anonymousIp",
"inverted": false
}
]
}
Merge and apply the config? [y/n]: y
These classifiers can now be used to construct session groups and properly classify clients. Using the examples above, let’s create a session group classifying clients from Sweden using an Apple device:
$ confcli services.routing.sessionGroups -w
Running wizard for resource 'sessionGroups'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
sessionGroups : [
sessionGroup : {
name (default: ): inSwedenUsingAppleDevice
classifiers : [
classifier (default: ): sweden_matcher
Add another 'classifier' element to array 'classifiers'? [y/N]: y
classifier (default: ): apple_matcher
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
}
Add another 'sessionGroup' element to array 'sessionGroups'? [y/N]: ⏎
]
Generated config:
{
"sessionGroups": [
{
"name": "inSwedenUsingAppleDevice",
"classifiers": [
"sweden_matcher",
"apple_matcher"
]
}
]
}
Merge and apply the config? [y/n]: y
Clients classified by the sweden_matcher
and apple_matcher
classifiers
will now be put in the session group inSwedenUsingAppleDevice
. Using session
groups in routing will be demonstrated later in this document.
Advanced Classification
The above example will simply apply all classifiers in the list, and as long as they all evaluate to true for a session, that session will be tagged with the session group. For situations where this isn’t enough, classifiers can instead be combined using simple logic statements to form complex rules.
A first simple example can be a session group that accepts any viewers either in ASN 1, 2 or 3 (corresponding to the classifier asn_matcher) or living in Sweden. This can be done by creating a session group and adding the following logic statement:
'sweden_matcher' OR 'asn_matcher'
A slightly more advanced case is where a session group should only contain sessions neither in any of the three ASNs nor in Sweden. This is done by negating the previous example:
NOT ('sweden_matcher' OR 'asn_matcher')
A single classifier can also be negated, rather than the whole statement, for example to accept any Swedish viewers except those in the three ASNs:
'sweden_matcher' AND NOT 'asn_matcher'
Arbitrarily complex statements can be created using classifier names, parentheses,
and the keywords AND
, OR
and NOT
.
For example a session group accepting any Swedish viewers except those in the Stockholm region unless they are also Apple users:
'sweden_matcher' AND (NOT 'stockholm_matcher' OR 'apple_matcher')
Note that the classifier names must be enclosed in single quotes when using this syntax.
Applying this kind of complex classifier using confcli is no more difficult than adding a single classifier at a time:
$ confcli services.routing.sessionGroups. -w
Running wizard for resource 'sessionGroups'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
sessionGroups : [
sessionGroup : {
name (default: ): complex_group
classifiers : [
classifier (default: ): 'sweden_matcher' AND (NOT 'stockholm_matcher' OR 'apple_matcher')
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
}
Add another 'sessionGroup' element to array 'sessionGroups'? [y/N]: ⏎
]
Generated config:
{
"sessionGroups": [
{
"name": "complex_group",
"classifiers": [
"'sweden_matcher' AND (NOT 'stockholm_matcher' OR 'apple_matcher')"
]
}
]
}
Merge and apply the config? [y/n]: y
1.1.5.4 - Accounts
If accounts are configured, the router will tag sessions as belonging to an
account. Note that if accounts are not configured, or a session does not belong to any configured account, the session will be tagged with the default account.
Metrics will be tracked separately for each account when applicable.
Configuration
Accounts are configured using session groups, see Classification
for more information. Using confcli
, an account is configured by defining an
account name and a list of session groups that a session must be classified into in order to belong to the account. An account called account_1
can be configured by
running the command
confcli services.routing.accounts -w
Running wizard for resource 'accounts'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
accounts : [
account : {
name (default: ): account_1
sessionGroups <A session will be tagged as belonging to this account if it's classified into all of the listed session groups. (default: [])>: [
sessionGroup (default: ): session_group_1
Add another 'sessionGroup' element to array 'sessionGroups'? [y/N]: y
sessionGroup (default: ): session_group_2
Add another 'sessionGroup' element to array 'sessionGroups'? [y/N]: n
]
}
Add another 'account' element to array 'accounts'? [y/N]: n
]
Generated config:
{
"accounts": [
{
"name": "account_1",
"sessionGroups": [
"session_group_1",
"session_group_2"
]
}
]
}
Merge and apply the config? [y/n]: y
A session will belong to the account account_1
if it has been classified into
the two session groups session_group_1
and session_group_2
.
Metrics
If using the configuration above, the metrics will be separated per account:
# TYPE num_requests counter
num_requests{account="account_1",selector="initial"} 3
# TYPE num_requests counter
num_requests{account="default",selector="initial"} 3
1.1.5.5 - Advanced features
1.1.5.5.1 - Content popularity
ESB3024 Router allows routing decisions based on content popularity. All incoming content requests are tracked to continuously update a content popularity ranking list. The popularity ranking algorithm is designed to let popular content quickly rise to the top while unpopular content decays and sinks towards the bottom.
Routing
A content popularity based routing rule can be created by running
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: contentPopularity
Adding a 'contentPopularity' element
rule : {
name (default: ): content_popularity_rule
type (default: contentPopularity):
contentPopularityCutoff (default: 10): 5
onPopular (default: ): edge-streamer
onUnpopular (default: ): offload
}
Add another 'rule' element to array 'rules'? [y/N]: n
]
Generated config:
{
"rules": [
{
"name": "content_popularity_rule",
"type": "contentPopularity",
"contentPopularityCutoff": 5.0,
"onPopular": "edge-streamer",
"onUnpopular": "offload"
}
]
}
Merge and apply the config? [y/n]: y
This rule will route requests for the top 5 most popular contents to edge-streamer and all other requests to offload.
Some configuration settings related to content popularity are available:
$ confcli services.routing.settings.contentPopularity
{
"contentPopularity": {
"enabled": true,
"algorithm": "score_based",
"sessionGroupNames": []
}
}
- enabled: Whether or not to track content popularity. When enabled is set to false, content popularity will not be tracked. Note that routing on content popularity is possible even if enabled is false and content popularity has been tracked previously.
- algorithm: Choice of content popularity tracking algorithm. There are two possible choices: score_based or time_based (detailed below).
- sessionGroupNames: Names of the session groups for which content popularity should be tracked. Note that content popularity is tracked globally, not per session group.
Algorithm tuning
The behaviour of each content popularity tracking algorithm can be tuned using the raw JSON API.
All configuration parameters for content popularity reside in the
settings
object of the configuration, an example of which can be
seen below:
{
"settings": {
"content_popularity": {
"algorithm": "scored_based",
"session_group_names": ["vod_only"],
"score_based:": {
"requests_between_popularity_decay": 1000,
"popularity_list_max_size": 100000,
"popularity_prediction_factor": 2.5,
"popularity_decay_fraction": 0.2
},
"time_based": {
"intervals_per_hour": 10
}
}
}
}
The field algorithm
dictates which content popularity tracking
algorithm to use, can either be score_based
or time_based
.
The field session_group_names
defines the sessions for which content
popularity should be tracked. In the example above, sessions belonging to
the vod_only
session group will be tracked for content popularity.
If left empty, content popularity will be tracked for all sessions.
The remaining configuration parameters are algorithm specific.
Score based algorithm
The field popularity_list_max_size defines the maximum number of unique contents to track for popularity. This can be used to limit memory growth. A single entry in the popularity ranking list will consume at most 180 bytes of memory, e.g. using "popularity_list_max_size": 1000 would consume at most 180 ⋅ 1,000 = 180,000 B = 0.18 MB. If the content popularity list is full, a request for a content not already in the list will replace the least popular entry. Setting a very high max size will not impact performance; it will only consume more memory.
The field requests_between_popularity_decay
defines the number of requests
between each popularity decay update, an integral component of this feature.
The fields popularity_prediction_factor
and popularity_decay_fraction
tune
the behaviour of the content popularity ranking algorithm, explained further
below.
Decay update
To allow for popular content to quickly rise in popularity and unpopular content to sink, a dynamic popularity ranking algorithm is used. The goal of the algorithm is to track content popularity in real time, allowing routing decisions based on the requested content’s popularity. The algorithm is applied every decay update.
The algorithm uses current trending content to predict content popularity. The
field popularity_prediction_factor
regulates how much the algorithm should rely
on predicted popularity. A high prediction factor allows rising content to quickly
rise to high popularity but can also cause unpopular content with a sudden burst
of requests to wrongfully rise to the top. A low prediction factor can cause
stagnation in the popularity ranking, not allowing new popular content to rise
to the top.
Unpopular content decays in popularity, the magnitude of which is regulated by
popularity_decay_fraction
. A high value will aggressively decay content popularity
every decay update while a low value will bloat the ranking, causing stagnation.
Once content decays to a trivially low popularity score, it is pruned from the
content popularity list.
When configuring these tuning parameters, the most crucial data to consider is
the size of your asset catalog, i.e. the number of unique contents you offer.
The recommended values, obtained through testing, are presented in the table below.
Note that the field popularity_prediction_factor
is the principal factor in
controlling the algorithm’s behaviour.
| Catalog size n    | popularity_prediction_factor | popularity_decay_fraction |
|-------------------|------------------------------|---------------------------|
| n < 1000          | 2.2                          | 0.2                       |
| 1000 < n < 5000   | 2.3                          | 0.2                       |
| 5000 < n < 10000  | 2.5                          | 0.2                       |
| n > 10000         | 2.6                          | 0.2                       |
Time based algorithm
The time based algorithm only requires the configuration parameter intervals_per_hour. E.g., the value "intervals_per_hour": 10 would give 10 six-minute intervals per hour. During each interval, every unique content has an associated counter that increases by one for each incoming request. After an hour, all intervals have been cycled through; the counters in the first interval are then reset and incoming content requests start increasing the counters in the first interval again. This cycle continues indefinitely.
When determining a single content's popularity, the sum of that content's counters across all intervals is used to determine the popularity ranking.
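As a rough illustration of the bookkeeping described above, the hedged Lua sketch below mimics the cyclic counters; it is not the router's actual implementation, and the data layout is an assumption made purely for this example.

-- Hypothetical sketch of time based popularity counting.
-- With intervals_per_hour = 10, each interval covers 6 minutes.
local intervals_per_hour = 10
local counters = {}   -- counters[content][interval] = request count

local function current_interval()
    local minutes_into_hour = os.date('*t').min
    return math.floor(minutes_into_hour / (60 / intervals_per_hour)) + 1
end

local function record_request(content)
    local i = current_interval()
    counters[content] = counters[content] or {}
    -- A real implementation would also reset a counter when its interval is
    -- reused after a full hour; that bookkeeping is omitted here for brevity.
    counters[content][i] = (counters[content][i] or 0) + 1
end

local function popularity(content)
    -- The popularity ranking uses the sum of the counters across all intervals.
    local sum = 0
    for _, count in pairs(counters[content] or {}) do
        sum = sum + count
    end
    return sum
end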
1.1.5.5.2 - Consistent Hashing
Consistent hashing based routing is a feature that can be used to distribute requests to a set of hosts in a cache friendly manner. By using Agile Content’s consistent distributed hash algorithm, the amount of cache redistribution is minimized within a set of hosts. Requests for a content will always be routed to the same set of hosts, the amount of which is configured by the spread factor, allowing high cache usage. When adding or removing hosts, the algorithm minimizes cache redistribution.
Say you have the host group [s1, s2, s3, s4, s5]
and have
configured spreadFactor = 3
. A request for a content asset1
would then be
routed to the same three hosts with one of them being selected randomly for each
request. Requests for a different content asset2
would also be routed to one
of three different hosts, most likely a different combination of hosts than
requests for content asset1
.
Example routing results with spreadFactor = 3
:
- Request for
asset1
→ route to one of[s1, s3, s4]
. - Request for
asset2
→ route to one of[s2, s4, s5]
. - Request for
asset3
→ route to one of[s1, s2, s5]
.
Since consistent hashing based routing ensures that requests for a specific content always get routed to the same set of hosts, the risk of cache misses is lowered on the hosts, since they will be served the same content requests over and over again.
Note that the maximum value of
spreadFactor
is 64. Consequently, the highest number of hosts you can use in a consistentHashing rule block is 64.
Three different hashing algorithms are available: MD5
, SDBM
and Murmur
.
The algorithm is chosen during configuration.
Configuration
Configuring consistent hashing based routing is easily done using confcli. Let’s configure the example described above:
confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: consistentHashing
Adding a 'consistentHashing' element
rule : {
name (default: ): consistentHashingRule
type (default: consistentHashing):
spreadFactor (default: 1): 3
hashAlgorithm (default: MD5):
targets : [
target : {
target (default: ): s1
enabled (default: True):
}
Add another 'target' element to array 'targets'? [y/N]: y
target : {
target (default: ): s2
enabled (default: True):
}
Add another 'target' element to array 'targets'? [y/N]: y
target : {
target (default: ): s3
enabled (default: True):
}
Add another 'target' element to array 'targets'? [y/N]: y
target : {
target (default: ): s4
enabled (default: True):
}
Add another 'target' element to array 'targets'? [y/N]: y
target : {
target (default: ): s5
enabled (default: True):
}
Add another 'target' element to array 'targets'? [y/N]: n
]
}
Add another 'rule' element to array 'rules'? [y/N]: n
]
Generated config:
{
"rules": [
{
"name": "consistentHashingRule",
"type": "consistentHashing",
"spreadFactor": 3,
"hashAlgorithm": "MD5",
"targets": [
{
"target": "s1",
"enabled": true
},
{
"target": "s2",
"enabled": true
},
{
"target": "s3",
"enabled": true
},
{
"target": "s4",
"enabled": true
},
{
"target": "s5",
"enabled": true
}
]
}
]
}
Adding hosts
Adding a host to the list will give an additional target for the consistent hashing algorithm to route requests to. This will shift content distribution onto the new host.
confcli services.routing.rules.consistentHashingRule.targets -w
Running wizard for resource 'targets'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
targets : [
target : {
target (default: ): s6
enabled (default: True):
}
Add another 'target' element to array 'targets'? [y/N]: n
]
Generated config:
{
"targets": [
{
"target": "s6",
"enabled": true
}
]
}
Merge and apply the config? [y/n]: y
Removing hosts
There is one very important caveat when using a consistent hashing rule block. As long as you don't modify the list of hosts, the consistent hashing algorithm will keep routing requests to the same hosts. However, if you remove a host from the block in any position except the last, the consistent hashing algorithm's behaviour will change and it can no longer guarantee minimal cache redistribution.
If you’re in a situation where you have to remove a host from the routing
targets but want to keep the same consistent hashing behaviour, e.g. during
very high load, you’ll have to toggle that target’s enabled
field to false
.
E.g., disabling requests to s2
can be accomplished by:
$ confcli services.routing.rules.consistentHashingRule.targets.1.enabled false
services.routing.rules.consistentHashingRule.targets.1.enabled = False
$ confcli services.routing.rules.consistentHashingRule.targets.1
{
"1": {
"target": "s2",
"enabled": false
}
}
If you modify the list order or remove hosts, it is highly recommended to do so during periods when a higher rate of cache misses is acceptable.
1.1.5.5.3 - Security token verification
The security token verification feature allows for ESB3024 Router to only process requests that contain a correct security token. The token is generated by the client, for example in the portal, using an algorithm that it shares with the router. The router verifies the token and rejects the request if the token is incorrect.
It is beyond the scope of this document to describe how the token is generated, that is described in the Security Tokens application note that is installed with the ESB3024 Router’s extra documentation.
Setting up a routing rule
The token verification is performed by calling the verify_security_token()
function from a routing rule. The function returns 1
if the token is
correct, otherwise it returns 0
. It should typically be called from the
first routing rule, to make requests with bad tokens fail as early as possible.
The confcli example assumes that the router already has rules configured, with
an entry point named select_cdn
. Token verification is enabled by inserting an
“allow” rule first in the rule list.
confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: allow
Adding a 'allow' element
rule : {
name (default: ): token_verification
type (default: allow):
condition (default: always()): verify_security_token()
onMatch (default: ): select_cdn
}
Add another 'rule' element to array 'rules'? [y/N]: n
]
Generated config:
{
"rules": [
{
"name": "token_verification",
"type": "allow",
"condition": "verify_security_token()",
"onMatch": "select_cdn"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.entrypoint token_verification
services.routing.entrypoint = 'token_verification'
"routing": {
"id": "token_verification",
"member_order": "sequential",
"members": [
{
"id": "token_verification.0.select_cdn",
"member_order": "weighted",
"members": [
...
],
"weight_function": "return verify_security_token() ~= 0"
},
{
"id": "token_verification.1.rejected",
"member_order": "sequential",
"members": [],
"weight_function": "return 1"
}
],
"weight_function": "return 100"
},
Configuring security token options
The secret
parameter is not part of the router request, but needs to be
configured separately in the router. That can be done with the host-config
tool that is installed with the router.
Besides configuring the secret, host-config
can also configure floating
sessions and a URL prefix. Floating sessions are sessions that are not tied to a
specific IP address. When that is enabled, the token verification will not take
the IP address into account when verifying the token.
The security token verification is configured per host, where a host is the name
of the host that the request was sent to. This makes it possible for a router to
support multiple customer accounts, each with their own secret. If no
configuration is found for a host, a configuration with the name default
is
used.
host-config
supports three commands: print
, set
and delete
.
The print
command prints the current configuration for a host. The following
parameters are supported:
host-config print [-n <host-name>]
By default it prints the configuration for all hosts, but if the optional -n
flag is given it will print the configuration for a single host.
Set
The set
command sets the configuration for a host. The configuration is given
as command line parameters. The following parameters are supported:
host-config set
-n <host-name>
[-f floating]
[-p url-prefix]
[-r <secret-to-remove>]
[-s <secret-to-add>]
- -n <host-name> - The name of the host to configure.
- -f floating - A boolean option that specifies if floating sessions are accepted. The parameter accepts the values true and false.
- -p url-prefix - A URL prefix that is used for identifying requests that come from a certain account. This is not used when verifying tokens.
- -r <secret-to-remove> - A secret that should be removed from the list of secrets.
- -s <secret-to-add> - A secret that should be added to the list of secrets.
For example, to set the secret “secret-1” and enable floating sessions for the default host, the following command can be used:
host-config set -n default -s secret-1 -f true
The set
command only touches the configuration options that are mentioned on
the command line, so the following command line will add a second secret to the
default host without changing the floating session setting:
host-config set -n default -s secret-2
It is possible to set multiple secrets per host. This is useful when updating a secret, since both the old and the new secret can then be valid during the transition period. After the transition period the old secret can be removed by typing:
host-config set -n default -r secret-1
Delete
The delete
command deletes the configuration for a host. It supports the
following parameters:
host-config delete -n <host-name>
For example, to delete the configuration for example.com
, the following
command can be used:
host-config delete -n example.com
Global options
host-config
also has a few global options. They are:
- -k <security-key> - The security key that is used when communicating with the router. This is normally retrieved automatically.
- -h - Print a help message and exit.
- -r <router> - The router to connect to. This defaults to localhost, but can be changed to connect to a remote router.
- -v - Verbose output, can be given multiple times.
Debugging security token verification
The security token verification only logs messages when the log level is set to 4 or higher, and then only some errors are logged. It is possible to enable more verbose logging using the security-token-config tool that is installed together with the router.
When verbose logging is enabled, the router will log information about the token verification, including the configured token secrets, so it needs to be used with care.
The logged lines are prefixed with verify_security_token
.
The security-token-config
tool supports the commands print
and set
.
The print
command prints the current configuration. If nothing is configured
it will not print anything.
Set
The set
command sets the configuration. The following parameters are
supported:
security-token-config set
[-d <enabled>]
- -d <enabled> - A boolean option that specifies if debug logging should be enabled or not. The parameter accepts the values true and false.
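For example, based on the parameter above, enabling debug logging would look something like this (passing false instead disables it again):

security-token-config set -d true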
1.1.5.5.4 - Subnets API
ESB3024 Router provides utilities to quickly match clients into subnets. Any combination of IPv4 and IPv6 addresses can be used. To begin, a JSON file defining all subnets is needed, e.g.:
{
"255.255.255.255/24": "area1",
"255.255.255.255/16": "area2",
"255.255.255.255/8": "area3",
"90.90.1.3/16": "area4",
"5.5.0.4/8": "area5",
"2a02:2e02:9bc0::/48": "area6",
"2a02:2e02:9bc0::/32": "area7",
"2a02:2e02:9bc0::/16": "area8",
"2a02:2e02:9de0::/44": "combined_area",
"2a02:2e02:ada0::/44": "combined_area"
}
and PUT it to the endpoint :5001/v1/subnets or :5001/v2/subnets (the API version does not matter for subnets):
curl -k -T subnets.json -H "Content-Type: application/json" https://router-host:5001/v1/subnets
Note that it is possible for several subnet CIDR strings to share the same label, effectively grouping them together.
The router provides the built-in function in_subnet(subnet_name) that can be used to make routing decisions based on a client's subnet. For more details, see Built-in Lua functions.
To configure a rule that only allows clients in the area1
subnet, run the
command
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: allow
Adding a 'allow' element
rule : {
name (default: ): only_allow_area1
type (default: allow):
condition (default: always()): in_subnet('area1')
onMatch (default: ): example-host
}
Add another 'rule' element to array 'rules'? [y/N]: n
]
Generated config:
{
"rules": [
{
"name": "only_allow_area1",
"type": "allow",
"condition": "in_subnet('area1')",
"onMatch": "example-host"
}
]
}
Merge and apply the config? [y/n]: y
Invalid IP addresses will be omitted during subnet list construction, accompanied by a log message displaying the invalid IP address.
1.1.5.5.5 - Lua Features
Detailed descriptions and examples of Lua features offered by ESB3024 Router.
1.1.5.5.5.1 - Built-in Lua Functions
This section details all built-in Lua functions provided by the router.
Logging functions
The router provides Lua logging functionality that is convenient when creating custom Lua functions. A prefix can be added to each log message, which is useful for differentiating log messages coming from different Lua files. At the top of the Lua source file, add the line
local log = log.add_prefix("my_lua_file")
to prepend all log messages with "my_lua_file"
.
The logging functions support formatting and common log levels:
log.critical('A log message with number %d', 1.5)
log.error('A log message with string %s', 'a string')
log.warning('A log message with integer %i', 1)
log.info('A log message with a local number variable %d', some_local_number)
log.debug('A log message with a local string variable %s', some_local_string)
log.trace('A log message with a local integer variable %i', some_local_integer)
log.message('A log message')
Many of the router’s built-in Lua functions use the logging functions.
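As a hedged example of how the prefix and the log levels fit together inside a custom Lua file, consider the sketch below; the file name, function name and message text are made up for illustration.

-- my_lua_file.lua: hypothetical custom function combining the logging
-- helpers with the si() built-in.
local log = log.add_prefix("my_lua_file")

function log_current_load()
    local load = si('europe_load_mbps')
    log.info('current reported load is %d megabits per second', load)
    -- Return a non-negative integer so the function can be used from a rule.
    return load
end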
Predictive load balancing functions
Predictive load balancing is a tool that can be used to avoid overloading hosts with traffic. Consider the case where a popular event starts at a certain time, let's say 12 PM. A spike in traffic will be routed to the hosts that are streaming the content at 12 PM, most of them starting at low bitrates. A host might have sufficient bandwidth left to take on more clients, but when the recently connected clients start ramping up in video quality and increase their bitrate, the host can quickly become overloaded, possibly dropping incoming requests or going offline. Predictive load balancing addresses this issue by also considering how many times a host has recently been redirected to.
Four functions for predictive load balancing are provided by the router that can be used when constructing conditions and weight functions: host_bitrate(), host_bitrate_custom(), host_has_bw() and host_has_bw_custom().
All require data to be supplied to the selection input API and apply only to leaf nodes in the routing tree. In order for predictive load balancing to work properly, the data must be updated at regular intervals. The data needs to be supplied by the target system.
These functions are suitable to be used as host health checks. To configure host health checks, see configuring CDNs and hosts.
Note that host_bitrate() and host_has_bw() rely on data supplied by metrics agents, detailed in Cache hardware metrics: monitoring and routing. host_bitrate_custom() and host_has_bw_custom() rely on manually supplied selection input data, detailed in selection input API. The bitrate unit depends on the data submitted to the selection input API.
Example metrics
The data supplied to the selection input API by the metrics agents uses the following structure:
{
"streamer-1": {
"hardware_metrics": {
"/": {
"free": 1741596278784,
"total": 1758357934080,
"used": 16761655296,
"used_percent": 0.9532561585516977
},
"cpu_load1": 0.02,
"cpu_load15": 0.12,
"cpu_load5": 0.02,
"mem_available": 4895789056,
"mem_available_percent": 59.551760354263074,
"mem_total": 8221065216,
"mem_used": 2474393600,
"n_cpus": 4
},
"per_interface_metrics": {
"eths1": {
"link": 1,
"interface_up": true,
"megabits_sent": 22322295739.378456,
"megabits_sent_rate": 8085.2523952,
"speed": 100000
}
}
}
}
Note that all built-in functions interacting with selection input values support indexing into nested selection input data. Consider the selection input data above. The nested values can be accessed by using dots between the keys:
si('streamer-1.per_interface_metrics.eths1.megabits_sent_rate')
Note that the whole selection input variable name must be within single quotes. The function si() is documented under general purpose functions.
host_bitrate({})
host_bitrate()
returns the predicted bitrate (in megabits per second) of
the host after the recently connected clients start ramping up in streaming
quality. The function accepts an argument table with the following keys:
- interface: The name of the interface to use for bitrate prediction.
- Optional avg_bitrate: The average bitrate per client, defaults to 6 megabits per second.
- Optional num_routers: The number of routers that can route to this host, defaults to 1. This is important to accurately predict the incoming load if multiple routers are used.
- Optional host: The name of the host to use for bitrate prediction. Defaults to the current host if not provided.
Required selection input data
This function relies on the field megabits_sent_rate, supplied by the Telegraf metrics agent, as seen in example metrics. If this field is missing from your selection input data, this function will not work.
Examples of usage:
host_bitrate({interface='eths0'})
host_bitrate({avg_bitrate=1, interface='eths0'})
host_bitrate({num_routers=2, interface='eths0'})
host_bitrate({avg_bitrate=1, num_routers=4, interface='eths0'})
host_bitrate({avg_bitrate=1, num_routers=4, host='custom_host', interface='eths0'})
host_bitrate({})
calculates the predicted bitrate as:
predicted_host_bitrate = current_host_bitrate + (recent_connections * avg_bitrate * num_routers)
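As an illustration only, the prediction can be wrapped in a custom condition; the function name, the interface eths1 and the 9000 Mbps limit below are assumptions made for this example, not router defaults:
-- Hypothetical condition: 1 while the predicted bitrate stays below 9000 Mbps
function below_predicted_limit()
  local predicted = host_bitrate({interface='eths1', avg_bitrate=6, num_routers=2})
  if predicted < 9000 then
    return 1
  end
  return 0
end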
host_bitrate_custom({})
Same functionality as host_bitrate() but uses a custom selection input variable as bitrate input instead of accessing hardware metrics. The function accepts an argument table with the following keys:
- custom_bitrate_var: The name of the selection input variable to be used for accessing the current host bitrate.
- Optional avg_bitrate: See host_bitrate() documentation above.
- Optional num_routers: See host_bitrate() documentation above.
host_bitrate_custom({custom_bitrate_var='host1_current_bitrate'})
host_bitrate_custom({avg_bitrate=1, custom_bitrate_var='host1_current_bitrate'})
host_bitrate_custom({num_routers=4, custom_bitrate_var='host1_current_bitrate'})
host_has_bw({})
Instead of accessing the predicted bitrate of a host through host_bitrate(), host_has_bw() returns 1 if the host is predicted to have enough bandwidth left to take on more clients after recent connections ramp up in bitrate, otherwise it returns 0. The function accepts an argument table with the following keys:
- interface: See host_bitrate() documentation above.
- Optional avg_bitrate: See host_bitrate() documentation above.
- Optional num_routers: See host_bitrate() documentation above.
- Optional host: See host_bitrate() documentation above.
- Optional margin: The bitrate (megabits per second) headroom that should be taken into account during the calculation, defaults to 0.
host_has_bw({})
returns whether or not the following statement is true:
predicted_host_bitrate + margin < host_bitrate_capacity
Required selection input data
host_has_bw({}) relies on the fields megabits_sent_rate and speed, supplied by the Telegraf metrics agent, as seen in example metrics. If these fields are missing from your selection input data, this function will not work.
Examples of usage:
host_has_bw({interface='eths0'})
host_has_bw({margin=10, interface='eth0'})
host_has_bw({avg_bitrate=1, interface='eth0'})
host_has_bw({num_routers=4, interface='eth0'})
host_has_bw({host='custom_host', interface='eth0'})
host_has_bw_custom({})
Same functionality as host_has_bw() but uses a custom selection input variable as bitrate. It also uses a number or a custom selection input variable for the capacity. The function accepts an argument table with the following keys:
- custom_capacity_var: A number representing the capacity of the network interface OR the name of the selection input variable to be used for accessing host capacity.
- custom_bitrate_var: See host_bitrate_custom() documentation above.
- Optional margin: See host_has_bw() documentation above.
- Optional avg_bitrate: See host_bitrate() documentation above.
- Optional num_routers: See host_bitrate() documentation above.
Examples of usage:
host_has_bw_custom({custom_capacity_var=10000, custom_bitrate_var='streamer-1.per_interface_metrics.eths1.megabits_sent_rate'})
host_has_bw_custom({custom_capacity_var='host1_capacity', custom_bitrate_var='streamer-1.per_interface_metrics.eths1.megabits_sent_rate'})
host_has_bw_custom({margin=10, custom_capacity_var=10000, custom_bitrate_var='streamer-1.per_interface_metrics.eths1.megabits_sent_rate'})
host_has_bw_custom({avg_bitrate=1, custom_capacity_var=10000, custom_bitrate_var='streamer-1.per_interface_metrics.eths1.megabits_sent_rate'})
host_has_bw_custom({num_routers=4, custom_capacity_var=10000, custom_bitrate_var='streamer-1.per_interface_metrics.eths1.megabits_sent_rate'})
Health check functions
This section details built-in Lua functions that are meant to be used for host health checks. Note that these functions rely on data supplied by metrics agents, detailed in Cache hardware metrics: monitoring and routing. Make sure cache hardware metrics are supplied to the router before using any of these functions.
cpu_load_ok({})
The function accepts an optional argument table with the following keys:
- Optional host: The name of the host. Defaults to the name of the selected host if not provided.
- Optional cpu_load5_limit: The acceptable limit for the 5-minute CPU load. Defaults to 0.9 if not provided.
The function returns 1 if the 5-minute CPU load average is below the limit, and 0 otherwise.
Examples of usage:
cpu_load_ok()
cpu_load_ok({host = 'custom_host'})
cpu_load_ok({cpu_load5_limit = 0.8})
cpu_load_ok({host = 'custom_host', cpu_load5_limit = 0.8})
memory_usage_ok({})
The function accepts an optional argument table with the following keys:
- Optional host: The name of the host. Defaults to the name of the selected host if not provided.
- Optional memory_usage_limit: The acceptable limit for the memory usage. Defaults to 0.9 if not provided.
The function returns 1 if the memory usage is below the limit, and 0 otherwise.
Examples of usage:
memory_usage_ok()
memory_usage_ok({host = 'custom_host'})
memory_usage_ok({memory_usage_limit = 0.7})
memory_usage_ok({host = 'custom_host', memory_usage_limit = 0.7})
interfaces_online({})
The function accepts an argument table with the following keys:
- Required interfaces: A string or a table of strings representing the network interfaces to check.
- Optional host: The name of the host. Defaults to the name of the selected host if not provided.
The function returns 1 if all the specified interfaces are online, and 0 otherwise.
Required selection input data
This function relies on the fields link and interface_up, supplied by the Telegraf metrics agent, as seen in example metrics. If these fields are missing from your selection input data, this function will not work.
Examples of usage:
interfaces_online({interfaces = 'eth0'})
interfaces_online({interfaces = {'eth0', 'eth1'}})
interfaces_online({host = 'custom_host', interfaces = 'eth0'})
interfaces_online({host = 'custom_host', interfaces = {'eth0', 'eth1'}})
health_check({})
The function accepts an argument table with the following keys:
- Required interfaces: A string or a table of strings representing the network interfaces to check.
- Optional host: The name of the host. Defaults to the name of the selected host if not provided.
- Optional cpu_load5_limit: The acceptable limit for the 5-minute CPU load. Defaults to 0.9 if not provided.
- Optional memory_usage_limit: The acceptable limit for the memory usage. Defaults to 0.9 if not provided.
The function calls the health check functions cpu_load_ok({}), memory_usage_ok({}) and interfaces_online({}). It returns 1 if all of these functions return 1, otherwise it returns 0.
Examples of usage:
health_check({interfaces = 'eths0'})
health_check({host = 'custom_host', interfaces = 'eths0'})
health_check({cpu_load5_limit = 0.7, memory_usage_limit = 0.8, interfaces = 'eth0'})
health_check({host = 'custom_host', cpu_load5_limit = 0.7, memory_usage_limit = 0.8, interfaces = {'eth0', 'eth1'}})
General purpose functions
The router supplies a number of general purpose Lua functions.
always()
Always returns 1.
never()
Always returns 0. Useful for temporarily disabling caches by using it as a health check.
Examples of usage:
always()
never()
si(si_name)
The function reads the value of the selection input variable si_name and returns it if it exists, otherwise it returns 0. The function accepts a string argument for the selection input variable name.
Examples of usage:
si('some_selection_input_variable_name')
si('streamer-1.per_interface_metrics.eths1.megabits_sent_rate')
Comparison functions
All comparison functions use the form function(si_name, value) and compare the selection input value named si_name with value.
ge(si_name, value) - greater than or equal
gt(si_name, value) - greater than
le(si_name, value) - less than or equal
lt(si_name, value) - less than
eq(si_name, value) - equal to
neq(si_name, value) - not equal to
Examples of usage:
ge('streamer-1.hardware_metrics.mem_available_percent', 30)
gt('streamer-1.hardware_metrics./.free', 1000000000)
le('streamer-1.hardware_metrics.cpu_load5', 0.8)
lt('streamer-1.per_interface_metrics.eths1.megabits_sent_rate', 9000)
eq('streamer-1.per_interface_metrics.eths1.link', 1)
neq('streamer-1.hardware_metrics.n_cpus', 4)
Session checking functions
in_subnet(subnet)
Returns 1 if the current session belongs to subnet
, otherwise it returns 0.
See Subnets API for more details on how to use
subnets in routing. The function accepts a string argument for the subnet name.
Examples of usage:
in_subnet('stockholm')
in_subnet('unserviced_region')
in_subnet('some_other_subnet')
The following functions check the current session's session groups.
in_session_group(session_group)
Returns 1 if the current session has been classified into session_group
,
otherwise it returns 0. The function accepts a string argument for the session
group name.
in_any_session_group({})
Returns 1 if the current session has been classified into any of
session_groups
, otherwise it returns 0. The function accepts a table array of
strings as argument for the session group names.
in_all_session_groups({})
Returns 1 if the current session has been classified into all of
session_groups
, otherwise it returns 0. The function accepts a table array of
strings as argument for the session group names.
Examples of usage:
in_session_group('safari_browser')
in_any_session_group({ 'in_europe', 'in_asia'})
in_all_session_groups({'vod_content', 'in_america'})
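These checks can also be combined into a custom condition helper; the subnet and session group names below are only illustrative:
-- Hypothetical condition: 1 only for VOD sessions originating from the 'stockholm' subnet
function vod_in_stockholm()
  if in_subnet('stockholm') == 1 and in_session_group('vod_content') == 1 then
    return 1
  end
  return 0
end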
Other built-in functions
base64_encode(data)
base64_encode(data)
returns the base64 encoded string of data
.
Arguments:
data
: The data to encode.
Example:
print(base64_encode('Hello world!'))
SGVsbG8gd29ybGQh
base64_decode(data)
base64_decode(data)
returns the decoded data of the base64 encoded string, as
a raw binary string.
Arguments:
data
: The data to decode.
Example:
print(base64_decode('SGVsbG8gd29ybGQh'))
Hello world!
base64_url_encode(data)
base64_url_encode(data)
returns the base64 URL encoded string of data
.
Arguments:
data
: The data to encode.
Example:
print(base64_url_encode('ab~~'))
YWJ-fg
base64_url_decode(data)
base64_url_decode(data)
returns the decoded data of the base64 URL encoded
string, as a raw binary string.
Arguments:
data
: The data to decode.
Example:
print(base64_url_decode('YWJ-fg'))
ab~~
to_hex_string(data)
to_hex_string(data)
returns a string containing the hexadecimal
representation of the string data
.
Arguments:
data
: The data to convert.
Example:
print(to_hex_string('Hello world!\n'))
48656c6c6f20776f726c64210a
from_hex_string(data)
from_hex_string(data)
returns a string containing the byte representation of
the hexadecimal string data
.
Arguments:
data
: The data to convert.
Example:
print(from_hex_string('48656c6c6f20776f726c6421'))
Hello world!
empty(table)
empty(table)
returns true if table
is empty, otherwise it returns false.
Arguments:
table
: The table to check.
Examples:
print(tostring(empty({})))
true
print(tostring(empty({1, 2, 3})))
false
md5(data)
md5(data)
returns the MD5 hash of data
, as a hexstring.
Arguments:
data
: The data to hash.
Example:
print(md5('Hello world!'))
86fb269d190d2c85f6e0468ceca42a20
sha256(data)
sha256(data)
returns the SHA-256 hash of data
, as a hexstring.
Arguments:
data
: The data to hash.
Example:
print(sha256('Hello world!'))
c0535e4be2b79ffd93291305436bf889314e4a3faec05ecffcbb7df31ad9e51a
hmac_sha256(key, data)
hmac_sha256(key, data)
returns the HMAC-SHA-256 hash of data
using key
,
as a base64 encoded string.
Note: This function is to be modified to return raw binary data instead of a base64 encoded string.
Arguments:
key: The key to use.
data: The data to hash.
Example:
print(hmac_sha256('secret', 'Hello world!'))
pl9M/PX0If8r4FLgZCvMvP6xJu5z68T+OzgZZDAutjI=
hmac_sha384(key, data)
hmac_sha384(key, data)
returns the HMAC-SHA-384 hash of data
using key
,
as a string containing raw binary data.
Arguments:
key: The key to use.
data: The data to hash.
Example:
print(to_hex_string(hmac_sha384('secret', 'Hello world!')))
917516d93d3509a371a129ca50933195dd659712652f07ba5792cbd5cade5e6285a841808842cfa0c3c69c8fb234468a
hmac_sha512(key, data)
hmac_sha512(key, data)
returns the HMAC-SHA-512 hash of data
using key
,
as a string containing raw binary data.
Arguments:
key: The key to use.
data: The data to hash.
Example:
print(to_hex_string(hmac_sha512('secret', 'Hello world!')))
dff6c00943387f9039566bfee0994de698aa2005eecdbf12d109e17aff5bbb1b022347fbf4bd94ede7c7d51571022525556b64f9d5e4386de99d0025886eaaff
hmac_md5(key, data)
hmac_md5(key, data)
returns the HMAC-MD5 hash of data
using key
,
as a string containing raw binary data.
Arguments:
key: The key to use.
data: The data to hash.
Example:
print(to_hex_string(hmac_md5('secret', 'Hello world!')))
444fad0d374d14369d6b595062da5d91
regex_replace
regex_replace(data, pattern, replacement)
returns the string data
with all
occurrences of the regular expression pattern
replaced with replacement
.
Arguments:
data: The data to replace.
pattern: The regular expression pattern to match.
replacement: The replacement string.
Examples:
print(regex_replace('Hello world!', 'world', 'Lua'))
Hello Lua!
print(regex_replace('Hello world!', 'l+', 'lua'))
Heluao worluad!
If the regular expression pattern is invalid, regex_replace()
returns an
error message.
Examples:
print(regex_replace('Hello world!', '*', 'lua'))
regex_error caught: regex_error
unixtime()
unixtime() returns the current Unix timestamp, the number of seconds since midnight, January 1, 1970 UTC, as an integer.
Arguments:
- None
Example:
print(unixtime())
1733517373
now()
now() returns the current Unix timestamp, the number of seconds since midnight, January 1, 1970 UTC, as a number with decimals.
Arguments:
- None
Example:
print(now())
1733517373.5007
timeToEpoch(time, fmt)
timeToEpoch(time, fmt) returns the Unix timestamp, the number of seconds since midnight, January 1, 1970 UTC, of the time string time, which is formatted according to the format string fmt.
Note: This function is scheduled to be renamed to time_to_epoch()
.
Arguments:
time: The time string to convert.
fmt (Optional): The format string of the time string, as specified by the POSIX function strptime(). If not specified, it defaults to “%Y-%m-%dT%TZ”.
Examples:
print(timeToEpoch('1972-04-17T06:10:20Z'))
72339020
print(timeToEpoch('17/04-72 06:20:30', '%d/%m-%y %H:%M:%S'))
72339630
epochToTime(time, format)
epochToTime(time, format)
returns the time string of the Unix timestamp
time
, formatted according to format
.
Note: This function is scheduled to be renamed to epoch_to_time()
.
Arguments:
time: The Unix timestamp to convert, as a number.
format (Optional): The format string of the time string, as specified by the POSIX function strftime(). If not specified, it defaults to “%Y-%m-%dT%TZ”.
Examples:
print(epochToTime(123456789))
1973-11-29T21:33:09Z
print(epochToTime(1234567890, '%d/%m-%y %H:%M:%S'))
13/02-09 23:31:30
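As a sketch of how the time functions can be combined, the following condition compares a timestamp carried in the request against the current time; the expires query parameter and its ISO 8601 format are assumptions made for the example:
-- Hypothetical condition: 1 while the 'expires' query parameter lies in the future
function not_expired()
  local expires = request_query_params['expires']
  if expires == nil then
    return 0
  end
  if timeToEpoch(expires) > unixtime() then
    return 1
  end
  return 0
end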
get_consistent_hashing_weight(contentName, nodeIdsString, spreadFactor, hashAlgorithm, nodeId)
get_consistent_hashing_weight(contentName, nodeIdsString, spreadFactor, hashAlgorithm, nodeId) returns the priority that node nodeId has in the list of preferred nodes, determined using consistent hashing. The first spreadFactor nodes have equal weights to randomize requests between them. Remaining nodes have decrementally decreasing weights to honor node priority during failover.
Arguments:
contentName: The name of the content to hash.
nodeIdsString: A string containing the node IDs to hash, in the format '0,1,2,3'.
spreadFactor: The number of nodes to spread the requests between.
hashAlgorithm: Which hash algorithm to use. Supported algorithms are “MD5”, “SDBM” and “Murmur”. Default is “MD5”.
nodeId: The ID of the node to calculate the weight for.
Examples:
print(get_consistent_hashing_weight('/vod/film1', '0,1,2,3,4,5', 3, 'MD5', 3))
6
print(get_consistent_hashing_weight('/vod/film2', '0,1,2,3,4,5', 3, 'MD5', 3))
4
print(get_consistent_hashing_weight('/vod/film2', '0,1,2', 2, 'Murmur', 1))
2
See Consistent Hashing for more information about consistent hashing.
expand_ipv6_address(address)
expand_ipv6_address(address)
returns the fully expanded form of the IPv6
address address
.
Arguments:
address: The IPv6 address to expand. If the address is not a valid IPv6 address, the function returns the contents of address unmodified. This allows the function to pass through IPv4 addresses.
Examples:
print(expand_ipv6_address('2001:db8::1'))
2001:0db8:0000:0000:0000:0000:0000:0001
print(expand_ipv6_address('198.51.100.5'))
198.51.100.5
Configuration examples
Many of the functions documented here are suitable to be used in host health checks. To configure host health checks, see configuring CDNs and hosts. Here are some configuration examples of using the built-in Lua functions, utilizing the example metrics:
"healthChecks": [
"gt('streamer-1.hardware_metrics.mem_available_percent', 20)", // More than 20% memory is left
"lt('streamer-1.per_interface_metrics.eths1.megabits_sent_rate', 9000)", // Current bitrate is lower than 9000 Mbps
"host_has_bw({host='streamer-1', interface='eths1', margin=1000})", // host_has_bw() uses 'streamer-1.per_interface_metrics.eths1.speed' to determine if there is enough bandwidth left with a 1000 Mbps margin
"interfaces_online({host='streamer-1', interfaces='eths1'})",
"memory_usage_ok({host='streamer-1'})",
"cpu_load_ok({host='streamer-1'})",
"health_check({host='streamer-1', interfaces='eths1'})" // Combines interfaces_online(), memory_usage_ok(), cpu_load_ok()
]
1.1.5.5.5.2 - Global Lua Tables
There are multiple global tables containing important data available while writing Lua code for the router.
selection_input
Contains arbitrary, custom fields fed into the router by clients, see API overview for details on how to inject data into this table.
Note that the selection_input
table is iterable.
Usage examples:
print(selection_input['some_value'])
-- Iterate over table
if selection_input then
for k, v in pairs(selection_input) do
print('here is '..'selection_input!')
print(k..'='..v)
end
else
print('selection_input is nil')
end
session_groups
Defines a mapping from session group name to boolean
, indicating whether
the session belongs to the session group or not.
Usage examples:
if session_groups.vod then print('vod') else print('not vod') end
if session_groups['vod'] then print('vod') else print('not vod') end
session_count
Provides counters of the number of sessions per session type and session group. The table uses the structure session_count.<session_type>.<session_group>.
Usage examples:
print(session_count.instream.vod)
print(session_count.initial.vod)
qoe_score
Provides the quality of experience score per host per session group. The table
uses the structure qoe_score.<host>.<session_group>
.
Usage examples:
print(qoe_score.host1.vod)
print(qoe_score.host1.live)
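For illustration, these tables can be read from a custom weight function; the host name host1, the session group vod and the numeric thresholds below are arbitrary example values:
-- Hypothetical weight helper: favor host1 while its VOD QoE score is high and it
-- is not already serving too many in-stream VOD sessions
function host1_vod_weight()
  local score = qoe_score.host1 and qoe_score.host1.vod or 0
  local instream_vod = session_count.instream and session_count.instream.vod or 0
  if score > 3 and instream_vod < 1000 then
    return 100
  end
  return 1
end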
request
Contains data related to the HTTP request between the client and the router.
request.method
- Description: HTTP request method.
- Type:
string
- Example:
'GET'
,'POST'
request.body
- Description: HTTP request body string.
- Type:
string
ornil
- Example:
'{"foo": "bar"}'
request.major_version
- Description: Major HTTP version, such as x in HTTP/x.1.
- Type: integer
- Example: 1
request.minor_version
- Description: Minor HTTP version, such as x in HTTP/1.x.
- Type: integer
- Example: 1
request.protocol
- Description: Transfer protocol variant.
- Type:
string
- Example:
'HTTP'
,'HTTPS'
request.client_ip
- Description: IP address of the client issuing the request.
- Type:
string
- Example:
'172.16.238.128'
request.path_with_query_params
- Description: Full request path including query parameters.
- Type:
string
- Example:
'/mycontent/superman.m3u8?b=y&c=z&a=x'
request.path
- Description: Request path without query parameters.
- Type:
string
- Example:
'/mycontent/superman.m3u8'
request.query_params
- Description: The query parameter string.
- Type:
string
- Example:
'b=y&c=z&a=x'
request.filename
- Description: The part of the path following the final slash, if any.
- Type:
string
- Example:
'superman.m3u8'
request.subnet
- Description: Subnet of client_ip.
- Type: string or nil
- Example: 'all'
session
Contains data related to the current session.
session.client_ip
- Description: Alias for request.client_ip. See documentation for table request above.
session.path_with_query_params
- Description: Alias for request.path_with_query_params. See documentation for table request above.
session.path
- Description: Alias for request.path. See documentation for table request above.
session.query_params
- Description: Alias for request.query_params. See documentation for table request above.
session.filename
- Description: Alias for request.filename. See documentation for table request above.
session.subnet
- Description: Alias for request.subnet. See documentation for table request above.
session.host
- Description: ID of the currently selected host for the session.
- Type:
string
ornil
- Example:
'host1'
session.id
- Description: ID of the session.
- Type:
string
- Example:
'8eb2c1bdc106-17d2ff-00000000'
session.session_type
- Description: Type of the session.
- Type:
string
- Example:
'initial'
or'instream'
. Identical to the value of theType
argument of the session translation function.
session.is_managed
- Description: Identifies managed sessions.
- Type: boolean
- Example: true if Type/session.session_type is 'instream'
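A short sketch of reading these tables from custom Lua code, for example inside a condition; the log prefix is arbitrary and the fields are the ones documented above:
-- Log a summary of the current request and session
local log = log.add_prefix("request_debug")
log.info('client %s requested %s (session %s, type %s)',
         request.client_ip, request.path, session.id, session.session_type)
if session.subnet ~= nil then
  log.debug('client subnet is %s', session.subnet)
end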
request_headers
Contains the headers from the request between the client and the router, keyed by name.
Usage example:
print(request_headers['User-Agent'])
request_query_params
Contains the query parameters from the request between the client and the router, keyed by name.
Usage example:
print(request_query_params.a)
session_query_params
Alias for metatable request_query_params
.
response
Contains data related to the outgoing response apart from the headers.
response.body
- Description: HTTP response body string.
- Type:
string
ornil
- Example:
'{"foo": "bar"}'
response.code
- Description: HTTP response status code.
- Type:
integer
- Example:
200
,404
response.text
- Description: HTTP response status text.
- Type:
string
- Example:
'OK'
,'Not found'
response.major_version
- Description: Major HTTP version, such as x in HTTP/x.1.
- Type: integer
- Example: 1
response.minor_version
- Description: Minor HTTP version, such as x in HTTP/1.x.
- Type: integer
- Example: 1
response.protocol
- Description: Transfer protocol variant.
- Type:
string
- Example:
'HTTP'
,'HTTPS'
response_headers
Contains the response headers keyed by name.
Usage example:
print(response_headers['User-Agent'])
1.1.5.5.5.3 - Request Translation Function
Specifies the body of a Lua function that inspects every incoming HTTP request and overwrites individual fields before further processing by the router.
Returns nil
when nothing is to be changed, or HTTPRequest(t)
where t
is a table with any of the following optional fields:
Method
- Description: Replaces the HTTP request method in the request being processed.
- Type:
string
- Example:
'GET'
,'POST'
Path
- Description: Replaces the request path in the request being processed.
- Type:
string
- Example:
'/mycontent/superman.m3u8'
ClientIp
- Description: Replaces client IP address in the request being processed.
- Type:
string
- Example:
'172.16.238.128'
Body
- Description: Replaces body in the request being processed.
- Type:
string
ornil
- Example:
'{"foo": "bar"}'
QueryParameters
- Description: Adds, removes or replaces individual query parameters in the request being processed.
- Type: nested
table
(indexed by number) representing an array of query parameters as{[1]='Name',[2]='Value'}
pairs that are added to the request being processed, or overwriting existing query parameters with colliding names. To remove a query parameter from the request, specifynil
as value, i.e.QueryParameters={..., {[1]='foo',[2]=nil} ...}
. Returning a query parameter with a name but no value, such asa
in the request'/index.m3u8?a&b=22'
is currently not supported.
Headers
- Description: Adds, removes or replaces individual headers in the request being processed.
- Type: nested
table
(indexed by number) representing an array of request headers as{[1]='Name',[2]='Value'}
pairs that are added to the request being processed, or overwriting existing request headers with colliding names. To remove a header from the request, specifynil
as value, i.e.Headers={..., {[1]='foo',[2]=nil} ...}
. Duplicate names are supported. A multi-value header such asFoo: bar1,bar2
is defined by specifyingHeaders={..., {[1]='foo',[2]='bar1'}, {[1]='foo',[2]='bar2'}, ...}
.
Example of a request_translation_function
body that sets the request path
to a hardcoded value and adds the hardcoded query parameter a=b
:
-- Statements go here
print('Setting hardcoded Path and QueryParameters')
return HTTPRequest({
Path = '/content.mpd',
QueryParameters = {
{'a','b'}
}
})
Arguments
The following (iterable) arguments will be known by the function:
QueryParameters
Type: nested
table
(indexed by number).Description: Array of query parameters as
{[1]='Name',[2]='Value'}
pairs that were present in the query string of the request. Format identical to theHTTPRequest.QueryParameters
-field specified for the return value above.Example usage:
for _, queryParam in pairs(QueryParameters) do print(queryParam[1]..'='..queryParam[2]) end
Headers
Type: nested
table
(indexed by number).Description: Array of request headers as
{[1]='Name',[2]='Value'}
pairs that were present in the request. Format identical to theHTTPRequest.Headers
-field specified for the return value above. A multi-value header such as Foo: bar1,bar2 is seen in request_translation_function as Headers={..., {[1]='foo',[2]='bar1'}, {[1]='foo',[2]='bar2'}, ...}.
Example usage:
for _, header in pairs(Headers) do print(header[1]..'='..header[2]) end
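As a further sketch, a request_translation_function body could copy a header value into a query parameter; the X-Device-Id header and the device parameter name are hypothetical:
-- Copy a hypothetical X-Device-Id header into a 'device' query parameter
for _, header in pairs(Headers) do
  if header[1] == 'X-Device-Id' then
    return HTTPRequest({
      QueryParameters = {
        {'device', header[2]}
      }
    })
  end
end
return nil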
Additional data
In addition to the arguments above, the following Lua tables, documented in Global Lua Tables, provide additional data that is available when executing the request translation function:
If the request translation function modifies the request, the request
,
request_query_params
and request_headers
tables will be updated with the
modified request and made available to the routing rules.
1.1.5.5.5.4 - Session Translation Function
Specifies the body of a Lua function that inspects a newly created session and may override its suggested type from “initial” to “instream” or vice versa. A number of helper functions are provided to simplify changing the session type.
Returns nil
when the session type is to remain unchanged, or Session(t)
where t
is a table with a single field:
Type
- Description: New type of the session.
- Type:
string
- Example:
'instream'
,'initial'
Basic Configuration
It is possible to configure the maximum number of simultaneous managed sessions
on the router. If the maximum number is reached, no more managed sessions can be
created. Using confcli
, it can be configured by running
$ confcli services.routing.tuning.general.maxActiveManagedSessions
{
"maxActiveManagedSessions": 1000
}
$ confcli services.routing.tuning.general.maxActiveManagedSessions 900
services.routing.tuning.general.maxActiveManagedSessions = 900
Common Arguments
While executing the session translation function, the following arguments are available:
Type
: The current type of the session ('instream'
or'initial'
).
Usage examples:
-- Flip session type
local newType = 'initial'
if Type == 'initial' then
newType = 'instream'
end
print('Changing session type from ' .. Type .. ' to ' .. newType)
return Session({['Type'] = newType})
Session Translation Helper Functions
The standard Lua library provides four helper functions to simplify the configuration of the session translation function:
set_session_type(session_type)
This function will set the session type to the supplied session_type, provided that the maximum number of sessions of that type has not been reached.
Parameters
session_type
: The type of session to create, possible values are ‘initial’ or ‘instream’.
Usage Examples
return set_session_type('instream')
return set_session_type('initial')
set_session_type_if_in_group(session_type, session_group)
This function will set the session type to the supplied session_type
if the
session is part of session_group
and the maximum number of sessions of that
type has not been reached.
Parameters
session_type
: The type of session to create, possible values are ‘initial’ or ‘instream’.session_group
: The name of the session group.
Usage Examples
return set_session_type_if_in_group('instream', 'sg1')
set_session_type_if_in_all_groups(session_type, session_groups)
This function will set the session type to the supplied session_type
if the
session is part of all session groups given by session_groups
and the maximum
number of sessions of that type has not been reached.
Parameters
session_type
: The type of session to create, possible values are ‘initial’ or ‘instream’.session_groups
: A list of session group names.
Usage Examples
return set_session_type_if_in_all_groups('instream', {'sg1', 'sg2'})
set_session_type_if_in_any_group(session_type, session_groups)
This function will set the session type to the supplied session_type
if the
session is part of one or more of the session groups given by session_groups
and the maximum number of sessions of that type has not been reached.
Parameters
session_type
: The type of session to create, possible values are ‘initial’ or ‘instream’.session_groups
: A list of session group names.
Usage Examples
return set_session_type_if_in_any_group('instream', {'sg1', 'sg2'})
Configuration
Using confcli, the helper functions above can be configured as the session translation function by running any of the following:
$ confcli services.routing.translationFunctions.session "return set_session_type('instream')"
services.routing.translationFunctions.session = "return set_session_type('instream')"
$ confcli services.routing.translationFunctions.session "return set_session_type_if_in_group('instream', 'sg1')"
services.routing.translationFunctions.session = "return set_session_type_if_in_group('instream', 'sg1')"
$ confcli services.routing.translationFunctions.session "return set_session_type_if_in_all_groups('instream', {'sg1', 'sg2'})"
services.routing.translationFunctions.session = "return set_session_type_if_in_all_groups('instream', {'sg1', 'sg2'})"
$ confcli services.routing.translationFunctions.session "return set_session_type_if_in_any_group('instream', {'sg1', 'sg2'})"
services.routing.translationFunctions.session = "return set_session_type_if_in_any_group('instream', {'sg1', 'sg2'})"
Additional data
In addition to the arguments above, the following Lua tables, documented in Global Lua Tables, provide additional data that is available when executing the session translation function:
The selection_input
table will not change while a routing request is handled.
A request_translation_function
and the corresponding
response_translation_function
will see the same selection_input
table, even
if the selection data is updated while the request is being handled.
1.1.5.5.5.5 - Host Request Translation Function
The host request translation function defines a Lua function that modifies
HTTP requests sent to a host. These hosts are configured in
services.routing.hostGroups
.
Hosts can receive requests for a manifest. A regular host will respond with the manifest itself, while a redirecting host and a DNS host will respond with a redirection to a streamer. This function can modify all these types of requests.
The function returns nil
when nothing is to be changed, or HTTPRequest(t)
where t
is a table with any of the following optional fields:
Method
- Description: Replaces the HTTP request method in the request being processed.
- Type:
string
- Example:
'GET'
,'POST'
Path
- Description: Replaces the request path in the request being processed.
- Type:
string
- Example:
'/mycontent/superman.m3u8'
Body
- Description: Replaces body in the request being processed.
- Type:
string
ornil
- Example:
'{"foo": "bar"}'
QueryParameters
- Description: Adds, removes or replaces individual query parameters in the request being processed.
- Type: nested
table
(indexed by number) representing an array of query parameters as{[1]='Name',[2]='Value'}
pairs that are added to the request being processed, or overwriting existing query parameters with colliding names. To remove a query parameter from the request, specifynil
as value, i.e.QueryParameters={..., {[1]='foo',[2]=nil} ...}
. Returning a query parameter with a name but no value, such asa
in the request'/index.m3u8?a&b=22'
is currently not supported.
Headers
- Description: Adds, removes or replaces individual headers in the request being processed.
- Type: nested
table
(indexed by number) representing an array of request headers as{[1]='Name',[2]='Value'}
pairs that are added to the request being processed, or overwriting existing request headers with colliding names. To remove a header from the request, specifynil
as value, i.e.Headers={..., {[1]='foo',[2]=nil} ...}
. Duplicate names are supported. A multi-value header such asFoo: bar1,bar2
is defined by specifyingHeaders={..., {[1]='foo',[2]='bar1'}, {[1]='foo',[2]='bar2'}, ...}
.
Host
- Description: Replaces the host that the request is sent to.
- Type:
string
- Example:
'new-host.example.com'
,'192.0.2.7'
Port
- Description: Replaces the TCP port that the request is sent to.
- Type:
number
- Example:
8081
Protocol
- Description: Decides which protocol will be used for sending the request. Valid protocols are 'HTTP' and 'HTTPS'.
- Type: string
- Example: 'HTTP', 'HTTPS'
Example of a host_request_translation_function
body that sets the request path
to a hardcoded value and adds the hardcoded query parameter a=b
:
-- Statements go here
print('Setting hardcoded Path and QueryParameters')
return HTTPRequest({
Path = '/content.mpd',
QueryParameters = {
{'a','b'}
}
})
Arguments
The following (iterable) arguments will be known by the function:
QueryParameters
Type: nested
table
(indexed by number).Description: Array of query parameters as
{[1]='Name',[2]='Value'}
pairs that are present in the query string of the request from the client to the router. Format identical to theHTTPRequest.QueryParameters
-field specified for the return value above.Example usage:
for _, queryParam in pairs(QueryParameters) do print(queryParam[1]..'='..queryParam[2]) end
Headers
Type: nested
table
(indexed by number).Description: Array of request headers as
{[1]='Name',[2]='Value'}
pairs that are present in the request from the client to the router. Format identical to theHTTPRequest.Headers
-field specified for the return value above. A multi-value header such as Foo: bar1,bar2 is seen in host_request_translation_function as Headers={..., {[1]='foo',[2]='bar1'}, {[1]='foo',[2]='bar2'}, ...}.
Example usage:
for _, header in pairs(Headers) do print(header[1]..'='..header[2]) end
Global tables
The following non-iterable global tables are available for use by the
host_request_translation_function
.
Table outgoing_request
The outgoing_request
table contains the request that is to be sent to the
host.
outgoing_request.method
- Description: HTTP request method.
- Type:
string
- Example:
'GET'
,'POST'
outgoing_request.body
- Description: HTTP request body string.
- Type:
string
ornil
- Example:
'{"foo": "bar"}'
outgoing_request.major_version
- Description: Major HTTP version, such as x in HTTP/x.1.
- Type: integer
- Example: 1
outgoing_request.minor_version
- Description: Minor HTTP version, such as x in HTTP/1.x.
- Type: integer
- Example: 1
outgoing_request.protocol
- Description: Transfer protocol variant.
- Type:
string
- Example:
'HTTP'
,'HTTPS'
Table outgoing_request_headers
Contains the request headers from the request that is to be sent to the host, keyed by name.
Example:
print(outgoing_request_headers['X-Forwarded-For'])
Multiple values are separated with a comma.
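For illustration, a host_request_translation_function body could redirect the outgoing request to an alternative origin and tag it with a header; the host name, port and header name below are examples only and not part of any default configuration:
-- Send the request to a hypothetical alternative origin over HTTPS and mark it
print('Rewriting outgoing ' .. outgoing_request.method .. ' request')
return HTTPRequest({
  Host = 'origin-backup.example.com',
  Port = 443,
  Protocol = 'HTTPS',
  Headers = {
    {'X-Routed-By', 'acd-router'}
  }
})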
Additional data
In addition to the arguments above, the following Lua tables, documented in Global Lua Tables, provide additional data that is available when executing the host request translation function:
1.1.5.5.5.6 - Response Translation Function
Specifies the body of a Lua function that inspects every outgoing HTTP response and overwrites individual fields before being sent to the client.
Returns nil
when nothing is to be changed, or HTTPResponse(t)
where t
is a table with any of the following optional fields:
Code
- Description: Replaces status code in the response being sent.
- Type:
integer
- Example:
200
,404
Text
- Description: Replaces status text in the response being sent.
- Type:
string
- Example:
'OK'
,'Not found'
MajorVersion
- Description: Replaces the major HTTP version, such as x in HTTP/x.1, in the response being sent.
- Type: integer
- Example: 1
MinorVersion
- Description: Replaces the minor HTTP version, such as x in HTTP/1.x, in the response being sent.
- Type: integer
- Example: 1
Protocol
- Description: Replaces protocol in the response being sent.
- Type:
string
- Example:
'HTTP'
,'HTTPS'
Body
- Description: Replaces body in the response being sent.
- Type:
string
ornil
- Example:
'{"foo": "bar"}'
Headers
- Description: Adds, removes or replaces individual headers in the response being sent.
- Type: nested
table
(indexed by number) representing an array of response headers as{[1]='Name',[2]='Value'}
pairs that are added to the response being sent, or overwriting existing request headers with colliding names. To remove a header from the response, specifynil
as value, i.e.Headers={..., {[1]='foo',[2]=nil} ...}
. Duplicate names are supported. A multi-value header such asFoo: bar1,bar2
is defined by specifyingHeaders={..., {[1]='foo',[2]='bar1'}, {[1]='foo',[2]='bar2'}, ...}
.
Example of a response_translation_function
body that sets the Location
header to a hardcoded value:
-- Statements go here
print('Setting hardcoded Location')
return HTTPResponse({
Headers = {
{'Location', 'cdn1.com/content.mpd?a=b'}
}
})
Arguments
The following (iterable) arguments will be known by the function:
Headers
Type: nested
table
(indexed by number).Description: Array of response headers as
{[1]='Name',[2]='Value'}
pairs that are present in the response being sent. Format identical to theHTTPResponse.Headers
-field specified for the return value above. A multi-value header such as Foo: bar1,bar2 is seen in response_translation_function as Headers={..., {[1]='foo',[2]='bar1'}, {[1]='foo',[2]='bar2'}, ...}.
Example usage:
for _, header in pairs(Headers) do print(header[1]..'='..header[2]) end
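As another sketch, a response_translation_function body could append a query parameter to the Location header of a redirect; the token=abc value is a placeholder for the example:
-- Append a placeholder 'token=abc' query parameter to the Location header of redirects
if response.code == 302 then
  for _, header in pairs(Headers) do
    if header[1] == 'Location' then
      local separator = '?'
      if string.find(header[2], '?', 1, true) then
        separator = '&'
      end
      return HTTPResponse({
        Headers = {
          {'Location', header[2] .. separator .. 'token=abc'}
        }
      })
    end
  end
end
return nil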
Additional data
In addition to the arguments above, the following Lua tables, documented in Global Lua Tables, provide additional data that is available when executing the response translation function:
1.1.5.6 - Trusted proxies
When a request with the header X-Forwarded-For
is sent to the router, the
router will check if the client is in the list of trusted proxies. If the client
is not a trusted proxy, the router will drop the connection, returning an empty
reply to the client. If the client is a trusted proxy, the IP address defined
in the X-Forwarded-For
will be regarded as the client’s IP address.
The list of trusted proxies can be configured by modifying the configuration
field services.routing.settings.trustedProxies
with the IP addresses of
trusted proxies:
$ confcli services.routing.settings.trustedProxies -w
Running wizard for resource 'trustedProxies'
<A list of IP addresses from which the proxy IP address of requests with the X-Forwarded-For header defined are checked. If the IP isn't in this list, the connection is dropped. (default: [])>
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
trustedProxies <A list of IP addresses from which the proxy IP address of requests with the X-Forwarded-For header defined are checked. If the IP isn't in this list, the connection is dropped. (default: [])>: [
trustedProxy (default: ): 1.2.3.4
Add another 'trustedProxy' element to array 'trustedProxies'? [y/N]: n
]
Generated config:
{
"trustedProxies": [
"1.2.3.4"
]
}
Merge and apply the config? [y/n]: y
Note that by configuring 0.0.0.0/0
as a trusted proxy, all proxied requests
will be trusted.
1.1.5.7 - Confd Auto Upgrade Tool
The confd-auto-upgrade
tool is a simple utility to automatically migrate the confd
configuration schema between different versions of the Director. Starting with version
1.12.0, it is possible to automatically apply the necessary configuration changes in a
controlled and predictable manner. While this tool is intended to help transition the
configuration format between the different versions, it is not a substitute for proper
backups, and while downgrading to an earlier version, it may not be possible to recover
previously modified or deleted configuration values.
When using the tool, both the “from” and “to” versions must be specified. Internally, the tool will calculate a list of migrations which must be applied to transition between the given versions, and apply them, outputting the final configuration to standard output. The current configuration can either be piped in to the tool via standard input, or supplied as a static file. Providing a “from” version which is later than the “to” version will result in the downgrade migrations being applied in reverse order, effectively downgrading the configuration to the lower version.
For convenience, the tool is deployed to the ACD Nodes automatically at install time as a standard Podman container; however, since it is not intended to run as a service, only the image will be present, not a running container.
Performing the Upgrade
In the following example scenario, a system with version 1.10.1 has been upgraded
to 1.14.0. Before upgrading a backup of the configuration was taken and saved to
current_config.json
.
Using the image and tag as determined in the above section, issue the following command:
cat current_config.json | \
podman run -i --rm images.edgeware.tv/acd-confd-migration:1.14.0 \
--in - --from 1.10.1 --to 1.14.0 \
| tee upgraded_config.json
In the above example, the updated configuration is saved to upgraded_config.json
.
It is recommended to manually verify the generated configuration, after which the config can be applied to confd by using cat upgraded_config.json | confcli -i.
It is also possible to combine the two commands, by piping the output of the auto-upgrade
tool directly to confcli -i
. E.g.
cat current_config.json | podman run ... | tee upgraded_config.json | confcli -i
This will save a backup of the upgraded configuration to upgraded_config.json
and
at the same time apply the changes to confd immediately.
Downgrading the Configuration
The steps for downgrading the configuration are exactly the same as for upgrade except for the --from
and --to
versions should be swapped. E.g. --from 1.14.0 --to 1.10.1
. Keep in mind however,
that during an upgrade some configuration properties may have been deleted or modified, and while
downgrading over those steps, some data loss may occur. In those cases, it may be easier and safer
to simply restore from backup. In most cases where configuration properties are removed during upgrade,
the corresponding downgrade will simply restore the default values of those properties.
1.1.6 - Operations
This guide describes how to perform day-to-day operations of the ACD Router and its associated services, collectively known as the Director.
Component Overview
To effectively operate the Director software, it is important to understand the composition of the various software components and how they are deployed.
Each Director instance functions as an independent system, comprising multiple containerized services. These containers are managed by a standard container runtime and are seamlessly integrated with the host’s operating system to enhance the overall operator experience.
The containers are managed by the Podman container runtime, which operates without additional daemon services running on the host. Unlike Docker, Podman manages each container as a separate process, eliminating the reliance on a shared daemon and mitigating the risk of a single-point-of-failure scenario.
Although several distinct services make up the Director, the primary component is the router. The router is responsible for listening for incoming requests, processing the request, and redirecting the client to the appropriate host or CDN to deliver the requested content.
Two additional containers are responsible for configuration management. Those are
confd
and confd-transformer
. The former manages a local database of configuration
metadata and provides a REST API for managing the configuration. The confd-transformer
simply listens for configuration changes from confd
and adapts that configuration
to a format suitable for the router to ingest. For additional information about setting up and using confd, see here.
The next two components, the edns-proxy and the convoy-bridge, allow the router to communicate with an EDNS server for EDNS-based routing and to synchronize with Convoy, respectively. Additional information about the EDNS-Proxy is available here. For the Convoy Bridge service, see here.
The remaining containers are useful for metrics, monitoring, and alerting. These include prometheus and grafana for monitoring and analytics, and alertmanager for alerting and alarms.
1.1.6.1 - Services
Each container shipped with the Director is fully integrated with systemd on the host, enabling easy management using standard systemd commands.
The logs for each container are also fully integrated with journald to simplify troubleshooting.
In order to integrate the Podman containers with systemd
, a common prefix
of acd-
has been applied to each service name. For example the router
container is managed by the service acd-router
, and the confd
container
is managed by the service acd-confd
. These same prefixed names apply while
fetching logs via journald
. This common prefix aids in grouping the related
services as well as provides simpler filtering for tab-completion.
Starting / Stopping Services
Standard systemd commands should be used to start and stop the services.
systemctl start acd-router
- Starts therouter
container.systemctl stop acd-router
- Stops therouter
container.systemctl status acd-router
- Displays the status of therouter
container.
The common acd- prefix also makes it possible to work with all ACD services as a group. For example:
systemctl status 'acd-*'
- Display the status of all installed ACD components.systemctl start 'acd-*'
- Start all ACD components.
Logging
Each ACD component corresponds to a journal entry with the same unit name, with
the acd-
prefix. Standard journald commands can be used to view and manage the
logging.
journalctl -u acd-router
- Display the logs for therouter
container
Access Log
Refer to Access Logging.
Troubleshooting
Some additional logging may be available in the filesystem, the paths of which can be determined by executing the ew-sysinfo command. See Diagnostics for additional details.
1.1.7 - Convoy Bridge
The convoy-bridge is an optional integration service, pre-installed alongside the router which provides two-way communication between the router and a separate Convoy installation.
The convoy-bridge is designed to allow the Convoy account metadata to be available from within the router for such use-cases as inserting the account specific prefixes in the redirect URL and validating per-account internal security tokens. The service works by periodically polling the Convoy server for changes to the configuration, and when detected, the relevant configuration information is pushed to the router.
In addition, the convoy-bridge has the ability to integrate the router with the Convoy analytics service, such that client sessions started by the router are properly collected by Convoy, and are available in the dashboards.
Configuration
The convoy-bridge service is configured using confcli
on the router host.
All configuration for the convoy-bridge exists under the path
integration.convoy.bridge
.
{
"logLevel": "info",
"accounts": {
"enabled": true,
"dbUrl": "mysql://convoy:eith7jee@convoy:3306",
"dbPollInterval": 60
},
"analytics": {
"enabled": true,
"brokers": ["broker1:9092", "broker2:9092"],
"batchInterval": 10,
"maxBatchSize": 500
},
"otherRouters": [
{
"url": "https://router2:5001",
"apiKey": "key1",
"validateCerts": true
}
]
}
In the above configuration block, there are three main sections. The accounts
section enables fetching account metadata from Convoy towards the router. The
analytics
section controls the integration between the router and the Convoy
analytics service. The otherRouters
section is used to synchronize additional
router instances. The local router instance will always be implicitly included.
Additional routers listed in this section will be handled by this instance of
the convoy-bridge service.
Logging
The logs are available in the system journal and can be viewed using:
journalctl -u acd-convoy-bridge
1.1.8 - Monitoring
1.1.8.1 - Access logging
Access logging is activated by default and can be enabled/disabled by running
$ confcli services.routing.tuning.general.accessLog true
$ confcli services.routing.tuning.general.accessLog false
Requests are logged in the combined log format and can be found at
/var/log/acd-router/access.log
. Additionally, the symbolic link
/opt/edgeware/acd/router/log
points to /var/log/acd-router
, allowing
the access logs to also be found at /opt/edgeware/acd/router/log/access.log
.
Example output
$ cat /var/log/acd-router/access.log
May 29 07:20:00 router[52236]: ::1 - - [29/May/2023:07:20:00 +0000] "GET /vod/batman.m3u8 HTTP/1.1" 302 0 "-" "curl/7.61.1"
Access log rotation
Access logs are rotated and compressed once the access log file reaches a size of 100 MB. By default, 10 rotated logs are stored before being rotated out. These rotation parameters can be reconfigured by editing the lines
size 100M
rotate 10
in /etc/logrotate.d/acd-router-access-log. For more log rotation configuration possibilities, refer to the Logrotate documentation.
1.1.8.2 - System troubleshooting
ESB3024 contains the tool ew-sysinfo
that gives an overview of how the
system is doing. Simply use the command and the tool will output information
about the system and the installed ESB3024 services.
The output format can be changed using the --format
flag, possible values
are human
(default) and json
, e.g.:
$ ew-sysinfo
system:
os: ['5.4.17-2136.321.4.el8uek.x86_64', 'Oracle Linux Server 8.8']
cpu_cores: 2
cpu_load_average: [0.03, 0.03, 0.0]
memory_usage: 478 MB
memory_load_average: [0.03, 0.03, 0.0]
boot_time: 2023-09-08T08:30:57Z
uptime: 6 days, 3:43:44.640665
processes: 122
open_sockets:
ipv4: 12
ipv6: 18
ip_total: 30
tcp_over_ipv4: 9
tcp_over_ipv6: 16
tcp_total: 25
udp_over_ipv4: 3
udp_over_ipv6: 2
udp_total: 5
total: 145
system_disk (/):
total: 33271 MB
used: 7978 MB (24.00%)
free: 25293 MB
journal_disk (/run/log/journal):
total: 1954 MB
used: 217 MB (11.10%)
free: 1736 MB
vulnerabilities:
meltdown: Mitigation: PTI
spectre_v1: Mitigation: usercopy/swapgs barriers and __user pointer sanitization
spectre_v2: Mitigation: Retpolines, STIBP: disabled, RSB filling, PBRSB-eIBRS: Not affected
processes:
orc-re:
pid: 177199
status: sleeping
cpu_usage_percent: 1.0%
cpu_load_average: 131.11%
memory_usage: 14 MB (0.38%)
num_threads: 10
hints:
get_raw_router_config: cat /opt/edgeware/acd/router/cache/config.json
get_confd_config: cat /opt/edgeware/acd/confd/store/__active
get_router_logs: journalctl -u acd-router
get_edns_proxy_logs: journalctl -u acd-edns-proxy
check_firewall_status: systemctl status firewalld
check_firewall_config: iptables -nvL
# For --format=json, it's recommended to pipe the output to a JSON interpreter
# such as jq
$ ew-sysinfo --format=json | jq
{
"system": {
"os": [
"5.4.17-2136.321.4.el8uek.x86_64",
"Oracle Linux Server 8.8"
],
"cpu_cores": 2,
"cpu_load_average": [
0.01,
0.0,
0.0
],
"memory_usage": "479 MB",
"memory_load_average": [
0.01,
0.0,
0.0
],
"boot_time": "2023-09-08 08:30:57",
"uptime": "6 days, 5:12:24.617114",
"processes": 123,
"open_sockets": {
"ipv4": 13,
"ipv6": 18,
"ip_total": 31,
"tcp_over_ipv4": 10,
"tcp_over_ipv6": 16,
"tcp_total": 26,
"udp_over_ipv4": 3,
"udp_over_ipv6": 2,
"udp_total": 5,
"total": 146
}
},
"system_disk (/)": {
"total": "33271 MB",
"used": "7977 MB (24.00%)",
"free": "25293 MB"
},
"journal_disk (/run/log/journal)": {
"total": "1954 MB",
"used": "225 MB (11.50%)",
"free": "1728 MB"
},
"vulnerabilities": {
"meltdown": "Mitigation: PTI",
"spectre_v1": "Mitigation: usercopy/swapgs barriers and __user pointer sanitization",
"spectre_v2": "Mitigation: Retpolines, STIBP: disabled, RSB filling, PBRSB-eIBRS: Not affected"
},
"processes": {
"orc-re": {
"pid": 177199,
"status": "sleeping",
"cpu_usage_percent": "0.0%",
"cpu_load_average": "137.63%",
"memory_usage": "14 MB (0.38%)",
"num_threads": 10
}
}
}
Note that your system might have different monitored processes and field names.
The field hints
is different from the rest. It lists common commands
that can be used to further monitor system performance, useful for
quickly troubleshooting a faulty system.
1.1.8.3 - Scraping data with Prometheus
Prometheus is a third-party data scraper which is installed as a containerized service in the default installation of ESB3024 Router. It periodically reads metrics data from different services, such as acd-router, aggregates it and makes it available to other services that visualize the data. Those services include Grafana and Alertmanager.
The Prometheus configuration file can be found on the host at
/opt/edgeware/acd/prometheus/prometheus.yaml
.
Accessing Prometheus
Prometheus has a web interface that is listening for HTTP connections on port 9090. There is no authentication, so anyone who has access to the host that is running Prometheus can access the interface.
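Besides the web interface, the same data can be queried over Prometheus’ HTTP API. A minimal sketch, assuming Prometheus is reachable on the local host on the default port 9090 and that jq is installed:
# Query the current value of a router metric through the Prometheus API
$ curl -s 'http://localhost:9090/api/v1/query?query=num_sessions' | jq '.data.result'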
Starting / Stopping Prometheus
After the service is configured, it can be managed via systemd, under the
service unit acd-prometheus
.
systemctl start acd-prometheus
Logging
The container logs are automatically published to the system journal, under
the same unit descriptor, and can be viewed using journalctl
journalctl -u acd-prometheus
1.1.8.4 - Visualizing data with Grafana
1.1.8.4.1 - Managing Grafana
Grafana displays graphs based on data from Prometheus. A default deployment of Grafana is running in a container alongside ESB3024 Router.
Grafana’s configuration and runtime files are stored under
/opt/edgeware/acd/grafana
. It comes with default dashboards that are
documented at Grafana dashboards.
Accessing Grafana
Grafana’s web interface is listening for HTTP connections on port 3000. It has
two default accounts, edgeware and admin. The edgeware account can only view
graphs, while the admin account can also edit them. The accounts and their
default passwords are shown in the table below.
Account | Default password
---|---
edgeware | edgeware
admin | edgeware
Starting / Stopping Grafana
Grafana can be managed via systemd, under the service unit acd-grafana
.
systemctl start acd-grafana
Logging
The container logs are automatically published to the system journal, under
the same unit descriptor, and can be viewed using journalctl
journalctl -u acd-grafana
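A quick way to verify that the Grafana container is up and answering requests is to query its health endpoint. This is a sketch assuming the default port 3000 on the local host:
# Returns a small JSON document with the database status and Grafana version
$ curl -s http://localhost:3000/api/health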
1.1.8.4.2 - Grafana Dashboards
Grafana will be populated with pre-configured graphs which present selected metrics over time. Below is a comprehensive list of those dashboards, along with short descriptions.
Router Monitoring dashboard
This dashboard is set as the home dashboard by default; it is what the user
will see after logging in.
Number Of Initial Routing Decisions
HTTP Status Codes
Total number of responses sent back to incoming requests, shown by their status codes. Metric: client_response_status
Incoming HTTP and HTTPS Requests
Total number of incoming requests that were deemed valid, divided into SSL
and Unencrypted
categories.
Metric: num_valid_http_requests
Debugging Information dashboard
Number of Lua Exceptions
Number of exceptions encountered so far while evaluating Lua rules. Metric: lua_num_errors
Number of Lua Contexts
Number of active Lua interpreters, both running and idle. Metric: lua_num_evaluators
Time Spent In Lua
Number of microseconds the Lua interpreters were running. Metric: lua_time_spent
Router Latencies
Histogram-like graph showing how many responses were sent within the given latency interval. Metric: orc_latency_bucket
Internal debugging
A folder that contains dashboards intended for internal use.
ACD: Incoming Internet Connections dashboard
SSL Warnings
Rate of warnings logged during TLS connections. Metric: num_ssl_warnings_total
SSL Errors
Rate of errors logged during TLS connections. Metric: num_ssl_errors_total
Valid Internet HTTPS Requests
Rate of incoming requests that were deemed valid, HTTPS only. Metric: num_valid_http_requests
Invalid Internet HTTPS Requests
Rate of incoming requests that were deemed invalid, HTTPS only. Metric: num_invalid_http_requests
Valid Internet HTTP Requests
Rate of incoming requests that were deemed valid, HTTP only. Metric: num_valid_http_requests
Invalid Internet HTTP Requests
Rate of incoming requests that were deemed invalid, HTTP only. Metric: num_invalid_http_requests
Prometheus: ACD dashboard
Logged Warnings
Rate of logged warnings since the router has started, divided into CDN-related and CDN-unrelated. Metric: num_log_warnings_total
Logged Errors
Rate of logged errors since the router has started. Metric: num_log_errors_total
HTTP Requests
Rate of responses sent to incoming connections. Metric: orc_latency_count
Number Of Active Sessions
Number of sessions opened on the router that are still active. Metric: num_sessions
Total Number Of Sessions
Total number of sessions opened on the router. Metric: num_sessions
Session Type Counts (Non-Stacked)
Number of active sessions divided by type; see metric documentation linked below for up-to-date list of types. Metric: num_sessions
Prometheus/ACD: Subrunners
Client Connections
Number of currently open client connections per subrunner. Metric: subrunner_client_conns
Asynchronous Queues (Current)
Number of queued events per subrunner, roughly corresponding to load. Metric: subrunner_async_queue
Used <Send/receive> Data Blocks
Number of send or receive data blocks currently in use per subrunner, as selected by the “Send/receive” drop-down box. Metric: subrunner_used_send_data_blocks and subrunner_used_receive_data_blocks
Asynchronous Queues (Max)
Maximum number of events waiting in queue. Metric: subrunner_max_async_queue
Total <Send/receive> Data Blocks
Number of send or receive data blocks allocated per subrunner, as selected by the “Send/receive” drop-down box. Metric: subrunner_total_send_data_blocks and subrunner_total_receive_data_blocks
Low Queue (Current)
Number of low priority events queued per subrunner. Metric: subrunner_low_queue
Medium Queue (Current)
Number of medium priority events queued per subrunner. Metric: subrunner_medium_queue
High Queue (Current)
Number of high priority events queued per subrunner. Metric: subrunner_high_queue
Low Queue (Max)
Maximum number of events waiting in low priority queue. Metric: subrunner_max_low_queue
Medium Queue (Max)
Maximum number of events waiting in medium priority queue. Metric: subrunner_max_medium_queue
High Queue (Max)
Maximum number of events waiting in high priority queue. Metric: subrunner_max_high_queue
Wakeups
The number of times a subrunner has been woken up from sleep. Metric: subrunner_io_wakeups
Overloaded
The number of times a subrunner’s queued events exceeded its maximum. Metric: subrunner_times_worker_overloaded
Autopause
Number of sockets that have been automatically paused. This happens when the work manager is under heavy load. Metric: subrunner_io_autopause_sockets
1.1.8.5 - Alarms and Alerting
Alerts are generated by the third-party service Prometheus, which sends them to the Alertmanager service. A default containerized instance of Alertmanager is deployed alongside ESB3024 Router. Out of the box, Alertmanager ships with only a sample configuration file and requires manual configuration before alerting can be enabled. Because there are many possible ways to detect alerts and many destinations to push them to, follow the official Alertmanager documentation when configuring the service.
The router ships with Alertmanager 0.25, the documentation
for which can be found at prometheus.io.
The Alertmanager configuration file can be found on the host at
/opt/edgeware/acd/alertmanager/alertmanager.yml
.
Accessing Alertmanager
Alertmanager has a web interface that is listening for HTTP connections on port 9093. There is no authentication, so anyone who has access to the host that is running Alertmanager can access the interface.
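Once a receiver has been configured, the alerting pipeline can be exercised end to end by posting a synthetic alert to Alertmanager’s v2 API. The alert name and label below are only examples, and localhost:9093 assumes the default deployment:
# Fire a test alert that should show up in the web interface and be routed to the configured receiver
$ curl -s -X POST http://localhost:9093/api/v2/alerts \
    -H 'Content-Type: application/json' \
    -d '[{"labels": {"alertname": "ACDTestAlert", "severity": "warning"}}]'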
Starting / Stopping Alertmanager
After the service is configured, it can be managed via systemd, under the
service unit acd-alertmanager
.
systemctl start acd-alertmanager
Logging
The container logs are automatically published to the system journal, under
the same unit descriptor, and can be viewed using journalctl
journalctl -u acd-alertmanager
1.1.8.6 - Monitoring multiple routers
By default, an instance of Prometheus only monitors the ESB3024 Router that is installed on the same host as Prometheus itself. It can, however, be configured to monitor other router instances as well, so that all of them are visualized in one Grafana instance.
Configuring Prometheus
This is configured in the scraping configuration of Prometheus, which is found
in the file /opt/edgeware/acd/prometheus/prometheus.yaml
, which typically
looks like this:
global:
scrape_interval: 15s
rule_files:
- recording-rules.yaml
# A scrape configuration for router metrics
scrape_configs:
- job_name: 'router-scraper'
scheme: https
tls_config:
insecure_skip_verify: true
static_configs:
- targets:
- acd-router-1:5001
metrics_path: /m1/v1/metrics
honor_timestamps: true
- job_name: 'edns-proxy-scraper'
scheme: http
static_configs:
- targets:
- acd-router-1:8888
metrics_path: /metrics
honor_timestamps: true
More routers can be added to the scrape configuration by simply adding more
targets in the scraper jobs.
For instance, to monitor acd-router-2 and acd-router-3 alongside acd-router-1,
the configuration file needs to be modified like this:
global:
scrape_interval: 15s
rule_files:
- recording-rules.yaml
# A scrape configuration for router metrics
scrape_configs:
- job_name: 'router-scraper'
scheme: https
tls_config:
insecure_skip_verify: true
static_configs:
- targets:
- acd-router-1:5001
- acd-router-2:5001
- acd-router-3:5001
metrics_path: /m1/v1/metrics
honor_timestamps: true
- job_name: 'edns-proxy-scraper'
scheme: http
static_configs:
- targets:
- acd-router-1:8888
- acd-router-2:8888
- acd-router-3:8888
metrics_path: /metrics
honor_timestamps: true
After the file has been modified, Prometheus needs to be restarted by typing
systemctl restart acd-prometheus
It is possible to use the same configuration on multiple routers, so that all routers in a deployment can monitor each other.
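After a restart, one way to confirm that every router is actually being scraped is to list the active targets over the Prometheus HTTP API. A sketch, assuming the default port 9090 and that jq is installed:
# Print the instance name and scrape health of every active target
$ curl -s http://localhost:9090/api/v1/targets | \
    jq '.data.activeTargets[] | {instance: .labels.instance, health: .health}'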
Selecting router in Grafana
In the top left corner, the Grafana dashboards have a drop-down menu labeled “ACD Router”, which lets you choose which router to monitor.
1.1.8.7 - Routing Rule Evaluation Metrics
ESB3024 Router counts the number of times each node in the routing table, or any of its children, is selected.
The visit counters can be retrieved with the following endpoints:
/v1/node_visits
Returns visit counters for each node as a flat list of host:counter pairs in JSON. Example output:
{ "node1": "1", "node2": "1", "node3": "1", "top": "3" }
/v1/node_visits_graph
Returns a full graph of nodes with their respective visit counters in GraphML.
Example output:
<?xml version="1.0"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd">
  <key id="visits" for="node" attr.name="visits" attr.type="string" />
  <graph id="G" edgedefault="directed">
    <node id="routing_table">
      <data key="visits">5</data>
    </node>
    <node id="cdn1">
      <data key="visits">1</data>
    </node>
    <node id="node1">
      <data key="visits">1</data>
    </node>
    <node id="cdn2">
      <data key="visits">2</data>
    </node>
    <node id="node2">
      <data key="visits">2</data>
    </node>
    <node id="cdn3">
      <data key="visits">2</data>
    </node>
    <node id="node3">
      <data key="visits">2</data>
    </node>
    <edge id="e0" source="cdn1" target="node1" />
    <edge id="e1" source="routing_table" target="cdn1" />
    <edge id="e2" source="cdn2" target="node2" />
    <edge id="e3" source="routing_table" target="cdn2" />
    <edge id="e4" source="cdn3" target="node3" />
    <edge id="e5" source="routing_table" target="cdn3" />
  </graph>
</graphml>
To receive the graph as JSON, specify Accept: application/json in the request headers. Example output:
{ "edges": [ { "source": "cdn1", "target": "node1" }, { "source": "routing_table", "target": "cdn1" }, { "source": "cdn2", "target": "node2" }, { "source": "routing_table", "target": "cdn2" }, { "source": "cdn3", "target": "node3" }, { "source": "routing_table", "target": "cdn3" } ], "nodes": [ { "id": "routing_table", "visits": "5" }, { "id": "cdn1", "visits": "1" }, { "id": "node1", "visits": "1" }, { "id": "cdn2", "visits": "2" }, { "id": "node2", "visits": "2" }, { "id": "cdn3", "visits": "2" }, { "id": "node3", "visits": "2" } ] }
Resetting Visit Counters
A node visit counter with an id not matching any node id of a newly applied
routing table is destroyed.
Reset all counters to zero by momentarily applying a configuration with a
placeholder routing root node that has a unique id and an empty members
list, e.g.:
"routing": {
"id": "empty_routing_table",
"members": []
}
… and immediately reapply the desired configuration.
1.1.8.8 - Metrics
ESB3024 Router collects a large number of metrics that can give insight into
its condition at runtime. Those metrics are available in
Prometheus’ text-based exposition format at the endpoint :5001/m1/v1/metrics.
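The raw exposition data can be inspected directly with curl, which is useful when checking what a particular metric and its labels look like before building dashboards or alerts. A sketch, assuming the router is reachable on the local host and that its certificate may be self-signed (hence -k):
# Fetch all metrics and show only the session counters
$ curl -sk https://localhost:5001/m1/v1/metrics | grep '^num_sessions'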
Below is the description of these metrics along with their labels.
client_response_status
Number of responses sent back to incoming requests.
- Type: counter
lua_num_errors
Number of errors encountered when evaluating Lua rules.
- Type:
counter
lua_num_evaluators
Number of Lua rules evaluators (active interpreters).
- Type: gauge
lua_time_spent
Time spent by running Lua evaluators, in microseconds.
- Type:
counter
num_configuration_changes
Number of times configuration has been changed since the router has started.
- Type:
counter
num_endpoint_requests
Number of requests redirected per CDN endpoint.
- Type:
counter
- Labels:
  - endpoint - CDN endpoint address.
  - selector - whether the request was counted during initial or instream selection.
num_invalid_http_requests
Number of client requests that either use wrong method or wrong URL path. Also number of all requests that cannot be parsed as HTTP.
- Type:
counter
- Labels:
  - source - name of internal filter function that classified request as invalid. Probably not of much use outside debugging.
  - type - whether the request was HTTP (Unencrypted) or HTTPS (SSL).
num_log_errors_total
Number of logged errors since the router has started.
- Type:
counter
num_log_warnings_total
Number of logged warnings since the router has started.
- Type:
counter
num_managed_redirects
Number of redirects to the router itself, which allows session management.
- Type:
counter
num_manifests
Number of cached manifests.
- Type:
gauge
- Labels:
  - count - state of manifest in cache, can be either lru, evicted or total.
num_qoe_losses
Number of “lost” QoE decisions per CDN.
- Type:
counter
- Labels:
  - cdn_id - ID of the CDN that lost the QoE battle.
  - cdn_name - name of the CDN that lost the QoE battle.
  - selector - whether the decision was taken during initial or instream selection.
num_qoe_wins
Number of “won” QoE decisions per CDN.
- Type:
counter
- Labels:
  - cdn_id - ID of the CDN that won the QoE battle.
  - cdn_name - name of the CDN that won the QoE battle.
  - selector - whether the decision was taken during initial or instream selection.
num_rejected_requests
Deprecated, should always be at 0.
- Type:
counter
- Labels:
  - selector - whether the request was counted during initial or instream selection.
num_requests
Total number of requests received by the router.
- Type:
counter
- Labels:
  - selector - whether the request was counted during initial or instream selection.
num_sessions
Number of sessions opened on router.
- Type:
gauge
- Labels:
  - state - either active or inactive.
  - type - one of: initial, instream, qoe_on, qoe_off, qoe_agent or sp_agent.
num_ssl_errors_total
Number of all errors logged during TLS connections, both incoming and outgoing.
- Type:
counter
num_ssl_warnings_total
Number of all warnings logged during TLS connections, both incoming and outgoing.
- Type:
counter
- Labels:
  - category - which kind of TLS connection triggered the warning. Can be one of: cdn, content, generic, repeated_session or empty.
num_unhandled_requests
Number of requests for which no CDN could be found.
- Type:
counter
- Labels:
  - selector - whether the request was counted during initial or instream selection.
num_unmanaged_redirects
Number of redirects to “outside” the router - usually to CDN.
- Type:
counter
- Labels:
  - cdn_id - ID of the CDN picked for redirection.
  - cdn_name - name of the CDN picked for redirection.
  - selector - whether the redirect was the result of initial or instream selection.
num_valid_http_requests
Number of received requests that were not deemed invalid, see
num_invalid_http_requests
.
- Type:
counter
- Labels:
  - source - name of internal filter function that classified request as invalid. Probably not of much use outside debugging.
  - type - whether the request was HTTP (Unencrypted) or HTTPS (SSL).
orc_latency_bucket
Total number of responses sorted into “latency buckets” - labels denoting latency interval.
- Type:
counter
- Labels:
  - le - latency bucket that the given response falls into.
  - orc_status_code - HTTP status code of the given response.
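Because orc_latency_bucket follows the Prometheus histogram convention with an le label, latency percentiles can be computed with histogram_quantile. A sketch of a 95th percentile query sent to the bundled Prometheus instance, assuming the default port 9090 and that jq is installed:
# 95th percentile response latency over the last 5 minutes
$ curl -s http://localhost:9090/api/v1/query \
    --data-urlencode 'query=histogram_quantile(0.95, sum by (le) (rate(orc_latency_bucket[5m])))' | \
    jq '.data.result'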
orc_latency_count
Total number of responses.
- Type:
counter
- Labels:
  - tls - whether the response was sent via an SSL/TLS connection or not.
  - orc_status_code - HTTP status code of the given response.
ssl_certificate_days_remaining
Number of days until an SSL certificate expires.
- Type:
gauge
- Labels:
  - domain - the common name of the domain that the certificate authenticates.
  - not_valid_after - the expiry time of the certificate.
  - not_valid_before - when the certificate starts being valid.
  - usable - if the certificate is usable to the router, see the ssl_certificate_usable_count metric for an explanation.
ssl_certificate_usable_count
Number of usable SSL certificates. A certificate is usable if it is valid and authenticates a domain name that points to the router.
- Type:
gauge
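The two certificate metrics lend themselves to simple expiry checks. As a sketch, the following query lists certificates with fewer than 30 days of validity left, using the bundled Prometheus instance (default port 9090, jq installed):
# List certificates that expire within 30 days
$ curl -s http://localhost:9090/api/v1/query \
    --data-urlencode 'query=ssl_certificate_days_remaining < 30' | \
    jq '.data.result'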
1.1.8.8.1 - Internal Metrics
A subrunner is an internal module of ESB3024 Router which handles routing requests. The subrunner metrics are technical and mainly of interest to Agile Content. These metrics are briefly described here.
subrunner_async_queue
Number of queued events per subrunner, roughly corresponding to load.
- Type:
gauge
- Labels:
subrunner_id
- ID of given subrunner.
subrunner_client_conns
Number of currently open client connections per subrunner.
- Type:
gauge
- Labels:
subrunner_id
- ID of given subrunner.
subrunner_high_queue
Number of high priority events queued per subrunner.
- Type:
gauge
- Labels:
subrunner_id
- ID of given subrunner.
subrunner_io_autopause_sockets
Number of sockets that have been automatically paused. This happens when the work manager is under heavy load.
- Type:
counter
- Labels:
subrunner_id
- ID of given subrunner.
subrunner_io_send_data_fast_attempts
A fast data path was added that in many cases increases the performance of the router. This metric was added to verify that the fast data path is taken.
- Type:
counter
- Labels:
subrunner_id
- ID of given subrunner.
subrunner_io_wakeups
The number of times a subrunner has been woken up from sleep.
- Type:
counter
- Labels:
subrunner_id
- ID of given subrunner.
subrunner_low_queue
Number of low priority events queued per subrunner.
- Type:
gauge
- Labels:
subrunner_id
- ID of given subrunner.
subrunner_max_async_queue
Maximum number of events waiting in queue.
- Type:
gauge
- Labels:
subrunner_id
- ID of given subrunner.
subrunner_max_high_queue
Maximum number of events waiting in high priority queue.
- Type:
gauge
- Labels:
subrunner_id
- ID of given subrunner.
subrunner_max_low_queue
Maximum number of events waiting in low priority queue.
- Type:
gauge
- Labels:
subrunner_id
- ID of given subrunner.
subrunner_max_medium_queue
Maximum number of events waiting in medium priority queue.
- Type:
gauge
- Labels:
subrunner_id
- ID of given subrunner.
subrunner_medium_queue
Number of medium priority events queued per subrunner.
- Type:
gauge
- Labels:
subrunner_id
- ID of given subrunner.
subrunner_times_worker_overloaded
Number of times the queued events for a given subrunner exceeded
the tuning.overload_threshold value (defaults to 32).
- Type:
counter
- Labels:
subrunner_id
- ID of given subrunner.
subrunner_total_receive_data_blocks
Number of receive data blocks allocated per subrunner.
- Type:
gauge
- Labels:
subrunner_id
- ID of given subrunner.
subrunner_total_send_data_blocks
Number of send data blocks allocated per subrunner.
- Type:
gauge
- Labels:
subrunner_id
- ID of given subrunner.
subrunner_used_receive_data_blocks
Number of receive data blocks currently in use per subrunner. Same as subrunner_total_receive_data_blocks.
- Type:
gauge
- Labels:
subrunner_id
- ID of given subrunner.
subrunner_used_send_data_blocks
Number of send data blocks currently in use per subrunner. Same as subrunner_total_send_data_blocks.
- Type:
gauge
- Labels:
subrunner_id
- ID of given subrunner.
1.1.9 - Releases
1.1.9.1 - Release esb3024-1.16.0
Build date
2024-12-04
Release status
Type: production
Compatibility
This release is compatible with the following product versions:
- Orbit, ESB2001-3.6.0 (see Known limitations below)
- SW-Streamer, ESB3004-1.36.0
- Convoy, ESB3006-3.4.0
- Request Router, ESB3008-3.2.1
Breaking changes from previous release
- Access logs are now saved to disk at /var/log/acd-router/access.log instead of being handled by journald.
Change log
- NEW: Collect metrics per account [ESB3024-911]
- NEW: Strip whitespace from beginning and end of names in configuration [ESB3024-954]
- NEW: Improved reselection logging [ESB3024-1089]
- NEW: Access log to file instead of journald. Access logs can now be found in /var/log/acd-router/access.log [ESB3024-1164]
- NEW: Additional Lua checksum functions [ESB3024-1229]
- NEW: Symlink logging directory /var/log/acd-router to /opt/edgeware/acd/router/log [ESB3024-1232]
- FIXED: Convoy Bridge retries errors too fast [ESB3024-1120]
- FIXED: Memory safety issue. Certain circumstances could cause the director to crash [ESB3024-1123]
- FIXED: Too high severity on some log messages [ESB3024-1171]
- FIXED: Session Proxy sends lowercase header names, which are not supported by Agile Cache [ESB3024-1183]
- FIXED: Translation functions hostRequest and request fail when used together [ESB3024-1184]
- FIXED: Lua hashing functions do not accept binary data [ESB3024-1196]
- FIXED: Session Proxy has poor throughput [ESB3024-1197]
- FIXED: Configuration doesn’t handle nested Lua tables as argument to conditions [ESB3024-1218]
Deprecations from previous release
- None
System requirements
- The ACD Router requires a minimum CPU architecture level of x86-64-v2 due to the inclusion of Oracle Linux 9 inside the container. While all modern CPUs support this architecture level, virtual hypervisors may default to a CPU type that has more compatibility with older processors. If this minimum CPU architecture level is not attained, the containers may refuse to start. See Operating System Compatibility and Building Red Hat Enterprise Linux 9 for the x86-64-v2 Microarchitecture Level for more information.
Known limitations
- The Telegraf metrics agent might not be able to read all relevant network interface data on ESB2001 releases older than 3.6.0. The predictive load balancing function host_has_bw() and the health check function interfaces_online() might therefore not work as expected.
  - The recommended workaround for host_has_bw() is to use host_has_bw_custom(), documented in Built-in Lua functions. host_has_bw_custom() accepts a numeric argument for the host’s network interface capacity which can be used if the data supplied by the Telegraf metrics agents does not contain this information.
  - It is not recommended to use interfaces_online() until the issue is resolved on ESB2001.
1.1.9.2 - Release esb3024-1.14.2
Build date
2024-10-01
Release status
Type: production
Breaking changes
- If upgrading from a release prior to 1.10.0, the Director needs to be upgraded to 1.10.0, 1.10.1 or 1.10.2 before installing 1.14.2. See Installing a 1.14 release for more information.
- In esb3024-1.14.0, the configuration setting services.routing.settings.allowedProxies has been renamed to services.routing.settings.trustedProxies and has changed default behavior. If empty, proxy connections are now denied by default. See Trusted proxies for more information.
- Starting with esb3024-1.14.0, a minimum CPU architecture level of x86-64-v2 is required. See system requirements below for more information.
Change log
- NEW: Define custom_capacity_var as a number in host_has_bw_custom(). Using a selection input variable for custom_capacity_var is no longer necessary. [ESB3024-1119]
- FIXED: Predictive load balancing functions do not handle missing interface [ESB3024-1100]
- FIXED: Client closing socket can cause proxy IP to resolve to “?” [ESB3024-1139]
- FIXED: ACD crashes when attempting to read corrupt cached data. The cached data can become corrupt if the filesystem is manipulated by a user or the system runs out of storage. [ESB3024-1147]
- FIXED: Subnets are not being persisted to disk [ESB3024-1149]
- FIXED: ACD overwrites custom GeoIP MMDB files with the default shipped MMDB files when upgrading [ESB3024-1150]
Deprecations
- From esb3024-1.14.0, the grafana-loki and fluentbit containers have been deprecated and are no longer installed with the system. Previously installed containers may be manually stopped and removed if not used, but they will not be uninstalled automatically.
System Requirements
- Starting with esb3024-1.14.0, the ACD Router requires a minimum CPU architecture level of x86-64-v2 due to the inclusion of Oracle Linux 9 inside the container. While all modern CPUs support this architecture level, virtual hypervisors may default to a CPU type that has more compatibility with older processors. If this minimum CPU architecture level is not attained, the containers may refuse to start. See Operating System Compatibility and Building Red Hat Enterprise Linux 9 for the x86-64-v2 Microarchitecture Level for more information.
Known Limitations
- The GUI is not working for this release.
- The Telegraf metrics agent might not be able to read all relevant network interface data on some releases of ESB2001. The predictive load balancing function host_has_bw() and the health check function interfaces_online() might therefore not work as expected.
  - The recommended workaround for host_has_bw() is to use host_has_bw_custom(), documented in Built-in Lua functions. host_has_bw_custom() accepts a numeric argument for the host’s network interface capacity which can be used if the data supplied by the Telegraf metrics agents does not contain this information.
  - It is not recommended to use interfaces_online() until the issue is resolved on ESB2001.
1.1.9.3 - Release esb3024-1.14.0
Build date
2024-09-03
Release status
Type: production
Breaking changes
If upgrading from a release prior to 1.10.0, the Director needs to be upgraded to 1.10.0, 1.10.1 or 1.10.2 before installing 1.14.0. See Installing a 1.14 release for more information.
In esb3024-1.14.0, the configuration setting services.routing.settings.allowedProxies has been renamed to services.routing.settings.trustedProxies and has changed default behavior. If empty, proxy connections are now denied by default. See Trusted proxies for more information.
Starting with esb3024-1.14.0, a minimum CPU architecture level of x86-64-v2 is required. See system requirements below for more information.
Change log
- NEW: Remove grafana-loki and fluentbit containers [ESB3024-774]
- NEW: Extend num_endpoint_requests metric with host ID [ESB3024-975]
- NEW: Improved subnets endpoint. See API overview documentation for details. [ESB3024-1018]
- NEW: Support RHEL-9 / OL9 [ESB3024-1022]
- NEW: Support OpenSSL 3 [ESB3024-1025]
- NEW: Changed the router base image to Oracle Linux 9. See breaking changes [ESB3024-1034]
- NEW: Rename allowedProxies to trustedProxies [ESB3024-1085]
- NEW: Deny proxy connections by default if trustedProxies is empty [ESB3024-1088]
- FIXED: Too long classifier name crashes confd-transformer [ESB3024-949]
- FIXED: Lua condition si() doesn’t handle boolean values [ESB3024-1017]
- FIXED: Classifiers of type stringMatcher and regexMatcher can’t use content query params as source [ESB3024-1032]
- FIXED: ConsistentHashing algorithm is not content aware [ESB3024-1053]
- FIXED: Large configurations fail to apply. The REST API max body size is now configurable. [ESB3024-1056]
- FIXED: Convoy-bridge DB connection failure spams logs [ESB3024-1080]
- FIXED: Convoy-bridge does not send correctly formatted session-id [ESB3024-1081]
- FIXED: Response translation removes message body [ESB3024-1082]
Deprecations
- From esb3024-1.14.0, the grafana-loki and fluentbit containers have been deprecated and are no longer installed with the system. During an upgrade, these containers may be manually stopped and removed if not used.
System Requirements
- Starting with esb3024-1.14.0, the ACD-Router requires a minimum CPU architecture level of x86-64-v2 due to the inclusion of Oracle Linux 9 inside the container. While all modern CPUs support this architecture level, virtual hypervisors may default to a CPU type that has more compatibility with older processors. If this minimum CPU architecture level is not attained, the containers may refuse to start. See Operating System Compatibility and Building Red Hat Enterprise Linux 9 for the x86-64-v2 Microarchitecture Level for more information.
Known Limitations
The GUI is not working for this release.
The Telegraf metrics agent might not be able to read all relevant network interface data on some releases of ESB2001. The predictive load balancing function host_has_bw() and the health check function interfaces_online() might therefore not work as expected.
- The recommended workaround for host_has_bw() is to use host_has_bw_custom(), documented in Built-in Lua functions. A manual integration is necessary: set a custom selection input variable representing the network interface capacity and use it in host_has_bw_custom(). See API Overview for details on using the selection input API.
- The recommended workaround for interfaces_online() is to not use the function until the issue is resolved.
1.1.9.4 - Release esb3024-1.12.1
Build date
2024-07-03
Release status
Type: production
Breaking changes
If upgrading from a release prior to 1.10.0, the Director needs to be upgraded to 1.10.0, 1.10.1 or 1.10.2 before installing 1.12.1. See Installing a 1.12 release for more information.
Change log
- NEW: Remove support for EL7 [ESB3024-1046]
- FIXED: Large configuration causes crash [ESB3024-1043]
Known Limitations
The GUI is not working for this release.
1.1.9.5 - Release esb3024-1.12.0
Build date
2024-06-19
Release status
Type: production
Breaking changes
If upgrading from a release prior to 1.10.0, the Director needs to be upgraded to 1.10.0, 1.10.1 or 1.10.2 before installing 1.12.0. See Installing release 1.12.0 for more information.
Change log
- NEW: Move managed session creation to Lua. Creating managed sessions is now handled by using the session translation function. [ESB3024-454]
- NEW: Grafana dashboards to monitor Quality [ESB3024-511]
- NEW: Measure and expose quality scores. A quality score per host and session group is now available when making routing decisions. [ESB3024-512]
- NEW: Add default session classifiers. When resetting the list of classifiers in confd, it is now populated with commonly used classifiers. [ESB3024-769]
- NEW: Add configuration migration tool [ESB3024-824]
- NEW: Add new Random classifier [ESB3024-899]
- NEW: Add URL parsing code to Lua library. A URL parser based on https://github.com/golgote/neturl/ with extensions for path splitting and joining [ESB3024-936]
- NEW: Standard library Lua functions now use the same log mechanism as the Director [ESB3024-966]
- NEW: Extend ’num_sessions’ metric to include a label with the selected host [ESB3024-973]
- NEW: Add quality level metrics [ESB3024-974]
- NEW: Add host request translation function [ESB3024-996]
- FIXED: ConsistentHashing Algorithm only supports MD5. MD5, SDBM and Murmur are now supported. [ESB3024-929]
- FIXED: Confd IPv4 validation rejects IPs with /29 netmask [ESB3024-1010]
- FIXED: Stale timestamped selection input not being pruned. Added configurable timestamped selection input timeout limit. [ESB3024-1016]
Known Limitations
The GUI is not working for this release.
1.1.9.6 - Release esb3024-1.10.2
Build date
2024-06-03
Release status
Type: production
Breaking changes
If upgrading from a release prior to 1.10.0, the configuration needs to be manually updated after upgrading to 1.10.2. See Installing release 1.10.x for more information.
Change log
- FIXED: ConsistentHashing rule broken [ESB3024-969]
- FIXED: Increase configuration size limit [ESB3024-983]
Known Limitations
None
1.1.9.7 - Release esb3024-1.10.1
Build date
2024-04-18
Release status
Type: production
Breaking changes
If upgrading from a release prior to 1.10.0, the configuration needs to be manually updated after upgrading to 1.10.1. See Installing release 1.10.x for more information.
Change log
- NEW: Change predictive load balancing functions to use megabits/s [ESB3024-932]
- FIXED: Logic classifier statements can consume all memory [ESB3024-937]
Known Limitations
None
1.1.9.8 - Release esb3024-1.10.0
Build date
2024-04-02
Release status
Type: production
Breaking changes
The configuration needs to be manually updated after upgrading to 1.10.0. See Installing release 1.10.0 for more information.
Change log
- NEW: Use metrics from streamers in routing decisions. Added standard library Lua support to use hardware metrics in routing decisions. Added host health checks in the configuration. [ESB3024-154]
- NEW: Remove unused field “apiKey” from configuration [ESB3024-426]
- NEW: Support integration with Convoy Analytics [ESB3024-694]
- NEW: Support combining classifiers using AND/OR in session groups [ESB3024-776]
- NEW: Enable access logging by default [ESB3024-816]
- NEW: Improved Lua translation function error handling [ESB3024-874]
- NEW: Updated predictive load balancing functions to support hardware metrics [ESB3024-887]
- NEW: Remove apiKey from documentation [ESB3024-927]
- FIXED: Condition with ‘or’ statement sometimes generate faulty Lua [ESB3024-863]
Known Limitations
None
1.1.9.9 - Release esb3024-1.8.0
Build date
2024-02-07
Release status
Type: production
Breaking changes
The configuration needs to be manually updated after upgrading to 1.8.0. See Installing release 1.8.0 for more information.
Change log
- NEW: Remove ESB3026 Account Monitor from installer. [ESB3024-354]
- NEW: Improve selection input endpoint flexibility and security. See API overview documentation for details. [ESB3024-423]
- NEW: Support anonymous geoip rules [ESB3024-699]
- NEW: Add ASN IDs list classifiers to confd [ESB3024-778]
- NEW: Enable content popularity tracking by default. Added option to enable/disable in confd/confcli. [ESB3024-781]
- NEW: Remove dependency on session from security token verification [ESB3024-809]
- FIXED: A lot of JSON output on failed routing. HTTP response no longer contains internal routing information. [ESB3024-523]
- FIXED: Returning Lua table from Lua debug endpoint can crash router. Selection Input values now support floating point values in a Lua context [ESB3024-691]
- FIXED: Floating point selection inputs are truncated to ints when passed to Lua context [ESB3024-710]
- FIXED: Race condition between RestApi and Session [ESB3024-753]
- FIXED: confd/confcli doesn’t support “forward_host_header” on hostGroups [ESB3024-761]
- FIXED: Support Lua vector keys in reverse order [ESB3024-780]
Known Limitations
None
1.1.9.10 - Release esb3024-1.6.0
Build date
2023-12-20
Release status
Type: production
Breaking changes
The configuration needs to be manually updated after upgrading to 1.6.0. See configuration changes between 1.4.0 and 1.6.0 for more information.
Change log
- NEW: Remove the lua_paths array from the config. Lua scripts are now added using a REST API on the /v1/lua/ endpoint. [ESB3024-204]
- NEW: Separate “account-monitor” from installer [ESB3024-238]
- NEW: Consistent hashing based routing. Added support for content distribution control for load balancing and cache partitioning [ESB3024-274]
- NEW: Predictive load balancing. Account for in-transit traffic to prevent cache overload when there is a sudden burst of new sessions. [ESB3024-275]
- NEW: Support Convoy security tokens [ESB3024-386]
- NEW: Expose quality, host and session ID in the session object in Lua context [ESB3024-429]
- NEW: Support upgrade of system python in installer [ESB3024-442]
- NEW: Do not configure selinux and firewalld in installer [ESB3024-493]
- NEW: Convoy Distribution/Account integration [ESB3024-503]
- NEW: Make eDNS server port configurable. The router configuration hosts.proxy_address has been renamed to hosts.proxy_url and now accepts a port that is used when connecting to the proxy. The cdns.http_port and cdns.https_port configurations now configure the port that is used for connecting to the EDNS server; before, they configured the port that was used for connecting to the proxy. [ESB3024-509]
- NEW: Expand node table in Lua context. New fields are: node.id, node.visits, host.id, host.recent_selections [ESB3024-630]
- FIXED: DNS lookup can fail. DNS lookup can fail when the same content is requested from both IPv4 and IPv6 clients [ESB3024-427]
- FIXED: Failed DNS requests are not retried. Fixed bug where failed eDNS requests were not retried [ESB3024-504]
- FIXED: Lua functions are not updated when uploaded [ESB3024-544]
- FIXED: Undefined metatable fields evaluate to false rather than nil [ESB3024-642]
- FIXED: Evaluator::evaluate() doesn’t support different types of its variadic arguments [ESB3024-687]
- FIXED: Segfault when accessing REST api with empty path [ESB3024-752]
- FIXED: Container UID/GID may change between versions [ESB3024-755]
1.1.9.11 - Release esb3024-1.4.0
Build date
2023-09-29
Release status
Type: production
Breaking changes
- All configuration is now stored under /opt/edgeware/acd, see [ESB3024-425]. Any configuration that is to be kept needs to be manually migrated. Typically /opt/edgeware/etc/confd/store/store.json needs to be copied to /opt/edgeware/acd/confd/store/store.json, /opt/edgeware/var/lib/acd-router/cached-acd-router-config.json needs to be copied to /opt/edgeware/acd/router/cache/config.json and /opt/edgeware/var/lib/acd-router/cached-router-rest-api-key.json needs to be copied to /opt/edgeware/acd/router/cache/rest-api-key.json. Custom Lua functions need to be migrated from /opt/edgeware/acd/var/lib/custom_lua to /opt/edgeware/acd/router/lib/custom_lua. The Prometheus and Grafana configurations also need to be copied if they have been modified.
- The following changes were made to the confcli configuration [ESB3024-455]:
  - The rule fields inside the routing rule items were renamed to condition to avoid confusion with the rules list. This applies to the blocks allow, deny, split and weighted.
  - The popularityThreshold in the contentPopularity routing rule was renamed to contentPopularityCutoff.
Change log
- NEW: 1-Page Status Report. Added command ew-sysinfo that can be used on any machine with an ESB3024 installation. The command outputs various information about the system and installed services which can be used for monitoring and diagnostics. [ESB3024-391]
- NEW: Update routing rule property names. Routing rule property names updated for consistency and clarity [ESB3024-455]
- FIXED: Deleting confd API array element inside oneOf object fails [ESB3024-355]
- FIXED: Container logging not captured by systemd until services are restarted [ESB3024-359]
- FIXED: Alertmanager restricts the configuration to a single file [ESB3024-381]
- FIXED: Split rules in routing configuration should terminate on error [ESB3024-420]
- FIXED: Improve alert configuration in Prometheus [ESB3024-422]
- FIXED: Inconsistent storage paths of service configuration and data [ESB3024-425]
- FIXED: confd-transformer is not working in el7 [ESB3024-430]
1.1.9.12 - Release acd-router-1.2.3
Build date
2023-08-16
Release status
Type: production
Breaking changes
None
Change log
- NEW: Add more classifiers. New classifiers are hostName, contentUrlPath, userAgent, contentUrlQueryParameters [ESB3024-298]
- NEW: Add allow- and denylist rule blocks [ESB3024-380]
- NEW: Add enhanced validation of scriptable field in routing rules [ESB3024-393]
- NEW: Add services to the config tree [ESB3024-410]
- NEW: Prohibit unknown configuration properties [ESB3024-416]
- FIXED: Duplicate session group IDs are allowed [ESB3024-49]
- FIXED: Invalid URL returned for IPv4 requests when using a DNS backend [ESB3024-374]
- FIXED: Not possible to set log level in eDNS proxy [ESB3024-378]
- FIXED: Instream selection fails when DASH manifest has template paths using “../” [ESB3024-384]
1.1.9.13 - Release acd-router-1.2.0
Build date
2023-06-27
Release status
Type: production
Breaking changes
None
Change log
- NEW: Add meta fields to the configuration. The API now allows the metadata fields “created_at”, “source” and “source_checksum” that the API consumer can use to track who made which change and when.
- NEW: Control routing behavior based on backend response code. This gives control over when to return backend response codes to the end user and when to trigger a failover to another CDN or host.
- NEW: Manage Lua scripts via API
- NEW: Support popularity-based routing. Content can be ordered in multiple groups with descending popularity. Popularity can also be tracked per session group.
- NEW: Improved support for IPv6 routing. It is now possible to select backend depending on the IP protocol version.
- NEW: Add DNS backend support. This allows delegating routing decisions to an EDNS0 server.
- NEW: Support HMAC with SHA256 in Lua scripts
- NEW: Add alarm support. The alarms are handled by Prometheus and Alertmanager.
- NEW: Support saving Grafana Dashboards
- NEW: Add simplified configuration API and CLI tool. A new configuration API with an easier-to-use model has been added. The “confcli” tool present in many other Edgeware products is now supported.
- NEW: Add authentication to the REST API
- FIXED: Host headers not forwarded to Request Router when ‘redirecting: true’ is enabled
- FIXED: IP range classifier 0.0.0.0/0 does not work in session groups
1.1.9.14 - Release acd-router-1.0.0
Build date
2022-11-22
Release status
Type: First production release
Known Limitations
The setting “allowed_clients” should not be used since the functionality does not work as expected.
Change log
- Flexible routing rule engine with support for Lua plugins. Support many use cases, including CDN Offload and CDN Selection.
- Advanced client classification mechanisms for routing based on group memberships (device type, content type, etc).
- Geobased routing including dedicated high-performing API for subnet matching, associating an incoming request with a region.
- Integration API to feed the service with arbitrary variables to use for routing decisions. Can be used to get streaming bitrate in public CDNs, status from network probes, etc.
- Flexible request/response translation manipulation on the client facing interface. Can be used for URL manipulation, encoding/decoding tokens or adapting the interface to e.g. the PowerDNS backend protocol.
- Metrics API that can be monitored with standard monitoring software. Out-of-the-box integration with Prometheus and Grafana.
- Robust deployment with each service instance running independently, and allowing the service to stay in operational state even when backends become temporarily unavailable.
- RHEL 7/8 support.
- Online documentation at https://docs.agilecontent.com/
1.1.10 - Glossary
- ACD
- Agile CDN Director. See “Director”.
- Confd
- A backend service that hosts the service configuration. Comes with an API, a CLI and a GUI.
- Classifier
- A filter that associates a request with a tag that can be used to define session groups.
- Director
- The Agile Delivery OTT router and related services.
- ESB
- A software bundle that can be separately installed and upgraded, and is released as one entity with one change log. Each ESB is identified with a number. Over time, features and functions within an ESB can change.
- Lua
- A widely available scripting language that is often used to extend the capabilities of a piece of software.
- Router
- Unless otherwise specified, an HTTP router that manages an OTT session using HTTP redirect. There are also ways to use DNS instead of HTTP.
- Selection Input API
- Data posted to this API can be accessed by the routing rules and hence influence the routing decisions.
- Subnet API
- An API to define mappings between subnets and names (typically regions) for those subnets. Routing rules can then refer to the names rather than the subnets.
- Session Group
- A handle on a group of requests, defined via classifiers.
1.2 - ESB3032 ACD Aggregator
1.2.1 - Getting Started
The account aggregator is a service responsible for monitoring various input streams, compiling and aggregating statistics, and selectively reporting to one or more output streams. It acts primarily as a centralized collector of metrics which may have various aggregations applied before being published to one or more endpoints.
Modes of operation
There are two primary modes of operation: a live-monitoring mode and a reporting mode. The live-monitoring mode measures the account records in real time, filtering and aggregating the data to the various outputs as it arrives. In this mode, only the most recent data is considered, and any historical context present at startup may be skipped. In the reporting mode, the account record data is consumed and processed in the order in which it was published to Kafka, and the service guarantees that all records still available within the Kafka topic are processed and reported upon.
The modes of operation are activated through the set of input and output
blocks within the configuration file. The file may contain one or more input
blocks, which specify where the data is sourced, e.g. account records from
Kafka, and one or more output blocks, which determine how and where the
aggregated statistics are published.
While it is possible to specify multiple input
and output
blocks within
a single configuration file, it is highly recommended to separate each pairing
of input
and output
blocks into separate instances running on different
nodes. This will yield the best performance and provide for better load
balancing, since each instance will be responsible for a single mode of
operation.
Real-time account monitoring
In the real-time account monitoring mode, account records, which are sent from each streaming server through the Kafka message broker, are processed by the account aggregator, and current real-time throughput metrics are updated in a Redis database. These metrics, which are constantly being updated, reflect the most current state of the CDN, and can be used by the Convoy Request Router to make real-time routing decisions.
PCX Reporting
In the PCX collector mode, account records are consumed in such a way that past throughput and session statistics can be aggregated to produce billing related reports. These reports are not considered real-time metrics, but represent usage statistics over fixed time intervals. This mode of operation requires a PCX API compatible reporting endpoint. See Appendix B for additional information regarding the PCX reporting format.
Installation
Prerequisites
The account aggregator is shipped as a compressed OCI formatted container image and as such, it requires a supported container runtime such as one of the following:
- Docker
- Podman
- Kubernetes
Any runtime capable of running a Linux container should work the same. For simplicity, the following installation instructions assume that Docker is being used, and that Docker is already configured and running on the target system.
To test that Docker is set up and running, and that the current user has the required privileges to create a container, you may execute the following command.
$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
If you get a permission denied error, ensure that the current user is a member
of the docker
group or execute all Docker commands under sudo
.
Loading the container image
The container image is delivered as a compressed OCI formatted image, which
can be loaded directly via the docker load
command. The following assumes
that the image is in /tmp/esb3032-acd-aggregator-0.0.0.gz
docker load --input /tmp/esb3032-acd-aggregator-0.0.0.gz
You will now be able to verify that the image was loaded successfully by executing the following and looking for the image name in the output.
$ docker images | grep acd-aggregator
images.edgeware.tv/esb3032-acd-aggregator latest 4bbe28b444d3 1 day ago 2.08GB
Creating the configuration file
The configuration file may be located anywhere on the filesystem; however,
it is recommended to keep everything under the /opt/edgeware/acd/aggregator
folder to be consistent with other products in the ACD product family.
If that folder doesn’t already exist, you may create it with the
following command.
mkdir -p /opt/edgeware/acd/aggregator
If using a different location, you will need to map the folder to the container while creating the Docker container. Additional information describing how to map the volume is available in the section “Creating and starting the container” below.
The configuration file for the account aggregator is divided into several
sections, input
, output
and tuning
. One or more input
blocks may
be specified to configure from where the data should be sourced. One or
more output
blocks may be configured which determine to where the resulting
aggregated data is published. Finally the tuning
block configures various
global settings for how the account aggregator operates, such as the global
log_level
.
Configuring the input source
As of the current version of the account aggregator, there is only a single
type of input source supported, and that is account_records
. This input
source connects to a Kafka message broker, and consumes account records.
Depending on which output types are configured, the Kafka consumer may either
start by processing the oldest or most recent records first.
The following configuration block sample will be used as an example in the description below.
Note that the key input
is surrounded by double-square-brackets. This is
a syntax element to indicate that there may be multiple input
sections in
the configuration.
[[input]]
type = "account_records"
servers = [
"kafka://192.0.2.1:9092",
"kafka://192.0.2.2:9092",
]
group_name = "acd-aggregator"
kafka_max_poll_interval_ms = 30000
kafka_session_timeout_ms = 3000
log_level = "off"
The type
property is used to determine the type of input, and the only
valid value is account_records
.
The servers
list must contain at least 1 Kafka URL, prefixed with the
URL scheme kafka://
. If not specified, the default Kafka port of 9092
will be used. It is recommended but not required to specify all servers
here, as the Kafka client library will obtain the full list of endpoints
from the server on startup, however, the initial connection will be made
to one or more of the provided URLs.
The group_name
property identifies to which consumer group the aggregator
should belong. Due to the type of data which account records represent, each
instance of the aggregator connecting to the same Kafka message broker
MUST have a unique group name. If two instances belong to the same group,
the data will be partitioned among both instances, and the resulting
aggregations may not be correct. If only a single instance of the account
aggregator is used, this property is optional and defaults to “acd-aggregator”.
The kafka_*
properties, for max_poll_interval
and session_timeout
are
used to tune the connection parameters for the internal Kafka consumer. More
details for these properties can be found in the documentation for the
rdkafka
library. See Kafka documentation
for more details.
The log_level
property configures the logging level for the Kafka library
and supports the values “off”, “trace”, “debug”, “info”, “warn”, and “error”.
By default, logging from this library is disabled. This should only be
enabled for troubleshooting purposes, as it is extremely verbose, and any
warnings or error messages will be repeated in the account aggregator’s log.
The logging level for the Kafka library must be higher than the general
logging level for the aggregator, as defined in the “tuning” section, or the
lower-level messages from the Kafka library will be skipped.
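If the Kafka command line tools are available on one of the brokers, the consumer group can be inspected to verify that the aggregator is connected and to see how far behind it is. The broker address and group name below match the example configuration above but are otherwise assumptions:
# Show partition assignments and consumer lag for the aggregator's group
kafka-consumer-groups.sh --bootstrap-server 192.0.2.1:9092 \
    --describe --group acd-aggregator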
Configuring output
The account aggregator currently supports two types of output blocks, depending on the desired mode of operation. For reference purposes, both types will be described within this section, but it is recommended to only use a single type per instance of the account aggregator.
Note that the key output
is surrounded by double-square-brackets. This is
a syntax element to indicate that there may be multiple output
sections in
the configuration.
[[output]]
type = "account_monitor"
redis_servers = [
"redis://192.0.2.7:6379/0",
"redis://:password@192.0.2.8:6379/1",
]
stale_threshold_s = 12
throughput_correction_mbps = 0
minimum_check_interval_ms = 1000
[[output]]
type = "pcx_collector"
report_url = "https://192.0.2.5:8000/v1/collector"
client_id = "edgeware"
secret = "abc123"
report_timeout_ms = 2000
report_interval_s = 30
report_delay_s = 30
Real-time account monitor output
The first output block has the type account_monitor
and represents the
live account monitoring functionality, which publishes per-account bandwidth
metrics to one or more Redis servers. When this type of output
block is
configured, the account records will be consumed starting with the most
recent messages first, and offsets will not be committed. Stopping or
restarting the service may cause account records to be skipped. This type
of output
is suitable for making real-time routing decisions, but should
not be relied upon for critical billing or reporting metrics.
The redis_servers
list consists of URLs to Redis instances which shall be
updated with the current real-time bandwidth metrics. If the Redis instance
requires authentication, the global instance password can be specified as part
of the URL as in the second entry in the list. Since Redis does not support
usernames, anything before the :
in the credentials part of the URL will
be ignored. At least 1 Redis URL must be provided.
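To verify that a configured Redis instance is reachable with the URL given in redis_servers, redis-cli can be pointed at the same URL. This assumes redis-cli is installed on the host; the address matches the example above:
# Should answer PONG if the instance is reachable and the credentials are accepted
redis-cli -u redis://192.0.2.7:6379/0 ping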
The stale_threshold_s
property determines the maximum timeout in seconds,
after which, if no account records have been received for a given host,
the host will be considered stale and removed.
The throughput_correction_mbps
property can be used to add or subtract a
fixed correction factor to the bandwidth reported in Redis. This is specified
in megabits per second, and this may be either positive or negative. If
the value is negative, and the calculated bandwidth is less than the correction
factor, a minimum bandwidth of 0 will be reported.
The minimum_check_interval_ms
property is used to throttle how frequently
the statistics will be processed. By default, the account aggregator will not
recalculate the statistics more than once per second. Setting this value too
low will result in potentially higher CPU usage, while setting it too high may
result in some account records being missed. The default of 1 second should
be adequate for most situations.
PCX Collector output
The pcx_collector
type configures the account aggregator as a reporting
agent for the PCX API. Whenever this configuration is present, the account
record consumer will be configured to always start at the oldest records
retained within the Kafka topic. It then processes the records one at a time,
committing the Kafka offset each time a report is successfully received.
This mode does not make any guarantees as to how recent the data is on which
the reports are made, but does guarantee that every record will be counted
in the aggregated report. Stopping or restarting the service will result in
the account record consumer resuming processing from the last successful
report. This type of reporting is suitable for billing purposes assuming
that there are multiple replicated Kafka nodes, and that the service is not
stopped for longer than the maximum retention period configured within
Kafka. Stopping the service for longer than the retention period will result
in messages being unavailable. Because this type of output
requires that the Kafka records are processed in order, and will not read
additional messages until all pending reports have been successfully accepted,
it is not recommended to configure both pcx_collector
and account_monitor
output blocks within the same instance.
The report_url
property is a single HTTP endpoint URL where the PCX API
can be reached. This property is required and may be either an HTTP or HTTPS
URL. For HTTPS, the validity of the TLS certificate will be enforced, meaning
that self-signed certificates will not be considered valid.
The client_id
and secret
fields are used to authenticate the client with
the PCX API via token-based authentication. Both fields are required; if they
are not used by the specific PCX API instance, the empty string ""
may be provided.
The report_timeout_ms
field is an optional timeout, in milliseconds, after which the HTTP
connection to the PCX API is considered failed. Failed reports
will be retried indefinitely.
The report_interval_s
property represents the interval bucket size for
reporting metrics. The timing for this type of output
is based solely
on the embedded timestamp
value of the account records, meaning that this
property is not an absolute period on which the reports will be sent, but
instead represents the duration between the start and ending timestamps of
the report. Especially upon startup, reports may be sent much more frequently
than this interval, but will always cover this duration of time.
The report_delay_s
property is an optional offset used to account for both
clock synchronization between servers as well as propagation delay of the
account records through the message broker. The default delay is 30 seconds.
This means that the ending timestamp of a given report will be no more recent
than this many seconds in the past. It is important to include this delay, as
any account records received with a timestamp that falls within a period which
has already been reported upon will be dropped.
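As an illustration, with the defaults report_interval_s = 30 and report_delay_s = 30, the report covering the interval 12:00:00 to 12:00:30 will not be sent before roughly 12:01:00, giving account records with timestamps in that interval an extra 30 seconds to arrive and still be counted.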
Tuning the account aggregator
The tuning
configuration block represents the global properties for tuning
how the account aggregator functions. Currently only one tuning property
can be configured, and that is the log_level
. The default log_level
is “info”, which should be used in normal operation of the account aggregator.
The possible values, in decreasing order of verbosity, are “trace”,
“debug”, “info”, “warn”, “error”, and “off”.
Note that the tuning
key is surrounded by single square-brackets. This is
TOML syntax meaning that only one instance of tuning
is allowed.
[tuning]
log_level = "info"
Example configurations
This section describes some example configuration files which can be used as a starting template depending on which mode of operation is desired.
Real-time account monitoring
This configuration will consume account records from a Kafka server running on 3 hosts, kafka-1, kafka-2, and kafka-3. The account records will be consumed starting with the most recent records. The resulting aggregations will be published to two Redis instances, running on redis-1 and redis-2. The reported bandwidth will have a 2Gb/s correction factor applied.
[[input]]
type = "account_records"
servers = [
"kafka://kafka-1:9092",
"kafka://kafka-2:9092",
"kafka://kafka-3:9092"
]
group_name = "acd-aggregator-live"
# kafka_max_poll_interval_ms = 30000
# kafka_session_timeout_ms = 3000
# log_level = "off"
[[output]]
type = "account_monitor"
redis_servers = [
"redis://redis-1:6379/0",
"redis://redis-2:6379/0",
]
# stale_threshold_s = 12
throughput_correction_mbps = 2000
# minimum_check_interval_ms = 1000
[tuning]
log_level = "info"
The keys prefixed by #
are commented out, since the default values will
be used. They are included in the example for completeness.
PCX collector
This configuration will consume account records starting from the earliest
record, calculate aggregated statistics for every 30 seconds, offset with
a delay of 30 seconds, and publish the results to
https://pcx.example.com/v1/collector
.
[[input]]
type = "account_records"
servers = [
"kafka://kafka-1:9092",
"kafka://kafka-2:9092",
"kafka://kafka-3:9092"
]
group_name = "acd-aggregator-pcx"
# kafka_max_poll_interval_ms = 30000
# kafka_session_timeout_ms = 3000
# log_level = "off"
[[output]]
type = "pcx_collector"
report_url = "https://pcx.example.com/v1/collector"
client_id = "edgeware"
secret = "abc123"
# report_timeout_ms = 2000
# report_interval_s = 30
# report_delay_s = 30
[tuning]
log_level = "info"
The keys prefixed by #
are commented out, since the default values will
be used. They are included in the example for completeness.
Combined PCX collector with real-time account monitoring
While this configuration is possible, it is not recommended, since the
pcx_collector
output
type will force all records to be consumed starting
at the earliest record. This will cause the live statistics to be delayed
until ALL earlier records have been consumed, and reports have been
successfully accepted by the PCX API. This combined role configuration can be
used to minimize the number of servers or services running if the above
limitations are acceptable.
Note: This is simply the combination of the above two output
blocks in the
same configuration file.
[[input]]
type = "account_records"
servers = [
"kafka://kafka-1:9092",
"kafka://kafka-2:9092",
"kafka://kafka-3:9092"
]
group_name = "acd-aggregator-combined"
# kafka_max_poll_interval_ms = 30000
# kafka_session_timeout_ms = 3000
# log_level = "off"
[[output]]
type = "account_monitor"
redis_servers = [
"redis://redis-1:6379/0",
"redis://redis-2:6379/0",
]
# stale_threshold_s = 12
throughput_correction_mbps = 2000
# minimum_check_interval_ms = 1000
[[output]]
type = "pcx_collector"
report_url = "https://pcx.example.com/v1/collector"
client_id = "edgeware"
secret = "abc123"
# report_timeout_ms = 2000
# report_interval_s = 30
# report_delay_s = 30
[tuning]
log_level = "info"
Upgrading
The upgrade procedure for the aggregator consists of simply stopping
the existing container with docker stop acd-aggregator
, removing the
existing container with docker rm acd-aggregator
, and following the
steps in “Creating and starting the container” below with the upgraded
Docker image.
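For reference, assuming the container is named acd-aggregator as in the examples in this section, the first two steps are:
docker stop acd-aggregator
docker rm acd-aggregator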
To roll back to a previous version, simply perform the same steps with the
previous image. It is recommended to keep at least one previous image
around until you are satisfied with the new version, after which you may
remove the previous image with
docker rmi images.edgeware.tv/esb3032-acd-aggregator:1.2.3
where “1.2.3”
represents the previous version number.
Creating and starting the container
Now that the configuration file has been created, and the image has been
loaded, we will need to create and start the container instance. The
following docker run
command will create a new container called
“acd-aggregator”, start the process, and automatically resume the container
once the Docker daemon is loaded at startup.
docker run \
--name "acd-aggregator" \
--detach \
--restart=always \
-v <PATH_TO_CONFIG_FOLDER>:/opt/edgeware/acd/aggregator:ro \
<IMAGE NAME>:<VERSION> \
--config /opt/edgeware/acd/aggregator/aggregator.toml
As an example using version 1.4.0:
docker run \
--name "acd-aggregator" \
--detach \
--restart=always \
-v /opt/edgeware/acd/aggregator:/opt/edgeware/acd/aggregator:ro \
images.edgeware.tv/esb3032-acd-aggregator:1.4.0 \
--config /opt/edgeware/acd/aggregator/aggregator.toml
Note: The image tag in the example is “1.4.0”; you will need to replace
that tag with the image tag loaded from the compressed OCI formatted image
file, which can be obtained by running docker images
and searching for the
account aggregator image as described in the step “Loading the container image”
above.
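For example, assuming the image repository name matches the examples in this guide, the loaded tag can be listed with
docker images | grep esb3032-acd-aggregator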
If the configuration file saved in the previous step was at a different
location from /opt/edgeware/acd/aggregator/aggregator.toml
you will need to
change both the -v
option and the --config
option in the above
command to represent that location. The -v
option mounts the containing
folder from the host system on the left to the corresponding path inside
the container on the right, and the :ro
tells Docker that the volume is
mounted read-only. The --config
should be the absolute path to the
configuration file from INSIDE the container. For example, if you saved
the configuration file as /host/path/config.toml
on the host, and you need
to map that to /container/path/config.toml
within the container, the lines
should be -v /host/path:/container/path:ro
and
--config /container/path/config.toml
respectively.
The --restart=always
line tells Docker to automatically restart the
container when the Docker runtime is loaded, and is the equivalent in
systemd to “enabling” the service.
Starting and Stopping the container
To view the status of the running container, use the docker ps
command.
This will give a line of output for the acd-aggregator
container if it
is currently running. Appending the -a
flag will also list the aggregator container if it is not running.
Execute the following:
docker ps -a
You should see a line for the container with the container name “acd-aggregator” along with the current state of the container. If all is OK, you should see the container process running at this point, but it may show as “exited” if there was a problem.
To start and stop the container the docker start acd-aggregator
and
docker stop acd-aggregator
commands can be used.
Viewing the logs
By default, Docker will maintain the logs of the individual containers within its own internal logging subsystem, which requires the user to use the command
docker logs
to view them. It is possible to configure the Docker daemon to send logs to the system journal instead, but configuring that is beyond the scope of this document. Additional details are available at
[https://docs.docker.com/config/containers/logging/journald/].
To view the complete log for the aggregator the following command can be used.
docker logs acd-aggregator
The -f
flag can be supplied to “follow” the log until either the
process terminates or CTRL+C is pressed.
docker logs -f acd-aggregator
Appendix A: Real-time account monitoring
Redis key value pairs
Each account will have a single key-value pair stored in Redis with the current throughput, with any correction factor applied. The value is updated in real time every time a new account record has been received from all hosts serving the given account. This should be approximately every 10 seconds, but may vary slightly due to processing time.
The keys are structured in the following format:
bandwidth:<account>:value
and the value is reported in bits-per-second.
For example for accounts foo
, bar
and baz
we may see the following:
bandwidth:foo:value = 123456789
bandwidth:bar:value = 234567890
bandwidth:baz:value = 102400
These values represent the most current throughput for each account and will be updated periodically. A TTL of 48 hours is added to the keys, such that they will be pruned automatically 48 hours after the last update. This is to prevent stale keys from remaining in Redis indefinitely. This TTL is not configurable by the end user.
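As a quick check, the current value and remaining TTL of a key can be read with redis-cli, assuming a Redis instance from the redis_servers list is reachable and using the account foo and host redis-1 from the examples in this document:
redis-cli -h redis-1 GET bandwidth:foo:value
redis-cli -h redis-1 TTL bandwidth:foo:value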
Appendix B: PCX collector reporting
PCX reporting format
The following is an example of the report sent to the PCX HTTP endpoint.
{
timestamp_begin: 1674165540,
timestamp_end: 1674165570,
writer_id: "writer-1",
traffic: [
Traffic {
account_id: "unknown",
num_ongoing_sessions: 0,
bytes_transmitted: 0,
edges: [
Edge {
server: "orbit-1632",
num_ongoing_sessions: 0,
bytes_transmitted: 0,
},
],
},
Traffic {
account_id: "default",
num_ongoing_sessions: 747,
bytes_transmitted: 75326,
edges: [
Edge {
server: "orbit-1632",
num_ongoing_sessions: 747,
bytes_transmitted: 75326,
},
],
},
],
}
The report can be broken down into 3 parts. The outer root section includes
the starting and stopping timestamps, as well as a writer_id
field which is
currently unused. For each account a Traffic
section contains the
aggregated statistics for that account, as well as a detailed breakdown of
each Edge
. An Edge
is the portion of traffic for the account streamed
by each server. Within an Edge
the num_ongoing_sessions
represents the
peak ongoing sessions during the reporting interval, while the
bytes_transmitted
represents the total egress bandwidth in bytes over the
entire period. For each outer Traffic
section, the num_ongoing_sessions
and bytes_transmitted
represent the sum of the corresponding entries in
all Edges
.
Data protection and consistency
The ACD aggregator works by consuming messages from Kafka. Once a report has successfully been submitted, as determined by a 200 OK HTTP status from the reporting endpoint, the position in the Kafka topic will be committed. This means that if the aggregator process stops and is restarted, reporting will resume from the last successful report, and no data will be lost. There is a limitation to this, however, and that has to do with the data retention time of the messages in Kafka and the TTL value specified in the aggregator configuration. Both default to the same value of 24 hours. This means that if the aggregator process is stopped for more than 24 hours, data loss will result since the source account records will have expired from Kafka before they can be reported on by the aggregator.
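To verify how far the reporting has progressed, the committed offsets of the aggregator's consumer group can be inspected with the standard Kafka tooling. This is only a sketch: it assumes the kafka-consumer-groups.sh tool is available on one of the Kafka hosts and that the group name matches the group_name in the aggregator configuration.
kafka-consumer-groups.sh --bootstrap-server kafka-1:9092 --describe --group acd-aggregator-pcx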
Upon startup of the aggregator, all records stored in Kafka will be reported on in the order they are read, starting from either the last successful report or the oldest record currently in Kafka. Reports will be sent each time the timestamp in the current record read from Kafka exceeds the reporting interval meaning a large burst of reports will be sent at startup to cover each interval. Once the aggregator has caught up with the backlog of account records, it will send a single report roughly every 30 seconds (configurable).
It is not recommended to have more than a single account aggregator instance reading from Kafka at a time, as this will result in partial reports being sent to the HTTP endpoint which will require the endpoint to reconstruct the data upon receipt. All redundancy in the account aggregator is handled by the redundancy within Kafka itself. With this in mind, it is important to ensure that there are multiple Kafka instances running and that the aggregator is configured to read from all of them.
1.2.2 - Releases
1.2.2.1 - Release esb3032-0.2.0
Build date
2022-12-21
Release status
Type: devdrop
Change log
- NEW: Use config file instead of command line switches
- NEW: Reports are now aligned with wall-clock time
- NEW: Reporting time no longer contains gaps in coverage
- FIX: Per-account number of sessions only shows largest host
1.2.2.2 - Release esb3032-1.0.0
Build date
2023-02-14
Release status
Type: production
Change log
- NEW: Create user documentation for ACD Aggregator
- NEW: Simplify configuration. Changed from YAML to TOML format.
- NEW: Handle account records arriving late
- FIXED: Aggregator hangs if committing to Kafka delays more than 5 minutes
1.2.2.3 - Release esb3032-1.2.1
Build date
2023-04-24
Release status
Type: production
Breaking changes
No breaking changes
Change log
- NEW: Port Account Monitor functionality for Convoy Request Router
- NEW: Aggregator Performance Improvements
- FIXED: Reports lost when restarting acd-aggregator
1.2.2.4 - Release esb3032-1.4.0
Build date
2023-09-28
Release status
Type: production
Breaking changes
None
Change log
- NEW: Extend aggregator with additional metrics. Per streamer bandwidth and total bandwidth are now updated in Redis. [ESB3032-98]
- FIXED: Not all Redis instances are updated after a failure [ESB3032-99]
- FIXED: Kafka consumer restarts on Partition EOF [ESB3032-100]
1.3 - ACD Cache
Information about the ACD Cache Orbit version is available in
EDGS-103 Orbit TV Server User Guide, and information about the
ACD Cache SW Streamer version is available in EDGS-171 SW Streamer User Guide.
1.4 - ESB3013 BGP Sniffer
Information about the ACD BGP Sniffer is available in
EDGS-214 ESB3013 User Guide.
1.5 - Edgeware CDN Management System
Information about the classic Edgeware CDN Management System (aka Convoy)
is available in EDGS-069 Convoy Management Software User Guide.
1.6 - ESB3008 Edgeware CDN Request Router
Information about the Convoy-based Edgeware CDN Request Router
is available in EDGS-197 ESB3008 HTTP Request Router - User Guide.
2 - Introduction
ACD is a software suite for CDNs (Content Delivery Networks). The services built around the ACD software come into play when a user starts to view video content on a video streaming portal. The focus is on the server side of the delivery. The portal and players or set-top boxes are not part of ACD, although they are part of Agile Content's product portfolio as a whole. Likewise, video stream repackaging and DRM are not part of ACD (see ACP).
The ACD software can be used to build a complete private CDN and/or to select between CDNs. A typical deployment is a nation-wide TV streaming service operated by a telecom operator or a broadcaster.
At the core of the system sits ESB3024 Router, which processes each incoming client request and makes an optimized decision on where to serve the user from. The router will classify each playback request according to a variety of configured parameters such as device type, geographic region or content class. Based on the classification, together with the current state of the network, its streaming capacity and the status of public offload CDNs, the best streaming location is determined. If desired, business rules for when to use public CDNs can be applied.
The ACD software suite includes the ACD Cache that is installed on servers placed in data centers that will guarantee an efficient and robust distribution of the content. Depending on the core network architecture, a centralized or decentralized approach is best. Agile Content can help find a good system solution.
Behind the client facing interfaces, there are backend services to monitor and measure the system. This information can be used to troubleshoot network problems, analyze viewing patterns or to plan upgrades of capacity.
Returning to the router, it is built around a very flexible routing rule evaluation engine that can route between and within CDNs. The rule evaluation engine weighs together multiple sources of information to optimize system behavior. Apart from the request classification of device type, content class etc., there are also several integration APIs that can be used to extend the product and solve problems that are particular to a specific customer. One of those integration APIs is the Selection Input API, which can take any external information such as streaming bitrate of public CDNs, load balancing settings from a GUI slider or status from network probes and include it in the routing decision rules. Another is the Subnet API that can efficiently handle thousands of subnets and map each one to a high-level group membership (typically a region) and route based on that. There is also a metrics API to monitor the service. It integrates out-of-the-box with Prometheus and Grafana.
All APIs are documented under the API overview.
In addition to the flexibility provided by the APIs, there is also a possibility to extend functionality via plugins (using Lua). This can be used to support the decoding and encoding of security tokens (digital signatures of the URLs) or advanced routing rules that are not available out-of-the-box.
A modern TV CDN is considered critical infrastructure and has very high requirements on uptime and robustness. This is obtained by running multiple independent and stand-alone instances of each service (although particular use cases might require sharing data via a redundant database). The ACD software services run on RHEL8 or RHEL9 Linux or compatible distributions such as Oracle Linux.
Note: Not all products or components that are part of ACD are documented here. Please contact our Sales for a complete overview.
3 - Solutions
3.1 - Cache hardware metrics: monitoring and routing
Observability and monitoring are important in a CDN. To facilitate this, we use the open source application Telegraf for two key roles: as a metrics agent and as a metrics aggregator. Telegraf performs either of these roles depending on its configuration.
The image below demonstrates an example architecture when using multiple caches and instances of ESB3024 Router.
Metrics agent
All deployments of ESB2001 Orbit TV Server (from version 3.6.0) and ESB3004 SW Streamer (from version 1.36.0) come bundled with an instance of Telegraf running as a metrics agent, collecting information on hardware metrics such as usage of CPU, memory and network interfaces. If you are using other caches in your CDN, Telegraf can be manually installed and configured.
Telegraf is configured via a configuration daemon, telegraf-configd
. On the
Orbit TV Server it starts automatically, but on the SW Streamer it needs to be
started manually by typing this:
systemctl start telegraf-configd
To make it always start during boot, type
systemctl enable telegraf-configd
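To verify that the configuration daemon is running, its status can be checked with
systemctl status telegraf-configd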
Once the configuration daemon is running, Telegraf can be configured using
confcli
:
$ confcli integration.acd.telegraf
{
"telegraf": {
"enable": true,
"exportUrls": [],
"hostname": ""
}
}
To enable or disable the Telegraf instance, type
$ confcli integration.acd.telegraf.enable true
integration.acd.telegraf.enable = True
$ confcli integration.acd.telegraf.enable false
integration.acd.telegraf.enable = false
The Telegraf agents need to export their metrics to an aggregator to be useful.
The field integration.acd.telegraf.exportUrls
is a list of Telegraf aggregator
instances to which the metrics will be exported. It is recommended to use at
least two Telegraf aggregator instances for redundancy.
As an example, type this to configure a cache to export metrics to
http://aggregator-host-1:8086
and http://aggregator-host-2:8086
:
$ confcli integration.acd.telegraf.exportUrls -w
Running wizard for resource 'exportUrls'
<A list of aggregator URLs to export metrics to (default: [])>
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
exportUrls <A list of aggregator URLs to export metrics to (default: [])>: [
exportUrl (default: ): http://aggregator-host-1:8086
Add another 'exportUrl' element to array 'exportUrls'? [y/N]: y
exportUrl (default: ): http://aggregator-host-2:8086
Add another 'exportUrl' element to array 'exportUrls'? [y/N]: n
]
Generated config:
{
"exportUrls": [
"http://aggregator-host-1:8086",
"http://aggregator-host-2:8086"
]
}
Merge and apply the config? [y/n]: y
The Telegraf agent tags all metrics it collects with the hostname of the machine
it is running on. To change this hostname, the field
integration.acd.telegraf.hostname
can be modified:
$ confcli integration.acd.telegraf.hostname 'new-hostname'
integration.acd.telegraf.hostname = 'new-hostname'
Metrics aggregator
Telegraf is not installed together with the router since its setup depends on your system. Either follow these steps or ask an Agile Content integrator to set it up for your system. To set up Telegraf as an aggregator on your desired hosts, two steps need to be performed.
- Install Telegraf according to official instructions.
- Configure Telegraf to act as an aggregator by adding the following
configuration file:
/etc/telegraf/telegraf.conf
This makes Telegraf export the metrics to the timestamped selection input
endpoint (i.e. https://acd-host:5001/v2/timestamped_selection_input
). By
default, metrics sent to this endpoint have a timeout of 15 seconds, after which
the data will be removed to avoid using stale data. This timeout is configurable
by running
$ confcli services.routing.tuning.general.timestampedSelectionInputTimeoutSeconds 30
services.routing.tuning.general.timestampedSelectionInputTimeoutSeconds = 30
Due to a limitation of the [[outputs.http]]
plugin, multiple output URLs cannot be defined as a list; the plugin only
accepts a single URL. This means that the whole [[outputs.http]]
and [outputs.http.headers]
sections need to be repeated for each router host.
Each router section needs to be modified in two places:
- Below [[outputs.http]], the url directive has to point to the router’s address.
- Below [outputs.http.headers], X-API-Key has to contain the router’s REST API key. The key is stored in /opt/edgeware/acd/router/cache/rest-api-key.json on the router server.
After the configuration file is complete, type systemctl reload telegraf
to
make Telegraf reload the new configuration.
When the aggregator is successfully started and configured, the metrics JSON packet is sent to the selection input API with the following structure:
{
"cache-1": {
"hardware_metrics": {
"/": {
"free": 18113810432,
"total": 34887954432,
"used": 16774144000,
"used_percent": 48.08004445400899
},
"cpu_load1": 0.70,
"cpu_load5": 0.53,
"cpu_load15": 0.40,
"mem_available": 3129425920,
"mem_available_percent": 76.34786936062332,
"mem_total": 4098904064,
"mem_used": 519901184,
"n_cpus": 2
},
"per_interface_metrics": {
"eth0": {
"bytes_recv": 57585596566,
"bytes_recv_rate": 939.4, // bytes per second
"bytes_sent": 10127106702,
"bytes_sent_rate": 637.9, // bytes per second
"drop_in": 1800079,
"drop_in_rate": 0,
"drop_out": 0,
"drop_out_rate": 0,
"err_in": 0,
"err_in_rate": 0,
"err_out": 0,
"err_out_rate": 0,
"interface_up": true,
"link": 1,
"megabits_recv": 460684,
"megabits_recv_rate": 0.0075152, // megabits per second
"megabits_sent": 81016,
"megabits_sent_rate": 0.0051032, // megabits per second
"speed": 100
}
},
"timestamp": 1709296800
}
}
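For a quick manual test, a packet with the structure above can be sent to the timestamped selection input endpoint mentioned earlier in this section. This is only a sketch: the HTTP method and exact payload validation are assumptions, acd-host is a placeholder for your router host, and the X-API-Key value must be taken from /opt/edgeware/acd/router/cache/rest-api-key.json on the router server.
curl -k -X POST https://acd-host:5001/v2/timestamped_selection_input \
  -H 'X-API-Key: <REST_API_KEY>' \
  -H 'Content-Type: application/json' \
  --data @metrics.json
Here metrics.json is a file containing a JSON packet like the one shown above.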
Monitoring
All router installations come bundled with a Prometheus instance that can be configured to scrape the aggregator instances, allowing alarms and visualization of all caches’ metrics.
On a router host, modify the configuration file /opt/edgeware/acd/prometheus/prometheus.yaml
to include a scrape job for the aggregators in your system:
global:
scrape_interval: 15s
rule_files:
- recording-rules.yaml
scrape_configs:
- job_name: 'router-scraper'
scheme: https
tls_config:
insecure_skip_verify: true
static_configs:
- targets:
- 10.16.48.106:5001
metrics_path: /m1/v1/metrics
honor_timestamps: true
- job_name: 'edns-proxy-scraper'
scheme: http
static_configs:
- targets:
- 10.16.48.106:8888
metrics_path: /metrics
honor_timestamps: true
- job_name: 'metric-aggregators'
scheme: http
static_configs:
- targets:
- aggregator-host-1:12001
- aggregator-host-2:12001
metrics_path: /metrics
honor_timestamps: true
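Optionally, the edited file can be checked for syntax errors before restarting Prometheus. If the promtool utility is available on the host (an assumption; it is not necessarily installed), run:
promtool check config /opt/edgeware/acd/prometheus/prometheus.yaml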
Once the new configuration file is set, restart the Prometheus instance with
systemctl restart acd-prometheus
. When the Prometheus instance starts scraping the
aggregators, all caches’ metrics will be available for visualization in Grafana
and alarms in AlertManager.
Using hardware metrics in routing
When the hardware metrics have been successfully injected into the selection input API, the data is ready to be used for routing decisions. In addition to creating routing conditions, hardware metrics are particularly suited to host health checks.
Host health checks
Host health checks are Lua functions that determine whether or not a host is ready to take on more clients. To configure host health checks, see configuring CDNs and hosts. See Built-in Lua functions for documentation on how to use the built-in health functions. Note that these health check functions can also be used for making routing conditions in the routing tree.
Routing based on cache metrics
Instead of using health check functions to incorporate hardware metrics into the routing decisions, regular routing conditions can be used.
As an example, using the health check function cpu_load_ok()
in routing can be
configured as follows:
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: firstMatch
Adding a 'firstMatch' element
rule : {
name (default: ): dont_overload_cache
type (default: firstMatch):
targets : [
target : {
onMatch (default: ): default_host
condition (default: always()): cpu_load_ok()
}
Add another 'target' element to array 'targets'? [y/N]: y
target : {
onMatch (default: ): offload_host
condition (default: always()):
}
Add another 'target' element to array 'targets'? [y/N]: n
]
}
Add another 'rule' element to array 'rules'? [y/N]: n
]
Generated config:
{
"rules": [
{
"name": "dont-overload-cache",
"type": "firstMatch",
"targets": [
{
"onMatch": "default_host",
"condition": "cpu_load_ok()"
},
{
"onMatch": "offload_host",
"condition": "always()"
}
]
}
]
}
Merge and apply the config? [y/n]: y
{
"routing": {
"id": "dont_overload_cache",
"member_order": "sequential",
"members": [
{
"id": "default-node",
"host_id": "default-host",
"weight_function": "return cpu_load_ok()"
},
{
"id": "offload-node",
"host_id": "offload-host",
"weight_function": "return 1"
}
]
}
}
3.2 - Private CDN Offload Routing with DNS
This shows a setup to do DNS based routing and 3rd party CDN offload from a private main CDN. The following components are used in the setup:
- A private Edgeware CDN with Convoy (ESB3006), Request Router (ESB3008) and SW Streamers (ESB3004)
- ESB3024 Router to offload traffic to an external CDN
- CoreDNS to support DNS based routing using ESB3024
The following figure shows an overview of the different components and how they interact.
A client retrieves a DNS name from its content portal. The DNS name is then resolved by CoreDNS, which asks the router for a suitable host. The router either returns a host from ESB3008 Request Router or a configured offload host.
The following sequence diagram illustrates the flow.
Follow the links below to configure different use cases of this setup.
3.3 - Monitor ACD with Prometheus, Grafana and Alert Manager
ACD can be monitored with standard monitoring solutions. This allows you to adapt the monitoring to your needs. The following shows a setup with these components:
- Prometheus as metrics database
- Grafana to create Dashboards
- Alert Manager to create alarms from Prometheus alerts
Prometheus and Grafana can also be installed with the ESB3024 Router installer, in which case Grafana is also pre-populated with standard routing dashboards as well as the router troubleshooting dashboards.
4 - Use Cases
4.1 - Route on GeoIP/ASN
This page describes how to write configuration for GeoIP and ASN-based routing. For configuration in general, see Configuration.
For more details on session groups and classifiers, see Session Groups and Classification.
ASN
Routing on ASN is done through a combination of session group classifiers and Lua script weight functions. We need to create ASN classifier(s) and then associate them with a session group:
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: geoip
Adding a 'geoip' element
classifier : {
name (default: ): allowed_asn_classifier
type (default: geoip): ⏎
inverted (default: False): ⏎
continent (default: ): ⏎
country (default: ): ⏎
cities : [
city (default: ): ⏎
Add another 'city' element to array 'cities'? [y/N]: ⏎
]
asn (default: ): Agile*ISP
}
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
Generated config:
{
"classifiers": [
{
"name": "allowed_asn_classifier",
"type": "geoip",
"inverted": false,
"continent": "",
"country": "",
"cities": [],
"asn": "Agile*ISP"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.sessionGroups -w
Running wizard for resource 'sessionGroups'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
sessionGroups : [
sessionGroup : {
name (default: ): allowed_asn
classifiers : [
classifier (default: ): allowed_asn_classifier
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
}
Add another 'sessionGroup' element to array 'sessionGroups'? [y/N]: ⏎
]
Generated config:
{
"sessionGroups": [
{
"name": "allowed_asn",
"classifiers": [
"allowed_asn_classifier"
]
}
]
}
Merge and apply the config? [y/n]: y
{
"session_groups": [
{
"id": 1,
"name": "allowed_asn",
"classifiers": [
[
{
"id": 1,
"inverted": false,
"name": "allowed_asn_classifier",
"rule": {
"rule_type": "geoip_rule",
"source": "session/client_ip",
"asn": "Agile*ISP"
}
}
]
]
}
]
}
The asn
value is a string, matching an ISP name or similar. It supports
wildcard matching using asterisks and is case insensitive.
Note that each ASN classifier has to be in its own list. This is because classifiers within the same inner list all have to match for the entire list to be true, and that is impossible for classifiers that match against different ASNs. When the classifiers are in their own lists, it’s enough that one of them matches for the outer classifier list to also match.
Simple Example
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: split
Adding a 'split' element
rule : {
name (default: ): asn_split_node
type (default: split): ⏎
rule (default: ): return in_session_group("allowed_asn")
onMatch (default: ): allowed-asn-host
onMiss (default: ): offload-host
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "asn_split_node",
"type": "split",
"condition": "in_session_group('allowed_asn')",
"onMatch": "allowed-asn-host",
"onMiss": "offload-host"
}
]
}
Merge and apply the config? [y/n]: y
{
"content_server": {
"http_enable": true,
"http_port": 80,
"https_enable": true,
"https_port": 443
},
"session_groups": [
{
"id": 1,
"name": "allowed_asn",
"classifiers": [
[
{
"id": 1,
"inverted": false,
"name": "allowed_asn_classifier_agile",
"rule": {
"rule_type": "geoip_rule",
"source": "session/client_ip",
"asn": "Agile*ISP"
}
}
],
[
{
"id": 1,
"inverted": false,
"name": "allowed_asn_classifier_edgeware",
"rule": {
"rule_type": "geoip_rule",
"source": "session/client_ip",
"asn": "Edgeware *"
}
}
]
]
}
],
"cdns": [
{
"id": "basic-cdn",
"http_port": 80,
"https_port": 443,
"redirecting": false,
"manifest_availability_check": {
"enabled": false,
"session_group_ids": []
}
},
{
"id": "offload-cdn",
"http_port": 80,
"https_port": 443,
"redirecting": false,
"manifest_availability_check": {
"enabled": false,
"session_group_ids": []
}
}
],
"hosts": [
{
"id": "default-host",
"cdn_id": "basic-cdn",
"address_family": "ipv4",
"host": "cdn-host.example"
},
{
"id": "offload-host",
"cdn_id": "offload-cdn",
"address_family": "ipv4",
"host": "offload-host.example"
}
],
"routing": {
"id": "routing_table",
"member_order": "sequential",
"members": [
{
"id": "default-node",
"host-id": "default-host",
"weight_function": "return in_session_group('allowed_asn')"
},
{
"id": "offload-node",
"host_id": "offload-host",
"weight_function": "return 1"
}
]
}
}
Geographical Location
Routing on geographical location is done in a similar fashion. Once again, we create location classifier(s) and associate them with a session group:
Geographical locations are provided by a MaxMind database.
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: geoip
Adding a 'geoip' element
classifier : {
name (default: ): allowed_location
type (default: geoip): ⏎
inverted (default: False): ⏎
continent (default: ): ⏎
country (default: ): ⏎
cities : [
city (default: ): stockholm
Add another 'city' element to array 'cities'? [y/N]: ⏎
]
asn (default: ): ⏎
}
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
Generated config:
{
"classifiers": [
{
"name": "allowed_location",
"type": "geoip",
"inverted": false,
"continent": "",
"country": "",
"cities": [
"stockholm"
],
"asn": ""
}
]
}
Merge and apply the config? [y/n]: y
"session_groups": [
{
"id": 1,
"name": "allowed_location",
"classifiers": [
[
{
"id": 1,
"inverted": false,
"name": "allowed_location_classifier",
"rule": {
"rule_type": "geoip_rule",
"source": "session/client_ip",
"continent": "",
"country": "",
"region": "",
"cities": ["stockholm"]
}
}
]
]
}
]
At least one of the optional fields continent
, country
, region
and
cities
is required. The classifier matches a request when all specified
fields do (for the cities
list, one matching member is sufficient). All
values support wildcard matching using asterisks and are case insensitive.
Note that each location classifier has to be in its own list. This is because classifiers within the same inner list all have to match for the entire list to be true, and that is impossible for classifiers that match against different locations. When the classifiers are in their own lists, it’s enough that one of them matches for the outer classifier list to also match.
Simple Example
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: split
Adding a 'split' element
rule : {
name (default: ): geographic_location_split_node
type (default: split): ⏎
rule (default: ): return in_session_group("allowed_location")
onMatch (default: ): allowed-location-host
onMiss (default: ): offload-host
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "geographic_location_split_node",
"type": "split",
"condition": "in_session_group('allowed_location')",
"onMatch": "allowed-location-host",
"onMiss": "offload-host"
}
]
}
Merge and apply the config? [y/n]: y
{
"content_server": {
"http_enable": true,
"http_port": 80,
"https_enable": true,
"https_port": 443
},
"session_groups": [
{
"id": 1,
"name": "allowed_location",
"classifiers": [
[
{
"id": 1,
"inverted": false,
"name": "allowed_location_classifier_sweden",
"rule": {
"rule_type": "geoip_rule",
"source": "session/client_ip",
"country": "Sweden"
}
}
],
[
{
"id": 1,
"inverted": false,
"name": "allowed_location_classifier_oslo",
"rule": {
"rule_type": "geoip_rule",
"source": "session/client_ip",
"country": "Norway",
"cities": ["Oslo"]
}
}
]
]
}
],
"cdns": [
{
"id": "basic-cdn",
"http_port": 80,
"https_port": 443,
"redirecting": false,
"manifest_availability_check": {
"enabled": false,
"session_group_ids": []
}
},
{
"id": "offload-cdn",
"http_port": 80,
"https_port": 443,
"redirecting": false,
"manifest_availability_check": {
"enabled": false,
"session_group_ids": []
}
}
],
"hosts": [
{
"id": "default-host",
"cdn_id": "basic-cdn",
"address_family": "ipv4",
"host": "cdn-host.example"
},
{
"id": "offload-host",
"cdn_id": "offload-cdn",
"address_family": "ipv4",
"host": "offload-host.example"
}
],
"routing": {
"id": "routing_table",
"member_order": "sequential",
"members": [
{
"id": "default-node",
"host_id": "default-host",
"weight_function": "return in_session_group('allowed_location')"
},
{
"id": "offload-node",
"host_id": "offload-host",
"weight_function": "return 1"
}
]
}
}
4.2 - Route on Subnet
This page describes how to write configuration for subnet-based routing. For configuration in general, see Configuration.
For details on the subnets API, see Subnets API.
Subnet-based routing can be done either by using session groups and classifiers or by using the subnets API, allowing large amounts of named subnets.
Session Group Classifiers
Routing on subnets can be done through a combination of session group classifiers and Lua script weight functions. Configuring session groups and classifiers based on subnets and using them in routing may look like:
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: ipranges
Adding a 'ipranges' element
classifier : {
name (default: ): ip_ranges_classifier
type (default: ipranges): ⏎
inverted (default: False): ⏎
ipranges : [
iprange (default: ): 10.0.0.0/8
Add another 'iprange' element to array 'ipranges'? [y/N]: y
iprange (default: ): 192.168.0.0/16
Add another 'iprange' element to array 'ipranges'? [y/N]: y
iprange (default: ): 2001:0db8:85a3::/128
Add another 'iprange' element to array 'ipranges'? [y/N]: ⏎
]
}
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
Generated config:
{
"classifiers": [
{
"name": "ip_ranges_classifier",
"type": "ipranges",
"inverted": false,
"ipranges": [
"10.0.0.0/8",
"192.168.0.0/16",
"2001:0db8:85a3::/128"
]
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.sessionGroups -w
Running wizard for resource 'sessionGroups'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
sessionGroups : [
sessionGroup : {
name (default: ): allowed_subnets
classifiers : [
classifier (default: ): ip_ranges_classifier
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
}
Add another 'sessionGroup' element to array 'sessionGroups'? [y/N]: ⏎
]
Generated config:
{
"sessionGroups": [
{
"name": "allowed_subnets",
"classifiers": [
"ip_ranges_classifier"
]
}
]
}
Merge and apply the config? [y/n]: y
{
"session_groups": [
{
"id": 1,
"name": "allowed_subnets",
"classifiers": [
[
{
"id": 1,
"inverted": false,
"name": "allowed_subnet_classifier",
"rule": {
"rule_type": "ip_ranges_rule",
"source": "session/client_ip",
"ip_ranges": ["10.0.0.0/8", "192.168.0.0/16", "2001:0db8:85a3::/128"]
}
}
]
]
}
]
}
The field ip_ranges
is a list of CIDR-notation strings, supporting both IPv4
and IPv6 format. Incoming requests have their IP tested against the listed
ranges. The classifier evaluates as true if any of the ranges in the
classifier’s list matches the IP address of the request.
These session groups can then be used in routing as follows:
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: split
Adding a 'split' element
rule : {
name (default: ): subnet_split_node
type (default: split): ⏎
rule (default: ): return in_session_group('allowed_subnets')
onMatch (default: ): allowed-host
onMiss (default: ): offload-host
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "subnet_split_node",
"type": "split",
"condition": "in_session_group('allowed_subnets')",
"onMatch": "allowed-host",
"onMiss": "offload-host"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.entrypoint subnet_split_node
services.routing.entrypoint = 'subnet_split_node'
{
"content_server": {
"http_enable": true,
"http_port": 80,
"https_enable": true,
"https_port": 443
},
"cdns": [
{
"id": "allowed-cdn",
"http_port": 80,
"https_port": 443,
"redirecting": false,
"manifest_availability_check": {
"enabled": false,
"session_group_ids": []
}
},
{
"id": "offload-cdn",
"http_port": 80,
"https_port": 443,
"redirecting": false,
"manifest_availability_check": {
"enabled": false,
"session_group_ids": []
}
}
],
"hosts": [
{
"id": "allowed-host",
"cdn_id": "allowed-cdn",
"host": "allowed-host.example"
},
{
"id": "offload-host",
"cdn_id": "offload-cdn",
"host": "offload-host.example"
}
],
"routing": {
"id": "routing_table",
"member_order": "sequential",
"members": [
{
"id": "allowed-node",
"host_id": "allowed-host",
"weight_function": "return in_session_group('allowed_subnets')"
},
{
"id": "offload-node",
"host_id": "offload-host",
"weight_function": "return 1"
}
]
},
"session_groups": [
{
"id": 1,
"name": "allowed_subnets",
"classifiers": [
[
{
"id": 1,
"inverted": false,
"name": "allowed_subnet_classifier",
"rule": {
"rule_type": "ip_ranges_rule",
"source": "session/client_ip",
"ip_ranges": ["10.0.0.0/8", "192.168.0.0/16", "2001:0db8:85a3::/128"]
}
}
]
]
}
]
}
Named Subnets
Named subnets are injected into the router in the form of JSON payloads to the Subnets API. Once the subnet data has been fed to the router, Lua functions can access the subnet name associated with an incoming request and use the result when it performs routing.
Note that confcli
cannot be used to inject named subnets into the router.
Assume we have injected the following subnet configuration into the router:
{
"10.0.0.0/8": "test_net_4",
"192.168.0.0/16": "test_net_4",
"2001:0db8:85a3::/128": "test_net_6"
}
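As a sketch of how such a payload could be injected, the JSON above can be posted to the router's Subnets API with curl. The endpoint path and HTTP method below are placeholders and assumptions; refer to the Subnets API documentation for the actual values, and take the X-API-Key value from /opt/edgeware/acd/router/cache/rest-api-key.json on the router server.
curl -k -X PUT https://acd-host:5001/<subnets-api-path> \
  -H 'X-API-Key: <REST_API_KEY>' \
  -H 'Content-Type: application/json' \
  --data @subnets.json
Here subnets.json is a file containing the mapping shown above.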
A router configuration using this subnet configuration can then be constructed as:
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: split
Adding a 'split' element
rule : {
name (default: ): subnet_split_node
type (default: split): ⏎
rule (default: ): in_subnet('test_net_4')
onMatch (default: ): allowed-host-4
onMiss (default: ): offload-host
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "subnet_split_node",
"type": "split",
"condition": "in_subnet('test_net_4')",
"onMatch": "allowed-host-4",
"onMiss": "offload-host"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.entrypoint subnet_split_node
services.routing.entrypoint = 'subnet_split_node'
{
"content_server": {
"http_enable": true,
"http_port": 80,
"https_enable": true,
"https_port": 443
},
"cdns": [
{
"id": "allowed-cdn",
"http_port": 80,
"https_port": 443,
"redirecting": false,
"manifest_availability_check": {
"enabled": false,
"session_group_ids": []
}
},
{
"id": "offload-cdn",
"http_port": 80,
"https_port": 443,
"redirecting": false,
"manifest_availability_check": {
"enabled": false,
"session_group_ids": []
}
}
],
"hosts": [
{
"id": "allowed-host-4",
"cdn_id": "allowed-cdn",
"address_family": "ipv4",
"host": "allowed-host4.example"
},
{
"id": "allowed-host-6",
"cdn_id": "allowed-cdn",
"address_family": "ipv6",
"host": "allowed-host6.example"
},
{
"id": "offload-host",
"cdn_id": "offload-cdn",
"address_family": "ipv4",
"host": "offload-host.example"
}
],
"routing": {
"id": "routing_table",
"member_order": "sequential",
"members": [
{
"id": "allowed-node-4",
"host_id": "allowed-host-4",
"weight_function": "return in_subnet('test_net_4')"
},
{
"id": "allowed-node-6",
"host_id": "allowed-host-6",
"weight_function": "return in_subnet('test_net_6')"
},
{
"id": "offload-node",
"host_id": "offload-host",
"weight_function": "return 1"
}
]
}
}
Note the absence of session groups in this configuration, which instead relies solely on the named subnet API to make routing decisions based on subnets.
4.3 - Route on Content Type
This page describes how to write configuration for content-based routing. For configuration in general, see Configuration.
Two ways to route on content type will be demonstrated: content path
based
and hostname
based using session groups and classifiers. For more details on
session groups and classifiers, see
Session Groups and Classification.
Content Path
Routing on content path is done through a combination of session group
classifiers and Lua script weight functions. Two suitable classifiers for
content path matching are string_match_rule
and regex_rule
.
The pattern is a string matching the path segment of a request URL, including
the file name. When selecting the string_match_rule
type, use asterisks for
wildcard matching. When selecting the regex_rule
type, follow the C++11 std::regex
ECMAScript
syntax to make valid patterns. Make sure to write patterns that match the
entire path segment, not just part of it.
Note that each content classifier normally has to be in its own list. This is because classifiers within the same inner list all have to match for the entire list to be true, and that is impossible for classifiers that match against different content paths. When the classifiers are in their own lists, it’s enough that one of them matches for the outer classifier list to also match.
Simple Example
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: contentUrlPath
Adding a 'contentUrlPath' element
classifier : {
name (default: ): live_content_classifier
type (default: contentUrlPath): ⏎
inverted (default: False): ⏎
patternType (default: stringMatch): ⏎
pattern (default: ): *live_tv*
}
Add another 'classifier' element to array 'classifiers'? [y/N]: y
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: contentUrlPath
Adding a 'contentUrlPath' element
classifier : {
name (default: ): news_content_classifier
type (default: contentUrlPath): ⏎
inverted (default: False): ⏎
patternType (default: stringMatch): regex
pattern (default: ): /([^/]+/)?news_reports_\\d+/.*
}
Add another 'classifier' element to array 'classifiers'? [y/N]: y
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: contentUrlPath
Adding a 'contentUrlPath' element
classifier : {
name (default: ): hls_content_classifier
type (default: contentUrlPath): ⏎
inverted (default: False): ⏎
patternType (default: stringMatch): ⏎
pattern (default: ): *.m3u8
}
Add another 'classifier' element to array 'classifiers'? [y/N]: n
]
Generated config:
{
"classifiers": [
{
"name": "live_content_classifier",
"type": "contentUrlPath",
"inverted": false,
"patternType": "stringMatch",
"pattern": "*live_tv*"
},
{
"name": "news_content_classifier",
"type": "contentUrlPath",
"inverted": false,
"patternType": "regex",
"pattern": "/([^/]+/)?news_reports_\\\\d+/.*"
},
{
"name": "hls_content_classifier",
"type": "contentUrlPath",
"inverted": false,
"patternType": "stringMatch",
"pattern": "*.m3u8"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.sessionGroups -w
Running wizard for resource 'sessionGroups'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
sessionGroups : [
sessionGroup : {
name (default: ): live_content
classifiers : [
classifier (default: ): live_content_classifier
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
}
Add another 'sessionGroup' element to array 'sessionGroups'? [y/N]: y
sessionGroup : {
name (default: ): news_content
classifiers : [
classifier (default: ): news_content_classifier
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
}
Add another 'sessionGroup' element to array 'sessionGroups'? [y/N]: y
sessionGroup : {
name (default: ): hls_content
classifiers : [
classifier (default: ): hls_content_classifier
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
}
Add another 'sessionGroup' element to array 'sessionGroups'? [y/N]: ⏎
]
Generated config:
{
"sessionGroups": [
{
"name": "live_content",
"classifiers": [
"live_content_classifier"
]
},
{
"name": "news_content",
"classifiers": [
"news_content_classifier"
]
},
{
"name": "hls_content",
"classifiers": [
"hls_content_classifier"
]
}
]
}
Merge and apply the config? [y/n]: y
{
"content_server": {
"http_enable": true,
"http_port": 80,
"https_enable": true,
"https_port": 443
},
"session_groups": [
{
"id": 1,
"name": "live_content",
"classifiers": [
[
{
"id": 1,
"inverted": false,
"name": "live_content_classifier",
"rule": {
"rule_type": "string_match_rule",
"source": "session/content_url_path",
"pattern": "*live_tv*"
}
}
],
[
{
"id": 2,
"inverted": false,
"name": "news_content_classifier",
"rule": {
"rule_type": "regex_rule",
"source": "session/content_url_path",
"pattern": "/([^/]+/)?news_reports_\\d+/.*"
}
}
]
]
},
{
"id": 2,
"name": "hls_content",
"classifiers": [
[
{
"id": 1,
"inverted": false,
"name": "hls_content_classifier",
"rule": {
"rule_type": "string_match_rule",
"source": "session/content_url_path",
"pattern": "*.m3u8"
}
}
]
]
}
],
"cdns": [
{
"id": "live-cdn",
"http_port": 80,
"https_port": 443,
"redirecting": false,
"manifest_availability_check": {
"enabled": false,
"session_group_ids": []
}
},
{
"id": "vod-hls-cdn",
"http_port": 80,
"https_port": 443,
"redirecting": false,
"manifest_availability_check": {
"enabled": false,
"session_group_ids": []
}
},
{
"id": "vod-cdn",
"http_port": 80,
"https_port": 443,
"redirecting": false,
"manifest_availability_check": {
"enabled": false,
"session_group_ids": []
}
}
],
"hosts": [
{
"id": "live-host",
"cdn_id": "live-cdn",
"address_family": "ipv4",
"host": "live-host.example"
},
{
"id": "vod-hls-host",
"cdn_id": "vod-hls-cdn",
"address_family": "ipv4",
"host": "vod-hls-host.example"
},
{
"id": "vod-host",
"cdn_id": "vod-cdn",
"address_family": "ipv4",
"host": "vod-host.example"
}
],
"routing": {
"id": "routing_table",
"member_order": "sequential",
"members": [
{
"id": "live-node",
"host_id": "live-host",
"weight_function": "return session_groups.live_content and 1 or 0"
},
{
"id": "vod-hls-node",
"host_id": "vod-hls-host",
"weight_function": "return session_groups.hls_content and 1 or 0"
},
{
"id": "vod-node",
"host_id": "vod-host",
"weight_function": "return 1"
}
]
}
}
Hostname
In cases where there are separate domain or subdomain names for different content types, it’s simple to make classifiers that filter on those names.
The pattern is a string matching the hostname segment of a request URL.
When selecting the string_match_rule
type, use
asterisks for wildcard matching. When selecting the regex_rule
type, follow the
C++11 std::regex
ECMAScript
syntax to make valid patterns. Make sure to write patterns that match the
entire hostname segment, not just part of it.
Note that each hostname classifier has to be in its own list. This is because classifiers within the same inner list all have to match for the entire list to be true, and that is impossible for classifiers that match against different hostnames. When the classifiers are in their own lists, it’s enough that one of them matches for the outer classifier list to also match.
Simple example
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: hostName
Adding a 'hostName' element
classifier : {
name (default: ): live_content_classifier
type (default: hostName): ⏎
inverted (default: False): ⏎
patternType (default: stringMatch): ⏎
pattern (default: ): live.example.com
}
Add another 'classifier' element to array 'classifiers'? [y/N]: y
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: hostName
Adding a 'hostName' element
classifier : {
name (default: ): news_content_classifier
type (default: hostName): ⏎
inverted (default: False): ⏎
patternType (default: stringMatch): regex
pattern (default: ): /([^\\.]+/)\\.news_reports_\\d+/\\.example\\.com
}
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
Generated config:
{
"classifiers": [
{
"name": "live_content_classifier",
"type": "hostName",
"inverted": false,
"patternType": "stringMatch",
"pattern": "live.example.com"
},
{
"name": "news_content_classifier",
"type": "hostName",
"inverted": false,
"patternType": "regex",
"pattern": "/([^\\\\.]+/)\\\\.news_reports_\\\\d+/\\\\.example\\\\.com"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.sessionGroups -w
Running wizard for resource 'sessionGroups'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
sessionGroups : [
sessionGroup : {
name (default: ): live_content
classifiers : [
classifier (default: ): live_content_classifier
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
}
Add another 'sessionGroup' element to array 'sessionGroups'? [y/N]: y
sessionGroup : {
name (default: ): news_content
classifiers : [
classifier (default: ): news_content_classifier
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
}
Add another 'sessionGroup' element to array 'sessionGroups'? [y/N]: y
sessionGroup : {
name (default: ): hls_content
classifiers : [
classifier (default: ): hls_content_classifier
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
}
Add another 'sessionGroup' element to array 'sessionGroups'? [y/N]: ⏎
]
Generated config:
{
"sessionGroups": [
{
"name": "live_content",
"classifiers": [
"live_content_classifier"
]
},
{
"name": "news_content",
"classifiers": [
"news_content_classifier"
]
},
{
"name": "hls_content",
"classifiers": [
"hls_content_classifier"
]
}
]
}
{
"content_server": {
"http_enable": true,
"http_port": 80,
"https_enable": true,
"https_port": 443
},
"session_groups": [
{
"id": 1,
"name": "live_content",
"classifiers": [
[
{
"id": 1,
"inverted": false,
"name": "live_content_classifier",
"rule": {
"rule_type": "string_match_rule",
"source": "session/hostname",
"pattern": "live.example.com"
}
}
],
[
{
"id": 2,
"inverted": false,
"name": "news_content_classifier",
"rule": {
"rule_type": "regex_rule",
"source": "session/hostname",
"pattern": "/([^\\.]+/)\\.news_reports_\\d+/\\.example\\.com"
}
}
]
]
}
],
"cdns": [
{
"id": "live-cdn",
"http_port": 80,
"https_port": 443,
"redirecting": false,
"manifest_availability_check": {
"enabled": false,
"session_group_ids": []
}
},
{
"id": "vod-cdn",
"http_port": 80,
"https_port": 443,
"redirecting": false,
"manifest_availability_check": {
"enabled": false,
"session_group_ids": []
}
}
],
"hosts": [
{
"id": "live-host",
"cdn_id": "live-cdn",
"address_family": "ipv4",
"host": "live-host.example"
},
{
"id": "vod-host",
"cdn_id": "vod-cdn",
"address_family": "ipv4",
"host": "vod-host.example"
}
],
"routing": {
"id": "routing_table",
"member_order": "sequential",
"members": [
{
"id": "live-node",
"host_id": "live-host",
"weight_function": "return session_groups.live_content and 1 or 0"
},
{
"id": "vod-node",
"host_id": "vod-host",
"weight_function": "return 1"
}
]
}
}
Using the session groups in confcli
The constructed session groups can be used in routing by creating a rule that references them:
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: firstMatch
Adding a 'firstMatch' element
rule : {
name (default: ): classifier_routing_rule
type (default: firstMatch): ⏎
targets : [
target : {
onMatch (default: ): live-host
rule (default: ): in_all_session_groups('live_content', 'news_content')
}
Add another 'target' element to array 'targets'? [y/N]: y
target : {
onMatch (default: ): vod-hls-host
rule (default: ): in_session_group('hls_content')
}
Add another 'target' element to array 'targets'? [y/N]: y
target : {
onMatch (default: ): offload-host
rule (default: ): always()
}
Add another 'target' element to array 'targets'? [y/N]: ⏎
]
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "classifier_routing_rule",
"type": "firstMatch",
"targets": [
{
"onMatch": "live-host",
"condition": "in_all_session_groups('live_content', 'news_content')"
},
{
"onMatch": "vod-hls-host",
"condition": "in_session_group('hls_content')"
},
{
"onMatch": "offload-host",
"condition": "always()"
}
]
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.entrypoint classifier_routing_rule
services.routing.entrypoint = classifier_routing_rule
4.4 - Route on Content Popularity
This page describes how to write configuration for content popularity routing. For configuration in general, see Configuration.
For more details on content popularity tuning and routing, see Content Popularity.
The router tracks content popularity, which can be utilized for routing. Using the configuration tool confcli, this page demonstrates how to create content popularity based routing configurations for hierarchical and multi-edge scenarios.
Hierarchical
Consider a CDN setup with edge streamers that have cached popular content and a central streamer where all content is available. You can decide where to route clients based on the requested content's popularity.
Assuming that appropriate streamer hosts have already been configured, the configuration will look like this:
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: contentPopularity
Adding a 'contentPopularity' element
rule : {
name (default: ): contentPopularityHierarchy
type (default: contentPopularity): ⏎
popularityThreshold (default: 10): 2000
onPopular (default: ): edgeStreamer
onUnpopular (default: ): centralStreamer
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "contentPopularityHierarchy",
"type": "contentPopularity",
"popularityThreshold": 2000.0,
"onPopular": "edgeStreamer",
"onUnpopular": "centralStreamer"
}
]
}
Merge and apply the config? [y/n]: y
where name is the name of the rule, onPopular is the target to route to if the content is popular and onUnpopular is the target used otherwise. Lastly, popularityThreshold is the popularity ranking threshold below which content is considered popular. Configuring popularityThreshold = 2001 means that the top 2000 most popular assets will be routed to edgeStreamer.
The rule contentPopularityHierarchy can then be used to construct your routing tree.
Multi-edge
Consider a CDN setup with three edge streamers, edge1, edge2 and edge3, configured to cache very popular content, mildly popular content and unpopular content respectively. Assuming that appropriate streamer hosts have already been configured, the following configuration can be used for correctly routing a request in this scenario:
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: contentPopularity
Adding a 'contentPopularity' element
rule : {
name (default: ): mildlyAndUnpopular
type (default: contentPopularity): ⏎
popularityThreshold (default: 10): 2001
onPopular (default: ): edge2
onUnpopular (default: ): edge3
}
Add another 'rule' element to array 'rules'? [y/N]: y
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: contentPopularity
Adding a 'contentPopularity' element
rule : {
name (default: ): multiLevelPopularity
type (default: contentPopularity): ⏎
popularityThreshold (default: 10): 101
onPopular (default: ): edge1
onUnpopular (default: ): mildlyAndUnpopular
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "mildlyAndUnpopular",
"type": "contentPopularity",
"popularityThreshold": 2001.0,
"onPopular": "edge2",
"onUnpopular": "edge3"
},
{
"name": "multiLevelPopularity",
"type": "contentPopularity",
"popularityThreshold": 101.0,
"onPopular": "edge1",
"onUnpopular": "mildlyAndUnpopular"
}
]
}
Merge and apply the config? [y/n]: y
By configuring a rule to route to multiLevelPopularity, this configuration will route requests for the top 100 most popular content items to edge1, popularity rankings 101-2000 to edge2 and popularity rankings above 2000 to edge3. Since the contentPopularity rule offers a binary routing choice between onPopular and onUnpopular, multiple contentPopularity rules can be combined to construct multi-level content popularity based routing configurations.
Application
To use either of these configurations in your installation, you'll need to configure either a rule or the entrypoint to route to multiLevelPopularity or contentPopularityHierarchy.
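For example, assuming the multi-edge configuration should handle all incoming requests, the entrypoint can be pointed directly at the multiLevelPopularity rule:
$ confcli services.routing.entrypoint multiLevelPopularity
services.routing.entrypoint = 'multiLevelPopularity'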
4.5 - Route on Selection Input
This page describes how to write configuration for routing on bandwidth usage or the number of ongoing sessions in Edgeware CDNs by using the selection input API. For configuration in general, see Configuration.
For more details on the selection input API, see Selection Input.
Selection Input
The selection input API allows you to inject custom data into the router that can be used during routing. For this use case, we will use CDN bandwidth and the number of ongoing sessions to demonstrate the capabilities of the selection input API.
Imagine that we have a monitor constantly polling the bandwidth and ongoing sessions of hosts in a CDN. These values are injected into the selection input API as JSON packets:
{
"edge-host-available-bps": 1000000000,
"edge-host-ongoing-sessions": 300
}
These keys will end up in the router Lua context as “selection inputs”, arbitrary values or JSON objects that can be used to calculate routing weights.
The selection input variable edge-host-available-bps will be available through either selection_input.edge-host-available-bps or si('edge-host-available-bps'). The function si(si_var) fetches the selection input value if it exists, otherwise it returns 0.
However, when comparing selection input variables with numbers, there are a number of built-in Lua functions that will simplify things, such as gt() and lt(). For a complete list and more details on these functions, see Built-in Functions.
Simple Example
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: firstMatch
Adding a 'firstMatch' element
rule : {
name (default: ): selection-input-routing
type (default: firstMatch): ⏎
targets : [
target : {
onMatch (default: ): edge-host
rule (default: ): lt('edge_host_ongoing_sessions', 1000)
}
Add another 'target' element to array 'targets'? [y/N]: y
target : {
onMatch (default: ): edge-host
rule (default: ): gt('edge_host_available_bps', 5000000)
}
Add another 'target' element to array 'targets'? [y/N]: y
target : {
onMatch (default: ): offload-host
rule (default: ): always()
}
Add another 'target' element to array 'targets'? [y/N]: ⏎
]
onMiss (default: ): offload-host
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "selection-input-routing",
"type": "firstMatch",
"targets": [
{
"onMatch": "edge-host",
"condition": "lt('edge_host_ongoing_sessions', 1000)"
},
{
"onMatch": "edge-host",
"condition": "gt('edge_host_available_bps', 5000000)"
},
{
"onMatch": "offload-host",
"condition": "always()"
}
],
"onMiss": "offload-host"
}
]
}
Merge and apply the config? [y/n]: y
$ confcli services.routing.entrypoint selection-input-routing
services.routing.entrypoint = 'selection-input-routing'
{
"content_server": {
"http_enable": true,
"http_port": 80,
"https_enable": true,
"https_port": 443
},
"cdns": [
{
"id": "edge-cdn",
"http_port": 80,
"https_port": 443,
"redirecting": false,
"manifest_availability_check": {
"enabled": false,
"session_group_ids": []
}
},
{
"id": "offload-cdn",
"http_port": 80,
"https_port": 443,
"redirecting": false,
"manifest_availability_check": {
"enabled": false,
"session_group_ids": []
}
}
],
"hosts": [
{
"id": "edge-host",
"cdn_id": "edge-cdn",
"host": "edge-host.example"
},
{
"id": "offload-host",
"cdn_id": "offload-cdn",
"host": "offload-host.example"
}
],
"routing": {
"id": "routing-table",
"member_order": "sequential",
"members": [
{
"id": "edge-host-sessions-available",
"host_id": "edge-host",
"weight_function": "return (lt('edge_host_ongoing_sessions', 1000) ~= 0) and 1 or 0"
},
{
"id": "edge-host-bandwidth-available",
"host_id": "edge-host",
"weight_function": "return (gt('edge_host_available_bps', 5000000) ~= 0) and 1 or 0"
},
{
"id": "offload",
"host_id": "offload-host",
"weight_function": "return 1"
}
]
}
}
4.6 - Adapt to multi-CDN
This page describes how to write configuration for setups that require the URL sent back to the client to be rewritten based on routing results, for example by adding a token to the path or as a query string parameter. For configuration in general, see Configuration.
In order to change the redirect location URL, we need to add a response translation function to the confd configuration. By default, all translation functions are empty:
$ confcli services.routing.translationFunctions
{
"translationFunctions": {
"request": "",
"response": ""
}
}
For full documentation of the response translation function, please see Lua Hooks.
Simple example
Translation functions tend to become large, which makes it hard to write them inside a single string. Instead, the function can be written in a separate .lua script file that is uploaded to the router and referenced from the confd configuration.
The following example will take this approach, adding a script called "rewrite.lua" to the configured custom Lua folder /tmp/custom_lua on the router. For more information see the Configuration JSON Overview.
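For example, assuming the script file has been copied to the router host and the custom Lua folder is /tmp/custom_lua as above, installing it is just a matter of placing the file in that folder (adjust the source path to wherever the script was written):
$ cp rewrite.lua /tmp/custom_lua/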
Lua script
The following script assumes the existence of the function parse_url, which returns an object with URL segments that can be altered and then combined back into a URL string using the tostring(obj) function.
Beware that the router does not validate the URL in the Location header; it is up to the Lua script writer to ensure that only valid URLs are returned.
function response_rewrite()
-- The live server has the token as part of the path, added first before the
-- path from the request.
local location = parse_url(response_headers.location)
if location.host == "live-host.example" then
location.path = "/live-prefix" .. location.path
return HTTPResponse(
{
Headers = {
{"location", tostring(location)}
}
}
)
end
-- The VOD server expects the token as a query string rather than as part
-- of the path segment.
-- Since VOD can be used for offload, check that session_groups.is_vod
-- is set as well.
if location.host == "vod-host.example" and session_groups.is_vod then
location.query["token"] = md5sum(secret_key .. request.path)
return HTTPResponse(
{
Headers = {
{"location", tostring(location)}
}
}
)
end
-- If the location host matched neither the live host nor the VOD host with
-- the is_vod session group set, return a nil object to indicate that we
-- don't want to change anything in the response.
return nil
end
Configuration
To configure the response translation function to call the custom Lua function response_rewrite() using confcli, run the command:
$ confcli services.routing.translationFunctions.response 'return response_rewrite()'
services.routing.translationFunctions.response = 'return response_rewrite()'
An example confd configuration utilizing this translation function based on session groups would look like this:
$ confcli services.routing
{
"translationFunctions": {
"request": "",
"response": "return response_rewrite()"
},
"sessionGroups": [
{
"name": "is_live",
"classifiers": [
"is_live_classifier"
]
},
{
"name": "is_vod",
"classifiers": [
"is_vod_classifier"
]
}
],
"classifiers": [
{
"name": "is_live_classifier",
"type": "contentUrlPath",
"inverted": false,
"patternType": "stringMatch",
"pattern": "*/live/*"
},
{
"name": "is_vod_classifier",
"type": "contentUrlPath",
"inverted": false,
"patternType": "stringMatch",
"pattern": "*/vod/*"
}
],
"hostGroups": [
{
"name": "live-cdn",
"type": "host",
"httpPort": 80,
"httpsPort": 443,
"hosts": [
{
"name": "live-host",
"hostname": "live-host.example",
"ipv6_address": ""
}
]
},
{
"name": "vod-cdn",
"type": "host",
"httpPort": 80,
"httpsPort": 443,
"hosts": [
{
"name": "vod-host",
"hostname": "vod-host.example",
"ipv6_address": ""
}
]
}
],
"rules": [
{
"name": "multiCdnExample",
"type": "firstMatch",
"targets": [
{
"condition": "in_session_group('is_live')",
"onMatch": "live-host"
},
{
"condition": "in_session_group('is_vod')",
"onMatch": "vod-host"
},
// Use the vod-host as offload for any content, but don't set any
// token for requests not classified as vod.
{
"condition": "always()",
"onMatch": "vod-host"
}
]
}
],
"entrypoint": "multiCdnExample",
"applyConfig": true
}
4.7 - How to use ACD router for EDNS routing
This page describes how to write configuration for EDNS routing. For configuration in general, see Configuration.
EDNS routing
In your CDN architecture, you might utilize EDNS routing to redirect clients. Whether it’s your own CDN or a third party CDN, the ESB3024 Router supports creating EDNS requests and sending them to any DNS server.
Configuration
To configure EDNS routing, you first need to create a host group of type dns. Using confcli:
[user@acd-router ~]# confcli services.routing.hostGroups -w
Running wizard for resource 'hostGroups'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
hostGroups : [
hostGroup can be one of
1: dns
2: host
3: redirecting
Choose element index or name: dns
Adding a 'dns' element
hostGroup : {
name (default: ): edns-cdn
type (default: dns): ⏎
hosts : [
host : {
name (default: ): edns-host
hostname (default: ): ednshost.com
ipv6_address (default: ): ⏎
}
Add another 'host' element to array 'hosts'? [y/N]: ⏎
]
}
Add another 'hostGroup' element to array 'hostGroups'? [y/N]: ⏎
]
Generated config:
{
"hostGroups": [
{
"name": "edns-cdn",
"type": "dns",
"hosts": [
{
"name": "edns-host",
"hostname": "ednshost.com",
"ipv6_address": ""
}
]
}
]
}
Merge and apply the config? [y/n]: y
Note that the hostname is the name of the DNS server that will receive the EDNS request. You can now use the DNS host when creating the routing tree, e.g. in a random type rule:
[user@router ~]# confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: random
Adding a 'random' element
rule : {
name (default: ): random-rule
type (default: random): ⏎
targets : [
target (default: ): edns-host
Add another 'target' element to array 'targets'? [y/N]: ⏎
]
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "random-rule",
"type": "random",
"targets": [
"edns-host"
]
}
]
}
Merge and apply the config? [y/n]: y
[user@router ~]# confcli services.routing.entrypoint random-rule
services.routing.entrypoint = 'random-rule'
Or simply use it as the entrypoint:
confcli services.routing.entrypoint edns-host
4.8 - How to use ESB3024 Router with Edgeware CDN Request Router
This page describes how to write configuration for using the ESB3024 Router in conjunction with the ESB3008 Request Router. The ESB3024 Router can request a host from an ESB3008 Request Router and forward the response to the client, perform load balancing between several Request Routers, and handle fallback to an external CDN in case no Request Router is able to service the request.
Simple Example
# The tuning parameter `target.requestAttempts` controls the number of attempts
# made to find a host. In this example we want to fall back to the external CDN
# in case both Request Routers are down, so we set it to at least 3.
$ confcli services.routing.tuning.target.requestAttempts 3
services.routing.tuning.target.requestAttempts = 3
# Create a classifier to filter incoming requests from Sweden.
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: geoip
Adding a 'geoip' element
classifier : {
name (default: ): sweden_classifier
type (default: geoip): ⏎
inverted (default: False): ⏎
continent (default: ): ⏎
country (default: ): sweden
cities : [
city (default: ): ⏎
Add another 'city' element to array 'cities'? [y/N]: ⏎
]
asn (default: ): ⏎
}
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
Generated config:
{
"classifiers": [
{
"name": "sweden_classifier",
"type": "geoip",
"inverted": false,
"continent": "",
"country": "sweden",
"cities": [
""
],
"asn": ""
}
]
}
Merge and apply the config? [y/n]: y
# Create a session group for the Swedish requests, to use in routing rules
$ confcli services.routing.sessionGroups -w
Running wizard for resource 'sessionGroups'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
sessionGroups : [
sessionGroup : {
name (default: ): sweden
classifiers : [
classifier (default: ): sweden_classifier
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
}
Add another 'sessionGroup' element to array 'sessionGroups'? [y/N]: ⏎
]
Generated config:
{
"sessionGroups": [
{
"name": "sweden",
"classifiers": [
"sweden_classifier"
]
}
]
}
Merge and apply the config? [y/n]: y
# The internal CDN uses Edgeware Request Routers that return a redirect
# location. The router takes the content from the client request,
# and makes its own request to the Request Router using that same
# content. The returned location is then forwarded to the client.
#
# In case the Request Router responds with an error code, or times out,
# the router will continue traversing the routing tree until a
# successful route is found or the entire tree has been evaluated.
$ confcli services.routing.hostGroups -w
Running wizard for resource 'hostGroups'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
hostGroups : [
hostGroup can be one of
1: dns
2: host
3: redirecting
Choose element index or name: redirecting
Adding a 'redirecting' element
hostGroup : {
name (default: ): internal-cdn
type (default: redirecting): ⏎
httpPort (default: 80): ⏎
httpsPort (default: 443): ⏎
hosts : [
host : {
name (default: ): internal-host-1
hostname (default: ): internal-host-1.example.com
ipv6_address (default: ): ⏎
}
Add another 'host' element to array 'hosts'? [y/N]: y
host : {
name (default: ): internal-host-2
hostname (default: ): internal-host-2.example.com
ipv6_address (default: ): ⏎
}
Add another 'host' element to array 'hosts'? [y/N]: ⏎
]
}
Add another 'hostGroup' element to array 'hostGroups'? [y/N]: ⏎
]
Generated config:
{
"hostGroups": [
{
"name": "internal-cdn",
"type": "redirecting",
"httpPort": 80,
"httpsPort": 443,
"hosts": [
{
"name": "internal-host-1",
"hostname": "internal-host-1.example.com",
"ipv6_address": ""
},
{
"name": "internal-host-2",
"hostname": "internal-host-2.example.com",
"ipv6_address": ""
}
]
}
]
}
Merge and apply the config? [y/n]: y
# The offload CDN is non-redirecting, meaning that the router does not
# make any request to its associated hosts to ask for a redirect location
# to forward to the clients. Instead it simply returns a location with
# one of the CDN's hosts substituted for the router's host in the
# client request, possibly with some kind of response translation to add
# e.g. tokens or path prefixes.
$ confcli services.routing.hostGroups -w
Running wizard for resource 'hostGroups'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
hostGroups : [
hostGroup can be one of
1: dns
2: host
3: redirecting
Choose element index or name: host
Adding a 'host' element
hostGroup : {
name (default: ): offload-cdn
type (default: host): ⏎
httpPort (default: 80): ⏎
httpsPort (default: 443): ⏎
hosts : [
host : {
name (default: ): offload-host
hostname (default: ): offload-host.example.com
ipv6_address (default: ): ⏎
}
Add another 'host' element to array 'hosts'? [y/N]: ⏎
]
}
Add another 'hostGroup' element to array 'hostGroups'? [y/N]: ⏎
]
Generated config:
{
"hostGroups": [
{
"name": "offload-cdn",
"type": "host",
"httpPort": 80,
"httpsPort": 443,
"hosts": [
{
"name": "offload-host",
"hostname": "offload-host.example.com",
"ipv6_address": ""
}
]
}
]
}
Merge and apply the config? [y/n]: y
# Begin by adding a load balancing routing rule between the two internal
# hosts. Let's make internal-host-2 have twice the capacity of
# internal-host-1 by using a weighted rule.
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: weighted
Adding a 'weighted' element
rule : {
name (default: ): balancer
type (default: weighted): ⏎
targets : [
target : {
target (default: ): internal-host-1
weight (default: 100): 1
rule (default: always()): always()
}
Add another 'target' element to array 'targets'? [y/N]: y
target : {
target (default: ): internal-host-2
weight (default: 100): 2
rule (default: always()): always()
}
Add another 'target' element to array 'targets'? [y/N]: ⏎
]
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "balancer",
"type": "weighted",
"targets": [
{
"target": "internal-host-1",
"weight": 1,
"rule": "always()"
},
{
"target": "internal-host-2",
"weight": 2,
"rule": "always()"
}
]
}
]
}
Merge and apply the config? [y/n]: y
# Make an offload rule to route any traffic not in Sweden to the
# offload CDN.
$ confcli services.routing.rules -w
Running wizard for resource 'rules'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
rules : [
rule can be one of
1: allow
2: consistentHashing
3: contentPopularity
4: deny
5: firstMatch
6: random
7: rawGroup
8: rawHost
9: split
10: weighted
Choose element index or name: firstMatch
Adding a 'firstMatch' element
rule : {
name (default: ): offload
type (default: firstMatch): ⏎
targets : [
target : {
onMatch (default: ): offload-host
rule (default: ): not in_session_group('sweden')
}
Add another 'target' element to array 'targets'? [y/N]: y
target : {
onMatch (default: ): balancer
rule (default: ): in_session_group('sweden')
}
Add another 'target' element to array 'targets'? [y/N]: ⏎
]
onMiss (default: ): offload-host
}
Add another 'rule' element to array 'rules'? [y/N]: ⏎
]
Generated config:
{
"rules": [
{
"name": "offload",
"type": "firstMatch",
"targets": [
{
"onMatch": "offload-host",
"condition": "not in_session_group('sweden')"
},
{
"onMatch": "balancer",
"condition": "in_session_group('sweden')"
}
],
"onMiss": "offload-host"
}
]
}
Merge and apply the config? [y/n]: y
{
"tuning": {
// "target_request_attempts" controls the number of attempts to find a host.
// In this example we want to fall back to the external CDN in case both
// Request Routers are down, so we set it to 3.
"target_request_attempts": 3
},
"session_groups": [
{
"id": 1,
"name": "not_sweden",
"classifiers": [
[
{
"id": 1,
"inverted": true,
"name": "not_sweden_classifier",
"rule": {
"rule_type": "geoip_rule",
"source": "session/client_ip",
"country": "Sweden"
}
}
]
]
}
],
"cdns": [
{
"id": "internal-cdn",
"http_port": 80,
"https_port": 443,
// The internal CDN uses Edgeware Request Routers that return a redirect
// location. The router takes the content from the client request,
// and makes its own request to the Request Router using that same
// content. The returned location is then forwarded to the client.
//
// In case the Request Router responds with an error code, or times out,
// the router will continue traversing the routing tree until a
// successful route is found or the entire tree has been evaluated.
"redirecting": true,
"manifest_availability_check": {
"enabled": false,
"session_group_ids": []
}
},
{
"id": "offload-cdn",
"http_port": 80,
"https_port": 443,
// The offload CDN is non-redirecting, meaning that the router does not
// make any request to its associated hosts to ask for a redirect location
// to forward to the clients. Instead it simply returns a location with
// one of the CDN's hosts substituted for the router's host in the
// client request, possibly with some kind of response translation to add
// e.g. tokens or path prefixes.
"redirecting": false,
"manifest_availability_check": {
"enabled": false,
"session_group_ids": []
}
}
],
"hosts": [
{
"id": "internal-host-1",
"cdn_id": "internal-cdn",
"address_family": "ipv4",
"host": "internal-host-1.example"
},
{
"id": "internal-host-2",
"cdn_id": "internal-cdn",
"address_family": "ipv4",
"host": "internal-host-2.example"
},
{
"id": "offload-host",
"cdn_id": "offload-cdn",
"address_family": "ipv4",
"host": "offload-host.example"
}
],
"routing": {
"id": "routing_table",
// "sequential" - Go through the rules at this level one by one, and pick
// the first that returns a positive weight.
"member_order": "sequential",
"members": [
{
"id": "reject-node",
"host_id": "offload-host",
// Send filtered-out clients to an offload host.
"weight_function": "return session_groups.not_sweden and 1 or 0"
},
{
"id": "internal_routing",
// Return 1 to make sure this tree is evaluated at all.
"weight_function": "return 1",
// "weighted" - Evaluate all the rules at this level, and pick one of
// them randomly based on their respective weights.
//
// In this example host 1 has half the capacity of host 2, so we just
// randomly route twice as many clients to host 2.
"member_order": "weighted",
"members": [
{
"id": "internal-node-1",
"host_id": "internal-host-1",
// Half the weight of the other host -> half as likely to be picked.
"weight_function": "return 1"
},
{
"id": "internal-node-2",
"host_id": "internal-host-2",
// Twice the weight of the other -> twice as likely to be picked.
"weight_function": "return 2"
}
]
},
{
// Add a non-redirecting offload host as the last target in case all the
// internal redirecting hosts are unavailable or respond with some kind
// of error code.
"id": "offload-node",
"host_id": "offload-host",
"weight_function": "return 1"
}
]
}
}
4.9 - How to use ESB3024 Router with CoreDNS
CoreDNS can be configured to work as a DNS server in front of ESB3024 Router. In this mode of operation the client DNS request is terminated by CoreDNS which requests an optimal streaming location from the router.
Contact Edgeware for detailed information on how to set this up, if you don’t already have an Edgeware Support account please contact us here.
4.10 - How to use the Director as a replacement for Edgeware CDN Request Router
This guide outlines how to replicate Convoy’s ESB3008 HTTP Request Router using the Director. This is accomplished using the Convoy Bridge, which is an integration service designed to allow the router and an existing Convoy installation to work together seamlessly.
Following the steps outlined below enables the replacement of existing ESB3008 HTTP Request Router instances with the Director. As of the current documentation, the Director is not a direct substitute for the ESB3008 HTTP Request Router. However, most of the functionality is available, and in the majority of cases, the Director can be used without any loss of functionality.
The purpose of this guide is to detail the process of transitioning from an existing Convoy installation, which includes one or more instances of ESB3008 HTTP Request Router, to a configuration utilizing the Director. This transition allows the existing Convoy installation to continue its role in CDN provisioning, content management, and analytics. Simultaneously, the router assumes responsibility for managing HTTP(S) redirects directly from clients to selected streamers.
One functional difference between the ESB3008 HTTP Request Router and the Director is in how they communicate with the streamer to determine specific interface assignments. ESB3008 uses a proprietary protocol to communicate with the streamer to help determine the best interface for the client. For the Director, in order to work with both streamers and third-party CDNs, the proprietary protocol is not supported, and the router's rule engine must be used to accomplish the same result. For this reason, it is recommended to configure each streaming interface as a distinct host entry in the router's configuration. This allows the router to address each interface separately.
Prerequisites
This guide assumes both an existing Convoy installation and a partially configured instance of the Director. The router should have all the necessary routing configuration in place for hostGroups, sessionGroups and routing rules to direct traffic to the streaming interfaces of the streamers. It is also assumed that the proper firewall rules allow the required traffic between the router, the Convoy Bridge integration and the Convoy management nodes.
The minimum compatible version of Convoy which can be used with the analytics synchronization feature is Convoy 3.0.0. Previous versions may be used only if Convoy is not being used for analytics.
Firewall Configuration
Proper firewall configuration is required to allow both the router and Convoy to communicate through bidirectional connections established from the Convoy Bridge integration. The Convoy Bridge does not need to be enabled for each router instance; one instance of the Convoy Bridge can synchronize multiple instances of the router simultaneously. Only the router nodes with the Convoy Bridge enabled will need access through the firewall to establish outgoing connections to the Convoy management nodes, or whichever nodes are running MariaDB and Kafka. Additionally, if multiple router instances are to be synchronized by a single Convoy Bridge instance, the firewall must allow the Convoy Bridge to establish connections to each router instance on port 5001.
The following table outlines the necessary connections to be allowed through the firewall.
Source        | Destination       | Port     | Description
--------------|-------------------|----------|----------------------
Convoy Bridge | Convoy Management | 3306/TCP | Account Configuration
Convoy Bridge | Convoy Management | 9092/TCP | Analytics Data
Convoy Bridge | The router        | 5001/TCP | Rest API
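As an illustration only, assuming the hosts use firewalld, allowing the Convoy Bridge to reach a router instance's Rest API could look like the following on each router node (adapt to your firewall tooling and zones):
# Hypothetical firewalld commands, run on each router instance to be synchronized:
firewall-cmd --permanent --add-port=5001/tcp
firewall-cmd --reload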
Response Translation Functions
In order to have the router generate a redirect URL which is compatible with the format generated by the ESB3008 HTTP Request Router, a response translation function must be used to modify the redirect URL before it is returned to the client. A built-in function convoy_compatibility_response is available for this purpose. This function appends both the session-id and storage-prefix based on the account and distribution information in Convoy.
On the Director, set the response translation function as follows:
> confcli services.routing.translationFunctions.response "return convoy_compatibility_response(Headers, '')"
This function depends on the account synchronization feature of the Convoy Bridge to determine the correct storage prefix based on the incoming Host header value. It works by using the Host header to look up the storage prefix from the account configuration, and then modifying the redirect URL to prepend /session/<session-id>/<prefix>. The streamer can then use this information for both origin selection and session tracking.
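For illustration only, with made-up values for the session id and storage prefix, the rewrite performed by the function would look roughly like this:
# Redirect URL produced by the routing decision (hypothetical):
#   http://streamer.example.com/asset/index.m3u8
# Redirect URL after convoy_compatibility_response, assuming session id
# "sid-0001" and storage prefix "acme":
#   http://streamer.example.com/session/sid-0001/acme/asset/index.m3u8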
Enabling the Convoy Bridge
Before using the Convoy Bridge integration, a valid configuration must first be applied. All configuration for the Convoy Bridge is available in confd under the key integration.convoy.bridge. The configuration is divided into three main sections: the account synchronization feature, the Convoy analytics integration, and a list of additional router instances to synchronize.
By default, the Convoy Bridge will include the current local router instance in the synchronization process. The connection details are hard-wired into the container at startup, and do not need to be specified in the configuration. Because one Convoy Bridge can serve multiple instances of the router, it is not necessary to configure the Convoy Bridge on each node separately. However, if multiple Convoy Bridge instances are used, it is recommended to configure them so that one router is served by multiple instances of the Convoy Bridge. This provides high availability in a production environment.
As an example, if there are 4 router instances, A, B, C, and D, and 4 Convoy Bridge instances, configuring them such that Bridge 1 synchronizes A & B, Bridge 2 synchronizes B & C, Bridge 3 synchronizes C & D and Bridge 4 synchronizes D & A will provide fault tolerance in the event that one bridge is unavailable.
The account synchronization feature of the Convoy Bridge watches the MariaDB database on the Convoy management node for changes to the account and distribution configuration, and if the configuration differs from what is currently known to the router, the configuration will be updated. This feature is required both by the router (for use with the convoy_compatibility_response response translation function) and by the Convoy Bridge integration (for setting the correct account, storage, and distribution fields in the analytics integration feature).
The analytics integration feature monitors the router's Event API for newly established sessions, and produces session-record messages over the Kafka message broker for use by Convoy. This feature requires that Convoy is both licensed and configured for use with the analytics feature. The integration works by establishing a persistent connection to each router and listening for new session events. Due to the protocol used by the Event API to send the events, only one instance of the Convoy Bridge will receive each event, and only that instance will produce the session-record to the Kafka message brokers.
Synchronizing multiple routers with the same instance of the Convoy Bridge is performed by configuring the otherRouters section of the Convoy Bridge configuration. In addition to the local router, each otherRouter entry will have the same account and distribution information updated simultaneously, and will be used as an event source for the analytics integration. The otherRouters entries require the URL to the router's Rest API, an optional API key, and a flag to indicate whether SSL certificate validation should be enforced.
Enable the account synchronization feature by running the following command:
> confcli integration.convoy.bridge.accounts -w
Running wizard for resource 'accounts'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
accounts : {
enabled (default: False): true
dbUrl (default: mysql://user:pass@localhost:3306): mysql://convoy:password@convoy-mgmt1:3306
cmsUrl (default: http://localhost:5555): http://convoy-mgmt1:5555
pollInterval (default: 60):
}
Generated config:
{
"accounts": {
"enabled": true,
"dbUrl": "mysql://convoy:password@convoy-mgmt1:3306",
"cmsUrl": "http://convoy-mgmt1:5555",
"pollInterval": 60
}
}
Merge and apply the config? [y/n]:
This will result in the Convoy Bridge continuously polling the Convoy instance running on convoy-mgmt1 every 60 seconds for changes to the account configuration. Only when the account configuration differs from the current configuration on the router will it be updated with the change.
The pollInterval parameter can be used to control how frequently the Convoy Bridge will attempt to poll the database for changes. Setting this value too high will result in a longer delay between changes being made in Convoy and the changes being reflected in the router. Setting this value too low will result in unnecessary load on the Convoy database. The default value of 60 seconds should be sufficient for most use cases.
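If a different interval is needed, the value can also be set directly without the wizard, for example assuming a 30 second interval suits the deployment:
> confcli integration.convoy.bridge.accounts.pollInterval 30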
To enable the analytics integration feature, run the following command:
> confcli integration.convoy.bridge.analytics -w
Running wizard for resource 'analytics'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
analytics : {
enabled (default: False): true
brokers : [
broker (default: ): convoy-mgmt1:9092
Add another 'broker' element to array 'brokers'? [y/N]: y
broker (default: ): convoy-mgmt2:9092
Add another 'broker' element to array 'brokers'? [y/N]: y
broker (default: ): convoy-mgmt3:9092
Add another 'broker' element to array 'brokers'? [y/N]: n
]
batchInterval (default: 10):
maxBatchSize (default: 500):
}
Generated config:
{
"analytics": {
"enabled": true,
"brokers": [
"convoy-mgmt1:9092",
"convoy-mgmt2:9092",
"convoy-mgmt3:9092"
],
"batchInterval": 10,
"maxBatchSize": 500
}
}
Merge and apply the config? [y/n]:
This will enable the analytics integration, which will produce session-record messages to the configured Kafka brokers. The brokers configuration list is used to bootstrap the initial set of brokers which will be contacted by the Convoy Bridge to then obtain the current set of active brokers. It is not necessary to include all brokers here, as the initial connection will automatically negotiate the correct broker list.
The batchInterval parameter must be set to a value much less than 60 seconds, since the streamer will send session-sample messages to Kafka every 60 seconds, and if the corresponding session-record message is not received within that time, the sample will be dropped, resulting in potentially inaccurate bandwidth statistics.
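The batchInterval can likewise be set directly, for example assuming a 5 second interval is appropriate for the Kafka setup:
> confcli integration.convoy.bridge.analytics.batchInterval 5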
> confcli integration.convoy.bridge.otherRouters -w
Running wizard for resource 'otherRouters'
<Other routers which will be kept in sync (default: [])>
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
otherRouters <Other routers which will be kept in sync (default: [])>: [
otherRouter : {
url (default: ): https://router-2:5001
apiKey (default: ): 1234
validateCerts (default: True): false
}
Add another 'otherRouter' element to array 'otherRouters'? [y/N]: y
otherRouter : {
url (default: ): https://router-3:5001
apiKey (default: ):
validateCerts (default: True):
}
Add another 'otherRouter' element to array 'otherRouters'? [y/N]: n
]
Generated config:
{
"otherRouters": [
{
"url": "https://router-2:5001",
"apiKey": "1234",
"validateCerts": false
},
{
"url": "https://router-3:5001",
"apiKey": "",
"validateCerts": true
}
]
}
Merge and apply the config? [y/n]:
4.11 - Use In-Stream Sessions
In-Stream sessions are a routing mode where the router keeps control of the session for the full session lifetime instead of only routing the session to the preferred CDN (or cache node) once. Technically, it works by letting the client return to the router for manifests and segments, with the router redirecting each segment request to the preferred CDN. This enables changing CDN in the middle of a session.
Configure In-Stream Sessions
In-Stream routing for all incoming requests can be enabled by running
$ confcli services.routing.translationFunctions.session "return set_session_type('instream')"
services.routing.translationFunctions.session = "return set_session_type('instream')"
See Session Translation Function for more details on controlling session types.
The following example illustrates the request flow for an In-Stream session:
$ curl -i http://test-acd-router/content/playlist.m3u8
HTTP/1.1 302 Found
Access-Control-Allow-Origin: *
Content-Length: 0
Location: http://test-acd-router/__s/test-acd-router-a4fd86-0000004f_WyIvY2RuL3BsYXlsaXN0Lm0zdTgiLDAsImRldmNkbjEiLCJkZXZjZG4xLnN0cmVhbXBpbG90LnR2Iiw4MCwiL2Nkbi9wbGF5bGlzdC5tM3U4IiwwLDE2ODAyNjQwOTNd_c1e3fc663a47156e0894b30eddf995bb/content/playlist.m3u8
X-Service-Identity:
$ curl -i http://test-acd-router/__s/test-acd-router-a4fd86-0000004f_WyIvY2RuL3BsYXlsaXN0Lm0zdTgiLDAsImRldmNkbjEiLCJkZXZjZG4xLnN0cmVhbXBpbG90LnR2Iiw4MCwiL2Nkbi9wbGF5bGlzdC5tM3U4IiwwLDE2ODAyNjQwOTNd_c1e3fc663a47156e0894b30eddf995bb/content/playlist.m3u8
HTTP/1.1 200 OK
Accept-Ranges: bytes
Access-Control-Allow-Headers: DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Content-Length,Content-Range
Connection: keep-alive
Content-Length: 306
Content-Type: application/vnd.apple.mpegurl
Date: Fri, 31 Mar 2023 10:18:42 GMT
ETag: "5d711f08-132"
Last-Modified: Thu, 05 Sep 2019 14:43:20 GMT
Server: nginx/1.14.0 (Ubuntu)
X-Service-Identity:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=500000,RESOLUTION=320x180
abr_500k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1000000,RESOLUTION=640x360
abr_1000k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1700000,RESOLUTION=1280x720
abr_1700k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3300000,RESOLUTION=1280x720
abr_3300k.m3u8
$ curl -i http://test-acd-router/__s/test-acd-router-a4fd86-0000004f_WyIvY2RuL3BsYXlsaXN0Lm0zdTgiLDAsImRldmNkbjEiLCJkZXZjZG4xLnN0cmVhbXBpbG90LnR2Iiw4MCwiL2Nkbi9wbGF5bGlzdC5tM3U4IiwwLDE2ODAyNjQwOTNd_c1e3fc663a47156e0894b30eddf995bb/content/abr_500k.m3u8
HTTP/1.1 200 OK
Accept-Ranges: bytes
Access-Control-Allow-Headers: DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Content-Length,Content-Range
Connection: keep-alive
Content-Length: 276
Content-Type: application/vnd.apple.mpegurl
Date: Fri, 31 Mar 2023 12:02:04 GMT
ETag: "6426cbbc-114"
Last-Modified: Fri, 31 Mar 2023 12:02:04 GMT
Server: nginx/1.14.0 (Ubuntu)
X-Service-Identity:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:1
#EXT-X-MEDIA-SEQUENCE:112583846
#EXTINF:0.834444,
abr_500k112583846.ts
#EXTINF:0.834444,
abr_500k112583847.ts
#EXTINF:0.834444,
abr_500k112583848.ts
#EXTINF:0.834444,
abr_500k112583849.ts
#EXTINF:0.834444,
abr_500k112583850.ts
$ curl -i http://test-acd-router/__s/test-acd-router-a4fd86-0000004f_WyIvY2RuL3BsYXlsaXN0Lm0zdTgiLDAsImRldmNkbjEiLCJkZXZjZG4xLnN0cmVhbXBpbG90LnR2Iiw4MCwiL2Nkbi9wbGF5bGlzdC5tM3U4IiwwLDE2ODAyNjQwOTNd_c1e3fc663a47156e0894b30eddf995bb/content/abr_500k112583846.ts
HTTP/1.1 302 Found
Access-Control-Allow-Origin: *
Content-Length: 0
Location: http://testcdn.tv/abr_500k112583846.ts
X-Service-Identity:
A Note on Confcli
Begin by configuring classifiers, session groups and routing in confcli, for example based on content type.
# Create a classifier for live sessions
$ confcli services.routing.classifiers -w
Running wizard for resource 'classifiers'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
classifiers : [
classifier can be one of
1: anonymousIp
2: asnIds
3: contentUrlPath
4: contentUrlQueryParameters
5: geoip
6: hostName
7: ipranges
8: random
9: regexMatcher
10: stringMatcher
11: subnet
12: userAgent
Choose element index or name: contentUrlPath
Adding a 'contentUrlPath' element
classifier : {
name (default: ): live
type (default: contentUrlPath): ⏎
inverted (default: False): ⏎
patternType (default: stringMatch): ⏎
pattern (default: ): *live*
}
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
Generated config:
{
"classifiers": [
{
"name": "live",
"type": "contentUrlPath",
"inverted": false,
"patternType": "stringMatch",
"pattern": "*live*"
}
]
}
Merge and apply the config? [y/n]: y
# Create a session group called "live" using the classifier
$ confcli services.routing.sessionGroups -w
Running wizard for resource 'sessionGroups'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
sessionGroups : [
sessionGroup : {
name (default: ): live
classifiers : [
classifier (default: ): live
Add another 'classifier' element to array 'classifiers'? [y/N]: ⏎
]
}
Add another 'sessionGroup' element to array 'sessionGroups'? [y/N]: ⏎
]
Generated config:
{
"sessionGroups": [
{
"name": "live",
"classifiers": [
"live"
]
}
]
}
Merge and apply the config? [y/n]: y
# Make all sessions belonging to the session group "live" In-Stream sessions
$ confcli services.routing.translationFunctions.session "return set_session_type_if_in_group('instream', 'live')"
services.routing.translationFunctions.session = "return set_session_type_if_in_group('instream', 'live')"
Monitoring
If Grafana is installed together with the ESB3024 installer, the number of started In-Stream sessions, as well as the number of currently active In-Stream sessions, can be seen in a dashboard.
Notes on Performance
Be aware that In-Stream sessions put additional load on the request router, since all clients continuously come back to the router to fetch new manifests and to get segment redirects. The necessary router capacity therefore scales both with the number of started sessions and with the average session length. For example, with 4-second segments, 10,000 concurrently active In-Stream sessions translate to roughly 2,500 segment redirect requests per second, in addition to manifest requests.
5 - Ready for Production?
5.1 - Setup for Redundancy
The Agile Director is designed to support the coexistence of multiple independent instances, facilitating high availability and/or horizontal scaling based on specific use cases and requirements. Each instance of the Director operates as a completely isolated entity, offering multiple solutions for redundancy. The choice between these solutions depends on the particular use case and requirements of the deployment.
A few select use-cases and example deployments are described in the sections that follow. Note however that this list is not exhaustive, and other approaches not listed here may be more applicable to a specific use-case.
Third-party load balancer
In this first scenario, multiple instances of the router are positioned behind an off-the-shelf load balancer. This could range from a simple Nginx proxy to a more sophisticated commercial load balancer, depending on the user's specific needs and requirements.
DNS Round Robin
A straightforward alternative to deploying multiple instances of the router behind a load balancer is to employ round-robin DNS. In this approach, each router instance is associated with a distinct A record in DNS, all sharing the same domain name. Clients querying this DNS server receive multiple entries in response. Typically, clients utilize a round-robin algorithm to resolve the IP address of the router, although some clients might randomly select an address from the available servers.
While this method effectively distributes the load across multiple instances, it has a few drawbacks. First, clients are not guaranteed to use any specific algorithm for selecting the IP address. Second, due to DNS caching in the network and on the client's device, modifications to the DNS record (e.g., taking a node out of service or adding additional nodes) will take effect with higher latency compared to some other techniques. Third, there is a dependency on the client being able to recognize this DNS server as authoritative. Fourth, this approach does not provide any means of high availability: if a node is offline, requests to that node will fail.
VRRP
If horizontal scaling is not required, employing a shared Virtual IP Address through the VRRP protocol may suffice. This can be easily configured on one or more instances of the router by installing and setting up keepalived. While the specific details of keepalived are beyond the scope of this document, it can establish a Primary/Backup node configuration, where heartbeat packets and server priority determine each node's role. Using VRRP, a Virtual IP address always points to the current primary node, and in the event of a primary node failure, the remaining nodes elect a new primary node based on the predetermined priority.
This solution has some drawbacks compared to others; the VIP may only be assigned to a single node at a time, rendering horizontal scaling impossible using this solution alone. Another drawback is that if the primary node fails, there will be a brief period, usually around one second, before the failure is detected and the VIP is reassigned.
This solution can be more effectively utilized as part of a larger combined solution in which several groups of nodes are employed. In each group, there is a Primary and Backup, and selection between the groups is handled via one of the other scaling solutions.
An Example Combined Approach Scenario
The following is an example scenario in which eight independent router nodes are deployed, employing all three of the above approaches.
The scenario consists of four pairs of nodes, with each pair using VRRP to designate one node as Primary and one as Backup. In front of each group of two pairs sits a load balancer, configured with the VIP addresses of the two pairs below it. The two load balancers are then placed into round-robin DNS with the same FQDN.
Clients resolve the DNS record, yielding the two load balancer addresses, and send requests to each in turn. The load balancer receiving the request then selects between the two VIP addresses below it, and the request is routed to the current Primary node in that group.
In this scenario, if any one of the primary nodes goes offline, the VRRP protocol will move the VIP to the backup node. While that process is ongoing, the load balancer will detect that the VIP is unreachable, routing all traffic in the meantime to the VIP of the second pair. The DNS round-robin is used to evenly distribute the traffic between the two load balancers.
5.2 - HTTPS Certificates
Configuration of the ESB3024 Router is done through a REST API over HTTPS. While the router installer generates a self-signed certificate in order to enable the interface at all, this is not considered secure, so a properly generated certificate should be used instead.
For SSL to work, the router needs to have both an x509 certificate and a key in ASCII armored PEM format:
-----BEGIN PRIVATE KEY-----
[...]
-----END PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
[...]
-----END CERTIFICATE-----
The files can be either separate .crt and .key files or a combined .pem file.
Simply copy the file(s) generated by your CA service into the /opt/edgeware/acd/ssl folder on the host machine and they will automatically be used by the router. The filenames must match the associated hostname, and there is currently no support for wildcard matching or multiple domains per certificate. Several key/crt pairs can be placed in the folder in order to support more than one domain name.
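As an illustration, assuming the router is reached as router.example.com and the CA delivered a combined PEM file, installing the certificate could look like this (hypothetical filenames):
# Combined key and certificate in one PEM file named after the hostname
$ cp router.example.com.pem /opt/edgeware/acd/ssl/
# Or as a separate key/certificate pair
$ cp router.example.com.key router.example.com.crt /opt/edgeware/acd/ssl/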