How to use the Director as a replacement for Edgeware CDN Request Router
This guide outlines how to replicate Convoy’s ESB3008 HTTP Request Router using the Director. This is accomplished using the Convoy Bridge, which is an integration service designed to allow the router and an existing Convoy installation to work together seamlessly.
Following the steps outlined below enables the replacement of existing ESB3008 HTTP Request Router instances with the Director. At the time of writing, the Director is not a one-to-one substitute for the ESB3008 HTTP Request Router; however, most of its functionality is available, and in the majority of cases the Director can be used without any loss of features.
The purpose of this guide is to detail the process of transitioning from an existing Convoy installation, which includes one or more instances of ESB3008 HTTP Request Router, to a configuration utilizing the Director. This transition allows the existing Convoy installation to continue its role in CDN provisioning, content management, and analytics. Simultaneously, the router assumes responsibility for managing HTTP(S) redirects directly from clients to selected streamers.
One functional difference between the ESB3008 HTTP Request Router and the Director lies in how each communicates with the streamer to determine specific interface assignments. ESB3008 uses a proprietary protocol to communicate with the streamer and help determine the best interface for the client. Because the Director must work with both streamers and third-party CDNs, this proprietary protocol is not supported, and the router's rule engine must be used to accomplish the same result. For this reason, it is recommended to configure each streaming interface as a distinct host entry in the router's configuration, which allows the router to address each interface separately.
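For illustration only, a streamer exposing two streaming interfaces could be modelled as two separate host entries rather than one. The names and addresses below are placeholders, and the exact host schema should be taken from the router's configuration reference:
{
  "hosts": [
    { "name": "streamer1-if1", "address": "198.51.100.10" },
    { "name": "streamer1-if2", "address": "198.51.100.11" }
  ]
}
With separate entries, the routing rules can target a specific interface by name instead of relying on the streamer to choose one.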
Prerequisites
This guide assumes both an existing Convoy installation and a partially configured instance of the Director. The router should have all the necessary routing configuration in place for hostGroups, sessionGroups and routing rules to direct traffic to the streaming interfaces of the streamers. It is also assumed that the proper firewall rules allow the required traffic between the router, the Convoy Bridge integration and the Convoy management nodes.
The minimum compatible version of Convoy which can be used with the analytics synchronization feature is Convoy 3.0.0. Previous versions may be used only if Convoy is not being used for analytics.
Firewall Configuration
Proper firewall configuration is required to allow both the router and Convoy to communicate through bidirectional connections established from the Convoy Bridge integration. The Convoy Bridge does not need to be enabled for each router instance; one instance of the Convoy Bridge can synchronize multiple instances of the router simultaneously. Only the router nodes with the Convoy Bridge enabled will need access through the firewall to establish outgoing connections to the Convoy management nodes, or whichever nodes are running MariaDB and Kafka. Additionally, if multiple router instances are to be synchronized by a single Convoy Bridge instance, the firewall must allow the Convoy Bridge to establish connections to each router instance on port 5001.
The following table outlines the necessary connections to be allowed through the firewall.
Source | Destination | Port | Description
---|---|---|---
Convoy Bridge | Convoy Management | 3306/TCP | Account Configuration
Convoy Bridge | Convoy Management | 9092/TCP | Analytics Data
Convoy Bridge | The router | 5001/TCP | REST API
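As an example, on hosts running firewalld the ports could be opened as follows; the commands are illustrative and should be adapted to the local zones and security policy:
# On the Convoy management nodes (MariaDB and Kafka):
firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --permanent --add-port=9092/tcp
firewall-cmd --reload
# On each router instance that a Convoy Bridge should reach:
firewall-cmd --permanent --add-port=5001/tcp
firewall-cmd --reload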
Response Translation Functions
In order to have the router generate a redirect URL which is compatible with the format generated by the ESB3008 HTTP Request Router, a response translation function must be used to modify the redirect URL before it is returned to the client. A built-in function, convoy_compatibility_response, is available to be used for this purpose. This function appends both the session-id and storage-prefix based on the account and distribution information in Convoy.
On the Director, set the response translation function as follows:
> confcli services.routing.translationFunctions.response "return convoy_compatibility_response(Headers, '')"
This function depends on the account synchronization feature of the Convoy Bridge to determine the correct storage prefix based on the incoming Host header value. It works by using the Host header to look up the storage prefix from the account configuration, and then modifying the redirect URL to prepend /session/<session-id>/<prefix>. The streamer can then use this information for both origin selection and session tracking.
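As a rough illustration of the resulting rewrite (this is not the built-in function itself, and the lookup table, session id and paths are placeholder values), the transformation can be pictured as:
# Illustrative sketch of the rewrite performed by convoy_compatibility_response.
# The lookup table, session id and paths below are placeholders, not real data.
accounts = {"vod.example.com": "account1/vod"}  # Host header -> storage prefix

def rewrite_redirect(redirect_path: str, host_header: str, session_id: str) -> str:
    prefix = accounts[host_header]
    # Prepend /session/<session-id>/<prefix> to the redirect URL path.
    return f"/session/{session_id}/{prefix}/{redirect_path.lstrip('/')}"

print(rewrite_redirect("/content/index.m3u8", "vod.example.com", "abc123"))
# -> /session/abc123/account1/vod/content/index.m3u8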
Enabling the Convoy Bridge
Before using the Convoy Bridge integration, a valid configuration must first be applied.
All configuration for the Convoy Bridge is available in confd under the key integration.convoy.bridge. The configuration is divided into three main sections: the account synchronization feature, the Convoy analytics integration, and a list of additional router instances to synchronize.
By default, the Convoy Bridge will include the current local router instance in the synchronization process. The connection details are hard-wired into the container at startup, and do not need to be specified in the configuration. Because one Convoy Bridge can serve multiple instances of the router, it is not necessary to configure the Convoy Bridge on each node separately. However, if multiple Convoy Bridge instances are used, it is recommended to configure them so that each router is served by more than one instance of the Convoy Bridge. This provides high availability in a production environment.
As an example, if there are four router instances, A, B, C, and D, and four Convoy Bridge instances, configuring them such that Bridge 1 synchronizes A & B, Bridge 2 synchronizes B & C, Bridge 3 synchronizes C & D, and Bridge 4 synchronizes D & A will provide fault tolerance in the event that one bridge is unavailable.
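As a sketch of that layout, assuming Bridge 1 runs alongside router A (the hostname below is a placeholder), its otherRouters section would only need to name router B, since the local router is always included automatically:
{
  "otherRouters": [
    {
      "url": "https://router-b:5001",
      "apiKey": "",
      "validateCerts": true
    }
  ]
}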
The account synchronization feature of the Convoy Bridge watches the MariaDB database on the Convoy management node for changes to the account and distribution configuration, and if the configuration differs from what is currently known to the router, the configuration will be updated. This feature is required both by the router, for use with the convoy_compatibility_response response translation function, and by the Convoy Bridge integration itself, for setting the correct account, storage, and distribution fields in the analytics integration feature.
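The behaviour can be pictured with the following conceptual sketch; the fetch and push functions are stand-ins for the MariaDB query and the router REST API update performed by the real bridge:
# Conceptual sketch only: poll the Convoy account configuration and push it to
# the router when it changes. The two helper functions are placeholders.
import time

def fetch_account_config():
    # Stand-in for the query against the account and distribution tables in MariaDB.
    return {"vod.example.com": "account1/vod"}

def push_to_router(config):
    # Stand-in for the update call to the router's REST API.
    print("updating router with", config)

last_seen = None
while True:
    current = fetch_account_config()
    if current != last_seen:   # only update when the configuration actually differs
        push_to_router(current)
        last_seen = current
    time.sleep(60)             # pollInterval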
The analytics integration feature monitors the router's Event API for newly established sessions, and produces session-record messages over the Kafka message broker for use by Convoy. This feature requires that Convoy is both licensed and configured for use with the analytics feature. The integration works by establishing a persistent connection to each router and listening for new session events. Due to the protocol used by the Event API to send the events, only one instance of the Convoy Bridge will receive each event, and only that instance will produce the session-record to the Kafka message brokers.
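As an illustration of the produce side only (the topic name and message fields are assumptions, not the actual Convoy session-record schema, and kafka-python is used as a stand-in client):
# Illustrative only: turn a new-session event from the router's Event API into
# a Kafka message. Topic name and field names are placeholders.
import json
from kafka import KafkaProducer  # kafka-python, assumed available

producer = KafkaProducer(
    bootstrap_servers=["convoy-mgmt1:9092", "convoy-mgmt2:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def on_new_session(event):
    # Event shape is illustrative; the bridge would receive this from the router.
    record = {"session_id": event["session_id"], "account": event["account"]}
    producer.send("session-record", record)

on_new_session({"session_id": "abc123", "account": "account1"})
producer.flush()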
Synchronizing multiple routers with the same instance of the Convoy Bridge is performed by configuring the otherRouters section of the Convoy Bridge configuration. In addition to the local router, each otherRouter entry will have the same account and distribution information updated simultaneously, and will be used as an event source for the analytics integration. The otherRouters entries require the URL to the router's REST API, an optional API key, and a flag to indicate whether SSL certificate validation should be enforced.
Enable the account synchronization feature by running the following command:
> confcli integration.convoy.bridge.accounts -w
Running wizard for resource 'accounts'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
accounts : {
enabled (default: False): true
dbUrl (default: mysql://user:pass@localhost:3306): mysql://convoy:password@convoy-mgmt1:3306
cmsUrl (default: http://localhost:5555): http://convoy-mgmt1:5555
pollInterval (default: 60):
}
Generated config:
{
"accounts": {
"enabled": true,
"dbUrl": "mysql://convoy:password@convoy-mgmt1:3306",
"cmsUrl": "http://convoy-mgmt1:5555",
"pollInterval": 60
}
}
Merge and apply the config? [y/n]:
This will result in the Convoy Bridge continuously polling the Convoy instance running on convoy-mgmt1 every 60 seconds for changes to the account configuration. Only when the account configuration differs from the current configuration on the router will it be updated with the change.
The pollInterval parameter can be used to control how frequently the Convoy Bridge will attempt to poll the database for changes. Setting this value too high will result in a longer delay between changes being made in Convoy and the changes being reflected in the router. Setting this value too low will result in unnecessary load on the Convoy database. The default value of 60 seconds should be sufficient for most use cases.
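If a different interval is required, the value can also be changed directly rather than re-running the wizard, for example (using the configuration path shown above):
> confcli integration.convoy.bridge.accounts.pollInterval 30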
To enable the analytics integration feature, run the following command:
> confcli integration.convoy.bridge.analytics -w
Running wizard for resource 'analytics'
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
analytics : {
enabled (default: False): true
brokers : [
broker (default: ): convoy-mgmt1:9092
Add another 'broker' element to array 'brokers'? [y/N]: y
broker (default: ): convoy-mgmt2:9092
Add another 'broker' element to array 'brokers'? [y/N]: y
broker (default: ): convoy-mgmt3:9092
Add another 'broker' element to array 'brokers'? [y/N]: n
]
batchInterval (default: 10):
maxBatchSize (default: 500):
}
Generated config:
{
"analytics": {
"enabled": true,
"brokers": [
"convoy-mgmt1:9092",
"convoy-mgmt2:9092",
"convoy-mgmt3:9092"
],
"batchInterval": 10,
"maxBatchSize": 500
}
}
Merge and apply the config? [y/n]:
This will enable the analytics integration, which will produce session-record messages to the configured Kafka brokers. The brokers configuration list is used to bootstrap the initial set of brokers which will be contacted by the Convoy Bridge to then obtain the current set of active brokers. It is not necessary to include all brokers here, as the initial connection will automatically negotiate the correct broker list.
The batchInterval parameter must be set to a value much less than 60 seconds, since the streamer will send session-sample messages to Kafka every 60 seconds, and if the corresponding session-record message is not received within that time, the sample will be dropped, resulting in potentially inaccurate bandwidth statistics.
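If needed, the interval can be adjusted directly in the same way as other values, for example (using the path shown by the wizard above):
> confcli integration.convoy.bridge.analytics.batchInterval 5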
To synchronize additional router instances with the same Convoy Bridge, configure the otherRouters section by running the following command:
> confcli integration.convoy.bridge.otherRouters -w
Running wizard for resource 'otherRouters'
<Other routers which will be kept in sync (default: [])>
Hint: Hitting return will set a value to its default.
Enter '?' to receive the help string
otherRouters <Other routers which will be kept in sync (default: [])>: [
otherRouter : {
url (default: ): https://router-2:5001
apiKey (default: ): 1234
validateCerts (default: True): false
}
Add another 'otherRouter' element to array 'otherRouters'? [y/N]: y
otherRouter : {
url (default: ): https://router-3:5001
apiKey (default: ):
validateCerts (default: True):
}
Add another 'otherRouter' element to array 'otherRouters'? [y/N]: n
]
Generated config:
{
"otherRouters": [
{
"url": "https://router-2:5001",
"apiKey": "1234",
"validateCerts": false
},
{
"url": "https://router-3:5001",
"apiKey": "",
"validateCerts": true
}
]
}
Merge and apply the config? [y/n]:
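After applying the configuration, a simple reachability check from the Convoy Bridge node can confirm that each configured router is accessible on port 5001; the -k flag skips certificate validation in the same way as setting validateCerts to false:
> curl -k https://router-2:5001/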