Token blocking
A common form of CDN piracy is sharing unique session tokens between multiple users. If your tokens do not encode device identifiers, attackers may be able to reuse a single token on multiple devices.
This is a common problem in the industry, and one solution is to block tokens after they have been used. This can be accomplished by storing the used tokens in a central database and checking against that database before allowing a new session to be created.
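The idea can be sketched as a minimal in-memory used-token store with per-token TTLs. This is a toy Python model for illustration only; the class and method names are hypothetical and not part of any AgileTV API:

```python
import time

class UsedTokenStore:
    """Minimal sketch of a shared used-token store with per-token TTL."""

    def __init__(self):
        self._expiry = {}  # token -> absolute expiry time (seconds)

    def add(self, token, ttl_s):
        # Record the token as used; it stays blocked for ttl_s seconds.
        self._expiry[token] = time.time() + ttl_s

    def is_blocked(self, token):
        # A token is blocked if it was seen before and its TTL has not expired.
        expiry = self._expiry.get(token)
        if expiry is None:
            return False
        if time.time() >= expiry:
            del self._expiry[token]  # purge expired entries lazily
            return False
        return True

store = UsedTokenStore()
assert not store.is_blocked("abc123")  # first use: allowed
store.add("abc123", ttl_s=3600)
assert store.is_blocked("abc123")      # replayed token: denied
```

In a real deployment this store must be shared between all routing instances, which is what the Kafka-based setup below provides.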
How it works
AgileTV CDN Director can be configured to produce and consume messages to and from a Kafka broker/cluster, preferably the Kafka cluster deployed by the AgileTV CDN Manager. Utilizing Kafka, the Director can produce Kafka messages containing the used tokens which other Director instances can consume and store locally in selection input.
Configuration
Kafka
The Manager deploys a Kafka cluster that can be used to store used tokens. The Kafka cluster has a default topic called selection_input that can be used to store selection input data. This can be verified by running the following command on the Manager host and checking that the topic selection_input is listed:
kubectl exec acd-manager-kafka-controller-0 -- kafka-topics.sh --bootstrap-server localhost:9092 --list
__consumer_offsets
selection_input
Note that the used tokens can be stored under any Kafka topic, but this example uses the default selection_input topic. If you want to use a different topic, refer to the Kafka documentation for more information on how to create one. Be aware that other services deployed by the Manager may also use the selection_input topic, so be careful when choosing a different topic.
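As a sketch, a dedicated topic could be created with the same kafka-topics.sh tool used above. The topic name used_tokens and the partition/replication settings below are example values; adjust them to your cluster (this command requires a running cluster and is not meant to be run as-is):

```shell
kubectl exec acd-manager-kafka-controller-0 -- \
    kafka-topics.sh --bootstrap-server localhost:9092 \
    --create --topic used_tokens \
    --partitions 1 --replication-factor 1
```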
Connecting to the Kafka cluster
All instances of the Director must be configured to connect to the same Kafka cluster. Assuming that an instance of the Manager is running on the host manager-host, the Kafka cluster can be accessed through port 9095. The following configuration will make the Director connect to the Kafka cluster served by the Manager:
confcli integration.kafka.bootstrapServers
{
    "bootstrapServers": [
        "manager-host:9095"
    ]
}
Data streams
To consume and produce messages to and from the Kafka cluster, data streams are used. Assuming that the Kafka topic selection_input is used to store the used tokens, the following configuration will make the Director consume messages from the selection_input topic and store them as selection input data:
confcli services.routing.dataStreams
{
    "dataStreams": {
        "incoming": [
            {
                "name": "incomingSelectionInput",
                "source": "kafka",
                "target": "selectionInput",
                "kafkaTopics": [
                    "selection_input"
                ]
            }
        ],
        "outgoing": [
            {
                "name": "outgoingSelectionInput",
                "type": "kafka",
                "kafkaTopic": "selection_input"
            }
        ]
    }
}
The incoming section specifies that the Director will consume messages from the selection_input topic and store them as selection input data. Messages are consumed periodically and no further configuration is needed.
The outgoing section specifies that the Director will produce messages to the selection_input topic. The kafkaTopic option specifies the name of the topic to produce messages to. However, to actually produce messages, we need to employ a Lua function.
Lua function
The Lua function blocked_tokens.add produces messages to the Kafka topic. These messages will be consumed by other Director instances configured to consume from the selection_input topic and stored as selection input data.
During routing we also need to check whether the token attached to the request is blocked and deny the request accordingly. This can be done using the Lua function blocked_tokens.is_blocked.
These two functions can be combined in a custom Lua function that checks whether the token is blocked and produces a message to the Kafka topic if it is not.
These two functions can be utilized in a custom Lua function to check if the token is blocked and produce a message to the Kafka topic if it is not.
Consider a scenario where the token is included in the query string of a request. The Lua function below checks if the token exists in the query string, retrieves it, and determines whether it is blocked using blocked_tokens.is_blocked. If the token is not blocked, it is added to the outgoing data stream with a time-to-live (TTL) of 1 hour (3600 seconds) using blocked_tokens.add. If the token is blocked, the function returns 1, signaling that the request should be denied.
function is_token_used()
    -- This function checks if the token is present in the query string
    -- and if it is blocked. If the token is missing or blocked, the function
    -- returns 1. If the token has not been blocked, the function returns 0.

    -- Verify that the token is present in the query string
    local token = request_query_params.token
    if token == nil then
        -- Token is not present, return 1
        return 1
    end

    -- Check if the token is blocked
    if not blocked_tokens.is_blocked(token) then
        -- Token is not blocked, add it to the outgoing data
        -- stream with a TTL of 1 hour (3600 seconds)
        local ttl_s = 3600
        blocked_tokens.add('outgoingSelectionInput', token, ttl_s)
        return 0
    else
        -- Token is blocked, return 1
        return 1
    end
end
Save this function in a file called is_token_used.lua and upload it to the Director using the Lua API.
When the function has been uploaded to the Director, it can be used in a deny rule block to fully handle all token blocking:
confcli services.routing.rules
{
    "rules": [
        {
            "name": "checkTokenRule",
            "type": "deny",
            "condition": "is_token_used()",
            "onMiss": "someOtherRule"
        }
    ]
}
This rule will call the function is_token_used() for every request. If the token is missing or blocked, the function will return 1 and the request will be denied. If the token is not blocked, the function will add the token to the outgoing data stream to block it for future use and return 0, allowing the request to continue to the next rule in the chain, which is someOtherRule in this case.
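The end-to-end flow can be illustrated with a toy model of two Director instances sharing used tokens through a message stream. This is a Python sketch for illustration only: the shared list stands in for the Kafka topic, and the class and method names are hypothetical, not the Director's real API:

```python
import time

class Director:
    """Toy model of a Director instance; a shared list stands in for Kafka."""

    def __init__(self, stream):
        self.stream = stream   # shared "topic": list of (token, expiry) messages
        self.blocked = {}      # local selection input data: token -> expiry
        self.offset = 0        # consumer position in the shared stream

    def consume(self):
        # Periodic consumption of messages produced by any instance.
        for token, expiry in self.stream[self.offset:]:
            self.blocked[token] = expiry
        self.offset = len(self.stream)

    def handle_request(self, token):
        # Mirrors is_token_used(): returns 1 to deny, 0 to allow.
        if token is None:
            return 1
        expiry = self.blocked.get(token)
        if expiry is not None and time.time() < expiry:
            return 1
        # First use: publish the token so all instances block it from now on,
        # analogous to blocked_tokens.add with a one-hour TTL.
        self.stream.append((token, time.time() + 3600))
        self.consume()
        return 0

topic = []
d1, d2 = Director(topic), Director(topic)
assert d1.handle_request("tok-1") == 0  # first use on d1: allowed
d2.consume()                            # d2 picks up the produced message
assert d2.handle_request("tok-1") == 1  # replay on d2: denied
```

In the real deployment, Kafka plays the role of the shared stream and each Director's selection input plays the role of the local blocked map.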