Looking for the plugin's configuration parameters? You can find them in the Kafka Log configuration reference doc.
Publish request and response logs to an Apache Kafka topic. For more information, see Kafka topics.
Kong also provides a Kafka plugin for request transformations. See Kafka Upstream.
Note: This plugin does not support message compression.
Quickstart
The following guidelines assume that both Kong Gateway Enterprise and Kafka have been installed on your local machine.
Note: We use `zookeeper` in the following example. ZooKeeper is not required, or has been removed entirely, on some Kafka versions. Refer to the Kafka ZooKeeper documentation for more information.
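If your Kafka cluster runs without ZooKeeper (KRaft mode), you can create the topic in step 1 against a broker directly instead. A minimal sketch, assuming a broker listening on `localhost:9092`:

```sh
${KAFKA_HOME}/bin/kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --replication-factor 1 \
  --partitions 10 \
  --topic kong-log
```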
- Create a `kong-log` topic in your Kafka cluster:

  ```sh
  ${KAFKA_HOME}/bin/kafka-topics.sh --create \
    --zookeeper localhost:2181 \
    --replication-factor 1 \
    --partitions 10 \
    --topic kong-log
  ```
- Add the `kafka-log` plugin globally:

  ```sh
  curl -X POST http://localhost:8001/plugins \
    --data "name=kafka-log" \
    --data "config.bootstrap_servers[1].host=localhost" \
    --data "config.bootstrap_servers[1].port=9092" \
    --data "config.topic=kong-log"
  ```
- Make sample requests:

  ```sh
  for i in {1..50} ; do curl http://localhost:8000/request/$i ; done
  ```
- Verify the contents of the Kafka `kong-log` topic:

  ```sh
  ${KAFKA_HOME}/bin/kafka-console-consumer.sh \
    --bootstrap-server localhost:9092 \
    --topic kong-log \
    --partition 0 \
    --from-beginning \
    --timeout-ms 1000
  ```
Log format
Note: If the `max_batch_size` argument is greater than 1, requests are logged as an array of JSON objects.

Otherwise, every request is logged separately as a JSON object, separated by a newline (`\n`), with the following format:
```json
{
"response": {
"size": 9982,
"headers": {
"access-control-allow-origin": "*",
"content-length": "9593",
"date": "Thu, 19 Sep 2024 22:10:39 GMT",
"content-type": "text/html; charset=utf-8",
"via": "1.1 kong/3.8.0.0-enterprise-edition",
"connection": "close",
"server": "gunicorn/19.9.0",
"access-control-allow-credentials": "true",
"x-kong-upstream-latency": "171",
"x-kong-proxy-latency": "1",
"x-kong-request-id": "2f6946328ffc4946b8c9120704a4a155"
},
"status": 200
},
"route": {
"updated_at": 1726782477,
"tags": [],
"response_buffering": true,
"path_handling": "v0",
"protocols": [
"http",
"https"
],
"service": {
"id": "fb4eecf8-dec2-40ef-b779-16de7e2384c7"
},
"https_redirect_status_code": 426,
"regex_priority": 0,
"name": "example_route",
"id": "0f1a4101-3327-4274-b1e4-484a4ab0c030",
"strip_path": true,
"preserve_host": false,
"created_at": 1726782477,
"request_buffering": true,
"ws_id": "f381e34e-5c25-4e65-b91b-3c0a86cfc393",
"paths": [
"/example-route"
]
},
"workspace": "f381e34e-5c25-4e65-b91b-3c0a86cfc393",
"workspace_name": "default",
"tries": [
{
"balancer_start": 1726783839539,
"balancer_start_ns": 1.7267838395395e+18,
"ip": "34.237.204.224",
"balancer_latency": 0,
"port": 80,
"balancer_latency_ns": 27904
}
],
"client_ip": "192.168.65.1",
"request": {
"id": "2f6946328ffc4946b8c9120704a4a155",
"headers": {
"accept": "*/*",
"user-agent": "HTTPie/3.2.3",
"host": "localhost:8000",
"connection": "keep-alive",
"accept-encoding": "gzip, deflate"
},
"uri": "/example-route",
"size": 139,
"method": "GET",
"querystring": {},
"url": "http://localhost:8000/example-route"
},
"started_at": 1726783839538,
"upstream_status": "200",
"latencies": {
"kong": 1,
"proxy": 171,
"request": 173,
},
"service": {
"write_timeout": 60000,
"read_timeout": 60000,
"updated_at": 1726782459,
"host": "httpbin.konghq.com",
"name": "example_service",
"id": "fb4eecf8-dec2-40ef-b779-16de7e2384c7",
"port": 80,
"enabled": true,
"created_at": 1726782459,
"protocol": "http",
"ws_id": "f381e34e-5c25-4e65-b91b-3c0a86cfc393",
"connect_timeout": 60000,
"retries": 5
}
}
```
Implementation details
This plugin uses the lua-resty-kafka client.
When encoding request bodies, several things happen:
- For requests with a content-type header of `application/x-www-form-urlencoded`, `multipart/form-data`, or `application/json`, this plugin passes the raw request body in the `body` attribute, and tries to return a parsed version of those arguments in `body_args`. If this parsing fails, an error message is returned and the message is not sent.
- If the `content-type` is not `text/plain`, `text/html`, `application/xml`, `text/xml`, or `application/soap+xml`, then the body is base64-encoded to ensure that the message can be sent as JSON. In such a case, the message has an extra attribute called `body_base64` set to `true`.
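For illustration only, a request with a binary content-type might carry the attributes described above like this (the exact field placement is a sketch, not verbatim plugin output):

```json
{
  "request": {
    "body": "aGVsbG8gd29ybGQ=",
    "body_base64": true
  }
}
```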
TLS
Enable TLS by setting `config.security.ssl` to `true`.
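For example, a minimal sketch of enabling TLS when creating the plugin; the host and TLS port `9093` are assumptions, so substitute your cluster's TLS listener:

```sh
curl -X POST http://localhost:8001/plugins \
  --data "name=kafka-log" \
  --data "config.bootstrap_servers[1].host=localhost" \
  --data "config.bootstrap_servers[1].port=9093" \
  --data "config.topic=kong-log" \
  --data "config.security.ssl=true"
```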
mTLS
Enable mTLS by setting a valid UUID of a certificate in `config.security.certificate_id`.
Note that this option requires `config.security.ssl` to be set to `true`.
See the Certificate Object in the Admin API documentation for information on how to set up Certificates.
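A minimal sketch, assuming a Certificate has already been created and `YOUR_CERTIFICATE_UUID` is a placeholder for its ID:

```sh
curl -X POST http://localhost:8001/plugins \
  --data "name=kafka-log" \
  --data "config.bootstrap_servers[1].host=localhost" \
  --data "config.bootstrap_servers[1].port=9093" \
  --data "config.topic=kong-log" \
  --data "config.security.ssl=true" \
  --data "config.security.certificate_id=YOUR_CERTIFICATE_UUID"
```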
SASL Authentication
This plugin supports the following authentication mechanisms:
- PLAIN: Enable this mechanism by setting `config.authentication.mechanism` to `PLAIN`. You also need to provide a username and password with the config options `config.authentication.user` and `config.authentication.password`, respectively.
- SCRAM: In cryptography, the Salted Challenge Response Authentication Mechanism (SCRAM) is a family of modern, password-based challenge–response authentication mechanisms providing authentication of a user to a server. The Kafka Log plugin supports the following (see the example after this list):
  - SCRAM-SHA-256: Enable this mechanism by setting `config.authentication.mechanism` to `SCRAM-SHA-256`. You also need to provide a username and password with the config options `config.authentication.user` and `config.authentication.password`, respectively.
  - SCRAM-SHA-512: Enable this mechanism by setting `config.authentication.mechanism` to `SCRAM-SHA-512`. You also need to provide a username and password with the config options `config.authentication.user` and `config.authentication.password`, respectively.
- Delegation Tokens: Delegation Tokens can be generated in Kafka and then used to authenticate this plugin. Delegation Tokens leverage the `SCRAM-SHA-256` authentication mechanism. The `tokenID` is provided with the `config.authentication.user` field and the `token-hmac` is provided with the `config.authentication.password` field. To indicate that a token is used, set the `config.authentication.tokenauth` setting to `true`. Read more on how to create, renew, and revoke delegation tokens.
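For example, a minimal sketch of enabling `SCRAM-SHA-256`; the username and password values are placeholders, and only the fields described above are set (depending on your plugin version, an authentication strategy field may also need to be configured):

```sh
curl -X POST http://localhost:8001/plugins \
  --data "name=kafka-log" \
  --data "config.bootstrap_servers[1].host=localhost" \
  --data "config.bootstrap_servers[1].port=9092" \
  --data "config.topic=kong-log" \
  --data "config.authentication.mechanism=SCRAM-SHA-256" \
  --data "config.authentication.user=YOUR_USERNAME" \
  --data "config.authentication.password=YOUR_PASSWORD"
```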
Custom fields by Lua
The `custom_fields_by_lua` configuration allows for the dynamic modification of log fields using Lua code. Below is a snippet of an example configuration that removes the `route` field from the logs:
```sh
curl -i -X POST http://localhost:8001/plugins \
  ...
  --data config.custom_fields_by_lua.route="return nil"
```
Similarly, new fields can be added:
```sh
curl -i -X POST http://localhost:8001/plugins \
  ...
  --data config.custom_fields_by_lua.header="return kong.request.get_header('h1')"
```
Plugin precedence and managing fields
All logging plugins use the same table for logging. If you set `custom_fields_by_lua` in one plugin, all logging plugins that execute after that plugin will also use the same configuration. For example, if you configure fields via `custom_fields_by_lua` in File Log, those same fields will appear in Kafka Log, since File Log executes first.

If you want all logging plugins to use the same configuration, we recommend using the Pre-function plugin to call `kong.log.set_serialize_value` so that the function is applied predictably and is easier to manage.
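A minimal sketch of that approach; the field name `environment` and its value are hypothetical, and the Pre-function plugin's `config.log` phase array is used so the code runs in the log phase:

```sh
curl -i -X POST http://localhost:8001/plugins \
  --data "name=pre-function" \
  --data "config.log[1]=kong.log.set_serialize_value('environment', 'staging')"
```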
If you don’t want all logging plugins to use the same configuration, you need to manually disable the relevant fields in each plugin. For example, if you configure a field in File Log that you don’t want appearing in Kafka Log, set that field to `return nil` in the Kafka Log plugin:
```sh
curl -i -X POST http://localhost:8001/plugins/ \
  ...
  --data name=kafka-log \
  --data config.custom_fields_by_lua.my_file_log_field="return nil"
```
See the plugin execution order reference for more details on plugin ordering.
Limitations
Lua code runs in a restricted sandbox environment, whose behavior is governed by the `untrusted_lua` configuration properties.
Sandboxing consists of several limitations in the way the Lua code can be executed, for heightened security.
The following functions are not available because they can be used to abuse the system:
- `string.rep`: Can be used to allocate millions of bytes in one operation.
- `{set|get}metatable`: Can be used to modify the metatables of global objects (strings, numbers).
- `collectgarbage`: Can be abused to kill the performance of other workers.
- `_G`: Is the root node which has access to all functions. It is masked by a temporary table.
- `load{file|string}`: Is deemed unsafe because it can grant access to the global environment.
- `raw{get|set|equal}`: Potentially unsafe because sandboxing relies on some metatable manipulation.
- `string.dump`: Can display confidential server information (such as the implementation of functions).
- `math.randomseed`: Can affect the host system. Kong Gateway already seeds the random number generator properly.
- All `os.*` functions (except `os.clock`, `os.difftime`, and `os.time`). `os.execute` can significantly alter the host system.
- `io.*`: Provides access to the hard drive.
- `dofile`/`require`: Provides access to the hard drive.
The exclusion of `require` means that plugins must only use PDK functions (`kong.*`). The `ngx.*` abstraction is also available, but it is not guaranteed to be present in future versions of the plugin.
In addition to the above restrictions:
- All the provided modules (like `string` or `table`) are read-only and can’t be modified.
- Bytecode execution is disabled.
- The `kong.cache` points to a cache instance that is dedicated to the Serverless Functions plugins. It does not provide access to the global Kong Gateway cache. It only exposes the `get` method; explicit write operations like `set` or `invalidate` are not available.
Further, as code runs in the context of the log phase, only PDK methods that can run in said phase can be used.