Performance fine-tuning
Reachable services
By default, when transparent proxying is used, every data plane proxy receives configuration for every other data plane proxy in the mesh. In large meshes, a data plane proxy usually connects to only a few services. By defining the list of those services, you can dramatically improve the performance of Kong Mesh.
The result is that:
- The control plane has to generate a much smaller XDS configuration (just a couple of Clusters/Listeners, etc.), saving CPU and memory
- A smaller config is sent over the wire, saving significant network bandwidth
- Envoy only has to keep a couple of Clusters/Listeners, which means far fewer statistics and lower memory usage.
Follow the transparent proxying docs to learn how to configure it.
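On Kubernetes, reachable services are typically declared per workload with the kuma.io/transparent-proxying-reachable-services annotation. The sketch below is illustrative only: the namespace and the service names (redis_kuma-demo_svc_6379, backend_kuma-demo_svc_3001) are hypothetical placeholders for the kuma.io/service values your workload actually calls.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend          # hypothetical workload
  namespace: kuma-demo    # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
      annotations:
        # Only generate and distribute configuration for these two services.
        kuma.io/transparent-proxying-reachable-services: "redis_kuma-demo_svc_6379,backend_kuma-demo_svc_3001"
    spec:
      containers:
        - name: frontend
          image: frontend:latest   # hypothetical image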
Config trimming by using MeshTrafficPermission
This feature only works with MeshTrafficPermission. If you’re using TrafficPermission, you need to migrate to MeshTrafficPermission; otherwise, enabling this feature could stop all traffic flow.
Due to a bug, ExternalServices without Zone Egress won’t work without Traffic Permissions. If you’re using External Services, you need to keep the associated TrafficPermissions or upgrade Kong Mesh to 2.6.x or newer.
Starting with release 2.5, the problem described in the reachable services section can also be mitigated by defining MeshTrafficPermissions and configuring a zone control plane with KUMA_EXPERIMENTAL_AUTO_REACHABLE_SERVICES=true.
Switching on the flag computes a graph of dependencies between the services and generates XDS configuration that enables communication only between services that are allowed to communicate with each other (their effective action is not deny).
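As a sketch, assuming you install the zone control plane with the Helm chart and that it exposes a controlPlane.envVars map (as the upstream Kuma chart does), the flag could be enabled through your values file:

controlPlane:
  envVars:
    # Compute reachable services automatically from MeshTrafficPermissions (experimental).
    KUMA_EXPERIMENTAL_AUTO_REACHABLE_SERVICES: "true"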
For example, if a service b can be called only by service a:
apiVersion: kuma.io/v1alpha1
kind: MeshTrafficPermission
metadata:
  namespace: kuma-system
  name: mtp-b
spec:
  targetRef:
    kind: MeshService
    name: b
  from:
    - targetRef:
        kind: MeshService
        name: a
      default:
        action: Allow
Then there is no reason to compute and distribute the configuration of service b to any other service in the mesh, since (even if they wanted to) they wouldn’t be able to communicate with it.
You can combine autoReachableServices with reachable services, but reachable services will take precedence.
The sections below highlight the most important aspects of this feature; if you want to dig deeper, take a look at the MADR.
Supported targetRef kinds
The following kinds affect the graph generation and performance:
- all levels of MeshService
- top-level MeshSubset and MeshServiceSubset with the k8s.kuma.io/namespace, k8s.kuma.io/service-name, and k8s.kuma.io/service-port tags
- from-level MeshSubset and MeshServiceSubset with all tags
If you define a MeshTrafficPermission with any other kind, it won’t affect the graph generation or performance.
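For illustration, a policy along the lines of the following sketch (a hypothetical top-level MeshSubset selected by a custom tag, which is not one of the tags listed above) would still be applied as a policy but would not be used when computing the dependency graph:

apiVersion: kuma.io/v1alpha1
kind: MeshTrafficPermission
metadata:
  namespace: kuma-system
  name: mtp-custom-tag   # hypothetical name
spec:
  targetRef:
    kind: MeshSubset
    tags:
      team: backend      # custom tag, not one of the k8s.kuma.io/* tags listed above
  from:
    - targetRef:
        kind: Mesh
      default:
        action: Allow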
Changes to the communication between services
Requests from services trying to communicate with services they don’t have access to will now fail with a connection closed error like this:
root@second-test-server:/# curl -v first-test-server:80
* Trying [IP]:80...
* Connected to first-test-server ([IP]) port 80 (#0)
> GET / HTTP/1.1
> Host: first-test-server
> User-Agent: curl/7.81.0
> Accept: */*
>
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
instead of getting a 503 error like this:
root@second-test-server:/# curl -v first-test-server:80
* Trying [IP]:80...
* Connected to first-test-server ([IP]) port 80 (#0)
> GET / HTTP/1.1
> Host: first-test-server
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 503 Service Unavailable
< content-length: 118
< content-type: text/plain
< date: Wed, 08 Nov 2023 14:15:24 GMT
< server: envoy
<
* Connection #0 to host first-test-server left intact
upstream connect error or disconnect/reset before headers. retried and the latest reset reason: connection termination
Migration
A recommended migration path is to start with a coarse-grained MeshTrafficPermission targeting a MeshSubset with the k8s.kuma.io/namespace tag, and then drill down to individual services if needed.
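A minimal sketch of such a coarse-grained starting point, assuming hypothetical namespaces orders (servers) and checkout (clients):

apiVersion: kuma.io/v1alpha1
kind: MeshTrafficPermission
metadata:
  namespace: kuma-system
  name: mtp-allow-checkout-to-orders   # hypothetical name
spec:
  targetRef:
    kind: MeshSubset
    tags:
      k8s.kuma.io/namespace: orders    # every service in the orders namespace
  from:
    - targetRef:
        kind: MeshSubset
        tags:
          k8s.kuma.io/namespace: checkout   # allow all clients from the checkout namespace
      default:
        action: Allow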
Postgres
If you choose Postgres as the configuration store for Kong Mesh on Universal, be aware of the following key settings that affect the performance of the Kong Mesh control plane.
- KUMA_STORE_POSTGRES_CONNECTION_TIMEOUT: connection timeout to the Postgres database (default: 5s)
- KUMA_STORE_POSTGRES_MAX_OPEN_CONNECTIONS: maximum number of open connections to the Postgres database (default: unlimited)
KUMA_STORE_POSTGRES_CONNECTION_TIMEOUT
The default value works well when both kuma-cp and the Postgres database are deployed in the same data center / cloud region. However, if you’re pursuing a more distributed topology, for example by hosting kuma-cp on premises and using Postgres as a service in the cloud, the default value might no longer be enough.
KUMA_STORE_POSTGRES_MAX_OPEN_CONNECTIONS
The more data planes join your meshes, the more connections to the Postgres database Kong Mesh might need in order to fetch configurations and update statuses.
As of version 1.4.1 the default value is 50.
However, if your Postgres database (for example, as a service in the cloud) only permits a small number of concurrent connections, you will have to adjust the Kong Mesh configuration accordingly.
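As a sketch, both settings can also be expressed in the kuma-cp configuration file; the key paths below are assumed from Kuma’s usual mapping between environment variables and configuration keys, and the values are illustrative, so verify them against the configuration reference for your version:

store:
  type: postgres
  postgres:
    connectionTimeout: 10     # seconds; corresponds to KUMA_STORE_POSTGRES_CONNECTION_TIMEOUT
    maxOpenConnections: 100   # corresponds to KUMA_STORE_POSTGRES_MAX_OPEN_CONNECTIONS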
Snapshot Generation
This is an advanced topic describing Kong Mesh implementation internals.
The main task of the control plane is to provide configuration for data planes. When a data plane connects to the control plane, the control plane starts a new goroutine.
This goroutine runs the reconciliation process at a given interval (1s by default). During this process, all data planes and policies are fetched for matching.
When matching is done, the Envoy config (including policies and available endpoints of services) for the given data plane is generated, and it is sent only if there is an actual change.
- KUMA_XDS_SERVER_DATAPLANE_CONFIGURATION_REFRESH_INTERVAL: interval for regenerating the configuration for data planes connected to the control plane (default: 1s)
This process can be CPU intensive with a high number of data planes, so you can control the refresh interval for a single data plane.
You can lower the refresh frequency (increase the interval), sacrificing the latency of new config propagation, to avoid overloading the control plane. For example,
changing it to 5 seconds means that when you apply a policy (like MeshTrafficPermission) or a data plane of a service comes up or goes down, the control plane will generate and send the new config within 5 seconds.
For systems with high traffic, keeping old endpoints for such a long time (5 seconds) may not be acceptable. To solve this, you can use passive or active health checks provided by Kong Mesh.
Additionally, to avoid overloading the underlying storage, there is a cache that shares fetch results between concurrent reconciliation processes for multiple data planes.
- KUMA_STORE_CACHE_EXPIRATION_TIME: expiration time for elements in the cache (default: 1s)
You can also change the expiration time, but it should not exceed KUMA_XDS_SERVER_DATAPLANE_CONFIGURATION_REFRESH_INTERVAL; otherwise, the control plane will waste time building Envoy config from the same data.
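A minimal sketch of how these two settings might look in the kuma-cp configuration file; the key paths are assumed from Kuma’s usual mapping between environment variables and configuration keys, and the values are illustrative:

xdsServer:
  dataplaneConfigurationRefreshInterval: 3s   # KUMA_XDS_SERVER_DATAPLANE_CONFIGURATION_REFRESH_INTERVAL
store:
  cache:
    expirationTime: 2s                        # KUMA_STORE_CACHE_EXPIRATION_TIME; keep it at or below the refresh interval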
Profiling
Kong Mesh’s control plane ships with pprof endpoints so you can profile and debug the performance of the kuma-cp process.
To enable the debugging endpoints, set the KUMA_DIAGNOSTICS_DEBUG_ENDPOINTS environment variable to true before starting kuma-cp, and then use one of the available methods to retrieve the profiling information.
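On Kubernetes, one way to toggle this is through the same controlPlane.envVars Helm value sketched earlier (an assumption about your installation method; on Universal you would export the variable in the kuma-cp process environment instead):

controlPlane:
  envVars:
    # Enable the pprof debugging endpoints (remember to turn them off afterwards, see below).
    KUMA_DIAGNOSTICS_DEBUG_ENDPOINTS: "true"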
Then, you can analyze the retrieved profiling data using an application like Speedscope.
After a successful debugging session, remember to turn off the debugging endpoints, since anybody could execute heap dumps on them, potentially exposing sensitive data.
Kubernetes client
The Kubernetes client uses client-level throttling to avoid overwhelming the kube-api server. In larger deployments (more than 2000 services in a single Kubernetes cluster), the number of resource updates can hit this throttling limit. In most cases it’s safe to increase this limit, as kube-api has its own throttling mechanism. To change the client throttling configuration, you need to update the config:
runtime:
  kubernetes:
    clientConfig:
      qps: ...      # Qps defines the maximum number of requests the Kubernetes client is allowed to make per second.
      burstQps: ... # BurstQps defines the maximum burst of requests the Kubernetes client is allowed to make per second.
Kubernetes controller manager
Kong Mesh modifies some Kubernetes resources; Kubernetes calls this modification process reconciliation. Every resource has its own work queue, and the control plane adds reconciliation tasks to that queue. In larger deployments (more than 2000 services in a single Kubernetes cluster), the work queue for Pod reconciliation can grow and slow down Pod updates. In this situation you can change the number of concurrent Pod reconciliation tasks by changing the configuration:
runtime:
  kubernetes:
    controllersConcurrency:
      podController: ... # PodController defines the maximum number of concurrent reconciliations of Pod resources.
Envoy
Envoy concurrency tuning
Envoy allows configuring the number of worker threads used for processing requests. Sometimes it might be useful to change the default number of worker threads, for example on a high-CPU machine with low traffic. Depending on the type of deployment, there are different mechanisms in kuma-dp to change Envoy’s concurrency level.