Customizing load-balancing behavior with KongUpstreamPolicy
In this guide, we will learn how to use KongUpstreamPolicy resource to control proxy load-balancing behavior.
Before you begin, ensure that you have installed Kong Ingress Controller with Gateway API support in your Kubernetes cluster and are able to connect to Kong.
Prerequisites
Install the Gateway APIs
- Install the Gateway API CRDs before installing Kong Ingress Controller:
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/standard-install.yaml
- Create a Gateway and GatewayClass instance to use:
echo "
---
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: kong
  annotations:
    konghq.com/gatewayclass-unmanaged: 'true'
spec:
  controllerName: konghq.com/kic-gateway-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: kong
spec:
  gatewayClassName: kong
  listeners:
  - name: proxy
    port: 80
    protocol: HTTP
" | kubectl apply -f -
The results should look like this:
gatewayclass.gateway.networking.k8s.io/kong created
gateway.gateway.networking.k8s.io/kong created
Install Kong
You can install Kong in your Kubernetes cluster using Helm.
- Add the Kong Helm charts:
helm repo add kong https://charts.konghq.com
helm repo update
- Install Kong Ingress Controller and Kong Gateway with Helm:
helm install kong kong/ingress -n kong --create-namespace
Test connectivity to Kong
Kubernetes exposes the proxy through a Kubernetes service. Run the following commands to store the load balancer IP address in a variable named PROXY_IP:
- Populate $PROXY_IP for future commands:
export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $PROXY_IP
- Ensure that you can call the proxy IP:
curl -i $PROXY_IP
The results should look like this:
HTTP/1.1 404 Not Found
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Content-Length: 48
X-Kong-Response-Latency: 0
Server: kong/3.0.0

{"message":"no Route matched with those values"}
Deploy an echo service
To proxy requests, you need an upstream application to send a request to. Deploying this echo server provides a simple application that returns information about the Pod it’s running in:
kubectl apply -f https://docs.jp.konghq.com/assets/kubernetes-ingress-controller/examples/echo-service.yaml
The results should look like this:
service/echo created
deployment.apps/echo created
Deploy additional echo replicas
To demonstrate Kong’s load-balancing functionality, we need multiple echo Pods. Scale out the echo deployment.
kubectl scale --replicas 2 deployment echo
Add routing configuration
Create routing configuration to proxy /echo requests to the echo server.
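A minimal sketch of such a route is shown below, assuming the Gateway created earlier is named kong, that the echo Service serves HTTP on port 1027, and that you want Kong to strip the /echo prefix before proxying (adjust the names and port to match your cluster):

echo "
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo
  annotations:
    konghq.com/strip-path: 'true'
spec:
  parentRefs:
  - name: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /echo
    backendRefs:
    - name: echo
      kind: Service
      port: 1027
" | kubectl apply -f -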
The results should look like this:
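Assuming the HTTPRoute sketched above, the output should be similar to:

httproute.gateway.networking.k8s.io/echo created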
Test the routing rule:
curl -i $PROXY_IP/echo
The results should look like this:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Length: 140
Connection: keep-alive
Date: Fri, 21 Apr 2023 12:24:55 GMT
X-Kong-Upstream-Latency: 0
X-Kong-Proxy-Latency: 1
Via: kong/3.2.2
Welcome, you are connected to node docker-desktop.
Running on Pod echo-7f87468b8c-tzzv6.
In namespace default.
With IP address 10.1.0.237.
...
If everything is deployed correctly, you should see the above response. This verifies that Kong Gateway can correctly route traffic to an application running inside Kubernetes.
Use KongUpstreamPolicy with a Service resource
By default, Kong will round-robin requests between upstream replicas. If you run curl -s $PROXY_IP/echo | grep "Pod" repeatedly, you should see the reported Pod name alternate between two values.
You can configure the Kong upstream associated with the Service to use a different load balancing strategy, such as consistently sending requests to the same upstream based on a header value (please see the KongUpstreamPolicy reference for the full list of supported algorithms and their configuration options).
To modify these behaviors, let’s first create a KongUpstreamPolicy resource defining the new behavior.
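A sketch of such a policy, assuming we hash consistently on an x-lb header and fall back to the client IP when the header is absent (the resource name sample-customization is arbitrary):

echo "
apiVersion: configuration.konghq.com/v1beta1
kind: KongUpstreamPolicy
metadata:
  name: sample-customization
spec:
  algorithm: consistent-hashing
  hashOn:
    header: x-lb
  hashOnFallback:
    input: ip
" | kubectl apply -f -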
Now, let’s associate this KongUpstreamPolicy resource with our Service resource using the konghq.com/upstream-policy annotation.
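One way to do this, assuming the policy name from the sketch above, is to patch the echo Service:

kubectl patch service echo -p '{"metadata":{"annotations":{"konghq.com/upstream-policy":"sample-customization"}}}'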
With consistent hashing and client IP fallback, sending repeated requests without any x-lb header now sends them to the same Pod.
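For example, a quick check (a sketch; the Pod name will differ in your cluster):

for n in {1..5}; do curl -s $PROXY_IP/echo | grep "Pod"; done

Every line of output should report the same Pod name.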
If you add the header, Kong hashes its value and sends requests with the same value to the same replica.
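A sketch of such a request, using an arbitrary header value:

curl -s $PROXY_IP/echo -H "x-lb: foo" | grep "Pod"

Repeating the request with the same x-lb value should return the same Pod name; a different value may hash to a different replica.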
Increasing the number of replicas redistributes some subsequent requests onto the new replica.
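For example (a sketch; exactly which requests move depends on the hash ring):

kubectl scale --replicas 3 deployment echo
for n in {1..10}; do curl -s $PROXY_IP/echo -H "x-lb: request-$n" | grep "Pod"; done

Some header values should now hash to the new Pod, while others remain on the original replicas.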
Kong’s load balancer doesn’t directly distribute requests to each of the Service’s Endpoints. It first distributes them evenly across a number of equal-size buckets. These buckets are then distributed across the available Endpoints according to their weight. For Ingresses, however, there is only one Service, and the controller assigns each Endpoint (represented by a Kong upstream target) equal weight. In this case, requests are evenly hashed across all Endpoints.
Gateway API HTTPRoute rules support distributing traffic across multiple Services. The rule can assign weights to the Services to change the proportion of requests an individual Service receives. In Kong’s implementation, all Endpoints of a Service have the same weight. Kong calculates a per-Endpoint upstream target weight such that the aggregate target weight of the Endpoints is equal to the proportion indicated by the HTTPRoute weight.
For example, say you have two Services with the following configuration:
- One Service has four Endpoints
- The other Service has two Endpoints
- Each Service has weight 50 in the HTTPRoute
The targets created for the two-Endpoint Service have double the weight of the targets created for the four-Endpoint Service (two weight 16 targets and four weight 8 targets). Scaling the four-Endpoint Service to eight Endpoints would halve the weight of its targets (two weight 16 targets and eight weight 4 targets).
KongUpstreamPolicy can also configure upstream health checking behavior. See the KongUpstreamPolicy reference for the health check fields.
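As a rough sketch, health checks live under spec.healthchecks; the field names below mirror Kong Gateway’s upstream health check options and the /health path is only an example, so consult the KongUpstreamPolicy reference for the exact schema before using this:

apiVersion: configuration.konghq.com/v1beta1
kind: KongUpstreamPolicy
metadata:
  name: sample-healthchecks
spec:
  healthchecks:
    active:
      type: http
      httpPath: /health
      healthy:
        interval: 5
        successes: 3
      unhealthy:
        interval: 5
        httpFailures: 3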