High-availability, Scaling, and Robustness
High availability
The Kong Ingress Controller is designed to be easy to operate and highly available, meaning that when expected failures occur, the Controller should continue to function with as little service disruption as possible.
The Kong Ingress Controller is composed of two parts:
1. Kong, which handles the requests.
2. Controller, which configures Kong dynamically.
Kong itself can be deployed in a highly available manner by running multiple instances (or pods). Kong nodes are stateless, meaning a Kong pod can be terminated and restarted at any point in time.
The controller itself can be stateful or stateless, depending on whether a database is used.
If a database is not used, the Controller and Kong are deployed as co-located containers in the same pod, and each controller configures the Kong container that it runs with, as sketched below.
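As a rough illustration of this DB-less layout, the Deployment below runs the proxy and the controller as two containers in one pod. The image tags, ports, and environment values are placeholders for illustration only; in practice the Helm chart or the reference manifests generate the full, supported configuration.

```yaml
# Minimal sketch of a DB-less deployment: the controller and the Kong proxy
# run as co-located containers in the same pod. Images, ports, and values
# here are illustrative assumptions, not a complete reference manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-gateway
spec:
  replicas: 2                      # multiple stateless pods for availability
  selector:
    matchLabels:
      app: kong-gateway
  template:
    metadata:
      labels:
        app: kong-gateway
    spec:
      containers:
      - name: proxy                # Kong, which handles the requests
        image: kong:3.4
        env:
        - name: KONG_DATABASE
          value: "off"             # DB-less: configuration is held in memory
        ports:
        - containerPort: 8000      # proxy listener
      - name: ingress-controller   # configures the Kong container it runs with
        image: kong/kubernetes-ingress-controller:2.12
        env:
        - name: CONTROLLER_KONG_ADMIN_URL
          value: "https://localhost:8444"   # Kong admin API on the loopback interface
        - name: CONTROLLER_PUBLISH_SERVICE
          value: "kong/kong-proxy"          # assumed proxy Service name
```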
When a database is used, the Controllers can be deployed across multiple zones to provide redundancy. In that case, a leader election process elects one instance as the leader, and only the leader manipulates Kong’s configuration.
Leader election
Multiple Kong Ingress Controller instances elect a leader when connected to a database-backed cluster. This ensures that only a single controller pushes configuration to Kong’s database to avoid potential conflicts and race conditions.
When a leader controller shuts down, other instances will detect that there is no longer a leader, and one will promote itself to the leader.
For this reason, the controller needs permission to create a Lease resource. By default, this permission is granted at the Namespace level. The controller also needs permission to read and update this Lease; that permission can be restricted to the specific Lease used for leader election. The name of the Lease is derived from the value of the election-id CLI flag (default: 5b374a9e.konghq.com) and election-namespace (default: "") as: “<election-id>-<election-namespace>”.
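As a sketch, the namespaced RBAC rules involved look roughly like the following. The namespace and the Lease name are assumptions based on the defaults above; the Helm chart or reference manifests normally create the exact rules for you.

```yaml
# Rough sketch of the permissions the controller needs for leader election:
# create Leases, plus read/update on the specific Lease used for election.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kong-leader-election      # illustrative name
  namespace: kong                 # assumed controller namespace
rules:
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["create"]               # needed to create the Lease initially
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  resourceNames: ["5b374a9e.konghq.com"]   # assumes the default election-id; adjust to the derived Lease name
  verbs: ["get", "update"]        # read/update restricted to that one Lease
```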
Scaling
Kong is designed to be horizontally scalable, meaning as traffic increases, multiple instances of Kong can be deployed to handle the increase in load.
Configuration is either pushed into Kong directly by the Ingress Controller or loaded from the database. Kong containers can be considered stateless, since the configuration is either loaded from the database (and heavily cached in memory) or loaded directly into memory via a config file.
One can use a HorizontalPodAutoscaler (HPA) based on metrics like CPU utilization, bandwidth in use, or total requests per second to dynamically scale Kong Ingress Controller as the traffic profile changes.
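For example, a CPU-based autoscaler targeting the Kong Deployment could look roughly like the sketch below. The Deployment name, replica bounds, and the 70% CPU target are placeholders; scaling on request-rate or bandwidth metrics additionally requires a custom metrics adapter.

```yaml
# Minimal sketch of an HPA targeting the Deployment that runs Kong.
# Names and thresholds are illustrative assumptions.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kong-gateway
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kong-gateway            # assumed name of the Kong Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70    # scale out above 70% average CPU
```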