Traditional Deployment

A Kong cluster allows you to scale the system horizontally by adding more machines to handle more incoming requests. All nodes share the same configuration because they point to the same database: Kong nodes pointing to the same datastore are part of the same Kong cluster.

You need a load balancer in front of your Kong cluster to distribute traffic across your available nodes.

 
[Figure: three Kong Gateway instances, each connected to the same database]

Figure 1: In a traditional deployment, all Kong Gateway nodes connect to the database. Each node manages its own configuration.

What a Kong cluster does and doesn’t do

Having a Kong cluster does not mean that your clients' traffic will be load-balanced across your Kong nodes out of the box. You still need a load balancer in front of your Kong nodes to distribute your traffic. Instead, a Kong cluster means that those nodes will share the same configuration.

For performance reasons, Kong avoids database connections when proxying requests, and caches the contents of your database in memory. The cached entities include Services, Routes, Consumers, Plugins, Credentials, and so on. Since those values are in memory, any change made via the Admin API of one of the nodes needs to be propagated to the other nodes.

This document describes how those cached entities are invalidated and how to configure your Kong nodes for your use case, wherever it falls on the trade-off between performance and consistency.

Single node Kong clusters

A single Kong node connected to a supported database creates a Kong cluster of one node. Any changes applied via the Admin API of this node will instantly take effect. Example:

Consider a single Kong node A. If we delete a previously registered Service:

curl -X DELETE http://127.0.0.1:8001/services/test-service

Then any subsequent request to A would instantly return 404 Not Found, as the node purged it from its local cache:

curl -i http://127.0.0.1:8000/test-service

Multiple nodes Kong clusters

In a cluster of multiple Kong nodes, other nodes connected to the same database would not instantly be notified that the Service was deleted by node A. While the Service is not in the database anymore (it was deleted by node A), it is still in node B’s memory.

All nodes perform a periodic background job to synchronize with changes that may have been triggered by other nodes. The frequency of this job can be configured via:

  • db_update_frequency (default: 5 seconds)

Every db_update_frequency seconds, all running Kong nodes will poll the database for any update, and will purge the relevant entities from their cache if necessary.

If we delete a Service from node A, this change will not be effective on node B until node B's next database poll, which will occur up to db_update_frequency seconds later (though it could happen sooner).

This makes Kong clusters eventually consistent.

Use read-only replicas when deploying Kong clusters with PostgreSQL

When using Postgres as the backend storage, you can optionally enable Kong to serve read queries from a separate database instance.

Enabling the read-only connection support in Kong greatly reduces the load on the main database instance since read-only queries are no longer sent to it.

To learn more about how to configure this feature, refer to the Datastore section of the Configuration reference.
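As a rough sketch of what this can look like, assuming a hypothetical read replica at db-replica.internal (confirm the exact pg_ro_* property names in the Datastore section of the Configuration reference), the read-only connection is enabled by pointing the pg_ro_* settings at the replica, for example through environment variables:

# Primary (read-write) PostgreSQL connection
export KONG_PG_HOST=db-primary.internal
export KONG_PG_USER=kong
export KONG_PG_PASSWORD=kong

# Optional read-only replica; Kong sends read queries here instead of the primary
export KONG_PG_RO_HOST=db-replica.internal
export KONG_PG_RO_USER=kong_ro
export KONG_PG_RO_PASSWORD=kong_ro

If the pg_ro_* settings are left unset, Kong keeps sending all queries to the primary connection.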

What is being cached?

All of the core entities, such as Services, Routes, Plugins, Consumers, and Credentials, are cached in memory by Kong and rely on invalidation through the polling mechanism to be updated.

Additionally, Kong also caches database misses. This means that if you configure a Service with no plugin, Kong will cache this information. Example:

On node A, we add a Service and a Route:

# node A
curl -X POST http://127.0.0.1:8001/services \
  --data "name=example-service" \
  --data "url=http://example.com"

curl -X POST http://127.0.0.1:8001/services/example-service/routes \
  --data "paths[]=/example"

(Note that we used /services/example-service/routes as a shortcut: we could have used the /routes endpoint instead, but then we would need to pass service_id as an argument, with the UUID of the new Service.)

A request to the Proxy port of both node A and B will cache this Service, and the fact that no plugin is configured on it:

# node A
curl http://127.0.0.1:8000/example

Response:

HTTP 200 OK
...

# node B
curl http://127.0.0.2:8000/example

Response:

HTTP 200 OK
...

Now, say we add a plugin to this Service via node A’s Admin API:

# node A
curl -X POST http://127.0.0.1:8001/services/example-service/plugins \
  --data "name=example-plugin"

Because this request was issued via node A's Admin API, node A will locally invalidate its cache, and on subsequent requests it will detect that this Service has a plugin configured.

However, node B hasn't run a database poll yet, and still caches that this Service has no plugin to run. This remains the case until node B runs its database polling job.

Conclusion: All CRUD operations trigger cache invalidations. Creation (POST, PUT) will invalidate cached database misses, and update/deletion (PATCH, DELETE) will invalidate cached database hits.

How to configure database caching?

You can configure three properties in the Kong configuration file, the most important one being db_update_frequency. Together, they determine where your Kong nodes stand on the performance versus consistency trade-off.

Kong comes with default values tuned for consistency so that you can experiment with its clustering capabilities while avoiding surprises. As you prepare a production setup, you should consider tuning those values to ensure that your performance constraints are respected.
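As an illustration only (the values below are the defaults, not recommendations; tune them for your environment), the three properties can be set in kong.conf or through the equivalent environment variables:

# Poll the database for invalidation events every 5 seconds
export KONG_DB_UPDATE_FREQUENCY=5

# Do not wait for changes to propagate across database nodes before purging the cache
export KONG_DB_UPDATE_PROPAGATION=0

# Never expire cached entities by TTL; rely on invalidation events only
export KONG_DB_CACHE_TTL=0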

db_update_frequency (default: 5s)

This value determines the frequency at which your Kong nodes poll the database for invalidation events. A lower value means that the polling job will run more frequently, but also that your Kong nodes will pick up the changes you apply sooner. A higher value means that your Kong nodes will spend less time running the polling jobs, and will focus on proxying your traffic.

Note: Changes propagate through the cluster in up to db_update_frequency seconds.

db_update_propagation (default: 0s)

Setting this parameter ensures that the change has time to propagate across your database nodes. When set, Kong nodes receiving invalidation events from their polling jobs will delay the purging of their cache for db_update_propagation seconds.

If a Kong node connected to an eventually consistent database was not delaying the event handling, it could purge its cache, only to cache the non-updated value again (because the change hasn’t propagated through the database yet)!

You should set this value to an estimate of the amount of time your database cluster takes to propagate changes.

Note: When this value is set, changes propagate through the cluster in up to db_update_frequency + db_update_propagation seconds.

db_cache_ttl (default: 0s)

The time (in seconds) for which Kong will cache database entities (both hits and misses). This time-to-live value acts as a safeguard in case a Kong node misses an invalidation event, preventing it from running on stale data for too long. When the TTL is reached, the value is purged from the cache, and the next database result is cached again.

By default, no data is invalidated based on this TTL (the default value is 0). This is usually fine: Kong nodes rely on invalidation events, which are handled at the datastore level. If you are concerned that a Kong node might miss an invalidation event for any reason, you should set a TTL. Otherwise, the node might run with a stale value in its cache for an undefined amount of time, until the cache is manually purged or the node is restarted.

Interacting with the cache via the Admin API

If, for some reason, you want to investigate the cached values or manually invalidate a value cached by Kong (a cached hit or miss), you can do so via the Admin API /cache endpoint.

Note: Retrieving the cache_key for each entity being cached by Kong is currently an undocumented process. Future versions of the Admin API will make this process easier.

Inspect a cached value

Endpoint

/cache/{cache_key}

Response

If a value with that key is cached:

HTTP 200 OK
...
{
    ...
}

Else:

HTTP 404 Not Found
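For example, assuming the Admin API listens on 127.0.0.1:8001 and using my-cache-key as a hypothetical placeholder for a real cache key:

curl -i http://127.0.0.1:8001/cache/my-cache-key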

Purge a cached value

Endpoint

/cache/{cache_key}

Response

HTTP 204 No Content
...
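For example, a purge request might look like the following, assuming the endpoint accepts the DELETE method and again using a hypothetical placeholder for the cache key:

curl -i -X DELETE http://127.0.0.1:8001/cache/my-cache-key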

Purge a node’s cache

Endpoint

/cache

Response

HTTP 204 No Content

Note: Be wary of using this endpoint on a node running in production with warm cache. If the node is receiving a lot of traffic, purging its cache at the same time will trigger many requests to your database, and could cause a dog-pile effect.
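A sketch of a full node-cache purge against the local Admin API, assuming the endpoint accepts the DELETE method; keep the warning above in mind before running this against a busy production node:

curl -i -X DELETE http://127.0.0.1:8001/cache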
