Set up AI Proxy with LangChain

This guide walks you through setting up the AI Proxy plugin with LangChain.

Kong AI Gateway delivers a suite of AI-specific plugins on top of the API Gateway platform, enabling you to:

  • Route a single consumer interface to multiple models, across many providers
  • Load balance similar models based on cost, latency, and other metrics/algorithms
  • Deliver a rich analytics and auditing suite for your deployments
  • Enable semantic features to protect your users, your models, and your costs
  • Provide no-code AI enhancements to your existing REST APIs
  • Leverage Kong’s existing ecosystem of authentication, monitoring, and traffic-control plugins

Get started

Kong AI Gateway exchanges inference requests in the OpenAI format, so you can connect your existing LangChain OpenAI adapter-based integrations directly through Kong with no code changes.

You can target hundreds of models across the supported providers, all from the same client-side codebase.

Create LLM configuration

Kong AI Gateway uses the same familiar service/route/plugin system as the API Gateway product, with a declarative setup that launches a complete gateway system configured from a single YAML file.

Create your gateway YAML file using the AI Proxy plugin. This example configures:

  • The OpenAI backend and GPT-4o model (shown in the file below)
  • The Gemini backend and a Google One-hosted Gemini model (a sketch of this route follows the file)
_format_version: "3.0"

# A service can hold plugins and features for "all models" you configure
services:
  - name: ai
    url: https://localhost:32000  # this can be any hostname

    # A route can denote a single model, or can support multiple based on the request parameters
    routes:
      - name: openai-gpt4o
        paths:
          - "/gpt4o"
        plugins:
          - name: ai-proxy  # ai-proxy is the core AI Gateway enabling feature
            config:
              route_type: llm/v1/chat
              model:
                provider: openai
                name: gpt-4o
              auth:
                header_name: Authorization
                header_value: "Bearer <OPENAI_KEY_HERE>"  # replace with your OpenAI key

Save this file as kong.yaml.
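
The file above covers only the OpenAI route. Below is a minimal sketch of a second route, on the same ai service, for the Gemini backend; the model name and the query-parameter auth fields are assumptions based on the ai-proxy plugin's Gemini support, so verify them against the plugin reference before relying on them:

      - name: google-gemini
        paths:
          - "/gemini"
        plugins:
          - name: ai-proxy
            config:
              route_type: llm/v1/chat
              model:
                provider: gemini
                name: gemini-1.5-flash  # assumed model name; any Gemini chat model works here
              auth:
                param_name: key  # the public Gemini API expects the key as a query parameter
                param_value: "<GEMINI_KEY_HERE>"
                param_location: query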

Launch the Gateway

Launch the Kong open-source gateway in DB-less mode, loading this configuration YAML, with one command:

docker run -it --rm --name kong-ai -p 8000:8000 \
    -v "$(pwd)/kong.yaml:/etc/kong/kong.yaml" \
    -e "KONG_DECLARATIVE_CONFIG=/etc/kong/kong.yaml" \
    -e "KONG_DATABASE=off" \
    kong:3.8

Validate

Check that you are reaching GPT-4o on OpenAI correctly:

curl -H 'Content-Type: application/json' -d '{"messages":[{"role":"user","content":"What are you?"}]}' http://127.0.0.1:8000/gpt4o

Response:

{
  ...
  ...
        "content": "I am an AI language model developed by OpenAI, designed to assist with generating text-based responses and providing information on a wide range of topics. How can I assist you today?",
  ...
  ...
}

Execute Your LangChain Code

Now you can configure your LangChain client code to point at Kong; you should see identical results.

First, load the LangChain SDK into your Python dependencies:

# WSL2, Linux, or native macOS Python:
pip3 install -U langchain-openai

# or, on macOS with Python installed via Homebrew (use a virtual environment):
python3 -m venv .venv
source .venv/bin/activate
pip install -U langchain-openai

Then create an app.py script:

from langchain_openai import ChatOpenAI

kong_url = "http://127.0.0.1:8000"
kong_route = "gpt4o"

llm = ChatOpenAI(
    base_url=f'{kong_url}/{kong_route}',  # override the base URL so requests go to Kong instead of api.openai.com
    model="gpt-4o",
    api_key="NONE"  # no gateway-layer security has been added yet, so any placeholder works
)

response = llm.invoke("What are you?")
print(f"$ ChainAnswer:> {response.content}")

Run the script:

python3 ./app.py
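
Because Kong proxies the standard OpenAI-compatible chat API, other client features work through the same route unchanged. For example, streaming responses token by token; this sketch reuses the route above:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="http://127.0.0.1:8000/gpt4o",
    model="gpt-4o",
    api_key="NONE",  # still no gateway-layer security at this point
)

# stream the completion chunk by chunk through the Kong route
for chunk in llm.stream("Name three API gateway features."):
    print(chunk.content, end="", flush=True)
print()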

Custom tool usage

Kong also supports custom tools, defined via any supported OpenAI-compatible SDK, including LangChain.

With the same kong.yaml configuration, you can execute a simple custom tool definition:

from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

kong_url = "http://127.0.0.1:8000"
kong_route = "gpt4o"

@tool
def multiply(first_int: int, second_int: int) -> int:
    """Multiply two integers together."""
    return first_int * second_int

llm = ChatOpenAI(
    base_url=f'{kong_url}/{kong_route}',
    model="gpt-4o",
    api_key="department-1-api-key"  # placeholder for now; becomes a real credential once key-auth is enabled below
)

llm_with_tools = llm.bind_tools([multiply])

chain = llm_with_tools | (lambda x: x.tool_calls[0]["args"]) | multiply
response = chain.invoke("What's four times 23?")
print(f"$ ToolsAnswer:> {response}")

Prepare the Gateway for production

Secure your AI model

So far, you have exposed your GPT-4o subscription on localhost with no authentication.

Now add a Kong-level API key to the kong.yaml configuration file. This secures your published AI route and allows you to track usage across multiple users, departments, paying subscribers, or any other entity:

_format_version: "3.0"

services:
  - name: ai
    url: https://localhost:32000

    routes:
      - name: openai-gpt4o
        paths:
          - "/gpt4o"
        plugins:
          - name: ai-proxy
            config:
              route_type: llm/v1/chat
              model:
                provider: openai
                name: gpt-4o
              auth:
                header_name: Authorization
                header_value: "Bearer <OPENAI_KEY_HERE>"  # replace with your OpenAI key again

          # Now we add a security plugin at the "individual model" scope
          - name: key-auth
            config:
              key_names:
                - Authorization

# and finally, a consumer with its own API key
consumers:
  - username: department-1
    keyauth_credentials:
      # the stored key keeps the "Bearer " prefix because the OpenAI SDK sends
      # "Authorization: Bearer <api_key>" and key-auth matches the full header value
      - key: "Bearer department-1-api-key"

Adjust your Python code accordingly:

...
...
llm = ChatOpenAI(
    base_url=f'{kong_url}/{kong_route}',
    model="gpt-4o",
    api_key="department-1-api-key"  # this time, set the API key for the consumer created above
)
...
...

Observability

There are two mechanisms for observability in Kong AI Gateway, depending on your deployment architecture:

  • Self-hosted and Kong open-source users can bring their favourite JSON-log dashboard software.
  • Kong Konnect users can use Konnect Advanced Analytics to automatically visualize every aspect of the AI Gateway operation.

Self-hosting AI Gateway observability

You can use one (or more) of Kong’s many logging protocol plugins, sending your AI Gateway metrics and logs (in JSON format) to your chosen dashboarding software.

You can choose to log metrics, input/output payloads, or both.
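
For example, Kong's HTTP Log plugin can ship each request's log entry, including the AI metrics, to an HTTP collector such as Logstash. A minimal sketch at the service level of kong.yaml, assuming a collector listening at http://logstash:8080:

services:
  - name: ai
    url: https://localhost:32000
    plugins:
      - name: http-log
        config:
          http_endpoint: http://logstash:8080  # assumed collector address
          method: POST
          content_type: application/json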

Sample ELK stack

Use the sample Elasticsearch/Logstash/Kibana stack on GitHub to see the full range of observability tools available when running LangChain applications via Kong AI Gateway.

Boot it up in three steps:

  1. Clone the repository:

     git clone https://github.com/KongHQ-CX/kong-ai-gateway-observability && cd kong-ai-gateway-observability/
    
  2. Export your OpenAI API auth header (with API key) into the current shell environment:

     export OPENAI_AUTH_HEADER="Bearer sk-proj-......"
    
  3. Start the stack:

     docker compose up
    

Now you can run the same LangChain code as in the previous steps and visualize exactly what's happening in Kibana at the following URL:

http://localhost:5601/app/dashboards#/view/aa8e4cb0-9566-11ef-beb2-c361d8db17a8

Example reports

You can generate analytics over every AI request executed by LangChain/Kong:

[Image: Kong API stats example dashboard]

And, if payload logging is enabled, every request and response, down to who executed what, and when:

[Image: Kong API logs example dashboard]

This uses the HTTP Log plugin to send all AI statistics and payloads to Logstash.

Prompt tuning, audit, and cost control features

Now that you have your LangChain codebase calling one or many LLMs via Kong AI Gateway, you can snap in as many features as you require from Kong's growing array of AI plugins.
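
For example, here is a sketch of layering the AI Prompt Decorator plugin onto the same route, prepending a system prompt to every request; the fields follow the plugin's documented configuration, but verify them against the plugin reference for your version:

        plugins:
          - name: ai-proxy
            # ... unchanged from above ...
          - name: ai-prompt-decorator
            config:
              prompts:
                prepend:
                  - role: system
                    content: "You are a helpful assistant. Keep answers brief."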
