Changelog
Kong Gateway 3.8.x
- Added the `allow_override` option to allow overriding the upstream model auth parameter or header from the caller's request (a configuration sketch follows this list). #13158
- Replaced the previous deep-copy implementation with `cycle_aware_deep_copy` for the `request_table` object. #13582
- The Mistral provider can now use mistral.ai-managed services by omitting the `upstream_url`. #13481
- Added the new response header `X-Kong-LLM-Model`, which displays the name of the language model used in the AI Proxy plugin (a usage sketch follows this list). #13472
- Latency data is now pushed to logs and metrics. #13428
- Kong AI Gateway now supports all AWS Bedrock Converse API models. #12948
- Kong AI Gateway now supports the Google Gemini chat (`generateContent`) interface. #12948
- Fixed various issues #13000:
  - Fixed an issue where certain Azure models would return partial tokens/words when in response-streaming mode.
  - Fixed an issue where the Cohere and Anthropic providers didn't read the `model` parameter properly from the caller's request body.
  - Fixed an issue where using OpenAI Function inference requests would log a request error, and then hang until timeout.
  - Fixed an issue where AI Proxy would still allow callers to specify their own model, ignoring the plugin-configured model name.
  - Fixed an issue where AI Proxy would not give precedence to the plugin's configured model tuning options over those in the user's LLM request.
  - Fixed an issue where setting the OpenAI SDK `model` parameter to `null` caused analytics to not be written to the logging plugin(s).
- Fixed an issue where the response was gzipped even if the client didn't accept gzip encoding. #13155
- Fixed an issue where the object constructor would set data on the class instead of the instance. #13028
- Added a configuration validation to prevent `log_statistics` from being enabled on providers that don't support statistics. Accordingly, the default of `log_statistics` has changed from `true` to `false`, and a database migration has been added to disable `log_statistics` if it has already been enabled on unsupported providers. #12860
- Fixed an issue where the plugin couldn't be applied per consumer or per service. #13209
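A minimal sketch of the `allow_override` option noted above, assuming an AI Proxy plugin is already attached to a route, the Admin API is reachable at `http://localhost:8001`, and the proxy at `http://localhost:8000`; the plugin ID, route path, and keys are placeholders:

```python
import requests

ADMIN_API = "http://localhost:8001"
PLUGIN_ID = "REPLACE-WITH-AI-PROXY-PLUGIN-ID"  # placeholder plugin ID

# Turn on allow_override so callers may supply their own upstream credential.
# (config.auth.allow_override is the option added in 3.8.x.)
resp = requests.patch(
    f"{ADMIN_API}/plugins/{PLUGIN_ID}",
    json={"config": {"auth": {"allow_override": True}}},
)
resp.raise_for_status()

# A caller can now override the plugin-configured auth header per request:
chat = requests.post(
    "http://localhost:8000/my-llm-route",  # hypothetical route path
    headers={"Authorization": "Bearer CALLER-SUPPLIED-KEY"},
    json={"messages": [{"role": "user", "content": "Hello"}]},
)
print(chat.status_code)
```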
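Similarly, a short sketch of reading the new `X-Kong-LLM-Model` response header; the route path and request body are illustrative:

```python
import requests

# Call a route that has AI Proxy enabled (path and body shape are illustrative).
resp = requests.post(
    "http://localhost:8000/my-llm-route",
    json={"messages": [{"role": "user", "content": "Hello"}]},
)

# 3.8.x adds a header naming the language model that served the request.
print(resp.headers.get("X-Kong-LLM-Model"))  # exact value format may vary by provider
```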
Kong Gateway 3.7.x
- To support Anthropic's new Messages API, the upstream path of the Anthropic `llm/v1/chat` route type has changed from `/v1/complete` to `/v1/messages`.
- Added support for streaming event-by-event responses back to the client on supported providers. #12792
- AI Proxy now reads most prompt tuning parameters from the client, while the plugin config parameters under `model_options` are now just defaults. This fixes support for using the respective provider's native SDK. #12903
- You can now use an existing OpenAI-compatible SDK (for example, the Python OpenAI library) to call different models, providers, and configurations with Kong AI Gateway. AI Proxy now has a `preserve` option for `route_type`, where requests and responses are passed directly to the upstream LLM. This enables compatibility with any models and SDKs that may be used when calling the AI services (an SDK sketch follows this list). #12903
- Enterprise feature: You can now use Azure's native authentication mechanism to secure your cloud-hosted models (a configuration sketch follows this list).
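A minimal sketch of the OpenAI-SDK flow described above, assuming Kong's proxy listens on `http://localhost:8000` and an AI Proxy plugin (for example, with `route_type: preserve`) is attached to a route at `/my-llm-route`; the path, model name, and key handling are placeholders:

```python
from openai import OpenAI

# Point the standard OpenAI SDK at the Kong route instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:8000/my-llm-route",  # hypothetical Kong route
    api_key="caller-or-dummy-key",  # Kong can inject the real upstream credential
)

chat = client.chat.completions.create(
    model="gpt-4o",  # with `preserve`, the request passes through unchanged
    messages=[{"role": "user", "content": "Say hello from behind Kong."}],
)
print(chat.choices[0].message.content)
```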
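For the Azure entry, a hedged sketch of what the plugin configuration might look like; the `azure_use_managed_identity` field name under the plugin's `auth` block is an assumption, so verify it against your version's schema:

```python
import requests

# Hypothetical: enable Azure managed-identity auth on an existing ai-proxy plugin.
# Field names under config.auth are assumptions; check the plugin schema.
resp = requests.patch(
    "http://localhost:8001/plugins/REPLACE-WITH-AI-PROXY-PLUGIN-ID",
    json={
        "config": {
            "model": {"provider": "azure"},
            "auth": {"azure_use_managed_identity": True},  # assumed field name
        }
    },
)
resp.raise_for_status()
```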
Kong Gateway 3.6.x
- Introduced the AI Proxy plugin, which can mediate request and response formats, as well as authentication, between users and a variety of Large Language Model (LLM) AI services.
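As a rough illustration of enabling the plugin via the Admin API (the route name, model, and header value are placeholders; field names follow the AI Proxy schema as of these releases, so check your version's reference):

```python
import requests

# Attach AI Proxy to an existing route (route name is a placeholder).
resp = requests.post(
    "http://localhost:8001/routes/my-llm-route/plugins",
    json={
        "name": "ai-proxy",
        "config": {
            "route_type": "llm/v1/chat",
            "auth": {
                "header_name": "Authorization",
                "header_value": "Bearer REPLACE-WITH-PROVIDER-KEY",
            },
            "model": {"provider": "openai", "name": "gpt-4"},
        },
    },
)
resp.raise_for_status()
print(resp.json()["id"])  # ID of the newly created plugin instance
```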