Looking for the plugin's configuration parameters? You can find them in the AI Prompt Guard configuration reference doc.
The AI Prompt Guard plugin lets you configure a series of PCRE-compatible regular expressions as allow or deny lists,
to guard against misuse of llm/v1/chat or llm/v1/completions requests.
You can use this plugin to allow or block specific prompts, words, phrases, or otherwise have more control over how an LLM service is used when called via Kong Gateway.
It does this by scanning all chat messages (where the role is user) for the configured expressions.
You can use a combination of allow and deny rules to preserve integrity and compliance when serving an LLM service using Kong Gateway.
- For llm/v1/chat type models: You can optionally configure the plugin to ignore existing chat history, in which case it only scans the trailing user message.
- For llm/v1/completions type models: There is only one prompt field, so the whole prompt is scanned on every request.
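As a sketch, the plugin might be enabled on a service with a declarative configuration fragment like the one below. The service name is hypothetical, and the field names (allow_patterns, deny_patterns, allow_all_conversation_history) are assumptions that should be checked against the configuration reference:

```yaml
plugins:
  - name: ai-prompt-guard
    service: my-llm-service          # hypothetical service name
    config:
      # Deny takes precedence over allow (see matching behavior below)
      allow_patterns:
        - ".*network troubleshooting.*"
      deny_patterns:
        - ".*password.*"
      # Assumed field: when false, only the trailing user message is scanned
      allow_all_conversation_history: false
```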
This plugin extends the functionality of the AI Proxy plugin, and requires AI Proxy to be configured first. Check out the AI Gateway quickstart to get an AI proxy up and running within minutes!
How it works
The plugin matches lists of regular expressions to requests through AI Proxy.
The matching behavior is as follows:
- If any deny expressions are set, and the request matches any regex pattern in the deny list, the caller receives a 400 response.
- If any allow expressions are set, but the request matches none of the allow expressions, the caller also receives a 400 response.
- If any allow expressions are set, and the request matches one of the allow expressions, the request passes through to the LLM.
- If both deny and allow expressions are set, the deny condition takes precedence over allow. Any request that matches an entry in the deny list will return a 400 response, even if it also matches an expression in the allow list. If the request does not match an expression in the deny list, then it must match an expression in the allow list to be passed through to the LLM.
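The precedence rules above can be sketched in a few lines of Python. This is an illustration of the documented behavior, not the plugin's actual implementation; the function name and signature are invented for the example:

```python
import re

def prompt_allowed(prompt: str, allow: list[str], deny: list[str]) -> bool:
    """Return True if the prompt may pass to the LLM, False for a 400 response."""
    # Deny takes precedence: any match in the deny list blocks the request.
    if any(re.search(pattern, prompt) for pattern in deny):
        return False
    # If an allow list is set, the prompt must match at least one entry.
    if allow and not any(re.search(pattern, prompt) for pattern in allow):
        return False
    # No deny match, and either no allow list or an allow match: pass through.
    return True
```

For example, with deny list [".*password.*"] and allow list [".*routing.*"], a prompt mentioning both "routing" and "password" is still blocked, because the deny match wins.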
Get started with the AI Prompt Guard plugin
- AI Gateway quickstart: Set up AI Proxy
- Configuration reference
- Basic configuration example
- Learn how to use the plugin