Looking for the plugin's configuration parameters? You can find them in the AI Semantic Prompt Guard configuration reference doc.
The AI Semantic Prompt Guard plugin enhances the AI Prompt Guard plugin by allowing you to permit or block prompts based on lists of semantically similar prompts, helping to prevent misuse of llm/v1/chat or llm/v1/completions requests.
You can use a combination of allow and deny rules to maintain integrity and compliance when serving an LLM service using Kong Gateway.
How it works
The plugin matches the configured lists of prompts against requests routed through AI Proxy.
The matching behavior is as follows:
- If any deny prompts are set, and the request matches a prompt in the deny list, the caller receives a 400 response.
- If any allow prompts are set, but the request matches none of the allowed prompts, the caller also receives a 400 response.
- If any allow prompts are set, and the request matches one of the allow prompts, the request passes through to the LLM.
- If both deny and allow prompts are set, the deny condition takes precedence over allow. Any request that matches a prompt in the deny list returns a 400 response, even if it also matches a prompt in the allow list. If the request does not match a prompt in the deny list, it must match a prompt in the allow list to be passed through to the LLM.
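The precedence rules above can be sketched as a short Python function. This is an illustrative sketch, not the plugin's implementation: the `matches` callback stands in for the plugin's semantic-similarity check (in the real plugin, prompts are compared via embeddings against a configured threshold), and the trivial substring matcher at the end is only for demonstration.

```python
def guard(prompt, deny_prompts, allow_prompts, matches):
    """Return the outcome for a request, following the plugin's precedence rules.

    `matches` is a hypothetical stand-in for the semantic-similarity check;
    it returns True when `prompt` is similar enough to a configured prompt.
    """
    # Deny takes precedence: any match in the deny list is rejected with a 400.
    if any(matches(prompt, d) for d in deny_prompts):
        return 400
    # If an allow list is set, the prompt must match at least one of its entries.
    if allow_prompts and not any(matches(prompt, a) for a in allow_prompts):
        return 400
    # Otherwise the request passes through to the LLM.
    return "pass"


# Trivial substring "similarity" check, for illustration only.
similar = lambda prompt, rule: rule.lower() in prompt.lower()

print(guard("How do I reset my password?",
            deny_prompts=["anything about politics"],
            allow_prompts=["reset my password"],
            matches=similar))  # prints: pass
```

Note that with an empty allow list, any request that clears the deny list passes through; setting an allow list switches the plugin to a default-deny posture for unmatched prompts.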
Get started with the AI Semantic Prompt Guard plugin
- AI Gateway quickstart: Set up AI Proxy
- Configuration reference
- Basic configuration example
- Learn how to use the plugin