このページは、まだ日本語ではご利用いただけません。翻訳中です。
Looking for the plugin's configuration parameters? You can find them in the AI Semantic Prompt Guard configuration reference doc.
The AI Semantic Prompt Guard plugin enhances the AI Prompt Guard plugin by allowing you to permit or block prompts based on a list of similar prompts, helping to prevent misuse of llm/v1/chat
or llm/v1/completions
requests.
You can use a combination of allow
and deny
rules to maintain integrity and compliance when serving an LLM service using Kong Gateway.
How it works
The plugin matches lists of prompts to requests through AI Proxy.
The matching behavior is as follows:
- If any
deny
prompts are set, and the request matches prompt in thedeny
list, the caller receives a 400 response. - If any
allow
prompts are set, but the request matches none of the allowed prompts, the caller also receives a 400 response. - If any
allow
prompts are set, and the request matches one of theallow
prompts, the request passes through to the LLM. - If there are both
deny
andallow
prompts set, thedeny
condition takes precedence overallow
. Any request that matches a prompt in thedeny
list will return a 400 response, even if it also matches a prompt in theallow
list. If the request does not match a prompt in thedeny
list, then it must match a prompt in theallow
list to be passed through to the LLM
Get started with the AI Prompt Guard plugin
- AI Gateway quickstart: Set up AI Proxy
- Configuration reference
- Basic configuration example
- Learn how to use the plugin