Prerequisites
- Mistral’s API key
- Redis configured as a vector database
- Redis configured as a cache (a quick way to run a Redis instance that can serve both roles is sketched after the route example below)
- A service and a route for the LLM provider. The service contains the route, so create the service first:
```sh
curl -X POST http://localhost:8001/services \
  --data "name=ai-semantic-cache" \
  --data "url=http://localhost:32000"
```
Remember that the upstream URL can point anywhere, even to an empty endpoint, as it won't be used by the plugin.
Then, create a route:
```sh
curl -X POST http://localhost:8001/services/ai-semantic-cache/routes \
  --data "name=mistral-semantic-cache" \
  --data "paths[]=~/mistral-semantic-cache$"
```
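If you don't already have a suitable Redis instance, one convenient option for local testing (an assumption of this sketch, not a plugin requirement) is Redis Stack, which bundles the query engine needed for vector similarity search alongside standard caching:

```sh
# Run Redis Stack locally. The redis-stack image ships with the
# RediSearch-based query engine used for vector similarity search
# (older plain Redis images don't include it). Port 6379 matches the
# vectordb.redis.port value used later in this example.
docker run -d --name redis-stack -p 6379:6379 redis/redis-stack:latest
```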
Mistral Example
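The plugin configuration is described parameter by parameter below. As a rough sketch, enabling it on the service could look like the following; the field paths mirror the parameter names listed below, while the `Bearer` prefix, the Mistral embeddings endpoint, the `0.1` threshold, and the Redis host are illustrative assumptions, and the exact schema may differ between Kong versions, so verify it against the plugin's configuration reference:

```sh
# Sketch only: field paths follow the parameter names described below,
# and the concrete values are placeholders; adjust both for your setup.
curl -X POST http://localhost:8001/services/ai-semantic-cache/plugins \
  --data "name=ai-semantic-cache" \
  --data "config.embeddings.auth.header_value=Bearer <MISTRAL_API_KEY>" \
  --data "config.model.provider=mistral" \
  --data "config.model.name=mistral-embed" \
  --data "config.model.options.upstream_url=https://api.mistral.ai/v1/embeddings" \
  --data "config.vectordb.strategy=redis" \
  --data "config.vectordb.dimensions=1024" \
  --data "config.vectordb.distance_metric=cosine" \
  --data "config.vectordb.threshold=0.1" \
  --data "config.vectordb.redis.host=localhost" \
  --data "config.vectordb.redis.port=6379"
```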
This configures the following:
- `embeddings.auth.header_value`: The API key for Mistral. This example sets Mistral's API key explicitly, but you can reference an environment variable instead.
- `model.provider`: The model provider to use. In this example, Mistral.
- `model.name`: The AI model to use for generating embeddings. This example is configured with `mistral-embed` because it's the only option available for Mistral AI.
- `model.options.upstream_url`: The upstream URL for the LLM provider.
- `vectordb.dimensions`: The dimensionality of the vectors. This configuration uses `1024`, since that's the example Mistral uses in their documentation.
- `vectordb.distance_metric`: The distance metric to use for vectors. This example uses `cosine`.
- `vectordb.strategy`: Defines the vector database to use, in this case Redis.
- `vectordb.threshold`: Defines the similarity threshold for accepting semantic search results. This example is configured with a low threshold, meaning results that are only somewhat similar to the original prompt are accepted as cache hits.
- `vectordb.redis.host`: The host of your vector database.
- `vectordb.redis.port`: The port to use for your vector database.
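To check that caching works, you can send two semantically similar requests through the route; Kong's default proxy port `8000`, the prompts, and the `X-Cache-Status` response header used below are illustrative assumptions about a typical setup:

```sh
# First call: expect a cache miss that is forwarded to Mistral.
curl -i -X POST http://localhost:8000/mistral-semantic-cache \
  --header "Content-Type: application/json" \
  --data '{"messages":[{"role":"user","content":"What is semantic caching?"}]}'

# Second, semantically similar call: with a low vectordb.threshold,
# this should be answered from the Redis cache instead.
curl -i -X POST http://localhost:8000/mistral-semantic-cache \
  --header "Content-Type: application/json" \
  --data '{"messages":[{"role":"user","content":"Explain semantic caching."}]}'
```

Comparing the response headers of the two calls (for example, an `X-Cache-Status` of `Miss` versus `Hit`) shows whether the second request was served from the cache.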
More information
- Redis Documentation: Vectors - Learn how to use vector fields and perform vector searches in Redis
- Redis Documentation: How to Perform Vector Similarity Search Using Redis in NodeJS