CREATE KNOWLEDGE_BASE Syntax
Here is the syntax for creating a knowledge base:
This command creates a knowledge base named my_kb and associates the specified models and storage. my_kb is a unique identifier of the knowledge base within MindsDB.
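A minimal sketch of the statement, assuming the openai provider; the model names and API key below are placeholders:

```sql
CREATE KNOWLEDGE_BASE my_kb
USING
    embedding_model = {
        "provider": "openai",
        "model_name": "text-embedding-3-large",
        "api_key": "sk-..."
    },
    reranking_model = {
        "provider": "openai",
        "model_name": "gpt-4o",
        "api_key": "sk-..."
    },
    metadata_columns = ['product'],
    content_columns = ['notes'],
    id_column = 'order_id';
```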
Here is how to list all knowledge bases:
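A sketch of the listing command:

```sql
SHOW KNOWLEDGE_BASES;
```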
Users can use variables and the from_env() function to pass parameters when creating knowledge bases. As MindsDB stores objects, such as models or knowledge bases, inside projects, you can create a knowledge base inside a custom project.
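A sketch combining both points, assuming an OPENAI_API_KEY environment variable is set and a project named my_project exists (both are hypothetical here):

```sql
CREATE KNOWLEDGE_BASE my_project.my_kb
USING
    embedding_model = {
        "provider": "openai",
        "model_name": "text-embedding-3-large",
        "api_key": from_env("OPENAI_API_KEY")
    };
```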
Supported LLMs
Below is the list of all language models supported for the embedding_model and reranking_model parameters.
provider = 'openai'
This provider is supported for both embedding_model and reranking_model.
Users can define the default embedding and reranking models from OpenAI in Settings of the MindsDB GUI. Furthermore, users can select Custom OpenAI API from the dropdown and use models from any OpenAI-compatible API.

When choosing openai as the model provider, users should define the following model parameters.
- model_name stores the name of the OpenAI model to be used.
- api_key stores the OpenAI API key.
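A sketch of these parameters in use; the model names and key are placeholders:

```sql
CREATE KNOWLEDGE_BASE openai_kb
USING
    embedding_model = {
        "provider": "openai",
        "model_name": "text-embedding-3-large",
        "api_key": "sk-..."
    },
    reranking_model = {
        "provider": "openai",
        "model_name": "gpt-4o",
        "api_key": "sk-..."
    };
```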
provider = 'openai_azure'
This provider is supported for both embedding_model and reranking_model.
Users can define the default embedding and reranking models from Azure OpenAI in Settings of the MindsDB GUI.
When choosing openai_azure as the model provider, users should define the following model parameters.
- model_name stores the name of the OpenAI model to be used.
- api_key stores the OpenAI API key.
- base_url stores the base URL of the Azure instance.
- api_version stores the version of the Azure instance.
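A sketch of these parameters in use; all values below are placeholders to be replaced with your Azure instance details:

```sql
CREATE KNOWLEDGE_BASE azure_kb
USING
    embedding_model = {
        "provider": "openai_azure",
        "model_name": "text-embedding-3-large",
        "api_key": "...",
        "base_url": "https://my-resource.openai.azure.com/",
        "api_version": "2024-02-01"
    };
```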
Users need to log in to their Azure OpenAI instance to retrieve all relevant parameter values. Next, click on Explore Azure AI Foundry portal and go to Models + endpoints. Select the model and copy the parameter values.

provider = 'google'
This provider is supported for both embedding_model and reranking_model.
Users can define the default embedding and reranking models from Google in Settings of the MindsDB GUI.
When choosing google as the model provider, users should define the following model parameters.
- model_name stores the name of the Google model to be used.
- api_key stores the Google API key.
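A sketch of these parameters in use; the model name and key are placeholders:

```sql
CREATE KNOWLEDGE_BASE google_kb
USING
    embedding_model = {
        "provider": "google",
        "model_name": "text-embedding-004",
        "api_key": "..."
    };
```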
provider = 'bedrock'
This provider is supported for both embedding_model and reranking_model.
When choosing bedrock as the model provider, users should define the following model parameters.
- model_name stores the name of the model available via Amazon Bedrock.
- aws_access_key_id stores a unique identifier associated with your AWS account, used to identify the user or application making requests to AWS.
- aws_region_name stores the name of the AWS region you want to send your requests to (e.g., "us-west-2").
- aws_secret_access_key stores the secret key associated with your AWS access key ID. It is used to sign your requests securely.
- aws_session_token is an optional parameter that stores a temporary token used for short-term security credentials when using AWS Identity and Access Management (IAM) roles or temporary credentials.
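A sketch of these parameters in use; the model ID and credentials are placeholders:

```sql
CREATE KNOWLEDGE_BASE bedrock_kb
USING
    embedding_model = {
        "provider": "bedrock",
        "model_name": "amazon.titan-embed-text-v2:0",
        "aws_access_key_id": "...",
        "aws_secret_access_key": "...",
        "aws_region_name": "us-west-2"
    };
```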
provider = 'snowflake'
This provider is supported for reranking_model. Note that Snowflake Cortex AI does not offer embedding models as of now.
When choosing snowflake as the model provider, users should choose one of the available models from Snowflake Cortex AI and define the following model parameters.
- model_name stores the name of the model available via Snowflake Cortex AI.
- api_key stores the Snowflake Cortex AI API key.
- snowflake_account_id stores the Snowflake account ID.
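A sketch of these parameters in use; since Snowflake Cortex AI offers no embedding models, an embedding model from another provider is assumed, and all values are placeholders:

```sql
CREATE KNOWLEDGE_BASE snowflake_kb
USING
    embedding_model = {
        "provider": "openai",
        "model_name": "text-embedding-3-large",
        "api_key": "sk-..."
    },
    reranking_model = {
        "provider": "snowflake",
        "model_name": "mistral-large2",
        "api_key": "<JWT token>",
        "snowflake_account_id": "..."
    };
```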
How to Generate the API key of Snowflake Cortex AI
Follow the below steps to generate the API key.

- Generate a key pair according to this instruction, executing these commands in the console:
- Save the public key, that is, the content of rsa_key.pub, into your database user by executing these commands in the console:
- Verify the key pair with the database user.
- Install snowsql following this instruction, then execute this command in the console:
- Generate the JWT token.
  - Download the Python script from Snowflake's Developer Guide for Authentication. Here is a direct download link.
  - Ensure the PyJWT module, which is required for running the script, is installed.
  - Run the script using this command:

This command returns the JWT token, which is used in the api_key parameter for the snowflake provider.
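The key-pair step above can be sketched with the standard Snowflake key-pair commands; verify them against Snowflake's own instruction before use:

```shell
# A sketch of Snowflake key-pair generation (verify against Snowflake's docs).
# Generate an unencrypted private key in PKCS#8 format:
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8 -nocrypt
# Derive the matching public key into rsa_key.pub:
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
```

The JWT-generation script is then run with your account identifier, user name, and private key path, as described in Snowflake's Developer Guide.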
provider = 'ollama'
This provider is supported for both embedding_model and reranking_model.
Users can define the default embedding and reranking models from Ollama in Settings of the MindsDB GUI.
When choosing ollama as the model provider, users should define the following model parameters.
- model_name stores the name of the model to be used.
- base_url stores the base URL of the Ollama instance.
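A sketch of these parameters in use, assuming a local Ollama instance on the default port; the model name is a placeholder:

```sql
CREATE KNOWLEDGE_BASE ollama_kb
USING
    embedding_model = {
        "provider": "ollama",
        "model_name": "nomic-embed-text",
        "base_url": "http://localhost:11434"
    };
```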
embedding_model
The embedding model is a required component of the knowledge base. It stores specifications of the embedding model to be used.
Users can define the embedding model choosing one of the following options.
Option 1. Use the embedding_model parameter to define the specification.
You can define the default models in the Settings of the MindsDB Editor GUI.
Note that if you define default_embedding_model in the configuration file, you do not need to provide the embedding_model parameter when creating a knowledge base. If you provide both, the values from the embedding_model parameter are used.

When using default_embedding_model from the configuration file, the knowledge base saves this model internally. Therefore, changing default_embedding_model in the configuration file after the knowledge base is created does not affect already created knowledge bases.
- provider: It is a required parameter. It defines the model provider.
- model_name: It is a required parameter. It defines the embedding model name as specified by the provider.
- api_key: The API key is required to access the embedding model assigned to a knowledge base. Users can provide it either in this api_key parameter, or in the OPENAI_API_KEY environment variable for "provider": "openai" and the AZURE_OPENAI_API_KEY environment variable for "provider": "azure_openai".
- base_url: It is an optional parameter, which defaults to https://api.openai.com/v1/. It is a required parameter when using the azure_openai provider. It is the root URL used to send API requests.
- api_version: It is an optional parameter. It is a required parameter when using the azure_openai provider. It defines the API version.
reranking_model
The reranking model is an optional component of the knowledge base. It stores specifications of the reranking model to be used.
Users can disable the reranking features of knowledge bases by setting this parameter to false.

Option 1. Use the reranking_model parameter to define the specification.
You can define the default models in the Settings of the MindsDB Editor GUI.
Note that if you define default_reranking_model in the configuration file, you do not need to provide the reranking_model parameter when creating a knowledge base. If you provide both, the values from the reranking_model parameter are used.

When using default_reranking_model from the configuration file, the knowledge base saves this model internally. Therefore, changing default_reranking_model in the configuration file after the knowledge base is created does not affect already created knowledge bases.
- provider: It is a required parameter. It defines the model provider, as listed in Supported LLMs.
- model_name: It is a required parameter. It defines the reranking model name as specified by the provider.
- api_key: The API key is required to access the reranking model assigned to a knowledge base. Users can provide it either in this api_key parameter, or in the OPENAI_API_KEY environment variable for "provider": "openai" and the AZURE_OPENAI_API_KEY environment variable for "provider": "azure_openai".
- base_url: It is an optional parameter, which defaults to https://api.openai.com/v1/. It is a required parameter when using the azure_openai provider. It is the root URL used to send API requests.
- api_version: It is an optional parameter. It is a required parameter when using the azure_openai provider. It defines the API version.
- method: It is an optional parameter. It defines the method used to calculate the relevance of the output rows. The available options include multi-class and binary. It defaults to multi-class.
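A sketch of the reranking specification with an explicit method; the model name and key are placeholders:

```sql
CREATE KNOWLEDGE_BASE reranked_kb
USING
    embedding_model = {
        "provider": "openai",
        "model_name": "text-embedding-3-large",
        "api_key": "sk-..."
    },
    reranking_model = {
        "provider": "openai",
        "model_name": "gpt-4o",
        "api_key": "sk-...",
        "method": "binary"
    };
```

To disable reranking instead, set reranking_model = false.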
Reranking Method

The multi-class reranking method classifies each document chunk (that meets any specified metadata filtering conditions) into one of four relevance classes:
- Not relevant, with a class weight of 0.25.
- Slightly relevant, with a class weight of 0.5.
- Moderately relevant, with a class weight of 0.75.
- Highly relevant, with a class weight of 1.

The relevance_score of a document is calculated as the sum of each chunk's class weight multiplied by its class probability (from the model's logprob output). For example, a chunk classified as highly relevant with a probability of 0.8 contributes 1 × 0.8 = 0.8 to the score.

The binary reranking method simplifies classification by determining whether a document is relevant or not, without intermediate relevance levels. With this method, the overall relevance_score of a document is calculated based on the model log probability.

storage
The vector store is a required component of the knowledge base. It stores data in the form of embeddings.
It is optional for users to provide the storage parameter. If not provided, the default ChromaDB is created when creating a knowledge base.
The available options include either PGVector or ChromaDB.
It is recommended to use PGVector version 0.8.0 or higher for better performance.
If the storage parameter is not provided, the system creates the default ChromaDB vector database called <kb_name>_chromadb with the default table called default_collection that stores the embedded data. This default ChromaDB vector database is stored in MindsDB's storage.
To use your own storage vector database, you must connect it to MindsDB beforehand.
Here is an example for PGVector.
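A sketch, assuming a PGVector connection named my_pgvector; all connection values and the embedding model details are placeholders:

```sql
-- Connect a PGVector database to MindsDB first.
CREATE DATABASE my_pgvector
WITH ENGINE = 'pgvector',
PARAMETERS = {
    "host": "127.0.0.1",
    "port": 5432,
    "database": "postgres",
    "user": "postgres",
    "password": "..."
};

-- Then point the knowledge base at a table in that connection.
CREATE KNOWLEDGE_BASE my_kb
USING
    embedding_model = {
        "provider": "openai",
        "model_name": "text-embedding-3-large",
        "api_key": "sk-..."
    },
    storage = my_pgvector.storage_table;
```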
Note that you do not need to have the storage_table created beforehand, as it is created when creating the knowledge base.

metadata_columns
The data inserted into the knowledge base can be classified as metadata, which enables users to filter the search results using defined data fields.
Note that source data column(s) included in metadata_columns cannot be used in content_columns, and vice versa. If the metadata_columns parameter is not defined, then all remaining columns (that is, other than id_column and content_columns) are considered metadata columns.
Here is an example of usage. A user wants to store the following data in a knowledge base.
Go to the Complete Example section below to find out how to access this sample data.
The product column can be used as metadata to enable metadata filtering.
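A sketch of the parameter in use, assuming the sample data above; the embedding model details are placeholders:

```sql
CREATE KNOWLEDGE_BASE orders_kb
USING
    embedding_model = {
        "provider": "openai",
        "model_name": "text-embedding-3-large",
        "api_key": "sk-..."
    },
    metadata_columns = ['product'];
```

Search results can then be filtered on the product column, for example with a WHERE product = '<value>' condition.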
content_columns
The data inserted into the knowledge base can be classified as content, which is embedded by the embedding model and stored in the underlying vector store.
Note that source data column(s) included in
content_columns cannot be used in metadata_columns, and vice versa.content column is expected by default when inserting data into the knowledge base.
Here is an example of usage. A user wants to store the following data in a knowledge base.
Go to the Complete Example section below to find out how to access this sample data.
The notes column can be used as content.
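A sketch of the parameter in use, assuming the sample data above; the embedding model details are placeholders:

```sql
CREATE KNOWLEDGE_BASE orders_kb
USING
    embedding_model = {
        "provider": "openai",
        "model_name": "text-embedding-3-large",
        "api_key": "sk-..."
    },
    content_columns = ['notes'];
```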
id_column
The ID column uniquely identifies each source data row in the knowledge base.
It is an optional parameter. If provided, this parameter is a string that contains the source data ID column name. If not provided, it is generated from the hash of the content columns.
Here is an example of usage. A user wants to store the following data in a knowledge base.
Go to the Complete Example section below to find out how to access this sample data.
The order_id column can be used as the ID.
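A sketch of the parameter in use, assuming the sample data above; the embedding model details are placeholders:

```sql
CREATE KNOWLEDGE_BASE orders_kb
USING
    embedding_model = {
        "provider": "openai",
        "model_name": "text-embedding-3-large",
        "api_key": "sk-..."
    },
    id_column = 'order_id';
```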
Note that if the source data row is chunked into multiple chunks by the knowledge base (that is, to optimize the storage), then these rows in the knowledge base have the same ID value that identifies chunks from one source data row.
Available options for the ID column values
- User-Defined ID Column: When users define the id_column parameter, the values from the provided source data column are used to identify source data rows within the knowledge base.
- User-Generated ID Column: When users do not have a column that uniquely identifies each row in their source data, they can generate the ID column values when inserting data into the knowledge base using functions like HASH() or ROW_NUMBER().
- Default ID Column: If the id_column parameter is not defined, its default values are built from the hash of the content columns and follow the format: <first 16 char of md5 hash of row content>.
Example
Here is a sample knowledge base that will be used for examples in the following content.

DESCRIBE KNOWLEDGE_BASE Syntax
Users can get details about the knowledge base using the DESCRIBE KNOWLEDGE_BASE command.
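A sketch of the command, assuming a knowledge base named my_kb:

```sql
DESCRIBE KNOWLEDGE_BASE my_kb;
```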