This documentation describes the integration of MindsDB with OpenAI, an AI research organization known for developing AI models like GPT-3 and GPT-4.
The integration allows for the deployment of OpenAI models within MindsDB, providing the models with access to data from various data sources.
CREATE MODEL openai_model
PREDICT target_column
USING
    engine = 'openai_engine',                    -- engine name as created via CREATE ML_ENGINE
    api_base = 'base-url',                       -- optional, replaces the default base URL
    mode = 'mode_name',                          -- optional, mode to run the model in
    model_name = 'openai_model_name',            -- optional, defaults to gpt-3.5-turbo
    question_column = 'question',                -- optional, column name that stores user input
    context_column = 'context',                  -- optional, column that stores context of the user input
    prompt_template = 'input message to the model here',  -- optional, user provides instructions to the model here
    user_column = 'user_input',                  -- optional, stores user input
    assistant_column = 'conversation_context',   -- optional, stores conversation context
    prompt = 'instruction to the model',         -- optional, stores instruction to the model
    max_tokens = 100,                            -- optional, token limit for the answer
    temperature = 0.3,                           -- optional, controls the creativity of the answer
    json_struct = {
        'key': 'value',
        ...
    };
If you want to update the prompt_template parameter, you do not have to recreate the model. Instead, you can override the prompt_template parameter at prediction time like this:
SELECT question, answer
FROM openai_model
WHERE question = 'input question here'
USING prompt_template = 'input new message to the model here';
The following parameters are available to use when creating an OpenAI model:
engine
This is the engine name as created with the CREATE ML_ENGINE statement.
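If you have not created the engine yet, here is a minimal sketch of the statement (the key value is a placeholder; use your own OpenAI API key):

CREATE ML_ENGINE openai_engine
FROM openai
USING
    openai_api_key = 'your-openai-api-key';  -- placeholder, replace with your own key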
api_base
This parameter is optional.
It replaces OpenAI's default base URL with the defined value.
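For example, you could point the engine at an alternative endpoint that speaks the OpenAI API, such as a proxy. A sketch, where the URL and model name are placeholders:

CREATE MODEL openai_custom_base_model
PREDICT answer
USING
    engine = 'openai_engine',
    api_base = 'https://my-openai-compatible-host/v1',  -- placeholder URL
    question_column = 'question';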
mode
This parameter is optional.
The available modes include default, conversational, conversational-full, image, and embedding.
If no mode is provided, the default mode is used. In this mode, the model replies to the prompt_template message.
The conversational mode enables the model to read and reply to multiple messages.
The conversational-full mode enables the model to read and reply to multiple messages, one reply per message.
The image mode is used to create an image instead of a text reply.
The embedding mode enables the model to return output in the form of embeddings.
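As an illustration, here is a sketch of a model created in the embedding mode, assuming the text to embed is passed via question_column (the model and column names are illustrative):

CREATE MODEL openai_embedding_model
PREDICT embedding
USING
    engine = 'openai_engine',
    mode = 'embedding',
    question_column = 'text';  -- column whose values are converted into embeddings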
question_column
This parameter is optional. It contains the column name that stores user input.
context_column
This parameter is optional. It contains the column name that stores context for the user input.
prompt_template
This parameter is optional if you use question_column. It stores the message or instructions to the model. Please note that this parameter can be overridden at prediction time.
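A template can reference input columns with double curly braces, which are substituted per row at prediction time, as in the prompt completion example later on this page. A sketch with an illustrative model name and template text:

CREATE MODEL openai_template_model
PREDICT answer
USING
    engine = 'openai_engine',
    -- {{question}} is replaced with the value of the question column for each row
    prompt_template = 'Answer the following question concisely: {{question}}';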
max_tokens
This parameter is optional. It defines the maximum token cost of the prediction. Please note that this parameter can be overridden at prediction time.
temperature
This parameter is optional. It defines how creative, or risky, the answers are: a value of 0 produces a well-defined, conservative answer, while a value of 0.9 produces a more creative one. Please note that this parameter can be overridden at prediction time.
json_struct
This parameter is optional. It is used to extract JSON data from a text column provided in the prompt_template parameter. See the sketch below.
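As a sketch of the shape this takes, suppose the input table has a product_description column and we want structured fields out of it (the model name, keys, and template text are illustrative):

CREATE MODEL openai_json_model
PREDICT json
USING
    engine = 'openai_engine',
    -- keys name the output fields; values describe what to extract
    json_struct = {
        'product_name': 'name of the product',
        'product_price': 'price of the product'
    },
    prompt_template = 'Extract product data from the following text: {{product_description}}';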
Here are the supported combinations of parameters for creating a model:
Provide a prompt_template alone.
Provide a question_column and optionally a context_column.
Provide a prompt, user_column, and assistant_column to create a model in the conversational mode.
The following usage examples utilize openai_engine to create a model with the CREATE MODEL statement.
Answering questions without context
Here is how to create a model that answers questions without context.
CREATE MODEL openai_model
PREDICT answer
USING
    engine = 'openai_engine',
    question_column = 'question';
Query the model to get predictions.
SELECT question, answer
FROM openai_model
WHERE question = 'Where is Stockholm located?';
Here is the output:
+---------------------------+-------------------------------+
|question                   |answer                         |
+---------------------------+-------------------------------+
|Where is Stockholm located?|Stockholm is located in Sweden.|
+---------------------------+-------------------------------+
Answering questions with context
Here is how to create a model that answers questions with context.
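The creation statement mirrors the previous example and adds the context_column parameter; a sketch following the parameter list above:

CREATE MODEL openai_model
PREDICT answer
USING
    engine = 'openai_engine',
    question_column = 'question',
    context_column = 'context';

Query the model to get predictions.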
SELECT context, question, answer
FROM openai_model
WHERE context = 'Answer accurately'
AND question = 'How many planets exist in the solar system?';
On execution, we get:
+-------------------+-------------------------------------------+----------------------------------------------+
|context            |question                                   |answer                                        |
+-------------------+-------------------------------------------+----------------------------------------------+
|Answer accurately  |How many planets exist in the solar system?| There are eight planets in the solar system. |
+-------------------+-------------------------------------------+----------------------------------------------+
Prompt completion
This is the most flexible mode of operation. The model answers any query provided in the prompt_template parameter.
Good prompts are the key to getting great completions out of large language models like the ones that OpenAI offers. For best performance, we recommend you read their prompting guide before trying your hand at prompt templating.
Let’s look at an example that reuses the openai_model model created earlier and overrides parameters at prediction time.
SELECT instruction, answer
FROM openai_model
WHERE instruction = 'Speculate extensively'
USING
    prompt_template = '{{instruction}}. What does Tom Hanks like?',
    max_tokens = 100,
    temperature = 0.5;
On execution, we get:
+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|instruction           |answer                                                                                                                                                                                                                             |
+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|Speculate extensively |Some people speculate that Tom Hanks likes to play golf, while others believe that he enjoys acting and directing. It is also speculated that he likes to spend time with his family and friends, and that he enjoys traveling.    |
+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Conversational mode
Here is how to create a model in the conversational mode.
CREATE MODEL openai_chat_model
PREDICT response
USING
    engine = 'openai_engine',
    mode = 'conversational',
    model_name = 'gpt-3.5-turbo',
    user_column = 'user_input',
    assistant_column = 'conversation_history',
    prompt = 'Answer the question in a helpful way.';
And here is how to query this model:
SELECT response
FROM openai_chat_model
WHERE user_input = '<question>'
AND conversation_history = '<optionally, provide the context for the question>';