This documentation describes the integration of MindsDB with LangChain, a framework for developing applications powered by language models. The integration allows for the deployment of LangChain models within MindsDB, providing the models with access to data from various data sources.

Prerequisites

Before proceeding, ensure the following prerequisites are met:

  1. Install MindsDB locally via Docker or Docker Desktop.
  2. To use LangChain within MindsDB, install the required dependencies following these instructions.
  3. Obtain the API key for a selected model provider that you want to use through LangChain.

Available model providers include the following:

  • OpenAI
  • Anthropic
  • Anyscale
  • Google
  • LiteLLM

Setup

Create an AI engine from the LangChain handler.

CREATE ML_ENGINE langchain_engine
FROM langchain
USING
      serper_api_key = 'your-serper-api-key'; -- it is an optional parameter (if provided, the model will use serper.dev search to enhance the output)

Create a model using langchain_engine as an engine and a selected model provider.

CREATE MODEL langchain_model
PREDICT target_column
USING
      engine = 'langchain_engine',           -- engine name as created via CREATE ML_ENGINE
      <provider>_api_key = 'api-key-value',  -- replace <provider> with one of the available values (openai, anthropic, anyscale, google, litellm)
      model_name = 'model-name',             -- optional, model to be used (for example, 'gpt-4' if 'openai_api_key' provided)
      prompt_template = 'message to the model that may include some {{input}} columns as variables',
      max_tokens = 4096; -- optional, defines the maximum number of tokens the model can generate in its output

This handler supports tracing features for LangChain via LangFuse. To use it, provide the following parameters in the USING clause, as shown in the example after this list:

  • langfuse_host,
  • langfuse_public_key,
  • langfuse_secret_key.
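For example, a model with LangFuse tracing enabled could be created as follows. This is a minimal sketch: langchain_traced_model is a hypothetical model name, the langfuse_* values are placeholders for your own LangFuse project credentials, and langfuse_host points at LangFuse Cloud here (use your own host if self-hosting).

CREATE MODEL langchain_traced_model
PREDICT target_column
USING
      engine = 'langchain_engine',
      <provider>_api_key = 'api-key-value',
      prompt_template = 'message to the model that may include some {{input}} columns as variables',
      langfuse_host = 'https://cloud.langfuse.com',
      langfuse_public_key = 'your-langfuse-public-key',
      langfuse_secret_key = 'your-langfuse-secret-key';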

The LangChain handler implements an agent that utilizes three different tools:

  • MindsDB is the internal MindsDB executor.
  • Metadata fetches the metadata information for the available tables.
  • Write is able to write agent responses into a MindsDB data source.

Each tool exposes the internal MindsDB executor in a different way to perform its tasks, effectively enabling the agent model to read from (and potentially write to) data sources or models available in the active MindsDB project.
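As a sketch of how this surfaces in practice, you can ask the model created above a question about the data available in the project; the agent may then use the MindsDB and Metadata tools to answer it. The column names follow the {{input}} variable and the target_column used in the generic example above.

SELECT input, target_column
FROM langchain_model
WHERE input = 'Which tables are available in this project?';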

Create a conversational model using langchain_engine as an engine and a selected model provider.
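A minimal sketch of such a model is shown below; langchain_conversational_model is a hypothetical name, and the conversational-mode parameters (mode, user_column, assistant_column) follow the ones used in the Usage example further down.

CREATE MODEL langchain_conversational_model
PREDICT answer
USING
      engine = 'langchain_engine',
      <provider>_api_key = 'api-key-value',
      model_name = 'model-name',
      mode = 'conversational',
      user_column = 'question',
      assistant_column = 'answer',
      prompt_template = 'Answer the user input in a helpful way: {{question}}',
      max_tokens = 4096;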

Usage

The following usage examples utilize a LangChain engine to create models with the CREATE MODEL statement.

Create a model that will be used to ask questions.

CREATE ML_ENGINE langchain_engine_google
FROM langchain;

CREATE MODEL langchain_google_model
PREDICT answer 
USING
     engine = 'langchain_engine_google',
     provider = 'google',
     google_api_key = 'api-key-value',
     model_name = 'gemini-1.5-flash',
     mode = 'conversational',
     user_column = 'question',
     assistant_column = 'answer',
     verbose = True,
     prompt_template = 'Answer the user input in a helpful way: {{question}}',
     max_tokens = 4096;

Ask questions.

SELECT question, answer
FROM langchain_google_model
WHERE question = 'How many planets are in the solar system?';
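
You can also ask questions in batch by joining the model with a data table. This is a sketch that assumes a table named my_project.questions with a question column exists in your MindsDB project:

SELECT t.question, m.answer
FROM my_project.questions AS t
JOIN langchain_google_model AS m;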

Next Steps

Go to the Use Cases section to see more examples.