Motivation
Half a month ago, in this post, I tried six LLMs for building a Knowledge Graph. As I wrote there, the LLMs usable at that time were OpenAI's and Mistral's. I then tried running MistralAI's LLM on my PC (a local environment) and found that it was not usable for the knowledge graph. In this post, I try using MistralAI through LangChain via its API to see whether it can be used for the knowledge graph.
Sources
- MistralAI This is the home page. To obtain an account and API key, go to this page: sign up, then enter the verification code sent to your email address.
- Coding AI with [Codestral-22B] from Mistral AI, supporting over 80 languages! I referred to this page for obtaining an account and API key.
Dockerfile
This is the Dockerfile for using MistralAI via LangChain from a JupyterLab notebook. I added one line to the Dockerfile in this post; the other parts are unchanged.
RUN pip install --upgrade pip setuptools \
&& pip install torch==2.2.2 torchvision==0.17.2 torchaudio==2.2.2 \
--index-url https://download.pytorch.org/whl/cu121 \
&& pip install torch torchvision torchaudio \
&& pip install jupyterlab matplotlib pandas scikit-learn ipywidgets \
&& pip install transformers accelerate sentencepiece einops \
&& pip install langchain bitsandbytes protobuf \
&& pip install auto-gptq optimum \
&& pip install pypdf tiktoken sentence_transformers faiss-gpu trafilatura \
&& pip install langchain-community langchain_openai wikipedia \
&& pip install langchain-huggingface unstructured html2text rank-bm25 janome \
&& pip install langchain-chroma sudachipy sudachidict_full \
&& pip install mysql-connector-python \
&& pip install langchain-experimental neo4j pandas \
&& pip install json-repair langchain-mistralai
The “langchain-mistralai” package on the last line is the part I added.
Launch the Mistral model
# Launch the Mistral-small model
import os
from langchain_mistralai import ChatMistralAI

os.environ['MISTRAL_API_KEY'] = 'xxxx'

llm = ChatMistralAI(
    model="mistral-small",
    temperature=0,
    # other params...
)
'xxxx' is the API key obtained from the MistralAI page.
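Rather than hardcoding the key in the notebook, it can be read from an environment variable so the notebook can be shared safely. Below is a minimal sketch; the helper name `get_mistral_api_key` is my own for illustration, not part of LangChain:

```python
import os


def get_mistral_api_key() -> str:
    """Return the Mistral API key from the environment, failing early if missing."""
    key = os.environ.get("MISTRAL_API_KEY", "")
    if not key:
        raise RuntimeError(
            "MISTRAL_API_KEY is not set; obtain a key from the MistralAI console"
        )
    return key


# Placeholder value, as in the snippet above.
os.environ["MISTRAL_API_KEY"] = "xxxx"
print(get_mistral_api_key())  # → xxxx
```

Failing before any request is sent gives a clearer error than the API's own response to a missing or malformed key.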
For the rest of the code, please refer to Source 1 mentioned in this post.
Execution Results
Running with the mistral-small model resulted in the following error:
HTTPStatusError: Error response 400 while fetching https://api.mistral.ai/v1/chat/completions: {"object":"error","message":"Function calling is not enabled for this model","type":"invalid_request_error","param":null,"code":null}
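The body of that 400 response is plain JSON, so it can be inspected programmatically. A small sketch, using only the standard library, that extracts the error type and message from the body copied above:

```python
import json

# The 400 response body returned by https://api.mistral.ai/v1/chat/completions
# (copied from the run above).
body = (
    '{"object":"error","message":"Function calling is not enabled for this model",'
    '"type":"invalid_request_error","param":null,"code":null}'
)

err = json.loads(body)
print(err["type"])     # → invalid_request_error
print(err["message"])  # → Function calling is not enabled for this model
```

The message shows the cause: LangChain's graph-creation code relies on function (tool) calling, which the mistral-small model did not support at the time, so the request itself is rejected.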
Change the Mistral model
I changed the model to “mistral-large-latest” as follows:
# Launch the Mistral model
import os
from langchain_mistralai import ChatMistralAI

os.environ['MISTRAL_API_KEY'] = 'xxxx'

llm = ChatMistralAI(
    model="mistral-large-latest",
    temperature=0,
    # other params...
)
Summary
I found that knowledge graphs can be created using MistralAI's model. However, this time I only created the graphs with the LLM and did not verify RAG using the created graphs.
My reason for trying MistralAI this time is as follows: I am planning to use Amazon Bedrock in the future, so I wanted to check in advance which of the LLMs (foundation models; FMs) provided there can be used with knowledge graphs.