Example Code

```javascript
import { ConversationalRetrievalQAChain } from 'langchain/chains';
import { ChatMistralAI } from '@langchain/mistralai';
import { CheerioWebBaseLoader } from 'langchain/document_loaders/web/cheerio';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';
import { MistralAIEmbeddings } from '@langchain/mistralai';

const labelsJsonSchema = {
  type: 'array',
  items: {
    type: 'object',
    properties: {
      name: { type: 'string' },
      description: { type: 'string' },
    },
    required: ['name'],
  },
  description: 'A list of labels for the given article.',
};

(async function () {
  const articleUrl = 'https://medium.com/databutton/getting-started-with-langchain-a-powerful-tool-for-working-with-large-language-models-286419ba0842';
  const mistralAPIKey = 'YOUR_API';

  // Fetch the article page
  const loader = new CheerioWebBaseLoader(articleUrl);
  const docs = await loader.load();

  // Split the article into chunks
  const splitter = new RecursiveCharacterTextSplitter();
  const splitDocs = await splitter.splitDocuments(docs);

  const embeddings = new MistralAIEmbeddings({
    apiKey: mistralAPIKey,
  });
  const vectorStore = await MemoryVectorStore.fromDocuments(
    splitDocs,
    embeddings,
  );
  const retriever = vectorStore.asRetriever();

  // Set up the model
  const model = new ChatMistralAI({
    apiKey: mistralAPIKey,
    model: 'mistral-large-latest',
  });
  const modelWithTool = model.withStructuredOutput(labelsJsonSchema);

  // Set up the chain
  //! This will fail because the chain does not accept modelWithTool
  const chain = ConversationalRetrievalQAChain.fromLLM(modelWithTool, retriever);

  // Run the chain
  const response = await chain.invoke({
    question: `Recommend up to 5 relevant labels for the given article. Don't make things up and stick to the content of the article.`,
    chat_history: [],
  });
  console.log(response);
})();
```
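Since `withStructuredOutput` is driven by the JSON schema above, a lightweight runtime check of whatever the model returns makes schema mismatches easier to spot than a raw `console.log`. A dependency-free sketch of such a check (the `validateLabels` helper is hypothetical, not part of LangChain):

```javascript
// Hypothetical helper: checks that a value matches the labelsJsonSchema
// shape — an array of objects, each with a required string `name` and
// an optional string `description`.
function validateLabels(value) {
  if (!Array.isArray(value)) return false;
  return value.every(
    (item) =>
      item !== null &&
      typeof item === 'object' &&
      typeof item.name === 'string' &&
      (item.description === undefined || typeof item.description === 'string'),
  );
}

console.log(validateLabels([{ name: 'LangChain', description: 'Framework' }])); // true
console.log(validateLabels([{ description: 'missing name' }])); // false
console.log(validateLabels('not an array')); // false
```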
System Info

pnpm info output: keywords: llm, ai, gpt3, chain, prompt, prompt engineering, chatgpt, machine learning, ml, openai, embeddings, vectorstores; dist-tags: published a week ago by jacoblee93 (jacoblee93@gmail.com); platform: mac
Replies: 2 comments 4 replies
To use Mistral with structured output, here's a streamlined approach:
Adjust the templates and options to your specific requirements. If you run into issues or need further assistance, sharing more details about the error message will help in diagnosing the problem and offering a more accurate solution.
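One way to sidestep the chain limitation is to build the prompt by hand from the retrieved documents and call the structured-output model directly. A dependency-free sketch of that shape, where `buildLabelPrompt` and the `callModel` stub (standing in for `modelWithTool.invoke`) are illustrative names, not LangChain APIs:

```javascript
// Build a RAG-style prompt by hand from retrieved document chunks,
// so the structured-output model can be invoked in a single call.
function buildLabelPrompt(question, docs) {
  const context = docs.map((d) => d.pageContent).join('\n---\n');
  return `Use the following article excerpts to answer.\n\n${context}\n\nQuestion: ${question}`;
}

// Stub standing in for `modelWithTool.invoke(prompt)`; a real call
// would return JSON matching labelsJsonSchema.
async function callModel(prompt) {
  return [{ name: 'LangChain' }];
}

(async () => {
  const docs = [{ pageContent: 'LangChain is a framework for LLM apps.' }];
  const prompt = buildLabelPrompt('Recommend up to 5 labels.', docs);
  const labels = await callModel(prompt);
  console.log(prompt.includes('LangChain is a framework')); // true
  console.log(labels[0].name); // "LangChain"
})();
```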
I figured it out myself. This is the updated code:

```javascript
import { ConversationalRetrievalQAChain } from 'langchain/chains';
import { ChatMistralAI } from '@langchain/mistralai';
import { CheerioWebBaseLoader } from 'langchain/document_loaders/web/cheerio';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';
import { MistralAIEmbeddings } from '@langchain/mistralai';

const labelsJsonSchema = {
  type: 'array',
  items: {
    type: 'object',
    properties: {
      name: { type: 'string' },
      description: { type: 'string' },
    },
    required: ['name'],
  },
  description: 'A list of labels for the given article.',
};

(async function () {
  const articleUrl = 'https://medium.com/databutton/getting-started-with-langchain-a-powerful-tool-for-working-with-large-language-models-286419ba0842';
  const mistralAPIKey = 'YOUR_API';

  // Fetch the article page
  const loader = new CheerioWebBaseLoader(articleUrl);
  const docs = await loader.load();

  // Split the article into chunks
  const splitter = new RecursiveCharacterTextSplitter();
  const splitDocs = await splitter.splitDocuments(docs);

  const embeddings = new MistralAIEmbeddings({
    apiKey: mistralAPIKey,
  });
  const vectorStore = await MemoryVectorStore.fromDocuments(
    splitDocs,
    embeddings,
  );
  const retriever = vectorStore.asRetriever();

  // Set up the model
  const model = new ChatMistralAI({
    apiKey: mistralAPIKey,
    model: 'mistral-large-latest',
  });
  const modelWithTool = model.withStructuredOutput(labelsJsonSchema);

  // Set up the chain
  const chain = ConversationalRetrievalQAChain.fromLLM(modelWithTool, retriever);

  // Run the chain
  const response = await chain.invoke({
    question: `Recommend up to 5 relevant labels for the given article.
Keep the labels short and favour 1-word labels.
Don't make things up and stick to the content of the article.
Return the labels in the following format:
[
  {
    "name": "label1",
    "description": "description of label1"
  },
  {
    "name": "label2",
    "description": "description of label2"
  }
]`,
    chat_history: [],
    format_instructions: modelWithTool,
  });
  console.log(response);
})();
```
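Because the prompt above asks for labels embedded in a free-text answer, the bracketed JSON may need to be extracted and parsed out of the chain's response before use. A dependency-free sketch (the `extractLabels` helper is hypothetical, not a LangChain API):

```javascript
// Hypothetical helper: pull the first JSON array out of a free-text
// answer and parse it into label objects; returns [] when no
// parseable array is found.
function extractLabels(text) {
  const start = text.indexOf('[');
  const end = text.lastIndexOf(']');
  if (start === -1 || end === -1 || end < start) return [];
  try {
    const parsed = JSON.parse(text.slice(start, end + 1));
    return Array.isArray(parsed) ? parsed : [];
  } catch {
    return [];
  }
}

const answer =
  'Here are the labels: [{"name": "LangChain", "description": "Framework"}]';
console.log(extractLabels(answer)); // [ { name: 'LangChain', description: 'Framework' } ]
console.log(extractLabels('no labels here')); // []
```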