Frequently Asked Questions (FAQs) offer users immediate access to answers for common queries. However, as the volume and complexity of inquiries grow, managing FAQs manually becomes unsustainable. This is where an AI-powered FAQ system comes in.
In this tutorial, you'll learn how to create an AI-driven FAQ system using Strapi, LangChain.js, and OpenAI. This system will allow users to pose queries related to Strapi CMS and receive accurate responses generated by a GPT model.
To comfortably follow along with this tutorial, you should have Node.js installed, a basic understanding of JavaScript and React, and an OpenAI account.
You need to configure the data source, which, in this case, is Strapi. Then, obtain an OpenAI API key, initialize a React project, and finally install the required dependencies.
Strapi provides a centralized platform for managing data, which makes it easier to organize, update, and maintain the FAQ content. It also automatically generates a RESTful API for accessing the content stored in its database.
If you don't have Strapi installed on your system, proceed to your terminal and run the following command:
npx create-strapi-app@latest my-project
The above command will install Strapi on your system and launch the admin registration page in your browser.
Fill in your credentials in order to access the Strapi dashboard.
On the dashboard, under Content-Type Builder, create a new collection type and name it FAQ. Then, add a Question field and an Answer field to the FAQ collection. The Question field should be of type Text, since it will hold plain text input. For the Answer field, use the Rich Text (Blocks) type, which allows formatted text.
Proceed to the Content Manager and add entries to the FAQ collection type. Each entry should have an FAQ question and its corresponding answer. Make sure you publish each entry. Create as many entries as you wish.
Now that you have the FAQ data in Strapi, you need to expose it via an API so the application you will create can consume it.
To achieve this, proceed to Settings > Users & Permissions Plugin > Roles > Public.
Click on Faq. Under Permissions, check the find and findOne actions and save.
This will allow you to retrieve the FAQ data via the http://localhost:1337/api/faqs endpoint. Here is how the data looks via a GET request.
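For reference, with a couple of published entries the response should look something like this (trimmed for brevity; your IDs, timestamps, and entries will differ):

{
  "data": [
    {
      "id": 1,
      "attributes": {
        "Question": "What is Strapi?",
        "Answer": [
          {
            "type": "paragraph",
            "children": [{ "type": "text", "text": "Strapi is an open-source headless CMS built on Node.js." }]
          }
        ]
      }
    }
  ],
  "meta": { "pagination": { "page": 1, "pageSize": 25, "pageCount": 1, "total": 1 } }
}

The Question and Answer keys mirror the field names you gave the collection, and the rich-text answer arrives as an array of block objects. This structure matters when extracting the plain text later.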
Strapi is now configured and the FAQ data is ready for use.
This is the final step needed to complete the project setup. Create a new directory in your preferred location and open it with an IDE like VS Code. Then run the following command in the terminal:
npx create-react-app faq-bot
The command will create a new React.js application named faq-bot, set up and ready to be developed further.
Then navigate to the faq-bot directory and run the following command to install all the dependencies you need to develop the FAQ AI application:
yarn add axios langchain @langchain/openai express cors dotenv
If you don't have yarn installed, install it using this command:
npm install -g yarn
You can use npm to install the dependencies instead, but during development, I found yarn better at handling the dependency conflicts that occurred.
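If you do opt for npm, the equivalent install command is:

npm install axios langchain @langchain/openai express cors dotenv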
The dependencies will help you achieve the following:
- axios: To fetch data from the Strapi CMS API and also to fetch responses from the Express server.
- langchain: To implement the Retrieval Augmented Generation (RAG) part of the application.
- @langchain/openai: To handle communication with the OpenAI API.
- express: To create a simple server to serve the frontend.
- cors: To ensure the server responds correctly to requests from different origins.
- dotenv: To load the OpenAI API key from an environment file (the server code below imports it).

The core of your FAQ system will reside in an Express.js server. It will leverage the RAG (Retrieval Augmented Generation) approach.
The RAG approach enhances the accuracy and richness of responses by combining information retrieval with large language models (LLMs) to produce more factually grounded answers. A retriever locates relevant passages from external knowledge sources, such as the FAQs stored in Strapi CMS. These passages, along with the user's query, are then fed into the LLM. By leveraging both its internal knowledge and the retrieved context, the LLM generates responses that are more informative and accurate. This is the process that powers the AI FAQ system.
The server will be responsible for managing incoming requests, retrieving FAQ data from Strapi, processing user queries, and utilizing RAG for generating AI-driven responses.
At the root of your faq-bot project, create a file and name it server.mjs. The .mjs extension indicates that the JavaScript code is written in the ECMAScript module (ESM) format, the standard mechanism for modularizing JavaScript code.
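For example, the two module formats differ in how they import and export code:

// server.mjs (ECMAScript module format, used in this tutorial)
import express from "express";
export const app = express();

// server.js (the older CommonJS equivalent)
const express = require("express");
module.exports.app = express();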
Then open the server.mjs file and import the libraries you installed earlier, along with some specific modules from LangChain. Next, define the port on which the server will listen for incoming requests. Finally, configure the middleware functions that handle JSON parsing and CORS.
import express from "express";
import axios from "axios";
import dotenv from "dotenv";
import cors from "cors";
import { ChatOpenAI } from "@langchain/openai";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { createRetrievalChain } from "langchain/chains/retrieval";
import { createHistoryAwareRetriever } from "langchain/chains/history_aware_retriever";
import { MessagesPlaceholder } from "@langchain/core/prompts";
import { HumanMessage, AIMessage } from "@langchain/core/messages";
import { Document } from "langchain/document";

dotenv.config();

const app = express();
const PORT = process.env.PORT || 30080;

// Middleware to handle JSON requests
app.use(express.json());
app.use(cors()); // Enable CORS for all routes
You will understand what each library does as we move on with the code.
The rest of the code in the "Creating the FAQ AI App Backend" section will reside in the same server.mjs file as the code above. The code in each subsection is a continuation of the code explained in the previous subsection.
To interact with the OpenAI language model, you'll need to initialize it with your API key and desired settings.
// Instantiate Model
const model = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  temperature: 0.7,
  openAIApiKey: process.env.OPENAI_API_KEY,
});
The API key is stored as an environment variable. Proceed to the root folder of your project, create a file named .env, and store your OpenAI API key there as follows (do not commit this file to version control):
OPENAI_API_KEY=Your API Key
Temperature is a hyperparameter that controls the randomness of the model's output: lower values make responses more focused and deterministic, while higher values make them more varied and creative.
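If you want to verify that the key and model are wired up correctly before building the rest of the server, a quick throwaway check like the following should work (top-level await is available in .mjs files; remove it once confirmed):

// Temporary sanity check: ask the model a trivial question
const testReply = await model.invoke("Reply with one word: pong");
console.log(testReply.content); // should print something like "pong"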
The system relies on pre-defined FAQ data stored in Strapi. Define a function that fetches this data using Axios by making a GET request to the Strapi API endpoint you configured earlier.
// Fetch FAQ data
const fetchData = async () => {
  try {
    const response = await axios.get("http://localhost:1337/api/faqs");
    return response.data;
  } catch (error) {
    console.error("Error fetching data:", error.message);
    // Return an object shaped like the API response so callers don't crash
    return { data: [] };
  }
};
After fetching the data, extract the questions and their corresponding answers.
const extractQuestionsAndAnswers = (data) => {
  return data.data.map((item) => {
    return {
      question: item.attributes.Question,
      answer: item.attributes.Answer[0].children[0].text,
    };
  });
};
The above function maps over the data array and extracts the question and answer attributes from each item. Note that it reads only the first text node of the first rich-text block, so only the first paragraph of each answer is used.
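Given the sample API response shown earlier, the function would return an array shaped like this (contents depend on your entries):

[
  {
    question: "What is Strapi?",
    answer: "Strapi is an open-source headless CMS built on Node.js."
  }
]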
To efficiently retrieve relevant answers, create a vector store containing embeddings of the FAQ documents.
// Populate Vector Store
const populateVectorStore = async () => {
  const data = await fetchData();
  const questionsAndAnswers = extractQuestionsAndAnswers(data);

  // Create documents from the FAQ data
  const docs = questionsAndAnswers.map(({ question, answer }) => {
    return new Document({ pageContent: `${question}\n${answer}`, metadata: { question } });
  });

  // Text Splitter
  const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 100, chunkOverlap: 20 });
  const splitDocs = await splitter.splitDocuments(docs);

  // Instantiate Embeddings function
  const embeddings = new OpenAIEmbeddings();

  // Create the Vector Store
  const vectorstore = await MemoryVectorStore.fromDocuments(splitDocs, embeddings);
  return vectorstore;
};
The above code uses the questions and answers data to create document objects. It then splits them into smaller chunks, computes embeddings, and constructs a vector store.
The vector store holds representations of the FAQ data, facilitating efficient retrieval and processing within the AI FAQ system.
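If you want to see retrieval in isolation before wiring up the full chain, you could run a quick similarity search against the store. This is a throwaway sketch; the query and the matches depend on your FAQ entries:

// Temporary test: fetch the two closest FAQ chunks for a query
const store = await populateVectorStore();
const matches = await store.similaritySearch("How many records does the API return by default?", 2);
console.log(matches.map((doc) => doc.pageContent));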
With the vector store populated, you need a way to retrieve only the information relevant to a user's query, and then use an LLM to compose a good response based on the retrieved information and the chat history. To achieve this, you will implement a function that creates a retriever, defines prompts for AI interaction, and invokes a retrieval chain.
// Logic to answer from Vector Store
const answerFromVectorStore = async (chatHistory, input) => {
  const vectorstore = await populateVectorStore();

  // Create a retriever from vector store
  const retriever = vectorstore.asRetriever({ k: 4 });

  // Create a HistoryAwareRetriever which will be responsible for
  // generating a search query based on both the user input and
  // the chat history
  const retrieverPrompt = ChatPromptTemplate.fromMessages([
    new MessagesPlaceholder("chat_history"),
    ["user", "{input}"],
    [
      "user",
      "Given the above conversation, generate a search query to look up in order to get information relevant to the conversation",
    ],
  ]);

  // This chain will return a list of documents from the vector store
  const retrieverChain = await createHistoryAwareRetriever({
    llm: model,
    retriever,
    rephrasePrompt: retrieverPrompt,
  });

  // Define the prompt for the final chain
  const prompt = ChatPromptTemplate.fromMessages([
    [
      "system",
      `You are a Strapi CMS FAQs assistant. Your knowledge is limited to the information I provide in the context.
      You will answer this question based solely on this information: {context}. Do not make up your own answer.
      If the answer is not present in the information, you will respond 'I don't have that information.'
      If a question is outside the context of Strapi, you will respond 'I can only help with Strapi related questions.'`,
    ],
    new MessagesPlaceholder("chat_history"),
    ["user", "{input}"],
  ]);

  // The createStuffDocumentsChain stuffs the retrieved documents
  // into the {context} placeholder of the prompt
  const chain = await createStuffDocumentsChain({
    llm: model,
    prompt: prompt,
  });

  // Create the conversation chain, which combines the retrieverChain
  // and the stuff-documents chain to get an answer
  const conversationChain = await createRetrievalChain({
    combineDocsChain: chain,
    retriever: retrieverChain,
  });

  // Get the response
  const response = await conversationChain.invoke({
    chat_history: chatHistory,
    input: input,
  });

  // Log the response to the server console
  console.log("Server response:", response);
  return response;
};
The above code creates a retriever for search queries and configures a history-aware retriever. It then defines prompts for AI interaction, constructs a conversation chain, and invokes it with chat history and input. Finally, it logs and returns the generated response.
Now that everything for handling a user request is ready, expose a POST endpoint, /chat, to handle incoming requests from clients. The route handler will parse the input data, format the chat history, and pass both to the answerFromVectorStore function responsible for answering questions.
// Route to handle incoming requests
app.post("/chat", async (req, res) => {
  const { chatHistory, input } = req.body;

  // Convert the chatHistory to an array of HumanMessage and AIMessage objects
  const formattedChatHistory = chatHistory.map((message) => {
    if (message.role === "user") {
      return new HumanMessage(message.content);
    } else {
      return new AIMessage(message.content);
    }
  });

  const response = await answerFromVectorStore(formattedChatHistory, input);
  res.json(response);
});

// Start the server
app.listen(PORT, () => {
  console.log(`Server is running on http://localhost:${PORT}`);
});
Run the following command in your terminal to start the server:
node server.mjs
The server will run on the specified port.
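If everything is wired up correctly, the listen callback should print the following (assuming you kept the default port of 30080):

Server is running on http://localhost:30080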
Use Postman or any other API client to test the server. Make sure the payload you send is in this format:
{
  "chatHistory": [
    {
      "role": "user",
      "content": "What is Strapi?"
    },
    {
      "role": "assistant",
      "content": "Strapi is an open-source headless CMS (Content Management System)"
    }
  ],
  "input": "Does Strapi have a default limit"
}
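If you prefer the command line to Postman, the same request can be sent with curl (assuming the server is running on port 30080):

curl -X POST http://localhost:30080/chat \
  -H "Content-Type: application/json" \
  -d '{"chatHistory": [], "input": "What is Strapi?"}'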
You can change the content and input data to your liking. Below is a sample result after you make the POST request:
"answer": "The default limit for records in the Strapi API is 100."
That is the answer part of the response, but the full response contains much more data, including the documents used to answer the question.
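At the time of writing, the object returned by createRetrievalChain looks roughly like this (trimmed; the exact documents and wording will vary with your data):

{
  "input": "Does Strapi have a default limit",
  "chat_history": [ ... ],
  "context": [
    {
      "pageContent": "Does Strapi have a default limit?\nYes, the default limit is 100 records.",
      "metadata": { "question": "Does Strapi have a default limit?" }
    }
  ],
  "answer": "The default limit for records in the Strapi API is 100."
}

The context array contains the documents retrieved from the vector store, which is useful for debugging why the model answered the way it did.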
With the core of your system complete, you need a user interface through which users will interact with it. Under src in your React app, create a ChatbotUI.js file and paste the following code:
import React, { useState, useEffect, useRef } from 'react';
import axios from 'axios';
import './ChatbotUI.css'; // Assuming the CSS file exists

const ChatbotUI = () => {
  const [chatHistory, setChatHistory] = useState([]);
  const [userInput, setUserInput] = useState('');
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState(null);
  const [isExpanded, setIsExpanded] = useState(true); // State for chat window expansion
  const chatContainerRef = useRef(null);

  useEffect(() => {
    // Scroll to the bottom of the chat container when new messages are added
    if (isExpanded) {
      chatContainerRef.current.scrollTop = chatContainerRef.current.scrollHeight;
    }
  }, [chatHistory, isExpanded]);

  const handleUserInput = (e) => {
    setUserInput(e.target.value);
  };

  const handleSendMessage = async () => {
    if (userInput.trim() !== '') {
      const newMessage = { role: 'user', content: userInput };
      const updatedChatHistory = [...chatHistory, newMessage];
      setChatHistory(updatedChatHistory);
      setUserInput('');
      setIsLoading(true);

      try {
        const response = await axios.post('http://localhost:30080/chat', {
          chatHistory: updatedChatHistory,
          input: userInput,
        });
        const botMessage = {
          role: 'assistant',
          content: response.data.answer,
        };
        setChatHistory([...updatedChatHistory, botMessage]);
      } catch (error) {
        console.error('Error sending message:', error);
        setError('Error sending message. Please try again later.');
      } finally {
        setIsLoading(false);
      }
    }
  };

  const toggleChatWindow = () => {
    setIsExpanded(!isExpanded);
  };

  return (
    <div className="chatbot-container">
      <button className="toggle-button" onClick={toggleChatWindow}>
        {isExpanded ? 'Collapse Chat' : 'Expand Chat'}
      </button>
      {isExpanded && (
        <div className="chat-container" ref={chatContainerRef}>
          {chatHistory.map((message, index) => (
            <div
              key={index}
              className={`message-container ${
                message.role === 'user' ? 'user-message' : 'bot-message'
              }`}
            >
              <div
                className={`message-bubble ${
                  message.role === 'user' ? 'user-bubble' : 'bot-bubble'
                }`}
              >
                <div className="message-content">{message.content}</div>
              </div>
            </div>
          ))}
          {error && <div className="error-message">{error}</div>}
        </div>
      )}
      <div className="input-container">
        <input
          type="text"
          placeholder="Type your message..."
          value={userInput}
          onChange={handleUserInput}
          onKeyPress={(e) => {
            if (e.key === 'Enter') {
              handleSendMessage();
            }
          }}
          disabled={isLoading}
        />
        <button onClick={handleSendMessage} disabled={isLoading}>
          {isLoading ? 'Loading...' : 'Send'}
        </button>
      </div>
    </div>
  );
};

export default ChatbotUI;
The above code creates a user interface for interacting with the AI-powered FAQ system hosted on the server. It allows users to send messages, view the chat history, and receive responses from the server, and it maintains state for the chat history, user input, loading status, and error handling. When a user sends a message, the component sends an HTTP POST request to the server's /chat endpoint, passing along the updated chat history and user input. Upon receiving a response from the server, it updates the chat history with the bot's message.
Create another file under the src directory, name it ChatbotUI.css, and paste the following code, which will be responsible for styling the user interface.
.chatbot-container {
  display: flex;
  flex-direction: column;
  background-color: #f5f5f5;
  padding: 5px;
  position: fixed;
  bottom: 10px;
  right: 10px;
  width: 300px;
  z-index: 10;
}

.toggle-button {
  padding: 5px 10px;
  background-color: #ddd;
  border: 1px solid #ccc;
  border-radius: 5px;
  cursor: pointer;
  margin-bottom: 5px;
}

.chat-container {
  height: 300px;
  overflow-y: auto;
}

.message-container {
  display: flex;
  justify-content: flex-start;
  margin-bottom: 5px; /* Reduced margin for tighter spacing */
}

.message-bubble {
  max-width: 70%;
  padding: 5px; /* Reduced padding for smaller bubbles */
  border-radius: 10px;
}

.user-bubble {
  background-color: #007bff;
  color: white;
}

.bot-bubble {
  background-color: #f0f0f0;
  color: black;
}

.input-container {
  align-self: flex-end;
  display: flex;
  align-items: center;
  padding: 5px;
}

.input-container input {
  flex: 1;
  padding: 5px;
  border: 1px solid #ccc;
  border-radius: 5px;
  margin-right: 10px;
}

.input-container button {
  padding: 10px 20px;
  background-color: #007bff;
  color: white;
  border: none;
  border-radius: 5px;
  cursor: pointer;
}
The above code defines the layout and styling for the user interface. It positions the chat interface fixed at the bottom right corner of the screen, styles message bubbles, and formats the input field and send button for user interaction.
In the App.js file, render the user interface.
import React from 'react';
import ChatbotUI from './ChatbotUI';

const App = () => {
  return (
    <div>
      <ChatbotUI />
    </div>
  );
};

export default App;
You are now done creating the AI-powered FAQ system.
Open a new terminal in the same directory where you ran your server and start your React app using the following command:
yarn start
You can now start asking the system FAQs about Strapi CMS. The system's knowledge depends on the FAQ data you have stored in Strapi.
The following GIF shows how the system responds:
When asked about a topic outside Strapi, the system reminds the user that it only deals with Strapi CMS. Likewise, if an answer is not present in the FAQ data stored in Strapi CMS, it responds that it does not have that information.
Congratulations on creating an AI & Strapi-powered FAQ system. In this tutorial, you've learned how to leverage the strengths of Strapi, LangChain.js, and OpenAI.
The system integrates seamlessly with Strapi, allowing you to effortlessly manage your FAQ data through a centralized platform. LangChain.js facilitates Retrieval Augmented Generation (RAG), enhancing the accuracy and comprehensiveness of the system's responses. OpenAI provides the large language model that the system uses to generate informative and relevant answers to user queries.
Denis works as a software developer who enjoys writing guides to help other developers. He has a bachelor's in computer science. He loves hiking and exploring the world.