Welcome to the third and final installment in this series. In Part 2, we created and connected the backend with Strapi to help save our meetings and transcriptions. In this part of the series, we will use ChatGPT with Strapi to gain insights about the transcribed text at the click of a button. We will also add some tests and deploy the application to Strapi Cloud.
You can find the outline for this series below:
We will need custom endpoints in Strapi CMS to connect with ChatGPT, so open the terminal, change directory into strapi-transcribe-api, and run the below command:
yarn strapi generate
Doing this will begin the process of generating our custom API. Choose the API option, give it the name transcribe-insight-gpt, and select "no" when it asks us if this is for a plugin.
If we check the api directory inside the src directory in our code editor, we should see the newly created transcribe-insight-gpt API with its routes, controllers, and services directories.
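If it helps to visualise it, the generated structure should look roughly like this (assuming nothing else has been added to the api directory yet):

src/
└── api/
    └── transcribe-insight-gpt/
        ├── routes/
        ├── controllers/
        └── services/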
Let's check if it works by uncommenting the code in each file, restarting the server, and navigating to the admin dashboard. We will want to make access to this route public, so click Settings > Users & permissions plugin > Roles > Public, then scroll down to Select all on the transcribe-insight-gpt API to make the permissions public, and click save in the top right.
If we enter the following into our browser and click enter, we should get an "ok" message.
http://localhost:1337/api/transcribe-insight-gpt
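If you prefer the terminal, the same check can be run with curl (this assumes the default generated GET route and controller are still in place, which simply return "ok"):

curl http://localhost:1337/api/transcribe-insight-gpt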
Now that we have confirmed the API endpoint is working, let's connect it to OpenAI. First, install the OpenAI package: navigate to the root directory and run the below command in the terminal:
yarn add openai
Then, in the .env file, add the API key to the OPENAI environment variable:
OPENAI=<OpenAI api key here>
Now, under the transcribe-insight-gpt directory, change the code in the routes directory to the following:
module.exports = {
  routes: [
    {
      method: "POST",
      path: "/transcribe-insight-gpt/exampleAction",
      handler: "transcribe-insight-gpt.exampleAction",
      config: {
        policies: [],
        middlewares: [],
      },
    },
  ],
};
Change the code in the controllers directory to the following:
1"use strict";
2
3module.exports = {
4 exampleAction: async (ctx) => {
5 try {
6 const response = await strapi
7 .service("api::transcribe-insight-gpt.transcribe-insight-gpt")
8 .insightService(ctx);
9
10 ctx.body = { data: response };
11 } catch (err) {
12 console.log(err.message);
13 throw new Error(err.message);
14 }
15 },
16};
And the code in the services directory to the following:
1"use strict";
2const { OpenAI } = require("openai");
3const openai = new OpenAI({
4 apiKey: process.env.OPENAI,
5});
6
7/**
8 * transcribe-insight-gpt service
9 */
10
11module.exports = ({ strapi }) => ({
12 insightService: async (ctx) => {
13 try {
14 const input = ctx.request.body.data?.input;
15 const operation = ctx.request.body.data?.operation;
16
17 if (operation === "analysis") {
18 const analysisResult = await gptAnalysis(input);
19
20 return {
21 message: analysisResult,
22 };
23 } else if (operation === "answer") {
24 const answerResult = await gptAnswer(input);
25
26 return {
27 message: answerResult,
28 };
29 } else {
30 return { error: "Invalid operation specified" };
31 }
32 } catch (err) {
33 ctx.body = err;
34 }
35 },
36});
37
38async function gptAnalysis(input) {
39 const analysisPrompt =
40 "Analyse the following text and give me a brief overview of what it means:";
41 const completion = await openai.chat.completions.create({
42 messages: [{ role: "user", content: `${analysisPrompt} ${input}` }],
43 model: "gpt-3.5-turbo",
44 });
45
46 const analysis = completion.choices[0].message.content;
47
48 return analysis;
49}
50
51async function gptAnswer(input) {
52 const answerPrompt =
53 "Analyse the following text and give me an answer to the question posed: ";
54 const completion = await openai.chat.completions.create({
55 messages: [{ role: "user", content: `${answerPrompt} ${input}` }],
56 model: "gpt-3.5-turbo",
57 });
58
59 const answer = completion.choices[0].message.content;
60
61 return answer;
62}
Here, we pass two parameters to our API: the input text, which will be our transcriptions, and the operation, which will be either analysis or answer depending on what operation we want it to perform. Each operation will have a different prompt for ChatGPT.
We can check the connection to our POST route by pasting the below command into our terminal:
curl -X POST \
http://localhost:1337/api/transcribe-insight-gpt/exampleAction \
-H 'Content-Type: application/json' \
-d '{
"data": {
"input": "Comparatively, four-dimensional space has an extra coordinate axis, orthogonal to the other three, which is usually labeled w. To describe the two additional cardinal directions",
"operation": "analysis"
}
}'
And to check the answer operation, you can use the below command:
curl -X POST \
http://localhost:1337/api/transcribe-insight-gpt/exampleAction \
-H 'Content-Type: application/json' \
-d '{
"data": {
"input": "I speak without a mouth and hear without ears. I have no body, but I come alive with the wind. What am I?",
"operation": "answer"
}
}'
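If the key and route are set up correctly, both commands should come back with a JSON body shaped roughly like the following, since the controller wraps the service result in a data object (the message text itself will vary, as it comes from ChatGPT):

{
  "data": {
    "message": "...ChatGPT's response text..."
  }
}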
That's great. Now that we have our analysis and answer capabilities within a Strapi API route, we need to connect this to our front-end code and ensure we can save this information for our meetings and transcriptions.
To maintain a clear separation of concerns, let's create a separate API file for our app's analysis functionality.
In transcribe-frontend, under the api directory, create a new file called analysis.js and paste in the following code:
const baseUrl = 'http://localhost:1337';
const url = `${baseUrl}/api/transcribe-insight-gpt/exampleAction`;

export async function callInsightGpt(operation, input) {
  console.log('operation - ', operation);
  const payload = {
    data: {
      input: input,
      operation: operation,
    },
  };
  try {
    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(payload),
    });

    const data = await response.json();
    return data;
  } catch (error) {
    console.error('Error:', error);
  }
}
The code above makes a POST request to the insight API and gets the analysis back from ChatGPT.
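As a quick illustration of how this will be called from our hooks later on (the input string here is just a made-up example):

// Ask ChatGPT for an analysis of a transcribed chunk
const result = await callInsightGpt('analysis', 'We agreed to move the launch to Friday.');
// The response mirrors what our Strapi controller returns: { data: { message: '...' } }
console.log(result?.data?.message);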
Let's add a way to update our transcriptions with analysis and answers. Paste the following code into the transcriptions.js file.
export async function updateTranscription(
  updatedTranscription,
  transcriptionId
) {
  const updateURL = `${url}/${transcriptionId}`;
  const payload = {
    data: updatedTranscription,
  };

  try {
    const res = await fetch(updateURL, {
      method: 'PUT',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(payload),
    });

    return await res.json();
  } catch (error) {
    console.error('Error updating transcription:', error);
    throw error;
  }
}
The code above makes a PUT request to update the analysis or answer field on a transcription.
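For example, once we have a result back from ChatGPT, saving it against a transcription would look something like this (the id here is just an illustrative value):

// Save the analysis text onto the transcription entry with id 12
await updateTranscription({ analysis: 'A short summary of this chunk...' }, 12);

// Or save an answer instead
await updateTranscription({ answer: 'An echo.' }, 12);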
Now, let's create a hook where we can use this method. Create a file named useInsightGpt.js under the hooks directory and paste in the following code:
import { useState } from 'react';
import { callInsightGpt } from '../api/analysis';
import { updateMeeting } from '../api/meetings';
import { updateTranscription } from '../api/transcriptions';

export const useInsightGpt = () => {
  const [loadingAnalysis, setLoading] = useState(false);
  const [transcriptionIdLoading, setTranscriptionIdLoading] = useState('');
  const [analysisError, setError] = useState(null);

  const getAndSaveTranscriptionAnalysis = async (
    operation,
    input,
    transcriptionId
  ) => {
    try {
      setTranscriptionIdLoading(transcriptionId);
      // Get insight analysis / answer
      const { data } = await callInsightGpt(operation, input);
      // Use transcriptionId to save it to the transcription
      const updateTranscriptionDetails =
        operation === 'analysis'
          ? { analysis: data.message }
          : { answer: data.message };
      await updateTranscription(updateTranscriptionDetails, transcriptionId);
      setTranscriptionIdLoading('');
    } catch (e) {
      setTranscriptionIdLoading('');
      setError('Error getting analysis', e);
    }
  };

  const getAndSaveOverviewAnalysis = async (operation, input, meetingId) => {
    try {
      setLoading(true);
      // Get overview insight
      const {
        data: { message },
      } = await callInsightGpt(operation, input);
      // Use meetingId to save it to the meeting
      const updateMeetingDetails = { overview: message };
      await updateMeeting(updateMeetingDetails, meetingId);
      setLoading(false);
    } catch (e) {
      setLoading(false);
      setError('Error getting overview', e);
    }
  };

  return {
    loadingAnalysis,
    transcriptionIdLoading,
    analysisError,
    getAndSaveTranscriptionAnalysis,
    getAndSaveOverviewAnalysis,
  };
};
This hook handles the logic to get and save the overview for our meeting when it has ended. It also handles getting the analysis or answers to our transcriptions and saving them. It keeps track of which transcription we are requesting analysis for so we can show specific loading states.
Import the functionality above into the TranscribeContainer and use it. Paste the following updated code into TranscribeContainer.jsx:
import React, { useState, useEffect } from "react";
import styles from "../styles/Transcribe.module.css";
import { useAudioRecorder } from "../hooks/useAudioRecorder";
import RecordingControls from "../components/transcription/RecordingControls";
import TranscribedText from "../components/transcription/TranscribedText";
import { useRouter } from "next/router";
import { useMeetings } from "../hooks/useMeetings";
import { useInsightGpt } from "../hooks/useInsightGpt";
import { createNewTranscription } from "../api/transcriptions";

const TranscribeContainer = ({ streaming = true, timeSlice = 1000 }) => {
  const router = useRouter();
  const [meetingId, setMeetingId] = useState(null);
  const [meetingTitle, setMeetingTitle] = useState("");
  const {
    getMeetingDetails,
    saveTranscriptionToMeeting,
    updateMeetingDetails,
    loading,
    error,
    meetingDetails,
  } = useMeetings();
  const {
    loadingAnalysis,
    transcriptionIdLoading,
    analysisError,
    getAndSaveTranscriptionAnalysis,
    getAndSaveOverviewAnalysis,
  } = useInsightGpt();
  const apiKey = process.env.NEXT_PUBLIC_OPENAI_API_KEY;
  const whisperApiEndpoint = "https://api.openai.com/v1/audio/";
  const {
    recording,
    transcribed,
    handleStartRecording,
    handleStopRecording,
    setTranscribed,
  } = useAudioRecorder(streaming, timeSlice, apiKey, whisperApiEndpoint);

  const { ended } = meetingDetails;
  const transcribedHistory = meetingDetails?.transcribed_chunks?.data;

  useEffect(() => {
    const fetchDetails = async () => {
      if (router.isReady) {
        const { meetingId } = router.query;
        if (meetingId) {
          try {
            await getMeetingDetails(meetingId);
            setMeetingId(meetingId);
          } catch (err) {
            console.log("Error getting meeting details - ", err);
          }
        }
      }
    };

    fetchDetails();
  }, [router.isReady, router.query]);

  useEffect(() => {
    setMeetingTitle(meetingDetails.title);
  }, [meetingDetails]);

  const handleGetAnalysis = async (input, transcriptionId) => {
    await getAndSaveTranscriptionAnalysis("analysis", input, transcriptionId);
    // re-fetch meeting details
    await getMeetingDetails(meetingId);
  };

  const handleGetAnswer = async (input, transcriptionId) => {
    await getAndSaveTranscriptionAnalysis("answer", input, transcriptionId);
    // re-fetch meeting details
    await getMeetingDetails(meetingId);
  };

  const handleStopMeeting = async () => {
    // provide meeting overview and save it
    // getMeetingOverview(transcribed_chunks)
    await updateMeetingDetails(
      {
        title: meetingTitle,
        ended: true,
      },
      meetingId,
    );

    // re-fetch meeting details
    await getMeetingDetails(meetingId);
    setTranscribed("");
  };

  const stopAndSaveTranscription = async () => {
    // save transcription first
    let {
      data: { id: transcriptionId },
    } = await createNewTranscription(transcribed);

    // make a call to save the transcription chunk here
    await saveTranscriptionToMeeting(meetingId, meetingTitle, transcriptionId);
    // re-fetch current meeting which should have updated transcriptions
    await getMeetingDetails(meetingId);
    // Stop and clear the current transcription as it's now saved
    await handleStopRecording();
  };

  const handleGoBack = () => {
    router.back();
  };

  if (loading) return <p>Loading...</p>;

  return (
    <div style={{ margin: "20px" }}>
      {ended && (
        <button onClick={handleGoBack} className={styles.goBackButton}>
          Go Back
        </button>
      )}
      {!ended && (
        <button
          className={styles["end-meeting-button"]}
          onClick={handleStopMeeting}
        >
          End Meeting
        </button>
      )}
      {ended ? (
        <p className={styles.title}>{meetingTitle}</p>
      ) : (
        <input
          onChange={(e) => setMeetingTitle(e.target.value)}
          value={meetingTitle}
          type="text"
          placeholder="Meeting title here..."
          className={styles["custom-input"]}
        />
      )}
      <div>
        {!ended && (
          <div>
            <RecordingControls
              handleStartRecording={handleStartRecording}
              handleStopRecording={stopAndSaveTranscription}
            />
            {recording ? (
              <p className={styles["primary-text"]}>Recording</p>
            ) : (
              <p>Not recording</p>
            )}
          </div>
        )}

        {/*Current transcription*/}
        {transcribed && <h1>Current transcription</h1>}
        <TranscribedText transcribed={transcribed} current={true} />

        {/*Transcribed history*/}
        <h1>History</h1>
        {transcribedHistory
          ?.slice()
          .reverse()
          .map((val, i) => {
            const transcribedChunk = val.attributes;
            const text = transcribedChunk.text;
            const transcriptionId = val.id;
            return (
              <TranscribedText
                key={transcriptionId}
                transcribed={text}
                answer={transcribedChunk.answer}
                analysis={transcribedChunk.analysis}
                handleGetAnalysis={() =>
                  handleGetAnalysis(text, transcriptionId)
                }
                handleGetAnswer={() => handleGetAnswer(text, transcriptionId)}
                loading={transcriptionIdLoading === transcriptionId}
              />
            );
          })}
      </div>
    </div>
  );
};

export default TranscribeContainer;
Here, depending on what we need, we use the useInsightGpt hook to get the analysis or answer, and we display a loading indicator beside the transcribed text while the request is in progress.
Paste the following code into TranscribedText.jsx to update the UI accordingly.
import styles from '../../styles/Transcribe.module.css';

function TranscribedText({
  transcribed,
  answer,
  analysis,
  handleGetAnalysis,
  handleGetAnswer,
  loading,
  current,
}) {
  return (
    <div className={styles['transcribed-text-container']}>
      <div className={styles['speech-bubble-container']}>
        {transcribed && (
          <div className={styles['speech-bubble']}>
            <div className={styles['speech-pointer']}></div>
            <div className={styles['speech-text-question']}>{transcribed}</div>
            {!current && (
              <div className={styles['button-container']}>
                <button
                  className={styles['primary-button-analysis']}
                  onClick={handleGetAnalysis}
                >
                  Get analysis
                </button>
                <button
                  className={styles['primary-button-answer']}
                  onClick={handleGetAnswer}
                >
                  Get answer
                </button>
              </div>
            )}
          </div>
        )}
      </div>
      <div>
        <div className={styles['speech-bubble-container']}>
          {loading && (
            <div className={styles['analysis-bubble']}>
              <div className={styles['analysis-pointer']}></div>
              <div className={styles['speech-text-answer']}>Loading...</div>
            </div>
          )}
          {analysis && (
            <div className={styles['analysis-bubble']}>
              <div className={styles['analysis-pointer']}></div>
              <p style={{ margin: 0 }}>Analysis</p>
              <div className={styles['speech-text-answer']}>{analysis}</div>
            </div>
          )}
        </div>
        <div className={styles['speech-bubble-container']}>
          {answer && (
            <div className={styles['speech-bubble-right']}>
              <div className={styles['speech-pointer-right']}></div>
              <p style={{ margin: 0 }}>Answer</p>
              <div className={styles['speech-text-answer']}>{answer}</div>
            </div>
          )}
        </div>
      </div>
    </div>
  );
}

export default TranscribedText;
We can now request analysis and get answers to questions in real-time straight after they have been transcribed.
When the user ends the meeting, we want to provide an overview of everything discussed. Let's add this functionality to the TranscribeContainer component.
In the handleStopMeeting function, we can use the getAndSaveOverviewAnalysis method from the useInsightGpt hook:
const handleStopMeeting = async () => {
  // provide meeting overview and save it
  const transcribedHistoryText = transcribedHistory
    .map((val) => `transcribed_chunk: ${val.attributes.text}`)
    .join(', ');

  await getAndSaveOverviewAnalysis(
    'analysis',
    transcribedHistoryText,
    meetingId
  );

  await updateMeetingDetails(
    {
      title: meetingTitle,
      ended: true,
    },
    meetingId
  );

  // re-fetch meeting details
  await getMeetingDetails(meetingId);
  setTranscribed('');
};
Here, we are joining all of the transcribed chunks from the meeting and then sending them to our ChatGPT API for analysis, where they will be saved for our meeting.
Now, let's display the overview once it has been loaded. Add the following code above the RecordingControls:
{loadingAnalysis && <p>Loading Overview...</p>}

{overview && (
  <div>
    <h1>Overview</h1>
    <p>{overview}</p>
  </div>
)}
Then, destructure the overview from the meeting details by updating the existing destructuring line below our hook declarations:
const { ended, overview } = meetingDetails;
To summarise, we listen to the loading indicator from useInsightGpt and check if overview is present on the meeting; if it is, we display it.
We have a couple of errors that could be caused by one of our hooks; let's create a component to handle them.
Create a file called ErrorToast.js under the components directory:
import { useEffect, useState } from 'react';

const ErrorToast = ({ message, duration }) => {
  const [visible, setVisible] = useState(true);

  useEffect(() => {
    const timer = setTimeout(() => {
      setVisible(false);
    }, duration);

    return () => clearTimeout(timer);
  }, [duration]);

  if (!visible) return null;

  return <div className="toast">{message}</div>;
};

export default ErrorToast;
And add the following CSS code to globals.css under the styles directory:
.toast {
  position: fixed;
  top: 20px;
  left: 50%;
  transform: translateX(-50%);
  background-color: rgba(255, 0, 0, 0.8);
  color: white;
  padding: 16px;
  border-radius: 8px;
  box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
  z-index: 1000;
  transition: opacity 0.5s ease-out;
  opacity: 1;
  display: flex;
  align-items: center;
  justify-content: center;
  text-align: center;
}

.toast-hide {
  opacity: 0;
}
Now, we can use this error component in TranscribeContainer; whenever we encounter an unexpected error from the API, we will show this error toast briefly to notify the user that something went wrong.
Import the ErrorToast at the top of the file and then paste the following code above the Go Back button in the return statement of our component:
{error || analysisError ? (
  <ErrorToast message={error || analysisError} duration={5000} />
) : null}
Now, let's add a test to ensure our hooks are working as we expect them to and to alert us to any breaking changes in the code that might be introduced later. First, add the packages below so we can use jest in our project.
yarn add -D jest jest-environment-jsdom @testing-library/react @testing-library/jest-dom @testing-library/react-hooks
Then create a jest.config.js file in the root of the frontend project and add the following code:
const nextJest = require('next/jest');
const createJestConfig = nextJest({
  dir: './',
});
const customJestConfig = {
  moduleDirectories: ['node_modules', '<rootDir>/'],
  testEnvironment: 'jest-environment-jsdom',
};
module.exports = createJestConfig(customJestConfig);
This just sets up Jest ready to be used in Next.js.
Create a test directory and an index.test.js file with the following code:
import { renderHook, act } from '@testing-library/react-hooks';
import { useInsightGpt } from '../hooks/useInsightGpt';
import { callInsightGpt } from '../api/analysis';
import { updateMeeting } from '../api/meetings';
import { updateTranscription } from '../api/transcriptions';

jest.mock('../api/analysis');
jest.mock('../api/meetings');
jest.mock('../api/transcriptions');

describe('useInsightGpt', () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  it('should handle transcription analysis successfully', async () => {
    const mockData = { data: { message: 'Test analysis message' } };
    callInsightGpt.mockResolvedValueOnce(mockData);
    updateTranscription.mockResolvedValueOnce({});

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveTranscriptionAnalysis(
        'analysis',
        'input',
        'transcriptionId'
      );
    });

    expect(callInsightGpt).toHaveBeenCalledWith('analysis', 'input');
    expect(updateTranscription).toHaveBeenCalledWith(
      { analysis: 'Test analysis message' },
      'transcriptionId'
    );
    expect(result.current.transcriptionIdLoading).toBe('');
    expect(result.current.analysisError).toBe(null);
  });

  it('should handle overview analysis successfully', async () => {
    const mockData = { data: { message: 'Test overview message' } };
    callInsightGpt.mockResolvedValueOnce(mockData);
    updateMeeting.mockResolvedValueOnce({});

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveOverviewAnalysis(
        'overview',
        'input',
        'meetingId'
      );
    });

    expect(callInsightGpt).toHaveBeenCalledWith('overview', 'input');
    expect(updateMeeting).toHaveBeenCalledWith(
      { overview: 'Test overview message' },
      'meetingId'
    );
    expect(result.current.loadingAnalysis).toBe(false);
    expect(result.current.analysisError).toBe(null);
  });

  it('should handle errors in transcription analysis', async () => {
    const mockError = new Error('Test error');
    callInsightGpt.mockRejectedValueOnce(mockError);

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveTranscriptionAnalysis(
        'analysis',
        'input',
        'transcriptionId'
      );
    });

    expect(result.current.transcriptionIdLoading).toBe('');
    expect(result.current.analysisError).toBe(
      'Error getting analysis',
      mockError
    );
  });

  it('should handle errors in overview analysis', async () => {
    const mockError = new Error('Test error');
    callInsightGpt.mockRejectedValueOnce(mockError);

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveOverviewAnalysis(
        'overview',
        'input',
        'meetingId'
      );
    });

    expect(result.current.loadingAnalysis).toBe(false);
    expect(result.current.analysisError).toBe(
      'Error getting overview',
      mockError
    );
  });
});
Because the hooks use our Strapi API, we need a way to replace the data we're getting back from the API calls. We're using jest.mock to intercept the APIs and send back mock data. This way, we can test our hooks' internal logic without calling the API.
In the first two tests, we mock the API call and return some data, then render our hook and call the correct function. We then check if the correct functions have been called with the correct data from inside the hook. The last two tests just test that errors are handled correctly.
Add the following under scripts in the package.json file:
1"test": "jest --watch"
Now open the terminal, navigate to the root directory of the frontend project, and run the following command to check if the tests are passing:
yarn test
You should see a success message like the one below:
As an optional challenge, see if you can apply what we did with testing useInsightGpt to testing the other hooks; a possible starting point is sketched below.
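Here is a minimal sketch of how a test for the useMeetings hook could begin, assuming its API module is mocked the same way (the exact API functions it calls, any mockResolvedValueOnce setup it needs, and the assertions you make will depend on your implementation from Part 2):

import { renderHook, act } from '@testing-library/react-hooks';
import { useMeetings } from '../hooks/useMeetings';

// Mock the meetings API module so no real network calls are made
jest.mock('../api/meetings');

describe('useMeetings', () => {
  it('fetches meeting details without making real requests', async () => {
    // You may need to give the mocked API functions resolved values first,
    // e.g. with mockResolvedValueOnce, depending on how useMeetings calls them
    const { result } = renderHook(() => useMeetings());

    await act(async () => {
      await result.current.getMeetingDetails('meetingId');
    });

    // Assert on result.current.meetingDetails, loading, and error here,
    // based on how your useMeetings hook stores its state
  });
});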
Here is what our application looks like.
Finally, we have the finished application up and running correctly with some tests. The time has come to deploy our project to Strapi Cloud.
First, navigate to Strapi and click on "cloud" at the top right.
Connect with GitHub.
From the dashboard, click on Create project.
Choose your GitHub account and the correct repo, fill out the display name, and choose the region.
Now, if you have the same file structure as me, which you should do if you've been following along, you will just need to add the base directory: click on Show advanced settings and enter a base directory of /strapi-transcribe-api. Then add all of the environment variables that can be found in the .env file in the root of the Strapi project.
Once you have added all of these, click on "create project." This will bring you to a loading screen, and then you will be redirected to the build logs; here, you can just wait for the build to finish.
Once it has finished building, you can click on Overview from the top left. This should direct you to the dashboard, where you will find the details of your deployment and the app URL under Overview on the right.
First, click on your app URL, which will open a new tab and direct you to the welcome page of your Strapi app. Then, create a new admin user, which will log you into the dashboard.
This is a new deployment, and as such, it won't have any of the data we had saved locally; it also won't have carried across the public settings we had on the API, so click on Settings > Users & Permissions Plugin > Roles > Public, expand and Select all on Meeting, Transcribe-insight-gpt, and Transcribed-chunk, and then click save in the top right.
Once again, let's check that our deployment was successful by running the below command in the terminal. Please replace https://yourDeployedUrlHere.com with the URL shown in the Strapi Cloud dashboard.
curl -X POST \
https://yourDeployedUrlHere.com/api/transcribe-insight-gpt/exampleAction \
-H 'Content-Type: application/json' \
-d '{
"data": {
"input": "I speak without a mouth and hear without ears. I have no body, but I come alive with the wind. What am I?",
"operation": "answer"
}
}'
Now that we have the API deployed and ready to use, let's deploy our frontend with Vercel.
First, we will need to change the baseUrl in our API files to point to our newly deployed Strapi instance. Add the following variable to .env.local:
NEXT_PUBLIC_STRAPI_URL="your strapi cloud url"
Now go ahead and replace the current value of baseUrl with the following in all three API files:
const baseUrl =
  process.env.NODE_ENV === 'production'
    ? process.env.NEXT_PUBLIC_STRAPI_URL
    : 'http://localhost:1337';
This will just check if the app is running in production. If so, it will use our deployed Strapi instance. If not, it will revert to localhost. Make sure to push these changes to GitHub.
Now navigate to Vercel and sign up if you don't already have an account.
Now, let's create a new project by continuing with GitHub.
Once you have verified your account, import the correct GitHub repo.
Now we will fill out some configuration details: give the project a name, change the framework preset to Next.js, change the root directory to 'transcribe-frontend', and add the two environment variables from the .env.local file in the Next.js project.
Now click deploy and wait for it to finish. Once deployed, it should redirect you to a success page with a preview of the app.
Now click continue to the dashboard, where you can find information about the app, such as the domain and the deployment logs.
From here, you can click visit to be directed to the app's frontend deployment.
So there you have it! You have now built your transcription app from start to finish. We have gone over how to achieve this with several cutting-edge technologies. We used Strapi for the backend CMS and custom ChatGPT integration, demonstrating how quickly and easily this technology can make building complex web apps. We also covered some architectural patterns with error handling and testing in Next.js, and finally, we deployed the backend to Strapi Cloud and the frontend to Vercel. I hope that you have found this series eye-opening and that it will encourage you to bring your ideas to life.
Hey! 👋 I'm Mike, a seasoned web developer with 5 years of full-stack expertise. Passionate about tech's impact on the world, I'm on a journey to blend code with compelling stories. Let's explore the tech landscape together! 🚀✍️