These integration guides are not official documentation and the Strapi Support Team will not provide assistance with them.
What Is Elasticsearch?
Elasticsearch is a distributed search and analytics engine built on Apache Lucene that powers modern search applications. It exposes a RESTful HTTP API that works with any programming language or platform.
Elasticsearch handles full-text search far better than traditional database queries. Its distributed architecture lets you scale horizontally by adding nodes to your cluster, with built-in data replication and fault tolerance.
Near real-time indexing sets Elasticsearch apart from batch-processing search solutions. Your content becomes searchable almost instantly after indexing, which is crucial when dealing with frequently changing content.
You can start with a single node and expand to hundreds as your data and query load grow. The distributed architecture handles most scaling complexity, though performance optimization becomes critical at scale.
Why Integrate Elasticsearch with Strapi
Combining Strapi with Elasticsearch creates a powerful ecosystem that enhances content discovery while preserving Strapi's flexible content management capabilities. Strapi excels at content creation and API delivery; Elasticsearch adds the sophisticated search layer that a headless CMS doesn't provide on its own.
Among the numerous benefits of Strapi integrations, combining Strapi with Elasticsearch particularly enhances search capabilities in your application. The integration offers four key benefits:
1. Enhanced search with full-text capabilities, fuzzy matching, and relevance scoring that understands context and handles typos
2. Real-time indexing that automatically synchronizes search results with content changes via lifecycle hooks
3. Advanced query capabilities including faceted search, geographic filtering, and boolean logic for sophisticated experiences like multi-attribute filtering
4. Scalability that maintains performance as content volume grows
Elasticsearch adds enterprise-grade search functionality that traditional databases can't match, complementing Strapi's enterprise content management features.
This integration is particularly valuable for content-heavy applications like e-commerce platforms, knowledge bases, news sites, and documentation portals. Consider implementing it when basic search no longer meets user expectations due to slow performance, poor relevance, or limited filtering options.
How to Integrate Elasticsearch with Strapi
Setting up Elasticsearch with Strapi requires careful planning and systematic implementation. This Strapi integration guide walks you through the entire process, from initial setup to building functional search endpoints.
Prerequisites
Before beginning the integration, ensure you have the following technical knowledge and tools:
- JavaScript/TypeScript proficiency: Understanding of asynchronous programming, API development, and modern JavaScript features
- Node.js experience: Familiarity with Node.js runtime and package management
- React knowledge: For building front-end search interfaces that consume your search API
- Docker fundamentals: Understanding containerization concepts and Docker Compose
- CLI comfort: Ability to work with command-line tools and terminal commands
- Basic understanding of headless CMS concepts: Familiarity with the principles of headless content management systems
Setting Up Elasticsearch
You can deploy Elasticsearch using two primary approaches: Elastic Cloud (managed service) or a self-hosted Docker setup. Each has distinct advantages depending on your project requirements.
Elastic Cloud Setup
Elastic Cloud offers a managed solution with automatic updates, built-in security, and simplified scaling. To get started:
- Sign up for an Elastic Cloud account.
- Create a deployment with your desired specifications.
- Note your cloud ID, API key, and endpoint for later configuration.
- The connection will use cloud-specific credentials rather than traditional username/password authentication.
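Before moving on, you can confirm the deployment responds by sending a request with the API key you generated. A quick check with curl (substitute your own endpoint and key):
curl -H "Authorization: ApiKey your_api_key" https://your-deployment.es.example-region.cloud:9243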
Self-Hosted Docker Setup
For complete control over your infrastructure, Docker provides an excellent self-hosted option. Create a docker-compose.yml file:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.7.0
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=your_secure_password
    ports:
      - "9200:9200"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
volumes:
  elasticsearch-data:
Launch your Elasticsearch instance:
docker-compose up -d
After starting, retrieve the CA certificate from the container:
docker cp elasticsearch:/usr/share/elasticsearch/config/certs/http_ca.crt .
Your search instance will be accessible at https://localhost:9200 with the credentials you specified.
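You can verify the instance is up using the CA certificate you just copied and the password from your docker-compose.yml:
curl --cacert http_ca.crt -u elastic:your_secure_password https://localhost:9200
A JSON response containing the cluster name and version confirms the container is healthy.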
Installing and Configuring Strapi
If you don't have a Strapi project yet, create one using the CLI:
npx create-strapi-app@latest my-project --quickstart
cd my-project
For existing Strapi projects, ensure you're running a compatible version. The integration works with both Strapi v4 and v5, though plugin versions may differ.
Installing Required Dependencies
Install the Elasticsearch JavaScript client, which enables communication between Strapi and your search instance:
# Using npm
npm install @elastic/elasticsearch
# Using Yarn
yarn add @elastic/elasticsearch
You can also install the dedicated Strapi plugin for a more streamlined experience:
# For Strapi v5
npm install @geeky-biz/strapi-plugin-elasticsearch
# For Strapi v4, use version 0.0.8
npm install @geeky-biz/strapi-plugin-elasticsearch@0.0.8
Configuring Environment Variables
Create or update your .env file with search connection details:
# For self-hosted setup
ELASTIC_HOST=https://localhost:9200
ELASTIC_USERNAME=elastic
ELASTIC_PASSWORD=your_secure_password
ELASTIC_CERT_NAME=http_ca.crt
ELASTIC_ALIAS_NAME=content_index
# For Elastic Cloud setup
ELASTIC_CLOUD_ID=your_cloud_id
ELASTIC_API_KEY=your_api_key
Never commit these credentials to version control. The environment variables ensure secure, configurable connections across different deployment environments.
Creating the Elasticsearch Client
Build a reusable client to handle all interactions with your search engine. Create a helper file at helpers/elastic_client.js:
const { Client } = require('@elastic/elasticsearch');
const fs = require('fs');
const connector = () => {
// For self-hosted setup
if (process.env.ELASTIC_HOST) {
return new Client({
node: process.env.ELASTIC_HOST,
auth: {
username: process.env.ELASTIC_USERNAME,
password: process.env.ELASTIC_PASSWORD
},
tls: {
ca: fs.readFileSync(process.env.ELASTIC_CERT_NAME || './http_ca.crt'),
rejectUnauthorized: false // local development convenience; remove in production so the CA above is actually verified
}
});
}
// For Elastic Cloud setup
return new Client({
cloud: {
id: process.env.ELASTIC_CLOUD_ID
},
auth: {
apiKey: process.env.ELASTIC_API_KEY
}
});
};
const testConn = async (client) => {
try {
const response = await client.info();
console.log('Elasticsearch connected successfully:', response);
return response;
} catch (error) {
console.error('Elasticsearch connection error:', error);
throw error;
}
};
module.exports = {
connector,
testConn
};
This client automatically handles both self-hosted and cloud configurations based on your environment variables.
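To confirm the connection when your application starts, you can call testConn from Strapi's bootstrap function. A minimal sketch, assuming the default src/index.js entry point and that the helpers folder sits at the project root:
// src/index.js
const { connector, testConn } = require('../helpers/elastic_client');

module.exports = {
  register() {},
  async bootstrap() {
    try {
      // Logs cluster info on success, throws if Elasticsearch is unreachable
      await testConn(connector());
    } catch (error) {
      console.error('Elasticsearch is not reachable; search features will be unavailable until it is.');
    }
  },
};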
Implementing Content Indexing
To keep your search index synchronized with Strapi content, implement lifecycle hooks that automatically index content when it's created, updated, or deleted. Here's how to set up automatic indexing:
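Elasticsearch creates the index on first write using dynamic mappings, but the suggest endpoint shown later relies on a completion field, and keyword filters behave more predictably with explicit mappings. Below is a minimal sketch of a one-off setup script, assuming a hypothetical scripts/create-index.js at the project root and the connector from the previous step:
// scripts/create-index.js (hypothetical) — run once with `node scripts/create-index.js`
require('dotenv').config(); // load .env when running outside Strapi (dotenv ships with Strapi projects)
const { connector } = require('../helpers/elastic_client');

const createIndex = async () => {
  const client = connector();
  // Skip creation if the index already exists
  if (await client.indices.exists({ index: process.env.ELASTIC_ALIAS_NAME })) return;
  await client.indices.create({
    index: process.env.ELASTIC_ALIAS_NAME,
    mappings: {
      properties: {
        title: { type: 'text' },
        slug: { type: 'keyword' },
        description: { type: 'text' },
        content: { type: 'text' },
        publishedAt: { type: 'date' },
        contentType: { type: 'keyword' },
        // Used by the /search/suggest endpoint later in this guide; the indexing
        // helper below would also need to populate it, e.g. title_suggest: data.title
        title_suggest: { type: 'completion' }
      }
    }
  });
};

createIndex().catch(console.error);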
Create an indexing helper function:
// helpers/indexing.js
const { connector } = require('./elastic_client');
const indexContent = async (contentType, data) => {
const client = connector();
try {
await client.index({
index: process.env.ELASTIC_ALIAS_NAME,
id: data.id.toString(),
body: {
id: data.id,
title: data.title,
slug: data.slug,
description: data.description,
content: data.content,
publishedAt: data.publishedAt,
contentType: contentType
}
});
console.log(`Indexed ${contentType} with id ${data.id}`);
} catch (error) {
console.error(`Error indexing ${contentType}:`, error);
}
};
const deleteFromIndex = async (id) => {
const client = connector();
try {
await client.delete({
index: process.env.ELASTIC_ALIAS_NAME,
id: id.toString()
});
console.log(`Deleted document with id ${id}`);
} catch (error) {
console.error(`Error deleting document:`, error);
}
};
module.exports = {
indexContent,
deleteFromIndex
};
Add lifecycle hooks to your content types. For example, in src/api/article/content-types/article/lifecycles.js:
// Five levels up from src/api/article/content-types/article/ to the project-root helpers folder
const { indexContent, deleteFromIndex } = require('../../../../../helpers/indexing');
module.exports = {
async afterCreate(event) {
const { result } = event;
await indexContent('article', result);
},
async afterUpdate(event) {
const { result } = event;
await indexContent('article', result);
},
async beforeDelete(event) {
const { where } = event.params;
await deleteFromIndex(where.id);
}
};
For bulk indexing operations, create a more efficient approach:
const bulkIndex = async (documents, index) => {
const client = connector();
const operations = documents.flatMap(doc => [
{ index: { _index: index, _id: doc.id } },
doc
]);
try {
const response = await client.bulk({
refresh: true,
operations
});
console.log(`Bulk indexed ${documents.length} documents`);
return response;
} catch (error) {
console.error('Bulk indexing error:', error);
throw error;
}
};
Building Search API Endpoints
Create search endpoints that leverage advanced capabilities. Generate a new API for search functionality:
npx strapi generate api search
Configure the route in src/api/search/routes/search.js:
module.exports = {
routes: [
{
method: 'GET',
path: '/search',
handler: 'search.find',
config: {
policies: [],
middlewares: [],
},
},
{
method: 'GET',
path: '/search/suggest',
handler: 'search.suggest',
config: {
policies: [],
middlewares: [],
},
}
],
};
Implement the search controller in src/api/search/controllers/search.js with comprehensive search functionality, similar to the techniques discussed in optimizing Strapi search:
const { connector } = require('../../../../helpers/elastic_client');
module.exports = {
async find(ctx) {
const client = connector();
const {
query,
fields = 'title,description,content',
limit = 10,
offset = 0,
sort = '_score:desc',
filters = {}
} = ctx.request.query;
if (!query) {
ctx.body = { results: [], total: 0 };
return;
}
try {
const searchFields = fields.split(',');
const [sortField, sortOrder] = sort.split(':');
// Build query with filters
const must = [
{
multi_match: {
query,
fields: searchFields.map(field =>
field === 'title' ? `${field}^2` : field
),
fuzziness: 'AUTO'
}
}
];
// Add filters
Object.entries(filters).forEach(([field, value]) => {
if (value) {
must.push({ term: { [`${field}.keyword`]: value } });
}
});
const response = await client.search({
index: process.env.ELASTIC_ALIAS_NAME,
body: {
from: parseInt(offset),
size: parseInt(limit),
sort: [{ [sortField]: { order: sortOrder } }],
query: {
bool: { must }
},
highlight: {
fields: {
title: {},
description: {},
content: {
fragment_size: 150,
number_of_fragments: 3
}
},
pre_tags: ['<strong>'],
post_tags: ['</strong>']
}
}
});
const results = response.hits.hits.map(hit => ({
id: hit._id,
score: hit._score,
...hit._source,
highlights: hit.highlight
}));
ctx.body = {
results,
total: response.hits.total.value,
took: response.took
};
} catch (error) {
console.error('Search error:', error);
ctx.status = 500;
ctx.body = { error: 'Search request failed' };
}
},
async suggest(ctx) {
const client = connector();
const { prefix } = ctx.request.query;
if (!prefix) {
ctx.body = { suggestions: [] };
return;
}
try {
const response = await client.search({
index: process.env.ELASTIC_ALIAS_NAME,
body: {
suggest: {
title_suggest: {
prefix,
completion: {
field: 'title_suggest',
size: 5
}
}
}
}
});
const suggestions = response.suggest.title_suggest[0].options.map(
option => option.text
);
ctx.body = { suggestions };
} catch (error) {
console.error('Suggestion error:', error);
ctx.status = 500;
ctx.body = { error: 'Suggestion request failed' };
}
}
};
For advanced search scenarios, you can implement faceted search and aggregations:
// Add this method to your search controller
async facetedSearch(ctx) {
const client = connector();
const { query, category, tags } = ctx.request.query;
try {
const response = await client.search({
index: process.env.ELASTIC_ALIAS_NAME,
body: {
query: {
bool: {
must: query ? [
{ multi_match: { query, fields: ['title', 'content'] } }
] : [{ match_all: {} }],
filter: [
...(category ? [{ term: { 'category.keyword': category } }] : []),
...(tags ? [{ terms: { 'tags.keyword': tags.split(',') } }] : [])
]
}
},
aggs: {
categories: {
terms: { field: 'category.keyword' }
},
tags: {
terms: { field: 'tags.keyword', size: 20 }
}
}
}
});
ctx.body = {
results: response.hits.hits.map(hit => hit._source),
facets: {
categories: response.aggregations.categories.buckets,
tags: response.aggregations.tags.buckets
}
};
} catch (error) {
console.error('Faceted search error:', error);
ctx.status = 500;
ctx.body = { error: 'Faceted search failed' };
}
}
To enable public access to your search endpoints, configure permissions in Strapi Admin:
- Navigate to Settings → Users & Permissions Plugin → Roles.
- Select the Public role.
- Expand your Search API permissions.
- Enable the find and suggest actions.
- Save your changes.
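If you also expose the facetedSearch handler shown earlier, remember to register a route for it and enable its permission here as well. A minimal sketch of the extra entry for the routes array in src/api/search/routes/search.js (the path name is just an example):
{
  method: 'GET',
  path: '/search/facets',
  handler: 'search.facetedSearch',
  config: {
    policies: [],
    middlewares: [],
  },
},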
With these implementations, you'll have a fully functional integration that provides real-time indexing, comprehensive search capabilities, and advanced features like suggestions and faceted search. Your Strapi application can now deliver search experiences that scale with your content growth, aligning with modern search solutions.
Project Example: Integrating Elasticsearch with Strapi
Let's walk through a practical implementation that demonstrates integrating Elasticsearch with Strapi. We'll examine a restaurant discovery application similar to Strapi's FoodAdvisor demo, which transforms basic content management into a sophisticated search platform.
Implementation Overview
This restaurant search application lets users discover dining options through advanced search capabilities. The architecture combines Strapi's content management with full-text search, enabling searches by name, cuisine type, location, and description with real-time results and relevance scoring.
The three-tier approach keeps responsibilities clear: Strapi manages restaurant data, Elasticsearch handles indexing and queries, and React consumes the search API. Each component plays to its strengths while communicating through clean interfaces.
Key Components
The client connection handles all communication between Strapi and your search cluster, managing authentication, SSL certificates, and connection pooling.
A dedicated search service centralizes search logic, transforming Strapi's content structure into search-optimized documents. This service fetches restaurant data with populated relationships, maps complex nested data into flat searchable structures, and translates between Strapi's API responses and Elasticsearch's document format.
Content synchronization occurs through Strapi's lifecycle hooks, keeping your search index current with content changes. These hooks trigger automatically when restaurants are created, updated, or deleted, maintaining data consistency without manual intervention.
Code Walkthrough
The search service demonstrates practical integration patterns. Here's the restaurant search functionality:
module.exports = ({ strapi }) => ({
restaurants: async () => {
const data = await strapi.entityService.findMany('api::restaurant.restaurant', {
populate: { information: true, place: true, images: true }
});
const mappedData = data.map((el) => {
return {
id: el.id,
slug: el.slug,
name: el.name,
// Optional chaining guards against restaurants with missing relations or images
description: el.information?.description,
location: el.place?.name,
image: el.images?.[0]?.url
}
});
return mappedData;
}
});
The controller implements the search API endpoint, processing user queries and returning formatted results. It handles query sanitization, implements fuzzy matching for typos, and provides relevance-based result ranking.
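A controller along those lines might look like the following sketch, assuming the elastic_client helper from earlier, an index named restaurant, and the flattened fields produced by the service above (the file location is illustrative):
// src/api/restaurant-search/controllers/restaurant-search.js (hypothetical location)
const { connector } = require('../../../../helpers/elastic_client');

module.exports = {
  async find(ctx) {
    const client = connector();
    const { query } = ctx.request.query;
    if (!query || !query.trim()) {
      ctx.body = { results: [], total: 0 };
      return;
    }
    try {
      const response = await client.search({
        index: 'restaurant', // assumed index name
        body: {
          query: {
            multi_match: {
              query: query.trim(),                      // basic sanitization
              fields: ['name^2', 'description', 'location'],
              fuzziness: 'AUTO'                         // tolerate typos
            }
          }
        }
      });
      ctx.body = {
        results: response.hits.hits.map((hit) => ({ score: hit._score, ...hit._source })),
        total: response.hits.total.value
      };
    } catch (error) {
      console.error('Restaurant search error:', error);
      ctx.status = 500;
      ctx.body = { error: 'Search request failed' };
    }
  },
};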
The client configuration uses environment variables for security while supporting both local development and production deployments. The client automatically handles connection pooling, retry logic, and SSL certificate validation.
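The defaults are usually sensible, but if you want to be explicit about retries and timeouts, the official client accepts options for both. A small sketch extending the self-hosted connector shown earlier (the values are illustrative):
const { Client } = require('@elastic/elasticsearch');
const fs = require('fs');

const client = new Client({
  node: process.env.ELASTIC_HOST,
  auth: {
    username: process.env.ELASTIC_USERNAME,
    password: process.env.ELASTIC_PASSWORD
  },
  tls: { ca: fs.readFileSync('./http_ca.crt') },
  maxRetries: 3,          // retry transient request failures a few times
  requestTimeout: 30000   // abort requests that take longer than 30 seconds
});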
Strapi Open Office Hours
If you have any questions about Strapi 5 or just would like to stop by and say hi, you can join us at Strapi's Discord Open Office Hours, Monday through Friday, from 12:30 pm to 1:30 pm CST: Strapi Discord Open Office Hours.
For more details, visit the Strapi documentation and the Elasticsearch documentation.