These integration guides are not official documentation and the Strapi Support Team will not provide assistance with them.
What Is Elasticsearch?
Elasticsearch is a distributed search and analytics engine built on Apache Lucene that powers modern search applications. It exposes a RESTful HTTP API that works with any programming language or platform.
Elasticsearch handles full-text search far better than traditional database queries. Its distributed architecture lets you scale horizontally by adding nodes to your cluster, with built-in data replication and fault tolerance.
Near real-time indexing sets Elasticsearch apart from batch-processing search solutions: content becomes searchable almost instantly after indexing, which is crucial for frequently changing content.
You can start with a single node and expand to hundreds as your data and query load grow. The distributed architecture handles most scaling complexity, though performance optimization becomes critical at scale.
Why Integrate Elasticsearch with Strapi
Combining Strapi with Elasticsearch pairs flexible content management with enterprise-grade search. Strapi excels at content creation and API delivery; Elasticsearch adds search functionality that traditional database queries can't match.
The integration offers four key benefits:
1. Enhanced search: full-text queries with fuzzy matching and relevance scoring that understand context and tolerate typos
2. Real-time indexing: lifecycle hooks automatically keep search results synchronized with content changes
3. Advanced query capabilities: faceted search, geographic filtering, and boolean logic for sophisticated experiences like multi-attribute filtering
4. Scalability: search performance holds up as content volume grows
This integration is particularly valuable for content-heavy applications like e-commerce platforms, knowledge bases, news sites, and documentation portals. Consider implementing it when basic search no longer meets user expectations due to slow performance, poor relevance, or limited filtering options.
How to Integrate Elasticsearch with Strapi
Setting up Elasticsearch with Strapi requires careful planning and systematic implementation. This Strapi integration guide walks you through the entire process, from initial setup to building functional search endpoints.
Prerequisites
Before beginning the integration, ensure you have the following technical knowledge and tools:
- JavaScript/TypeScript proficiency: Understanding of asynchronous programming, API development, and modern JavaScript features
- Node.js experience: Familiarity with Node.js runtime and package management
- React knowledge: For building front-end search interfaces that consume your search API
- Docker fundamentals: Understanding containerization concepts and Docker Compose
- CLI comfort: Ability to work with command-line tools and terminal commands
- Basic understanding of headless CMS concepts: Familiarity with the principles of headless content management systems
Setting Up Elasticsearch
You can deploy Elasticsearch using two primary approaches: Elastic Cloud (managed service) or a self-hosted Docker setup. Each has distinct advantages depending on your project requirements.
Elastic Cloud Setup
Elastic Cloud offers a managed solution with automatic updates, built-in security, and simplified scaling. To get started:
- Sign up for an Elastic Cloud account.
- Create a deployment with your desired specifications.
- Note your cloud ID, API key, and endpoint for later configuration.
- The connection will use cloud-specific credentials rather than traditional username/password authentication.
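If you want to confirm the deployment is reachable before wiring it into Strapi, a minimal connectivity check with the official JavaScript client (installed in a later step) might look like the sketch below. The variable names match the configuration used throughout this guide; the verify-cloud.js filename is just an example.
// verify-cloud.js — quick connectivity check for an Elastic Cloud deployment
const { Client } = require('@elastic/elasticsearch');

const client = new Client({
  cloud: { id: process.env.ELASTIC_CLOUD_ID },
  auth: { apiKey: process.env.ELASTIC_API_KEY },
});

client.info()
  .then((info) => console.log('Connected to cluster:', info.cluster_name))
  .catch((err) => console.error('Connection failed:', err.message));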
Self-Hosted Docker Setup
For complete control over your infrastructure, Docker provides an excellent self-hosted option. Create a docker-compose.yml file:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.7.0
    container_name: elasticsearch  # fixed name so the docker cp command below can reference the container
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=your_secure_password
    ports:
      - "9200:9200"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
volumes:
  elasticsearch-data:
Launch your Elasticsearch instance:
docker-compose up -d
After starting, retrieve the CA certificate from the container:
docker cp elasticsearch:/usr/share/elasticsearch/config/certs/http_ca.crt .
Your search instance will be accessible at https://localhost:9200 with the credentials you specified. Because security is enabled, Elasticsearch 8.x serves HTTPS by default, which is why the CA certificate above is needed for client connections.
Installing and Configuring Strapi
If you don't have a Strapi project yet, create one using the CLI:
npx create-strapi-app@latest my-project --quickstart
cd my-project
For existing Strapi projects, ensure you're running a compatible version. The integration works with both Strapi v4 and v5, though plugin versions may differ.
Installing Required Dependencies
Install the Elasticsearch JavaScript client, which enables communication between Strapi and your search instance:
# Using npm
npm install @elastic/elasticsearch

# Using Yarn
yarn add @elastic/elasticsearch
You can also install the dedicated Strapi plugin for a more streamlined experience:
# For Strapi v5
npm install @geeky-biz/strapi-plugin-elasticsearch

# For Strapi v4, use version 0.0.8
npm install @geeky-biz/strapi-plugin-elasticsearch@0.0.8
Configuring Environment Variables
Create or update your .env file with search connection details:
# For self-hosted setup
ELASTIC_HOST=https://localhost:9200
ELASTIC_USERNAME=elastic
ELASTIC_PASSWORD=your_secure_password
ELASTIC_CERT_NAME=http_ca.crt
ELASTIC_ALIAS_NAME=content_index

# For Elastic Cloud setup
ELASTIC_CLOUD_ID=your_cloud_id
ELASTIC_API_KEY=your_api_key
Never commit these credentials to version control. The environment variables ensure secure, configurable connections across different deployment environments.
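If you prefer not to read process.env throughout your code, you can also centralize these settings in a custom config file, since Strapi loads files placed in ./config and exposes them through strapi.config.get(). This is an optional sketch; the elasticsearch filename and key names are arbitrary choices, not required by the integration:
// config/elasticsearch.js — optional: centralize search settings with Strapi's env() helper
module.exports = ({ env }) => ({
  host: env('ELASTIC_HOST'),
  username: env('ELASTIC_USERNAME'),
  password: env('ELASTIC_PASSWORD'),
  certName: env('ELASTIC_CERT_NAME', 'http_ca.crt'),
  aliasName: env('ELASTIC_ALIAS_NAME', 'content_index'),
});

// Elsewhere in your Strapi code (illustrative):
// const { host, aliasName } = strapi.config.get('elasticsearch');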
Creating the Elasticsearch Client
Build a reusable client to handle all interactions with your search engine. Create a helper file at helpers/elastic_client.js:
const { Client } = require('@elastic/elasticsearch');
const fs = require('fs');

const connector = () => {
  // For self-hosted setup
  if (process.env.ELASTIC_HOST) {
    return new Client({
      node: process.env.ELASTIC_HOST,
      auth: {
        username: process.env.ELASTIC_USERNAME,
        password: process.env.ELASTIC_PASSWORD
      },
      tls: {
        // Uses the CA certificate copied out of the container (name from ELASTIC_CERT_NAME)
        ca: fs.readFileSync(`./${process.env.ELASTIC_CERT_NAME || 'http_ca.crt'}`),
        rejectUnauthorized: false // acceptable for local development; enforce strict verification in production
      }
    });
  }

  // For Elastic Cloud setup
  return new Client({
    cloud: {
      id: process.env.ELASTIC_CLOUD_ID
    },
    auth: {
      apiKey: process.env.ELASTIC_API_KEY
    }
  });
};

const testConn = async (client) => {
  try {
    const response = await client.info();
    console.log('Elasticsearch connected successfully:', response);
    return response;
  } catch (error) {
    console.error('Elasticsearch connection error:', error);
    throw error;
  }
};

module.exports = {
  connector,
  testConn
};
This client automatically handles both self-hosted and cloud configurations based on your environment variables.
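For a quick sanity check that the credentials work, you can call testConn when Strapi boots. This is an optional sketch that assumes the helper lives at helpers/elastic_client.js in the project root; adjust the require path if you placed it elsewhere.
// src/index.js
const { connector, testConn } = require('../helpers/elastic_client');

module.exports = {
  register() {},

  async bootstrap() {
    // Fails fast at startup if Elasticsearch is unreachable or the credentials are wrong
    const client = connector();
    await testConn(client);
  },
};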
Implementing Content Indexing
To keep your search index synchronized with Strapi content, implement lifecycle hooks that automatically index content when it's created, updated, or deleted. Here's how to set up automatic indexing:
Create an indexing helper function:
// helpers/indexing.js
const { connector } = require('./elastic_client');

const indexContent = async (contentType, data) => {
  const client = connector();

  try {
    await client.index({
      index: process.env.ELASTIC_ALIAS_NAME,
      id: data.id.toString(),
      body: {
        id: data.id,
        title: data.title,
        slug: data.slug,
        description: data.description,
        content: data.content,
        publishedAt: data.publishedAt,
        contentType: contentType
      }
    });
    console.log(`Indexed ${contentType} with id ${data.id}`);
  } catch (error) {
    console.error(`Error indexing ${contentType}:`, error);
  }
};

const deleteFromIndex = async (id) => {
  const client = connector();

  try {
    await client.delete({
      index: process.env.ELASTIC_ALIAS_NAME,
      id: id.toString()
    });
    console.log(`Deleted document with id ${id}`);
  } catch (error) {
    console.error(`Error deleting document:`, error);
  }
};

module.exports = {
  indexContent,
  deleteFromIndex
};
Add lifecycle hooks to your content types. For example, in src/api/article/content-types/article/lifecycles.js:
// The helpers folder sits at the project root, five levels up from this file
const { indexContent, deleteFromIndex } = require('../../../../../helpers/indexing');

module.exports = {
  async afterCreate(event) {
    const { result } = event;
    await indexContent('article', result);
  },

  async afterUpdate(event) {
    const { result } = event;
    await indexContent('article', result);
  },

  async beforeDelete(event) {
    const { where } = event.params;
    await deleteFromIndex(where.id);
  }
};
For bulk indexing operations, create a more efficient approach:
// helpers/bulk_indexing.js (or add this function to helpers/indexing.js)
const { connector } = require('./elastic_client');

const bulkIndex = async (documents, index) => {
  const client = connector();

  // Each document becomes an action/metadata line followed by the document body
  const operations = documents.flatMap(doc => [
    { index: { _index: index, _id: doc.id } },
    doc
  ]);

  try {
    const response = await client.bulk({
      refresh: true,
      operations
    });

    console.log(`Bulk indexed ${documents.length} documents`);
    return response;
  } catch (error) {
    console.error('Bulk indexing error:', error);
    throw error;
  }
};

module.exports = { bulkIndex };
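A typical use for the bulk helper is a one-off reindex of existing content, for example after first enabling the integration. The sketch below assumes the helper above lives at helpers/bulk_indexing.js, an article content type with the fields indexed earlier, and Strapi v4's Entity Service; it can be called from bootstrap() or a custom admin-only route.
// helpers/reindex.js — full reindex of articles, callable from bootstrap() or a custom route
const { bulkIndex } = require('./bulk_indexing');

const reindexArticles = async (strapi) => {
  // Fetch every article with the fields we index
  const articles = await strapi.entityService.findMany('api::article.article', {
    fields: ['title', 'slug', 'description', 'content', 'publishedAt'],
  });

  const documents = articles.map((article) => ({
    id: article.id,
    title: article.title,
    slug: article.slug,
    description: article.description,
    content: article.content,
    publishedAt: article.publishedAt,
    contentType: 'article',
  }));

  await bulkIndex(documents, process.env.ELASTIC_ALIAS_NAME);
};

module.exports = { reindexArticles };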
Building Search API Endpoints
Create search endpoints that leverage advanced capabilities. Generate a new API for search functionality:
npx strapi generate api search
Configure the route in src/api/search/routes/search.js:
module.exports = {
  routes: [
    {
      method: 'GET',
      path: '/search',
      handler: 'search.find',
      config: {
        policies: [],
        middlewares: [],
      },
    },
    {
      method: 'GET',
      path: '/search/suggest',
      handler: 'search.suggest',
      config: {
        policies: [],
        middlewares: [],
      },
    }
  ],
};
Implement the search controller in src/api/search/controllers/search.js with comprehensive search functionality, similar to the techniques discussed in optimizing Strapi search:
const { connector } = require('../../../../helpers/elastic_client');

module.exports = {
  async find(ctx) {
    const client = connector();
    const {
      query,
      fields = 'title,description,content',
      limit = 10,
      offset = 0,
      sort = '_score:desc',
      filters = {}
    } = ctx.request.query;

    if (!query) {
      ctx.body = { results: [], total: 0 };
      return;
    }

    try {
      const searchFields = fields.split(',');
      const [sortField, sortOrder] = sort.split(':');

      // Build query with filters
      const must = [
        {
          multi_match: {
            query,
            fields: searchFields.map(field =>
              field === 'title' ? `${field}^2` : field
            ),
            fuzziness: 'AUTO'
          }
        }
      ];

      // Add filters
      Object.entries(filters).forEach(([field, value]) => {
        if (value) {
          must.push({ term: { [`${field}.keyword`]: value } });
        }
      });

      const response = await client.search({
        index: process.env.ELASTIC_ALIAS_NAME,
        body: {
          from: parseInt(offset),
          size: parseInt(limit),
          sort: [{ [sortField]: { order: sortOrder } }],
          query: {
            bool: { must }
          },
          highlight: {
            fields: {
              title: {},
              description: {},
              content: {
                fragment_size: 150,
                number_of_fragments: 3
              }
            },
            pre_tags: ['<strong>'],
            post_tags: ['</strong>']
          }
        }
      });

      const results = response.hits.hits.map(hit => ({
        id: hit._id,
        score: hit._score,
        ...hit._source,
        highlights: hit.highlight
      }));

      ctx.body = {
        results,
        total: response.hits.total.value,
        took: response.took
      };
    } catch (error) {
      console.error('Search error:', error);
      ctx.status = 500;
      ctx.body = { error: 'Search request failed' };
    }
  },

  async suggest(ctx) {
    const client = connector();
    const { prefix } = ctx.request.query;

    if (!prefix) {
      ctx.body = { suggestions: [] };
      return;
    }

    try {
      const response = await client.search({
        index: process.env.ELASTIC_ALIAS_NAME,
        body: {
          suggest: {
            title_suggest: {
              prefix,
              completion: {
                field: 'title_suggest',
                size: 5
              }
            }
          }
        }
      });

      const suggestions = response.suggest.title_suggest[0].options.map(
        option => option.text
      );

      ctx.body = { suggestions };
    } catch (error) {
      console.error('Suggestion error:', error);
      ctx.status = 500;
      ctx.body = { error: 'Suggestion request failed' };
    }
  }
};
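Note that the suggest handler assumes the index behind ELASTIC_ALIAS_NAME maps title_suggest as a completion field, and that your indexing helper also populates it (for example, writing title_suggest: data.title alongside the other fields). A minimal sketch of creating such an index during setup, leaving the remaining fields to dynamic mapping:
// helpers/create_index.js — one-time setup for the suggestion field
const { connector } = require('./elastic_client');

const createSearchIndex = async () => {
  const client = connector();

  await client.indices.create({
    index: process.env.ELASTIC_ALIAS_NAME,
    body: {
      mappings: {
        properties: {
          // Completion field backing the /search/suggest endpoint;
          // other fields can rely on Elasticsearch's dynamic mapping
          title_suggest: { type: 'completion' }
        }
      }
    }
  });
};

module.exports = { createSearchIndex };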
For advanced search scenarios, you can implement faceted search and aggregations:
// Add this method to your search controller
async facetedSearch(ctx) {
  const client = connector();
  const { query, category, tags } = ctx.request.query;

  try {
    const response = await client.search({
      index: process.env.ELASTIC_ALIAS_NAME,
      body: {
        query: {
          bool: {
            must: query ? [
              { multi_match: { query, fields: ['title', 'content'] } }
            ] : [{ match_all: {} }],
            filter: [
              ...(category ? [{ term: { 'category.keyword': category } }] : []),
              ...(tags ? [{ terms: { 'tags.keyword': tags.split(',') } }] : [])
            ]
          }
        },
        aggs: {
          categories: {
            terms: { field: 'category.keyword' }
          },
          tags: {
            terms: { field: 'tags.keyword', size: 20 }
          }
        }
      }
    });

    ctx.body = {
      results: response.hits.hits.map(hit => hit._source),
      facets: {
        categories: response.aggregations.categories.buckets,
        tags: response.aggregations.tags.buckets
      }
    };
  } catch (error) {
    console.error('Faceted search error:', error);
    ctx.status = 500;
    ctx.body = { error: 'Faceted search failed' };
  }
}
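The facetedSearch method is only reachable once a route points to it. A sketch of the extra entry to add to the routes array in src/api/search/routes/search.js (the /search/faceted path is an example, not a requirement); remember to enable the new action for the appropriate role as well:
// Additional entry for the routes array in src/api/search/routes/search.js
{
  method: 'GET',
  path: '/search/faceted',
  handler: 'search.facetedSearch',
  config: {
    policies: [],
    middlewares: [],
  },
},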
To enable public access to your search endpoints, configure permissions in Strapi Admin:
- Navigate to Settings → Users & Permissions Plugin → Roles.
- Select the Public role.
- Expand your Search API permissions.
- Enable the find and suggest actions.
- Save your changes.
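Once the Public role can reach these endpoints, any front end can call them through Strapi's REST prefix (the default is /api, and Strapi serves on port 1337 locally). A quick sketch using fetch; the query values are examples:
// Example front-end call to the public search endpoint
const search = async (term) => {
  const params = new URLSearchParams({ query: term, limit: '10' });
  const res = await fetch(`http://localhost:1337/api/search?${params}`);

  if (!res.ok) {
    throw new Error(`Search failed with status ${res.status}`);
  }

  const { results, total } = await res.json();
  console.log(`Found ${total} results`, results);
  return results;
};

search('pizza');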
With these implementations, you'll have a fully functional integration that provides real-time indexing, comprehensive search capabilities, and advanced features like suggestions and faceted search. Your Strapi application can now deliver search experiences that scale with your content.
Project Example: Integrating Elasticsearch with Strapi
Let's walk through a practical implementation that demonstrates integrating Elasticsearch with Strapi. We'll examine a restaurant discovery application similar to Strapi's FoodAdvisor demo, which transforms basic content management into a sophisticated search platform.
Implementation Overview
This restaurant search application lets users discover dining options through advanced search capabilities. The architecture combines Strapi's content management with full-text search, enabling searches by name, cuisine type, location, and description with real-time results and relevance scoring.
The three-tier approach keeps things clean: Strapi manages restaurant data, Elasticsearch handles indexing and queries, and React consumes the search API. Each component plays to its strengths behind a clean interface.
Key Components
The client connection handles all communication between Strapi and your search cluster, managing authentication, SSL certificates, and connection pooling.
A dedicated search service centralizes search logic, transforming Strapi's content structure into search-optimized documents. This service fetches restaurant data with populated relationships, maps complex nested data into flat searchable structures, and translates between Strapi's API responses and document format.
Content synchronization occurs through Strapi's lifecycle hooks, keeping your search index current with content changes. These hooks trigger automatically when restaurants are created, updated, or deleted, maintaining data consistency without manual intervention.
Code Walkthrough
The search service demonstrates practical integration patterns. Here's the restaurant search functionality:
module.exports = ({ strapi }) => ({
  restaurants: async () => {
    const data = await strapi.entityService.findMany('api::restaurant.restaurant', {
      populate: { information: true, place: true, images: true }
    });

    const mappedData = data.map((el) => {
      return {
        id: el.id,
        slug: el.slug,
        name: el.name,
        description: el.information.description,
        location: el.place.name,
        image: el.images[0].url
      }
    });

    return mappedData;
  }
});
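To push these documents into the index, the service output can be fed to the bulk helper shown earlier. A hedged sketch, assuming the service above is registered at src/api/search/services/search.js (so it resolves as api::search.search) and the bulk helper lives at helpers/bulk_indexing.js; adjust the UID and paths to your project:
// Example: index all restaurants, e.g. from bootstrap() or a custom admin route
const { bulkIndex } = require('../helpers/bulk_indexing');

const indexRestaurants = async (strapi) => {
  // restaurants() returns the flattened, search-ready documents
  const documents = await strapi.service('api::search.search').restaurants();
  await bulkIndex(documents, process.env.ELASTIC_ALIAS_NAME);
};

module.exports = { indexRestaurants };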
The controller implements the search API endpoint, processing user queries and returning formatted results. It handles query sanitization, implements fuzzy matching for typos, and provides relevance-based result ranking.
The client configuration uses environment variables for security while supporting both local development and production deployments. The client automatically handles connection pooling, retry logic, and SSL certificate validation.
Strapi Open Office Hours
If you have any questions about Strapi 5 or just would like to stop by and say hi, you can join us at Strapi's Discord Open Office Hours, Monday through Friday, from 12:30 pm to 1:30 pm CST: Strapi Discord Open Office Hours.
For more details, visit the Strapi documentation and the Elasticsearch documentation.
FAQ
How does Elasticsearch enhance search capabilities in Strapi applications?
Elasticsearch offers enhanced search capabilities through features like full-text search with fuzzy matching, real-time indexing for immediate searchability of new content, advanced query capabilities for sophisticated search experiences, and scalability to maintain performance as your content and query load grow. This makes searching through content-rich applications fast, accurate, and efficient.
Can I scale my Elasticsearch integration with Strapi as my application grows?
Yes, Elasticsearch's distributed nature allows you to scale horizontally by adding more nodes to your cluster, providing built-in data replication and fault tolerance. As your application and data grow, you can expand your Elasticsearch setup to maintain high performance and reliability.
How do I ensure my Elasticsearch and Strapi integration is secure?
For security, use environment variables to store sensitive information like connection details and passwords. When setting up Elasticsearch, enable built-in security features like X-Pack for encryption, role-based access control, and audit logging. Always use secure connections (HTTPS) and consider implementing additional security measures like IP filtering and secure API keys for Elastic Cloud setups.
What version of Strapi supports Elasticsearch integration?
The Elasticsearch integration is compatible with both Strapi v4 and v5. Ensure you have a compatible version of Strapi and use the appropriate plugin version or custom integration code to connect to your Elasticsearch instance.
How can I test and monitor the performance of my Elasticsearch integration with Strapi?
To test the integration, conduct unit, integration, and end-to-end tests covering various search scenarios, content updates, and query loads. Monitor performance using Elasticsearch's built-in monitoring tools, tracking metrics like search latency, indexing throughput, and error rates. Utilize logs and alerts to identify and address issues promptly.
What are some common use cases for integrating Elasticsearch with Strapi?
Common use cases include enhancing e-commerce platforms with advanced search filters, building dynamic knowledge bases that allow for quick content discovery, improving the search experience on news sites and blogs, and creating documentation portals with efficient search capabilities. The integration is ideal for any content-heavy application requiring sophisticated search features.