AWS provides many services to host your application; in this tutorial, I will show you how to deploy Strapi as a Docker container on AWS Elastic Beanstalk.
The article focuses on deploying Strapi as a Docker container connected to a PostgreSQL database, with a load balancer monitoring the health of the instance.
To follow along with this article, you should have the following:
1. Basic knowledge of JavaScript
2. Understanding of Docker
3. Basic understanding of AWS cloud concepts
4. An AWS account
5. AWS CLI installed (if you don't have it installed, click here to start)
6. Basic understanding of Strapi
7. Node.js downloaded and installed
8. Yarn as the Node package manager
9. VS Code or any code editor
Strapi is the leading open-source, customizable, headless CMS based on Node.js; it is used to develop and manage content using RESTful APIs and GraphQL.
With Strapi, you can scaffold an API faster and consume the content via APIs using any HTTP client or GraphQL-enabled frontend.
In this article, I'll show how to use the blog template to quickly scaffold a Strapi project. You can apply what we do here to any Strapi project or template.
yarn create strapi-app bloggy --template blog
The command will create a new folder called “bloggy” under the current working directory, containing all the files of the generated project. Once the development server is running, you can access the Strapi dashboard at this URL: http://localhost:1337/admin.
By default, the generated project uses SQLite as the main database; you need to change that to PostgreSQL in both development and production.
Create a new PostgreSQL container with the following command:
docker run --name strapi-bloggy-db -v my_dbdata:/var/lib/postgresql/data -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=strapi -d postgres:13.6
This command creates a new Docker container called strapi-bloggy-db that runs on port 5432, with the database username and password both set to postgres and a pre-initialized database called strapi.
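To confirm the database container is up before wiring Strapi to it, you can run a quick check (a minimal sketch; pg_isready ships inside the official postgres image):

docker ps --filter "name=strapi-bloggy-db"
docker exec strapi-bloggy-db pg_isready -U postgres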
Now, it's time to change the connection from SQLite to PostgreSQL with the new configuration. In config/env/development/database.js, add these lines:
module.exports = ({ env }) => ({
  connection: {
    client: 'postgres',
    connection: {
      host: env('DATABASE_HOST', '127.0.0.1'),
      port: env.int('DATABASE_PORT', 5432),
      database: env('DATABASE_NAME', 'strapi'),
      user: env('DATABASE_USERNAME', 'postgres'),
      password: env('DATABASE_PASSWORD', 'postgres'),
      schema: env('DATABASE_SCHEMA', 'public'),
      ssl: env('DATABASE_SSL', false),
    },
    debug: false,
  },
});
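Each env() call above falls back to its default when the variable is unset, so you can point Strapi at a different database without touching the file. For example (the values here are just illustrative):

DATABASE_HOST=127.0.0.1 DATABASE_PORT=5432 DATABASE_NAME=strapi yarn develop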
Strapi requires the pg package to establish the connection with Postgres; add pg using the command below:
yarn add pg
You've successfully changed the SQLite database to PostgreSQL; test it by running:
yarn develop
In production, do not run PostgreSQL as a Docker container; managing database operations such as backup, restore, and monitoring would be a hassle. Instead, delegate these tasks to the AWS Relational Database Service (RDS). A difference in database type or version between development and production would introduce problems, so you must prevent that.
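To keep versions aligned, you can check what your development container runs and pick the matching engine version in RDS; a quick check against the container created earlier:

docker exec strapi-bloggy-db psql -U postgres -c "SELECT version();"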
Moving forward, Elastic Beanstalk uses its own naming convention for the database credentials. Here is an overview of what the naming looks like.
You'll need to create a new file with that expected naming for the database credentials; in config/env/production/database.js, add these lines:
module.exports = ({ env }) => ({
  connection: {
    client: 'postgres',
    connection: {
      host: env('RDS_HOSTNAME', ''),
      port: env.int('RDS_PORT', undefined),
      database: env('RDS_DB_NAME', ''),
      user: env('RDS_USERNAME', ''),
      password: env('RDS_PASSWORD', ''),
      ssl: env.bool('DATABASE_SSL', false),
    },
  },
});
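If you want to sanity-check this production config locally, you can export the RDS_* variables yourself against the local Postgres container (a sketch under that assumption; in AWS, Elastic Beanstalk injects these variables for you):

export RDS_HOSTNAME=127.0.0.1
export RDS_PORT=5432
export RDS_DB_NAME=strapi
export RDS_USERNAME=postgres
export RDS_PASSWORD=postgres
NODE_ENV=production yarn build && NODE_ENV=production yarn start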
Here is an overview of how the config folder looks now.
You should currently be running the Strapi server from your local machine; now, you need to run Strapi as a Docker container. Create two Docker files: Dockerfile will be used in production and Dockerfile.dev for development.
Here's how to add the two Docker files. Dockerfile:
FROM node:16
ENV NODE_ENV=production
WORKDIR /opt/
COPY ./package.json ./yarn.lock ./
ENV PATH /opt/node_modules/.bin:$PATH
RUN yarn install
WORKDIR /opt/app
COPY . .
RUN yarn build
EXPOSE 1337
CMD ["yarn", "start"]
Dockerfile.dev:
FROM node:16
ENV NODE_ENV=development
WORKDIR /opt/
COPY ./package.json ./yarn.lock ./
ENV PATH /opt/node_modules/.bin:$PATH
RUN yarn install
WORKDIR /opt/app
COPY . .
RUN yarn build
EXPOSE 1337
CMD ["yarn", "develop"]
If you do not understand any step in the Dockerfile creation, there is a detailed blog post by Simen Daehlin that will guide you.
The difference between the two files: each explicitly sets NODE_ENV (production in one, development in the other), and Dockerfile.dev runs yarn develop as its CMD instead of yarn start.
Add a .dockerignore file to exclude these files during the build step:
.tmp/
.cache/
.git/
build/
node_modules/
Build the development Docker image, tagging it as bloggy-dev:v1.0:
docker build -f Dockerfile.dev -t bloggy-dev:v1.0 .
If you try to run the bloggy-dev container, it won't connect to the Postgres container because, from Docker's perspective, they are running on different networks. Hence, you need to create a network and connect both containers to it.
You can create the network with the Docker CLI and connect both containers to it, as in the sketch below, but to simplify this, let's introduce Docker Compose.
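For illustration, the manual approach might look like this (the network name strapi-net is just an example; setting DATABASE_HOST to the database container's name works because containers on a user-defined network can reach each other by name):

docker network create strapi-net
docker network connect strapi-net strapi-bloggy-db
docker run --name bloggy --network strapi-net -p 1337:1337 -e DATABASE_HOST=strapi-bloggy-db bloggy-dev:v1.0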
Docker Compose simplifies creating multiple Docker containers with a single YAML file, including the configuration for volumes and networks.
You can create the docker-compose file from scratch or use an excellent tool called strapi-tool-dockerize to quickly generate it by answering a few questions. strapi-tool-dockerize can generate either a Dockerfile or a docker-compose file.
To get started with strapi-tool-dockerize
, run the following command:
npx @strapi-community/dockerize
Provide the answers to the questions as shown below:
1. Do you want to create a docker-compose file? Yes
2. What environments do you want to configure? Development
3. What database do you want to use? PostgreSQL
4. Database host: localhost
5. Database port: 5432
✔ Do you want to create a docker-compose file? 🐳 … No / Yes
✔ What environments do you want to configure? › Development
✔ Whats the name of the project? … strapi
✔ What database do you want to use? › PostgreSQL
✔ Database Host … localhost
✔ Database Name … strapi
✔ Database Username … postgres
✔ Database Password … ********
✔ Database Port … 5432
After answering these questions, it adds two files to the project: docker-compose.yml and .env.
After running dockerize, check your .dockerignore file: it may add /data to the ignore list. We are using it in our app, so make sure to remove it from the .dockerignore file. It should look like the following:
.tmp/
.cache/
.git/
build/
node_modules/
.env
The generated Dockerfile will be slightly different; you can revert it to our version. There is no big difference: the dockerize tool just adds some packages for sharp compatibility.
In docker-compose.yml, I edited the image value in the strapiDB service to match the versions supported by AWS, making it image: postgres:13.6; docker-compose.yml now looks like the below.
If you are using an M1 Mac, change the platform value to linux/arm64/v8 instead of linux/amd64.
version: '3'
services:
  strapi:
    container_name: strapi
    build: .
    image: strapi:latest
    restart: unless-stopped
    env_file: .env
    environment:
      DATABASE_CLIENT: ${DATABASE_CLIENT}
      DATABASE_HOST: strapiDB
      DATABASE_NAME: ${DATABASE_NAME}
      DATABASE_USERNAME: ${DATABASE_USERNAME}
      DATABASE_PORT: ${DATABASE_PORT}
      JWT_SECRET: ${JWT_SECRET}
      ADMIN_JWT_SECRET: ${ADMIN_JWT_SECRET}
      DATABASE_PASSWORD: ${DATABASE_PASSWORD}
      NODE_ENV: ${NODE_ENV}
    volumes:
      - ./config:/opt/app/config
      - ./src:/opt/app/src
      - ./package.json:/opt/package.json
      - ./yarn.lock:/opt/yarn.lock
      - ./.env:/opt/app/.env
      - ./public/uploads:/opt/app/public/uploads
    ports:
      - '1337:1337'
    networks:
      - strapi
    depends_on:
      - strapiDB
  strapiDB:
    container_name: strapiDB
    platform: linux/arm64/v8 # for platform error on Apple M1 chips
    restart: unless-stopped
    env_file: .env
    image: postgres:13.6
    environment:
      POSTGRES_USER: ${DATABASE_USERNAME}
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
      POSTGRES_DB: ${DATABASE_NAME}
    volumes:
      - strapi-data:/var/lib/postgresql/data/ # using a volume
      # - ./data:/var/lib/postgresql/data/ # if you want to use a bind folder
    ports:
      - '5432:5432'
    networks:
      - strapi

volumes:
  strapi-data:

networks:
  strapi:
    name: Strapi
    driver: bridge
Docker Compose reads environment variables from the .env file. After strapi-dockerize adds the database environment variables, .env should look like the following:
HOST=0.0.0.0
PORT=1337
APP_KEYS=zz1kt2QS2I7BBuP8EuIjlA==,L8XX/OEbybRFh40q8DzIng==,yt4yAvYgK83xycthu5yxtA==,X7Gcx1VVAUm8d+A7rTZ7Yw==
API_TOKEN_SALT=MWPCH4U70a2E8ubTlAC6Yg==
ADMIN_JWT_SECRET=hJXXOaTmQl8A4zXbiqTicQ==
JWT_SECRET=aUnqqM5AwuUQAyxXE6LQnQ==
# @strapi-community/dockerize variables
DATABASE_HOST=localhost
DATABASE_PORT=5432
DATABASE_NAME=strapi
DATABASE_USERNAME=postgres
DATABASE_PASSWORD=postgres
NODE_ENV=development
DATABASE_CLIENT=postgres
To build the images with Docker Compose, run:
docker-compose build
To start all containers with Docker Compose, run:
docker-compose up
After that, you should see the Strapi container up, running, and connected to the Postgres container.
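If you want to double-check, you can list the containers and tail the Strapi logs:

docker-compose ps
docker-compose logs -f strapi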
AWS provides many services to deploy Docker containers; the most popular services are:
App Runner is the simplest: you can set up and run the container without many tasks, but it is not available in all regions. Elastic Container Service (ECS) is a service with a low-level abstraction to run and manage multiple containers (a cluster). In our case, we just need to run one container, so Elastic Beanstalk is a perfect choice and is available in all regions.
View this page before you go further: AWS Elastic Beanstalk FAQs - Amazon Web Services (AWS)
Elastic Beanstalk is a higher level of abstraction above cloud computing (EC2), cloud storage (S3), CloudWatch (logging and monitoring), and Elastic Load Balancing. It facilitates provisioning and managing the backend infrastructure.
Below are the steps to take to deploy the image in Elastic Beanstalk as a Docker container:
1. Create an Elastic Container Registry (ECR) repository and push the Docker image to it.
2. Create a configuration file called Dockerrun.aws.json and upload it to an S3 bucket; you will reference it later when creating an Elastic Beanstalk environment so it can pull the image from ECR.
3. Create the Elastic Beanstalk application and environment.
You can use any container registry, like Docker Hub or Google Container Registry. However, that would require extra fields in the Dockerrun.aws.json configuration file so that AWS can authenticate to that service; with ECR, authentication works out of the box.
Leave the other config as it is and then choose “Create repository”.
Click on “bloggy” in its row under the repository name; it will display all images you have under that repository.
Click on “View push commands”. This step makes the Docker CLI authenticate to your repository using the authentication token that AWS provides.
“View push commands” displays the steps needed to push an image to the “bloggy” repository. Follow these steps to authenticate with the repository.
To get started using the AWS CLI, you need to obtain an Access Key ID and a Secret Access Key. The access key gives the CLI permission to make programmatic calls, such as creating or updating resources; in our case, we need to push an image to the ECR repository. Start with the following steps.
AdministratorAccess gives the created user full permission to manipulate all AWS resources, so you need to save the keys in a secret place. Alternatively, you can choose a policy that only grants permission over ECR; in that case, the CLI would not be allowed to affect any other resources or services.
Now that we have the Access Key ID and Secret Access Key, we can configure the AWS CLI by running the following command:
aws configure
Add your AWS Access Key ID and AWS Secret Access Key.
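To confirm the CLI is now authenticated as the expected user, you can run:

aws sts get-caller-identity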
I am using Windows; even so, I can use the macOS/Linux commands without any problems:
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 391161446417.dkr.ecr.us-east-1.amazonaws.com
You should get the following result
docker build -t bloggy:v1.0 .
docker tag bloggy:v1.0 391161446417.dkr.ecr.us-east-1.amazonaws.com/bloggy:v1.0
docker push 391161446417.dkr.ecr.us-east-1.amazonaws.com/bloggy:v1.0
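To confirm the push succeeded from the CLI as well, you can list the images in the repository:

aws ecr describe-images --repository-name bloggy --region us-east-1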
Finally, the image is now in your repository. It will be pulled in the following steps.
In the AWS console, search and navigate to S3 and follow the steps outlined below.
Click on “Create bucket”; you can leave the other options as they are.
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "391161446417.dkr.ecr.us-east-1.amazonaws.com/bloggy:v1.0",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "1337"
    }
  ]
}
Choose the “Dockerrun.aws.json” file via “Add files”, then click “Upload”.
You should get the following result:
Before moving from S3 to Elastic Beanstalk, you need to get the object URL of “Dockerrun.aws.json” by clicking on “Dockerrun.aws.json” under the “Name” column in the previous view.
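Alternatively, you could upload the file from the CLI; a sketch assuming a hypothetical bucket name:

aws s3 cp Dockerrun.aws.json s3://your-bucket-name/
# The object URL then typically follows the pattern:
# https://your-bucket-name.s3.amazonaws.com/Dockerrun.aws.json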
In the AWS console, search and navigate to the Elastic Beanstalk page, then click on “Create Application”.
From the .env file, copy the values of APP_KEYS, API_TOKEN_SALT, ADMIN_JWT_SECRET, and JWT_SECRET, then add them as key-value pairs like below and click “Save”. You do not have to add database credentials; as stated above, Elastic Beanstalk sets them by default.
Leave other options as they are and click on “Save”.
It takes a few minutes for AWS to create the database instance and the server instance. If everything goes well, the health status will show “Ok”, along with the application URL.
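As a quick smoke test, you can hit the admin route on the environment URL (the hostname below is just a placeholder for whatever Elastic Beanstalk assigned you):

curl -I http://bloggy-env.eba-xxxxxxxx.us-east-1.elasticbeanstalk.com/admin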
To automate deployment in a real-world CI/CD environment, you need three steps: build and tag a new image, push it to ECR, and deploy a new application version to Elastic Beanstalk, as sketched below.
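Here is a rough sketch of what those steps could look like as a shell script in a CI job; the version label, environment name, and bucket name are placeholders, not values from this tutorial:

#!/usr/bin/env bash
# Hypothetical CI deploy script: build, push to ECR, roll out to Elastic Beanstalk.
set -euo pipefail

REGISTRY=391161446417.dkr.ecr.us-east-1.amazonaws.com
VERSION=v1.1   # e.g. derive from the commit SHA in a real pipeline

# 1. Build and tag a new image
docker build -t bloggy:$VERSION .
docker tag bloggy:$VERSION $REGISTRY/bloggy:$VERSION

# 2. Authenticate and push to ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $REGISTRY
docker push $REGISTRY/bloggy:$VERSION

# 3. Point Dockerrun.aws.json at the new tag, re-upload it,
#    then create and deploy a new application version
aws s3 cp Dockerrun.aws.json s3://your-bucket-name/Dockerrun.aws.json
aws elasticbeanstalk create-application-version \
  --application-name bloggy \
  --version-label $VERSION \
  --source-bundle S3Bucket=your-bucket-name,S3Key=Dockerrun.aws.json
aws elasticbeanstalk update-environment \
  --environment-name bloggy-env \
  --version-label $VERSION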
In this tutorial, you learned how to deploy Strapi as a Docker container on AWS Elastic Beanstalk, connected to PostgreSQL on the Relational Database Service (RDS). You also saw an overview of how to automate these deployment steps in CI/CD pipelines like GitHub Actions.
You can check out the full source code of the project here.
A full-stack developer looking to build and deploy large-scale applications.