If you've ever tried to spin up a web app alongside a database, a cache layer, and maybe a reverse proxy, all using individual docker run commands, you already know the pain. You end up juggling network creation, volume flags, environment variables, and teardown scripts. One missed flag and the whole stack breaks in ways that eat your afternoon.
Docker Compose exists to eliminate that friction. It's a tool that lets you define your entire multi-container application in a single YAML file and manage it with a handful of commands. Rather than issuing imperative instructions container by container, you describe your desired state declaratively, and Compose handles the rest.
Today, Docker Compose is a CLI plugin invoked with docker compose (note the space, not a hyphen). It ships with Docker Desktop, and the install docs cover standalone setup on Linux, macOS, and Windows.
This guide covers what Docker Compose is, how it works under the hood, its core benefits, the commands you'll reach for daily, a full YAML walkthrough, common use cases, and best practices for production-ready configurations.
In brief:
- Docker Compose defines multi-container applications in a single compose.yaml file using a declarative YAML model.
- It orchestrates services, networks, and volumes with commands like docker compose up and docker compose down.
- Modern Compose is a Docker CLI plugin; older docker-compose references in tutorials reflect legacy syntax (see the retired docs).
- It's ideal for local development, CI/CD testing, and single-host deployments of tools like databases, caches, and headless CMS platforms such as Strapi.
What Is Docker Compose?
Docker Compose is a declarative tool for defining and running multi-container Docker applications. Instead of writing shell scripts full of docker run commands, you describe your application's architecture in a compose.yaml file, and Compose builds, networks, and starts everything for you.
The configuration model centers on three core building blocks:
- Services: the containers that make up your application (a Node.js API, a PostgreSQL database, a Redis cache).
- Networks: how those containers communicate with each other, with automatic DNS-based service discovery.
- Volumes: where persistent data lives so it survives container restarts and rebuilds.
When you run docker compose up, Compose reads your YAML file, pulls or builds the necessary images, creates networks, provisions volumes, and starts containers in the correct order. When you run docker compose down, it tears everything back down. One file, two commands.
It's worth understanding the evolution. The Compose history docs trace the shift from the standalone Python docker-compose binary to the Go-based Docker CLI plugin invoked as docker compose. The old Python binary is listed in the retired docs. So if you see docker-compose with a hyphen in older tutorials, treat it as legacy syntax rather than current usage.
One more thing: the top-level version: key that older compose.yaml files included? It's deprecated; Compose now ignores it and warns when it's present. Remove it from all new files. The current Compose Specification is a rolling standard with no version field.
Docker Compose vs Docker Run
The difference boils down to imperative vs. declarative:
| Dimension | docker run | Docker Compose |
|---|---|---|
| Paradigm | Imperative: step-by-step commands | Declarative: desired state in YAML |
| Scope | Single container per command | Multi-container application |
| Networking | Manual: create and connect yourself | Automatic: DNS by service name |
| Reproducibility | Shell scripts or tribal knowledge | YAML committed to version control |
| Teardown | Stop and remove each container individually | docker compose down handles everything |
Reach for docker run when you need a quick one-off container: testing an image interactively or running a standalone utility. The moment your app depends on more than one service, switch to Compose. It's far easier for a teammate to clone the repo and run docker compose up than to decipher a long shell script.
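To make the trade-off concrete, here's a sketch of the same hypothetical two-container stack both ways. The service names, image name, and ports are illustrative, not from a real project:

```yaml
# Imperative (illustrative docker run sequence):
#   docker network create app-net
#   docker run -d --name cache --network app-net redis:7.2-alpine
#   docker run -d --name web --network app-net -p 8080:80 \
#     -e CACHE_URL=redis://cache:6379 my-app
#
# Declarative: the same stack as a compose.yaml.
services:
  web:
    image: my-app              # placeholder image name
    ports:
      - "8080:80"
    environment:
      CACHE_URL: redis://cache:6379   # service name resolves via Compose DNS
    depends_on:
      - cache
  cache:
    image: redis:7.2-alpine
```

The network creation, naming, and wiring that the shell commands spell out step by step become implicit in the declarative version.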
How Docker Compose Works
The lifecycle follows a predictable pattern:
- Write a compose.yaml: define your services, their images or build contexts, networking, volumes, and environment variables.
- Run docker compose up: Compose reads the file, pulls or builds images, creates a default bridge network, resolves dependencies via depends_on, and starts containers in the correct order.
- Develop and iterate: cached config means re-running up only recreates containers whose configuration actually changed.
- Tear down with docker compose down: stops containers, removes them, and cleans up networks. Add -v to also wipe named volumes.
Compose resolves startup order using depends_on. In its short form (depends_on: [db]), it only waits for the container process to start, not for the service to be ready. For databases and other services with initialization delays, use the long form with condition: service_healthy and pair it with a healthcheck definition, as shown in the services reference. This is one of those things teams often learn after debugging why their app crashes on startup.
Anatomy of a compose.yaml File
Here's a minimal two-service stack, a Node.js app backed by PostgreSQL:
```yaml
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/mydb
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16.3
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: mydb
    volumes:
      - db-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d mydb"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s

volumes:
  db-data:
```

The top-level keys break down as follows:
- services: each child key defines a container. api builds from a local Dockerfile; db pulls a pre-built image.
- environment: sets variables inside the container. Notice how api references db by service name in its connection string; Compose's automatic DNS handles that.
- volumes (top-level): declares a named volume (db-data) so PostgreSQL data persists across container restarts.
- depends_on with condition: service_healthy: ensures api only starts after PostgreSQL passes its health check.
- healthcheck: defines how Compose determines whether db is ready to accept connections.
Key Benefits of Using Docker Compose
Portable Development Environments
A single docker compose up -d command brings up an identical environment on any machine with Docker installed. No more "works on my machine" arguments or multi-page setup guides. This is particularly valuable for onboarding: a new developer can clone your repo and have a fully running stack without spending extra time installing dependencies.
Compose variables and env files let you customize behavior per environment without modifying the YAML itself. Different database credentials for local vs. staging? Different port mappings for different team members? All handled without touching the shared configuration.
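As a sketch of that pattern (the variable names here are illustrative), a gitignored .env file next to the compose.yaml supplies values that Compose interpolates at parse time:

```yaml
# .env (one per developer or environment, not committed):
#   APP_PORT=3100
#   POSTGRES_TAG=16.3
#
# compose.yaml references those values with fallbacks:
services:
  api:
    build: ./api
    ports:
      - "${APP_PORT:-3000}:3000"          # per-developer host port
  db:
    image: "postgres:${POSTGRES_TAG:-16.3}"   # defaults to 16.3 if unset
```

Two developers can run the same stack on different host ports without ever editing the shared YAML.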
Isolated, Reproducible Testing
Compose provides a way to create and destroy isolated testing environments for your test suite. The pattern is straightforward:
```shell
docker compose up -d
./run_tests
docker compose down
```

Your integration and E2E tests run against real services, an actual PostgreSQL instance, an actual Redis cache, rather than mocked substitutes. And because each run gets fresh containers, tests don't pollute each other or the host operating system. This kind of deterministic testing is difficult to achieve with local installations where state accumulates between runs.
Simplified Multi-Service Orchestration
Managing networking, health checks, restart policies, and dependency ordering across multiple containers is tedious when done manually. Compose consolidates all of this into a single file. Services on the same Compose network discover each other automatically by name via Docker's embedded DNS server.
Health checks gate startup order. Restart policies keep critical services running. The result is a self-documenting application topology that anyone on the team can read and understand.
For full-stack developers building content-driven applications, where you might have an API backend, a database, a cache layer, and maybe a frontend dev server, this consolidation reduces setup overhead and keeps the stack easier to reason about.
Essential Docker Compose Commands
These are the commands you'll reach for daily. All use the current docker compose (space) syntax.
Start and stop services:
```shell
docker compose up -d           # Start all services in background
docker compose up -d --build   # Rebuild images, then start
docker compose up -d --wait    # Wait for healthy status before returning
docker compose down            # Stop and remove containers + networks
docker compose down -v         # Also wipe named volumes (careful, deletes DB data)
```

Monitor and debug:
```shell
docker compose ps              # List running containers
docker compose ps -a           # Include stopped containers
docker compose logs -f         # Follow logs from all services
docker compose logs -f api     # Follow logs from one service
docker compose logs --tail=100 # Last 100 lines
```

Interact with running containers:
```shell
docker compose exec api sh                   # Open a shell in a running container
docker compose exec -T api npm test          # Non-interactive (use -T in CI scripts)
docker compose run --rm api npm run migrate  # One-off command, auto-remove container
```

Build and watch:
```shell
docker compose build            # Build all service images
docker compose build --no-cache # Clean build, ignore layer cache
docker compose watch            # Hot-reload mode for development
```

The watch command monitors your source files and automatically syncs changes, restarts containers, or triggers rebuilds depending on your develop.watch configuration. For frameworks with Hot Module Replacement (HMR) like Vite or Webpack, the sync action copies changed files into the running container, and the framework picks up the change instantly, with no manual restart needed.
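A minimal develop.watch configuration might look like this (the paths are illustrative):

```yaml
services:
  api:
    build: ./api
    develop:
      watch:
        # Copy changed source files into the running container; the dev
        # server's HMR picks them up without a restart.
        - action: sync
          path: ./api/src
          target: /app/src
        # A dependency change invalidates the image, so trigger a rebuild.
        - action: rebuild
          path: ./api/package.json
```

Running docker compose watch then keeps the container in sync with your working tree as you edit.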
Docker Compose YAML File Example: Step-by-Step
Here's a realistic three-service stack, a web application, PostgreSQL database, and Redis cache, that demonstrates the features you'll use in real projects. This is a complete, copy-pasteable compose.yaml:
```yaml
services:
  web:
    build:
      context: ./app
      target: production
    ports:
      - "${APP_PORT:-3000}:3000"
    env_file:
      - path: ./default.env
        required: true
      - path: ./override.env
        required: false
    secrets:
      - api_key
    networks:
      - frontend
      - backend
    depends_on:
      db:
        condition: service_healthy
        restart: true
      cache:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:3000/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    restart: unless-stopped

  db:
    image: postgres:16.3
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U myapp"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
    restart: always

  cache:
    image: redis:7.2-alpine
    volumes:
      - cache-data:/data
    networks:
      - backend
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
    restart: unless-stopped

volumes:
  db-data:
  cache-data:

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge

secrets:
  db_password:
    file: ./secrets/db_password.txt
  api_key:
    file: ./secrets/api_key.txt
```

Here's what each section does:
Build context and multi-stage target: web builds from ./app and targets the production stage in a multi-stage Dockerfile. This keeps the final image lean by excluding build tools and dev dependencies.
Port mappings with defaults: ${APP_PORT:-3000}:3000 uses a variable with a fallback. Each developer can set APP_PORT in their .env file to avoid port conflicts without editing the shared YAML.
Environment variables via .env files: default.env is required; override.env is optional and silently ignored if missing. This env_file approach keeps configuration out of the YAML itself. The environment key in db takes priority over env_file values when both set the same variable.
Docker secrets: the database password is mounted at /run/secrets/db_password rather than passed as a plain environment variable. Many official images support the _FILE suffix pattern, like POSTGRES_PASSWORD_FILE, for reading secrets from files.
Named volumes: db-data and cache-data are declared at the top level and persist data across container restarts. Without these, a docker compose down would wipe your database.
Custom networks: frontend and backend segment traffic. The cache and db services only exist on backend, while web bridges both. This mirrors a real-world topology where your database shouldn't be directly reachable from a public-facing network.
Health checks with depends_on conditions: web waits for both db and cache to pass their health checks before starting. The restart: true on the dependency means web restarts if the database container is recreated.
Restart policies: always for the database (availability is paramount), unless-stopped for app services (respects intentional stops), and the default no for anything you'd rather manage manually.
Common Use Cases for Docker Compose
Local Development Stacks
This is where most developers first encounter Compose. Need a backend API, a database, and a cache running together? Define them in one file, start them with one command. Frameworks built on Node.js, including content management systems that generate REST and GraphQL APIs, pair naturally with Compose because you can bundle the application alongside its required database (PostgreSQL, MySQL, SQLite) and any additional services.
CI/CD Pipelines
Compose in CI creates deterministic test environments in platforms like GitHub Actions and GitLab CI. The pattern is the same as local testing:
```shell
docker compose up -d
./run_tests
docker compose down --volumes
```

GitHub runners come with Docker and Compose pre-installed, so there's zero setup overhead. For GitLab CI, running full Compose workflows requires Docker-in-Docker (DinD) with the correct TLS configuration, specifically DOCKER_HOST: tcp://docker:2376 and DOCKER_TLS_CERTDIR: "/certs". Without these, expect connection errors.
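Wired into a GitLab job, that DinD setup might look like the following sketch. The image tags, job name, and test script are illustrative; the two variables are the ones noted above:

```yaml
# .gitlab-ci.yml (sketch)
integration-tests:
  image: docker:27            # client image with the compose plugin
  services:
    - docker:27-dind          # the Docker daemon runs as a CI service
  variables:
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker compose up -d --wait
    - ./run_tests
    - docker compose down --volumes
```

On GitHub Actions the same three script lines work as plain run: steps, since the runner already has a local Docker daemon.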
The key advantage over platform-specific service containers is portability: the same compose.yaml that works on your laptop works in CI without modification.
Self-Hosting Open-Source Tools
Compose is the standard way to self-host complex applications that depend on multiple services. A headless CMS like Strapi, which is built on Node.js and needs a database to store content, is a textbook example. You define the Strapi service alongside PostgreSQL in a single compose.yaml, configure environment variables for the database connection, add a named volume for data persistence, and run docker compose up -d. The entire CMS stack is running as a single managed setup.
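As a sketch of that stack (the build path, volume paths, credentials, and environment variable names are illustrative; check Strapi's deployment docs for the exact settings your version expects):

```yaml
services:
  strapi:
    build: ./strapi-app        # a Strapi project with its own Dockerfile
    ports:
      - "1337:1337"
    environment:
      DATABASE_CLIENT: postgres
      DATABASE_HOST: db        # Compose DNS resolves the service name
      DATABASE_PORT: 5432
      DATABASE_NAME: strapi
      DATABASE_USERNAME: strapi
      DATABASE_PASSWORD: strapi
    volumes:
      - strapi-uploads:/opt/app/public/uploads   # persist uploaded media
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16.3
    environment:
      POSTGRES_DB: strapi
      POSTGRES_USER: strapi
      POSTGRES_PASSWORD: strapi
    volumes:
      - db-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U strapi"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  strapi-uploads:
  db-data:
```

In a real deployment you'd move the passwords into Docker secrets or an env file, as covered in the best practices below.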
This pattern applies broadly: monitoring tools, media libraries, CI runners, analytics platforms, anything that ships a Docker image and needs a backing database can be composed into a self-hosted stack without writing a single line of infrastructure code. For teams that want full control over their deployment model while avoiding the complexity of manual container management, Compose hits a practical middle ground.
Best Practices for Docker Compose
- Use compose.yaml as the preferred filename. Per the Compose Specification, compose.yaml is the canonical filename. Compose still supports docker-compose.yml for backward compatibility, but new projects should use the preferred name. No flags or configuration changes are required; CLI discovery is automatic.
- Never hard-code secrets. If you're injecting passwords and API keys as inline environment variables, you risk unintentional exposure through logs, process listings, and version control. Use Docker secrets (mounted as files at /run/secrets/) for sensitive values like database passwords and API keys. For non-sensitive configuration that still benefits from externalization, use .env files and add them to .gitignore.
- Pin image tags. Tags are mutable: anyone with push access to a registry can overwrite a tag silently. Per Docker's trust model, a reference you reviewed last week may point to different content today. At minimum, pin to a specific version (postgres:16.3). For maximum security, pin to a digest (postgres:16.3@sha256:...).
- Use multi-stage builds to keep images lean. Per build best practices, multi-stage builds create a cleaner separation between building and the final output. Build your application in one stage, then copy only the runtime artifacts into a minimal production image. Reference the target stage in your compose.yaml with build.target: production.
- Add health checks and restart: unless-stopped for resilience. Health checks enable proper startup ordering with condition: service_healthy. The unless-stopped restart policy keeps application services running through daemon restarts while still respecting intentional docker compose stop commands. Use always for critical data services like databases where availability is the primary concern.
- Use profiles to toggle optional services. Debug tools, admin UIs, and monitoring stacks can live in the same compose.yaml without starting by default. Assign them a profile like debug, and activate with docker compose --profile debug up. This keeps your development config in one place without cluttering every developer's running stack with services they don't always need.
- Inspect resolved configuration before deploying. Run docker compose config to see the fully resolved YAML, including all variable interpolation, .env file values, and merged overrides. This is especially useful when debugging why a service isn't picking up the configuration you expect.
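The profiles practice can be sketched like this (the service names and image tag are illustrative):

```yaml
services:
  api:
    build: ./api               # always starts
  adminer:
    image: adminer:4           # DB admin UI; opt-in only
    profiles: [debug]          # skipped unless the profile is active
    ports:
      - "8081:8080"
```

A plain docker compose up starts only api; docker compose --profile debug up adds the adminer service as well.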
One File, One Command, Your Entire Stack
Docker Compose takes the complexity of multi-container application management and reduces it to a single, version-controlled YAML file and a handful of commands. For full-stack developers building applications that depend on databases, caches, and API services, it eliminates the infrastructure busywork that pulls focus away from feature development.
Whether you're spinning up a local development stack, running integration tests in CI, or self-hosting an open-source headless CMS with its backing database, Compose handles the orchestration so you can focus on building. The best way to internalize it is to try it hands-on: grab a compose.yaml, run docker compose up, and see your entire stack come to life.
Ready to see it in action? Check out the Strapi tutorial, or dive into the Compose docs for the full reference.
Victor Coisne is VP Marketing at Strapi, the leading Open Source Headless CMS. As an open-source and developer community enthusiast, Victor has been working for Open Core B2B companies for more than a decade including 5+ years as Head of Community at Docker. In his free time, Victor enjoys spending time with friends, wine tasting, playing chess, tennis or soccer.