Docker Compose uses a YAML file (docker-compose.yml) to define the services, networks, and volumes that make up your application. The structure is easy to understand and highly configurable, allowing you to manage multiple containers with a single file.
Here's an overview of the basic components of a docker-compose.yml file:
The version key defines which version of the Docker Compose file format is being used. Some features in Docker Compose may only be available in certain versions. For example:
version: '3'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
In this case, we're using an existing image (nginx) from Docker Hub and mapping port 80 in the container to port 8080 on the host machine.
You can define custom networks under the top-level networks key:
networks:
  my_network:
    driver: bridge
Once a network is defined, you can assign services to this network for better isolation and control.
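For example, here is a minimal sketch attaching two services to my_network (the api image name is hypothetical):
services:
  web:
    image: nginx:latest
    networks:
      - my_network
  api:
    image: myapi:latest  # hypothetical application image
    networks:
      - my_network

networks:
  my_network:
    driver: bridge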
Named volumes are declared under the top-level volumes key:
volumes:
  my_volume:
You can then attach this volume to a service to persist data:
services:
  db:
    image: postgres
    volumes:
      - my_volume:/var/lib/postgresql/data
In this example, the Postgres database data is stored in the volume my_volume, ensuring that the data is not lost when the container is stopped, removed, or recreated.
With these basic components, you can already start defining a multi-container application. The flexibility of Docker Compose makes it easy to scale and manage services as your project grows.
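Putting these pieces together, a complete docker-compose.yml combining the snippets above might look like this sketch:
version: '3'

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    networks:
      - my_network
  db:
    image: postgres
    volumes:
      - my_volume:/var/lib/postgresql/data
    networks:
      - my_network

networks:
  my_network:
    driver: bridge

volumes:
  my_volume: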
Each service is defined under the services section in the docker-compose.yml file. The most basic configuration for a service includes specifying an image or a build option, ports to expose, and any additional service-specific configurations like volumes, networks, or environment variables.
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    networks:
      - app_network
  db:
    image: postgres:13
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: secret
    volumes:
      - db_data:/var/lib/postgresql/data
    networks:
      - app_network

networks:
  app_network:

volumes:
  db_data:
The ports option maps container ports to host ports:
web:
  image: nginx:latest
  ports:
    - "8080:80"
In this case, port 80 inside the container is mapped to port 8080 on the host machine.
The environment option sets environment variables inside the container:
environment:
  POSTGRES_USER: admin
  POSTGRES_PASSWORD: secret
The volumes option mounts named volumes or host paths into the container:
volumes:
  - db_data:/var/lib/postgresql/data
The networks option attaches the service to one or more networks:
networks:
  - app_network
Services often depend on one another. The depends_on option controls startup order:
services:
  web:
    image: nginx:latest
    depends_on:
      - db
    ports:
      - "8080:80"
  db:
    image: postgres:13
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: secret
In this setup, Docker Compose ensures that the db service starts before the web service.
With these service configuration options, you have the building blocks to define your application's architecture. Docker Compose makes it simple to manage the lifecycle of each service and its relationships to others in the stack.
Environment variables are an essential part of configuring services in Docker Compose. They allow you to customize the behavior of each service without hardcoding values in your docker-compose.yml file. This flexibility is especially useful when working with different environments, such as development, testing, and production.
You can define environment variables directly under the environment key for each service. This approach is useful for simple configurations, but it can clutter the file if you have many variables.
services:
  app:
    image: myapp:latest
    environment:
      - APP_ENV=production
      - APP_DEBUG=false
In this example, APP_ENV is set to production, and APP_DEBUG is disabled by setting it to false.
A more common practice is to separate environment variables from the docker-compose.yml file by using an .env file. This file contains key-value pairs and allows you to manage your environment variables more cleanly. Docker Compose will automatically load the .env file if it is in the same directory as the docker-compose.yml file. For example:
APP_ENV=production
APP_DEBUG=false
DATABASE_URL=postgres://admin:secret@db:5432/mydb
In your docker-compose.yml file, reference these variables like this:
services:
  app:
    image: myapp:latest
    environment:
      - APP_ENV
      - APP_DEBUG
      - DATABASE_URL
Docker Compose will substitute the values from the .env file automatically when it starts the services.
Alternatively, you can load environment variables from a file explicitly by using the env_file option in the docker-compose.yml file. This is useful when you want to load variables from multiple files or have separate files for different environments (e.g., .env.development, .env.production).
services:
  app:
    image: myapp:latest
    env_file:
      - .env
You can also specify multiple environment files:
services:
  app:
    image: myapp:latest
    env_file:
      - .env
      - .env.custom
If you define environment variables in both the docker-compose.yml file and the .env file, the variables in docker-compose.yml will take precedence. This allows you to keep default values in your .env file while overriding them on a per-service basis.
services:
  app:
    image: myapp:latest
    environment:
      - APP_ENV=development  # Overrides the value from .env
You can also substitute values from the .env file directly into the configuration:
services:
  db:
    image: postgres:13
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
In the .env file:
POSTGRES_USER=admin
POSTGRES_PASSWORD=secret
POSTGRES_DB=my_database
In this setup, the Postgres service will use the credentials defined in the .env file. This setup ensures that sensitive information like passwords is not hardcoded in the docker-compose.yml file.
When you run docker-compose up for the first time, Docker automatically creates a default network for your services. All services in the docker-compose.yml file are attached to this network unless you define custom networks. Services can communicate with each other using their service names as DNS hostnames.
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  db:
    image: postgres
In this setup, the web service can access the db service using the hostname db. There’s no need to define IP addresses; Docker manages the DNS resolution internally.
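For instance, an application service could reach the database by using the service name as the host in its connection string (the app image and variable here are illustrative):
services:
  app:
    image: myapp:latest  # hypothetical application image
    environment:
      # "db" resolves to the db service's container via Compose's internal DNS
      - DATABASE_URL=postgres://admin:secret@db:5432/mydb
  db:
    image: postgres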
To define a custom network, use the networks key in your docker-compose.yml file:
networks:
  frontend_network:
  backend_network:
Then assign services to these networks:
services:
  web:
    image: nginx
    networks:
      - frontend_network
      - backend_network
  db:
    image: postgres
    networks:
      - backend_network
In this example, the web service is attached to both frontend_network and backend_network, allowing it to communicate with both the front-end and back-end services. The db service is only attached to backend_network, which limits its exposure to internal services.
You can also specify a network driver explicitly, such as the default bridge driver:
networks:
  my_bridge_network:
    driver: bridge
You can attach services to this network by specifying it in the services section:
services:
  web:
    image: nginx
    networks:
      - my_bridge_network
  db:
    image: postgres
    networks:
      - my_bridge_network
Now, both services are connected via the my_bridge_network and can communicate freely using their service names (web and db).
If a service needs to share the host's network stack directly, you can use host network mode:
services:
  web:
    image: nginx
    network_mode: "host"
However, be cautious when using the host network mode because it can introduce security risks by exposing your containers directly to the host network.
Services can also join networks created outside of Docker Compose. First, create the network with the Docker CLI:
docker network create my_external_network
Then, in your docker-compose.yml file, define the network as external:
networks:
  my_external_network:
    external: true
Now, you can assign services to this external network:
services:
  web:
    image: nginx
    networks:
      - my_external_network
This allows your web service to communicate with containers that are also connected to my_external_network, even if they are not defined in the same Docker Compose project.
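For example, a second, independent Compose project could join the same network (a hypothetical sketch):
# docker-compose.yml of another project
services:
  worker:
    image: myworker:latest  # hypothetical image
    networks:
      - my_external_network

networks:
  my_external_network:
    external: true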
To make a service reachable from outside Docker, map its container ports to host ports with the ports key:
services:
  web:
    image: nginx
    ports:
      - "8080:80"
In this example, port 80 on the web container is mapped to port 8080 on the host. This makes the web service accessible via http://localhost:8080 on your machine.
One of the major benefits of Docker is the availability of pre-built images for popular software, which you can easily pull from Docker Hub or other container registries. With Docker Compose, you can integrate these images into your docker-compose.yml file, saving time and effort when setting up common services like databases, message brokers, or web servers.
The image option in Docker Compose allows you to specify a pre-built image. Docker will automatically pull the image if it's not available locally when you run docker-compose up.
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
In this example, Docker Compose pulls the nginx:latest image from Docker Hub and runs the container, mapping port 80 in the container to port 8080 on the host machine.
To avoid unexpected changes, pin images to a specific version:
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: secret
In this case, the image postgres:13 is pulled from Docker Hub, ensuring that version 13 of PostgreSQL is used rather than the latest version, which might introduce breaking changes.
To use images from a private registry, authenticate first with the Docker CLI:
docker login myprivateregistry.com
Then, you can reference the private image in your docker-compose.yml:
services:
  app:
    image: myprivateregistry.com/myapp:latest
Docker Compose will automatically use your credentials from the Docker CLI to pull the image.
Many official images offer lightweight variants, such as Alpine-based tags:
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
In this example, the nginx:alpine image is pulled, a lightweight build of Nginx that reduces the container's size and startup time.
You can also extend a pre-built image with your own Dockerfile:
FROM node:14

# Install additional packages
RUN apt-get update && apt-get install -y \
    python \
    build-essential

# Set the working directory
WORKDIR /app

# Copy your application files
COPY . /app

# Install dependencies
RUN npm install

# Start the application
CMD ["npm", "start"]
In your docker-compose.yml, use the build option to build this customized image:
services:
  app:
    build: .
    ports:
      - "3000:3000"
Docker Compose will now build the custom image using the Dockerfile, while still benefiting from the official Node.js base image.
You can mix pre-built and custom-built images in the same file:
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://admin:secret@db:5432/mydb
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: secret
In this setup, the db service uses a pre-built PostgreSQL image, while the app service is built using a custom Dockerfile.
While Docker Compose makes it easy to use pre-built images, sometimes you need more control over how your containers are built. This is where Dockerfiles come in. A Dockerfile is a script that contains instructions on how to build a Docker image from scratch or from a base image. By specifying a Dockerfile in your Docker Compose setup, you can create custom images tailored to your application's needs.
A Dockerfile is built from a handful of core instructions:
- FROM: Specifies the base image you want to build from.
- COPY or ADD: Copies files from your host machine into the container.
- RUN: Executes commands inside the container to install software, set up the environment, etc.
- CMD or ENTRYPOINT: Defines the default command or executable that runs when the container starts.
Here's an example of a basic Dockerfile for a Node.js application:
# Use Node.js official image as the base image
FROM node:14
# Set the working directory in the container
WORKDIR /app
# Copy package.json and install dependencies
COPY package.json ./
RUN npm install
# Copy the rest of the application code
COPY . .
# Expose the application port
EXPOSE 3000
# Start the application
CMD ["npm", "start"]
To build and run this image with Docker Compose:
services:
  app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
    environment:
      NODE_ENV: development
In this example, Docker Compose will look for a Dockerfile in the same directory as the docker-compose.yml file, build the image, and then run the container. The . under build specifies the current directory as the build context, which includes the Dockerfile and the application files.
If your Dockerfile is not in the project root, specify its location explicitly:
services:
  app:
    build:
      context: .
      dockerfile: ./docker/Dockerfile
This tells Docker Compose to use the Dockerfile located in the docker/ directory.
Within the Dockerfile, you can customize the image further, for example by installing extra packages, setting environment variables, or exposing ports:
RUN apt-get update && apt-get install -y python3
ENV NODE_ENV production
EXPOSE 3000
To keep production images small, you can use a multi-stage build:
# Build stage
FROM golang:1.17 AS builder
WORKDIR /app
COPY . .
RUN go build -o myapp .

# Production stage
FROM alpine:3.15
WORKDIR /app
COPY --from=builder /app/myapp .
CMD ["./myapp"]
In this example, the golang image is used for compiling the Go application, but the final container is based on the lightweight alpine image, making the production image much smaller.
You can also override the image's default command from the Compose file:
services:
  app:
    build: .
    ports:
      - "3000:3000"
    command: ["npm", "run", "custom-script"]
This overrides the CMD defined in the Dockerfile and runs the custom script instead.
After changing the Dockerfile, rebuild the images and restart the services with:
docker-compose up --build
This will recreate the images based on the updated Dockerfile and redeploy the services.
Build arguments let you parameterize the build. In your Dockerfile:
ARG APP_ENV=development
RUN echo "Building for $APP_ENV"
And in your docker-compose.yml, you can pass the argument during the build process:
services:
  app:
    build:
      context: .
      args:
        APP_ENV: production
This allows you to customize the build process based on different environments.
You can define volumes in the docker-compose.yml file under the volumes section. Volumes can be either named or anonymous. Named volumes have explicit names and can be reused across multiple services, while anonymous volumes are automatically generated by Docker and have no specific name.
volumes:
  db_data:
In this example, we’ve defined a volume named db_data that can be shared between services.
Here's an example of attaching the db_data volume to a PostgreSQL service to persist the database data:
services:
  db:
    image: postgres:13
    volumes:
      - db_data:/var/lib/postgresql/data
In this case, the db_data volume is mapped to /var/lib/postgresql/data inside the container, ensuring that any data stored by PostgreSQL is saved outside the container.
You can also mount a directory from the host machine into the container (a bind mount):
services:
  app:
    image: myapp:latest
    volumes:
      - ./app:/usr/src/app
In this example, the local directory ./app on the host is mounted to /usr/src/app inside the container. This is especially useful in development environments where you want to reflect changes in real-time.
Named volumes can also be shared between services:
services:
  web:
    image: nginx:latest
    volumes:
      - shared_data:/usr/share/nginx/html
  worker:
    image: myworker:latest
    volumes:
      - shared_data:/usr/src/app/data

volumes:
  shared_data:
In this setup, both the web and worker services have access to the shared_data volume. The web service stores its static content in the volume, while the worker service reads or processes the same data.
Here's a complete example that persists MySQL data:
services:
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
      MYSQL_DATABASE: mydatabase
    volumes:
      - mysql_data:/var/lib/mysql

volumes:
  mysql_data:
In this setup, the mysql_data volume is used to persist MySQL data in /var/lib/mysql. This ensures that even if the mysql container is stopped or recreated, the database data remains intact.
To back up a named volume, you can archive its contents from a temporary container:
docker run --rm -v db_data:/volume -v $(pwd):/backup busybox tar cvf /backup/db_data.tar /volume
To restore the volume, simply reverse the process:
docker run --rm -v db_data:/volume -v $(pwd):/backup busybox tar xvf /backup/db_data.tar -C /volume
Volumes can also use other storage backends through driver options. For example, you can back a volume with an NFS share using the local driver's NFS options:
volumes:
  my_custom_volume:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.100,rw
      device: ":/path/to/share"
This example sets up an NFS volume, allowing your service to persist data on a remote NFS server.
Unused volumes accumulate over time. To remove all volumes not used by at least one container, run:
docker volume prune
To remove a specific volume, use:
docker volume rm volume_name
Volumes are a powerful feature in Docker Compose that allow you to persist and share data between containers. Whether you’re storing database data, sharing files across services, or mounting host directories, volumes provide a flexible and reliable way to manage data within your containerized applications.
You can scale services by using the --scale option with docker-compose up:
docker-compose up --scale web=3
This command will start 3 instances of the web service. To ensure proper load balancing between the scaled services, you may need to configure a load balancer (like NGINX) or rely on Docker's internal round-robin DNS resolution.
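Note that scaling a service that publishes a single fixed host port will fail, because every replica would try to bind the same port. One workaround is to publish a host port range so each replica binds its own port, as in this sketch:
services:
  web:
    image: nginx:latest
    ports:
      # Each of the three replicas binds one host port from the range
      - "8080-8082:80"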
Alternatively, you can define service replicas in your docker-compose.yml:
services:
  web:
    image: nginx:latest
    deploy:
      replicas: 3
The depends_on option controls the order in which services start:
services:
  web:
    image: nginx
    depends_on:
      - db
  db:
    image: postgres
However, note that depends_on only controls the startup order; it does not wait for the dependent service to be "ready" (e.g., wait for the database to be accepting connections). For more robust dependency management, consider using health checks (covered below) or custom retry logic in your application.
You can define a health check for a service with the healthcheck option:
services:
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 30s
      timeout: 10s
      retries: 5
In this case, Docker will check every 30 seconds whether the Postgres database is ready to accept connections. If the service fails the check 5 times, Docker marks the service as unhealthy.
You can use this health status in combination with other services, ensuring that dependent services only start once the service they rely on is healthy.
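For example, the long-form depends_on syntax (available in the version 2 file format and again in recent Compose releases) can wait for the health check to pass; this is a sketch of that pattern:
services:
  web:
    image: nginx
    depends_on:
      db:
        # Start web only after db's health check reports healthy
        condition: service_healthy
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 30s
      timeout: 10s
      retries: 5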
You can cap the resources a service may consume under deploy.resources:
services:
  web:
    image: nginx
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: "512M"
In this example, the web service is limited to using 50% of the CPU and 512MB of memory. You can also set reservation values to guarantee a certain amount of resources for a container.
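For example, a sketch that adds reservations alongside the limits:
services:
  web:
    image: nginx
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: "512M"
        reservations:
          # Guaranteed minimum resources for the container
          cpus: "0.25"
          memory: "256M"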
To keep services running after crashes, set a restart policy:
services:
  web:
    image: nginx
    restart: always
In this case, the web service will always be restarted if it crashes or is stopped unintentionally.
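Other policies include on-failure and unless-stopped; the service and image names in this sketch are illustrative:
services:
  worker:
    image: myworker:latest  # hypothetical image
    # Restart only when the container exits with a non-zero status
    restart: on-failure
  cache:
    image: redis:7
    # Restart unless explicitly stopped by the user
    restart: unless-stopped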
For environment-specific configuration, you can layer multiple Compose files:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
In this example, the docker-compose.prod.yml file extends or overrides configurations from the base docker-compose.yml file, allowing you to customize settings for production.
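A minimal sketch of such an override file (the values shown are assumptions for illustration):
# docker-compose.prod.yml
services:
  web:
    environment:
      - APP_ENV=production
    restart: always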
Profiles let you mark services as optional and start them only when needed:
services:
  web:
    image: nginx
    profiles:
      - production
  debug:
    image: busybox
    profiles:
      - debug
You can specify the profile to use when running Docker Compose:
docker-compose --profile production up
In this case, only the web service will be started, as it belongs to the production profile.
Docker Compose also supports secrets for handling sensitive data:
services:
  app:
    image: myapp:latest
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
In this example, the secret db_password is stored in an external file and made available to the app service. Docker Compose automatically ensures that the secret is only accessible to the service that needs it.
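Inside the container, each secret is mounted as a file under /run/secrets/<name>. Many official images can read credentials from such files; for example, the postgres image supports a POSTGRES_PASSWORD_FILE variable:
services:
  db:
    image: postgres:13
    environment:
      # The official postgres image reads the password from this file
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt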
As your application grows, your docker-compose.yml file can become more complex. Following best practices can help you maintain clean, readable, and scalable configurations, making it easier to manage and deploy your applications. Below are some key best practices to keep in mind when working with Docker Compose files.
Hardcoding values like database passwords, API keys, and service configuration in your docker-compose.yml file can lead to security risks and reduced flexibility. Instead, use environment variables to manage configuration, particularly when deploying to different environments (e.g., development, testing, production).
services:
  app:
    image: myapp:latest
    environment:
      - DATABASE_URL=${DATABASE_URL}
And in your .env file:
DATABASE_URL=postgres://admin:secret@db:5432/mydb
This setup ensures that sensitive information is not hardcoded, and you can easily switch configurations by modifying the .env file.
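You can also provide fallbacks with the ${VAR:-default} substitution syntax, so the stack still starts when a variable is unset:
services:
  app:
    image: myapp:latest
    environment:
      # Falls back to "development" if APP_ENV is not set in the shell or .env
      - APP_ENV=${APP_ENV:-development}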
Keep environment-specific settings in separate override files and combine them at startup:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
This approach keeps your configurations organized and easier to manage.
Prefer named volumes for data that must outlive containers:
services:
  db:
    image: postgres
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:
Named volumes also make it simpler to perform backups or migrate data between environments.
Set resource limits so a single service cannot starve the rest of the stack:
services:
  web:
    image: nginx:latest
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: "512M"
This ensures that the web service only consumes half a CPU core and 512MB of memory, avoiding resource contention.
Use profiles to keep development-only services out of production:
services:
  debug:
    image: busybox
    command: sleep 1000
    profiles:
      - debug
When you deploy to production, simply omit the debug profile:
docker-compose --profile production up
Order your Dockerfile instructions to take advantage of layer caching, for example by installing dependencies before copying the application code:
# Install dependencies first
COPY package.json /app
RUN npm install

# Then copy the rest of the application code
COPY . /app
This ensures that if you only modify your application code, Docker can reuse the cached layers for dependency installation, speeding up the build process.
Avoid storing sensitive information, such as database passwords or API keys, directly in your docker-compose.yml file or environment variables. Instead, use Docker secrets for sensitive data in production environments.
services:
  db:
    image: postgres
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
This way, secrets are managed securely and are only accessible to the service that needs them.
To validate your configuration before deploying, run:
docker-compose config
This will print the merged configuration, showing any syntax errors or misconfigurations.
Periodically clean up unused containers, networks, and images:
docker system prune
To remove unused volumes specifically, run:
docker volume prune
By following these best practices, you can keep every part of your docker-compose.yml file, including services, networks, and volumes, clean, secure, and easy to maintain.