DevOps Tools Introduction #07: Container Orchestration

While an individual Docker container usually runs a single program, an application in a microservice architecture is typically composed of multiple containers, each contributing a specific set of features to the complete application. Coordinating the creation and maintenance of the containers that make up an application is the task of container orchestration tools.

Docker Compose and Podman Compose are two of the most widely adopted tools for defining and running multi-container applications. They allow developers to describe an entire application stack in a single YAML file, making it easy to spin up complex environments with a single command. Understanding how they work — and how they differ — is essential for modern Linux system administration and DevOps practices.

Docker Compose

Docker Compose follows a client-server model. The docker compose command (or the legacy docker-compose binary) communicates with the Docker daemon running on the host. The daemon is responsible for creating containers, networks, and volumes based on the instructions defined in the Compose file.
Docker Compose organizes resources around a project. By default, the project name is derived from the directory containing the Compose file. All containers, networks, and volumes belonging to the same project share a common naming prefix, making them easy to identify and manage collectively.
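As a quick sketch of how the project prefix surfaces in practice (the directory name shop is purely illustrative):

```shell
# Run from a directory named "shop": the project name defaults to "shop",
# so containers are named shop-web-1, shop-db-1, and so on
$ docker compose up -d
$ docker compose ps

# Override the default project name with -p
$ docker compose -p staging up -d
```

Running the same file under a different project name creates a second, independent set of containers, networks, and volumes.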

Podman Compose

Podman Compose takes a fundamentally different approach. Unlike Docker, Podman is daemonless — it does not rely on a persistent background service. Each container runs as a direct child process of the user invoking the command, meaning containers can be run as non-root users without privilege escalation.
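Because Podman Compose reads the same Compose file format, the workflow is nearly identical; assuming podman and podman-compose are installed, a rootless session looks roughly like this:

```shell
# No daemon required: run as an unprivileged user
$ podman-compose up -d

# The containers run as child processes of the invoking user
$ podman ps

# Tear the stack down again
$ podman-compose down
```

Recent Podman versions can also expose a Docker-compatible socket (via podman system service), which lets the regular docker compose client drive Podman as a backend.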

The YAML Compose File

A Compose file is a YAML document describing the services, networks, and volumes of an application. Since version 3, the format has been unified between Docker Compose and Docker Swarm. Under the current Compose Specification the version key is optional and purely informational, although many existing files still declare it for compatibility with older tooling.

A minimal Compose file follows:

version: '3.8'

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"

This snippet defines a service named web in a Docker Compose file, with the following attributes:

  • services
    This section lists all the containers that are part of the application. In this simple example, there is only one container, named web.
  • web
    The name of the service. Docker Compose will use this name as the container’s hostname within the internal network.
  • image: nginx:latest
    The container will be created from the official Nginx image, using the latest tag.
  • ports
    This publishes container ports to the host machine.
  • "8080:80"
    This string maps port 8080 on the host to port 80 inside the container.
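Assuming the file above is saved as compose.yaml (or docker-compose.yml) and port 8080 is free on the host, the service can be started and checked like this:

```shell
# Start the stack in the background
$ docker compose up -d

# Request the default Nginx welcome page through the published port
$ curl http://localhost:8080

# Tear the stack down again
$ docker compose down
```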

Running an Application

Instead of launching containers individually, Docker Compose or Podman Compose reads the declarative configuration and automatically creates the required networks, volumes, and service containers in the correct order. This approach ensures consistency, repeatability, and simplified lifecycle management, making it easier to deploy, test, and maintain multi-container applications.

Common Docker Compose commands follow:

# Start all services in detached mode
$ docker compose up -d

# View running services
$ docker compose ps

# View logs from all services
$ docker compose logs -f

# Stop and remove all containers and networks
$ docker compose down

What are Services, Networks, and Volumes?

We’ve been using these terms throughout this article without defining them. Although they are common computing terms, they carry specific meanings for Docker and Podman, described below.

Services

A service defines how a container should run, including which image to use, how it should be configured, what ports it exposes, which volumes it mounts, and which networks it connects to. Each service represents a specific role within the application, such as a web server, database, cache, or backend API. Although a service typically runs a single container instance by default, it can also be scaled to run multiple replicas. Services allow developers to model multi-container applications in a structured, declarative way, ensuring that all components work together as a cohesive system.

A service can use a pre-built image from a repository or build an image from a local Dockerfile.
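The two options can be sketched side by side; the ./api directory and its Dockerfile are hypothetical:

```yaml
services:
  cache:
    # Pre-built image pulled from a registry
    image: redis:7-alpine

  api:
    # Image built locally from ./api/Dockerfile
    build:
      context: ./api
      dockerfile: Dockerfile
```

When a build section is present, docker compose build (or docker compose up --build) rebuilds the image from the local sources.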

Networks

Networks define how services communicate with each other and with the outside world. By default, Compose creates an isolated bridge network where all services can reach one another using their service names as DNS hostnames. Custom networks can be defined to segment traffic, improve security, or control communication boundaries between groups of services. For example, a frontend service may be connected to both a public network and a private backend network, while a database service may only be attached to the private network. This logical separation enhances security and maintains a clean architecture within multi-container applications.

Services can be attached to multiple networks. In the following example, the proxy communicates with both the public internet and the private backend, while the database remains isolated:

services:
  proxy:
    image: nginx
    networks:
      - frontend
      - backend
  db:
    image: postgres
    networks:
      - backend

networks:
  frontend:
  backend:

Volumes

Volumes provide persistent storage for containers. Since containers are ephemeral by design, any data stored inside them is lost when they are removed. Volumes solve this problem by storing data outside the container’s writable layer, ensuring that important information—such as database files, logs, or uploaded content—remains intact across restarts and updates. Compose allows you to define named volumes for managed storage or use bind mounts to map specific host directories into containers. By separating application logic from persistent data, volumes support durability, portability, and safer container lifecycle management.
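The distinction between the two storage styles can be sketched as follows (the paths and names are illustrative):

```yaml
services:
  db:
    image: postgres:15
    volumes:
      # Named volume: managed by the engine, survives container removal
      - db_data:/var/lib/postgresql/data
      # Bind mount: maps a specific host directory into the container
      - ./init-scripts:/docker-entrypoint-initdb.d:ro

volumes:
  db_data:
```

Named volumes must be declared in the top-level volumes section; bind mounts reference host paths directly and need no declaration.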

A Complete Example: Web Application Stack

The following Compose file demonstrates a production-style stack with NGINX, a Node.js app, and PostgreSQL:

version: '3.8'

services:
  proxy:
    image: nginx:1.25-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
    depends_on:
      - app
    networks:
      - frontend
    restart: unless-stopped

  app:
    build: ./app
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgres://admin:secret@db:5432/appdb
    depends_on:
      db:
        condition: service_healthy
    networks:
      - frontend
      - backend
    restart: unless-stopped

  db:
    image: postgres:15
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=appdb
    volumes:
      - db_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U admin -d appdb"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend
    restart: unless-stopped

networks:
  frontend:
  backend:
    internal: true

volumes:
  db_data:

The proxy service uses the lightweight nginx:1.25-alpine image and exposes port 80 on the host, mapping it to port 80 inside the container. This makes the web application accessible externally. It mounts a local directory (./nginx/conf.d) into the container’s Nginx configuration directory as read-only (:ro), ensuring that custom configuration files are applied securely. The depends_on directive ensures that the app service is started before the proxy; note that in this short form it controls start order only, not application readiness. It connects only to the frontend network and uses a restart policy of unless-stopped, meaning that in case of failure it will automatically restart unless explicitly stopped.

The app service is built from a local directory (./app), indicating that it uses a Dockerfile instead of pulling a prebuilt image. It defines environment variables, including NODE_ENV=production and a DATABASE_URL pointing to the db service. Notice that the database hostname is db, which works because services on the same network can resolve each other by service name. The depends_on configuration includes a condition requiring the database service to be healthy before starting, improving startup reliability. The application connects to both the frontend and backend networks, acting as a bridge between external traffic and the internal database layer.

The db service uses the official postgres:15 image and defines environment variables to initialize the database user, password, and database name. It mounts a named volume (db_data) to /var/lib/postgresql/data, ensuring that database data persists even if the container is recreated. A health check is configured using pg_isready, which periodically verifies database availability. The service is connected only to the backend network, isolating it from direct external access. Like the other services, it uses the unless-stopped restart policy.

The networks section defines two custom networks. The frontend network allows communication between the proxy and the application. The backend network is marked as internal: true, meaning it is isolated from external access. This enhances security by preventing direct connections to the database from outside the application.

The volumes section defines the named volume db_data, which is managed by the container engine. This volume stores persistent database data independently of the container lifecycle, ensuring durability across updates and restarts.

This Compose file demonstrates a clean separation of responsibilities, secure network segmentation, health-aware dependencies, and persistent storage management within a structured, declarative application model.

Update Running Containers to Newer Images

One of the key advantages of Docker Compose is the ability to update running services in a controlled and predictable way. Instead of manually stopping containers, pulling new images, and recreating them one by one, Compose manages the entire lifecycle declaratively based on the configuration file.

When a new version of an image becomes available—such as nginx:latest or postgres:15—the first step is to pull the updated images:

$ docker compose pull

This command downloads the newest versions of the images defined in the Compose file without stopping the running containers.

After pulling the updated images, you can recreate the containers using:

$ docker compose up -d

Compose compares the currently running containers with the updated image definitions. If it detects that an image has changed, it stops the old container and creates a new one using the updated image. This process preserves named volumes and networks, ensuring that persistent data (such as database files) remains intact.

Importantly, Docker Compose replaces containers rather than modifying them in place. This immutable infrastructure approach increases reliability and reduces configuration drift. Because volumes are external to the container filesystem, application data persists even when containers are recreated.

By using Docker Compose to manage updates, teams achieve consistent deployments, simplified rollback strategies (by changing image tags), and safer lifecycle management of multi-container applications.
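A rollback can be sketched the same way: point the service at the previous image tag and recreate it. The tags and service name below are illustrative:

```shell
# In the Compose file, change the image tag back, e.g.
#   image: nginx:1.25-alpine  ->  image: nginx:1.24-alpine

# Recreate only the affected service with the older image
$ docker compose up -d proxy

# Optionally remove unused images afterwards
$ docker image prune
```

Because volumes live outside the containers, rolling an image back does not touch persistent data.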

Mastery of these concepts is a foundational requirement for the DevOps Tools certification and for any Linux administrator responsible for containerized application infrastructure.
As you continue strengthening your DevOps skills and mastering container orchestration concepts, don’t forget to download your free copy of the official Learning Material for the DevOps Tools Engineer certification (701-200). It is an excellent resource to deepen your understanding and align your studies with the exam objectives.

Authors

  • Fabian Thorns

    Fabian Thorns is the Director of Product Development at the Linux Professional Institute (LPI). He holds an M.Sc. in Business Information Systems, is a regular speaker at open source events, and is the author of numerous articles and books. Fabian has been part of the exam development team since 2010. Connect with him on LinkedIn, XING or via email (fthorns at www.lpi.org).

  • Uirá Ribeiro

    Uirá Ribeiro is a distinguished leader in the IT and Linux communities, recognized for his vast expertise and impactful contributions spanning over two decades. As the Chair of the Board at the Linux Professional Institute (LPI), Uirá has helped shape the global landscape of Linux certification and education. His robust academic background in computer science, with a focus on distributed systems, parallel computing, and cloud computing, gives him a deep technical understanding of Linux and free and open source software (FOSS). As a professor, Uirá is dedicated to mentoring IT professionals, guiding them toward LPI certification through his widely respected books and courses. Beyond his academic and writing achievements, Uirá is an active contributor to the free software movement, frequently participating in conferences, workshops, and events organized by key organizations such as the Free Software Foundation and the Linux Foundation. He is also the CEO and founder of Linux Certification Edutech, where he has been teaching online Linux courses for 20 years, further cementing his legacy as an educator and advocate for open source technologies.
