DevOps Tools Introduction #08: Container Infrastructure

Container infrastructure explained: building OCI images, Dockerfiles, registries, and security practices for DevOps.

While Docker makes it easy to start and manage containers, there must still be a base system hosting the containers. These systems form the infrastructure on which containers run and are covered by objective 702.3 of the DevOps Tools Engineer exam.

Container images are the foundation of modern cloud-native infrastructure. They provide a portable, reproducible way to package applications together with their runtime, dependencies, and configuration. Understanding how to create images, manage them securely, and distribute them properly is essential for DevOps professionals working with Docker, Podman, and other OCI-compliant tools.

Creating container images begins with a Dockerfile (or Containerfile in Podman environments). This file defines, in a declarative way, how the image should be built. For example, a simple Node.js application might start with a base image declared using the FROM instruction, such as FROM node:20-alpine. This line tells the builder to use an existing OCI-compatible image as the foundation. Each subsequent instruction creates a new image layer. The WORKDIR instruction defines the working directory inside the container, COPY transfers application source code into the image, and RUN executes commands such as installing dependencies with npm install. Finally, CMD or ENTRYPOINT defines how the container will start when executed.

A minimal example might look like this in practice:

FROM node:20-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy application files from the host into the container
COPY . .
# Install application dependencies
RUN npm install
# Expose the port used by the Node.js application
EXPOSE 3000
# Start the application
CMD ["node", "server.js"]

The image starts from a lightweight Alpine-based Node image. The working directory is set to /app, application files are copied from the host, dependencies are installed, and the container is configured to start the server with node server.js.

When the command "docker build -t myapp:1.0 ." is executed, Docker reads the Dockerfile and produces an image named myapp with the tag 1.0.

The tagging format follows OCI image naming conventions, typically structured as [registry]/[namespace]/[repository]:[tag].

For example, docker.io/library/nginx:latest specifies the Docker Hub registry, the library namespace, the nginx repository, and the latest tag.

OCI image names are standardized to ensure compatibility across container engines. An image such as ghcr.io/example/api:2.1 clearly identifies the registry (GitHub Container Registry), the organization namespace, the repository, and the version tag. If no registry is specified, Docker defaults to Docker Hub. Understanding this naming structure is critical when pulling, tagging, or pushing images across different environments.
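The naming structure above can be illustrated with a small shell sketch that splits a fully qualified reference into its parts. This is only a rough illustration using the ghcr.io/example/api:2.1 name from the text; real reference parsers also handle digests, default registries, and missing tags.

```shell
# Split a fully qualified OCI image reference into its components
# using shell parameter expansion (assumes registry, namespace,
# repository, and tag are all present).
ref="ghcr.io/example/api:2.1"

registry="${ref%%/*}"      # text before the first slash
rest="${ref#*/}"           # everything after the registry
tag="${ref##*:}"           # text after the last colon
path="${rest%:*}"          # namespace/repository without the tag
namespace="${path%%/*}"
repository="${path##*/}"

echo "registry=$registry namespace=$namespace repository=$repository tag=$tag"
```

Running this prints each component separately, which mirrors how docker and podman resolve a reference before contacting a registry.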

After you build an image, it often needs to be uploaded to a registry so that other systems or deployment pipelines can access it. This requires authentication using docker login, followed by tagging the image appropriately and pushing it using docker image push.

For example, after tagging myapp:1.0 as registry.example.com/team/myapp:1.0, pushing the image makes it available to CI/CD systems or Kubernetes clusters. Registries can be public, like Docker Hub, or private, such as enterprise registries hosted internally.
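The full push workflow might look like the following sequence, assuming a hypothetical private registry at registry.example.com and a team namespace (both placeholders, not real endpoints):

```shell
# Authenticate against the target registry (prompts for credentials)
docker login registry.example.com

# Give the local image a name that includes the registry and namespace
docker tag myapp:1.0 registry.example.com/team/myapp:1.0

# Upload the image so pipelines and clusters can pull it
docker image push registry.example.com/team/myapp:1.0
```

The tag step does not copy any data; it only adds a second name pointing at the same image, so the push can be routed to the right registry.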

Security plays a critical role in container image management. Container images may contain vulnerabilities inherited from base images or introduced by application dependencies.

Image scanners analyze images for known CVEs (Common Vulnerabilities and Exposures), outdated libraries, and insecure configurations. Tools such as Trivy, Clair, and Docker Scout inspect image layers and compare them against vulnerability databases. For example, if a base image includes an outdated OpenSSL version with a known exploit, a scanner will flag it before deployment. This allows teams to rebuild images with patched dependencies.
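As a sketch of how such a scan fits into a pipeline, Trivy can be run directly against a local image. The image name is the myapp:1.0 example from earlier; the severity filter and exit-code behavior shown here are common usage, but consult the Trivy documentation for the options available in your version:

```shell
# Scan a local image for known CVEs
trivy image myapp:1.0

# Fail a CI job (non-zero exit) only on high or critical findings
trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:1.0
```

Wiring the second form into CI means a build cannot be promoted while it contains unpatched high-severity vulnerabilities.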

Container virtualization introduces its own security risks. Although containers isolate processes, they share the host kernel. A misconfigured container running as root can potentially exploit kernel vulnerabilities or escape containment if security best practices are not followed. Mitigation strategies include running containers as non-root users using the USER instruction, minimizing image size to reduce attack surface, using read-only filesystems, restricting processes’ capabilities, and scanning images regularly. The VOLUME instruction should be used carefully to avoid unintended data exposure. EXPOSE only documents ports rather than enforcing firewall rules, so external exposure must be managed at the runtime level.
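A hardened variant of the earlier Dockerfile might apply the non-root recommendation like this. The app user and group names are illustrative; addgroup and adduser are used here in their BusyBox/Alpine form:

```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm install
# Create an unprivileged user and switch to it so the
# application process does not run as root
RUN addgroup -S app && adduser -S app -G app
USER app
EXPOSE 3000
CMD ["node", "server.js"]
```

At runtime, isolation can be tightened further with flags such as --read-only (immutable root filesystem) and --cap-drop ALL (remove all Linux capabilities), re-adding only what the application actually needs.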

To reduce build-time risks and optimize performance, modern container engines use advanced builders. Docker BuildKit improves performance through parallel builds, layer caching, and secret management. It also supports advanced syntax such as mounting SSH keys during build without persisting them in final layers. Docker buildx extends these capabilities by enabling multi-platform builds, allowing a single command to produce images for amd64 and arm64 architectures simultaneously. This is especially relevant in heterogeneous environments, including cloud and edge deployments.
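A multi-platform build with buildx might look like the following sketch; the builder name and image reference are placeholders:

```shell
# Create and select a buildx builder instance (one-time setup)
docker buildx create --name multiarch --use

# Build for both architectures in one command and push the
# resulting multi-platform image directly to the registry
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/team/myapp:1.0 \
  --push .
```

BuildKit's secret mounts follow a similar pattern: a RUN instruction such as RUN --mount=type=secret,id=npm_token ... makes the secret available during that step only, without writing it into any image layer.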

Podman users rely on the podman build command, which reads a Containerfile and produces OCI-compliant images that run without a daemon. Buildah provides even more granular control, enabling scripted and rootless image builds. These tools align with modern security practices by supporting rootless container builds and tighter integration with Linux namespaces and cgroups.
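For comparison, the same image could be produced daemonlessly with podman build, or step by step with Buildah. The Buildah sequence below is a sketch of its scripted workflow, mirroring the Node.js Dockerfile from earlier:

```shell
# Podman: same UX as docker build, reads a Containerfile/Dockerfile
podman build -t myapp:1.0 .

# Buildah: each build step is an explicit command, which makes
# builds easy to script and run rootless
container=$(buildah from node:20-alpine)
buildah config --workingdir /app "$container"
buildah copy "$container" . /app
buildah run "$container" npm install
buildah config --cmd "node server.js" "$container"
buildah commit "$container" myapp:1.0
```

Because Buildah exposes every step, it is well suited to CI environments where running a privileged Docker daemon is undesirable.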

The .dockerignore file is another critical component of secure and efficient builds. It prevents unnecessary or sensitive files from being included in the image context. For example, you should exclude the .git subdirectory, local configuration files, and secret credentials from the image in order to reduce image size and minimize accidental exposure of confidential data.
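A minimal .dockerignore for a Node.js project might look like this; the exact entries depend on the project, and the .env pattern assumes secrets are kept in environment files:

```
# Version control metadata
.git
# Dependencies are installed inside the image by npm install
node_modules
# Local secrets and configuration
.env
*.env
.npmrc
# Build tooling that does not belong in the image
Dockerfile
.dockerignore
```

Excluding node_modules also prevents host-installed native modules from leaking into the Linux image, where they may not be binary compatible.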

To sum up, mastering Dockerfiles and OCI image management requires understanding both the technical and security dimensions. Building images is not merely about packaging software; it is about constructing immutable, portable artifacts that must be optimized, versioned, scanned, and securely distributed. By combining proper Dockerfile design, registry workflows, vulnerability scanning, and modern build tools like BuildKit, buildx, Podman build, and Buildah, DevOps professionals ensure that containerized applications remain efficient, secure, and production-ready.

And don’t forget that the LPI provides official Learning Materials for the DevOps Tools Engineer version 2.0 exam. These resources are comprehensive, freely available, and fully aligned with the exam objectives, making them an excellent primary reference throughout your preparation.


Authors

  • Fabian Thorns

Fabian Thorns is the Director of Product Development at Linux Professional Institute, LPI. He holds an M.Sc. in Business Information Systems, is a regular speaker at open source events, and is the author of numerous articles and books. Fabian has been part of the exam development team since 2010. Connect with him on LinkedIn, XING or via email (fthorns at www.lpi.org).

  • Uirá Ribeiro

Uirá Ribeiro is a distinguished leader in the IT and Linux communities, recognized for his vast expertise and impactful contributions spanning over two decades. As the Chair of the Board at the Linux Professional Institute (LPI), Uirá has helped shape the global landscape of Linux certification and education. His robust academic background in computer science, with a focus on distributed systems, parallel computing, and cloud computing, gives him a deep technical understanding of Linux and free and open source software (FOSS). As a professor, Uirá is dedicated to mentoring IT professionals, guiding them toward LPI certification through his widely respected books and courses. Beyond his academic and writing achievements, Uirá is an active contributor to the free software movement, frequently participating in conferences, workshops, and events organized by key organizations such as the Free Software Foundation and the Linux Foundation. He is also the CEO and founder of Linux Certification Edutech, where he has been teaching online Linux courses for 20 years, further cementing his legacy as an educator and advocate for open source technologies.
