DevOps Tools Introduction #09: Machine Deployment

In previous discussions about DevOps tools, we explored container virtualization and how containers transformed the way applications are packaged and deployed. One of the major advantages of containers is their extremely fast startup time combined with minimal resource overhead compared to traditional virtual machines. Because containers share the host operating system kernel, they can start in seconds and allow organizations to run many isolated workloads efficiently on the same infrastructure.

However, while containers simplify application packaging and execution, managing large numbers of containers across multiple servers quickly becomes complex. Modern applications often consist of dozens or even hundreds of containers that must be scheduled, monitored, restarted when failures occur, and scaled dynamically according to demand. Performing these tasks manually would be impractical in production environments.

This challenge led to the development of container orchestration platforms, designed to automate the deployment, scaling, networking, and lifecycle management of containers across clusters of machines. Among these platforms, Kubernetes has emerged as the most widely adopted solution for orchestrating containerized workloads.

Originally developed by Google and later donated to the Cloud Native Computing Foundation, Kubernetes provides a powerful and extensible architecture that allows organizations to manage distributed containerized applications reliably and at scale. By introducing concepts such as Pods, Services, Deployments, and controllers, Kubernetes enables administrators and DevOps engineers to define the desired state of their infrastructure while the platform continuously ensures that the cluster maintains that state.

In this lesson, we will explore the architecture of Kubernetes, the main components of a Kubernetes cluster, and how administrators can interact with the platform using tools such as kubectl to inspect cluster state and manage resources.

Kubernetes Cluster Architecture

A Kubernetes cluster is a distributed system composed of two main parts.

The control plane manages the cluster and makes scheduling decisions, while worker nodes execute application workloads.

The architecture allows Kubernetes to maintain the desired state of applications. Instead of manually starting processes on servers, administrators declare the intended configuration and Kubernetes automatically reconciles the cluster state to match that configuration.
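For example, a minimal Deployment manifest (names and image chosen purely for illustration) declares a desired state of three replicas of an nginx container; Kubernetes then continuously keeps the cluster converged on that state, restarting or rescheduling Pods as needed:

```yaml
# deployment.yaml: declares the desired state, not the steps to reach it
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # Kubernetes ensures three Pods are always running
  selector:
    matchLabels:
      app: web
  template:                # Pod template used to create each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
```

If a node fails or a Pod crashes, the control plane detects the deviation from `replicas: 3` and creates replacement Pods without operator intervention.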

Core Control Plane Components

The control plane contains several critical components responsible for cluster management and orchestration.

API Server

The Kubernetes API Server (kube-apiserver) is the central communication hub of the cluster. All requests, whether issued by administrators using kubectl, by internal cluster components, or by automation tools, pass through this API.

Its main responsibilities include:

  • Exposing the Kubernetes REST API
  • Authenticating and authorizing requests
  • Validating resource definitions
  • Persisting cluster state into etcd

Because it acts as the gateway to the cluster, it is designed as a stateless service that can scale horizontally.

etcd

etcd is a distributed key-value database used as the persistent storage backend for Kubernetes. It stores all cluster information, including configuration data, node information, and resource definitions.

Every Kubernetes object—such as Pods, Deployments, and Services—is recorded in etcd. This makes etcd the single source of truth for the cluster.
If etcd becomes unavailable, workloads that are already running may continue, but the cluster cannot process changes such as creating or updating resources.

Controller Manager

The Controller Manager (kube-controller-manager) runs multiple controllers that continuously monitor the cluster state and enforce the desired configuration.
Controllers implement reconciliation loops, meaning they repeatedly compare the actual cluster state with the desired state and take corrective action when differences are detected.
Examples of controllers include:

  • Node Controller
  • ReplicaSet Controller
  • Deployment Controller
  • Service Controller

These components ensure that workloads remain available and consistent with their definitions.

Scheduler

The Kubernetes Scheduler (kube-scheduler) determines where newly created Pods should run within the cluster. It analyzes resource availability, constraints, and scheduling policies to select the most appropriate node.

When a Pod is created without an assigned node, the scheduler evaluates the cluster and assigns it to a suitable node based on criteria such as:

  • CPU and memory availability
  • Node affinity rules
  • Taints and tolerations
  • Workload balancing
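These criteria can be expressed in a Pod specification. The following sketch (labels, taints, and resource values are illustrative) shows the kinds of fields the scheduler takes into account:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  nodeSelector:
    disktype: ssd           # only nodes labeled disktype=ssd are candidates
  tolerations:
    - key: "dedicated"      # permits scheduling onto nodes tainted
      operator: "Equal"     # dedicated=batch:NoSchedule
      value: "batch"
      effect: "NoSchedule"
  containers:
    - name: app
      image: busybox
      resources:
        requests:
          cpu: "250m"       # the Pod is placed only on a node with at least
          memory: "128Mi"   # this much unreserved CPU and memory
```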

Worker Node Components

Worker nodes run application workloads and typically include:

  • kubelet – the node agent responsible for communicating with the control plane
  • container runtime – such as containerd or CRI-O (Docker Engine requires the cri-dockerd adapter, since dockershim was removed in Kubernetes 1.24)
  • kube-proxy – handles networking and service routing

These components allow nodes to execute Pods and report their status to the control plane.

Configuring kubectl

kubectl is the command-line interface used to communicate with a Kubernetes cluster. It interacts with the Kubernetes API server to manage resources and query cluster state.
The configuration for kubectl is typically stored in a file named ~/.kube/config. This file defines:

  • Cluster endpoints
  • Authentication credentials
  • Contexts
  • Namespaces

Administrators can manage these configurations using the kubectl config command. Some common commands are described in the following sections.
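The structure of a kubeconfig file can be sketched as follows (cluster name, endpoint, and credential paths are placeholders):

```yaml
# ~/.kube/config (illustrative values)
apiVersion: v1
kind: Config
clusters:
  - name: dev-cluster
    cluster:
      server: https://203.0.113.10:6443          # API server endpoint
      certificate-authority: /etc/kubernetes/pki/ca.crt
users:
  - name: admin
    user:
      client-certificate: /home/admin/.kube/admin.crt
      client-key: /home/admin/.kube/admin.key
contexts:
  - name: dev
    context:
      cluster: dev-cluster                       # ties a cluster, a user,
      user: admin                                # and a default namespace
      namespace: default                         # together under one name
current-context: dev                             # context kubectl uses by default
```

Commands such as kubectl config get-contexts and kubectl config use-context operate on exactly these entries.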

kubectl get

One of the most common tasks when working with Kubernetes is retrieving information about resources. The kubectl get command lists resources within the cluster.

Example:

$ kubectl get pods

$ kubectl get nodes

$ kubectl get services

These commands provide a concise overview of resources and their status.

Creating and Managing Resources

Kubernetes resources can be created and modified using several kubectl commands.

kubectl create

Creates a resource directly from the command line or from a manifest file.

Example:

$ kubectl create deployment nginx --image=nginx

kubectl apply

Creates or updates resources that are declared in YAML manifest files.

Example:

$ kubectl apply -f deployment.yaml

This method is widely used in DevOps pipelines because it supports declarative infrastructure management.
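kubectl apply works with any resource manifest, not only Deployments. For instance, a Service manifest (names and labels are illustrative) can be applied the same way to expose a set of Pods inside the cluster:

```yaml
# service.yaml: routes cluster-internal traffic to Pods labeled app=web
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # traffic is load-balanced across Pods with this label
  ports:
    - port: 80        # port exposed by the Service
      targetPort: 80  # container port that receives the traffic
```

Applying it with kubectl apply -f service.yaml creates the Service; re-running the same command after editing the file updates it in place.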

kubectl run

Creates a Pod running a specified container image.

Example:

$ kubectl run test-pod --image=busybox

Modifying and Scaling Resources

Several commands allow administrators to modify running resources.

kubectl scale

Adjusts the number of replicas in a Deployment or ReplicaSet.

Example:

$ kubectl scale deployment nginx --replicas=5

kubectl set

Updates fields such as container images.

Example:

$ kubectl set image deployment/nginx nginx=nginx:1.25

Inspecting and Debugging Workloads

Administrators frequently need to inspect logs and execute commands inside containers.

kubectl logs

Displays logs from a container.

Example:

$ kubectl logs nginx-pod

kubectl exec

Executes commands inside a running container.

Example:

$ kubectl exec -it nginx-pod -- /bin/bash

This is particularly useful for troubleshooting application issues.

For DevOps professionals pursuing the LPI DevOps Tools Engineer certification, understanding these components and mastering kubectl commands are essential. The ability to inspect cluster state, create and modify resources, and troubleshoot running workloads forms the foundation of effective Kubernetes operations in modern cloud-native infrastructures.

A complete lesson covering Kubernetes architecture and usage is available free of charge in the official learning materials provided by the Linux Professional Institute. These materials include detailed explanations of Kubernetes components, command-line interactions with kubectl, and practical examples aligned with the DevOps Tools Engineer certification objectives.


Authors

  • Fabian Thorns

Fabian Thorns is the Director of Product Development at Linux Professional Institute, LPI. He holds an M.Sc. in Business Information Systems, is a regular speaker at open source events, and is the author of numerous articles and books. Fabian has been part of the exam development team since 2010. Connect with him on LinkedIn, XING or via email (fthorns at www.lpi.org).

  • Uirá Ribeiro

Uirá Ribeiro is a distinguished leader in the IT and Linux communities, recognized for his vast expertise and impactful contributions spanning over two decades. As the Chair of the Board at the Linux Professional Institute (LPI), Uirá has helped shape the global landscape of Linux certification and education. His robust academic background in computer science, with a focus on distributed systems, parallel computing, and cloud computing, gives him a deep technical understanding of Linux and free and open source software (FOSS). As a professor, Uirá is dedicated to mentoring IT professionals, guiding them toward LPI certification through his widely respected books and courses. Beyond his academic and writing achievements, Uirá is an active contributor to the free software movement, frequently participating in conferences, workshops, and events organized by key organizations such as the Free Software Foundation and the Linux Foundation. He is also the CEO and founder of Linux Certification Edutech, where he has been teaching online Linux courses for 20 years, further cementing his legacy as an educator and advocate for open source technologies.
