
In previous discussions about DevOps tools, we explored container virtualization and how containers transformed the way applications are packaged and deployed. One of the major advantages of containers is their extremely fast startup time combined with minimal resource overhead compared to traditional virtual machines. Because containers share the host operating system kernel, they can start in seconds and allow organizations to run many isolated workloads efficiently on the same infrastructure.
However, while containers simplify application packaging and execution, managing large numbers of containers across multiple servers quickly becomes complex. Modern applications often consist of dozens or even hundreds of containers that must be scheduled, monitored, restarted when failures occur, and scaled dynamically according to demand. Performing these tasks manually would be impractical in production environments.
This challenge led to the development of container orchestration platforms, designed to automate the deployment, scaling, networking, and lifecycle management of containers across clusters of machines. Among these platforms, Kubernetes has emerged as the most widely adopted solution for orchestrating containerized workloads.
Originally developed by Google and later donated to the Cloud Native Computing Foundation, Kubernetes provides a powerful and extensible architecture that allows organizations to manage distributed containerized applications reliably and at scale. By introducing concepts such as Pods, Services, Deployments, and controllers, Kubernetes enables administrators and DevOps engineers to define the desired state of their infrastructure while the platform continuously ensures that the cluster maintains that state.
In this lesson, we will explore the architecture of Kubernetes, the main components of a Kubernetes cluster, and how administrators can interact with the platform using tools such as kubectl to inspect cluster state and manage resources.
A Kubernetes cluster is a distributed system composed of two main parts.
The control plane manages the cluster and makes scheduling decisions, while worker nodes execute application workloads.
The architecture allows Kubernetes to maintain the desired state of applications. Instead of manually starting processes on servers, administrators declare the intended configuration and Kubernetes automatically reconciles the cluster state to match that configuration.
The control plane contains several critical components responsible for cluster management and orchestration.
The Kubernetes API Server (kube-apiserver) is the central communication hub of the cluster. All operations performed by administrators—whether using kubectl, internal components, or automation tools—pass through this API.
Its main responsibilities include:
- Authenticating and authorizing incoming requests
- Validating and admitting resource definitions
- Persisting cluster state to etcd
- Serving the REST API consumed by kubectl, controllers, and other cluster components
Because it acts as the gateway to the cluster, it is designed as a stateless service that can scale horizontally.
etcd is a distributed key-value database used as the persistent storage backend for Kubernetes. It stores all cluster information, including configuration data, node information, and resource definitions.
Every Kubernetes object—such as Pods, Deployments, and Services—is recorded in etcd. This makes etcd the single source of truth for the cluster.
If etcd becomes unavailable, workloads that are already running may continue, but the cluster cannot process changes such as creating or updating resources.
The Controller Manager (kube-controller-manager) runs multiple controllers that continuously monitor the cluster state and enforce the desired configuration.
Controllers implement reconciliation loops, meaning they repeatedly compare the actual cluster state with the desired state and take corrective action when differences are detected.
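The reconciliation idea can be sketched in a few lines of code. The following is an illustrative model only, not the real controller-manager implementation: the function names and the replica-count dictionaries are assumptions made for the example.

```python
# Hedged sketch of a reconciliation loop, modeled on what Kubernetes
# controllers do: compare desired state with actual state and emit
# corrective actions. All names here are illustrative.

def reconcile(desired: dict, actual: dict) -> dict:
    """Return the actions needed to move `actual` toward `desired`.

    Both dicts map a workload name to a replica count.
    """
    actions = {}
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have < want:
            actions[name] = f"create {want - have} replica(s)"
        elif have > want:
            actions[name] = f"delete {have - want} replica(s)"
    for name in actual:
        if name not in desired:
            # The object was removed from the desired state entirely.
            actions[name] = "delete all replicas (object removed)"
    return actions

# One pass of the loop: nginx is under-replicated, "orphan" no longer exists.
print(reconcile({"nginx": 3}, {"nginx": 1, "orphan": 2}))
```

A real controller runs this comparison continuously, so the cluster converges back to the declared configuration even after failures.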
Examples of controllers include:
- The Node controller, which monitors node health and reacts when nodes become unreachable
- The Deployment and ReplicaSet controllers, which keep the desired number of Pod replicas running
- The Job controller, which runs Pods to completion for batch workloads
- The Service Account controller, which creates default service accounts in new namespaces
These components ensure that workloads remain available and consistent with their definitions.
The Kubernetes Scheduler (kube-scheduler) determines where newly created Pods should run within the cluster. It analyzes resource availability, constraints, and scheduling policies to select the most appropriate node.
When a Pod is created without an assigned node, the scheduler evaluates the cluster and assigns the Pod to a suitable node based on criteria such as:
- Available CPU and memory compared to the Pod's resource requests
- Node selectors and affinity or anti-affinity rules
- Taints and tolerations
- Topology spread constraints and other scheduling policies
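Several of these criteria can be expressed directly in a Pod manifest. The following is an illustrative sketch; the names, labels, and taint values are placeholders, not values from a real cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:              # the scheduler matches these against node capacity
        cpu: "250m"
        memory: "128Mi"
  nodeSelector:
    disktype: ssd            # only nodes labeled disktype=ssd are eligible
  tolerations:
  - key: "dedicated"         # allows scheduling onto nodes tainted dedicated=web
    operator: "Equal"
    value: "web"
    effect: "NoSchedule"
```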
Worker nodes run application workloads and typically include:
- The kubelet, the node agent that starts Pods and reports their status
- kube-proxy, which implements Service networking rules on the node
- A container runtime, such as containerd or CRI-O, which actually runs the containers
These components allow nodes to execute Pods and report their status to the control plane.
kubectl is the command-line interface used to communicate with a Kubernetes cluster. It interacts with the Kubernetes API server to manage resources and query cluster state.
The configuration for kubectl is typically stored in a file named ~/.kube/config. This file defines:
- Clusters, identified by an API server address and certificate authority data
- Users, along with their authentication credentials
- Contexts, which combine a cluster, a user, and optionally a namespace
- The current context, which determines the cluster kubectl talks to by default
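A minimal kubeconfig has the following shape. The names, address, and placeholder certificate fields below are illustrative, not real cluster values:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: dev-cluster                      # placeholder cluster name
  cluster:
    server: https://203.0.113.10:6443    # placeholder API server address
    certificate-authority-data: <base64-encoded CA certificate>
users:
- name: admin-user                       # placeholder user
  user:
    client-certificate-data: <base64-encoded client certificate>
    client-key-data: <base64-encoded client key>
contexts:
- name: dev-context                      # binds the cluster and user together
  context:
    cluster: dev-cluster
    user: admin-user
    namespace: default
current-context: dev-context             # the context kubectl uses by default
```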
Administrators can manage these configurations using the kubectl config command. Some common commands are described in the following sections.
One of the most common tasks when working with Kubernetes is retrieving information about resources. The kubectl get command lists resources within the cluster.
Example:
$ kubectl get pods
$ kubectl get nodes
$ kubectl get services
These commands provide a concise overview of resources and their status.
Kubernetes resources can be created and modified using several kubectl commands.
The kubectl create command creates a resource directly from the command line or from a manifest file.
Example:
$ kubectl create deployment nginx --image=nginx
The kubectl apply command creates or updates resources that are declared in YAML manifest files.
Example:
$ kubectl apply -f deployment.yaml
This method is widely used in DevOps pipelines because it supports declarative infrastructure management.
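As a hedged example, a deployment.yaml suitable for the command above might look like this; the replica count, labels, and image tag are illustrative choices, not values prescribed by the lesson:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3                  # desired state: three identical Pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx             # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx:1.25      # illustrative image tag
        ports:
        - containerPort: 80
```

Because the file declares the desired state rather than a sequence of steps, running kubectl apply again after editing the file updates the cluster to match.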
The kubectl run command creates a Pod running a specified container image.
Example:
$ kubectl run test-pod --image=busybox
Several commands allow administrators to modify running resources.
The kubectl scale command adjusts the number of replicas in a Deployment or ReplicaSet.
Example:
$ kubectl scale deployment nginx --replicas=5
The kubectl set command updates fields of existing resources, such as container images.
Example:
$ kubectl set image deployment/nginx nginx=nginx:1.25
Administrators frequently need to inspect logs and execute commands inside containers.
The kubectl logs command displays logs from a container.
Example:
$ kubectl logs nginx-pod
The kubectl exec command executes commands inside a running container.
Example:
$ kubectl exec -it nginx-pod -- /bin/bash
This is particularly useful for troubleshooting application issues.
For DevOps professionals pursuing the LPI DevOps Tools Engineer certification, understanding these components and mastering kubectl commands are essential. The ability to inspect cluster state, create and modify resources, and troubleshoot running workloads forms the foundation of effective Kubernetes operations in modern cloud-native infrastructures.
A complete lesson covering Kubernetes architecture and usage is available free of charge in the official learning materials provided by the Linux Professional Institute. These materials include detailed explanations of Kubernetes components, command-line interactions with kubectl, and practical examples aligned with the DevOps Tools Engineer certification objectives.