DevOps Tools Introduction #10: Basic Kubernetes Operations

Objective 703.2 of the DevOps Tools Engineer 2.0 exam covers Basic Kubernetes Operations. It represents a significant portion of the exam and requires a solid understanding of how to deploy and manage applications in Kubernetes. Candidates should be able to:

  • Work with Kubernetes resources using declarative YAML files
  • Understand the role of Pods as the fundamental execution unit
  • Use Deployments to manage application lifecycle, including scaling and rolling updates
  • Know how to expose applications using Services and Ingress
  • Provide persistent storage through PersistentVolumeClaims

Given the importance of this objective, candidates should dedicate sufficient time to both studying the concepts and practicing hands-on with Kubernetes clusters.

Use of YAML files to declare Kubernetes resources

Kubernetes resources are typically defined declaratively using YAML files. These files describe the desired state of the system, and Kubernetes ensures that the actual state matches the declared configuration. Declarative configuration allows version control, reproducibility, and automation, which are key principles in DevOps.

Typical contents of a Kubernetes YAML file include:

  • apiVersion: Specifies the version of the Kubernetes API used to create the object
  • kind: Defines the type of resource
  • metadata: Contains resource information such as name and labels
  • spec: Describes the desired state of the resource

An example of a simple Pod definition follows:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80

To create a resource, enter:

$ kubectl apply -f pod.yaml
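
After applying the file, you can check the Pod’s status and inspect its details with commands such as the following (assuming kubectl is configured against a running cluster):

$ kubectl get pods
$ kubectl describe pod my-pod

To remove the Pod again, delete it using the same file:

$ kubectl delete -f pod.yaml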

The principle of a Pod

A Pod is the smallest deployable unit in Kubernetes. It represents one or more containers that share:

  • The same network namespace (IP address and ports)
  • Shared storage volumes
  • Lifecycle management

Pods are ephemeral by nature and are usually managed by higher-level controllers such as Deployments.

A sample declaration of a multi-container Pod follows:

apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: app
    image: myapp:1.0
  - name: sidecar
    image: log-collector:1.0

This YAML defines a Pod that runs two containers together in the same execution environment. The top-level fields are:

  • apiVersion: specifies the Kubernetes API version used, v1 in this case.
  • kind: Pod indicates that this resource is a Pod, the smallest deployable unit in Kubernetes.
  • metadata.name assigns a name (multi-container-pod) to the Pod.

Inside spec.containers, two containers are defined:

  • The app container runs the main application (myapp:1.0).
  • The sidecar container runs a supporting service (log-collector:1.0), typically used for tasks like logging, monitoring, or proxying.

Both containers share the same network namespace and can communicate via localhost, and they can also share storage volumes if defined. This pattern is known as the sidecar pattern, where auxiliary functionality is tightly coupled with the main application.
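
As a sketch of how such containers might share data, the Pod above could be extended with an emptyDir volume mounted in both containers. The volume name and mount paths here are illustrative assumptions, not part of the original example:

apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: app
    image: myapp:1.0
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: sidecar
    image: log-collector:1.0
    volumeMounts:
    - name: logs
      mountPath: /logs
  volumes:
  - name: logs
    emptyDir: {}

With this setup, files the application writes to /var/log/app are visible to the sidecar under /logs, without any network communication.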

How to use Deployments

Deployments manage Pods and ensure that the desired number of replicas are running at all times. They provide:

  • Declarative updates
  • Self-healing (restarting failed Pods)
  • Scaling
  • Rolling updates and rollbacks

A sample definition of a Deployment follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80

This YAML defines a Deployment that manages a set of Pods and ensures they are running as expected. The Deployment automatically handles scaling, self-healing (recreating failed Pods), and rolling updates when the configuration changes. The configuration specifies:

  • apiVersion: apps/v1 specifies the API for higher-level controllers like Deployments.
  • kind: Deployment indicates this resource manages Pods declaratively.
  • metadata.name defines the name (web-deployment).

Within the spec section, the nested fields are:

  • replicas: 3 ensures that three identical Pods are always running.
  • selector.matchLabels tells Kubernetes which Pods belong to this Deployment.
  • template defines the Pod that will be created:
    • metadata.labels must match the selector.
    • spec.containers describes the container, specifying that it:
      • Runs nginx:1.25
      • Exposes port 80

Scaling a Deployment

Scaling can be done manually as follows:

$ kubectl scale deployment web-deployment --replicas=5

You can also change the scaling declaratively by modifying the YAML file.
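
For example, setting replicas: 5 in the Deployment manifest and re-applying it has the same effect as the imperative command:

spec:
  replicas: 5

$ kubectl apply -f deployment.yaml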

Rolling updates

After you update the container image, you can ask Kubernetes to perform a rolling update:

$ kubectl set image deployment/web-deployment nginx=nginx:1.26

This ensures zero downtime by gradually replacing old Pods with new ones.
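
You can watch the progress of a rolling update and review the revision history with:

$ kubectl rollout status deployment/web-deployment
$ kubectl rollout history deployment/web-deployment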

Rollback

Rollback reverts a Deployment to a previous working version. This is useful when a new update introduces errors, and allows you to quickly return to a stable state with minimal downtime.

The following command tells Kubernetes to undo the last change made to the Deployment web-deployment, restoring the previous configuration (such as an earlier container image):

$ kubectl rollout undo deployment/web-deployment
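
If you need to go back further than the most recent change, a specific revision from the rollout history can be targeted (revision 2 is only an example):

$ kubectl rollout undo deployment/web-deployment --to-revision=2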

How to make services accessible

Kubernetes provides multiple mechanisms to make applications accessible, ensuring reliable communication both within the cluster and from external clients.

Services

Services play a fundamental role by offering a stable networking endpoint for a group of Pods, abstracting their dynamic comings and goings. Since Pods can be created and destroyed frequently, their IP addresses are not reliable for direct communication.

A Service solves this by providing a consistent IP address and DNS name, and then routing traffic to the appropriate Pods based on label selectors. In addition, Services can be configured with different exposure types, allowing internal communication or controlled external access.

Types of Services include:

  • ClusterIP (default): Internal access within the cluster
  • NodePort: Exposes the Service on a static port on each node
  • LoadBalancer: Provisions an external load balancer through the cloud provider

A sample definition of a Service follows:

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
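
After creating the Service, you can confirm that it selects the intended Pods by listing the Service and its endpoints:

$ kubectl get service web-service
$ kubectl get endpoints web-service

If the endpoints list is empty, the Service’s selector does not match the labels of any running Pod.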

For more advanced routing, Ingress resources are used to manage HTTP and HTTPS traffic. Ingress enables powerful and flexible features such as host-based routing, path-based routing, and TLS termination, acting as a centralized entry point to multiple services within the cluster. It defines rule-based external access to services.

A sample Ingress definition follows:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80

Ingress requires an Ingress Controller (e.g., the NGINX Ingress Controller) to function.
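
Once an Ingress Controller is installed, a rule like the one above can be tested from outside the cluster, for example with curl (replace <controller-ip> with the external address of your controller):

$ curl -H "Host: example.com" http://<controller-ip>/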

Persistent storage in Kubernetes

Containers are ephemeral, so persistent storage is required for stateful applications.

The main Kubernetes storage components are:

  • PersistentVolume (PV): Represents physical storage
  • PersistentVolumeClaim (PVC): A request for storage
  • StorageClass: Defines dynamic provisioning

Persistent storage in Kubernetes is handled through an abstraction that separates storage provisioning from its usage.

PersistentVolumeClaims (PVCs) allow applications to request storage resources without needing to know the underlying infrastructure details. This abstraction enables portability and flexibility, as the same application configuration can be used across different environments.

A PersistentVolumeClaim specifies requirements such as storage size and access modes, and Kubernetes binds it to a suitable PersistentVolume (PV), either statically provisioned or dynamically created through a StorageClass. This approach provides great flexibility and allows DevOps architects to mold storage to their particular environment, while insulating applications from local storage decisions.
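
As a sketch of how dynamic provisioning is configured, a minimal StorageClass might look like the following. The provisioner value depends entirely on your environment; this example assumes the AWS EBS CSI driver:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: ebs.csi.aws.com
parameters:
  type: gp3

A PVC can then request this class by setting storageClassName: fast in its spec.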

A sample definition of a PVC follows:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

This YAML defines a PersistentVolumeClaim requesting persistent storage in Kubernetes. The apiVersion: v1 and kind: PersistentVolumeClaim lines specify the resource type, while metadata.name assigns the name my-pvc.

In the spec, accessModes: ReadWriteOnce means that the volume can be mounted as read-write by a single node at a time, and resources.requests.storage: 1Gi requests 1 GiB of storage.

Kubernetes binds this claim to a suitable PersistentVolume that satisfies these requirements, allowing Pods to use persistent storage independent of their lifecycle.
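
You can verify whether the claim has been bound with:

$ kubectl get pvc my-pvc

The STATUS column shows Bound once a matching PersistentVolume has been found or provisioned.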

To use this PVC in a Pod, add it to the Pod’s definition as follows:

apiVersion: v1
kind: Pod
metadata:
  name: storage-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: storage
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: my-pvc

This YAML defines a Pod that uses persistent storage through a persistentVolumeClaim resource. The Pod runs a container named app using the nginx image, and mounts a volume at the path /data inside the container. The volumeMounts section links the container to a volume named storage, which is defined in the volumes section. This volume is backed by the PersistentVolumeClaim my-pvc, letting the Pod use the storage previously requested and provisioned. This allows the Pod to store and retrieve data in the /data directory, which persists even if the Pod is restarted or replaced.

Other Kubernetes orchestration resources

Beyond basic workload management, Kubernetes includes several specialized controllers designed for specific operational patterns:

  • DaemonSets ensure that a particular Pod runs on all or selected nodes, which is especially useful for system-level services such as logging or monitoring agents.
  • StatefulSets are intended for applications that require stable identities, persistent storage, and ordered deployment, making them suitable for databases and other stateful services.
  • Jobs run tasks that must complete successfully, typically for batch processing or one-time operations.
  • CronJobs extend the concept of Jobs by scheduling Jobs to run at specified intervals, enabling automated and recurring tasks.
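
As an illustration of the last resource type, the following minimal CronJob runs a BusyBox container every five minutes; the schedule and command are arbitrary examples:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo", "Hello from Kubernetes"]
          restartPolicy: OnFailure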

Together, these resources provide a comprehensive orchestration framework capable of supporting a wide range of application requirements.

As you advance your DevOps journey and build proficiency in container orchestration, make sure to explore the official Learning Material for the DevOps Tools Engineer certification, available at no cost. It offers comprehensive coverage of the exam objectives and serves as a valuable guide to support and structure your studies.

Authors

  • Fabian Thorns

    Fabian Thorns is the Director of Product Development at the Linux Professional Institute (LPI). He holds an M.Sc. in Business Information Systems, is a regular speaker at open source events, and is the author of numerous articles and books. Fabian has been part of the exam development team since 2010. Connect with him on LinkedIn, XING, or via email (fthorns at www.lpi.org).

  • Uirá Ribeiro

    Uirá Ribeiro is a distinguished leader in the IT and Linux communities, recognized for his vast expertise and impactful contributions spanning more than two decades. As the Chair of the Board at the Linux Professional Institute (LPI), Uirá has helped shape the global landscape of Linux certification and education. His robust academic background in computer science, with a focus on distributed systems, parallel computing, and cloud computing, gives him a deep technical understanding of Linux and free and open source software (FOSS). As a professor, Uirá is dedicated to mentoring IT professionals, guiding them toward LPI certification through his widely respected books and courses. Beyond his academic and writing achievements, Uirá is an active contributor to the free software movement, frequently participating in conferences, workshops, and events organized by key organizations such as the Free Software Foundation and the Linux Foundation. He is also the CEO and founder of Linux Certification Edutech, where he has been teaching online Linux courses for 20 years, further cementing his legacy as an educator and advocate for open source technologies.
