
Objective 703.2 of the DevOps Tools Engineer 2.0 exam covers Basic Kubernetes Operations. It represents a significant portion of the exam and requires a solid understanding of how to deploy and manage applications in Kubernetes. Candidates should be able to create and manage core Kubernetes resources such as Pods, Deployments, Services, Ingress, and persistent storage objects.
Given the importance of this objective, candidates should dedicate sufficient time to both studying the concepts and practicing hands-on with Kubernetes clusters.
Kubernetes resources are typically defined declaratively using YAML files. These files describe the desired state of the system, and Kubernetes ensures that the actual state matches the declared configuration. Declarative configuration allows version control, reproducibility, and automation, which are key principles in DevOps.
Typical contents of a Kubernetes YAML file include:
- apiVersion: the API version of the resource
- kind: the type of resource (for example, Pod or Deployment)
- metadata: identifying information such as the name and labels
- spec: the desired state of the resource
An example of a simple Pod definition follows:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
To create a resource, enter:
$ kubectl apply -f pod.yaml
A Pod is the smallest deployable unit in Kubernetes. It represents one or more containers that share:
- the same network namespace (a single IP address, with containers reachable via localhost)
- storage volumes, if defined
- the same lifecycle: the containers are scheduled, started, and stopped together
Pods are ephemeral by nature and are usually managed by higher-level controllers such as Deployments.
A sample declaration of a multi-container Pod follows:
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: app
    image: myapp:1.0
  - name: sidecar
    image: log-collector:1.0
This YAML defines a Pod that runs two containers together in the same execution environment. The top-level fields are:
- apiVersion: v1 specifies the Kubernetes API version used.
- kind: Pod indicates that this resource is a Pod, the smallest deployable unit in Kubernetes.
- metadata.name assigns the name multi-container-pod to the Pod.
- spec.containers defines two containers:
  - The app container runs the main application (myapp:1.0).
  - The sidecar container runs a supporting service (log-collector:1.0), typically used for tasks like logging, monitoring, or proxying.

Both containers share the same network namespace and can communicate via localhost, and they can also share storage volumes if defined. This pattern is known as the sidecar pattern, where auxiliary functionality is tightly coupled with the main application.
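The shared-volume aspect of the sidecar pattern can be sketched as follows; the volume name and mount paths here are illustrative, not part of the original example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: app
    image: myapp:1.0
    volumeMounts:
    - name: shared-logs         # illustrative volume name
      mountPath: /var/log/app   # the app writes its logs here
  - name: sidecar
    image: log-collector:1.0
    volumeMounts:
    - name: shared-logs
      mountPath: /logs          # the sidecar reads the same files here
  volumes:
  - name: shared-logs
    emptyDir: {}                # scratch volume shared by both containers
```

Because both containers mount the same emptyDir volume, the sidecar can process files the main application produces without any network communication.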
Deployments manage Pods and ensure that the desired number of replicas are running at all times. They provide:
- declarative scaling of the number of replicas
- self-healing, recreating Pods that fail
- rolling updates and rollbacks with minimal downtime
A sample definition of a Deployment follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
This YAML defines a Deployment that manages a set of Pods and ensures they are running as expected. The Deployment automatically handles scaling, self-healing (recreating failed Pods), and rolling updates when the configuration changes. The configuration specifies:
- apiVersion: apps/v1 specifies the API group used by higher-level controllers like Deployments.
- kind: Deployment indicates that this resource manages Pods declaratively.
- metadata.name defines the name (web-deployment).
- In spec, the nested fields are:
  - replicas: 3 ensures that three identical Pods are always running.
  - selector.matchLabels tells Kubernetes which Pods belong to this Deployment.
  - template.metadata.labels must match the selector.
  - template.spec.containers describes the container, specifying that it runs the nginx:1.25 image and exposes port 80.
Scaling can be done manually as follows:
$ kubectl scale deployment web-deployment --replicas=5
You can also change the scale declaratively by modifying the replicas field in the YAML file and re-applying it.
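For example, the same result as the imperative command can be achieved by editing the Deployment manifest and re-applying it; only the changed field is shown here, and the filename is illustrative:

```yaml
# web-deployment.yaml (excerpt)
spec:
  replicas: 5   # was 3; apply with: kubectl apply -f web-deployment.yaml
```

Keeping the change in the file means the desired replica count is version-controlled, whereas kubectl scale leaves the manifest out of date.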
After you update the container image, you can ask Kubernetes to perform a rolling update:
$ kubectl set image deployment/web-deployment nginx=nginx:1.26
This ensures zero downtime by gradually replacing old Pods with new ones.
Rollback reverts a Deployment to a previous working version. This is useful when a new update introduces errors, and allows you to quickly return to a stable state with minimal downtime.
The following command tells Kubernetes to undo the last change made to the Deployment web-deployment, restoring the previous configuration (such as an earlier container image):
$ kubectl rollout undo deployment/web-deployment
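A few related rollout subcommands help inspect the state before and after an undo; as a sketch:

```shell
# List the revision history of the Deployment
kubectl rollout history deployment/web-deployment

# Watch the rollout (or rollback) until it completes
kubectl rollout status deployment/web-deployment

# Roll back to a specific revision instead of only the previous one
kubectl rollout undo deployment/web-deployment --to-revision=2
```

Checking the history first is useful when several updates have been applied and you need to pick the exact revision to return to.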
Kubernetes provides multiple mechanisms to make applications accessible, ensuring reliable communication both within the cluster and from external clients.
Services play a fundamental role by offering a stable networking endpoint for a group of Pods, abstracting their dynamic comings and goings. Since Pods can be created and destroyed frequently, their IP addresses are not reliable for direct communication.
A Service solves this by providing a consistent IP address and DNS name, and then routing traffic to the appropriate Pods based on label selectors. In addition, Services can be configured with different exposure types, allowing internal communication or controlled external access.
Types of Services include:
- ClusterIP: exposes the Service on an internal cluster IP (the default)
- NodePort: exposes the Service on a static port on every node
- LoadBalancer: provisions an external load balancer, typically through a cloud provider
- ExternalName: maps the Service to an external DNS name
A sample definition of a Service follows:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
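For comparison, exposing the same Pods outside the cluster via a NodePort could look like the following sketch; the Service name and nodePort value are illustrative, and the nodePort must fall within the cluster's configured range (30000-32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service-nodeport   # illustrative name
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80          # Service port inside the cluster
    targetPort: 80    # container port the traffic is forwarded to
    nodePort: 30080   # port opened on every node (illustrative)
```

With this Service, the application is reachable at any node's IP address on port 30080, in addition to the internal cluster IP.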
For more advanced routing, Ingress resources are used to manage HTTP and HTTPS traffic. Ingress enables powerful and flexible features such as host-based routing, path-based routing, and TLS termination, acting as a centralized entry point to multiple services within the cluster. It defines rule-based external access to services.
A sample Ingress definition follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
Ingress resources require an Ingress Controller (e.g., the NGINX Ingress Controller) to function.
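As a sketch of the TLS termination mentioned above, a tls section can be added to the Ingress; the Secret name is illustrative and must refer to a Kubernetes Secret containing a TLS certificate and key:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-com-tls   # illustrative TLS Secret name
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```

The Ingress Controller terminates HTTPS using the certificate from the Secret and forwards plain HTTP traffic to web-service inside the cluster.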
Containers are ephemeral, so persistent storage is required for stateful applications.
The main Kubernetes storage components are:
- PersistentVolume (PV): a piece of storage provisioned in the cluster
- PersistentVolumeClaim (PVC): an application's request for storage
- StorageClass: a template for dynamically provisioning PersistentVolumes
Persistent storage in Kubernetes is handled through an abstraction that separates storage provisioning from its usage.
PersistentVolumeClaims (PVCs) allow applications to request storage resources without needing to know the underlying infrastructure details. This abstraction enables portability and flexibility, as the same application configuration can be used across different environments.
A PersistentVolumeClaim specifies requirements such as storage size and access modes, and Kubernetes binds it to a suitable PersistentVolume (PV), either statically provisioned or dynamically created through a StorageClass. This approach provides great flexibility and allows DevOps architects to mold storage to their particular environment, while insulating applications from local storage decisions.
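As a sketch of dynamic provisioning, a PVC can name a StorageClass that tells Kubernetes how to create a matching PersistentVolume on demand; the class name and provisioner below are illustrative and depend on the cluster:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                            # illustrative class name
provisioner: kubernetes.io/no-provisioner   # replace with your cluster's provisioner
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc                         # illustrative name
spec:
  storageClassName: fast-ssd                # requests a volume from the class above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

When the PVC is created, Kubernetes asks the named StorageClass to provision a suitable PV automatically, instead of binding to a pre-created one.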
A sample definition of a PVC follows:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
This YAML defines a PersistentVolumeClaim requesting persistent storage in Kubernetes. The apiVersion: v1 and kind: PersistentVolumeClaim lines specify the resource type, while metadata.name assigns the name my-pvc.
In the spec, accessModes: ReadWriteOnce means that the volume can be mounted as read-write by a single node at a time, and resources.requests.storage: 1Gi requests 1 GiB of storage.
Kubernetes binds this claim to a suitable PersistentVolume that satisfies these requirements, allowing Pods to use persistent storage independent of their lifecycle.
To use this PVC in a Pod, add it to the Pod’s definition as follows:
apiVersion: v1
kind: Pod
metadata:
  name: storage-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: storage
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: my-pvc
This YAML defines a Pod that uses persistent storage through a persistentVolumeClaim resource. The Pod runs a container named app using the nginx image, and mounts a volume at the path /data inside the container. The volumeMounts section links the container to a volume named storage, which is defined in the volumes section. This volume is backed by the PersistentVolumeClaim my-pvc, letting the Pod use the storage previously requested and provisioned. This allows the Pod to store and retrieve data in the /data directory, which persists even if the Pod is restarted or replaced.
Beyond basic workload management, Kubernetes includes several specialized controllers designed for specific operational patterns:
- StatefulSets for applications that need stable identities and persistent per-Pod storage
- DaemonSets for running a copy of a Pod on every node
- Jobs for run-to-completion tasks
- CronJobs for scheduled, recurring tasks
Together, these resources provide a comprehensive orchestration framework capable of supporting a wide range of application requirements.
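As one example of such a specialized controller, a CronJob runs Pods on a schedule; the name, schedule, image, and command below are illustrative:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report            # illustrative name
spec:
  schedule: "0 2 * * *"           # every day at 02:00, standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox:1.36
            command: ["sh", "-c", "echo generating report"]
```

At each scheduled time, the CronJob creates a Job, which in turn runs a Pod to completion, so the task benefits from the same self-healing guarantees as other workloads.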
As you advance your DevOps journey and build proficiency in container orchestration, make sure to explore the official Learning Material for the DevOps Tools Engineer certification, available at no cost. It offers comprehensive coverage of the exam objectives and serves as a valuable guide to support and structure your studies.