DevOps Tools Introduction #07: Container Orchestration
While individual Docker containers usually run a single program, applications in a microservice architecture are typically composed of multiple containers, each contributing a specific set of features to the complete application. Coordinating the creation and maintenance of the containers that make up an application is the task of container orchestration tools. Multiple container orchestration platforms exist; two of them are covered by the LPI DevOps Tools Engineer exam: Docker Swarm, which is part of Docker, and Kubernetes, an independent container orchestration platform.
Docker Swarm is a simple clustering tool that is nowadays included in Docker. While Swarm lets you connect multiple Docker hosts into a cluster, Docker Compose provides features to describe application stacks containing more than one container and to run these stacks on a single Docker host or on a Swarm cluster.
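To give you an idea of what such a stack description looks like, here is a minimal sketch of a Compose file. The service and image names are purely illustrative, not taken from the workshop:

```yaml
# Hypothetical two-container stack: a web frontend backed by Redis.
version: "3"
services:
  web:
    image: example/webapp:latest   # illustrative application image
    ports:
      - "8080:80"                  # publish container port 80 on host port 8080
    depends_on:
      - redis                      # start the backing store first
  redis:
    image: redis:alpine            # official Redis image as backing store
```

Each entry under `services` becomes one or more containers; Compose wires them into a shared network so `web` can reach `redis` by its service name.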
To learn how to set up a Swarm cluster, create a Docker Compose file, and run your service on a Swarm cluster, spend a few hours on the Container Orchestration with Docker and Swarm workshop by Jérôme Petazzoni. It explains Swarm and Compose in a clear, understandable and hands-on way. The slides encourage you to use Play with Docker, a service we already used last week. After you log in with your Docker ID, Play with Docker grants you access to up to five Docker nodes in the cloud and facilitates access to their exposed ports. Don’t just read the slides; take the time to follow all the exercises and examples on your own. It is definitely worth it!
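The basic cluster setup you will practice in the workshop boils down to a handful of commands. The IP address below is a placeholder for your first node:

```shell
# On the first node, initialize the swarm; this node becomes a manager:
docker swarm init --advertise-addr 192.168.99.100

# The init command prints a join token; run the suggested command
# on each additional node to add it as a worker:
docker swarm join --token <worker-token> 192.168.99.100:2377

# Back on the manager, verify that all nodes have joined:
docker node ls
```

Once the cluster is up, `docker stack deploy -c docker-compose.yml <stackname>` runs a Compose-described stack across the swarm.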
Kubernetes is an alternative tool for container orchestration. Originally developed by Google, Kubernetes is becoming more and more popular and is integrated with numerous other container platform products. Setting up a Kubernetes cluster is not as simple as creating a Docker Swarm. Fortunately, the LPI DevOps Tools Engineer objectives do not ask you to set up Kubernetes. Instead, you’re expected to work with an existing Kubernetes setup.
Even better, there is the Kubernetes Basics Tutorial, which guides you through your first major steps with Kubernetes. Each course module contains an interactive tutorial where you work with Kubernetes in a terminal in your web browser.
Similar to Play with Docker, Play with Kubernetes provides you with a free Kubernetes playground in your web browser. To create your own Kubernetes playground, consider using a hosted Kubernetes service. If you want to install Kubernetes locally, take a look at Minikube or CoreOS Container Linux.
If you enjoyed learning Docker Swarm with the Docker Orchestration workshop, you’ll be happy to hear that Jérôme Petazzoni, together with AJ Bowen, also offers a workshop titled Deploying and Scaling Microservices with Docker and Kubernetes. It is definitely worth the time to learn how to deploy the workshop’s example app, a distributed ‘Dockercoin’ miner, on Kubernetes.
Since Kubernetes uses its own orchestration model, take some time to get the terminology and concepts right. Make sure you read the Pod Overview, Pods Details and Pod Lifecycle chapters of the Kubernetes Documentation. The exam objectives also explicitly mention Deployments, ReplicaSets and Services as important configuration elements in Kubernetes.
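To see how these concepts fit together, here is a sketch of a manifest pairing a Deployment with a Service. The names and the image are hypothetical:

```yaml
# Hypothetical Deployment: the ReplicaSet it manages keeps three
# identical Pods of an example image running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: example/webapp:latest   # illustrative image name
        ports:
        - containerPort: 80
---
# A Service gives the Pods a stable virtual IP and DNS name.
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp        # routes traffic to Pods carrying this label
  ports:
  - port: 80
    targetPort: 80
```

Note how the pieces connect through labels: the Deployment creates a ReplicaSet, the ReplicaSet keeps Pods matching `app: webapp` alive, and the Service load-balances traffic across exactly those Pods.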
To give these concepts more context, visit the Tasks section of the Kubernetes documentation. Here you’ll find groups of common tasks, including instructions on how to approach them. Browse the sections and tasks and try to follow some of the procedures that seem useful to you.
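Most of these tasks come down to a short `kubectl` session against a manifest file. A typical round trip might look like this (`webapp.yaml` is a hypothetical file name):

```shell
# Apply a manifest and inspect the objects it creates:
kubectl apply -f webapp.yaml
kubectl get deployments,replicasets,pods
kubectl describe service webapp

# Scale the Deployment and watch the ReplicaSet adjust the Pod count:
kubectl scale deployment webapp --replicas=5
```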
We’ve covered a huge amount of material this week. Once you’ve worked through all the references, you’ll be familiar with the key aspects of deploying containerized applications and services. Next week we’ll turn to the other side of containers and look at the infrastructure required to run Docker on your local devices, in your data center and in the cloud.