Ensuring your Kubernetes component, such as a controller or an operator, works correctly is an important step before merging a pull request or deploying it to production. You want to be sure that incoming changes will not introduce any regression or negatively affect any part of the system. This is usually done by running integration and end-to-end tests in the CI/CD pipeline. They test the component on all incoming changes, automatically and in a clean environment, and thereby prevent potential errors and flakes.
However, integration and E2E tests assume there is a working Kubernetes cluster. While there are many tools for running Kubernetes (such as kubeadm or tools based on kubeadm), many developers experience a lot of problems trying to get Kubernetes running in the CI environment. CI environments are usually as minimal as possible, while Kubernetes has many dependencies. Installing all the needed dependencies can take a lot of time, and sometimes it's not even possible.
We want to run Kubernetes using what we already have installed and configured in the CI pipeline, and usually that's Docker. In this blog post you're going to see how you can do this by using kind and how it is going to change your experience.
This blog post is a follow-up and update of the talk I held during KubeCon 2018 Seattle: Spawning Kubernetes in CI for Integration Tests.
kind is a tool for running local Kubernetes clusters using Docker containers as nodes. It supports Kubernetes 1.11+, multi-node, and highly-available clusters.
kind is suitable for running on local machines for development purposes, with support for Linux, macOS, and Windows. It has been developed with the objective of making it easy to get a Kubernetes cluster, but also with the intention of being used in CI pipelines for integration tests.
kind clusters are customizable, either by using CLI flags, by specifying a kind configuration file, or by using a custom node image. We’ll see more about those concepts throughout the blog post.
Before we proceed to running clusters using kind, let’s have a look at some core concepts and the main differences between kind and other solutions.
There are many different tools for running Kubernetes clusters, with Minikube and kubeadm being the most popular ones. This raises the question of what the difference between them is and when you should choose kind.
While Minikube does a great job at providing local Kubernetes clusters, running Kubernetes 1.11+ clusters with it requires either systemd, if used with the --vm-driver=none option, which is often unavailable in CI environments, or a virtual machine, which requires a hypervisor and takes plenty of resources. Similarly, kubeadm also needs systemd to fully provision the cluster and may be hard to configure depending on the environment.
kind is pursuing a different approach: it uses Docker containers as cluster nodes. Before bootstrapping a cluster, kind creates a container using the node image, which contains everything needed for kubeadm to bootstrap a cluster: kubeadm itself and all needed dependencies. Once the node container is created, kind invokes kubeadm, which sets up Kubernetes inside the newly created container.
This way, we can run Kubernetes using just Docker, which is very suitable for CI and testing environments. There are no complex dependencies and you can quickly create and destroy clusters.
In this blog post we won't go in-depth into kind's design, but if you'd like to learn more, make sure to check out kind Design Principles.
Now that we have an idea how kind works, let’s see how to use kind and how to run it in the CI pipeline.
kind comes with a simple and straightforward CLI. You can create a cluster with a single command, which takes care of everything including setting up all needed Docker containers, provisioning the cluster and creating the Kubeconfig file.
Before we start, we need to download and install kind. There are two ways: using go get or downloading a binary from GitHub Releases.
Using the Go toolchain may be the easiest way, and you always get the latest changes, but running from the master branch always introduces risks, as it can break at any time. The kind team is doing a great job at ensuring the master branch never breaks, but if stability is very important to you, you should consider using releases.
If using the Go toolchain, you can download kind with:

```shell
go get -u sigs.k8s.io/kind
```
Alternatively, if you prefer using stable releases, you can obtain kind using cURL and then move it to a directory in your PATH:

```shell
curl -Lo kind https://github.com/kubernetes-sigs/kind/releases/download/0.1.0/kind-linux-amd64 && chmod +x kind && sudo mv kind /usr/local/bin/
```
After that, you should be able to create a cluster using the following command:

```shell
kind create cluster
```
This command creates a single-node Kubernetes cluster. The cluster version depends on the node image that your kind version uses, but you can always specify the node image to be used with the --image flag:

```shell
kind create cluster --image "kindest/node:v1.13.3"
```
Note: If you're running kind version 0.1.0, it's highly recommended to use the kindest/node:v1.13.3 image instead of the default (kindest/node:v1.13.2) due to the recently discovered CVE-2019-5736!
Once the cluster is provisioned, you can use the kind get kubeconfig-path command to get the path to the Kubeconfig file. If you're using kubectl to interact with the cluster, you can set the KUBECONFIG environment variable, so you don't always have to specify the path when interacting with the cluster:

```shell
export KUBECONFIG=$(kind get kubeconfig-path)
```
Note: If you're running kubectl from Makefiles, this approach might not work. Instead, you should inline the environment variable or use the --kubeconfig flag:

```shell
KUBECONFIG=$(kind get kubeconfig-path) kubectl get nodes
kubectl --kubeconfig=$(kind get kubeconfig-path) get nodes
```
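To illustrate why inlining helps in Makefiles, here is a minimal sketch of such a target (the target name is illustrative): every recipe line runs in its own shell, so a plain `export` on a previous line would not persist.

```makefile
# Hypothetical Makefile target; the variable is inlined on the recipe line
# because each line runs in a fresh shell ($$ escapes $ for make).
test-integration:
	KUBECONFIG=$$(kind get kubeconfig-path) kubectl get nodes
```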
In case you need to delete a cluster, you can use:

```shell
kind delete cluster
```
When you use commands such as delete, they use kind as the default cluster name. You can configure the cluster name with the --name flag. This also allows you to create and run multiple clusters at the same time.
The kind CLI can only be used for basic configuration, such as setting the cluster name or the node image to be used. Configuring multi-node or highly-available clusters and changing advanced options is done via the kind configuration file.
For example, if you want to create a cluster with a control plane node and three worker nodes, you’d use a configuration file like this:
```yaml
apiVersion: kind.sigs.k8s.io/v1alpha2
kind: Config
nodes:
- role: control-plane
- role: worker
  replicas: 3
```
The configuration is provided to the create cluster command using the --config flag:

```shell
kind create cluster --config config.yaml
```
You can check out an example configuration file for more details on how you can use configuration files to configure your clusters.
Note: The kind configuration file is in alpha status at the time of writing this post. Breaking changes are expected in the upcoming period, so make sure to check the kind documentation for an up-to-date reference.
Finally, we can use this knowledge to run a cluster in the CI environment.
For this blog post, we'll see how to use kind in Travis CI, as it's the most popular CI pipeline among open source projects. These steps should work in any other CI pipeline as long as there is a properly configured Docker available.
We just need to grab the kind binary and use the kind create cluster command to start the cluster. Optionally, you may want to download kubectl as well. We're going to do that in the before_script phase, which is used to prepare the environment, and we'll run the actual tests in the script phase:
```yaml
language: go

go:
  - '1.12.x'

services:
  - docker

jobs:
  include:
    - stage: Integration Tests
      before_script:
        # Download and install kubectl (optional)
        - curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
        # Download and install kind using Go toolchain
        - go get sigs.k8s.io/kind
        # Download and install kind using cURL
        # - curl -Lo kind https://github.com/kubernetes-sigs/kind/releases/download/0.0.1/kind-linux-amd64 && chmod +x kind && sudo mv kind /usr/local/bin/
        # Create a new Kubernetes cluster using kind
        - kind create cluster
        # Set KUBECONFIG environment variable
        - export KUBECONFIG="$(kind get kubeconfig-path)"
      script: make test-integration
```
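In some CI environments, the API server may need a few seconds before kubectl can reach it, so a small retry helper between cluster creation and the tests can reduce flakes. This is a minimal sketch; the function name and retry defaults are illustrative and not part of kind:

```shell
# Hypothetical helper: poll `kubectl get nodes` until a node reports Ready,
# failing after a bounded number of retries so the CI job doesn't hang forever.
wait_for_nodes() {
  retries=${1:-30}   # illustrative default: up to 30 attempts, 2s apart
  i=0
  until kubectl get nodes 2>/dev/null | grep -q ' Ready'; do
    i=$((i + 1))
    if [ "$i" -ge "$retries" ]; then
      echo "cluster did not become ready in time" >&2
      return 1
    fi
    sleep 2
  done
  echo "cluster is ready"
}
```

You would call `wait_for_nodes` right after `kind create cluster` and before running the test target.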
Note: If stability is important to you, consider downloading a kind release binary in the before_script phase instead of installing it with go get. This ensures potential backwards-incompatible changes are not going to break your tests and the CI pipeline.
For more details, you can check out the travis-kind example repository, which contains .travis.yml along with additional resources.
Often when running integration and end-to-end tests, you want to build a Docker image for your component locally, in the pipeline, and use it for tests. Pushing images for tests to a remote registry is something we want to avoid. As kind clusters run Docker inside the node container, we need to push the image from Docker running on the local machine to Docker running in the node container.
Docker images can be loaded into kind clusters using the kind load docker-image command. Usually we'd do something such as:

```shell
docker build -t my-image:tag .
kind load docker-image my-image:tag
kubectl create -f manifest-using-my-image.yaml
```
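Since using the latest tag can cause the Kubelet to try pulling the image from a remote registry, you could wrap the build-and-load steps in a small helper that always applies an explicit tag. A sketch with hypothetical names (build_and_load is not a kind command):

```shell
# Hypothetical helper: build an image with an explicit tag, load it into
# the kind cluster, and echo the full reference for use in manifests.
build_and_load() {
  name=$1
  tag=${2:-$(date +%s)}   # fall back to a timestamp-based tag, never "latest"
  docker build -t "$name:$tag" . || return 1
  kind load docker-image "$name:$tag" || return 1
  echo "$name:$tag"
}
```

For example, `IMAGE=$(build_and_load my-image "$TRAVIS_COMMIT")` and then substitute `$IMAGE` into the manifest before `kubectl create`.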
Note: The kind load command is not available in the 0.1.0 release; to use it, you need to install kind from the master branch.
Note: You should avoid using the latest tag when building images for kind clusters. By default, the Kubelet pulls the image from a remote registry if the latest tag is used, unless imagePullPolicy is set to Never.
kind is a new project, currently in alpha status. However, it's very stable, and many projects are using it for their CI tests, including:
If you want to get involved, make sure to check out the project repository and the project website. kind has the #kind channel on Kubernetes Slack where you can get in touch with users and maintainers.
kind is a fast and easy-to-use tool for creating local Kubernetes clusters. It has been developed with the objective of being used in CI environments, and so far it works very well for many projects. This blog post should give you a quick introduction on how to get started. However, kind has many more features, so take a look at the kind website for more details on design and advanced use cases.