Sreeram Venkitesh

Building and running Kubernetes with kind

I use kind a lot to spin up Kubernetes clusters locally. Kind also lets you compile the Kubernetes source code into node images that you can then use to run clusters. This is what I do to quickly build and test changes to the Kubernetes source code.

Note: You can also use the hack/local-up-cluster.sh script to spin up a Kubernetes cluster from source. However, local-up-cluster.sh only works in Linux environments and is not supported on macOS.
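
If you're on Linux, the script is typically run from the root of a kubernetes/kubernetes clone, roughly like this:

./hack/local-up-cluster.sh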

Installing kind

I'm installing kind on my Mac with Homebrew.

brew install kind

Once you've installed kind, you can quickly spin up clusters on your machine. Each node of your cluster runs as a Docker container.

kind create cluster --name test-cluster
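
Since the nodes are just Docker containers, you can see them with docker ps. kind also creates a kubectl context named kind-<cluster-name>, which you can use right away:

docker ps
kubectl cluster-info --context kind-test-cluster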

Building your node image from source

You can clone kubernetes/kubernetes and run the following from inside the repository to build the source code into a kind node image which you can use to create clusters.

kind build node-image --arch arm64 .

I'm passing the --arch flag as arm64 since I'm using an M1 MacBook Pro. Also note that I'm passing the path to the Kubernetes source code as an argument here. Since we're running the command from inside the cloned repository, this is the . directory.
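
Once the build finishes, you can verify that it produced a node image with Docker:

docker images kindest/node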

kind cluster config

Once the above command finishes running, you'll see a new container image, kindest/node:latest, on your machine. This is the source code compiled into a container image. You can use the node image directly with kind create cluster, or you can set up your cluster with a configuration file. The kind config file lets you configure things like feature gates for individual components, set values for different flags, and so on.
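
For example, to create a cluster directly from the freshly built image (the cluster name here is arbitrary):

kind create cluster --name dev --image kindest/node:latest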

Here's an example config file for a cluster with 3 worker nodes and the PodLifecycleSleepActionAllowZero feature gate turned on:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: sleep-cluster
featureGates:
  PodLifecycleSleepActionAllowZero: true
nodes:
  - role: control-plane
    image: kindest/node:latest
  - role: worker
    image: kindest/node:latest
  - role: worker
    image: kindest/node:latest
  - role: worker
    image: kindest/node:latest
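
Assuming you saved this config as sleep-cluster.yaml (the filename is up to you), you can create the cluster with it:

kind create cluster --config sleep-cluster.yaml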

Enabling feature gates for specific nodes

Sometimes, when testing a feature, you might want to enable or disable it on just one node, or enable a feature gate for a single component of one node in your cluster. For example, you might want to disable a feature gate for the kube-apiserver on your control plane node but enable it for the kubelet. This lets you test edge cases like version skew between components and differing behaviour between nodes. This is how you'd configure it for your kind cluster:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: ippvs-cluster
nodes:
- role: control-plane
  image: kindest/node:latest
  kubeadmConfigPatches:
  - |
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        v: "3"  # Set verbosity level here
  # The control plane node is bootstrapped with kubeadm init,
  # so its kubelet flags are patched via InitConfiguration
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        feature-gates: "InPlacePodVerticalScaling=true"
- role: worker
  image: kindest/node:v1.30.0
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        feature-gates: "InPlacePodVerticalScaling=false"
- role: worker
  image: kindest/node:v1.31.0
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        feature-gates: "InPlacePodVerticalScaling=false"

In the above config, we have done the following:

  • Configured our control plane node to run the compiled source code with the kindest/node:latest image
  • Set the verbosity flag for kube-apiserver to 3. This prints all the klog.V(3) (and lower verbosity) log statements you've added in the code.
  • Enabled the InPlacePodVerticalScaling feature gate for the kubelet on the control plane node
  • Started two worker nodes, one running Kubernetes v1.30.0 and the other running v1.31.0
  • Disabled the InPlacePodVerticalScaling feature gate on both worker nodes
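
Once the cluster is up, you can sanity-check the setup. kubectl get nodes shows each node's Kubernetes version, and the kubelet's configz endpoint, proxied through the API server, should show the kubelet's effective feature gates. The node name below follows kind's <cluster-name>-worker naming, and jq is only used for readability:

kubectl get nodes
kubectl get --raw "/api/v1/nodes/ippvs-cluster-worker/proxy/configz" | jq '.kubeletconfig.featureGates'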

I used this cluster setup to test the version skew behaviour when trying to update a Pod that was scheduled to a v1.30.0 node. v1.31 introduced a new behaviour for how Pods are restarted when fields other than the image are updated. Version skew had to be handled for InPlacePodVerticalScaling too, since Pod restart behaviour differs before and after v1.31.0. Kind gives you the power to reproduce such scenarios in your cluster in a straightforward manner.
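
To trigger the scenario, you'd schedule a Pod onto one of the v1.30.0 workers and then patch a resource field rather than the image. Here's a sketch of such a patch, with a made-up Pod and container name (the exact resize semantics depend on your Kubernetes version and the feature gate):

kubectl patch pod test-pod --patch '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"200m"}}}]}}'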