
Adding kubelets the hard way

Tuesday Jan 12, 2021 | updated May 4, 2021 | Series Kubernetes

In this article we'll continue our journey of getting a multi-node Kubernetes cluster up and running. I'm doing this on VMs under KVM/QEMU using Fedora 33 Server edition, purely out of preference and the hardware I have available. This should however work with any hardware setup you'd like. If you don't have your master running yet, please read K8s The Hard Way first.

Before we dive in I'd like to repeat my disclaimer from the previous article: this is not a production-ready setup, neither in terms of security (despite it working with SELinux and firewalls enabled) nor availability.

Now we can start and it begins pretty much the same as the preparations for the control plane from last time.

Update system

We'll start right after the install is finished, so I will update my system first. At this time Fedora 33 is the latest, so a simple sudo dnf upgrade --refresh will suffice to get everything up to date.

Update May 2021: I've now upgraded all nodes to Fedora 34. While the upgrade of each worker node worked fine out of the box, the master node upgrade took some work. Fedora 34 comes with Kubernetes 1.20.5 and containerd 1.5.0, compared to Kubernetes 1.18.2 and containerd 1.4.4 on Fedora 33. Please have a read here for the changes compared to this guide.

Configuring GRUB2

Next we'll configure GRUB a little more to my and the cluster's liking.

Let's edit the configuration: sudo vi /etc/default/grub

First we'll change our cgroup configuration from v2 to v1. Unfortunately this still looks to be needed, even with containerd v1.4.3. Hopefully Fedora 34 Server edition will ship with docker 20.10 or newer, like the Workstation edition does, so this is no longer necessary. Add systemd.unified_cgroup_hierarchy=0 to GRUB_CMDLINE_LINUX.

Optionally, edit the timeout: GRUB_TIMEOUT=0. I simply prefer my nodes and master to boot up straight away. Do realize this makes it more difficult to switch kernels if, for example, an update renders the OS unbootable.
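With both changes in place, the relevant lines in /etc/default/grub end up looking roughly like this; the rhgb quiet options are just placeholders for whatever your install already has on that line, only the cgroup flag is appended:

GRUB_TIMEOUT=0
GRUB_CMDLINE_LINUX="rhgb quiet systemd.unified_cgroup_hierarchy=0"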

Finally to save our configuration we run one of the following commands:

When using BIOS use sudo grub2-mkconfig -o /boot/grub2/grub.cfg

When using UEFI use sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

(Optional) Configure search domain

Since I'm using DNS and a local domain I update my systemd-resolved configuration: sudo vi /etc/systemd/resolved.conf

Add a DNS server: 192.168.10.1, add a search domain: local.haukerolf.net, and optionally disable the caching done by systemd-resolved.

This allows me to just connect by host name instead of FQDN.
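For reference, a minimal /etc/systemd/resolved.conf with those settings could look something like this; the IP and domain are specific to my network, and Cache=no is only there if you chose to disable caching:

[Resolve]
DNS=192.168.10.1
Domains=local.haukerolf.net
Cache=no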

At this point I like to do a reboot, just in case. Optionally you could just reload and restart systemd-resolved:

sudo systemctl daemon-reload

sudo systemctl restart systemd-resolved
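Afterwards resolvectl status should show the configured DNS server and search domain, which is a quick way to confirm the change took effect:

resolvectl status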

Install kubernetes

From here things start to differ slightly. Now we get to install the node or kubelet.

sudo dnf install kubernetes-node kubernetes-cni

This will give you two services:

  • Kubelet
  • Kube-proxy

We're only installing the dependencies that are required to run a node, leaving out all the control plane services.

The kubernetes-cni package installs the CNI plugins we need for node-to-node pod communication.
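If you're curious where these plugins end up: on Fedora they are installed under /usr/libexec/cni (the path we'll point containerd at in a moment), so a quick listing should show binaries such as bridge, host-local and loopback:

ls /usr/libexec/cni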

Configure containerd

I chose direct integration with containerd over docker as it requires less tweaking of SELinux; both should work though. In order for our network plugin to work later on, we need to tweak some settings for our container runtime (CRI).

First we’ll generate the default configuration:

sudo containerd config default | sudo tee /etc/containerd/config.toml

Then we'll point it to the CNI plugin directory; on Fedora this is unfortunately not the more common default under /opt as in some other distros. Under [plugins."io.containerd.grpc.v1.cri".cni] edit bin_dir to the following:

bin_dir = "/usr/libexec/cni"
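For orientation, the edited section of /etc/containerd/config.toml should then look roughly like this; conf_dir keeps its generated default and any other keys in the section stay untouched:

[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "/usr/libexec/cni"
  conf_dir = "/etc/cni/net.d"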

Configure kubernetes node

Create /etc/kubernetes/kubelet-config.yaml

---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
clusterDomain: cluster.local
clusterDNS:
- 10.254.0.10
kubeletCgroups: "/systemd/system.slice"
cgroupDriver: systemd
failSwapOn: false
authentication:
  x509: {}
  webhook:
    enabled: false
    cacheTTL: 2m0s
  anonymous:
    enabled: true
authorization:
  mode: AlwaysAllow
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s

This YAML configures the kubelet (node); previously this was done through command-line arguments. We bind it to 0.0.0.0 so it is reachable from outside the node. A special note for the IPs configured here: they should be the same as the CIDR ranges we used in the earlier control plane and certificate configurations. Lastly, note the authentication settings: this is insecure and completely open!

Note: configuration through YAML looks to be the direction Kubernetes is heading, and it should be possible for the control plane too. However, I have not researched this at this time.

Next we need to make sure the kubelet ignores the environment variables loaded from /etc/kubernetes/kubelet and instead picks up the YAML and kubeconfig files. To do this we define a systemd override:

sudo systemctl edit kubelet

[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --config /etc/kubernetes/kubelet-config.yaml --kubeconfig /etc/kubernetes/kubelet.kubeconfig --container-runtime remote --container-runtime-endpoint unix:///var/run/containerd/containerd.sock

This tells the kubelet to use containerd instead of the default docker(-shim). If you'd like to use docker, set --cni-bin-dir /usr/libexec/cni --network-plugin cni to ensure you can define a network plugin later, and leave out both --container-runtime options.
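For completeness, here is a sketch of what that override could look like for the docker route, built from the flags above; I have not tested this variant myself:

[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --config /etc/kubernetes/kubelet-config.yaml --kubeconfig /etc/kubernetes/kubelet.kubeconfig --cni-bin-dir /usr/libexec/cni --network-plugin cni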

Lastly, we also need to tell kube-proxy where to find its settings. First create the configuration:

sudo vi /etc/kubernetes/kube-proxy-config.yaml

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/etc/kubernetes/kubelet.kubeconfig"
mode: "iptables"
clusterCIDR: "10.244.0.0/16"

And make sure kube-proxy knows where to find it:

sudo systemctl edit kube-proxy

[Service]
ExecStart=
ExecStart=/usr/bin/kube-proxy --config /etc/kubernetes/kube-proxy-config.yaml

Now we can enable the kubelet and the proxy so they start on the next boot. As with the control plane, we'll start everything up later.

sudo systemctl enable kubelet kube-proxy

Configure local client

Now we can quickly configure the kubectl client so we can check if things are working later.

mkdir -p ~/.kube
cp /etc/kubernetes/kubelet.kubeconfig ~/.kube/config

Update May 2021: I never specified what the content of the kubeconfig should be, by now it is outdated. Please see this post for the details of the kubeconfig.

Change firewall

Up to now we've accessed everything locally, but for the node to be useful we need to open up the firewall. Completely disabling it is an option (if the deployment type allows), but I wanted it enabled and went with the following rules that I found here: https://stackoverflow.com/questions/60708270/how-can-i-use-flannel-without-disabing-firewalld-kubernetes

Note that we're opening fewer ports compared to the control plane node.

# Node
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=8285/udp # Flannel
firewall-cmd --permanent --add-port=8472/udp # Flannel
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --add-masquerade --permanent
firewall-cmd --reload
systemctl restart firewalld
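To double check that the rules made it in, you can list them (run as root or with sudo, same as the rules above):

firewall-cmd --list-ports
firewall-cmd --query-masquerade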

Turn on the node

We're almost ready to turn everything on. However, SELinux does not like some of the things kubernetes and flannel try to do, so we'll add a few policies. Beware: I am no SELinux expert and these settings are purely there to have the cluster work with SELinux enabled; I cannot comment on how secure this configuration is. It most likely is not secure, but hopefully it is better than fully disabling SELinux.

To make life easier we'll temporarily stop SELinux from enforcing its policies. Later we'll re-enable it and configure it, hopefully mostly in one go, without it interfering with any troubleshooting on the cluster itself. Run sudo setenforce 0; on the next boot it will be enforcing again.

Now we can start all services that we may need.

sudo systemctl start containerd (skip this when using docker)

sudo systemctl start kubelet kube-proxy

At this point we should have a working cluster and all services should be running without restart loops or failures.

We can check the cluster with a simple kubectl version and we should see the following.

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"archive", BuildDate:"2020-07-28T00:00:00Z", GoVersion:"go1.15rc1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"archive", BuildDate:"2020-07-28T00:00:00Z", GoVersion:"go1.15rc1", Compiler:"gc", Platform:"linux/amd64"}
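From the master (or any machine with a working kubeconfig) kubectl get nodes should now also list this node; don't be alarmed if it still reports NotReady until the flannel pod is up on it:

kubectl get nodes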

Configure SELinux

Now we need to make sure that SELinux doesn’t block anything essential after we reboot and the enforcing is enabled again.

sudo audit2allow -a -M kubernetes creates a policy module, based on the audit logs, that allows the actions that would have been blocked had enforcement been enabled.

sudo semodule -i kubernetes.pp should be run to install the new policies.

Afterwards sudo audit2allow -a should show something like the output below. It may take a few reboots and re-runs of the above two commands before all the things flannel tries to do are captured. Just keep rebooting and re-running them until nothing new is added and the output looks similar.

#============= init_t ==============

#!!!! This avc is allowed in the current policy
allow init_t kubernetes_file_t:file { open read };

Now reboot and check that everything is running, from the services to the flannel pod.

Wrap up

You should have a multi-node cluster running now.

Have fun with your new homelab cluster!

Enjoyed this? Read more in the Kubernetes series.