
Creating a k8s master node, the hard way

Tuesday Jan 12, 2021 | updated May 4, 2021 | Series Kubernetes

I recently got some new hardware in an effort to streamline my homelab. This required a move of my old kubernetes cluster, and this time I decided to properly document that move, since the last guide I used has become a lot harder to find and I'm too stubborn to use kubeadm. Just to be absolutely clear: this guide will get you a running kubernetes cluster suitable as a learning environment or homelab. This is not a secure setup, and this walkthrough should not be used to set up a staging or production environment.

This kubernetes master node will be running on Fedora 33, configured as Fedora Server with the guest agents and headless management packages installed. The guest agents are there simply because I'm running under kvm/qemu; in fact the entire cluster will run on a single workstation. Headless management comes with Cockpit, which I really like for having a quick look at a machine without having to ssh in.

Despite this being a homelab situation I did want to keep SELinux enforcing and the firewall in place. It added a nice learning opportunity, and disabling it completely seemed like the classic chmod 0777 / overreaction. Having said that, I am by no means experienced with SELinux, and the actions below are simply there to make things work, not to provide a secure setup.

Most of this setup is based on my experience with the following guides.

These, however, seem to have gone without updates and are a little out of date.

One last thing before we dive in: the steps below assume a clean Fedora 33 install with the hostname already configured and the packages mentioned earlier installed. In my case this will install version 1.18.2 of kubernetes, as that is what dnf provides; I have not bothered trying to get a more recent version. At the time of writing, 1.20 is the most up to date version of kubernetes.

Update system

We’ll start right as the install is finished so I will update my system first. At this time Fedora 33 is the latest so a simple sudo dnf upgrade --refresh will suffice to get everything up to date.

Update May 2021: I've now upgraded all nodes to Fedora 34. While the upgrade for each node worked fine out of the box, the master node upgrade took some work. Fedora 34 comes with kubernetes 1.20.5 and containerd 1.5.0, compared to kubernetes 1.18.2 and containerd 1.4.4 on Fedora 33. Please have a read here for the changes compared to this guide.

Configuring GRUB2

Next we'll configure GRUB a little more to my and the cluster's liking.

Let's edit the configuration: sudo vi /etc/default/grub

First we'll change our cgroup configuration from v2 to v1. Unfortunately it looks like this is still needed, even with containerd v1.4.3. Hopefully the Fedora 34 server edition will ship with docker 20.10 or newer, like the workstation edition, and this will no longer be necessary. Add systemd.unified_cgroup_hierarchy=0 to GRUB_CMDLINE_LINUX.
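As an illustration, with the existing kernel arguments left in place (yours will differ), the line could end up looking something like this:

GRUB_CMDLINE_LINUX="rhgb quiet systemd.unified_cgroup_hierarchy=0"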

Optionally edit the timeout with GRUB_TIMEOUT=0; I simply prefer to have my nodes and master boot up straight away. Do realize this will make it more difficult to switch kernels if, for example, an update renders the OS unbootable.

Finally to save our configuration we run one of the following commands:

When using BIOS use sudo grub2-mkconfig -o /boot/grub2/grub.cfg

When using UEFI use sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

(Optional) Configure search domain

Since I'm using DNS with a local domain, I update my systemd-resolved configuration: sudo vi /etc/systemd/resolved.conf

Add the DNS server 192.168.10.1, add the search domain local.haukerolf.net, and optionally disable the caching done by systemd-resolved.
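As a sketch, the relevant part of /etc/systemd/resolved.conf would then look roughly like this (Cache=no only if you want to disable caching):

[Resolve]
DNS=192.168.10.1
Domains=local.haukerolf.net
Cache=no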

This allows me to just connect by host name instead of FQDN.

At this point I like to do a reboot just in case. Alternatively you can just restart the service:

sudo systemctl daemon-reload

sudo systemctl restart systemd-resolved
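To verify that the new settings were picked up, resolvectl should list the configured DNS server and search domain:

resolvectl status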

Install kubernetes

Next up, the installation: sudo dnf install kubernetes kubernetes-cni etcd

This will install everything you need to run a control plane with a node on a single machine.

The master will consist of 3 services:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager

The node will consist of 2 services:

  • kubelet
  • kube-proxy

For more in-depth information on the components, check out the documentation over here: https://kubernetes.io/docs/concepts/overview/components/

etcd is there to provide the cluster with a key-value store; my configuration will only have a single store for the entire cluster. In a more production-ready environment this should obviously be highly available, redundant and backed up.

kubernetes-cni will install the plugins we need for node-to-node pod communication; we'll get back to that when choosing a network plugin.

Configure etcd

The default configuration is enough if you only want to run a single node and control plane on the same machine. However, when I added another node on a different VM it looked like I had to change the settings a little. I'm not sure whether this is 100% necessary (there were a couple of issues at the same time), but with it everything works.

Edit the configuration file with sudo vi /etc/etcd/etcd.conf and change the following lines:

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://server:2379"

This makes etcd listen for connections from outside and lets our nodes know where to reach the store.
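Once etcd is running (we enable and start the services later in this guide), a quick sanity check against the advertised URL could look like this; replace {server-hostname} with whatever you put in ETCD_ADVERTISE_CLIENT_URLS:

ETCDCTL_API=3 etcdctl --endpoints=http://{server-hostname}:2379 endpoint health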

Configure containerd

I chose direct integration with containerd over docker, as it requires less tweaking of SELinux. Both should work though. In order for our network plugin to work later on, we need to tweak some settings for our container runtime (CRI).

First we’ll generate the default configuration:

sudo containerd config default | sudo tee /etc/containerd/config.toml

Then we'll point it to the CNI plugin directory; on Fedora this is unfortunately not the more common default under /opt as in some other distros. Under [plugins."io.containerd.grpc.v1.cri".cni] edit bin_dir to the following:

bin_dir = "/usr/libexec/cni"
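For orientation, the relevant part of /etc/containerd/config.toml then looks roughly like this (the conf_dir shown is the generated default and may differ between containerd versions):

[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "/usr/libexec/cni"
  conf_dir = "/etc/cni/net.d"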

Create CA & certificates

Certificates are not strictly necessary to get the cluster up and running, and even without them it will allow you to deploy pods. However, if you want to run some (provided) tooling it is good to get TLS set up with self-signed certificates. A lot of these tools are configured to connect to the master over https by default; without the certificates we set up here they will not work.

So we'll generate some of our own certificates. For this we will set up a certificate authority and sign some server certificates. If you want a more in-depth explanation, have a look here: https://gist.github.com/Soarez/9688998

I used that as a guide to get to the script and configurations below. The following script is what I used to create all the certificates in one go. It needs the configuration files below it to work.

generate.sh

#!/bin/bash

# generate a key and certificate signing request based on the configuration provided for the subject
openssl req -new -out {server-hostname}.csr -config {server-hostname}-csr.conf

# generate our CA
openssl genrsa -out ca.key 2048

openssl req -new -x509 -key ca.key -out ca.crt -config ca-csr.conf -days 7300

# Sign our subject CSR
openssl x509 -req -in {server-hostname}.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out {server-hostname}.crt -extensions v3_ca -extfile ext3-ca.conf -days 7300

Our certificate request will be created with the following parameters; note that not all of the alt names may be necessary. Make sure to change DNS.6, IP.1 and IP.2 to match your environment. I recommend leaving IP.2 at 10.244.0.1 if you're interested in running a multi-node cluster with flannel (described later in this article); this is your cluster IP range (CIDR) and needs to match the configuration of your control plane.

{server-hostname}-csr.conf

[ req ]
default_bits = 2048
default_keyfile = {server-hostname}.key
encrypt_key = no
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn

[ dn ]
C = nl
ST = province
L = town
O = haukerolf
OU = haukerolf kubernetes
CN = {server-hostname}.local.haukerolf.net

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
DNS.6 = {server-hostname}.local.haukerolf.net
IP.1 = 192.168.10.47
IP.2 = 10.244.0.1
IP.3 = 10.254.0.1

In order not to have to enter the CA name manually, we have the following configuration for it.

ca-csr.conf

[ req ]
default_bits = 2048
encrypt_key = no
prompt = no
default_md = sha256
distinguished_name = dn

[ dn ]
C = nl
ST = province
L = town
O = Sillmaur
OU = homelab

And since openssl does not seem to include the v3 extensions from the CSR, we need to provide those again. Make sure to copy any changes from {server-hostname}-csr.conf here too.

ext3-ca.conf

[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
DNS.6 = {server-hostname}.local.haukerolf.net
IP.1 = 192.168.10.47
IP.2 = 10.244.0.1
IP.3 = 10.254.0.1

[ v3_ca ]
basicConstraints=CA:FALSE
keyUsage=keyEncipherment,dataEncipherment
extendedKeyUsage=serverAuth,clientAuth
subjectAltName=@alt_names

Run ./generate.sh or whatever you called the script outlined above.
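Before installing the certificates, it doesn't hurt to check that the signed certificate actually contains the alt names; something like this should print the SAN section:

openssl x509 -in {server-hostname}.crt -noout -text | grep -A 1 "Subject Alternative Name"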

Create the directory where we’ll place the certificates sudo mkdir /etc/kubernetes/pki

Copy the following files ca.crt, ca.key, {server-hostname}.crt and {server-hostname}.key to /etc/kubernetes/pki.

Change the owner to kube so kubernetes, and no one else, can read these. The other permissions should already be correct, i.e. only the owner can read/write the private keys and others can only read the public certificates.
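As a sketch, the copy and ownership changes could look like the following; I'm assuming the kube user and group created by the Fedora kubernetes packages:

sudo cp ca.crt ca.key {server-hostname}.crt {server-hostname}.key /etc/kubernetes/pki/
# assumption: the kube user/group come from the Fedora packages
sudo chown -R kube:kube /etc/kubernetes/pki
# tighten the private keys if they are not already owner-only
sudo chmod 600 /etc/kubernetes/pki/*.key
sudo chmod 644 /etc/kubernetes/pki/*.crt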

Configure the control plane

We need to make sure all components know where to find the API server and how to connect to it. For this we configure /etc/kubernetes/config with the following line:

KUBE_MASTER="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig"

Configure /etc/kubernetes/apiserver:

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://{server-hostname}:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,SecurityContextDeny"

# Add your own!
KUBE_API_ARGS="--client-ca-file=/etc/kubernetes/pki/ca.crt --tls-cert-file=/etc/kubernetes/pki/{server-hostname}.crt --tls-private-key-file=/etc/kubernetes/pki/{server-hostname}.key"

Compared to the default, I've changed KUBE_API_ADDRESS to "--insecure-bind-address=0.0.0.0" to ensure the API server is reachable from the outside. I changed KUBE_ETCD_SERVERS from the default "--etcd-servers=http://127.0.0.1:2379" to point at the hostname instead, and removed the backup etcd server (as I'm not configuring one). And last, we add the generated certificates to KUBE_API_ARGS; the --client-ca-file lets clients know to trust our self-signed certificates.

The last thing we need to configure for the control plane is the controller manager. We'll let it know where to find our certificate and private key so it can generate service accounts, and that it needs to assign our nodes IPs from the cluster IP range that we decided on earlier.

Configure /etc/kubernetes/controller-manager:

KUBE_CONTROLLER_MANAGER_ARGS="--root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/{server-hostname}.key --service-cluster-ip-range=10.254.0.0/16 --cluster-cidr=10.244.0.0/16 --allocate-node-cidrs=true"

Enable services

Now we can enable the control plane services so they'll start up at the next boot. We'll start them later, after we've configured the kubelet and proxy.

sudo systemctl disable docker.service docker.socket

sudo systemctl stop docker.service docker.socket

sudo systemctl enable etcd containerd kube-apiserver kube-controller-manager kube-scheduler

If you intend to use docker don’t disable the docker service or socket and don’t enable the containerd service.

Configure kubernetes node

Create /etc/kubernetes/kubelet-config.yaml

---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
clusterDomain: cluster.local
clusterDNS:
- 10.254.0.10
kubeletCgroups: "/systemd/system.slice"
cgroupDriver: systemd
failSwapOn: false
authentication:
  x509: {}
  webhook:
    enabled: false
    cacheTTL: 2m0s
  anonymous:
    enabled: true
authorization:
  mode: AlwaysAllow
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s

This yaml configures the kubelet (node); previously this was done through command line arguments. We bind it to 0.0.0.0 so it is reachable. Take special note of the IPs configured here: they should match the CIDR ranges we used in the earlier control plane and certificate configurations. Also note the authentication settings, which are insecure and completely open!

Note: configuration through yaml looks to be the direction kubernetes is heading; it should be possible for the control plane too, but I have not researched that at this time.

Next we need to configure the kubelet to ignore the environment variables loaded from /etc/kubernetes/kubelet and instead pick up the yaml and kubeconfig files. To do this we define a systemd override.

sudo systemctl edit kubelet

[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --config /etc/kubernetes/kubelet-config.yaml --kubeconfig /etc/kubernetes/kubelet.kubeconfig --container-runtime remote --container-runtime-endpoint unix:///var/run/containerd/containerd.sock

This tells the kubelet to use containerd instead of the default docker(-shim). If you'd like to use docker, set --cni-bin-dir /usr/libexec/cni --network-plugin cni instead to ensure you can define a network plugin later, and leave out both container-runtime options.

Last, we also need to tell kube-proxy where to find its settings. First create the configuration:

sudo vi /etc/kubernetes/kube-proxy-config.yaml

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/etc/kubernetes/kubelet.kubeconfig"
mode: "iptables"
clusterCIDR: "10.244.0.0/16"

And make sure kube-proxy knows where to find it:

sudo systemctl edit kube-proxy

[Service]
ExecStart=
ExecStart=/usr/bin/kube-proxy --config /etc/kubernetes/kube-proxy-config.yaml

Now we can enable the kubelet and the proxy so they start the next time we boot up. Same as with the control plane, we'll start everything up later.

sudo systemctl enable kubelet kube-proxy

Configure local client

Now we can quickly configure the kubectl client so we can check if things are working later.

cp /etc/kubernetes/kubelet.kubeconfig ~/.kube/config

Update May 2021: I never specified what the content of the kubeconfig should be, and by now it is outdated. Please see this post for the details of the kubeconfig.
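For reference only, a minimal kubeconfig for this insecure setup might look roughly like the sketch below; the http URL and port 8080 are assumptions based on the insecure bind address configured for the apiserver above, and the cluster/context/user names are arbitrary:

apiVersion: v1
kind: Config
clusters:
- name: homelab
  cluster:
    server: http://{server-hostname}:8080
contexts:
- name: homelab
  context:
    cluster: homelab
    user: default
current-context: homelab
users:
- name: default
  user: {}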

Change firewall

Completely disabling the firewall is an option (if your deployment allows it), but I wanted it enabled and went with the following rules that I found here: https://stackoverflow.com/questions/60708270/how-can-i-use-flannel-without-disabing-firewalld-kubernetes

# Master
firewall-cmd --permanent --add-port=6443/tcp # Kubernetes API server
firewall-cmd --permanent --add-port=8080/tcp # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp # etcd server client API
firewall-cmd --permanent --add-port=10250/tcp # Kubelet API
firewall-cmd --permanent --add-port=10251/tcp # kube-scheduler
firewall-cmd --permanent --add-port=10252/tcp # kube-controller-manager
firewall-cmd --permanent --add-port=8285/udp # Flannel (CNI)
firewall-cmd --permanent --add-port=8472/udp # Flannel (CNI)
firewall-cmd --add-masquerade --permanent
# only if you want NodePorts exposed on control plane IP as well
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --reload
systemctl restart firewalld

Turn on the node

We're almost ready to turn everything on. However, SELinux does not like some of the things kubernetes (and later flannel) tries to do, so we'll add a few policies. Beware: I am no SELinux expert and these settings are purely there to have the cluster work with SELinux enabled; I cannot comment on how secure this configuration is. It most likely is not secure, but hopefully better than fully disabling SELinux.

To make life easier we'll temporarily stop SELinux from enforcing its policies. Later we'll re-enable it and configure it, hopefully mostly in one go, without it interfering with any troubleshooting on the cluster itself. Run sudo setenforce 0; on the next boot it will be enforcing again.
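You can confirm the current mode with getenforce, which should now report Permissive:

getenforce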

Now we can start all services that we may need.

sudo systemctl start etcd

sudo systemctl start containerd (skip this when using docker)

sudo systemctl start kube-scheduler kube-controller-manager kube-apiserver kubelet kube-proxy

At this point we should have a working cluster and all services should be running without restart loops or failures.

We can check the cluster with a simple kubectl version, and we should see something like the following.

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"archive", BuildDate:"2020-07-28T00:00:00Z", GoVersion:"go1.15rc1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"archive", BuildDate:"2020-07-28T00:00:00Z", GoVersion:"go1.15rc1", Compiler:"gc", Platform:"linux/amd64"}
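Beyond the version check, a couple of additional sanity checks can't hurt: the node should eventually show up as Ready, and the control plane components as Healthy (componentstatuses still works on 1.18, though it is deprecated in newer releases):

kubectl get nodes
kubectl get componentstatuses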

If it is not working, see some troubleshooting steps at the end of this article.

(Optional) Configure flannel

The following step is probably only really necessary if you want to run multiple nodes. Even if you have no plans for that now, it gives a really nice insight into how networking works across nodes. Play around with it and read up on https://kubernetes.io/docs/concepts/cluster-administration/networking/ and https://github.com/coreos/flannel#flannel

First we prepare a fallback loopback device for our networking that we'll use later. Create the directory with sudo mkdir -p /etc/cni/net.d/, then create /etc/cni/net.d/99-loopback.conf with the following content:

{
    "cniVersion": "0.3.1",
    "name": "lo",
    "type": "loopback"
}

And enable some kernel settings needed for routing via a file in /etc/sysctl.d/:

net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1

To install flannel, simply add it as a resource in the cluster. Be sure to change the ConfigMap if you did not choose 10.244.0.0/16 as your CIDR. Save the yaml locally, as we need to make some changes.

wget https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml

While it should work with vxlan as the backend, I could not get it stable; after changing to host-gw it worked perfectly. Another thing I had to adjust was the interface it bridges with. By default it grabs the first one it can find, which however was not the one I needed it to use.

So we adjust it by adding - --iface=enp1s0 under args, like below:

      containers:
      - name: kube-flannel
      ...
      args:
      ...
      - --iface=enp1s0

and change vxlan to host-gw, like this:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }

Now run kubectl apply -f kube-flannel.yml and check whether it is running with kubectl get pods -n kube-system.

With SELinux not enforcing anything, it should be in a running state; if not, continue with the actions below. If it still isn't working after the SELinux configuration, be sure to check the logs of the pod and the troubleshooting section at the end.
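To grab the pod logs without looking up the exact pod name, selecting on the label used by the upstream manifest should work (assuming the default app=flannel label in kube-flannel.yml):

kubectl logs -n kube-system -l app=flannel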

Configure SELinux

Now we need to make sure that SELinux doesn't block anything essential after we reboot and enforcing is enabled again.

Running sudo audit2allow -a -M kubernetes will, based on the audit logs, create a policy module allowing the actions that would have been blocked if we had enforcement enabled.

sudo semodule -i kubernetes.pp should be run to install the new policies.

Afterwards the output of sudo audit2allow -a should look something like the example below. It may take a few reboots and reruns of the above two commands before all the things flannel tries to do are captured. Just keep going and rebooting until nothing new is added and it looks similar.

#============= init_t ==============

#!!!! This avc is allowed in the current policy
allow init_t kubernetes_file_t:file { open read };

Now reboot and check if everything is running from services to the flannel pod.

Troubleshooting

ContainerManager issues

I've run into a few issues with k8s not being able to interact with docker, the error below being the most recurring one: Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache

This error does not return many results when googling. For me it usually was an issue with cgroups. By trying to run docker run hello-world I got to the underlying issue. In my case the cgroup change had not taken hold, and I had to redo my grub configuration (see the start of this article).
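Two quick checks that help here: confirm the kernel was actually booted with the cgroup parameter from the GRUB section, and check which cgroup filesystem is mounted:

# the parameter from GRUB_CMDLINE_LINUX should show up here
cat /proc/cmdline | grep cgroup
# tmpfs indicates cgroup v1, cgroup2fs indicates cgroup v2
stat -fc %T /sys/fs/cgroup/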

SELinux issues

Most of my other issues had to do with running flannel; in almost all cases this was SELinux blocking operations that flannel needed. You may need to redo the audit2allow step a few times, with reboots in between, before all permissions are set correctly.

Wrap up

You should have a single machine running your control plane and node. In K8s Nodes The Hard Way we’ll add another node to the cluster.

Have fun with your new homelab cluster!

Enjoyed this? Read more in the Kubernetes series.