
K8s Upgrade Master 1.20

Monday May 3, 2021 | Series Kubernetes

So it turned out that upgrading my master from Fedora 33 to Fedora 34 (and thus from Kubernetes 1.18 to 1.20) broke it. Let’s have a look at how to get it running again!

I’ll add the same disclaimer as on my other Kubernetes posts. This is not a production deployment; it is completely insecure and not resilient. It has, however, served me well as a homelab for running various tools.

As I added in updates to my previous posts, upgrading all the nodes turned out to be no issue. Simply running dnf system-upgrade did the trick, and afterwards they were running Kubernetes 1.20.5 and containerd 1.5.0 while integrating fine with the rest of the cluster, with easy and safe failover of the pods to the other nodes when necessary.
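
For reference, the upgrade itself is just the standard Fedora system-upgrade sequence (shown here with the Fedora 34 release number; the plugin may already be installed):

sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=34
sudo dnf system-upgrade reboot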

I sort of expected the same for the master node. However, after the upgrade the apiserver never came back up and my entire cluster stopped functioning. The latter was to be expected: with a single master, no control plane means no working cluster. Although there were a few errors in the logs about new settings that had become mandatory, the main issue was not clear at first.
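
If you run into the same thing, the apiserver’s journal is where those errors show up. Assuming the Fedora-packaged systemd units (which is what this setup uses), something like this does the job:

systemctl status kube-apiserver
journalctl -u kube-apiserver -b --no-pager | tail -n 50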

Apparently there had been a (for me) breaking change going from 1.18 to 1.20: something I was relying on, the insecure port (8080), had been deprecated in 1.19. All connections now have to go through HTTPS, by default on port 6443.
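
A quick way to see the difference (using the same {server-hostname} placeholder as below, and -k because of the self-signed certificate): the first request now fails to connect, while the second returns ok once the apiserver is running again.

curl http://{server-hostname}:8080/healthz
curl -k https://{server-hostname}:6443/healthz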

So let’s have a look at the changes that needed to be made.

API server config

The changed version of /etc/kubernetes/apiserver is below:

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
# KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--tls-cert-file=/etc/kubernetes/pki/{server-hostname}.crt --tls-private-key-file=/etc/kubernetes/pki/{server-hostname}.key --allow-privileged=true --service-account-signing-key-file=/etc/kubernetes/pki/{server-hostname}.key --service-account-issuer=https://{server-hostname}:6443 --service-account-key-file=/etc/kubernetes/pki/{server-hostname}.key --authorization-mode=ABAC --authorization-policy-file=/etc/kubernetes/abac.policy --anonymous-auth=true"

So compared to the ‘K8s Nodes The Hard Way’ version, I commented out the API port and the insecure-bind-address, removed the client-ca-file as it wasn’t used, and added a few arguments to KUBE_API_ARGS.

The following arguments are now mandatory, but to be honest I still need to research how they’re used and what they do exactly. For now I just wanted it to work, and this is what did the trick: --service-account-signing-key-file=/etc/kubernetes/pki/{server-hostname}.key --service-account-issuer=https://{server-hostname}:6443 --service-account-key-file=/etc/kubernetes/pki/{server-hostname}.key. It reuses the TLS certificate’s key, so that is probably strike #100 in the security handbook.
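
If you want to avoid reusing the TLS key for this (I haven’t bothered yet), a dedicated service-account key pair should also do; the sa.key/sa.pub names here are just something I picked:

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

Then --service-account-signing-key-file would point at sa.key and --service-account-key-file at sa.pub.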

Authentication and authorization

Now that the insecure port was no longer supported, anonymous access became a little more difficult. While not best practice, I’m at this moment not really interested in protecting my cluster with logins, proper certificates and valid TLS verification on my kubelets. So, to open up access similar to what I had with the insecure port and anonymous access, I added --authorization-mode=ABAC --authorization-policy-file=/etc/kubernetes/abac.policy --anonymous-auth=true. Unfortunately anonymous auth is not supported when configuring authorization-mode=AlwaysAllow, so I had to find something else, and ABAC gave me the easiest way to configure unauthenticated access without restrictions. RBAC would probably be preferable, but I’ll dig into that at a later date.

Now for the policy, below is what I ended up with: basically allowing anyone to do anything. Highly insecure, but really convenient. For details, see the ABAC docs linked above.

/etc/kubernetes/abac.policy:

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:authenticated", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:unauthenticated", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}

Kubelet configuration

We’re almost back to having a working cluster again. The only thing left is to configure all the nodes, including the master, with the right settings for connecting to the API server. For that, see the following /etc/kubernetes/kubelet.kubeconfig:

apiVersion: v1
kind: Config
clusters:
  - cluster:
      server: https://{server-hostname}:6443/
      insecure-skip-tls-verify: true
    name: local
contexts:
  - context:
      cluster: local
      user: local
    name: local
current-context: local
users:
  - name: local
    user:
      username: a
      password: a

We configure it to connect over HTTPS and to skip TLS verification, as my certificate is self-signed. A better option would be to copy over the certificate and tell the kubeconfig to verify against that, though. And now that the connection is ‘secure’ (hah), Kubernetes requires us to send a username and password. As you can see they can be anything, as they aren’t actually used or checked; besides, we gave all authenticated and unauthenticated users access to everything in the ABAC policy. Unfortunately there is no way to instruct kubectl to just not send any username and password.
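
For completeness, the ‘verify against the copied certificate’ variant is only a small change to the cluster entry. Since the certificate is self-signed it can double as its own CA, assuming the .crt has been copied to each node and actually lists {server-hostname} in its subject/SANs:

clusters:
  - cluster:
      server: https://{server-hostname}:6443/
      certificate-authority: /etc/kubernetes/pki/{server-hostname}.crt
    name: local

After changing the kubeconfig, a systemctl restart kubelet on each node picks up the new settings.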

So that is it. The cluster should be working again, and it is just as insecure and prone to downtime as it was in the last configuration. It does, however, give us a few pointers on how to secure it a bit better, so maybe I’ll write something about that next time I tinker with the setup.

Enjoyed this? Read more in the Kubernetes series.