Upgrade k8s from 1.15.1 to 1.16.8

Notes on upgrading my k8s cluster, and the errors I ran into along the way.

Upgrade the control plane node

Upgrade kubeadm

sudo apt-mark unhold kubeadm
sudo apt-get install -y kubeadm=1.16.8-00
sudo apt-mark hold kubeadm
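
Before touching the cluster it's worth a quick sanity check that the binary really is 1.16.8 and that the package is pinned again (not part of my captured output, just what I run):

# should print v1.16.8
kubeadm version -o short
# kubeadm should be listed as held again
apt-mark showhold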

Drain the control plane node

kubectl drain k8sn0 --ignore-daemonsets
node/k8sn0 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-2mllk, kube-system/weave-net-t42h6, metallb-system/speaker-2xpm9
evicting pod "coredns-5c98db65d4-d9bbw"
evicting pod "coredns-5c98db65d4-cnjqr"
pod/coredns-5c98db65d4-d9bbw evicted
pod/coredns-5c98db65d4-cnjqr evicted
node/k8sn0 evicted
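
At this point k8sn0 should show up as Ready,SchedulingDisabled. A quick check (output not captured here):

kubectl get nodes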

Plan the upgrade

sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.15.1
[upgrade/versions] kubeadm version: v1.16.8
I0328 10:36:59.032384   16569 version.go:251] remote version is much newer: v1.18.0; falling back to: stable-1.16
[upgrade/versions] Latest stable version: v1.16.8
[upgrade/versions] Latest version in the v1.15 series: v1.15.11

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     4 x v1.15.1   v1.15.11

Upgrade to the latest version in the v1.15 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.15.1   v1.15.11
Controller Manager   v1.15.1   v1.15.11
Scheduler            v1.15.1   v1.15.11
Kube Proxy           v1.15.1   v1.15.11
CoreDNS              1.3.1     1.6.2
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

 kubeadm upgrade apply v1.15.11

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     4 x v1.15.1   v1.16.8

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.15.1   v1.16.8
Controller Manager   v1.15.1   v1.16.8
Scheduler            v1.15.1   v1.16.8
Kube Proxy           v1.15.1   v1.16.8
CoreDNS              1.3.1     1.6.2
Etcd                 3.3.10    3.3.15-0

You can now apply the upgrade by executing the following command:

 kubeadm upgrade apply v1.16.8

_____________________________________________________________________
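
The plan offers both the latest 1.15 patch and 1.16.8; kubeadm only moves one minor version at a time, and 1.15 to 1.16 is exactly one step, so applying v1.16.8 directly is fine. Before applying I like to confirm what the cluster currently reports (again, not from my captured output):

# the server should still report v1.15.1 at this point
kubectl version --short
kubectl get nodes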

Apply the upgrade

sudo kubeadm upgrade apply v1.16.8
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.16.8"
[upgrade/versions] Cluster version: v1.15.1
[upgrade/versions] kubeadm version: v1.16.8
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Error getting Pods with label selector "k8s-app=upgrade-prepull-etcd" [the server was unable to return a response in the time allotted, but may still be processing the request (get pods)]
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.16.8"...
Static pod: kube-apiserver-k8sn0 hash: e40c510be77985f4d3948611be0aca36
Static pod: kube-controller-manager-k8sn0 hash: dd10dc0af95d5a94f7f05e2e39f95ab7
Static pod: kube-scheduler-k8sn0 hash: ecae9d12d3610192347be3d1aa5aa552
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/apply] FATAL: failed to retrieve the current etcd version: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
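
The apply bailed out while probing etcd. I didn't dig very deep, but if you hit this, a reasonable first step before retrying is to check whether the etcd static pod and its container are actually up (these are just the commands I would try, not something I captured; the docker one assumes Docker is your container runtime):

kubectl -n kube-system get pods -l component=etcd -o wide
sudo docker ps --filter name=etcd
sudo journalctl -u kubelet -n 50 --no-pager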

After several minutes, I tried again.

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.16.8"
[upgrade/versions] Cluster version: v1.15.1
[upgrade/versions] kubeadm version: v1.16.8
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Error getting Pods with label selector "k8s-app=upgrade-prepull-etcd" [the server was unable to return a response in the time allotted, but may still be processing the request (get pods)]
[apiclient] Error getting Pods with label selector "k8s-app=upgrade-prepull-kube-scheduler" [the server was unable to return a response in the time allotted, but may still be processing the request (get pods)]
[apiclient] Error getting Pods with label selector "k8s-app=upgrade-prepull-kube-controller-manager" [the server was unable to return a response in the time allotted, but may still be processing the request (get pods)]
[apiclient] Error getting Pods with label selector "k8s-app=upgrade-prepull-kube-apiserver" [the server was unable to return a response in the time allotted, but may still be processing the request (get pods)]
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.16.8"...
Static pod: kube-apiserver-k8sn0 hash: e40c510be77985f4d3948611be0aca36
Static pod: kube-controller-manager-k8sn0 hash: dd10dc0af95d5a94f7f05e2e39f95ab7
Static pod: kube-scheduler-k8sn0 hash: ecae9d12d3610192347be3d1aa5aa552
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8sn0 hash: 3b17e235d10a6c648541a0fb10af876e
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-28-11-12-29/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-k8sn0 hash: 3b17e235d10a6c648541a0fb10af876e
Static pod: etcd-k8sn0 hash: 3b17e235d10a6c648541a0fb10af876e
Static pod: etcd-k8sn0 hash: 3b17e235d10a6c648541a0fb10af876e
Static pod: etcd-k8sn0 hash: ee888eebab672aef43e99687278abe3b
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests220601122"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-28-11-12-29/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8sn0 hash: e40c510be77985f4d3948611be0aca36
(the "Static pod: kube-apiserver-k8sn0 hash: e40c510be77985f4d3948611be0aca36" line repeats many more times while kubeadm waits for the kubelet to restart the apiserver)
Static pod: kube-apiserver-k8sn0 hash: 2986f6007c6e5086b89289db11e45370
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-28-11-12-29/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8sn0 hash: dd10dc0af95d5a94f7f05e2e39f95ab7
Static pod: kube-controller-manager-k8sn0 hash: dd10dc0af95d5a94f7f05e2e39f95ab7
Static pod: kube-controller-manager-k8sn0 hash: 9b0c56b19392eae57378df103efeb349
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-28-11-12-29/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8sn0 hash: ecae9d12d3610192347be3d1aa5aa552
Static pod: kube-scheduler-k8sn0 hash: 10056ca293bb3c323d190c817f5ff526
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.16.8". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
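
Before moving on to the kubelet, it's worth confirming the static pods in kube-system are running the new images (a check I'd do here, output not captured):

# control plane pods should all be Running
kubectl -n kube-system get pods -o wide
# the image tag should now be v1.16.8
kubectl -n kube-system get pod kube-apiserver-k8sn0 -o jsonpath='{.spec.containers[0].image}'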

Upgrade kubectl and kubelet

Unhold kubectl and kubelet

sudo apt-mark unhold kubelet kubectl
sudo apt install -y kubectl=1.16.8-00 kubelet=1.16.8-00
sudo apt-mark hold kubectl kubelet

Restart kubelet

sudo systemctl restart kubelet
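
A quick check that the kubelet came back cleanly (not captured):

sudo systemctl is-active kubelet
sudo journalctl -u kubelet -n 20 --no-pager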

Uncordon the control plane node

kubectl uncordon k8sn0
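
k8sn0 should now be Ready again (no SchedulingDisabled) and report VERSION v1.16.8:

kubectl get nodes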

Upgrade the worker nodes

Upgrade kubeadm

sudo apt-mark unhold kubeadm
sudo apt install -y kubeadm=1.16.8-00
sudo apt-mark hold kubeadm
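
Same sanity check as on the control plane node, run on the worker itself:

# should print v1.16.8
kubeadm version -o short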

Drain the node

kubectl drain k8sn1 --ignore-daemonsets

The output should look something like this (the example below happens to be from node k8sn2):

node/k8sn2 already cordoned
evicting pod "kubia-0"
evicting pod "kubia-2"
evicting pod "coredns-5644d7b6d9-h4q6d"
pod/kubia-0 evicted
pod/kubia-2 evicted
node/k8sn2 evicted

But sometimes I got this:

node/k8sn1 cordoned
error: unable to drain node "k8sn1", aborting command...

There are pending nodes to be drained:
 k8sn1
error: cannot delete Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet (use --force to override): custom-namespace/kubia-manual

I tried cordoning k8sn1 and deleting all the Pods running on it, but that didn't work, so I used --force:

kubectl drain k8sn1 --force --ignore-daemonsets
node/k8sn1 already cordoned
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: custom-namespace/kubia-manual; ignoring DaemonSet-managed Pods: kube-system/kube-proxy-fks7x, kube-system/weave-net-kfp2t, metallb-system/speaker-tf976
evicting pod "coredns-5644d7b6d9-wn9s8"
evicting pod "kubia-manual"
pod/coredns-5644d7b6d9-wn9s8 evicted
pod/kubia-manual evicted
node/k8sn1 evicted

Upgrade

sudo kubeadm upgrade node
Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

Unhold kubectl and kubelet

sudo apt-mark unhold kubelet kubectl

Upgrade kubectl and kubelet

sudo apt update
sudo apt install -y kubectl=1.16.8-00 kubelet=1.16.8-00

Hold kubectl and kubelet

sudo apt-mark hold kubectl kubelet

Restart kubelet

sudo systemctl restart kubelet
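
To confirm the worker picked up the new version, run the first command on the worker and the second from anywhere with kubectl access (output not captured):

kubelet --version
kubectl get node k8sn1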

Make the node schedulable again

kubectl uncordon k8sn1
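
After repeating the same steps on the remaining worker nodes, a final look at the cluster to confirm every node is Ready and on v1.16.8:

kubectl get nodes -o wide
kubectl get pods --all-namespaces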