I recently hit an error in my old single-host minikube deployment on CentOS. While starting the minikube cluster, I repeatedly got the error below:
*
X Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[certs] Using existing ca certificate authority
stderr:
W0419 03:27:58.522598 568 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/3.10.0-1160.53.1.el7.x86_64\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase certs/apiserver: failed to write or validate certificate "apiserver": failure loading apiserver certificate: failed to load certificate: the certificate has expired
To see the stack trace of this error execute with --v=5 or higher
Solution
While the first few warnings are self-explanatory, the error about the expired apiserver certificate is the actual cause of the failure. I wanted to troubleshoot the error further, but decided the quickest fix was to rebuild my minikube cluster.
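Before rebuilding, you can confirm that a certificate really has expired with openssl. The sketch below generates a throwaway self-signed certificate to demonstrate the two relevant checks; the file names and validity period are made up for illustration. Inside a real minikube node (via minikube ssh), you would point the same commands at the kubeadm certificates, which in this setup live under /var/lib/minikube/certs.

```shell
# Work in a throwaway directory; nothing here touches the real cluster.
tmpdir=$(mktemp -d)

# Generate a demo self-signed certificate valid for only 1 day
# (stand-in for a kubeadm cert such as apiserver.crt).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$tmpdir/demo.key" -out "$tmpdir/demo.crt" \
  -days 1 -subj "/CN=expiry-demo" 2>/dev/null

# Print the expiry (notAfter) date of the certificate.
openssl x509 -noout -enddate -in "$tmpdir/demo.crt"

# -checkend N exits non-zero if the cert expires within N seconds;
# 172800 s = 2 days, so this 1-day cert is reported as expiring.
if ! openssl x509 -checkend 172800 -noout -in "$tmpdir/demo.crt"; then
  echo "certificate expires within 2 days"
fi
```

An enddate in the past (or a failing -checkend 0) confirms the "certificate has expired" message before you commit to a rebuild.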
[centos@centos7 ~]$ /usr/local/bin/minikube delete
* Deleting "minikube" in docker ...
* Deleting container "minikube" ...
* Removing /home/centos/.minikube/machines/minikube ...
* Removed all traces of the "minikube" cluster.
Once the local cluster was removed, I restarted minikube.
[centos@centos7 ~]$ /usr/local/bin/minikube start
* minikube v1.14.0 on Centos 7.9.2009
* Automatically selected the docker driver
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2200MB) ...
* Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
* Verifying Kubernetes components...
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube" by default
[centos@centos7 ~]$ kubectl get pods
No resources found in default namespace.
I chose this approach because I knew my host machine was working as expected: there was no problem with my Docker service and no issues with the cgroup drivers, and recreating the cluster confirmed that. I know it's not the best solution, but you may use it if everything else fails.