Contrary to the common notion of "if it ain't broke, don't fix it", I decided to upgrade my lab Kubernetes cluster from v1.23 to v1.24, just for fun. Since it was a lab, it was easy for me to decide and simply go ahead with it, but for production, of course, you need to decide if … Continue reading Kubernetes Upgrade v1.23 to v1.24 – Common Errors and Solutions
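For context, here is a minimal sketch of the kubeadm-based upgrade flow on a control-plane node (assuming apt-managed packages; the exact patch versions below are illustrative, not taken from the post):

apt-get update && apt-get install -y kubeadm=1.24.0-00   # upgrade the kubeadm binary first
kubeadm upgrade plan                                      # review which versions the cluster can move to
kubeadm upgrade apply v1.24.0                             # upgrade the control-plane components
apt-get install -y kubelet=1.24.0-00 kubectl=1.24.0-00    # then upgrade kubelet and kubectl
systemctl daemon-reload && systemctl restart kubelet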
Docker Desktop for Linux – Ubuntu 22.04 + Docker Desktop 4.8.0
DockerCon 2022's biggest highlight was the launch of Docker Desktop for Linux. For Windows and Mac users, this setup was already available for obvious reasons. But for Ubuntu desktop users like me, the only way to work was to use the Docker CLI to call the backend containerd runtime engine. Hence, … Continue reading Docker Desktop for Linux – Ubuntu 22.04 + Docker Desktop 4.8.0
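As a rough illustration, installing the Linux release on Ubuntu 22.04 boils down to grabbing the .deb package and starting the per-user service (the package file name below is an assumption, not quoted from the post):

sudo apt-get update
sudo apt-get install -y ./docker-desktop-4.8.0-amd64.deb   # .deb downloaded from the Docker Desktop release page
systemctl --user start docker-desktop                      # Docker Desktop runs as a per-user systemd service on Linux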
Docker Image Scan with SYFT
As we move toward the Zero-Trust model for information security, security experts are raising concerns over the application-level security of workloads deployed on containers. Recently, the vulnerability CVE-2022-0811, identified in the CRI-O container runtime engine, turned these concerns into reality. For those who are not aware, as per CrowdStrike, when invoked, an attacker could escape from … Continue reading Docker Image Scan with SYFT
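For a flavour of what such a scan looks like, a couple of illustrative Syft invocations (the image name is only an example, not one from the post):

syft nginx:latest                       # list the packages Syft finds inside the image
syft nginx:latest -o json > sbom.json   # write a full SBOM in JSON for later vulnerability matching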
Minikube Error – apiserver certificate: failed to load certificate: the certificate has expired
I recently faced an error in my old single-host minikube deployment on CentOS. While starting my minikube cluster, I was repeatedly getting the error below:

[certs] Using existing ca certificate authority
stderr:
W0419 03:27:58.522598 568 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup … Continue reading Minikube Error – apiserver certificate: failed to load certificate: the certificate has expired
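One common recovery path for expired minikube certificates, sketched below, is simply to recreate the cluster; note that this is destructive and drops any workloads running in it:

minikube stop
minikube delete    # removes the old profile along with its expired certificates
minikube start     # fresh cluster, freshly generated certificates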
Kubelet Service: “Error getting node” err=”node \”workernode\” not found”
I faced this error while rebuilding my cluster on a swap-enabled node. Basically, this error comes up if you have swap memory enabled and the kubelet service is not able to find an eligible node for running services. After resetting the Kubernetes cluster on the master node, I initialized my cluster with the extra argument "--fail-swap-on=false".

root@ubmaster:/home/ubuntu# echo … Continue reading Kubelet Service: “Error getting node” err=”node \”workernode\” not found”
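A rough sketch of that swap-tolerant setup, using the standard kubeadm/kubelet locations (the file path below is the usual default, not quoted from the post):

echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' | sudo tee /etc/default/kubelet   # pass the flag to kubelet on every start
sudo systemctl daemon-reload && sudo systemctl restart kubelet
sudo kubeadm init --ignore-preflight-errors=Swap                                   # skip the swap preflight check during init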
Kubernetes Cluster using ‘swap memory’ on Host Node
Before Kubernetes release 1.22, swap space had to be disabled on the host node in order to provide the full disk resource to the cluster and the pods running on it. With the new release, swap space can remain enabled on the host node. For new deployments, this seems pretty straightforward, but what if you have a Kubernetes cluster already deployed and running without swap … Continue reading Kubernetes Cluster using ‘swap memory’ on Host Node
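For reference, the NodeSwap alpha feature introduced in v1.22 is driven by a few KubeletConfiguration fields; the fragment below is a sketch of those fields (the temporary file name and the idea of merging it into the node's kubelet config are assumptions, not quoted from the post):

sudo tee /tmp/kubelet-swap-fragment.yaml <<'EOF'
# merge these fields into the node's kubelet configuration (e.g. /var/lib/kubelet/config.yaml)
failSwapOn: false
featureGates:
  NodeSwap: true
memorySwap:
  swapBehavior: UnlimitedSwap
EOF
sudo systemctl restart kubelet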
Deploying Kubernetes Cluster on Ubuntu 20.04
Before beginning with the actual steps, I would suggest that techies trying to deploy a Kubernetes cluster on CentOS migrate to Ubuntu 20.04. With CentOS going EOL, it's recommended to use a Linux distribution with long-term support. For this lab deployment, I created 1 Ubuntu VM with 3 GB RAM and 2 CPUs on my Oracle VirtualBox manager. … Continue reading Deploying Kubernetes Cluster on Ubuntu 20.04
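A condensed sketch of the usual kubeadm bootstrap on the first node (the pod CIDR and the Flannel manifest URL are illustrative choices, not necessarily the ones used in the post):

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml   # install a CNI plugin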
“Failed to run kubelet” err=”failed to run Kubelet: misconfiguration: kubelet cgroup driver: \”systemd\” is different from docker cgroup driver
With support for different types of container runtimes in the current Kubernetes release, one important aspect to consider is using the same cgroup manager for both the kubelet and the container runtime engine, so that resource allocation to pods can be managed centrally. While the kubelet service itself integrates with the Linux default "systemd" as its cgroup manager, runtime engines like … Continue reading “Failed to run kubelet” err=”failed to run Kubelet: misconfiguration: kubelet cgroup driver: \”systemd\” is different from docker cgroup driver
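A minimal sketch of aligning Docker's cgroup driver with the kubelet's (the "exec-opts" key is Docker's documented daemon.json setting; overwriting the file wholesale is an assumption for brevity):

cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker && sudo systemctl restart kubelet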
K8 v1.23.1 – The connection to the server localhost:8080 was refused – did you specify the right host or port?
The above is one of the most common errors we get whenever we install a new K8s cluster or start working on one deployed by other users. First things first: this error is not version specific, and it also doesn't imply something is wrong with your configuration. (So please don't send that mail you wrote for your … Continue reading K8 v1.23.1 – The connection to the server localhost:8080 was refused – did you specify the right host or port?
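In most cases the fix comes down to pointing kubectl at a valid kubeconfig; a typical sketch on a kubeadm-built cluster:

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# or, for a one-off root session:
export KUBECONFIG=/etc/kubernetes/admin.conf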