Deploying Kubernetes Cluster on Ubuntu 20.04

Before beginning with the actual steps, I would suggest that techies trying to deploy a Kubernetes cluster on CentOS migrate to Ubuntu 20.04. With CentOS going EOL, it is recommended to use a Linux distribution with long-term support.

For this lab deployment, I created one Ubuntu VM with 3 GB RAM and 2 CPUs in my Oracle VirtualBox Manager. I am using Bridged networking for two reasons:

  • I want the master and worker nodes to communicate on the same L2 network
  • I want my VMs to access the internet without NAT

Had I chosen the NAT and Host-only/Internal adapters provided by VirtualBox, I would have needed two network adapters inside each VM, one for each requirement.

Step 1 – Disable Swap (optional from 1.22 onwards)

First, disable the swap partition on your Ubuntu machine. This step was mandatory until the 1.21 release: we want Kubernetes to have 100% control over resource allocation, and handing over part of your disk as swap memory contradicts that notion.

However, swap support is available from version 1.22 onwards. Refer to my blog on how to run Kubernetes with swap on.

sudo swapoff -a
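
Keep in mind that swapoff -a only lasts until the next reboot. To make the change persistent, comment out the swap entry in /etc/fstab as well, for example (assuming your fstab uses space-separated fields):

# Comment out any swap entries so swap stays off after reboot
sudo sed -i '/ swap / s/^\(.*\)$/#\1/' /etc/fstab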

Step 2 – Install Docker (Support removed from 1.24 onwards)

Update the apt repositories for Docker and install Docker as the container runtime engine.

From release 1.24 onwards (April 2022), Docker support is removed and we need to use either containerd or CRI-O as the container runtime engine. To migrate your existing Docker runtime engine to CRI-O, refer to my blog on the same.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo   "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io

sudo systemctl start docker

sudo systemctl enable docker
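
One gotcha worth handling up front: kubeadm of this era expects the kubelet and the container runtime to use the same cgroup driver, and the recommended one is systemd, while Docker defaults to cgroupfs. A minimal sketch of the switch, using Docker's standard daemon.json option:

# Tell Docker to use the systemd cgroup driver, then restart it
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker

# Confirm the change took effect
sudo docker info | grep -i cgroup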

Step 3 – Update bridge netfilter to use Linux iptables

Without getting too geeky: the simple role of iptables in Linux is to hold allow/deny rules for the kernel. Front-end Linux firewall services use iptables to allow traffic on the network or block ports.

Container runtime engines use CNI (Container Network Interface) plugins for pod networking. Since container traffic crosses a Linux bridge as part of the overlay network, the kernel's iptables must be allowed to see bridged traffic; otherwise the rules programmed by kube-proxy never apply to it. Kubernetes uses a no-op network plugin by default, which works with a simple configuration of the Docker bridge and iptables, and the settings below make that possible.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

# Load the module now; the file above only takes effect on reboot
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system
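
You can quickly confirm that the module is loaded and the sysctls took effect:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables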

Step 4 – Install Kubeadm, Kubelet and Kubectl

kubeadm is an admin utility which can help you initialize, upgrade, or edit a Kubernetes cluster.

kubelet acts as the interface to the container runtime engine, triggering the creation of the pods behind your services, deployment sets, etc.

kubectl is the command-line utility to manage your cluster.

sudo apt-get update

sudo apt-get install -y apt-transport-https ca-certificates curl

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update

sudo apt-get install -y kubelet kubeadm kubectl

sudo apt-mark hold kubelet kubeadm kubectl
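
At this point it is worth confirming that the tools are installed and pinned:

kubeadm version
kubectl version --client
kubelet --version
apt-mark showhold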

Before moving to the next step, remember that so far your Ubuntu VM is not designated as a Master or Worker node. You may clone your existing VM and change its IP and hostname to create multiple Worker nodes.

Whichever node you run Step 5 on will act as the Master node, while Step 6 will create the Worker nodes.

Step 5 – Initialize cluster using Kubeadm

Now, this step is critical, as you may provide many additional parameters while initializing your cluster, including the mandatory Pod overlay network. This network range is important; you must make sure that you use a valid subnet.

For me the choice was simple: I used my bridged network range, from which I assign IPs to my host machines. But you may use any overlay network range as long as it is reachable from your master and worker nodes.

This step will automatically generate the static pod manifests used by kube-system, self-signed certificates for communication, and a token for joining worker nodes.

sudo kubeadm init --pod-network-cidr xx.xx.xx.xx/xx
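
As a concrete illustration, the range below is just an example (it happens to be the conventional default for the Flannel CNI plugin); substitute your own subnet:

sudo kubeadm init --pod-network-cidr 10.244.0.0/16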

The above command will take a few minutes and will print a few follow-up steps at the end, to run on your Master and Worker nodes.

On the Master node, run:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
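
With the kubeconfig in place, you can check on the control plane. Note that the node may report NotReady until a pod network add-on (your CNI plugin of choice, e.g. Calico or Flannel) has been applied:

kubectl get nodes
kubectl get pods -n kube-system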

Step 6 – Join Worker Nodes to Cluster

At the end of the kubeadm init output, a token is generated which allows any node we want to act as a worker to join the cluster's master node. If you missed it, there is no need to worry, as you can regenerate it easily (see the command below).
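To regenerate it, run this on the Master node; it prints a ready-to-use join command:

sudo kubeadm token create --print-join-command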

Run the command below on each of your worker nodes, filling in the values from your kubeadm init output.

sudo kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
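
Back on the Master node, confirm that the workers have joined:

kubectl get nodes -o wide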
