Platformer Blog

Running a Kubernetes Cluster on Ubuntu with Calico

Source: https://ubuntu.com/blog/ubuntu-kubernetes

In a previous article, I wrote about how to set up a simple Kubernetes cluster on Ubuntu and CentOS. Today I will discuss how to run a production-grade cluster on Ubuntu with Calico as the CNI plug-in.

Most of the commands below will be similar to setting up a simple K8s cluster, with a few exceptions.

Node Prerequisites

  • The master node requires a minimum of 2 GB of memory; each worker node needs at least 1 GB.
  • The master node needs at least 1.5 CPU cores; each worker node needs at least 0.7.
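A quick way to confirm a node meets these minimums is to read the memory and CPU figures from the OS itself. A minimal sketch for Linux hosts:

```shell
# Read total memory (in MB) and CPU core count, to compare against the
# minimums above (master: 2 GB / 1.5 cores; worker: 1 GB / 0.7 cores).
mem_mb=$(awk '/MemTotal/ {printf "%d", $2/1024}' /proc/meminfo)
cpus=$(nproc)
echo "memory_mb=${mem_mb} cpus=${cpus}"
```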

Here we will set up three Ubuntu 18.04.2 LTS (Bionic Beaver) servers:

1. Server 1 - kube-master (Master Node)

2. Server 2 - kube-worker-node-1 (Worker Node)

3. Server 3 - kube-worker-node-2 (Worker Node)

Note: You can use any number of worker nodes with a single master node.

Now let’s install the following software packages on all three servers:

Kubernetes Installation

  • Install Docker
sudo apt-get update
sudo apt install docker.io
  • Check the Docker version
docker --version
  • Enable Docker
sudo systemctl enable docker
  • Add the kubernetes gpg key
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  • If curl is not installed on your system, you can install it with the following command
sudo apt install curl
  • Add the Xenial Kubernetes repository
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
  • Install kubeadm
sudo apt-get update
sudo apt install kubeadm
  • You can check the version of kubeadm, and thereby verify the installation, with the following command
kubeadm version

Kubernetes Deployment

  • It is necessary to disable swap memory on all nodes, as by default the kubelet will refuse to run on a system with swap enabled
sudo swapoff -a
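Note that `swapoff -a` only disables swap until the next reboot. To make the change permanent, comment out any swap entries in /etc/fstab. A minimal sketch, demonstrated on a sample file rather than the real /etc/fstab:

```shell
# On a real node you would run the sed command against /etc/fstab itself:
#   sudo sed -i '/ swap / s/^/#/' /etc/fstab
# Demonstrated here on a sample file:
printf 'UUID=1234-abcd / ext4 defaults 0 1\n/swapfile none swap sw 0 0\n' > /tmp/fstab.sample
sed -i '/ swap / s/^/#/' /tmp/fstab.sample
cat /tmp/fstab.sample   # the swap line is now commented out
```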

Now let’s initialize the cluster on the master (apply the commands below only on the master):

  • Initialize the cluster, passing a flag that is later needed by the networking plugin (CNI).
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Note: If 10.244.0.0/16 is already in use within your network you must select a different pod network CIDR, replacing 10.244.0.0/16 in the above command as well as in any manifests applied below.
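One quick way to spot a clash is to compare the pod CIDR against the routes already present on the host. This is only a rough heuristic (it matches on the address prefix rather than doing a full CIDR overlap calculation):

```shell
# Warn if any existing route on this host already sits inside 10.244.0.0/16.
# Only a rough prefix check, not a full CIDR computation.
if ip route 2>/dev/null | grep -q '10\.244\.'; then
  echo "pod CIDR check: possible overlap with an existing route"
else
  echo "pod CIDR check: no obvious overlap"
fi
```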

  • Alternatively, you can initialize the cluster by passing a kubeadm-config.yaml file with specific values, as given below
https://gist.githubusercontent.com/nilesh93/c743205d34fedb5f48ae4d37d959ba4b/raw/c21b63d0e30449b0382e7508c2db4726a7675bab/kubeadm-config.yaml

Here you can change the kubernetesVersion, the external IP addresses and domain names you will use to connect to the Kubernetes API server, and the Pod and Service CIDRs, based on your networking requirements.
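For reference, a minimal kubeadm-config.yaml might look like the following. This is only a sketch: the version, endpoint, and SAN values are placeholders, and the linked gist may use different values or a different kubeadm API version:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.0                             # placeholder; set your target version
controlPlaneEndpoint: "kube-master.example.com:6443"   # hypothetical domain name
apiServer:
  certSANs:
    - "203.0.113.10"                 # example external IP for the API server
networking:
  podSubnet: "10.244.0.0/16"         # pod CIDR (matches --pod-network-cidr above)
  serviceSubnet: "10.96.0.0/12"      # the default service CIDR
```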

  • You can then initialize the master with the following commands.
curl https://gist.githubusercontent.com/nilesh93/c743205d34fedb5f48ae4d37d959ba4b/raw/c21b63d0e30449b0382e7508c2db4726a7675bab/kubeadm-config.yaml -o kubeadm-config.yaml
# update the file as necessary, then run:
sudo kubeadm init --config kubeadm-config.yaml

This will output a join command. Save this command somewhere, as it will later be used to connect the worker nodes to the master.

  • Configure kubectl, which lets you interact with the cluster from the master (the command for this step is also given in the output of the previous command).
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Type the command below to check whether kubectl is working (the status will be ‘NotReady’, as we haven’t set up networking yet).
kubectl get nodes

Now let’s deploy a Pod Network through the Master Node.

Here we will use Calico as the network of choice.

Note: If you are running this on GCP’s Compute Engine, it blocks traffic between hosts by default. Run the following command to allow Calico (IP-in-IP, protocol 4) traffic to flow between containers on different hosts. The --source-ranges parameter assumes you created your project with the default GCE network parameters; modify the address range if yours is different:

gcloud compute firewall-rules create calico-ipip --allow 4 --network "default" --source-ranges "10.128.0.0/9"

You can verify the rule with this command:

gcloud compute firewall-rules list
  • Install the Calico networking plugin in the cluster (this will handle networking between pods on different nodes). This is done by applying a YAML manifest that describes the objects to be created in the cluster.
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

You should see output listing each Calico resource as it is created.

Wait until each pod has the STATUS of Running.

  • Confirm that all of the pods are running with the following command.
watch kubectl get pods --all-namespaces
  • Press CTRL+C to exit watch.
  • Now check again if kubectl is working (the status should be ‘Ready’ now).
kubectl get nodes -o wide

Now let’s add the Worker nodes to the Network in order to form a Cluster.

  • The kubeadm init command that you ran previously on the master should output a kubeadm join command containing a token and hash. If you have copied that command from the master and saved it somewhere, run it on both worker nodes with sudo to connect them to the master.
sudo kubeadm join 192.168.100.6:6443 --token 06tl4c.oqn35jzecidg0r0m --discovery-token-ca-cert-hash sha256:c40f5fa0aba6ba311efcdb0e8cb637ae0eb8ce27b7a03d47be6d966142f2204c
  • Now check if the 2 worker nodes are connected (on master).
kubectl get nodes
  • Make sure all 3 of your nodes are listed with the above command and have a status of ‘Ready’.
  • Use the following command to check on your pods (there won’t be any listed yet, since we haven’t created any in the default namespace).
kubectl get pods
  • With the following command you can see the back-end system pods that are fully up and running.
kubectl get pods --all-namespaces
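If you later need to join another worker and no longer have the original join command (join tokens also expire after 24 hours by default), a new one can be printed on the master. Shown here as a reference snippet, since the real command needs kubeadm on a running master:

```shell
# Run this on the master node to print a fresh join command
# (the echo below just displays the command you would run):
echo "sudo kubeadm token create --print-join-command"
```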

At this point we have a Kubernetes cluster fully up and running on Ubuntu!

Kubernetes
Dimuthu De Silva • 1/12/2021