Highly Available Kubernetes Cluster Setup with Remote Access

Saumik Satapathy
6 min read · Sep 25, 2020

Kubernetes brings reliability and stability to our infrastructure and to how our applications are deployed on it. But what happens if the master itself goes down or crashes? The applications deployed on the worker nodes keep running as before, but we lose control over them and we can no longer deploy new applications. To overcome this problem we can build a highly available cluster and remove the single point of failure: the master components are replicated on additional master nodes, so a single master failure no longer takes down the control plane. To get this solution we'll run two masters in two different Availability Zones behind a load balancer. So our setup will look like this (sketched after the list):

  1. Two Kubernetes masters
  2. One load balancer
  3. Three worker/slave nodes
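
Roughly, the target topology looks like this (a simple sketch; the load balancer fronts the API servers on port 6443):

          kubectl / API clients
                   |
            [ HAProxy LB ] :6443
               /         \
       [ Master-1 ]   [ Master-2 ]    (different Availability Zones)
               \         /
     [ node-1 ]  [ node-2 ]  [ node-3 ]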

First, we will launch two EC2 instances of Ubuntu 18.04 LTS. We will choose t2.medium as the instance type.

In the Configure Instance Details step, choose the subnet in Availability Zone 1a.

Add the Name tag as Master-1.

On the Security Group page, create a new security group and open ports 22 (SSH), 6443, 10250, 10251, 10252, 2379–2380, and 30000–32767.
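
If you prefer the AWS CLI, a rough equivalent of those console steps is sketched below; the group name is illustrative, and the VPC ID, security group ID, and CIDR ranges are placeholders you need to fill in:

$ aws ec2 create-security-group --group-name k8s-ha-sg --description "Kubernetes HA cluster" --vpc-id <your VPC ID>
$ aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 22 --cidr <your admin CIDR>
$ aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 6443 --cidr 0.0.0.0/0
$ aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 10250-10252 --cidr <VPC CIDR>
$ aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 2379-2380 --cidr <VPC CIDR>
$ aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 30000-32767 --cidr 0.0.0.0/0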

Review and Launch the Instance.

Launch a similar instance in a different Availability Zone by selecting the instance and going to Actions → Launch more like this. Go back by clicking Previous, then change the Subnet and the Name tag.

Now follow the above steps and launch four more similar instances (one for the load balancer, three for worker nodes). Tag the instance names accordingly.

After the instance status check shows 2/2 checks passed, log in to the instances one by one, set each instance's hostname, and update /etc/hosts with the private IPs and corresponding hostnames of all the instances, e.g.

$ sudo hostnamectl set-hostname master-1 (change accordingly)

$ cat /etc/hosts
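
A minimal sketch of how /etc/hosts could end up looking, assuming purely illustrative private IPs and the hostnames used later in this article (edit the file with sudo, e.g. sudo vim /etc/hosts, and replace the IPs with your instances' real private IPs):

127.0.0.1    localhost
10.0.1.10    master-1
10.0.2.10    master-2
10.0.1.20    loadbalancer
10.0.1.30    node-1
10.0.1.31    node-2
10.0.2.30    node-3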

Install Docker on all the nodes except the load balancer node.

$ sudo apt-get remove docker docker-engine docker.io containerd runc
$ sudo apt-get update -y
$ sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common -y
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get update -y
$ sudo apt-get install docker-ce docker-ce-cli containerd.io -y
$ sudo systemctl start docker
$ sudo systemctl enable docker
$ docker --version
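
Optionally, the upstream kubeadm documentation recommends running Docker with the systemd cgroup driver so the kubelet and the container runtime agree on cgroup management; a minimal sketch of /etc/docker/daemon.json (restart Docker after adding it):

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}

$ sudo systemctl restart docker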

Log in to the load balancer node and install HAProxy.

$ sudo apt-get update
$ sudo apt-get install haproxy -y

Change the HAProxy configuration so it load balances the API traffic across both master nodes.

$ sudo vim /etc/haproxy/haproxy.cfg

Add the following lines at the end.

###########################################################
frontend kubernetes
    bind <Loadbalancer Private IP>:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server Master-1 <Master-1 private IP>:6443 check fall 3 rise 2
    server Master-2 <Master-2 private IP>:6443 check fall 3 rise 2
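
Optionally, validate the configuration syntax before restarting:

$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg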

Restart the haproxy service.

$ sudo systemctl restart haproxy

Check whether HAProxy is reachable from outside.

$ nc -vz <Loadbalancer Public IP> 6443

If the above command reports that the connection succeeded, the load balancer is working fine.

Install ‘kubectl’, ‘kubeadm’ and ‘kubelet’ on every node except the load balancer.

$ sudo swapoff -a
$ sudo apt-get update && sudo apt-get install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
> deb https://apt.kubernetes.io/ kubernetes-xenial main
> EOF
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
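
Optionally, hold the packages so an automatic upgrade doesn't move the Kubernetes components unexpectedly, and make the swap change survive reboots (the sed one-liner simply comments out the swap entry in /etc/fstab):

$ sudo apt-mark hold kubelet kubeadm kubectl
$ sudo sed -i '/ swap / s/^/#/' /etc/fstab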

Log in to the ‘Master-1’ node and pre-pull the control plane images with kubeadm.

$ sudo kubeadm config images pull

Initialize kubeadm, passing the load balancer private IP as the control plane endpoint.

$ sudo kubeadm init --control-plane-endpoint <Loadbalancer Private IP>:6443 --upload-certs --pod-network-cidr=192.168.0.0/16

Copy and save the join commands printed in the output (one for control-plane nodes, one for workers) somewhere for further reference.

Log in to the second master node (Master-2) and run the control-plane join command as the root user.
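
The control-plane join command from the kubeadm init output looks roughly like this; the token, discovery hash, and certificate key below are placeholders, so use the exact values printed by your own init run:

$ sudo kubeadm join <Loadbalancer Private IP>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>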

Create the kubeconfig directory and change its ownership.

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ export KUBECONFIG=$HOME/.kube/config

Log in to the worker nodes and paste the worker-node join command to join them to the cluster.
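
The worker join command is similar but without the control-plane flags; again, the token and hash are placeholders from your own kubeadm init output:

$ sudo kubeadm join <Loadbalancer Private IP>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>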

Once all the master and worker nodes have joined, run the ‘Calico’ network plugin command on the Master-1 node.

$ kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml

Check if all the master and worker nodes are in the ready state.

$ kubectl get nodes
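
The output should look roughly like the sketch below; hostnames, ages, and versions will differ in your cluster, and the worker nodes show <none> as their role until we label them in the next step:

NAME       STATUS   ROLES    AGE   VERSION
master-1   Ready    master   12m   v1.19.2
master-2   Ready    master   9m    v1.19.2
node-1     Ready    <none>   6m    v1.19.2
node-2     Ready    <none>   6m    v1.19.2
node-3     Ready    <none>   5m    v1.19.2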

Label all the worker nodes so they report the worker role.

$ kubectl label node node-1 node-role.kubernetes.io/worker=worker
$ kubectl label node node-2 node-role.kubernetes.io/worker=worker
$ kubectl label node node-3 node-role.kubernetes.io/worker=worker

Log in to the ‘Master-1’ node and do the following to set up remote access.

$ mkdir kubernetes
$ cd kubernetes

Create a YAML file defining a ServiceAccount and a ClusterRoleBinding for it.

$ vim cluster-user.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user-bind
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: default

Run the kubectl apply command to create the user.

$ kubectl apply -f cluster-user.yaml

Get the details of the secret token.

$ kubectl get serviceaccounts --namespace default
$ kubectl get secret --namespace default
$ kubectl describe secret admin-user-token-2m629 --namespace default
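
Copy the ‘token:’ value from the describe output. Alternatively, assuming the secret name returned by the previous command, you can extract and decode the token in one line:

$ kubectl get secret admin-user-token-2m629 --namespace default -o jsonpath='{.data.token}' | base64 --decode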

On the remote laptop, install ‘kubectl’ and run the following commands.

> kubectl config set-credentials user/saumik --token=<paste the token copied earlier>
> kubectl config set-cluster saumik --insecure-skip-tls-verify=true --server=https://<loadbalancer public IP>:6443
> kubectl config set-context default/saumik/user --user=user/saumik --namespace=default --cluster=saumik
> kubectl config use-context default/saumik/user

If everything is configured properly, run the ‘kubectl get nodes’ command from the remote system and it will list all the nodes of the cluster.

N.B.: We can create a master node in each Availability Zone for even more HA. With stacked etcd, an odd number of masters (e.g. three) is recommended so etcd keeps quorum when one master fails.


Saumik Satapathy

A passionate Software Engineer with good hands-on experience in the field of DevOps/SRE. Loves to share knowledge and is interested in learning from others.