Kubernetes 3-Node (1 Master, 2 Worker) Setup in AWS

Saumik Satapathy
6 min read · Jun 26, 2020

In today's technology world, containers play a vital role in every aspect, be it launching an environment or deploying code across all environments irrespective of the backend OS. Nowadays a complete deployment is done within a few clicks, something that took a lot of time to set up in earlier days. In this document, I will set up a three-node Kubernetes cluster that is identical to the CKA environment. If anyone is interested in writing the CKA exam, this is a good starting point to begin the journey.

To set up the environment I chose AWS Cloud, which is user friendly and easy to set up.

1. First, log in to the AWS web console at https://aws.amazon.com. If you don't have an AWS account, I suggest creating a Free Tier account to follow along with the steps.

2. In 'Services', click on 'EC2'.

3. Click on “Launch Instance”.

4. Search for “Ubuntu Server 18.04 LTS (HVM), SSD Volume”.

5. Click on “Select”.

6. In the "Instance Type" selection window, choose "t2.medium" as the instance type.

Many people ask me why to choose "t2.medium" and not "t2.micro", since "t2.micro" is Free Tier eligible but "t2.medium" is not. To answer this question I suggest referring to the Kubernetes documentation,
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin.
It mentions that at least 2 CPU cores are required for the installation, and "t2.micro" has only one core. If you choose "t2.micro" as the instance type, the installation will fail with the error "the number of available CPUs 1 is less than the required 2".
To overcome this problem, the smallest instance type we can choose is "t2.medium".
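If you want to double-check the core count on a running instance, a quick look from any Linux shell (shown here with the value a "t2.medium" reports):

$ nproc
2

kubeadm runs this check automatically during its preflight phase and aborts when fewer than 2 CPUs are found.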

7. Set the number of instances to 3 (one master and two workers) in the "Configure Instance" tab.

8. Although an 8 GB 'root' volume is enough for the operations, to be on the safe side a 10 GB 'root' volume is recommended.

9. Leave the 'Tags' field empty, as we will configure names later.

10. In the "Security Groups" tab, create a new security group and allow these TCP ports:

6443, 2379-2380, 10250, 10251, 10252, 30000-32767, and 22 for SSH into the instances.
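If you prefer the AWS CLI over the console, a rough sketch of the same setup looks like this; the group name, VPC ID, and security-group ID are placeholders, the source CIDR should be tightened for real use, and the middle command is repeated for each remaining port range (10250-10252, 30000-32767):

$ aws ec2 create-security-group --group-name k8s-cluster --description "Kubernetes cluster ports" --vpc-id <vpc-id>
$ aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 6443 --cidr 0.0.0.0/0
$ aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 2379-2380 --cidr 0.0.0.0/0
$ aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 22 --cidr 0.0.0.0/0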

11. Review and launch the instances.

12. Name the instances for convenience as 'master', 'node-1' and 'node-2'.

13. Log in to the 'master' node and change the hostname.

$ ssh -i <keypair> ubuntu@<Public IP>
$ sudo -i
# hostnamectl set-hostname master
# vim /etc/hosts
127.0.0.1 localhost master
<private IP of Master> master
<private IP of Node 1> node-1
<private IP of Node 2> node-2

Save and exit the 'vim' editor, then run 'bash' in the terminal so the new hostname takes effect without re-login.

# bash
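You can confirm the change took effect:

# hostname
master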

14. Follow the same approach on all the other nodes. Only replace the localhost name with the respective hostname, i.e. '127.0.0.1 localhost node-1'.

15. Install 'Docker' on all three nodes.

$ sudo apt-get remove docker docker-engine docker.io containerd runc
$ sudo apt-get update -y
$ sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common -y
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get update -y
$ sudo apt-get install docker-ce docker-ce-cli containerd.io -y
$ sudo systemctl start docker
$ sudo systemctl enable docker

16. Check and verify the Docker installation.

$ docker info
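Optionally, a quick sanity check that the daemon can actually pull and run a container:

$ sudo docker run hello-world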

17. Set up the 'Docker' daemon for 'Kubernetes'.

# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
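After writing this file, restart 'Docker' so the new cgroup driver and logging options take effect:

# systemctl daemon-reload
# systemctl restart docker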

18. Disable 'swap' memory (if allocated).

# swapoff -a

19. Check the 'fstab' entry for 'swap' and comment out that line (if an entry is present for swap).
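If you prefer to do this non-interactively, a one-liner that comments out any swap entry (review '/etc/fstab' afterwards to confirm):

# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab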

20. Install 'Kubernetes' on all the nodes.

$ sudo apt-get update && sudo apt-get install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
> deb https://apt.kubernetes.io/ kubernetes-xenial main
> EOF
$ sudo -i
# apt-get update
# apt-get install -y kubelet kubeadm kubectl
# apt-mark hold kubelet kubeadm kubectl

The 'apt-mark hold' command prevents 'kubelet', 'kubeadm' and 'kubectl' from being upgraded automatically when the rest of the system is updated.
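Before initializing, you can confirm the tools installed correctly; the exact version printed will depend on when you run the installation:

# kubeadm version -o short
v1.18.x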

Then initialize the cluster on the 'master' node only. Note that 'Flannel', which we will install in step 24, expects the pod network CIDR '10.244.0.0/16':

# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<private Ip of Master node>

21. Copy the node join instruction to a notepad for further reference.

22. Exit from the 'root' user and, as the normal user i.e. 'ubuntu', set up 'kubectl' access.

# exit
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
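At this point 'kubectl' should respond. The master will show as 'NotReady' until the pod network plugin is installed in step 24; the version column depends on your installation:

$ kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   1m    v1.18.x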

23. On the two worker nodes, run the saved 'kubeadm join' command as the 'root' user to join the nodes with the master.
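The saved command looks roughly like this; the token and hash are placeholders that come from your own 'kubeadm init' output:

# kubeadm join <private IP of Master>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>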

24. After the workers have joined the 'master' node, run the pod network plugin setup command on the 'master' node. I am using 'Flannel' as the pod network plugin; there are other plugins also available, like 'Calico', 'Cilium' etc.

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

25. It will create a few components, and after some time the pods will show as 'Running' and the nodes as 'Ready'.

26. To track whether the nodes are ready, run the below command on the 'master' node.

$ kubectl get nodes -w

27. After the nodes are in the 'Ready' state, label the worker nodes for convenience.

$ kubectl label node node-1 node-role.kubernetes.io/worker=worker
$ kubectl label node node-2 node-role.kubernetes.io/worker=worker

28. Finally, a three-node 'Kubernetes' cluster has been set up in AWS. To verify, run:

$ kubectl get nodes

It will show that all the nodes are in the 'Ready' state.

kubectl get nodes output

############################################

After the release of Kubernetes v1.20, Docker (dockershim) is deprecated as a container runtime in favour of runtimes that implement the Container Runtime Interface, and 'containerd' can be used as the underlying runtime instead. For that, instead of installing 'Docker' in steps 15-17, we can install 'containerd'. The below steps install 'containerd' on Ubuntu. Apart from this, every step works the same for the latest Kubernetes versions.

  1. Add the modules.
$ cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf 
> overlay
> br_netfilter
> EOF

2. Load the modules.

$ sudo modprobe overlay
$ sudo modprobe br_netfilter
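You can verify that both modules are loaded:

$ lsmod | grep -e overlay -e br_netfilter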

3. Set the system configuration for Kubernetes networking.

$ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> EOF

4. Apply the new settings.

$ sudo sysctl --system
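A quick spot-check that the new settings are active:

$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1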

5. Install 'containerd'.

$ sudo apt-get update && sudo apt-get install -y containerd

6. Create a configuration directory for 'containerd'.

$ sudo mkdir -p /etc/containerd

7. Generate the default 'containerd' configuration and save it to the newly created directory.

$ sudo containerd config default | sudo tee /etc/containerd/config.toml
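One caveat worth checking in the generated file: the kubelet on newer Kubernetes versions uses the 'systemd' cgroup driver, while 'containerd config default' emits 'SystemdCgroup = false' under the 'runc' runtime options. A one-line edit to flip it (a sketch; inspect 'config.toml' afterwards):

$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml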

8. Restart 'containerd' to ensure it uses the new configuration file.

$ sudo systemctl restart containerd
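Finally, confirm the service is up:

$ sudo systemctl status containerd --no-pager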
