How to Install Kubernetes on CentOS/RHEL k8s: Part-3

In this article, we will learn to set up and install a Kubernetes cluster with two worker nodes. We have already discussed Kubernetes in our previous articles; if you missed them, you can go back to these links to learn about Kubernetes:

Understanding Kubernetes Concepts RHEL/CentOs K8s Part-1
Understanding Kubernetes Concepts RHEL/CentOs k8s: Part-2

But in short, Kubernetes is a container orchestration system. It is an open-source project started by Google and donated to the Cloud Native Computing Foundation.

As we know, a Kubernetes cluster has master nodes and worker nodes. In this article, we will also learn the use of commands like ‘kubeadm‘ and ‘kubectl‘.
There are several ways to deploy Kubernetes, and the choice depends entirely on your needs. I have listed a few of the methods:

  • Minikube – a single-node Kubernetes cluster, good for development and testing
  • Kops (on AWS) – a multi-node Kubernetes cluster, easy to deploy as AWS takes care of most of the work
  • Kubeadm – a multi-node cluster on our own premises
  • GCP Kubernetes – managed Kubernetes, much the same as the AWS offering

Well, in our scenario we are going to deploy Kubernetes on our own premises using kubeadm.


Host OS: CentOS/RHEL 7
Master Node:
Worker Node-1:
Worker Node-2:
RAM: 4 GB

Minimum Requirements for the Machines

Master: 2-core CPU, 4 GB RAM
Node: 1-core CPU, 4 GB RAM

Note: If you are a sudo user (not root), prefix every command with sudo, e.g. sudo ifconfig

As mentioned in the machine requirements above, I will be using three machines with a minimal installation of CentOS 7. One server will act as the master node and the remaining two servers will be the minions, or worker nodes.

On the Master Node following components will be installed

  • API Server – provides the Kubernetes API using JSON/YAML over HTTP; the state of API objects is stored in etcd
  • Scheduler – a master-node program which performs scheduling tasks, such as launching containers on worker nodes based on available resources
  • Controller Manager – its main job is to monitor replication controllers and create pods to maintain the desired state
  • etcd – a key-value database that stores the cluster configuration data and cluster state
  • Kubectl utility – a command-line utility that connects to the API server on port 6443; administrators use it to create pods, services, etc.

On Worker Nodes following components will be installed

  • Kubelet – an agent on the worker node; it connects to Docker and takes care of creating, starting and deleting containers
  • Kube-Proxy – routes traffic to the appropriate containers based on the IP address and port number of the incoming request; basically, it does the port translation
  • Pod – a group of one or more containers deployed together on a single worker node or Docker host

I have broken this installation process into three sections:

  • Section 1. Commands to run on both master and worker nodes
  • Section 2. Commands to run on the master only
  • Section 3. Commands to run on the workers only

Let’s Start

Section 1: Commands To Run On Both Master and Worker Nodes

Step 1. Install Repo and Update Server

To get the latest packages and to resolve any pending dependencies, install the epel repo and update your servers with the following commands.

[root@host ~]# yum install epel-release
[root@host ~]# yum update

Step 2: Turn Off swap

To install Kubernetes, we need to disable swap permanently using the following steps.

[root@host ~]# swapoff -a
[root@host ~]# vim /etc/fstab

Comment out the swap entry by prefixing the line with #.
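For reference, here is what the change amounts to, sketched with a typical CentOS 7 swap entry (the LVM device name is an assumption; yours may differ):

```shell
# Typical swap entry in /etc/fstab on CentOS 7 (assumed device name)
line='/dev/mapper/centos-swap swap swap defaults 0 0'
# Commenting it out is simply prefixing the line with '#'
commented="#${line}"
echo "$commented"
```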

Save the file using :wq and run the following command to remount all the devices.

[root@host ~]# mount -a

Step 3: Set Hostname

For internal communication and clear identification, set the hostnames on all the hosts.
For the master:

[root@host ~]# hostnamectl set-hostname kmaster

For worker nodes

[root@host ~]# hostnamectl set-hostname knode1
[root@host ~]# hostnamectl set-hostname knode2

Step 4: Set static IP

Now, for communication, we need static IPs on all the nodes.

[root@host ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO=static #
IPADDR= #change this IP for worker nodes

Save the file using :wq and restart the network service. Change the values on the # lines according to your network.

[root@host ~]# systemctl restart network.service

Step 5: Set Host file

Update the hosts file on all servers so that the master and nodes can resolve each other by name against the IP.

[root@host ~]# vim /etc/hosts

Add an entry for each node, mapping its IP address to its hostname (kmaster, knode1, knode2). Update this information on all three nodes and save the file using :wq.
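With hypothetical addresses (substitute your actual node IPs), the /etc/hosts entries look like:

```
# hypothetical addresses - replace with your own
192.168.56.10   kmaster
192.168.56.11   knode1
192.168.56.12   knode2
```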

Step 6: Disable Selinux

We need to disable SELinux; use the following commands to do so.

[root@host ~]# setenforce 0
[root@host ~]# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

Step 7: Stop and disable FirewallD

Kubernetes also requires the FirewallD service to be stopped and disabled. Run the following commands to do so.

[root@host ~]# systemctl status firewalld
[root@host ~]# systemctl disable firewalld
[root@host ~]# systemctl stop firewalld

Step 8: Enable br_netfilter Kernel Module

This step is required if you are using iptables; otherwise you may skip it, but I would recommend doing it anyway.

[root@host ~]# modprobe br_netfilter
[root@host ~]# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
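Note that the echo above does not survive a reboot. A common companion step (the file names here are our own choice) is to persist both the module and the sysctl setting:

```shell
# Load br_netfilter at every boot
echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf

# Persist the bridge netfilter setting across reboots
# (the file name k8s.conf is an arbitrary choice)
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```

With this in place, the value is reapplied automatically after the reboot in the next step.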

Step 9: Reboot servers

Now reboot your VMs or machines so that we can start installing the Kubernetes cluster.

[root@host ~]# reboot

Step 10: Set UP repo for Kubernetes

To install Kubernetes on your system, we need to set up the Kubernetes yum repo. Run the following command to create it.

[root@host ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

The exclude=kube* line holds the Kubernetes packages back from ordinary yum update runs, which is why we pass --disableexcludes=kubernetes when installing them below.

Step 11: Install Kubernetes and its components

Now we will install Docker, kubelet, kubeadm and kubectl. Also, don’t forget to enable the services. Run the following commands to achieve this.

[root@host ~]# yum makecache fast
[root@host ~]# yum install -y kubelet kubeadm kubectl docker --disableexcludes=kubernetes
[root@host ~]# systemctl enable kubelet && systemctl start kubelet
[root@host ~]# systemctl enable docker && systemctl start docker


Step 12: Change the cgroup driver

We need to make sure Docker and Kubernetes are using the same cgroup driver. Check Docker’s cgroup driver with the docker info command, then update the kubelet configuration file accordingly.

[root@host ~]# docker info | grep -i cgroup
Cgroup Driver: cgroupfs

As you can see, Docker is using ‘cgroupfs‘ as its cgroup driver, so make the matching change in the kubeadm config file using the command below.

[root@host ~]# sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
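To illustrate what the substitution does, here it is applied to a sample line of the kind found in 10-kubeadm.conf (the exact line in your file may differ):

```shell
# Sample kubelet drop-in line (illustrative; check your actual file)
line='Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"'
# Same substitution as the sed -i command above, applied to stdin instead of the file
fixed=$(printf '%s\n' "$line" | sed 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g')
echo "$fixed"
```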

After these changes, reload the daemon and restart the services.

[root@host ~]# systemctl daemon-reload
[root@host ~]# systemctl restart docker && systemctl restart kubelet

Don’t worry if you get “Failed to restart kubelet.service: Unit not found”; the kubelet will not settle down until the cluster is initialised with kubeadm in the next part.

With these commands, we have successfully completed Section 1 of our installation. These were the commands we need to run on both the worker nodes and the master.

We will continue this installation in Part 4 with Section 2 and Section 3.

Kubernetes Series Links:

Understanding Kubernetes Concepts RHEL/CentOs K8s Part-1
Understanding Kubernetes Concepts RHEL/CentOs k8s: Part-2
How to Install Kubernetes on CentOS/RHEL k8s?: Part-3
How to Install Kubernetes on CentOS/RHEL k8s?: Part-4
How To Bring Up The Kubernetes Dashboard? K8s-Part: 5
How to Run Kubernetes Cluster locally (minikube)? K8s – Part: 6
How To Handle Minikube(Cheatsheet)-3? K8s – Part: 7
How To Handle Minikube(Cheatsheet)-3? K8s – Part: 8
How To Handle Minikube(Cheatsheet)-3? K8s – Part: 9