
Now that we know how the k8s components hang together, it’s time to create a shiny new cluster for ourselves. There are a few ways to get this done:
- manually – k8s the ‘hard way’
- bootstrapping – using the kubeadm tool
- vagrant script – installing a cluster on Windows machines
- minikube – single node setup for dev & test
In this blog we’ll scope ourselves to the bootstrapping installation only. Setting up the vagrant script, minikube & k8s the hard way are topics in themselves, and rightly deserve solo posts…. do check them out
I’ll go with the presumption that very basic Linux commands are not an issue…. or else we have Google 😉
Table of Contents
- Bootstrapping using Kubeadm
- before you begin
- basic vm configuration
- configure Kubernetes repository
- Installing docker, kubeadm, kubelet, kubectl & kubernetes-cni
- configuring the master & worker nodes
- Testing the Kubernetes Cluster
Bootstrapping using Kubeadm
We’ll be installing a 2 node cluster – 1 master (k8s-master) & 1 worker node (k8s-node1).
##### STEPS COMMON ON BOTH MASTER & WORKER NODES #####
before you begin
~ compatible linux hosts – I’m using CentOS 7 based Linux VMs for my example
~ 2 GB or more of RAM per machine
~ 2 CPUs or more
~ Full network connectivity between machines in the intended cluster
basic vm configuration
~ Unique hostnames, MAC address & product_uuid for each node
# get the MAC address of the network interfaces --> ip link or ifconfig -a
# product_uuid can be checked with --> cat /sys/class/dmi/id/product_uuid
~ Update /etc/hosts with the IP & hostname of every machine (master & nodes) in the cluster
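for example, with the master IP used later in this post (the worker IP 192.168.200.11 is just an assumed value – use your own)
cat <<EOF | sudo tee -a /etc/hosts
192.168.200.10 k8s-master
192.168.200.11 k8s-node1
EOF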
~ Update /etc/ssh/sshd_config to allow password based ssh authentication & reload the service
sudo sed -i "/^[^#]*PasswordAuthentication[[:space:]]no/c\PasswordAuthentication yes" /etc/ssh/sshd_config
sudo systemctl reload sshd.service
~ ensure /usr/local/bin is on the PATH
PATH=$PATH:/usr/local/bin
~ Load br_netfilter modules to let iptables see bridged traffic
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
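note that modprobe only lasts until the next reboot; to make the module load persist you can optionally drop it into modules-load.d as well
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF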
sudo yum -y update
~ disable SWAP
sudo sed -i '/swap/d' /etc/fstab
sudo swapoff -a
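kubelet refuses to start with swap enabled by default, so it’s worth a quick check that swap is really off
free -h ## the Swap line should show all zeroes
cat /proc/swaps ## should list no swap devices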
~ enable certain ports (*** Master Node ***)
## check all services running on ports ---> netstat -pnltu
systemctl start firewalld.service
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=8000/tcp
firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10255/tcp
firewall-cmd --permanent --add-port=9000-9100/tcp
sudo firewall-cmd --zone=public --add-service=http --permanent
sudo firewall-cmd --zone=public --add-service=https --permanent
sudo firewall-cmd --zone=public --add-service=ssh --permanent
firewall-cmd --reload && systemctl enable firewalld.service
firewall-cmd --list-all ## verify all ports are open
~ enable certain ports (*** Worker Node ***)
sudo -i
systemctl start firewalld.service
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --zone=public --add-service=http --permanent
sudo firewall-cmd --zone=public --add-service=https --permanent
sudo firewall-cmd --zone=public --add-service=ssh --permanent
firewall-cmd --reload && systemctl enable firewalld.service
firewall-cmd --list-all ## verify all ports are open
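once the control-plane is up (after the kubeadm init step below), a quick sanity check from the worker confirms the API server port is reachable through the firewall – nc comes from the nmap-ncat package on CentOS 7, and the master IP here matches the example used later
nc -zv 192.168.200.10 6443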
configure Kubernetes repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
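to confirm the repo was picked up (the excluded packages stay hidden until we install with --disableexcludes)
yum repolist enabled | grep -i kubernetes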
installing docker, kubeadm, kubelet, kubectl & kubernetes-cni
~ set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
~ installing docker
yum -y update && yum -y upgrade
yum -y install docker ## creates cgroup as 'systemd'
** if you used the 'yum install' command, then create the docker group
groupadd docker
sudo usermod -aG docker <username> ## i'm using vagrant
systemctl start docker && systemctl enable --now docker
or
curl -fsSL https://get.docker.com/ | sh ## creates cgroup as 'cgroupfs'
systemctl start docker && systemctl enable --now docker
~ verify docker installation --> run the following commands (re-run the usermod if you skipped it above)
sudo usermod -aG docker <username>
systemctl status docker
docker run hello-world


~ installing Kubernetes components
yum install -y kubelet kubeadm kubectl kubernetes-cni --disableexcludes=kubernetes
systemctl daemon-reload && systemctl restart kubelet && systemctl enable --now kubelet
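to confirm all three components landed and agree on a version
kubeadm version -o short
kubectl version --client
kubelet --version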
~ configure the cgroup driver used by kubelet on the control-plane node
*** only needed if the cgroup driver of your CRI is not cgroupfs ***
## check the cgroup driver with --> docker info | grep -i cgroup
## set your cgroupDriver value in /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
systemctl daemon-reload && systemctl restart kubelet
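alternatively, the same KubeletConfiguration can be handed over at init time via a config file – a sketch, assuming a filename of kubeadm-config.yaml; the kubeadm apiVersion v1beta2 matches kubeadm v1.15–v1.21, newer releases use v1beta3. You would then run 'kubeadm init --config kubeadm-config.yaml' on the master instead of the flag-based init below
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF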
configuring Master & Worker Nodes
##### STEP ON MASTER NODE ONLY #####
~ update kubeadm package
yum -y update
~ initialize the control-plane with kubeadm init
kubeadm init --apiserver-advertise-address=<master_node_pub_ip> --apiserver-cert-extra-sans=<master_node_pub_ip> --node-name <master_node_host_name> --pod-network-cidr=<network_ip_cidr>
example:
kubeadm init --apiserver-advertise-address=192.168.200.10 --apiserver-cert-extra-sans=192.168.200.10 --pod-network-cidr=192.168.0.0/16
after a few minutes the kubernetes control-plane will initialize successfully

Note: Take a note of the ‘kubeadm join’ command with the token, printed at the end of the kubeadm init output. This will be used to add worker nodes to the cluster.
~ configure kubectl access for <user> – exit the root shell & log in as the normal <user> --> I’m using vagrant as the user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
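a quick check that kubectl can now talk to the cluster as the normal user
kubectl cluster-info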
~ setting context – if you are root user then
PATH=$PATH:/usr/local/bin
export KUBECONFIG=/etc/kubernetes/admin.conf
~ Installing a Pod Network add-on
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
~ verify network add-on
kubectl get pods --all-namespaces
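the weave-net pods should move to Running within a minute or two, and the master flips from NotReady to Ready once the CNI is up
kubectl get pods -n kube-system -l name=weave-net ## label used by the weave manifest
kubectl get nodes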

##### STEP ON WORKER NODE ONLY #####
~ Join worker nodes to the K8s cluster – use the ‘kubeadm join’ command generated during kubeadm init on the master node
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
example:
kubeadm join 192.168.200.10:6443 --token h75zqx.awu0liy9dc3bvhtr \
--discovery-token-ca-cert-hash sha256:9272b78afca04f5dc061af47e93daeabbece5a6115539618ff7b35451f2d6518
after some time the node will be added to the freshly baked k8s cluster
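if you didn’t save the join command (tokens expire after 24 hours by default), you can mint a fresh one on the master
kubeadm token create --print-join-command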

Testing the K8s Cluster
Congratulations !!!!! You have successfully installed a Kubernetes cluster by bootstrapping with the kubeadm tool. We can verify that the cluster is up and running and has the worker node attached to it.
kubectl get nodes

Let’s try creating & running an nginx pod & see the output. Execute the following commands on the master node
kubectl run nginx --image nginx
kubectl get pods
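as an optional extra, expose the pod on a NodePort (this is why we opened 30000-32767 on the worker) and curl it – replace <worker_node_ip> & <node_port> with your own values
kubectl expose pod nginx --port=80 --type=NodePort
kubectl get svc nginx ## note the mapped port, e.g. 80:3xxxx/TCP
curl http://<worker_node_ip>:<node_port>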

Well… a long technical post loaded with commands, but I’m sure it is nothing compared to the joy of having a k8s cluster to play with.
Next we’ll explore how to make this a ‘1-command‘ installation using HashiCorp Vagrant
till then Learn… Share… Grow…