Install Kubernetes with Dashboard on CentOS 7

Learning Objectives

  • Sequence 1. Create a clone of the tester1 VM
  • Sequence 2. Install required packages on all three VMs
  • Sequence 3. Set up the K8s master node, server
  • Sequence 4. Set up the K8s worker nodes, tester1 and tester2
  • Sequence 5. Deploy the Kubernetes Dashboard
  • Sequence 6. Troubleshooting and Reset

Prerequisites

  • Oracle Linux 7 VMs to install K8s and the required software, configured as follows:
Configuration    Master Node           Worker Node 1          Worker Node 2
hostname         server.example.com    tester1.example.com    tester2.example.com
OS               CentOS 7/OL7          CentOS 7/OL7           CentOS 7/OL7
IP Address       10.10.0.100           10.10.0.101            10.10.0.102
rpms required    python3               python3                python3
  • Internet access on all VMs.
  • Remove all Docker Images and Containers.

Sequence 1. Create a clone of the tester1 VM

  1. Shut down the tester1 VM and create a clone of it.
  2. Right click on the VM name and select Clone.
  3. Change the details as given in the screenshot.
  4. In the next screen keep the defaults and click on the Clone button.
  5. Once the clone is created, start tester2 and make the following changes.
  6. Change the hostname in /etc/hostname to tester2.example.com.
  7. Change the IP address in /etc/sysconfig/network-scripts/ifcfg-enp0s8 to 10.10.0.102 (see the command sketch after this list).
  8. Reboot the VM, tester2.
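If you prefer to make these changes from the command line instead of editing the files by hand, the following is a minimal sketch for tester2. It assumes the interface name enp0s8 used in this lab; adjust the device name and file if your clone differs.

# hostnamectl set-hostname tester2.example.com

# sed -i 's/^IPADDR=.*/IPADDR=10.10.0.102/' /etc/sysconfig/network-scripts/ifcfg-enp0s8

# reboot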

Sequence 2. Install required packages on all three VMs

  1. Start all three VMs (server, tester1 and tester2) and log in as the root user.
  2. Set up yum repositories for the required packages on all three VMs. Configure the Kubernetes repository by manually creating the repo file below. A separate text file is available to copy from, to save time and avoid errors due to control characters.

# vi /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

 

Note: The gpgkey parameter is a single line containing both URLs, as shown in the screenshot.

  3. On tester1 and tester2, add the following entries to the existing repo file /etc/yum.repos.d/oracle-linux-ol7.repo.

# vi /etc/yum.repos.d/oracle-linux-ol7.repo

[ol7_developer]

name=Oracle Linux $releasever Development Packages ($basearch)

baseurl=https://yum.oracle.com/repo/OracleLinux/OL7/developer/$basearch/

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

gpgcheck=1

enabled=1

 

[ol7_developer_EPEL]

name=Oracle Linux $releasever Development Packages ($basearch)

baseurl=https://yum.oracle.com/repo/OracleLinux/OL7/developer_EPEL/$basearch/

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

gpgcheck=1

enabled=1

  4. Run the package update command on all three VMs: server, tester1 and tester2. Note that it may take 15 to 20 minutes. If the yum package installer appears to be busy, do not panic or interrupt it; this might be due to an automatic update running.

# yum update  -y

  5. Disable swap on all three VMs; K8s will not work with swap enabled. To permanently disable swap, comment out the swap entry (the last line) in /etc/fstab. A command-line alternative is sketched after this step.

# vi /etc/fstab
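Alternatively, the same result can be achieved from the command line. This is a minimal sketch that turns swap off immediately and comments out any swap entry in /etc/fstab; check the file afterwards to confirm that only the swap line was changed.

# swapoff -a

# sed -i '/\sswap\s/ s/^/#/' /etc/fstab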

 

  6. Enable the bridge netfilter module with the following commands on all three VMs (an optional persistence sketch follows the commands).

# modprobe br_netfilter

# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
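These two commands take effect immediately but do not survive a reboot. Optionally, the settings can be made persistent with a module load file and a sysctl drop-in, for example:

# echo 'br_netfilter' > /etc/modules-load.d/k8s.conf

# echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/k8s.conf

# sysctl --system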

  7. Verify your /etc/hosts file on all three VMs (an example is shown after the poweroff command below).
  8. Shut down all three VMs.

# poweroff
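For reference, in addition to the default localhost entries, the /etc/hosts file on each VM should contain entries for all three nodes, matching the hostnames and IP addresses in the prerequisite table:

10.10.0.100   server.example.com    server
10.10.0.101   tester1.example.com   tester1
10.10.0.102   tester2.example.com   tester2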

Sequence 3. Set up the K8s master node, server

  1. Configure the server VM with two CPUs. Open the settings of the server VM while it is powered off and click on the Processor tab under System, as shown in the following screenshot.
  2. Start the server VM and log in as root.
  3. Set the firewall rules with the following commands.

# firewall-cmd --permanent --add-port={10250-10252,10255,2379,2380,6443}/tcp

# firewall-cmd --reload

  4. Run the following command to install kubeadm.

# yum install kubeadm -y

  5. Start and enable the docker and kubelet services.

# systemctl restart docker

# systemctl enable --now kubelet

The kubelet service will not start yet. If you check /var/log/messages, it will complain about a missing /var/lib/kubelet/config.yaml. Don't worry; this file is created during kubeadm init in the next step.

  6. Run the following commands to pull the required images and to initialize and set up the Kubernetes master.

# kubeadm config images pull

# kubeadm init --apiserver-advertise-address=10.10.0.100 --pod-network-cidr=172.168.10.0/24

The output of the above command will look something like the following screenshot.

  7. Execute the following commands to use the cluster as the root user.

# mkdir -p $HOME/.kube

# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# chown $(id -u):$(id -g) $HOME/.kube/config

 

  8. Take note of the kubeadm join command to be executed on the worker nodes, tester1 and tester2, and copy it into a notepad file. You will need it later. The command is similar to:

kubeadm join 10.10.0.100:6443 --token ddynn0.xxxxxxxxxxxxde \
  --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

  9. Deploy the pod network to the cluster. First, run the commands below to get the status of the cluster and pods.

# kubectl get nodes

# kubectl get pods --all-namespaces

To make the cluster status Ready and the kube-dns pods Running, deploy the pod network so that containers on different hosts can communicate with each other. The pod network is the overlay network between the worker nodes.

  10. Run the following command to deploy the flannel pod network.

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

  11. Now run the following commands to verify the status. Please note that it may take a few minutes for the status to change to "Ready".

# kubectl get nodes

# kubectl get pods --all-namespaces

 

Sequence 4. Set up the K8s worker nodes, tester1 and tester2

  1. Configure firewall rules on both worker nodes, tester1 and tester2.

# firewall-cmd --permanent --add-port={10250,10255,30000-32767,6783}/tcp

# firewall-cmd --reload

  2. Install the kubeadm and docker packages on both nodes.

[root@tester1 ~]# yum install kubeadm docker -y

[root@tester2 ~]# yum install kubeadm docker -y

  3. Start and enable the docker service.

[root@tester1 ~]# systemctl enable --now docker

[root@tester2 ~]# systemctl enable --now docker

  4. Now use a command similar to the following to join the K8s cluster.

To join the worker nodes to the master node, a token is required. When the Kubernetes master is initialized, the output includes the join command and token. Copy that command and run it on both nodes (refer to Sequence 3, steps 6 and 8). If the command has been lost, a new one can be generated as shown after the join commands below.

[root@tester1 ~]# kubeadm join 10.10.0.100:6443 --token kndxd6.zolzvjaj8bifonoj \
  --discovery-token-ca-cert-hash sha256:f0f118ebe0ab7f6ed2510e6e1bff2d23fb2b58f4f5ee8e69ed92323750659aa0

[root@tester2 ~]# kubeadm join 10.10.0.100:6443 --token kndxd6.zolzvjaj8bifonoj \
  --discovery-token-ca-cert-hash sha256:f0f118ebe0ab7f6ed2510e6e1bff2d23fb2b58f4f5ee8e69ed92323750659aa0
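If the join command printed by kubeadm init is no longer available, a new token and join command can be generated on the master node and then run on both workers:

[root@server ~]# kubeadm token create --print-join-command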

  5. The output of the above command will look something like the following screenshot.
  6. Now verify the node status from the server node using the kubectl command.

# kubectl get nodes

  7. To assign a worker role to tester1 and tester2, use the following commands:

# kubectl label node tester1.example.com node-role.kubernetes.io/worker=worker

# kubectl label node tester2.example.com node-role.kubernetes.io/worker=worker

  8. Verify and fix. Use the following command on the master to check the status of all pods:

# kubectl get pods --all-namespaces -o wide

  9. Note that the flannel pods on tester1 and tester2 failed to acquire a lease, which means those nodes did not get a podCIDR. To fix it, from the master node, first find out your flannel cluster CIDR:

# cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep -i cluster-cidr

Output:

- --cluster-cidr=172.168.10.0/24

Then run the following commands from the master node:

# kubectl patch node tester1.example.com -p '{"spec":{"podCIDR":"172.168.10.0/24"}}'

# kubectl patch node tester2.example.com -p '{"spec":{"podCIDR":"172.168.10.0/24"}}'

  10. After 2-3 minutes, use the following command to check the status of all pods again from the master:

# kubectl get pods --all-namespaces -o wide

  11. For a detailed status of the nodes, use:

# kubectl describe nodes

Sequence 5. Deploy the Kubernetes Dashboard

  1. Confirm the namespaces by running the following command.
  2. On the master node, deploy the Kubernetes Dashboard by running the following command:

# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

  3. Edit the Dashboard deployment and add the argument - --token-ttl=43200 (a sketch of the resulting args section follows the command below).

# kubectl -n kubernetes-dashboard edit deployments kubernetes-dashboard
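For reference, after the edit the container args section of the deployment should look roughly like the sketch below. The existing flags may differ slightly in your copy of the manifest; only the - --token-ttl=43200 line is added.

      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            - --token-ttl=43200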

 

 

  4. Verify that all pods are running with:

# kubectl get pods --all-namespaces -o wide

  5. Start the Dashboard proxy with:

# kubectl proxy &

  6. Open another tab in the terminal and create the required user (ServiceAccount) for the Kubernetes Dashboard with cluster-admin privileges. A text file is available for direct copy/paste to save time.

# vi kubernetes-dashboard-admin-user.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

  7. Apply this yaml:

# kubectl apply -f ~/kubernetes-dashboard-admin-user.yaml

  8. Get a token to access the Kubernetes Dashboard.

# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

  9. Open the Dashboard, available at:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

The UI can only be accessed from the machine where the command is executed. See kubectl proxy --help for more options.

  10. Copy the token displayed and paste it into the login page, selecting the Token option to log in as shown below.
  11. The Dashboard will open as shown below.

 

  12. If you lose the token, you can retrieve it with the following commands.
  13. List the secrets using:

# kubectl get secrets

  14. Use kubectl describe to get the access token, using the secret name from the above command.

#  kubectl describe secret default-token-4bbdp

Sequence 6. Troubleshooting and Reset

  1. Get a comprehensive report on your nodes with:

# kubectl describe nodes

# kubectl get pods -n kube-system -o wide

# kubectl cluster-info dump

  2. Reset the cluster on all three VMs with the command below; a cleanup note follows it.

# kubeadm reset
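Note that kubeadm reset does not remove the kubectl configuration copied in Sequence 3, and depending on the kubeadm version the CNI configuration may also be left behind. If you plan to re-initialize the cluster afterwards, a minimal cleanup sketch is:

# rm -f $HOME/.kube/config

# rm -rf /etc/cni/net.d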
