Deploy Kubernetes Cluster on Oracle Linux 8 with Kubeadm

Kubernetes is an open-source orchestrator for deploying containerized applications. It was released by Google in 2014 for deploying scalable, reliable systems in containers via application-oriented APIs. Since then it has grown into one of the largest and most popular open-source projects in the world, and its API is now the standard for building cloud-native applications on every major public cloud.

Kubernetes provides the software necessary to build and deploy reliable, scalable distributed systems, offering key capabilities such as velocity, efficiency, scaling, and abstraction. It gives us a way to run and schedule containerized workloads across multiple hosts.

Kubeadm creates a minimum viable Kubernetes cluster that conforms to best practices and passes the Kubernetes conformance tests. Kubeadm also supports other cluster lifecycle functions such as cluster upgrades and bootstrap tokens. This makes it a good tool for trying out Kubernetes for the first time, for automating cluster creation and testing applications, and as a building block for other tools in the ecosystem.

Create Kubernetes Cluster with kubeadm on Oracle Linux 8

To create a cluster with kubeadm, ensure the following requirements are met:

  • A server running on Oracle Linux 8.
  • 2GB RAM or more per machine, i.e. master and worker.
  • At least 2 CPUs on the control-plane node.
  • Full network connectivity among all machines in the cluster.
  • Unique hostname, MAC address, and product_uuid for every node (see the verification commands after this list).
  • TCP ports 6443, 2379-2380, 10250, 10259, and 10257 should be open.
  • Swap must be disabled.
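To check that MAC addresses and the product_uuid are unique across nodes, you can run the following on each machine:

ip link                                  # list network interfaces and their MAC addresses
sudo cat /sys/class/dmi/id/product_uuid  # print this node's product_uuid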

For the purpose of this article, I will use one master node and two worker nodes:

Role         FQDN                   IP               OS            RAM   CPU
k8sMaster    kmaster.example.com    192.168.201.9    Oracle Linux  8GB   4
k8sWorker1   kworker1.example.com   192.168.201.7    Oracle Linux  8GB   4
k8sWorker2   kworker2.example.com   192.168.201.10   Rocky Linux   8GB   4

Master and worker node specifications

Let’s begin the process:

Run the commands below on both Kubernetes master and worker nodes

1. Update the system packages

Update the master and worker machines.

sudo yum update -y 

A reboot is recommended if kernel updates were applied.

sudo reboot

2. Set Hostnames

We will set hostnames on the master node and worker nodes to distinguish them.

sudo hostnamectl set-hostname k8smaster.example.com   ## Master node##
sudo hostnamectl set-hostname k8sworker1.example.com  ## Worker node 1 ##
sudo hostnamectl set-hostname k8sworker2.example.com  ## Worker node 2 ##
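You can confirm the change took effect and reload your shell so the prompt reflects the new hostname:

hostnamectl status   # shows the static hostname just set
exec bash            # restart the current shell to pick up the new hostname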

3. Assign Static IP Address

If your nodes get addresses via DHCP, run the commands below to check the network interface name and set a static IP address on each node.

nmcli con
sudo vim /etc/sysconfig/network-scripts/ifcfg-enp1s0 #replace name accordingly
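As an illustration, a static configuration for an interface named enp1s0 might look like the snippet below. The interface name, IP address, gateway, and DNS server are assumed values; replace them with ones that match your network (the IP shown is the master's from the table above):

TYPE=Ethernet
BOOTPROTO=none          # static addressing instead of DHCP
NAME=enp1s0
DEVICE=enp1s0
ONBOOT=yes
IPADDR=192.168.201.9    # this node's address
PREFIX=24
GATEWAY=192.168.201.1   # assumed gateway
DNS1=8.8.8.8            # assumed DNS server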

Make your changes, then restart NetworkManager to apply them (Oracle Linux 8 manages these ifcfg files through NetworkManager; the legacy network service is not available):

sudo systemctl restart NetworkManager

Test connectivity by pinging the worker nodes from the master node and vice versa.

4. Edit Hosts file and update IPs / hostnames

Add the following entries to the hosts file on both master and worker nodes. Run the command:

$ sudo vim /etc/hosts
192.168.201.9 k8smaster
192.168.201.7 k8sworker1
192.168.201.10 k8sworker2

5. Disable SELinux

The next step is to set SELinux to permissive mode so that containers can access the host filesystem, which is needed by pod networks and other services.

sudo setenforce 0
sudo sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux

6. Disable Firewall and Edit iptables Settings

To disable the firewall on both master and worker nodes and let bridged traffic pass through iptables (kubeadm's preflight checks also require IP forwarding to be enabled):

sudo systemctl disable --now firewalld
sudo modprobe br_netfilter
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee -a /etc/sysctl.conf
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
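Note that modprobe only loads the module for the current boot. To make the module and sysctl settings survive reboots, you can place them in dedicated config files (the k8s.conf file names below are an arbitrary convention):

echo 'br_netfilter' | sudo tee /etc/modules-load.d/k8s.conf

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system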

7. Add Kubernetes Repository

To set up the Kubernetes repository on master and worker nodes, note that the legacy packages.cloud.google.com repositories have been shut down; the community-owned pkgs.k8s.io repository below serves the v1.31 packages used in this guide:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
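Each Kubernetes minor release has its own repository path, so change v1.31 in the baseurl to target a different version. You can confirm the repository is reachable and list the available package versions with:

sudo yum makecache
sudo yum list --showduplicates kubeadm --disableexcludes=kubernetes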

8. Install Kubeadm & Docker Container Engine

Add Docker repository on Oracle Linux 8:

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Install Docker CE on Oracle Linux 8:

sudo yum install -y docker-ce docker-ce-cli containerd.io --allowerasing

Run the commands below to install kubeadm, kubelet, and kubectl on master and worker nodes. The --disableexcludes flag is needed because the repo file excludes these packages from routine updates:

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Start and enable kubelet services:

sudo systemctl enable kubelet
sudo systemctl start kubelet

Enable and start Docker:

sudo systemctl enable docker
sudo systemctl start docker

The output of the docker status command:

$ systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor pres>
   Active: active (running) since Fri 2024-11-06 15:52:53 CET; 15s ago
     Docs: https://docs.docker.com
 Main PID: 67365 (dockerd)
    Tasks: 9
   Memory: 37.0M
   CGroup: /system.slice/docker.service
           └─67365 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/conta>
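One important note: Kubernetes v1.24 removed dockershim, so kubeadm no longer talks to Docker directly. Instead, the kubelet uses the containerd runtime installed alongside Docker. The containerd.io package ships with its CRI plugin disabled, so regenerate a default config and switch on the systemd cgroup driver before initializing the cluster. Run this on both master and worker nodes; it is a minimal sketch that overwrites /etc/containerd/config.toml, so adjust it if you maintain a custom containerd configuration:

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd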

9. Disable Swap

Disable swap on both master and worker nodes by executing the commands:

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
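You can verify that swap is fully off with free; the Swap line should read all zeros:

free -h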

10. Initialize Kubernetes Cluster on K8s Master node

This step is done on the K8s Master node only.

Pull required images:

sudo kubeadm config images pull
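Optionally, you can pass a pod network CIDR to kubeadm init. This is worth considering here because Calico's default IP pool (192.168.0.0/16) overlaps the 192.168.201.0/24 host network used in this guide. The 10.244.0.0/16 value below is only an example; if you use it, set the matching CALICO_IPV4POOL_CIDR in calico.yaml in step 11:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16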

To initialize Kubernetes with the defaults, run the command:

sudo kubeadm init

The command output will look like this:

[init] Using Kubernetes version: v1.31.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.201.9]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.201.9 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.201.9 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.002282 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8smaster as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8smaster as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: i8gowt.xwke5bnuqn3jah0g
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.201.9:6443 --token i8gowt.xwke5bnuqn3jah0g \
	--discovery-token-ca-cert-hash sha256:ca0842e57b703e161c2de0ceddbaaef50d3ac2663bbd8ecebc4fde2b6f6f5617

If you run into any errors while initializing kubeadm, run the following commands to resolve them:

1. Reset the kubeadm cluster then flush the iptables:

sudo kubeadm reset -f
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X

2. Change the Docker cgroup driver to systemd, then restart the Docker service:

$ sudo vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker

3. Turn off swap, then start and enable the kubelet service:

sudo swapoff -a
sudo systemctl start kubelet 
sudo systemctl enable kubelet

To start using the cluster, run the commands below as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Or as the root user:

mkdir ~/.kube
sudo cp /etc/kubernetes/admin.conf ~/.kube/config

Run the command below on the master node to check active nodes in the cluster:

$ kubectl get nodes
NAME        STATUS     ROLES           AGE   VERSION
k8smaster   NotReady   control-plane   14m   v1.31.1

From the output, the status is NotReady because we have not yet installed a pod network.

11. Install Pod Network Using Calico

Run these steps on the Master node.

1. Download the Calico YAML file

Issue the command below; refer to the Calico docs for the latest version.

curl https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/calico.yaml -O

2. Apply the calico YAML file.

Apply the downloaded YAML file with the command:

kubectl apply -f calico.yaml

Sample output:

kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created

3. Check pod status:

Let's now check the pod status by running the command:

kubectl get pods -n kube-system

Sample output:

NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-647d84984b-5rz99   1/1     Running   0          48s
calico-node-r2qmp                          1/1     Running   0          48s
coredns-64897985d-gqwn2                    1/1     Running   0          16m
coredns-64897985d-mcdsd                    1/1     Running   0          16m
etcd-k8smaster                             1/1     Running   0          16m
kube-apiserver-k8smaster                   1/1     Running   0          16m
kube-controller-manager-k8smaster          1/1     Running   0          16m
kube-proxy-zxvqs                           1/1     Running   0          16m
kube-scheduler-k8smaster                   1/1     Running   0          16m

From the output, all components are running. Let’s check the status of our node:

$ kubectl get nodes
NAME        STATUS   ROLES           AGE   VERSION
k8smaster   Ready    control-plane   17m   v1.31.1

Our Master node is ready.

12. Join Worker Nodes

With our Master node now ready, we need to join our two worker nodes. This is done by using the token created in our previous step. On the worker nodes, run the command:

sudo kubeadm join 192.168.201.9:6443 --token i8gowt.xwke5bnuqn3jah0g \
	--discovery-token-ca-cert-hash sha256:ca0842e57b703e161c2de0ceddbaaef50d3ac2663bbd8ecebc4fde2b6f6f5617 

Sample output:

sudo kubeadm join 192.168.201.9:6443 --token i8gowt.xwke5bnuqn3jah0g \
> --discovery-token-ca-cert-hash sha256:ca0842e57b703e161c2de0ceddbaaef50d3ac2663bbd8ecebc4fde2b6f6f5617 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

With the worker nodes now joined, run the command below on the master node:

kubectl get nodes

Sample output:

NAME         STATUS   ROLES           AGE     VERSION
k8smaster    Ready    control-plane   78m     v1.31.1
k8sworker1   Ready    <none>          5m46s   v1.31.1
k8sworker2   Ready    <none>          97s     v1.31.1

Congratulations, your worker nodes have successfully joined the cluster.
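The <none> role shown for the workers is cosmetic. If you would like kubectl to display a worker role, you can add the conventional node-role label yourself; this is optional and the label key is a convention, not a requirement:

kubectl label node k8sworker1 node-role.kubernetes.io/worker=
kubectl label node k8sworker2 node-role.kubernetes.io/worker=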

Confirm that all pods are running with the command:

kubectl get pods --all-namespaces

Sample output:

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-647d84984b-5rz99   1/1     Running   0          71m
kube-system   calico-node-5pxq7                          1/1     Running   0          14m
kube-system   calico-node-92s6r                          1/1     Running   0          10m
kube-system   calico-node-r2qmp                          1/1     Running   0          71m
kube-system   coredns-64897985d-gqwn2                    1/1     Running   0          87m
kube-system   coredns-64897985d-mcdsd                    1/1     Running   0          87m
kube-system   etcd-k8smaster                             1/1     Running   0          87m
kube-system   kube-apiserver-k8smaster                   1/1     Running   0          87m
kube-system   kube-controller-manager-k8smaster          1/1     Running   0          87m
kube-system   kube-proxy-2qq52                           1/1     Running   0          10m
kube-system   kube-proxy-kw96r                           1/1     Running   0          14m
kube-system   kube-proxy-zxvqs                           1/1     Running   0          87m
kube-system   kube-scheduler-k8smaster                   1/1     Running   0          87m

If a token has expired, a new one can be generated using the command below on the master node.

kubeadm token create

Then issue the command below to list the tokens.

kubeadm token list
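To generate a fresh token and print the complete join command in a single step, kubeadm also supports:

kubeadm token create --print-join-command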

Get pods by running the command:

kubectl get pods -n kube-system

The output:

NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-647d84984b-5rz99   1/1     Running   0          82m
calico-node-5pxq7                          1/1     Running   0          25m
calico-node-92s6r                          1/1     Running   0          21m
calico-node-r2qmp                          1/1     Running   0          82m
coredns-64897985d-gqwn2                    1/1     Running   0          98m
coredns-64897985d-mcdsd                    1/1     Running   0          98m
etcd-k8smaster                             1/1     Running   0          98m
kube-apiserver-k8smaster                   1/1     Running   0          98m
kube-controller-manager-k8smaster          1/1     Running   0          98m
kube-proxy-2qq52                           1/1     Running   0          21m
kube-proxy-kw96r                           1/1     Running   0          25m
kube-proxy-zxvqs                           1/1     Running   0          98m
kube-scheduler-k8smaster                   1/1     Running   0          98m

To list running containers on a worker node:

sudo crictl ps

The output:

CONTAINER           IMAGE                                                                                           CREATED             STATE               NAME                ATTEMPT             POD ID
4a767a90e00ca       calico/node@sha256:6912fe45eb85f166de65e2c56937ffb58c935187a84e794fe21e06de6322a4d0             29 minutes ago      Running             calico-node         0                   29109d992c11d
a714534073527       k8s.gcr.io/kube-proxy@sha256:e40f3a28721588affcf187f3f246d1e078157dabe274003eaa2957a83f7170c8   29 minutes ago      Running             kube-proxy          0                   343bc314bfd4b
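If crictl warns that it cannot determine the runtime endpoint, point it at the containerd socket explicitly and persist the setting in /etc/crictl.yaml, the standard crictl config file:

cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF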

That’s it folks! You have successfully deployed a Kubernetes cluster on Oracle Linux 8 with kubeadm.
