Kubernetes is an open-source platform that automates the deployment, scaling, and management of containerized applications. It runs on on-premises, hybrid, and public cloud infrastructure, so you can move workloads wherever they are needed. Kubernetes rolls out changes to your application’s configuration or code gradually while monitoring application health, so it never terminates all instances at once. When containers fail, Kubernetes restarts them; when nodes die, it replaces the affected containers and reschedules them onto healthy nodes.
A Kubernetes cluster contains:
- Control plane that manages the worker nodes. Its components can run on any machine in the cluster. They include:
– kube-apiserver, which exposes the Kubernetes API.
– etcd, a key-value store used as Kubernetes’ backing store for all cluster data.
– kube-scheduler, which watches for newly created Pods with no assigned node and selects a node for them to run on.
– kube-controller-manager, which runs controller processes.
– cloud-controller-manager, which embeds cloud-specific control logic.
- Worker nodes that run the containerized applications. They host the Pods that make up the application workload and provide the Kubernetes runtime environment. Node components include:
– kubelet, an agent that ensures containers are running in a Pod.
– kube-proxy, a network proxy that runs on each node in the cluster.
– Container runtime, the software that runs containers.
k0s Kubernetes Distribution
k0s is a Kubernetes distribution that deploys and runs Kubernetes workloads at any scale on any infrastructure. It ships as a single binary with no host OS dependencies other than the kernel. It works on any infrastructure: bare metal, on-premises, edge, IoT, and public and private clouds. It is open source and 100% free to use. Its simple design, flexible deployment options, and modest system requirements reduce the complexity of installing and running a fully conformant Kubernetes distribution.
Some of its features include:
- Multiple installation methods: single-node, multi-node, airgap, and Docker.
- Flexible deployment options with control plane isolation as default.
- Built-in cluster features, including DNS by CoreDNS, cluster metrics by Metrics Server, and Horizontal Pod Autoscaling (HPA).
- Supports custom Container Network Interface (CNI) and Container Runtime Interface (CRI) plugins.
- Built-in security and conformance, including the kube-bench security benchmark.
- Built-in security features such as RBAC, Pod Security Policies, and Network Policies.
k0sctl is a command-line tool for bootstrapping and managing k0s clusters. To create a cluster, k0sctl connects to the supplied hosts over SSH, collects information about them, configures them, deploys k0s, and joins the nodes into a cluster. k0sctl lets you create repeatable multi-node clusters automatically.
System requirements (minimum) for k0s
k0s target host system requirements.
Role | Memory (RAM) | Virtual CPU (vCPU) | Storage (SSD) |
---|---|---|---|
Controller node | 1 GB | 1 vCPU | ~0.5 GB |
Worker node | 1 GB | 1 vCPU | ~1.3 GB |
Controller + worker | 1 GB | 1 vCPU | ~1.7 GB |
Host operating systems supported are Linux and Windows Server 2019.
Supported architectures are x86-64, ARM64, and ARMv7.
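Before installing, you can sanity-check a host against these requirements. A minimal sketch, assuming a Linux host; the thresholds mirror the table above, and `x86_64`, `aarch64`, and `armv7l` are the `uname -m` names for the supported architectures:

```shell
# Pre-flight check: architecture and memory against the k0s minimums above.
arch=$(uname -m)
case "$arch" in
  x86_64|aarch64|armv7l) echo "architecture $arch: supported" ;;
  *)                     echo "architecture $arch: not supported" ;;
esac

# MemTotal is reported in kB; the 1 GB minimum is 1024*1024 kB.
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
if [ "$mem_kb" -ge 1048576 ]; then
  echo "memory: OK (${mem_kb} kB)"
else
  echo "memory: below the 1 GB minimum (${mem_kb} kB)"
fi
```

Run the same check on each target host before handing it to k0sctl.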
Deploy k0s Kubernetes on RHEL 9 or CentOS 9 using k0sctl
This guide takes you through the steps to deploy a k0s Kubernetes cluster on RHEL 9 / CentOS 9 using k0sctl.
We install k0s and k0sctl on a local machine (192.168.200.40) that connects over SSH to 3 remote machines:
- 192.168.200.41 – Controller
- 192.168.200.42 – Worker Node
- 192.168.200.43 – Worker Node
1. Generate SSH key
Generate an SSH key pair to connect to the remote machines. Press Enter to accept the default key path and leave the passphrase empty.
$ ssh-keygen
Enter file in which to save the key (/home/technixleo/.ssh/id_rsa):
Created directory '/home/technixleo/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/technixleo/.ssh/id_rsa
Your public key has been saved in /home/technixleo/.ssh/id_rsa.pub
....
The keys are stored in the ~/.ssh directory. Copy the public key to the remote machines so you can connect without a password.
Run the following commands to add your SSH key to the servers.
ssh-copy-id -i ~/.ssh/id_rsa root@192.168.200.41
ssh-copy-id -i ~/.ssh/id_rsa root@192.168.200.42
ssh-copy-id -i ~/.ssh/id_rsa root@192.168.200.43
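The per-host ssh-copy-id steps above can also be expressed as a loop; a small sketch using the host IPs from this guide:

```shell
# Copy the public key to each node in turn (hosts taken from this guide).
HOSTS="192.168.200.41 192.168.200.42 192.168.200.43"

copy_keys() {
  for host in $HOSTS; do
    echo "copying key to root@${host}"
    ssh-copy-id -i ~/.ssh/id_rsa "root@${host}"
  done
}
```

Run `copy_keys` after generating the key; each host prompts for the root password once.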
Confirm you can connect to the servers via SSH without a password.
ssh root@192.168.200.41
ssh root@192.168.200.42
ssh root@192.168.200.43
With SSH access in place, let’s install k0s.
2. Install k0s on RHEL 9 / CentOS 9
First, install k0s on the local system. The following command downloads and runs a script that installs the latest stable version of k0s.
$ curl -sSLf https://get.k0s.sh | sudo sh
Downloading k0s from URL: https://github.com/k0sproject/k0s/releases/download/v1.29.2+k0s.0/k0s-v1.29.2+k0s.0-amd64
k0s is now executable in /usr/local/bin
Run the following command to install k0s as a single-node service that runs both the controller and worker functions.
sudo k0s install controller --single
Start the k0s service
sudo k0s start
Check the status of the k0s instance.
$ sudo k0s status
Version: v1.29.2+k0s.0
Process ID: 35271
Role: controller
Workloads: true
SingleNode: true
Kube-api probing successful: true
Access your cluster using the following command.
$ sudo k0s kubectl get nodes
NAME STATUS ROLES AGE VERSION
rhel9.technixleo.com Ready control-plane 46s v1.29.2+k0s
Install k0sctl on CentOS 9 / RHEL 9
Download the latest binary package for k0sctl from the releases page.
VER=$( curl --silent "https://api.github.com/repos/k0sproject/k0sctl/releases/latest"| grep '"tag_name"'|sed -E 's/.*"([^"]+)".*/\1/')
wget https://github.com/k0sproject/k0sctl/releases/download/${VER}/k0sctl-linux-x64 -O k0sctl
Make the file executable and move it to a directory in your PATH.
chmod +x k0sctl
sudo mv k0sctl /usr/local/bin
You can confirm a successful installation by checking the version.
$ k0sctl version
version: v0.17.4
commit: 372a589
Use the following command to create a configuration file for the cluster in the current working directory.
k0sctl init > k0sctl.yaml
To generate the file in a different directory, specify the path as shown below.
k0sctl init > path/to/k0sctl.yaml
Edit the configuration file to add the worker nodes: specify each host’s IP address reachable by k0sctl and the details for an SSH connection.
$ vim k0sctl.yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 192.168.200.41
      user: root
      port: 22
      keyPath: ~/.ssh/id_rsa
    role: controller
  - ssh:
      address: 192.168.200.42
      user: root
      port: 22
      keyPath: ~/.ssh/id_rsa
    role: worker
  - ssh:
      address: 192.168.200.43
      user: root
      port: 22
      keyPath: ~/.ssh/id_rsa
    role: worker
  k0s:
    version: null
    versionChannel: stable
    dynamicConfig: false
    config: {}
In the configuration file, I have defined one controller and two worker nodes. Save and exit the file.
Apply the configuration with the following command.
k0sctl apply --config k0sctl.yaml
Sample Output:
⠀⣿⣿⡇⠀⠀⢀⣴⣾⣿⠟⠁⢸⣿⣿⣿⣿⣿⣿⣿⡿⠛⠁⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀█████████ █████████ ███
⠀⣿⣿⡇⣠⣶⣿⡿⠋⠀⠀⠀⢸⣿⡇⠀⠀⠀⣠⠀⠀⢀⣠⡆⢸⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀███ ███ ███
⠀⣿⣿⣿⣿⣟⠋⠀⠀⠀⠀⠀⢸⣿⡇⠀⢰⣾⣿⠀⠀⣿⣿⡇⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀███ ███ ███
⠀⣿⣿⡏⠻⣿⣷⣤⡀⠀⠀⠀⠸⠛⠁⠀⠸⠋⠁⠀⠀⣿⣿⡇⠈⠉⠉⠉⠉⠉⠉⠉⠉⢹⣿⣿⠀███ ███ ███
⠀⣿⣿⡇⠀⠀⠙⢿⣿⣦⣀⠀⠀⠀⣠⣶⣶⣶⣶⣶⣶⣿⣿⡇⢰⣶⣶⣶⣶⣶⣶⣶⣶⣾⣿⣿⠀█████████ ███ ██████████
k0sctl v0.17.4 Copyright 2023, k0sctl authors.
Anonymized telemetry of usage will be sent to the authors.
By continuing to use k0sctl you agree to these terms:
https://k0sproject.io/licenses/eula
INFO ==> Running phase: Connect to hosts
INFO [ssh] 192.168.200.42:22: connected
INFO [ssh] 192.168.200.41:22: connected
INFO [ssh] 192.168.200.43:22: connected
INFO ==> Running phase: Detect host operating systems
INFO [ssh] 192.168.200.41:22: is running CentOS Stream 9
INFO [ssh] 192.168.200.42:22: is running AlmaLinux 9.0 (Emerald Puma)
INFO [ssh] 192.168.200.43:22: is running CentOS Stream 9
INFO ==> Running phase: Acquire exclusive host lock
INFO ==> Running phase: Prepare hosts
INFO ==> Running phase: Gather host facts
INFO [ssh] 192.168.200.42:22: using almalinux.technixleo.com as hostname
INFO [ssh] 192.168.200.41:22: using cent9.technixleo.com as hostname
INFO [ssh] 192.168.200.43:22: using localhost.localdomain as hostname
INFO [ssh] 192.168.200.42:22: discovered ens18 as private interface
INFO [ssh] 192.168.200.43:22: discovered ens18 as private interface
INFO [ssh] 192.168.200.41:22: discovered ens18 as private interface
INFO ==> Running phase: Validate hosts
INFO ==> Running phase: Gather k0s facts
INFO ==> Running phase: Validate facts
INFO ==> Running phase: Configure k0s
WARN [ssh] 192.168.200.41:22: generating default configuration
INFO [ssh] 192.168.200.41:22: validating configuration
INFO [ssh] 192.168.200.41:22: configuration was changed
INFO ==> Running phase: Initialize the k0s cluster
INFO [ssh] 192.168.200.41:22: installing k0s controller
INFO [ssh] 192.168.200.41:22: waiting for the k0s service to start
INFO [ssh] 192.168.200.41:22: waiting for kubernetes api to respond
INFO ==> Running phase: Install workers
INFO [ssh] 192.168.200.42:22: validating api connection to https://192.168.200.41:6443
INFO [ssh] 192.168.200.43:22: validating api connection to https://192.168.200.41:6443
INFO [ssh] 192.168.200.41:22: generating token
INFO [ssh] 192.168.200.42:22: writing join token
INFO [ssh] 192.168.200.43:22: writing join token
INFO [ssh] 192.168.200.42:22: installing k0s worker
INFO [ssh] 192.168.200.43:22: installing k0s worker
INFO [ssh] 192.168.200.42:22: starting service
INFO [ssh] 192.168.200.43:22: starting service
INFO [ssh] 192.168.200.42:22: waiting for node to become ready
INFO [ssh] 192.168.200.43:22: waiting for node to become ready
INFO ==> Running phase: Release exclusive host lock
INFO ==> Running phase: Disconnect from hosts
INFO ==> Finished in 17m19s
INFO k0s cluster version v1.29.2+k0s.0 is now installed
INFO Tip: To access the cluster you can now fetch the admin kubeconfig using:
INFO k0sctl kubeconfig
Access the Cluster
Install kubectl on your system.
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
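Optionally, you can verify the downloaded binary against its published checksum before installing it; the checksum URL follows the same pattern as the download URL above, and network access is assumed. A sketch wrapped in a function:

```shell
# Verify kubectl against the SHA-256 checksum published alongside the release.
verify_kubectl() {
  ver=$(curl -L -s https://dl.k8s.io/release/stable.txt)
  curl -LO "https://dl.k8s.io/release/${ver}/bin/linux/amd64/kubectl.sha256"
  # sha256sum --check expects "<hash>  <filename>" on stdin
  echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
}
```

Run `verify_kubectl` in the directory containing the downloaded `kubectl` binary; it prints `kubectl: OK` when the checksum matches.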
To store the kubeconfig file generated by k0sctl, create the .kube directory in your home directory.
mkdir -p ~/.kube
Generate the kubeconfig file with the following command, passing the same configuration file.
k0sctl kubeconfig --config k0sctl.yaml > ~/.kube/config
Get the cluster info with the following command.
$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.200.41:6443
CoreDNS is running at https://192.168.200.41:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To list the nodes, use the following command. Only the worker nodes are shown, because the k0s control plane is isolated from the workloads by default.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
almalinux.technixleo.com Ready <none> 67m v1.29.2+k0s.0
localhost.localdomain Ready <none> 67m v1.29.2+k0s.0
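The ROLES column shows `<none>` because k0s does not label its worker nodes. If you want a readable role there, you can add the conventional label yourself; a hypothetical helper using the node names from the output above:

```shell
# Label the worker nodes so "kubectl get nodes" shows a worker role.
label_workers() {
  for node in almalinux.technixleo.com localhost.localdomain; do
    kubectl label node "$node" node-role.kubernetes.io/worker=worker
  done
}
```

This is purely cosmetic and does not change how Pods are scheduled.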
To get pods, use the following command.
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-ddddfbd5c-f2j7d 1/1 Running 0 72m
kube-system coredns-ddddfbd5c-pz79p 1/1 Running 0 72m
kube-system konnectivity-agent-tmvkf 1/1 Running 0 72m
kube-system konnectivity-agent-z2dhb 1/1 Running 0 72m
kube-system kube-proxy-7jdsn 1/1 Running 0 72m
kube-system kube-proxy-kt4cl 1/1 Running 0 72m
kube-system kube-router-vstc4 1/1 Running 0 72m
kube-system kube-router-xwd46 0/1 CrashLoopBackOff 5 (69m ago) 72m
kube-system metrics-server-7d7c4887f4-pvx4x 0/1 Running 0 72m
Install NGINX Ingress Controller
The ingress controller consolidates the routing rules of several applications into a single entity. An ingress controller is exposed to an external network using a NodePort, a LoadBalancer, or the host network.
Install the NGINX Ingress Controller with a NodePort service. Check for the latest release at the link below.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.7.0/deploy/static/provider/baremetal/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
Check that the Pods have started.
$ kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-ll2qr 0/1 Completed 0 60s
ingress-nginx-admission-patch-mqrpd 0/1 Completed 1 60s
ingress-nginx-controller-86585ccf6c-wdt4w 1/1 Running 0 60s
Check that you can see the NodePort service.
$ kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.101.60.193 <none> 80:30815/TCP,443:32438/TCP 2m10s
ingress-nginx-controller-admission ClusterIP 10.110.167.76 <none> 443/TCP 2m10s
Check that the ingress class object named nginx has been created.
$ kubectl -n ingress-nginx get ingressclasses
NAME CONTROLLER PARAMETERS AGE
nginx k8s.io/ingress-nginx <none> 3m15s
Try connecting to the Ingress controller using the NodePort from the previous step (NodePorts are allocated in the range 30000-32767).
The syntax is as follows.
curl <worker-external-ip>:<node-port>
In my case, I use the IP address of one worker node with the allocated NodePort. It returns a “404 Not Found” response, which is expected since we have not yet configured any backend service.
$ curl 138.201.255.67:30815
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
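Rather than reading the NodePort off the service listing by hand, you can extract it with a JSONPath query; a hypothetical helper, assuming the service name created by the manifest above:

```shell
# Print the NodePort mapped to the ingress controller's HTTP port (80).
ingress_nodeport() {
  kubectl -n ingress-nginx get svc ingress-nginx-controller \
    -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'
}
```

You can then test with `curl "<worker-external-ip>:$(ingress_nodeport)"`.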
Deploy a simple test application
vim simple-web-server-with-ingress.yaml
Add the following to the file
apiVersion: v1
kind: Namespace
metadata:
  name: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
  namespace: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.53-alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-server-service
  namespace: web
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-server-ingress
  namespace: web
spec:
  ingressClassName: nginx
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-server-service
            port:
              number: 5000
Save and exit the file.
Deploy the app with the following command.
$ kubectl apply -f simple-web-server-with-ingress.yaml
namespace/web created
deployment.apps/web-server created
service/web-server-service created
ingress.networking.k8s.io/web-server-ingress created
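Before testing the Ingress, you may want to confirm the Deployment has finished rolling out; a small sketch using the names from the manifest above:

```shell
# Wait for the web-server Deployment to become ready, then list its pods.
check_web_rollout() {
  kubectl -n web rollout status deployment/web-server --timeout=120s
  kubectl -n web get pods -l app=web
}
```

`rollout status` blocks until the Deployment is ready or the timeout expires, which makes it handy in scripts.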
Verify you can access the application through the NodePort from the earlier step, using the following syntax.
curl <worker-external-ip>:<node-port> -H 'Host: web.example.com'
In my case, I would use the following command.
$ curl 138.201.255.67:30815 -H 'Host: web.example.com'
<html>
<head><title>web.example.com</title></head>
<body>
<center><h1>It works!</h1></center>
<hr><center>nginx</center>
</body>
</html>
Uninstall k0s
Stop the service.
sudo k0s stop
Invoke the reset command.
sudo k0s reset
Then reboot your system.
reboot
To tear down a cluster deployed with k0sctl, run its reset command with the same configuration file.
k0sctl reset --config k0sctl.yaml
Conclusion
We have gone through the process of deploying a k0s Kubernetes cluster on RHEL 9 / CentOS 9 using k0sctl. k0s, a certified Kubernetes distribution, offers an easier way to install and run a conformant Kubernetes cluster. It removes developer friction and lets you bootstrap new clusters in a matter of minutes, so anyone can get started with Kubernetes without special knowledge or expertise.