If you have been sitting and thinking of a simple solution that would help you set up a Kubernetes cluster quickly to test your application, without the hassle of setting up a complete cluster with three or more nodes, then your lucky star has shined. There is an application known as k3d, and we are going to illustrate how you can get it installed on your Rocky Linux / AlmaLinux box without much fuss. Before we set off for the swim, let us make a couple of introductions so that we can call each other by name as we dive into the pool together.
So What is k3d?
In order to get a complete understanding of k3d, we shall first cover something else it depends on, known as k3s. K3s is a certified Kubernetes distribution created by Rancher whose footprint is lightweight and which is easy to install, deploy, and manage. Rancher was able to make k3s lightweight by trimming millions of lines of code from the main Kubernetes source tree. The trimmed code left k3s with fewer dependencies, cloud provider integrations, add-ons, and other components that are not absolutely necessary for installing and running Kubernetes. We hope that k3s is now clear. Let us now jump into our core business of k3d.
K3d is a lightweight wrapper to run K3s (Rancher Lab’s minimal Kubernetes distribution) in Docker. K3d makes it very easy to create single- and multi-node k3s clusters in Docker, for example, for local development on Kubernetes.
Install and Use k3d on Rocky / AlmaLinux 9
Good! Now we are swimming in the same pool. We can now proceed to set it up in our Rocky Linux 9 / AlmaLinux 9 box.
Requirements
Before we can start, we have to make sure our server meets the following:
- kubectl to interact with the Kubernetes cluster
- Docker to be able to use k3d at all
- Note: k3d v5.x.x requires at least Docker v20.10.5 (runc >= v1.0.0-rc93) to work properly
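Before going further, you may want to confirm that an already-installed Docker meets that minimum. The sketch below uses a hypothetical helper, `version_ge`, which is not part of Docker or k3d; it compares two version strings with `sort -V` and only queries Docker if the `docker` command is actually on the PATH:

```shell
# Hypothetical helper (not part of Docker or k3d): succeeds when
# version $1 >= version $2, using GNU sort's version ordering.
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Only check when Docker is installed at all.
if command -v docker >/dev/null 2>&1; then
  docker_version=$(docker version --format '{{.Server.Version}}' 2>/dev/null)
  if version_ge "${docker_version:-0}" "20.10.5"; then
    echo "Docker ${docker_version} meets the k3d v5 minimum"
  else
    echo "Docker ${docker_version:-daemon not reachable} is below 20.10.5; please upgrade"
  fi
fi
```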
Step 1: Install Docker
K3d depends entirely on Docker, and in this step we get to set it up and make sure it runs smoothly. Lucky for us, we have already done a guide to help us set this up. Kindly refer to our How to install Docker on Linux guide.
Step 2: Install kubectl and other packages
Before the installation, we are going to need some packages to help us fetch, edit, and manipulate files. Run the following to install all of these essential packages:
sudo dnf install -y vim curl wget
In order to interact with the cluster we will be creating, we need the kubectl package. Install it by running the following commands:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
You can also confirm the version installed:
kubectl version --output=yaml --client
You should see an output similar to:
clientVersion:
buildDate: "2025-01-15T14:40:53Z"
compiler: gc
gitCommit: e9c9be4007d1664e68796af02b8978640d2c1b26
gitTreeState: clean
gitVersion: v1.32.1
goVersion: go1.23.4
major: "1"
minor: "32"
platform: linux/amd64
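Optionally, you can also verify the downloaded binary against its published checksum before trusting it, mirroring the verification step from the upstream Kubernetes install docs. The sketch below is guarded so it only runs when the `kubectl` file from the download step is still in the current directory:

```shell
# Optional integrity check: fetch the published SHA-256 for the same
# release and compare it against the kubectl binary we downloaded.
# Skipped when the binary is not in the current directory.
if [ -f kubectl ]; then
  curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
  echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
fi
```

If the check prints `kubectl: OK`, the binary matches what the Kubernetes project published.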
Step 3: Enable ip_tables module
K3d comes with Traefik, which needs iptables for some of its routing. In order to make its actions smooth, we need to enable the "ip_tables" kernel module. Run the following to load it now and to have it loaded automatically on boot (Rocky Linux / AlmaLinux read persistent module names from "/etc/modules-load.d/"):
sudo modprobe ip_tables
echo 'ip_tables' | sudo tee /etc/modules-load.d/iptables.conf
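You can confirm the module is actually loaded before moving on. The helper name `check_ip_tables` below is our own invention for this sketch; it relies on the fact that `/sys/module/<name>` exists only while the corresponding kernel module is loaded:

```shell
# Hypothetical helper: report whether the ip_tables module is loaded,
# by checking for its entry under /sys/module.
check_ip_tables() {
  if [ -d /sys/module/ip_tables ]; then
    echo "loaded"
  else
    echo "not loaded"
  fi
}
check_ip_tables
```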
Step 4: Install k3d
Now we come to the juiciest part, where we are going to see k3d in action. Before that, we shall install it on our server as follows:
wget -q -O - https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
You will see an output similar to this:
Preparing to install k3d into /usr/local/bin
k3d installed into /usr/local/bin/k3d
Run 'k3d --help' to see what you can do with it.
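As a quick sanity check, you can confirm the installer actually put k3d on your PATH. The sketch below wraps the check in a small helper function of our own naming, so it degrades gracefully on a machine where the install has not run yet:

```shell
# Hypothetical helper: print the k3d version if installed, otherwise a hint.
k3d_sanity() {
  if command -v k3d >/dev/null 2>&1; then
    k3d version     # prints the k3d release and the bundled k3s version
  else
    echo "k3d not found on PATH; re-run the install script from Step 4"
  fi
}
k3d_sanity
```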
Step 5: Create a cluster with a single server node
To demonstrate the power of k3d, we are going to create a single-node cluster in a very simple fashion. You simply need to run the command below:
k3d cluster create geekscluster
You’ll see the following as it creates your cluster
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-geekscluster'
INFO[0000] Created image volume k3d-geekscluster-images
INFO[0000] Starting new tools node...
INFO[0001] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.8.1'
INFO[0001] Creating node 'k3d-geekscluster-server-0'
INFO[0002] Starting node 'k3d-geekscluster-tools'
INFO[0009] Pulling image 'docker.io/rancher/k3s:v1.31.4-k3s1'
INFO[0014] Creating LoadBalancer 'k3d-geekscluster-serverlb'
INFO[0015] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.8.1'
INFO[0018] Using the k3d-tools node to gather environment information
INFO[0018] HostIP: using network gateway 172.18.0.1 address
INFO[0018] Starting cluster 'geekscluster'
INFO[0018] Starting servers...
INFO[0018] Starting node 'k3d-geekscluster-server-0'
INFO[0023] All agents already running.
INFO[0023] Starting helpers...
INFO[0023] Starting node 'k3d-geekscluster-serverlb'
INFO[0030] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0032] Cluster 'geekscluster' created successfully!
INFO[0032] You can now use it like this:
kubectl cluster-info
This is amazing stuff!!
It will automatically create a kubeconfig file at "~/.kube/config", and hence you can immediately begin interacting with your cluster via the "kubectl" command. So let us check our cluster information as follows:
$ kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:45467
CoreDNS is running at https://0.0.0.0:45467/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:45467/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Let us view the pods
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-ccb96694c-nv6bn 1/1 Running 0 32s
helm-install-traefik-crd-d56v5 0/1 Completed 0 33s
helm-install-traefik-q479j 0/1 Completed 1 33s
local-path-provisioner-5cf85fd84d-zgpk5 1/1 Running 0 32s
metrics-server-5985cbc9d7-z2slv 1/1 Running 0 32s
svclb-traefik-48bdc9ea-kpkrs 2/2 Running 0 17s
traefik-57b79cf995-5qhc8 1/1 Running 0 17s
And now you are ready to deploy your applications to test them locally, or to get up to your various playful escapades with the cluster. Have a happy and wild play!
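To get you started, here is a minimal smoke test, assuming the geekscluster from Step 5 is still running. The deployment name "hello-nginx" is hypothetical; the image is the stock nginx from Docker Hub, and the whole thing is wrapped in a function so it is easy to skip on a machine without a reachable cluster:

```shell
# A hypothetical smoke test: deploy nginx, wait for it, expose it, list pods.
smoke_test() {
  kubectl create deployment hello-nginx --image=nginx
  kubectl wait --for=condition=available deployment/hello-nginx --timeout=120s
  kubectl expose deployment hello-nginx --port=80 --type=ClusterIP
  kubectl get pods -l app=hello-nginx
}

# Only run it when kubectl can actually reach a cluster.
if kubectl cluster-info >/dev/null 2>&1; then
  smoke_test
fi
```

When you are done playing, `kubectl delete deployment hello-nginx` (and the matching service) cleans up.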
Final Remarks
With a local Kubernetes cluster running on a low compute-resource footprint, you can now learn and take advantage of the features Kubernetes provides, before you get into the multi-node, highly available clusters of production workloads. Have a wonderful time on your adventure.
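And as we noted earlier, k3d is not limited to single-node clusters. The sketch below uses a hypothetical cluster name, "demo"; the flags are real k3d options: `--agents` adds worker nodes, and `-p "8080:80@loadbalancer"` publishes host port 8080 to port 80 on the cluster's built-in load balancer, which is where Traefik listens:

```shell
# Hypothetical multi-node demo: one server, two agents, with a port mapping.
multi_node_demo() {
  k3d cluster create demo --agents 2 -p "8080:80@loadbalancer"
  kubectl get nodes          # should list one server plus two agents
  k3d cluster delete demo    # clean up when you are done
}

# Only attempt this where k3d is installed.
if command -v k3d >/dev/null 2>&1; then
  multi_node_demo
fi
```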
Stay tuned for more articles on Kubernetes topics.