Kind (short for “Kubernetes IN Docker”) is a command-line tool that lets you run Kubernetes clusters inside Docker containers. It is intended primarily for local development and testing; it is not recommended for production setups. Instead, it helps you test Kubernetes applications and configurations locally without needing a full-blown, production-grade Kubernetes cluster running in virtual machines or on a managed cloud platform.
Kind supports multi-node Kubernetes clusters, so you can simulate real-world Kubernetes scenarios with minimal hardware resources. It is also easy to test different versions of Kubernetes by simply specifying a node image version when creating your clusters. Finally, you can customize configurations such as node labels, network settings, and much more.
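As a quick illustration of the version pinning mentioned above (independent of the Terraform workflow used later in this guide), creating a cluster on a specific Kubernetes version is a single command; the image tag below is one published kindest/node release, so adjust it to the version you want:

```shell
# Create a throwaway cluster pinned to a specific Kubernetes version
kind create cluster --name demo --image kindest/node:v1.31.0

# List clusters, then clean up
kind get clusters
kind delete cluster --name demo
```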
In this article, we will cover the procedure for creating a Kubernetes cluster on Kind using Terraform automation. First, why this combination?
Why Terraform?
Leveraging Terraform’s infrastructure as code capabilities means you can consistently create and manage Kubernetes clusters, ensuring repeatability and ease of deployment.
Why Nginx Ingress?
We integrate Nginx as the ingress controller, which gives us a robust layer for managing external access to the services running in the Kubernetes cluster.
This combination is ideal for local development, testing, and learning environments, since it offers a simplified microservices environment that mimics production-like Kubernetes cluster setups.
1. Install Docker, Kind and kubectl tools
On your local or remote machine, depending on your setup, install the Docker, kubectl, and Kind tools.
Install Docker Engine
Run the following command to install Docker in your system:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
If this is not working, refer to the detailed how-to guide for Linux systems or the official Docker documentation page.
Verify it’s installed by checking the docker and compose versions:
docker version
docker compose version
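Optionally, if you want to run docker without sudo (this assumes your distribution uses the standard docker group created by the install script), add your user to the group and start a new login session:

```shell
# Add the current user to the docker group (takes effect on next login)
sudo usermod -aG docker "$USER"
```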
Install Kind on the system
Get latest release of Kind:
VER=$(curl -s https://api.github.com/repos/kubernetes-sigs/kind/releases/latest|grep tag_name|cut -d '"' -f 4|sed 's/v//')
- On Linux system:
Download the amd64 or arm64 binary:
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v${VER}/kind-linux-amd64
# For ARM64
[ $(uname -m) = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v${VER}/kind-linux-arm64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
- On macOS:
# For Intel Macs
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v${VER}/kind-darwin-amd64
# For M1 / ARM Macs
[ $(uname -m) = arm64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v${VER}/kind-darwin-arm64
Make the binary file executable and move it to a location in your PATH:
chmod +x kind
sudo mv ./kind /usr/local/bin/kind
You can query for the version number:
$ kind version
kind v0.24.0 go1.22.6 linux/amd64
Install Kubectl
Install the kubectl CLI tool used to administer Kubernetes clusters:
- On Linux systems:
# AMD64 / x86_64
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# ARM64
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
- On macOS:
# Intel
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
# Apple Silicon
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
Verify the installation:
kubectl version --client
2. Set up the Terraform environment
In this section we will install Terraform and write the Terraform code for creating a Kind Kubernetes cluster.
Easy setup
The code used in this article is available in our GitHub repository. Follow the steps provided in the repository for the quick procedure; for the manual procedure, follow the guide below.
1) Install Terraform
Install Terraform on your system:
- macOS
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
- Linux
VER=$(curl -s https://api.github.com/repos/hashicorp/terraform/releases/latest|grep tag_name|cut -d '"' -f 4|sed 's/v//')
wget https://releases.hashicorp.com/terraform/${VER}/terraform_${VER}_linux_amd64.zip
unzip terraform_${VER}_linux_amd64.zip
sudo mv terraform /usr/local/bin
terraform --version
2) Create terraform code
Begin by creating a working directory for Kind:
mkdir ~/kind && cd ~/kind
Create the Terraform main.tf file:
vim main.tf
Define the required Terraform version and the kind provider:
terraform {
  required_version = ">= 1.9.0"
  required_providers {
    kind = {
      source  = "tehcyx/kind"
      version = ">= 0.6.0"
    }
  }
}
Initialize your Terraform environment:
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding tehcyx/kind versions matching ">= 0.6.0"...
- Installing tehcyx/kind v0.6.0...
- Installed tehcyx/kind v0.6.0 (self-signed, key ID F471C773A530ED1B)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
3) Add a code block for Kind cluster creation
We need to update the main.tf file to include cluster creation using the kind_cluster resource.
resource "kind_cluster" "cluster" {
  name           = local.cluster_name
  node_image     = "kindest/node:v${local.cluster_version}"
  wait_for_ready = true

  kind_config {
    kind        = "Cluster"
    api_version = "kind.x-k8s.io/v1alpha4"

    # Worker configuration
    node {
      role = "worker"
      kubeadm_config_patches = [
        "kind: JoinConfiguration\nnodeRegistration:\n  kubeletExtraArgs:\n    node-labels: \"ingress-ready=true\"\n"
      ]
      extra_port_mappings {
        container_port = 80
        host_port      = 8080
        listen_address = "0.0.0.0"
      }
    }

    # Control plane
    node {
      role = "control-plane"
    }
  }
}
Here is a brief summary of the kind_cluster configuration code:
- Cluster creation: defines a Kubernetes cluster using Kind (Kubernetes IN Docker).
- Cluster name: set from local.cluster_name.
- Node image: specifies the Kubernetes node image version using kindest/node:v${local.cluster_version}.
- Wait for ready: the cluster setup waits until all nodes are ready (wait_for_ready = true).
- Kind configuration: uses Kind-specific configuration with kind: Cluster and API version kind.x-k8s.io/v1alpha4.
- Worker node: role set to “worker”; a custom kubelet configuration patch adds the node label ingress-ready=true; port mappings expose container port 80 on host port 8080 on all interfaces (0.0.0.0).
- Control plane node: role set to “control-plane” to handle Kubernetes management tasks.
Define local variables for use in the configuration:
locals {
  cluster_name    = "cluster1"
  cluster_version = "1.31.0"
}
- cluster_name: sets the cluster name to "cluster1".
- cluster_version: specifies the Kubernetes version to use, "1.31.0".
The complete file looks like this:
terraform {
  required_version = ">= 1.9.0"
  required_providers {
    kind = {
      source  = "tehcyx/kind"
      version = ">= 0.6.0"
    }
  }
}

resource "kind_cluster" "cluster" {
  name           = local.cluster_name
  node_image     = "kindest/node:v${local.cluster_version}"
  wait_for_ready = true

  kind_config {
    kind        = "Cluster"
    api_version = "kind.x-k8s.io/v1alpha4"

    # Worker configuration
    node {
      role = "worker"
      kubeadm_config_patches = [
        "kind: JoinConfiguration\nnodeRegistration:\n  kubeletExtraArgs:\n    node-labels: \"ingress-ready=true\"\n"
      ]
      extra_port_mappings {
        container_port = 80
        host_port      = 8080
        listen_address = "0.0.0.0"
      }
    }

    # Control plane
    node {
      role = "control-plane"
    }
  }
}

locals {
  cluster_name    = "cluster1"
  cluster_version = "1.31.0" # you can also use the latest tag
}

output "cluster_endpoint" {
  value = kind_cluster.cluster.endpoint
}
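Once terraform apply has run (next section), the cluster_endpoint output can be read back for use in scripts; a small sketch:

```shell
# Print just the endpoint value, without quotes, for scripting
terraform output -raw cluster_endpoint
```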
3. Create the Kind Kubernetes cluster using Terraform
Check whether the configuration is valid:
$ terraform validate
Success! The configuration is valid.
Perform a dry run to see what Terraform would do if you executed terraform apply:
$ terraform plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# kind_cluster.cluster will be created
+ resource "kind_cluster" "cluster" {
+ client_certificate = (known after apply)
+ client_key = (known after apply)
+ cluster_ca_certificate = (known after apply)
+ completed = (known after apply)
+ endpoint = (known after apply)
+ id = (known after apply)
+ kubeconfig = (known after apply)
+ kubeconfig_path = (known after apply)
+ name = "cluster1"
+ node_image = "kindest/node:v1.31.0"
+ wait_for_ready = true
+ kind_config {
+ api_version = "kind.x-k8s.io/v1alpha4"
+ kind = "Cluster"
+ node {
+ kubeadm_config_patches = [
+ <<-EOT
kind: JoinConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
EOT,
]
+ role = "worker"
+ extra_port_mappings {
+ container_port = 80
+ host_port = 8080
+ listen_address = "0.0.0.0"
}
}
+ node {
+ role = "control-plane"
}
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
Finally, run terraform apply to execute the actions proposed in the plan. This applies the actual changes needed to reach the desired state of the infrastructure.
$ terraform apply
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# kind_cluster.cluster will be created
+ resource "kind_cluster" "cluster" {
+ client_certificate = (known after apply)
+ client_key = (known after apply)
+ cluster_ca_certificate = (known after apply)
+ completed = (known after apply)
+ endpoint = (known after apply)
+ id = (known after apply)
+ kubeconfig = (known after apply)
+ kubeconfig_path = (known after apply)
+ name = "cluster1"
+ node_image = "kindest/node:v1.31.0"
+ wait_for_ready = true
+ kind_config {
+ api_version = "kind.x-k8s.io/v1alpha4"
+ kind = "Cluster"
+ node {
+ kubeadm_config_patches = [
+ <<-EOT
kind: JoinConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
EOT,
]
+ role = "worker"
+ extra_port_mappings {
+ container_port = 80
+ host_port = 8080
+ listen_address = "0.0.0.0"
}
}
+ node {
+ role = "control-plane"
}
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
You can see the progress of the changes and any errors that occur:
kind_cluster.cluster: Creating...
kind_cluster.cluster: Still creating... [10s elapsed]
kind_cluster.cluster: Still creating... [20s elapsed]
kind_cluster.cluster: Still creating... [30s elapsed]
kind_cluster.cluster: Still creating... [40s elapsed]
kind_cluster.cluster: Still creating... [50s elapsed]
kind_cluster.cluster: Creation complete after 51s [id=cluster1-kindest/node:v1.31.0]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
cluster_endpoint = "https://127.0.0.1:42951"
4. Access the Kind cluster
The configuration file used by the Kubernetes command-line tool is located at ~/.kube/config:
cat ~/.kube/config
List nodes in the cluster to validate it’s working:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
cluster1-control-plane Ready control-plane 17m v1.31.0
cluster1-worker Ready <none> 16m v1.31.0
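Kind registers a kubeconfig context named kind-&lt;cluster name&gt;, so with the cluster name cluster1 used above you can also target the cluster explicitly; a quick sketch:

```shell
# List available contexts, then query the Kind cluster explicitly
kubectl config get-contexts
kubectl cluster-info --context kind-cluster1
kubectl get nodes -o wide --context kind-cluster1
```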
5. Install Nginx Ingress
Create a provider.tf file:
vim provider.tf
Add kubernetes and helm provider definitions:
provider "kubernetes" {
  # Docs: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs
  config_path    = "~/.kube/config"
  config_context = local.cluster_context
}

provider "helm" {
  # Docs: https://registry.terraform.io/providers/hashicorp/helm/latest/docs
  kubernetes {
    config_path    = "~/.kube/config"
    config_context = local.cluster_context
  }
}
Update the main.tf file local variables to include cluster_context and ingress_class_name, and add the resource blocks for the ingress deployment. The updated main.tf file should look like this:
terraform {
  required_version = ">= 1.9.0"
  required_providers {
    kind = {
      source  = "tehcyx/kind"
      version = ">= 0.6.0"
    }
  }
}

# Create kind cluster
resource "kind_cluster" "cluster" {
  name           = local.cluster_name
  node_image     = "kindest/node:v${local.cluster_version}"
  wait_for_ready = true

  kind_config {
    kind        = "Cluster"
    api_version = "kind.x-k8s.io/v1alpha4"

    # Worker configuration
    node {
      role = "worker"
      kubeadm_config_patches = [
        "kind: JoinConfiguration\nnodeRegistration:\n  kubeletExtraArgs:\n    node-labels: \"ingress-ready=true\"\n"
      ]
      extra_port_mappings {
        container_port = 80
        host_port      = 8080
        listen_address = "0.0.0.0"
      }
    }

    # Control plane
    node {
      role = "control-plane"
    }
  }
}

# Null resource to wait for the Kind cluster to be ready
resource "null_resource" "wait_for_cluster" {
  # This depends on the Kind cluster being created first
  depends_on = [kind_cluster.cluster]
}

# Deploy Nginx Ingress
resource "helm_release" "nginx_ingress" {
  # This depends on the Kind cluster being created first
  depends_on = [null_resource.wait_for_cluster]

  name             = "nginx-ingress"
  repository       = var.nginx_ingress.chart_repository
  chart            = var.nginx_ingress.chart_name
  version          = var.nginx_ingress.chart_version
  namespace        = var.nginx_ingress.namespace
  create_namespace = true

  values = [templatefile("${path.root}/nginx-helm-chart-values-template.yaml", {
    ingressClassName = var.nginx_ingress.ingress_class_name
    replicas         = var.nginx_ingress.replicas
  })]
}

# Outputs
output "cluster_endpoint" {
  value = kind_cluster.cluster.endpoint
}

output "nginx_ingress_app_version" {
  value = helm_release.nginx_ingress.metadata[0].app_version
}

# Local variables
locals {
  cluster_name       = "kind-cluster1"
  cluster_version    = "1.31.0"
  cluster_context    = "kind-${local.cluster_name}"
  ingress_class_name = "nginx"
}
Create the Nginx Ingress Helm values template:
vim nginx-helm-chart-values-template.yaml
Paste the following contents into the file. You can customize it to your liking.
controller:
  ingressClassResource:
    default: true
    name: ${ingressClassName}
  replicaCount: ${replicas}
  ingressClass: non-existing
  admissionWebhooks:
    enabled: false
  hostNetwork: true
  service:
    type: NodePort
Define the required variables in the variables.tf file:
variable "nginx_ingress" {
  description = "Variables set for deployment of the Nginx Ingress Controller."
  type = object({
    namespace          = string
    replicas           = number
    ingress_class_name = string
    chart_repository   = string
    chart_name         = string
    chart_version      = string
  })
  default = {
    namespace          = "nginx-ingress"
    replicas           = 1
    ingress_class_name = "nginx"
    chart_repository   = "https://kubernetes.github.io/ingress-nginx"
    chart_name         = "ingress-nginx"
    chart_version      = "4.11.2"
  }
}
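Because nginx_ingress is a plain input variable, you can override any of the defaults without editing variables.tf, for example with a hypothetical terraform.tfvars file like this sketch:

```hcl
# terraform.tfvars (hypothetical override of the nginx_ingress defaults)
nginx_ingress = {
  namespace          = "nginx-ingress"
  replicas           = 2
  ingress_class_name = "nginx"
  chart_repository   = "https://kubernetes.github.io/ingress-nginx"
  chart_name         = "ingress-nginx"
  chart_version      = "4.11.2"
}
```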
Initialize your working directory using terraform init:
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of tehcyx/kind from the dependency lock file
- Finding latest version of hashicorp/null...
- Reusing previous version of hashicorp/helm from the dependency lock file
- Using previously-installed tehcyx/kind v0.6.0
- Installing hashicorp/null v3.2.2...
- Installed hashicorp/null v3.2.2 (signed by HashiCorp)
- Using previously-installed hashicorp/helm v2.15.0
- Using previously-installed hashicorp/kubernetes v2.32.0
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
You can reformat your configuration files in the standard style using the following command:
$ terraform fmt
main.tf
You can now run the Terraform plan and apply commands:
$ terraform apply
kind_cluster.cluster: Refreshing state... [id=kind-cluster1-kindest/node:v1.31.0]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# helm_release.nginx_ingress will be created
+ resource "helm_release" "nginx_ingress" {
+ atomic = false
+ chart = "ingress-nginx"
+ cleanup_on_fail = false
+ create_namespace = true
+ dependency_update = false
+ disable_crd_hooks = false
+ disable_openapi_validation = false
+ disable_webhooks = false
+ force_update = false
+ id = (known after apply)
+ lint = false
+ manifest = (known after apply)
+ max_history = 0
+ metadata = (known after apply)
+ name = "nginx-ingress"
+ namespace = "nginx-ingress"
+ pass_credentials = false
+ recreate_pods = false
+ render_subchart_notes = true
+ replace = false
+ repository = "https://kubernetes.github.io/ingress-nginx"
+ reset_values = false
+ reuse_values = false
+ skip_crds = false
+ status = "deployed"
+ timeout = 300
+ values = [
+ <<-EOT
controller:
ingressClassResource:
default: true
name: nginx
replicaCount: 1
ingressClass: non-existing
admissionWebhooks:
enabled: false
hostNetwork: true
service:
type: NodePort
EOT,
]
+ verify = false
+ version = "4.11.2"
+ wait = true
+ wait_for_jobs = false
}
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
helm_release.nginx_ingress: Creating...
helm_release.nginx_ingress: Still creating... [10s elapsed]
helm_release.nginx_ingress: Still creating... [20s elapsed]
helm_release.nginx_ingress: Creation complete after 23s [id=nginx-ingress]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
cluster_endpoint = "https://127.0.0.1:42951"
nginx_ingress_app_version = "1.10.3"
Check that the ingress resources were created in the cluster:
$ kubectl get pods -n nginx-ingress
NAME READY STATUS RESTARTS AGE
nginx-ingress-ingress-nginx-controller-58957796d6-jx9hk 1/1 Running 0 4m59s
$ kubectl -n nginx-ingress get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-ingress-nginx-controller NodePort 10.96.123.130 <none> 80:30404/TCP,443:32629/TCP 5m30s
To check the name of the Ingress class use the command:
$ kubectl get ingressclass
NAME CONTROLLER PARAMETERS AGE
nginx k8s.io/ingress-nginx <none> 26h
Remember we configured our ingress to use host port 8080 on the local machine:
$ ss -tunelp|grep 8080
tcp LISTEN 0 4096 0.0.0.0:8080 0.0.0.0:* users:(("docker-proxy",pid=2915,fd=4)) ino:17085 sk:5 cgroup:/system.slice/docker.service <->
If port 80 is free on your local system, you can update the port mappings to use it instead:
extra_port_mappings {
  container_port = 80
  host_port      = 80
  listen_address = "0.0.0.0"
}
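Note that changes to kind_config generally cannot be applied to a running Kind cluster in place, so after editing the port mappings you will likely need to recreate the cluster; one way is Terraform's -replace flag:

```shell
# Plan and apply a forced recreation of the Kind cluster
terraform apply -replace=kind_cluster.cluster
```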
Testing that the Ingress is working
We can deploy a simple application to test that our Nginx Ingress is working. Start by creating a demo app manifest file:
vim demo-ingress-app.yml
Add the following contents to the file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
---
kind: Pod
apiVersion: v1
metadata:
  name: apple-app
  labels:
    app: apple
spec:
  containers:
    - name: apple-app
      image: hashicorp/http-echo
      args:
        - "-text=apple"
---
kind: Service
apiVersion: v1
metadata:
  name: apple-service
spec:
  selector:
    app: apple
  ports:
    - port: 5678 # Default port for the image
---
kind: Pod
apiVersion: v1
metadata:
  name: banana-app
  labels:
    app: banana
spec:
  containers:
    - name: banana-app
      image: hashicorp/http-echo
      args:
        - "-text=banana"
---
kind: Service
apiVersion: v1
metadata:
  name: banana-service
spec:
  selector:
    app: banana
  ports:
    - port: 5678 # Default port for the image
Use kubectl apply to create the actual resources in the Kubernetes cluster:
kubectl apply -f demo-ingress-app.yml
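Before wiring up the ingress, it is worth confirming the demo pods and services came up:

```shell
# Check that the pods are Running and the services exist
kubectl get pods apple-app banana-app
kubectl get svc apple-service banana-service
```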
Create a manifest file for the ingress:
vim demo-ingress.yml
Add contents as follows:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: webapp.example.com
      http:
        paths:
          - path: /apple
            pathType: Prefix
            backend:
              service:
                name: apple-service
                port:
                  number: 5678
          - path: /banana
            pathType: Prefix
            backend:
              service:
                name: banana-service
                port:
                  number: 5678
Update ingressClassName to your configured value, and host to the FQDN you want to use for the service.
Create the ingress resource in your cluster; it directs traffic for the configured host to the deployed services:
kubectl apply -f demo-ingress.yml
Confirm this was created successfully:
$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
webapp-ingress nginx webapp.example.com 10.96.171.245 80 4m10s
The easiest way to use webapp.example.com as configured is to map the domain to your local machine's IP address, 127.0.0.1 (or the actual IPv4 address):
$ sudo vim /etc/hosts
127.0.0.1 webapp.example.com
Test service access through the Ingress using curl:
curl http://webapp.example.com:8080/banana
curl http://webapp.example.com:8080/apple
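If you prefer not to edit /etc/hosts, curl can pin the hostname to 127.0.0.1 for a single request using its --resolve option:

```shell
# Map webapp.example.com:8080 to 127.0.0.1 just for this request
curl --resolve webapp.example.com:8080:127.0.0.1 http://webapp.example.com:8080/apple
```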
If the ingress was set to listen on local port 80, you don't need to specify :8080. It would just be:
curl http://webapp.example.com/banana
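When you are done experimenting, everything can be torn down with Terraform, or, as a fallback, by deleting the Kind cluster directly using the name set in your locals:

```shell
# Destroy all Terraform-managed resources (cluster and Helm release)
terraform destroy

# Fallback: delete the Kind cluster directly
kind delete cluster --name kind-cluster1
```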
Conclusion
We hope this guide has helped you set up a Kind (Kubernetes in Docker) cluster and deploy Nginx Ingress using Terraform automation. Terraform gives you an automated, repeatable way to manage local Kubernetes environments.
With the tools highlighted in this article, you can orchestrate and manage complex Kubernetes configurations more efficiently, enhancing reliability and productivity in your daily development workflow. Check out our Kubernetes consultancy service catalog to see the services we offer.