Renan Hillesheim

K3s on Hetzner with Terraform

Introduction

This tutorial will help you set up a Kubernetes cluster using K3s. K3s is a lightweight Kubernetes distribution built for IoT and edge computing, which makes it a good fit for a cluster of small virtual machines. On top of that, the entire provisioning will be done with Terraform.

First things first

Before starting, make sure you have a Hetzner account. You will also need Terraform and kubectl installed.
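If you want to confirm that both CLIs are available before starting, a quick shell check works for any tool on your PATH:

```shell
# Report whether each required CLI is installed
for tool in terraform kubectl; do
  command -v "$tool" >/dev/null 2>&1 && echo "$tool found" || echo "$tool missing"
done
```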

Add hcloud provider on Terraform

Create a file called main.tf and add the hcloud provider to it; it supplies the resources we need for the Kubernetes cluster.

# Tell Terraform to include the hcloud provider
terraform {
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      # Here we use version 1.45.0, this may change in the future
      version = "1.45.0"
    }
  }
}

# Configure the Hetzner Cloud Provider with your token
provider "hcloud" {
  token = var.hcloud_token
}

Hetzner Cloud API Key

You must create a Hetzner Cloud API token in your account (in the Cloud Console, under your project's Security settings). After you have done that, copy the token into a .tfvars file as follows:

hcloud_token = "z7VJ4zqj2FRM8z..."
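One way to create the file and keep the token out of version control (the token value below is a placeholder; use your real token):

```shell
# Write the token into .tfvars (placeholder value for illustration)
HCLOUD_TOKEN="z7VJ4zqj2FRM8z-example"
printf 'hcloud_token = "%s"\n' "$HCLOUD_TOKEN" > .tfvars
# Make sure the token never ends up in version control
echo ".tfvars" >> .gitignore
cat .tfvars
```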

You also need to declare hcloud_token as a Terraform variable. Create a file called variables.tf containing:

variable "hcloud_token" {
  type      = string
  sensitive = true
}

SSH Key

Create a new SSH key (or use an existing one) to access the master node. In the Terraform repository, create a .ssh folder and put your public SSH key in a file called local_rsa.pub.
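If you don't have a key yet, one way to generate a dedicated pair in the expected location (file names as the article assumes):

```shell
# Generate a 4096-bit RSA key pair for the cluster; -N "" means no passphrase
mkdir -p .ssh
ssh-keygen -t rsa -b 4096 -f .ssh/local_rsa -N "" -q
```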

Network

We need to create a network that the Kubernetes nodes use to communicate with each other. We will use 10.0.0.0/16 as the network and 10.0.1.0/24 as the subnet. Put the Terraform configuration in a file called network.tf:

resource "hcloud_network" "private_network" {
  name     = "kubernetes-cluster"
  ip_range = "10.0.0.0/16"
}

resource "hcloud_network_subnet" "private_network_subnet" {
  type         = "cloud"
  network_id   = hcloud_network.private_network.id
  network_zone = "eu-central"
  ip_range     = "10.0.1.0/24"
}
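As a sanity check, the /24 subnet must fall inside the /16 network. The containment can be verified with plain shell arithmetic (a standalone sketch, not part of the Terraform files):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer
ip_to_int() {
  old_ifs=$IFS; IFS=.; set -- $1; IFS=$old_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

net=$(ip_to_int 10.0.0.0)                     # network address of 10.0.0.0/16
sub=$(ip_to_int 10.0.1.0)                     # network address of 10.0.1.0/24
mask=$(( (0xFFFFFFFF << 16) & 0xFFFFFFFF ))   # /16 netmask

# The subnet is contained if both share the same /16 prefix
[ $(( sub & mask )) -eq $(( net & mask )) ] && echo "10.0.1.0/24 is inside 10.0.0.0/16"
```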

Master Node Config

The master node’s configuration lives in master.tf and cloud-init.yaml. Let’s set up the resources for the master node in master.tf:

# Public IPv4 address that will be attached to the master server
resource "hcloud_primary_ip" "master_node_public_ip" {
  name          = "master-node-public-ip"
  datacenter    = "fsn1-dc14"
  type          = "ipv4"
  assignee_type = "server"
  auto_delete   = true
}

data "template_file" "master-node-config" {
  # cloud-init template with the K3s installation script
  # Note: the hashicorp/template provider is deprecated; Terraform's built-in
  # templatefile() function is the modern alternative
  template  = file("${path.module}/cloud-init.yaml")
  vars      = {
    # Public ssh to access the node
    local_ssh_public_key = file("${path.module}/.ssh/local_rsa.pub")
    # The resource worker-ssh-key is defined in worker.tf; the key is auto-generated by Terraform
    # This is used for the worker to communicate with the master node
    worker_ssh_public_key = tls_private_key.worker-ssh-key.public_key_openssh
    hcloud_token = var.hcloud_token
    hcloud_network = hcloud_network.private_network.id
    public_ip = tostring(hcloud_primary_ip.master_node_public_ip.ip_address)
  }
}

resource "hcloud_server" "master-node" {
  name        = "master-node"
  image       = "ubuntu-24.04"
  # Change the server type for a bigger VM
  server_type = "cx22"
  location    = "fsn1"
  # Render cloud-init.yaml for this node; without user_data the script would never run
  user_data   = data.template_file.master-node-config.rendered
  public_net {
    ipv4 = hcloud_primary_ip.master_node_public_ip.id
    ipv4_enabled = true
    ipv6_enabled = true
  }
  network {
    network_id = hcloud_network.private_network.id
    # IP Used by the master node, needs to be static
    # Here the worker nodes will use 10.0.1.1 to communicate with the master node
    ip         = "10.0.1.1"
  }

  # If we don't specify this, Terraform will create the resources in parallel
  # We want this node to be created after the private network is created
  depends_on = [hcloud_network_subnet.private_network_subnet]
}

# We output the public ip of the master node that will be used to connect to the cluster 
output "master_node_public_ip" {
  value = tostring(hcloud_primary_ip.master_node_public_ip.ip_address)
}

The cloud-init.yaml file contains the script for the K3s installation:

#cloud-config
packages:
  - curl
users:
  # Must match the user the worker and the scp command later connect as (cluster@...)
  - name: cluster
    ssh-authorized-keys:
      - ${local_ssh_public_key}
      - ${worker_ssh_public_key}
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash

runcmd:
  - apt-get update -y
  # K3s installation script:
  #   --disable-cloud-controller --kubelet-arg cloud-provider=external: Disable K3s's default cloud controller manager to use Hetzner's cloud controller manager
  #   --disable traefik: disable traefik ingress controller
  #   --tls-san ${public_ip}: Add additional hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the TLS cert
  #   --disable servicelb: Disable K3s load balancer so we can use Hetzner Load Balancer
  #   --flannel-iface enp7s0: Change flannel to use enp7s0, which should be the interface of the private network
  #   --disable metrics-server: Disable metrics server to save resources
  #   --write-kubeconfig-mode 644: Make the kubeconfig world-readable so it can be copied off the node later
  - curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--tls-san ${public_ip} --disable traefik --disable servicelb --flannel-iface enp7s0 --write-kubeconfig-mode 644 --disable metrics-server --disable-cloud-controller --kubelet-arg cloud-provider=external" sh -
  # Create secrets hcloud and hcloud-csi to be used by Hetzner's cloud controller manager
  - kubectl -n kube-system create secret generic hcloud --from-literal=token=${hcloud_token} --from-literal=network=${hcloud_network}
  - kubectl -n kube-system create secret generic hcloud-csi --from-literal=token=${hcloud_token}
  # Download Hetzner's cloud controller manager manifests
  - wget https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm-networks.yaml -O /root/ccm-networks.yaml
  # Replace the manifest's default cluster CIDR (10.244.0.0/16) with K3s's default (10.42.0.0/16)
  - sed -i 's|cluster-cidr=10.244.0.0/16|cluster-cidr=10.42.0.0/16|g' /root/ccm-networks.yaml
  # Install Hetzner's cloud controller manager
  - kubectl apply -f /root/ccm-networks.yaml
  # Install Hetzner's CSI drivers
  - kubectl apply -f https://raw.githubusercontent.com/hetznercloud/csi-driver/v2.6.0/deploy/kubernetes/hcloud-csi.yml
  - apt clean
  - apt autoclean

Worker Node Config

The worker node’s configuration lives in worker.tf and cloud-init-worker.yaml. worker.tf is very similar to master.tf; change count = 1 if you want more worker node instances.

resource "tls_private_key" "worker-ssh-key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

data "template_file" "worker-node-config" {
  template  = file("${path.module}/cloud-init-worker.yaml")
  vars      = {
    local_ssh_public_key = file("${path.module}/.ssh/local_rsa.pub")
    worker_ssh_public_key = tls_private_key.worker-ssh-key.public_key_openssh
    worker_ssh_private_key = base64encode(tls_private_key.worker-ssh-key.private_key_openssh)
  }
}

resource "hcloud_server" "worker-nodes" {
  count = 1

  # The name will be worker-node-0, worker-node-1, worker-node-2...
  name        = "worker-node-${count.index}"
  image       = "ubuntu-24.04"
  server_type = "cx22"
  location    = "fsn1"
  # Render cloud-init-worker.yaml; without user_data the join script would never run
  user_data   = data.template_file.worker-node-config.rendered
  public_net {
    ipv4_enabled = true
    ipv6_enabled = true
  }
  network {
    network_id = hcloud_network.private_network.id
  }
  # Master node has to be ready before the worker node
  depends_on = [hcloud_network_subnet.private_network_subnet, hcloud_server.master-node]
}
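If you prefer the number of workers to be configurable rather than hard-coded, one option is to introduce a worker_count variable (a sketch; this variable is not part of the original files) in variables.tf and reference it:

```hcl
variable "worker_count" {
  type    = number
  default = 1
}

resource "hcloud_server" "worker-nodes" {
  count = var.worker_count
  # ... rest unchanged
}
```

Scaling up is then a matter of terraform apply -var worker_count=3.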

Now the cloud-init-worker.yaml file for the worker node:

#cloud-config
packages:
  - curl
users:
  - name: cluster
    ssh-authorized-keys:
      - ${local_ssh_public_key}
      - ${worker_ssh_public_key}
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash

write_files:
  - path: /root/.ssh/id_rsa
    defer: true
    encoding: base64
    content: ${worker_ssh_private_key}
    permissions: '0600'

runcmd:
  - apt-get update -y
  # Wait for the master node's API server to come up before joining
  - until curl -sk https://10.0.1.1:6443; do sleep 5; done
  # Copy the join token from the master node over the private network
  - REMOTE_TOKEN=$(ssh -o StrictHostKeyChecking=accept-new cluster@10.0.1.1 sudo cat /var/lib/rancher/k3s/server/node-token)
  # Install the K3s agent and join the cluster
  - curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-iface enp7s0 --kubelet-arg cloud-provider=external" K3S_URL=https://10.0.1.1:6443 K3S_TOKEN=$REMOTE_TOKEN sh -
  - apt clean
  - apt autoclean

Now let’s apply the changes with Terraform. If this is your first run, execute terraform init to download the providers, then apply. Afterwards, wait until the instances are up and running; you can check in Hetzner’s Cloud Console.

terraform init
terraform apply -var-file .tfvars

Checking installation

Before running kubectl to check whether the K3s nodes are up and running, we need to copy the Kubernetes config file from the master. Replace ~/.ssh/id_rsa with the path to your SSH private key (the one whose public half you placed in .ssh/local_rsa.pub).

export MASTER_PUBLIC_IP=$(terraform output --raw master_node_public_ip) && \
    scp -i ~/.ssh/id_rsa cluster@${MASTER_PUBLIC_IP}:/etc/rancher/k3s/k3s.yaml ~/.kube/hetzner-config.yaml && \
    sed -i "s/127.0.0.1/${MASTER_PUBLIC_IP}/g" ~/.kube/hetzner-config.yaml
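Note that the sed expression must be in double quotes so the shell expands ${MASTER_PUBLIC_IP}; with single quotes the literal string would be written into the file. A self-contained demonstration on a dummy kubeconfig (203.0.113.10 is a documentation placeholder, not a real cluster IP):

```shell
# Demonstrate the kubeconfig rewrite on a dummy file
mkdir -p ~/.kube
printf 'server: https://127.0.0.1:6443\n' > ~/.kube/hetzner-config.yaml
MASTER_PUBLIC_IP="203.0.113.10"
# Double quotes let the shell expand ${MASTER_PUBLIC_IP} inside the sed expression
sed -i "s/127.0.0.1/${MASTER_PUBLIC_IP}/g" ~/.kube/hetzner-config.yaml
cat ~/.kube/hetzner-config.yaml
```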

Add the downloaded configuration file to KUBECONFIG:

export KUBECONFIG=${KUBECONFIG}:${HOME}/.kube/hetzner-config.yaml

Run kubectl to check the nodes in the cluster:

$ kubectl get nodes
NAME            STATUS   ROLES                  AGE   VERSION
master-node     Ready    control-plane,master   19d   v1.30.5+k3s1
worker-node-0   Ready    <none>                 19d   v1.30.5+k3s1

Conclusion

You have successfully finished the installation of a K3s Kubernetes cluster and are now ready to deploy your apps on it. The Git project with all the files and code described in this post can be found here.