
☸️ Setting up a K3s cluster

Abstract

How to set up a K3s cluster on physical nodes. I will be using five laptops running Ubuntu 24.04 LTS.


Prerequisites

  • Physical nodes to act as cluster members (in my case, five laptops running Ubuntu 24.04 LTS), reachable over the same network.
  • A local workstation (Windows in this guide) from which to manage the cluster with kubectl and helm.


1. Setting up the first node

Start by installing K3s:

curl -sfL https://get.k3s.io | sh -

To check if K3S is running:

kubectl get nodes

We can also see that the Helm install jobs and Traefik are running:

kubectl get pods -A

Now we need to define this node as the first server node of our new cluster. For this, we can set the parameter "cluster-init" to true.

nano /etc/rancher/k3s/config.yaml

Add the following:

cluster-init: true

Restart K3S:

systemctl restart k3s
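
If you want to double-check that the service came back up after the restart, you can check its status and the node state again:

systemctl status k3s
kubectl get nodes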


2. Access the cluster from your local workstation

On Windows: https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/

In CMD, run (change the version to the latest if needed):

curl.exe -LO "https://dl.k8s.io/release/v1.34.0/bin/windows/amd64/kubectl.exe"

kubectl version --client

Create a directory for the kubeconfig:

mkdir %USERPROFILE%\.kube

Navigate to that directory:

C:\Users\<YourUsername>\.kube

Make sure that in the Folder Options, you have unchecked "Hide extensions for known file types".

In that directory create a file named config.txt

On a server node:

sudo cat /etc/rancher/k3s/k3s.yaml

Copy the output into the config.txt file on your Windows machine. If needed, replace the line that looks like this:

server: https://127.0.0.1:6443

with the IP of your k3s node

server: https://<k3s-ip>:6443

Rename config.txt by removing the ".txt" part, so that the file is just named config.

Now in CMD or PowerShell, test the connection:

kubectl get nodes
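
As an alternative to renaming the file, you can point kubectl at any kubeconfig via the KUBECONFIG environment variable. A minimal CMD example, assuming a hypothetical file name k3s-config:

set KUBECONFIG=%USERPROFILE%\.kube\k3s-config
kubectl get nodes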

3. Install helm on your local workstation

From your workstation, in a terminal, run the following to install helm:

winget install Helm.Helm
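
To confirm the installation (you may need to open a new terminal so the PATH update is picked up):

helm version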

4. Use MetalLB instead of the built-in load balancer

Instead of using the built-in load balancer (ServiceLB), we will install MetalLB.

On the K3s server node, edit the config file:

nano /etc/rancher/k3s/config.yaml

Add the following (as a list, so that we can extend it later):

disable:
  - servicelb

Restart K3S:

systemctl restart k3s

From the terminal on your Windows workstation, run:

helm repo add metallb https://metallb.github.io/metallb
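
Then refresh the local chart index:

helm repo update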

Let's create a separate namespace for metallb:

kubectl create namespace metallb-namespace

And install metallb in the namespace:

helm install metallb metallb/metallb --namespace metallb-namespace
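
To check that the MetalLB controller and speaker pods come up:

kubectl get pods -n metallb-namespace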

5. Configure MetalLB

Let's create a manifest file in the K3s auto-deploy directory on the server node; K3s automatically applies any manifest placed in /var/lib/rancher/k3s/server/manifests/:

nano /var/lib/rancher/k3s/server/manifests/metallb.yaml

In the file add the following:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-namespace
spec:
  addresses:
  - 192.168.70.100-192.168.70.140
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: k3s-l2
  namespace: metallb-namespace
spec:
  ipAddressPools:
  - default-pool

Let's explain the above:

  • The first part declares which IP addresses LoadBalancer services can use.
  • The second part advertises that IP range on the local network.
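
To confirm that K3s picked up the manifest, you can list the MetalLB resources it created:

kubectl get ipaddresspools.metallb.io -n metallb-namespace
kubectl get l2advertisements.metallb.io -n metallb-namespace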

6. Deploy a test webpage

Create a namespace for it:

kubectl create namespace test-nginx

Then, on the server node, create a manifest file for the deployment:

nano /var/lib/rancher/k3s/server/manifests/nginx-deployment.yaml

With the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
  namespace: test-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        configMap:
          name: nginx-html
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-html
  namespace: test-nginx
data:
  index.html: |
    <html>
    <head><title>Hello NGINX</title></head>
    <body>
      <h1>Hello from k3s + MetalLB!</h1>
    </body>
    </html>

  • Deployment runs a single NGINX pod.
  • ConfigMap provides a custom index.html.
  • volumeMounts mounts the HTML into NGINX’s default directory.

Create the LoadBalancer service in a second manifest file, /var/lib/rancher/k3s/server/manifests/nginx-service.yaml, with the following content:

apiVersion: v1
kind: Service
metadata:
  name: nginx-test-lb
  namespace: test-nginx
spec:
  selector:
    app: nginx-test
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
Check that the pod and service are running:

kubectl get pods -n test-nginx
kubectl get svc -n test-nginx

Example output:

NAME            TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
nginx-test-lb   LoadBalancer   10.43.34.214   192.168.10.21   80:32195/TCP   48s

Now you should be able to browse to the external IP above and see a simple webpage.

To delete our test, you can simply delete the manifest files:

rm /var/lib/rancher/k3s/server/manifests/nginx-deployment.yaml
rm /var/lib/rancher/k3s/server/manifests/nginx-service.yaml

Verify that the pod and service are gone:

kubectl get pods -n test-nginx
kubectl get svc -n test-nginx

To delete the namespace:

kubectl delete namespace test-nginx

7. Join other nodes

To get the cluster join token, run on the first server node:

sudo cat /var/lib/rancher/k3s/server/node-token

Then on your new server nodes, use the command:

curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server --disable=servicelb --disable=traefik --server https://<ip or hostname of server1>:6443

Then on your new worker nodes, use the command:

curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - agent --server https://<ip or hostname of server1>:6443
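
Once the installers finish, the new nodes should show up from your workstation (it can take a short while for them to become Ready):

kubectl get nodes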

8. Use the Traefik Helm chart instead of the built-in Traefik ingress controller

We need to start by disabling the built-in Traefik ingress controller.

nano /etc/rancher/k3s/config.yaml

Extend the disable list from step 4 so that it also includes Traefik:

disable:
  - servicelb
  - traefik

Restart K3S:

systemctl restart k3s

Add the Traefik repository:

helm repo add traefik https://traefik.github.io/charts

helm repo update

Create a namespace for Traefik:

kubectl create namespace traefik

Traefik is configured using a values file, which we will store on our Windows workstation. I will save mine under C:\Users\<user>\.kube\traefik\values.yml (do note that it is recommended to keep a backup of this file, since in its current state it only exists locally).

Paste in the default values, which you can find at https://artifacthub.io/packages/helm/traefik/traefik: select "Default values" on the right and copy the contents into the file.

Now we can install Traefik, in the traefik namespace, with the values.yml file:

helm install traefik traefik/traefik -f C:\Users\cedric\.kube\traefik\values.yml -n traefik

Example output:

C:\Users\cedric>helm install traefik traefik/traefik -f C:\Users\cedric\.kube\traefik\values.yml -n traefik
level=WARN msg="unable to find exact version; falling back to closest available version" chart=traefik requested="" selected=37.4.0
NAME: traefik
LAST DEPLOYED: Thu Dec  4 21:51:02 2025
NAMESPACE: traefik
STATUS: deployed
REVISION: 1
DESCRIPTION: Install complete
TEST SUITE: None
NOTES:
traefik with docker.io/traefik:v3.6.2 has been deployed successfully on traefik namespace !

If you want to make changes to Traefik afterwards, you can do so by editing the values.yml file and applying the changes by running:

helm upgrade traefik traefik/traefik --namespace=traefik --values C:\Users\cedric\.kube\traefik\values.yml
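
As a small illustration of the kind of change you might make, the snippet below bumps the number of Traefik pods. The deployment.replicas key is taken from the chart's default values, but double-check it against the values file you copied, since chart layouts change between versions:

# values.yml (excerpt); deployment.replicas assumed from the chart's default values
deployment:
  replicas: 2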

To see if Traefik is deployed:

helm list -n traefik

kubectl -n traefik get all

Example output:

C:\Users\cedric>kubectl -n traefik get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/traefik-59ddf46749-tvqq2   1/1     Running   0          3m30s

NAME              TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
service/traefik   LoadBalancer   10.43.83.229   192.168.10.21   80:30586/TCP,443:30246/TCP   3m30s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/traefik   1/1     1            1           3m30s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/traefik-59ddf46749   1         1         1       3m30s

In the example above, we can see that Traefik is exposed on external IP 192.168.10.21.

9. Traefik Ingress Route

Let's set up a simple NGINX deployment and create an ingress route for it. On my Windows workstation I created a test deployment file at C:\Users\cedric\.kube\test\test.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx-deploy-test
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-test
  template:
    metadata:
      labels:
        run: nginx-test
    spec:
      volumes:
      - name: webdata
        emptyDir: {}
      initContainers:
      - name: web-content
        image: busybox
        volumeMounts:
        - name: webdata
          mountPath: "/webdata"
        command: ["/bin/sh", "-c", 'echo "<h1>I am a test</h1>" > /webdata/index.html']
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: webdata
          mountPath: "/usr/share/nginx/html"

Apply the manifest to create the test deployment, then check the result:

kubectl apply -f C:\Users\cedric\.kube\test\test.yml
kubectl get all

Example output:

C:\Users\cedric>kubectl get all
NAME                                     READY   STATUS    RESTARTS   AGE
pod/nginx-deploy-test-5b8df89786-x2xr9   1/1     Running   0          2m58s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   45h

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deploy-test   1/1     1            1           2m58s

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deploy-test-5b8df89786   1         1         1       2m58s

Now we need to expose this deployment:

kubectl expose deploy nginx-deploy-test --port 80

And we can confirm this here:

C:\Users\cedric>kubectl get svc
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes          ClusterIP   10.43.0.1      <none>        443/TCP   45h
nginx-deploy-test   ClusterIP   10.43.140.38   <none>        80/TCP    43s

Now for the ingress route, create a file C:\Users\cedric\.kube\test\ingressroute.yml with:

---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: nginx
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`nginx.example.com`)
      kind: Rule
      services:
        - name: nginx-deploy-test
          port: 80

And deploy it:

kubectl create -f C:\Users\cedric\.kube\test\ingressroute.yml

We can verify this:

C:\Users\cedric>kubectl get ingressroutes
NAME    AGE
nginx   2m55s

Now you need to add nginx.example.com to your DNS server (or to the hosts file on your local workstation) so that it points to 192.168.10.21, where Traefik is exposed as shown above.
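
For a quick local test without touching DNS, you can add the entry to the Windows hosts file at C:\Windows\System32\drivers\etc\hosts (editing it requires an elevated editor):

192.168.10.21 nginx.example.com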

Now if you browse to http://nginx.example.com, you should see the test page served by the NGINX deployment.