Raspberry Pi K3s Cluster Part 3 Storage

Posted on Jun 13, 2025

There are several storage options we could use, such as MinIO, Rook, or, in this case, Longhorn. While the other two are great, I wanted something that was easier to deploy in this kind of environment and didn’t require a lot of resources. Although backups are incredibly important, this part will focus solely on installing Longhorn and getting the necessary components in place so we can create PVCs and PVs.

Prerequisites

  1. You have a cluster with a CNI installed and functioning or you’ve followed Part 2.
  2. You have an available disk or partition dedicated to Longhorn storage

How you want to use the Longhorn UI determines the remaining requirements. If you don’t want any authentication, you can use Cilium’s built-in ingress or gateway. To enable basic auth, you will need a different ingress or gateway class, along with a generic secret containing the basic auth credentials in the longhorn-system namespace. TLS is recommended; for that you will need cert-manager installed and configured, along with a TLS secret in the longhorn-system namespace.
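Before going further, it’s worth confirming that cert-manager and your chosen ingress class are actually available. A quick check, assuming cert-manager is running in its default cert-manager namespace:

```shell
# Confirm cert-manager pods are running (default namespace assumed)
kubectl -n cert-manager get pods

# List the ingress classes available in the cluster
kubectl get ingressclass
```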

Software

  • Longhorn – Block storage used for persistent volumes
  • Packages
    • open-iscsi
    • nfs-common
    • multipath-tools
  • Helm
  • Enabled kernel options
    • NFS v4.1
    • NFS v4.2

Setup

If you haven’t done so already, attach and prepare the dedicated storage on the worker nodes.

Install Required Packages

Before we can install Longhorn, we need to make sure that the required software is installed on each node that will be used for storage.



apt install open-iscsi nfs-common multipath-tools
systemctl enable --now iscsid
systemctl enable --now multipathd
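You can verify the services came up cleanly before moving on:

```shell
# Both services should report "active"
systemctl is-active iscsid multipathd

# Confirm the iSCSI initiator has a name assigned
cat /etc/iscsi/initiatorname.iscsi
```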

Make sure that NFS v4.1 and v4.2 are enabled in the kernel. You can check this by running:



grep CONFIG_NFS_V4_1 /boot/config-$(uname -r)
CONFIG_NFS_V4_1=y
CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org"
# CONFIG_NFS_V4_1_MIGRATION is not set



grep CONFIG_NFS_V4_2 /boot/config-$(uname -r)
CONFIG_NFS_V4_2=y
CONFIG_NFS_V4_2_READ_PLUS=y
CONFIG_NFS_V4_2_SSC_HELPER=y
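If either option is set to m instead of y, NFS support is built as a kernel module rather than built in. In that case, make sure the module loads; a quick sketch (nfs is the standard module name on Debian-based kernels):

```shell
# Load the NFS client module and confirm nfs4 is registered
sudo modprobe nfs
grep nfs4 /proc/filesystems
```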

Longhorn

We’re going to install Longhorn using Helm. First we add the repository, then create a values file, and finally install the chart.



helm repo add longhorn https://charts.longhorn.io
helm repo update

As mentioned above, if you plan on enabling the Longhorn UI with basic authentication, you’ll need to create a secret in the longhorn-system namespace. You can do this by running the following commands:



USER=USERNAME
PASSWORD=PASSWORD
echo "${USER}:$(openssl passwd -apr1 -stdin <<< "${PASSWORD}")" > basic-auth.txt
kubectl -n longhorn-system create secret generic basic-auth --from-file=basic-auth.txt
rm basic-auth.txt

Note: Replace USERNAME and PASSWORD with your desired credentials.
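To sanity-check the secret before wiring it into the ingress, you can decode it back out:

```shell
# Show the htpasswd entry stored in the secret
kubectl -n longhorn-system get secret basic-auth \
  -o jsonpath='{.data.basic-auth\.txt}' | base64 -d
```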

Create a Longhorn values file called longhorn-values.yaml:


    
global:
  nodeSelector:
    storage: "longhorn"

longhornManager:
  nodeSelector:
    storage: "longhorn"

longhornDriver:
  nodeSelector:
    storage: "longhorn"

longhornUI:
  nodeSelector:
    storage: "longhorn"

networkPolicies:
  enabled: true
  type: "k3s"

defaultSettings:
  defaultDataPath: "/srv/storage"
  replicaAutoBalance: "best-effort"

ingress:
  enabled: true
  ingressClassName: nginx
  host: longhorn.paulslinuxbox.net
  tls: true
  tlsSecret: longhorn-tls
  path: /
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    cert-manager.io/cluster-issuer: letsencrypt-issuer
    cert-manager.io/common-name: longhorn.paulslinuxbox.net

persistence:
  defaultClass: true
  defaultFsType: ext4
  defaultMkfsParams: ""
  defaultClassReplicaCount: 3
  defaultDataLocality: disabled
  reclaimPolicy: Retain
  volumeBindingMode: "Immediate"
  migratable: true
  disableRevisionCounter: "true"

  • Lines 1-15 (the nodeSelector sections) ensure that the Longhorn components only run on nodes labeled storage=longhorn.
  • defaultSettings
    • defaultDataPath The directory on each node where Longhorn stores volume data (here /srv/storage).
    • replicaAutoBalance Controls what happens when replicas are scheduled unevenly across nodes. This setting allows replicas to be rebalanced when a new node becomes available in the cluster or when the replica count for a volume is updated.
      • disabled No replica rebalancing will take place.
      • least-effort Longhorn will rebalance replicas for minimal redundancy.
      • best-effort Longhorn will rebalance replicas for even redundancy.
  • The ingress section is self-explanatory. Make sure that you change ingress.host to your own hostname.
  • persistence is the section relating to the persistent volumes and data.
    • defaultClass Sets longhorn as the default storage class.
    • defaultFsType and defaultMkfsParams The default filesystem of the Longhorn StorageClass and mkfs parameters of the Longhorn StorageClass.
    • defaultClassReplicaCount is the number of copies of data.
    • defaultDataLocality influences replica placement relative to the workload. There are two options: disabled places replicas on nodes based on available capacity, while best-effort tries to place a replica on the same node where the volume is attached, or as close as possible. Remaining replicas will be placed on other nodes as needed for availability.
    • reclaimPolicy What to do with the volume after its claim is released. We’ve set it to Retain just in case we need to keep the data around to be reattached.
    • volumeBindingMode The default is Immediate, meaning volume binding and provisioning occur as soon as the PVC is created.
    • migratable Enables Longhorn to perform a live migration of a volume from one node to another.
    • disableRevisionCounter This disables the revision counter, preventing Longhorn from tracking all write operations.
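The nodeSelector values above assume your storage nodes carry a storage=longhorn label. If you haven’t labeled them yet, something like the following will do it (the worker node names here are placeholders; substitute your own):

```shell
# Label each node that should run Longhorn and hold replicas
kubectl label node worker-1 storage=longhorn
kubectl label node worker-2 storage=longhorn
kubectl label node worker-3 storage=longhorn

# Confirm the labels
kubectl get nodes -l storage=longhorn
```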

Finally, install Longhorn with the following command:



helm upgrade --install longhorn longhorn/longhorn --namespace longhorn-system -f longhorn-values.yaml --create-namespace
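Once the release is deployed, it’s worth confirming that everything is healthy and that longhorn is now the default StorageClass:

```shell
# All pods in longhorn-system should eventually reach Running
kubectl -n longhorn-system get pods

# longhorn should be marked (default) in the output
kubectl get storageclass
```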

Post Install

Now that Longhorn is installed, you can access the UI via the ingress we’ve configured. In this case, it would be https://longhorn.paulslinuxbox.net. You should see the Longhorn dashboard, where you can manage your volumes, settings, and nodes. The first thing you should do is verify that the storage nodes are available and ready. You can do this by navigating to the Nodes section in the Longhorn UI. Click on the icon under the Operation column, then click Edit node and disks.

On this screen you can see the status of the node and which features or conditions are available on it. You can also add or delete storage, as well as mark the node as unschedulable for storage workloads. Additionally, you can add tags to both the nodes and disks.

Adding Additional Storage by Editing the Node Resource

You can add additional storage locations to Longhorn nodes by editing the node.longhorn.io resource. Run the following command against the node you want to modify:



kubectl edit nodes.longhorn.io <node-name> -n longhorn-system

Add the following yaml snippet under the spec.disks section:


    
new-disk:
  path: "/srv/new-disk"
  diskDriver: ""
  allowScheduling: true
  evictionRequested: false
  storageReserved: 0
  tags: []
  diskType: "filesystem"

After saving and exiting the editor, verify that the new disk appears on the node:



kubectl get nodes.longhorn.io -n longhorn-system
kubectl describe nodes.longhorn.io <node-name> -n longhorn-system

Important: Ensure the path /srv/new-disk exists and has proper permissions on the target node before applying this manifest. You may need to create the directory and set ownership to the Longhorn user.
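As a sketch, preparing that path on the node might look like this (assuming an unformatted disk at /dev/sda; adjust the device to match your hardware):

```shell
# On the storage node: format the disk, mount it, and persist the mount
sudo mkfs.ext4 /dev/sda
sudo mkdir -p /srv/new-disk
sudo mount /dev/sda /srv/new-disk
echo '/dev/sda /srv/new-disk ext4 defaults 0 2' | sudo tee -a /etc/fstab
```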


Create a Volume from a manifest

You can create a PersistentVolumeClaim (PVC) by applying a manifest file. With Longhorn installed (and longhorn as your StorageClass), create a file called pvc-longhorn.yaml with the following contents:


    
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-longhorn-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: longhorn

Apply the manifest and verify the PVC and backing PV are created and bound:



kubectl apply -f pvc-longhorn.yaml
kubectl get pvc example-longhorn-pvc -o wide
kubectl describe pvc example-longhorn-pvc
kubectl get pv

If the PVC is Bound, Longhorn will have provisioned a volume for you. You can also inspect and manage the volume from the Longhorn UI (the longhorn-frontend service) or via kubectl -n longhorn-system get volumes.longhorn.io.
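To confirm the volume is actually usable, you can attach it to a throwaway pod. This is a minimal sketch, assuming the busybox image is acceptable in your cluster:


    
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
  namespace: default
spec:
  containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-longhorn-pvc

Once the pod is Running, the volume will show as attached in the Longhorn UI. Deleting the pod detaches the volume, and the PVC and its data remain intact.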

Conclusion

We now have a fully functioning Longhorn installation on our Raspberry Pi K3s cluster. We can use this storage backend to create persistent volumes for our applications. In the next part of this series, we’ll set up backups for our Longhorn volumes and test out restoring from a backup.