Configuring Storage on Raspberry Pi based Kubernetes Cluster


Kubernetes has two types of workloads:

  • Stateless
  • Stateful

Stateless workloads do not require persistent storage; stateful workloads, however, need storage that works in a distributed manner across nodes. There are various approaches to configuring a storage layer on Kubernetes, but the scope here is to document the workflow to get Rook-Ceph running on a single-node, k0s-based Raspberry Pi Kubernetes cluster.

Getting your cluster up and running is definitely the first step, which I documented in an earlier post.

The steps are for a single-node deployment and are not suitable for production or mission-critical use. Please use them for prototyping and experiments only.

Let’s first look at the configured system and briefly summarize why we even need a storage layer. The Raspberry Pi (RPi) is a very useful hardware platform for edge computing and IoT applications due to its cost and size. A lot of communication among IoT devices happens via messaging services such as MQTT. So in order to run an MQTT service on the RPi as a Kubernetes application, we need the ability to dynamically provision storage volumes for persisting MQTT messages. This dynamic provisioning requires a storage provider that carves out volumes from a configured global pool of raw storage.

└─ $ ▶ kubectl --namespace=my-system get pods,pvc,svc
NAME READY STATUS RESTARTS AGE
pod/emqx-0 1/1 Running 0 166m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/emqx-data-emqx-0 Bound pvc-a6efa8ed-15f9-4794-8314-f8b870a9f95e 2Gi RWO rook-ceph-block 166m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/emqx-headless ClusterIP None <none> 1883/TCP,8883/TCP,8081/TCP,8083/TCP,8084/TCP,18083/TCP,4370/TCP 166m
service/emqx NodePort 10.106.133.86 <none> 1883:31261/TCP,8883:32184/TCP,8081:32218/TCP,8083:30956/TCP,8084:31156/TCP,18083:31617/TCP 166m

As you can see above, the MQTT service has a 2Gi storage volume attached to it, which was provisioned using the rook-ceph-block storage class. This ability to dynamically provision storage using Kubernetes primitives is highly effective in managing the application life-cycle in a distributed manner, meaning we could have nodes go down without losing data or the service.
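
To make this concrete, here is a minimal sketch of how such a volume is requested: a PersistentVolumeClaim referencing the rook-ceph-block storage class. The claim name and size are illustrative, and the storage class itself is only created at the end of this walkthrough.

cat <<EOF | kubectl apply -f -
# Illustrative claim: requests a 2Gi block volume from the
# rook-ceph-block storage class created later in this setup.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 2Gi
EOF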

Since the steps here are for a single-node cluster, we obviously don’t have failover; however, the pattern is still valid and will scale as the cluster size is increased in the future.

Components:

  • A single-node Raspberry Pi 4 8GB
  • Ubuntu 21.04 (Hirsute Hippo) ARM64 OS
  • k0s 0.13.1 ARM64 distribution of Kubernetes
  • A Seagate Barracuda 1TB HDD attached via USB3
  • Rook-Ceph for storage management

It starts with connecting and configuring a storage device on the node. I had an old 1TB Seagate BarraCuda drive, which I attached to the node using a USB3-to-SATA adapter.
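
Before doing anything to the disk, it is worth confirming that the adapter and drive are actually detected by the kernel. A quick check along these lines works; the /dev/sda device name used later is an assumption and may differ on your system.

# list block devices with size, model and transport; the drive typically shows up as sda over usb
lsblk -o NAME,SIZE,MODEL,TRAN
# recent kernel messages for the USB-SATA adapter and the new sd* device
sudo dmesg | tail -n 20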

The important thing to note here is that the physically attached storage device must be in the right initial state, otherwise the Rook-Ceph operator deployment runs into various issues. In particular, it is necessary to make sure the disk is not formatted and that traces from any previous Rook-Ceph installation are cleared.

Assuming that the device was on /dev/sda, I ran the following commands to ensure that the disk carried no filesystem or partition table:

# remove any filesystem signatures and zap the partition table
sudo wipefs -a /dev/sda
sudo sgdisk --zap-all /dev/sda
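
After wiping, a quick sanity check helps confirm that no filesystem signatures or partitions remain (again assuming /dev/sda):

# FSTYPE should be empty and no partitions should be listed
lsblk -f /dev/sda
# with no flags, wipefs only reports remaining signatures; the output should be empty
sudo wipefs /dev/sda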

Another important cleanup step is to make sure no traces from a previous Rook-Ceph installation exist on the node:

# remove state left behind by any earlier Rook-Ceph deployment
sudo rm -rf /var/lib/rook/

At this point the hardware setup should be ready for us to try the Rook-Ceph deployment steps.

However, a few changes need to be made, since a k0s-based Kubernetes installation does not use the default paths assumed by the Rook-Ceph manifests. In particular, the monitor count was reduced from 3 to 1, and ROOK_CSI_KUBELET_DIR_PATH was set to "/var/lib/k0s/kubelet" to match the path at which the kubelet is configured on a k0s-based system.

user@rpi-host:~/rook$ git diff
diff --git a/cluster/examples/kubernetes/ceph/cluster.yaml b/cluster/examples/kubernetes/ceph/cluster.yaml
index fc663546..7f44fe8c 100644
--- a/cluster/examples/kubernetes/ceph/cluster.yaml
+++ b/cluster/examples/kubernetes/ceph/cluster.yaml
@@ -45,7 +45,7 @@ spec:
waitTimeoutForHealthyOSDInMinutes: 10
mon:
# Set the number of mons to be started. Must be an odd number, and is generally recommended to be 3.
- count: 3
+ count: 1
# The mons should be on unique nodes. For production, at least 3 nodes are recommended for this reason.
# Mons should only be allowed on the same node for test environments where data loss is acceptable.
allowMultiplePerNode: false
diff --git a/cluster/examples/kubernetes/ceph/operator.yaml b/cluster/examples/kubernetes/ceph/operator.yaml
index dfad3f1a..ac4b0e04 100644
--- a/cluster/examples/kubernetes/ceph/operator.yaml
+++ b/cluster/examples/kubernetes/ceph/operator.yaml
@@ -90,7 +90,7 @@ data:
# CSI_RBD_PLUGIN_UPDATE_STRATEGY: "OnDelete"

# kubelet directory path, if kubelet configured to use other than /var/lib/kubelet path.
- # ROOK_CSI_KUBELET_DIR_PATH: "/var/lib/kubelet"
+ ROOK_CSI_KUBELET_DIR_PATH: "/var/lib/k0s/kubelet"

# Labels to add to the CSI CephFS Deployments and DaemonSets Pods.
# ROOK_CSI_CEPHFS_POD_LABELS: "key1=value1,key2=value2"
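
Before applying this change, the kubelet directory used by k0s can be confirmed directly on the node; the path shown is the k0s default, so adjust it if your install differs.

# verify that k0s is running and that its kubelet directory exists;
# the CSI driver must be pointed at this same path
sudo k0s status
ls -d /var/lib/k0s/kubelet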

With this initial setup and these configuration changes in place, applying the manifests went smoothly. You may still want to go through the official Rook documentation and the related issue discussions, which I found very relevant; they also cover the disk formatting requirements in detail.

The manifests are applied with kubectl:

cd cluster/examples/kubernetes/ceph/
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml
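
Bringing up the operator and cluster pods takes a while on a Raspberry Pi; progress can be followed with something along these lines:

# watch pods in the rook-ceph namespace until the operator, mon, mgr and osd pods are Running
kubectl --namespace=rook-ceph get pods --watch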

On my system I see the following once all the transient pods are done configuring the device:

$ kubectl --namespace=rook-ceph get pods,pvc,svc
NAME READY STATUS RESTARTS AGE
pod/csi-rbdplugin-9j2qr 3/3 Running 3 2d6h
pod/rook-ceph-tools-57787758df-x5f92 1/1 Running 1 2d6h
pod/rook-ceph-crashcollector-rpi4-0-594b5b9bc9-gbpnn 1/1 Running 1 2d6h
pod/rook-ceph-operator-95f44b96c-rvx5p 1/1 Running 2 2d6h
pod/csi-cephfsplugin-2rv5n 3/3 Running 3 2d6h
pod/rook-ceph-mon-a-65f7b7d7d9-7tw2g 1/1 Running 1 2d6h
pod/rook-ceph-mgr-a-68454d559d-sbhzg 1/1 Running 3 2d6h
pod/csi-rbdplugin-provisioner-55f998c984-cxd6c 6/6 Running 16 2d6h
pod/csi-cephfsplugin-provisioner-5b989b9977-c7gxf 6/6 Running 16 2d6h
pod/rook-ceph-osd-0-769bb7f97f-rkbvg 1/1 Running 3 2d6h
pod/rook-ceph-osd-1-85685cffc5-2vjrk 1/1 Running 0 31h
pod/rook-ceph-osd-prepare-rpi4-0-xplll 0/1 Completed 0 32m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/csi-rbdplugin-metrics ClusterIP 10.101.220.225 <none> 8080/TCP,8081/TCP 2d6h
service/csi-cephfsplugin-metrics ClusterIP 10.103.139.41 <none> 8080/TCP,8081/TCP 2d6h
service/rook-ceph-mon-a ClusterIP 10.101.137.222 <none> 6789/TCP,3300/TCP 2d6h
service/rook-ceph-mgr-dashboard ClusterIP 10.98.145.192 <none> 8443/TCP 2d6h
service/rook-ceph-mgr ClusterIP 10.110.192.94 <none> 9283/TCP 2d6h

I found the logs from the osd-prepare pod particularly useful for debugging when things were not working well:

$ kubectl --namespace=rook-ceph logs pods/rook-ceph-osd-prepare-rpi4-0-xplll...<redacted>
2021-05-03 21:11:08.750786 I | cephosd: 1 ceph-volume raw osd devices configured on this node
2021-05-03 21:11:08.750868 I | cephosd: devices = [{ID:1 Cluster:ceph UUID:9e581158-475d-4042-8b48-2b2205e91c6d DevicePartUUID: BlockPath:/dev/sda2 MetadataPath: WalPath: SkipLVRelease:true Location:root=default host=rpi4-0 LVBackedPV:false CVMode:raw Store:bluestore TopologyAffinity:}]
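
Since the rook-ceph-tools toolbox pod is also running in this cluster (see the pod listing above), overall Ceph health can be checked from inside it. A sketch, assuming the toolbox deployment name used in the Rook examples:

# query cluster health and OSD layout from the toolbox pod
kubectl --namespace=rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
kubectl --namespace=rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd tree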

Once a few pods that dynamically requested volumes have been created, we can see the resulting resources at the node level with the lsblk command:

user@host:~/rook/cluster/examples/kubernetes/ceph$ lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
sda ceph_bluestor
├─sda2
└─sda3

rbd0 /var/lib/k0s/kubelet/pods/faa3d3dc-e7c0-4426-9022-
rbd1 /var/lib/k0s/kubelet/pods/f1a17a4c-519f-46ef-a935-

Finally, the storage class can be created:

cd cluster/examples/kubernetes/ceph/csi/rbd
kubectl create -f storageclass-test.yaml
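
Once created, the storage class should be listed and new PVCs referencing it should reach the Bound state. A quick check, where the class name rook-ceph-block comes from the test manifest:

# confirm the storage class exists and which provisioner backs it
kubectl get storageclass rook-ceph-block
# any PVC created against it should end up Bound, like the emqx-data claim shown earlier
kubectl get pvc --all-namespaces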

That’s it for now… I plan on documenting my journey as I work with more k8s based systems, so stay tuned!

