Exploring Nomad for edge workload management

Saurabh Deoras
Jun 12, 2021 · 6 min read

Edge computing covers a wide spectrum of configurations, from a multi-node Kubernetes cluster running as an edge unit to a single-node Raspberry Pi Zero, or perhaps a micro-controller sending data over an MQTT channel. Each of these configurations requires a different way of thinking about workload management. For instance, if the edge device is a Kubernetes cluster, we could leverage GitOps for deploying applications on it. However, if the edge device is a Raspberry Pi Zero with a single-core processor, limited memory, and an ARM v6 architecture, there are only a few options for running applications on it.

This post is a summary of my exploration of using Nomad to deploy workloads on a Raspberry Pi Zero. In particular, I was looking for ways to dynamically deploy non-containerized binaries directly on the device and needed a scheduler to manage such deployments.

Non-Containerized Workloads

I have been working with BME280 and PA1010D sensors to capture temperature, pressure, humidity, and GPS data. These sensors are attached to a Raspberry Pi Zero W via the I²C bus, with the eventual goal of providing an edge unit that captures these data streams programmatically. For instance, we could build a farm of such sensors and collect the data at a central place for further analysis.

While I definitely prefer workloads to be containerized and deployed in a cloud-native way, that is not always possible on devices with restricted compute capabilities. The workloads being explored here are binaries that need to be executed directly on the device. If you would like to know more about the particular binaries for sensor data capture, I have written posts (here and here) on putting together binaries that capture data from the BME280 and PA1010D sensors. The rest of this post assumes that the binaries exist on the Raspberry Pi Zero and that the sensors are attached to the I²C bus.

Let’s now look at the workload orchestration setup.

Nomad Setup

Nomad is a very nice scheduler and workload orchestrator, and its website describes it as follows:

A simple and flexible workload orchestrator to deploy and manage containers and non-containerized applications across on-prem and clouds at scale.

The ability to run non-containerized workloads is key here. In my setup I have a single-node Nomad cluster (server) running as a statefulset on Kubernetes, with its endpoints exposed for clients to connect to. The Raspberry Pi Zeros, as edge devices, are set up with Nomad in client mode.

Configuring the Nomad server as a resource on Kubernetes may not be the ideal way to deploy it; however, it worked well for the server-client topology described below. In particular, I was after a configuration where all control-plane services would run on a Kubernetes cluster and all peripheral devices would talk to that cluster via MQTT brokers, REST calls, and so on. I came across the nomad-on-kubernetes GitHub repository, which is an excellent resource for configuring a Nomad server on a Kubernetes cluster. Since my Kubernetes cluster ran on a Raspberry Pi 4, which is an arm64-based platform, I needed to make a few changes to the build process to get it working:

  • First, I compiled Nomad as a static binary so I could pack it in a distroless container. Read more about the special case of putting together multi-arch distroless container images here. In order to build Nomad without cgo I also had to disable the Nvidia plugin. I hope these two changes won't affect how I plan to use Nomad. Also see this commit.
  • I was then able to build Nomad for the ARM v6 architecture as well, which allowed me to run these binaries on the Raspberry Pi Zero (a sketch of the build invocation follows this list).
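
For reference, the cross-compilation step looked roughly like the following. This is only a sketch assuming a Go toolchain and a checkout of the Nomad source; the exact build tags, flags, and output paths will depend on the Nomad version and your build setup.

# Static build for the Raspberry Pi Zero (ARM v6), with cgo disabled
CGO_ENABLED=0 GOOS=linux GOARCH=arm GOARM=6 go build -o bin/nomad-armv6 .

# Static build for the arm64 nodes of the Raspberry Pi 4 based Kubernetes cluster
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o bin/nomad-arm64 .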

Once the Raspberry Pi Zeros are configured, all connected clients show up in the Nomad UI running on the server side.

As for the Nomad client configuration, I followed the systemd setup instructions here, except that I built the Nomad binary separately as described earlier.
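
For illustration, a minimal client configuration along these lines could look as follows. The datacenter name, data directory, and server address are placeholders specific to my setup (the server RPC port is reachable via the NodePort service shown below), and the raw_exec plugin used later in this post is disabled by default, so it needs to be enabled explicitly:

# /etc/nomad.d/client.hcl -- a minimal sketch; values are placeholders
datacenter = "rpi4"
data_dir   = "/opt/nomad/data"

client {
  enabled = true
  # Nomad server RPC port exposed via the Kubernetes NodePort service
  servers = ["<kubernetes-node-ip>:32647"]
}

# raw_exec is disabled by default and must be enabled for the jobs below
plugin "raw_exec" {
  config {
    enabled = true
  }
}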

Similarly, on the server side we can have a Helm chart managing a statefulset for Nomad running on a Raspberry Pi 4 based Kubernetes cluster:

└─ $ ▶ kubectl --namespace=nomad-system get statefulsets.apps,pods,pvc,secrets,svc
NAME READY AGE
statefulset.apps/nomad 1/1 2d3h
NAME READY STATUS RESTARTS AGE
pod/nomad-0 1/1 Running 0 2d3h
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/nomad-data-nomad-0 Bound pvc-a3692c3d-77d4-40be-8321-ec5f0ba3ea7b 10Gi RWO standard 2d3h
NAME TYPE DATA AGE
secret/default-token-82tf5 kubernetes.io/service-account-token 3 2d3h
secret/artifact-registry-key kubernetes.io/dockerconfigjson 1 2d3h
secret/nomad-token-9crwr kubernetes.io/service-account-token 3 2d3h
secret/sh.helm.release.v1.nomad.v1 helm.sh/release.v1 1 2d3h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nomad ClusterIP 10.111.221.8 <none> 4646/TCP,4647/TCP,4648/TCP 2d3h
service/nomad-np NodePort 10.109.235.210 <none> 4646:32646/TCP,4647:32647/TCP,4648:32648/TCP 2d3h

It is important to turn on Nomad ACL tokens and TLS for secure and authenticated API calls.
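
As a sketch, enabling both on the server side involves configuration along the following lines; the certificate paths are placeholders, and the upstream ACL bootstrap and TLS guides should be followed for the complete procedure:

# server configuration fragment -- a sketch; paths are placeholders
acl {
  enabled = true
}

tls {
  http = true
  rpc  = true

  ca_file   = "/etc/nomad.d/tls/ca.pem"
  cert_file = "/etc/nomad.d/tls/server.pem"
  key_file  = "/etc/nomad.d/tls/server-key.pem"

  verify_server_hostname = true
}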

With the server and client setup working properly, let's now look at deploying workloads.

Workload Deployment

Workloads can now be defined in HCL-formatted files. As you can see in the example below, the workload is configured to run using the raw_exec driver and will execute a command at /usr/local/bin/bme280. I am currently having difficulties running workloads using the exec driver, which isolates the task in a chroot and is a safer option than the raw_exec driver. There is some discussion on GitHub (here) about this topic, and I hope to get it working eventually. In the meantime, the raw_exec driver will do just fine for our purposes.

└─ $ ▶ cat bme280-run.hcl
job "measure-bme280" {
  type        = "batch"
  datacenters = ["rpi4"]

  group "sensor-bme280" {
    task "run-bme280" {
      driver = "raw_exec"

      config {
        command = "/usr/local/bin/bme280"
        args = [
          "run",
          "--filename=/opt/bme280/data/bme280.json",
          "--enable-filename-indexing",
          "--interval-seconds=1",
          "--total-duration-minutes=0",
          "--total-count=0",
        ]
      }

      resources {
        cpu    = 100
        memory = 50
      }
    }
  }
}
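
For comparison, and purely as a sketch, the same task under the exec driver would differ only in the driver line; as noted above, this does not yet work for me on the Pi Zero, since the exec driver relies on chroot and cgroup isolation support on the host.

task "run-bme280" {
  driver = "exec"  # chroot-isolated; safer than raw_exec

  config {
    command = "/usr/local/bin/bme280"
    # args identical to the raw_exec example above
  }
}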

Jobs can now be launched using the Nomad CLI as follows:

└─ $ ▶ nomad job run bme280-run.hcl
==> Monitoring evaluation "ec2d9b18"
    Evaluation triggered by job "measure-bme280"
==> Monitoring evaluation "ec2d9b18"
    Allocation "a1a003ef" created: node "43210121", group "sensor-bme280"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "ec2d9b18" finished with status "complete"

└─ $ ▶ nomad job run pa1010d-raw.hcl
==> Monitoring evaluation "5e0c9b7c"
    Evaluation triggered by job "measure-pa1010d"
==> Monitoring evaluation "5e0c9b7c"
    Allocation "b2acb41f" created: node "43210121", group "sensor-pa1010d"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "5e0c9b7c" finished with status "complete"

Subsequently, these jobs can be monitored and further managed in the UI:

(screenshot: deploying non-containerized workloads using Nomad)
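
The same information is also available from the Nomad CLI, for example via the standard job status and allocation log commands (output omitted here; the allocation ID is the one reported by the run above):

└─ $ ▶ nomad job status measure-bme280
└─ $ ▶ nomad alloc logs a1a003ef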

Summary

My current effort is to build an edge computing platform that hosts the control plane on a Kubernetes cluster. To deploy workloads on peripheral edge devices, I explored the use of Nomad to launch non-containerized workloads on Raspberry Pi Zeros. After a few minor changes to the build process it was possible to build Nomad for ARM architectures, including ARM v6. On the server side, Nomad can be managed easily as a Kubernetes statefulset resource, and clients can connect to it using exposed endpoints, a NodePort in my case.

So far this exploration is going well and I am happy with the ease of use of Nomad. It is also good to know that a similar workflow will extend easily to deploying containerized workloads on devices that do allow them but are not part of a Kubernetes cluster.

Stay tuned for more updates as I explore this workflow further. Please consult the upstream HashiCorp Nomad docs for best practices on deploying it for production use cases, and consider the workflow described here for exploratory purposes only… enjoy!
