Notes on Kubernetes-native ways to visualize IoT sensor data

Grafana dashboard showing sensor data (Image by author)

This post is a high-level summary of the key moving parts in a Kubernetes-based system running on Raspberry Pis that captures IoT sensor data and funnels it all the way to Grafana dashboards in a purely cloud-native manner. My goal was to arrive at a workflow that would let me manage sensor workloads, such as environmental data capture, via Kubernetes manifests. The advantage of having all components expressed as Kubernetes resources is that we can manage their life cycle easily and integrate with other open source components in a standardized, so-called “cloud-native” way. As you will see later in this post, I only needed to write code to trigger sensor data capture; the pipeline that eventually visualizes that data on a Grafana dashboard works thanks to the excellent ecosystem we have in the cloud-native world!

The hardware setup consisted of three Raspberry Pis that formed a Kubernetes cluster. The sensor was a BME280, which captured temperature, pressure, humidity and altitude data from the environment and was attached to one of the Raspberry Pi nodes via the I²C interface. With the hardware in place, I used the k0s distribution of Kubernetes to form the cluster. My configuration consisted of a single master node and two worker nodes, all running the 64-bit version of Raspberry Pi OS. I had to perform a few steps to prepare the nodes prior to deploying Kubernetes, such as enabling cgroup support, which does not come fully enabled in the OS.
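The step most likely to trip things up is the memory cgroup, which Raspberry Pi OS ships disabled by default. A typical fix, assuming the standard /boot/cmdline.txt layout, is to append the cgroup flags to its single line on every node and reboot:

└─ $ ▶ sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
└─ $ ▶ sudo reboot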

└─ $ ▶ kubectl get nodes -o wide
NAME     STATUS   ROLES    AGE   VERSION       INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION   CONTAINER-RUNTIME
node-0   Ready    <none>   8d    v1.21.3+k0s   000.000.0.00   <none>        Debian GNU/Linux 10 (buster)   5.10.52-v8+      containerd://1.4.8
node-1   Ready    <none>   9d    v1.21.3+k0s   000.000.0.00   <none>        Debian GNU/Linux 10 (buster)   5.10.52-v8+      containerd://1.4.8
node-2   Ready    <none>   9d    v1.21.3+k0s   000.000.0.0    <none>        Debian GNU/Linux 10 (buster)   5.10.52-v8+      containerd://1.4.8

The next step was to put together a container image that would do the work of interacting with the sensor and capturing environmental data. I wrote a simple Go-based binary for this purpose, which was very easy to cross-compile for the arm64 target and package in a so-called distroless container image. The ability to cross-compile binaries and cross-build container images was key to a smooth development workflow. The output of running the container was simply JSON-formatted logs with the sensor readings in them. The idea was to keep things as simple as possible and not instrument the binary with any code aimed at eventual visualization.
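A minimal sketch of what such a binary can look like is below. The actual sensor access is stubbed out behind a hypothetical readBME280 helper (in my case the real code talks to the device over I²C); the point here is just the single-line structured JSON record per reading:

package main

import (
	"encoding/json"
	"os"
	"time"
)

// reading mirrors the JSON log line shown later in this post.
type reading struct {
	Level       string  `json:"level"`
	Time        string  `json:"time"`
	Name        string  `json:"name"`
	Msg         string  `json:"msg"`
	Temperature float64 `json:"temperature"`
	Pressure    float64 `json:"pressure"`
	Humidity    float64 `json:"humidity"`
	Altitude    float64 `json:"altitude"`
}

// readBME280 is a placeholder for the actual I²C read of the sensor.
func readBME280() (temperature, pressure, humidity, altitude float64) {
	return 25.42, 997.44, 44.05, 132
}

func main() {
	enc := json.NewEncoder(os.Stdout)
	for {
		t, p, h, a := readBME280()
		// one single-line JSON record per reading, written to stdout so the
		// container runtime (and hence promtail) picks it up as a log line
		enc.Encode(reading{
			Level:       "info",
			Time:        time.Now().UTC().Format(time.RFC3339Nano),
			Name:        "bme280.data",
			Msg:         "data",
			Temperature: t,
			Pressure:    p,
			Humidity:    h,
			Altitude:    a,
		})
		time.Sleep(30 * time.Second)
	}
}

Cross-compiling for the Pis is then just a matter of GOOS=linux GOARCH=arm64 go build, and the resulting static binary drops straight into a distroless base image.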

Now with basic infrastructure ready, let’s look at other pieces and how they fit together.

  • The data-capture workload was expressed as a deployment with a node selector pinning it to the node where the sensor was attached (a minimal manifest is sketched after this list)
  • loki was used for capturing logs; promtail, a component of loki, exposed metrics for prometheus to scrape
  • grafana interfaced with both the prometheus and loki data sources, allowing visualization of the sensor data
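Sketching the first bullet, the deployment looked roughly like the manifest below. The name, namespace and app label match what appears in the log commands later in this post; the image reference, the chosen node, and the exact mechanism for exposing /dev/i2c-1 to the container (a privileged host-path mount here) are placeholders to adapt to your own setup:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: bme280
  namespace: sensor-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bme280
  template:
    metadata:
      labels:
        app: bme280
    spec:
      # pin the pod to the node that has the BME280 wired up over I²C
      nodeSelector:
        kubernetes.io/hostname: node-1
      containers:
        - name: bme280
          image: registry.example.com/bme280:latest   # placeholder image reference
          securityContext:
            privileged: true   # blunt but simple way to allow raw access to the I²C device
          volumeMounts:
            - name: i2c
              mountPath: /dev/i2c-1
      volumes:
        - name: i2c
          hostPath:
            path: /dev/i2c-1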

The high-level schematic of these building blocks is shown below:

┌──────────────┐      ┌────────────┐      ┌────────┐
│              │      │            │      │        │
│ Sensor data  │      │  Promtail  │      │  Loki  │
│   capture    ├─────►│            ├─────►│        │
│              │      │            │      │        │
└──────────────┘      └──────┬─────┘      └────┬───┘
                             │                 │
                             │                 │
                      ┌──────▼─────┐      ┌────▼────┐
                      │            │      │         │
                      │ Prometheus ├─────►│ Grafana │
                      │            │      │         │
                      └────────────┘      └─────────┘

prometheus, loki and grafana were installed using their corresponding helm charts (a rough sketch of the installs is shown below). The key piece of configuration was the log-format processing, which happens in the promtail config. That config lets us define log processing in stages. Since the idea here was to have promtail read logs from the sensor pods, we needed to tell promtail how to parse those logs and extract the relevant information from them.
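I have not reproduced my exact helm invocations here, but with the upstream Grafana and Prometheus community chart repositories the installs look roughly like this. The chart choices, release names and namespaces below are assumptions picked to line up with the labels that show up later in this post, and promtail-values.yaml is a placeholder for the promtail configuration discussed next:

└─ $ ▶ helm repo add grafana https://grafana.github.io/helm-charts
└─ $ ▶ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
└─ $ ▶ helm upgrade --install prometheus prometheus-community/kube-prometheus-stack --namespace prometheus-system --create-namespace
└─ $ ▶ helm upgrade --install loki grafana/loki-stack --namespace loki-system --create-namespace --values promtail-values.yaml
└─ $ ▶ helm upgrade --install grafana grafana/grafana --namespace grafana-system --create-namespace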

Coming back to the promtail configuration: the logs had the following format. It is important to note that the logs were produced as single-line JSON strings; I am displaying them here pretty-printed via jq.

└─ $ ▶ kubectl --namespace=sensor-system logs deployments.apps/bme280 | tail -n 1 | jq '.'
{
  "level": "info",
  "time": "2021-08-22T00:54:43.229Z",
  "name": "bme280.data",
  "msg": "data",
  "temperature": 25.42,
  "pressure": 997.44,
  "humidity": 44.05,
  "altitude": 132
}

Since such logs originate from the container runtime in Kubernetes, the promtail config for parsing such a log entry and extracting the sensor data needs its first stage to be a cri: {} stage, followed by json parsing and then a metrics stage.

pipelineStages:
  - cri: {}
  - json:
      expressions:
        output: msg
        level: level
        temperature: temperature
        pressure: pressure
        humidity: humidity
        altitude: altitude
        timestamp: time
  - metrics:
      temperature:
        type: Gauge
        description: temperature value
        source: temperature
        config:
          match_all: true
          action: set
      pressure:
        type: Gauge
        description: pressure value
        source: pressure
        config:
          match_all: true
          action: set
      humidity:
        type: Gauge
        description: humidity value
        source: humidity
        config:
          match_all: true
          action: set
      altitude:
        type: Gauge
        description: altitude value
        source: altitude
        config:
          match_all: true
          action: set

The promtail config did two things:

  • Parsed the logs to extract the relevant values such as temperature, pressure, etc.
  • Generated metrics from those values for prometheus to scrape (sampled below).
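Concretely, promtail then serves these gauges on its own metrics endpoint in the Prometheus exposition format. Assuming the default promtail_custom_ prefix that promtail applies to metrics produced by a metrics stage, a scrape contains entries along these lines (label set abbreviated):

# HELP promtail_custom_temperature temperature value
# TYPE promtail_custom_temperature gauge
promtail_custom_temperature{app="bme280",namespace="sensor-system"} 25.42
# HELP promtail_custom_humidity humidity value
# TYPE promtail_custom_humidity gauge
promtail_custom_humidity{app="bme280",namespace="sensor-system"} 44.05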

I needed an additional step to get prometheus to scrape the promtail pods. That was achieved using a ServiceMonitor resource, which is available when prometheus is deployed via the prometheus operator. The ServiceMonitor config was very simple and allowed prometheus to locate the relevant pods:

└─ $ ▶ kubectl --namespace=loki-system get servicemonitors.monitoring.coreos.com loki-promtail -o yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    meta.helm.sh/release-name: loki
    meta.helm.sh/release-namespace: loki-system
  creationTimestamp: "2021-08-21T18:59:44Z"
  generation: 1
  labels:
    app: promtail
    app.kubernetes.io/managed-by: Helm
    chart: promtail-2.2.0
    heritage: Helm
    release: prometheus
  name: loki-promtail
  namespace: loki-system
  resourceVersion: "693145"
  uid: 9ba02e48-e03c-4384-abca-e7e89afb4afa
spec:
  endpoints:
  - port: http-metrics
  namespaceSelector:
    matchNames:
    - loki-system
  selector:
    matchLabels:
      app: promtail
      release: loki

It can get tricky to connect the various moving parts of the system when they are deployed in separate namespaces. In my case, prometheus and loki lived in separate namespaces: the ServiceMonitor's selector had to carry release: loki so it would match the promtail service created by the loki release, while the ServiceMonitor itself was labeled release: prometheus so that the prometheus operator would pick it up. With this config in place, we can not only inspect logs in grafana via the loki data source, but also build dashboards on the metrics using the prometheus data source.
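On the grafana side, that just means pointing its two data sources at the cluster-internal service URLs across namespaces. A provisioning snippet along these lines does it (the service names and ports are the usual defaults for these charts, and the prometheus namespace here is an assumption; adjust to whatever your releases actually created):

apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki.loki-system.svc.cluster.local:3100
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-operated.prometheus-system.svc.cluster.local:9090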

Grafana exploration of logs via loki data source

As you can see above, the pod logs can now be queried by providing the namespace and app values, and the parsing of each log entry can be explored further for its extracted labels, as shown below.

Log entry explored in detail on the Grafana explore page
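For reference, the queries behind these views are simple: on the loki data source the logs are selected by the labels promtail attached, and on the prometheus side the gauges generated by the metrics stage can be charted directly (again assuming promtail's default promtail_custom_ metric prefix):

Logs (loki data source):         {namespace="sensor-system", app="bme280"}
Metrics (prometheus data source): promtail_custom_temperature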

That summarizes the workflow I currently have, which lets me go from deploying a sensor data-capture workload to Grafana visualization, all expressed as Kubernetes-native resources. Managing the lifecycle of such workloads via the Kubernetes control plane certainly makes handling IoT devices and their sensor data very easy!
