Validating Kubernetes service account tokens in your app using the TokenReview API


Every API call to the Kubernetes API server is made using an authorization bearer token. These tokens come in two flavors:

  • Short-lived user identity tokens
  • Long-lived service account tokens

Both of these are so-called JWT (JSON Web Token) tokens, which are increasingly becoming a standard way to communicate identity during API calls. The key property of JWT tokens is that they are open and can be decoded, yet they also carry a signature that can be cryptographically verified. This property makes JWT tokens portable in the sense that they can be verified remotely using the public key of the entity that generated and signed them. Let's look at an example.

Create a service account:

kubectl create namespace jwt-test
kubectl --namespace=jwt-test create serviceaccount jwt-sa

Inspecting the secrets in that namespace, you will see a secret corresponding to the service account we just created:

└─ $ ▶ kubectl --namespace=jwt-test get secrets
NAME                  TYPE                                  DATA   AGE
default-token-k9ljk   kubernetes.io/service-account-token   3      3m12s
jwt-sa-token-xdt77    kubernetes.io/service-account-token   3      2m50s

Inspecting the secret reveals the token at .data.token, which I have redacted here; you can try these steps on your own cluster and see the token in full detail:

└─ $ ▶ kubectl --namespace=jwt-test get secrets jwt-sa-token-xdt77 -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRU...<redacted>
  namespace: and0LXRlc3Q=
  token: ZXlKaGJHY...<redacted>
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: jwt-sa
    kubernetes.io/service-account.uid: fd11b968-d217-4ea3-a94c-507caa592a0b
  creationTimestamp: "2021-05-24T19:51:08Z"
  name: jwt-sa-token-xdt77
  namespace: jwt-test
  resourceVersion: "7709315"
  uid: f5b40689-f84e-4c34-986e-03c9d46fb5c7
type: kubernetes.io/service-account-token

In particular, we need to base64 --decode the token to look into its three parts:

  • Header
  • Payload
  • Signature
└─ $ ▶ kubectl --namespace=jwt-test get secrets jwt-sa-token-xdt77 -o=jsonpath='{.data.token}' | base64 --decode

Now let's decode the token and inspect it in further detail.

(Screenshot: inspecting the decoded JWT token)

As you can see, the payload of the token is openly visible, but there is a third section in the token that allows us to validate it via its signature. I won't go into the details of signature validation, since our objective is to use a known validating machinery for our own application.

Now that we have seen what a JWT token looks like, how can we intercept such a token in our apps and ask the underlying Kubernetes cluster about the validity of the sender? One may ask why we would even receive such a token in our apps in the first place. The short answer is that this allows the authentication needs of our apps to be delegated to Kubernetes, and that is a robust and powerful usage pattern.

For instance, we might want to design a client-server subsystem with clients running outside the Kubernetes cluster. In such scenarios we might want clients to talk to the server using a token. In fact, I needed exactly this functionality for my own use case, where several IoT devices scattered across the network were making API calls to a server running on a central Kubernetes cluster. I found it very easy to generate service accounts for such IoT devices and have them send service account tokens during API calls. I could then validate each token by asking the underlying Kubernetes cluster.

So how exactly do we validate tokens?

Enter Token Review API

The Kubernetes API server provides an endpoint at /apis/authentication.k8s.io/v1/tokenreviews which can be used along with a payload to validate any Kubernetes token. The payload is essentially a JSON serialization of the following manifest:

"kind": "TokenReview",
"apiVersion": "",
"token": "jwt-token-to-be-validated"

While the API is very simple, it requires the validating entity to send its own token in the request header and to have sufficient permissions to perform token reviews. This is where a bit of complexity comes in. Furthermore, depending on which programming language you are writing in, there could be work required to make HTTP calls to the Kubernetes API server, which in turn requires knowing the server address, port, your own token, HTTPS setup, and so on. This work can add up, but there are easier ways…

Enter proxy

In case you have not yet tried it, run the following command against your Kubernetes cluster:

└─ $ ▶ kubectl proxy
Starting to serve on 127.0.0.1:8001

Then try fetching the list of namespaces using curl:

└─ $ ▶ curl http://localhost:8001/api/v1/namespaces | jq '.items[]'

As you can see, we were able to make an HTTP call to the cluster without sending any JWT token for the caller's identity. In other words, running the same proxy on the Kubernetes cluster as a service would allow our app to make plain HTTP calls to the proxy pod, which in turn talks to the Kubernetes API server. This eliminates one layer of complexity, but we still need to construct HTTP calls to the proxy.

However, running such a proxy in the cluster leaves the cluster vulnerable to malicious apps, which could also talk to the proxy and potentially make arbitrary Kubernetes API calls. We need a way to allow or deny communication to the proxy pod based on client service identities.

Before we explore those next steps, let's first make sure the proxy is able to validate a JWT token, which was the original intent behind this exercise.

Let’s embed an old and invalid token in a payload first:

└─ $ ▶ cat /tmp/token-review.json
{
  "kind": "TokenReview",
  "apiVersion": "authentication.k8s.io/v1",
  "metadata": {
    "creationTimestamp": null
  },
  "spec": {
    "token": "eyJhbG...old invalid token <redacted>"
  },
  "status": {
    "user": {}
  }
}
Then use the following curl command:

└─ $ ▶ curl http://localhost:8001/apis/authentication.k8s.io/v1/tokenreviews -X POST -H 'Content-Type: application/json; charset=utf-8' -d @/tmp/token-review.json

We get a response back whose status indicates that the token is invalid:

  "status": {
    "user": {},
    "error": "[invalid bearer token, Token does not match server's copy]"
  }

Using a valid token, we get a status indicating that the token was authenticated, along with the details required to complete authorization:

"kind": "TokenReview",
"apiVersion": "",
"metadata": {
"creationTimestamp": null,
"managedFields": [
"manager": "curl",
"operation": "Update",
"apiVersion": "",
"time": "2021-05-24T21:11:46Z",
"fieldsType": "FieldsV1",
"fieldsV1": {"f:spec":{"f:token":{}}}
"spec": {
"token": "eyJhbG...<redacted>"
"status": {
"authenticated": true,
"user": {
"username": "system:serviceaccount:jwt-test:jwt-sa",
"uid": "fd11b968-d217-4ea3-a94c-507caa592a0b",
"groups": [
"audiences": [

So far so good. We have confirmed that as long as we can send the equivalent of this curl from our application pod, we can validate any incoming JWT token from other apps and confirm the sender's identity. Furthermore, we can then take actions to authorize their specific API calls. However, that leaves us with two questions:

  • How do we prevent other apps from communicating with the proxy?
  • Is there an SDK to avoid constructing raw HTTP requests to the proxy?

Enter DAPR

DAPR (short for Distributed Application Runtime) is a project originally from Microsoft, and it worked great for my use case of validating Kubernetes tokens. I used it as a service mesh, allowing me to configure communication patterns between apps and the proxy pod. I then used its so-called service-to-service invocation to make calls to the proxy using the DAPR SDK in Go. Let's look into the details:

(Diagram: DAPR service-to-service invocation, taken from the DAPR documentation)

Service A refers to our app and Service B refers to the proxy. The sequence of calls 1 through 7 indicates how the two services communicate with each other. In particular, each service only talks directly to its DAPR sidecar, which makes it very easy for application code to use the DAPR SDK. For our purpose, we first define the service-to-service communication configuration as a custom resource:

└─ $ ▶ kubectl --namespace=proxy-system get configurations.dapr.io appconfig -o yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  annotations:
    meta.helm.sh/release-name: proxy
    meta.helm.sh/release-namespace: proxy-system
  creationTimestamp: "2021-05-19T20:12:43Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: appconfig
  namespace: proxy-system
  resourceVersion: "6449256"
  uid: 610f08cc-2f81-4d5a-a9e0-73eb147129db
spec:
  accessControl:
    defaultAction: deny
    policies:
    - appId: app1
      defaultAction: allow
      namespace: app1-namespace
      trustDomain: public
    - appId: app2
      defaultAction: allow
      namespace: app2-namespace
      trustDomain: public
    trustDomain: public
  mtls:
    enabled: true

Once this configuration is in place, we can further tune it to allow specific API endpoints, but I'll leave those details aside for now and simply keep access at either allow or deny based on application IDs, as you see in the YAML above.

The client application can now use the DAPR SDK to make Kubernetes calls. I followed a pattern of creating a single global client and exiting the application on failure to do so. Exiting the application causes Kubernetes to reschedule it, so in the event that the DAPR sidecar was not ready previously, things work fine the second time around.

client, err := dapr.NewClient()
if err != nil {
	msg := "could not form DAPR client"
	err := fmt.Errorf("%s: %w", msg, err)
	log.Error(err, msg)
	os.Exit(1) // let Kubernetes reschedule the pod
}
defer client.Close()

A service invocation can now be made for the TokenReview API. First, build the TokenReview object, filling it with the incoming token:

tokenReview := &v1.TokenReview{
	TypeMeta: metav1.TypeMeta{
		Kind:       "TokenReview",
		APIVersion: "authentication.k8s.io/v1",
	},
	ObjectMeta: metav1.ObjectMeta{},
	Spec: v1.TokenReviewSpec{
		Token:     payload.Token,
		Audiences: nil,
	},
	Status: v1.TokenReviewStatus{},
}

Then make the call:

// the app ID "proxy" and the method path below match my deployment
out, err := client.InvokeMethodWithContent(
	ctx, "proxy", "apis/authentication.k8s.io/v1/tokenreviews", "post",
	&dapr.DataContent{
		Data:        jsonSerializedTokenReviewBytes,
		ContentType: "application/json; charset=utf-8",
	},
)
if err != nil {
	msg := "could not make proxy call for token review api"
	err := fmt.Errorf("%s: %w", msg, err)
	log.Error(err, msg)
	return err
}

We can unmarshal the response bytes out into the same TokenReview type and inspect the status:

tokenReview = &v1.TokenReview{}
if err := json.Unmarshal(out, tokenReview); err != nil {
	msg := "could not parse output from token review api call"
	err = fmt.Errorf("%s: %w", msg, err)
	log.Error(err, msg)
	return err
}
if !tokenReview.Status.Authenticated {
	err := fmt.Errorf("client token could not be authenticated")
	log.Error(err, "client token could not be authenticated")
	return err
}

And that should do it… Let’s end this post with a few notes on how to deploy all these components:

  • DAPR has a helm chart
  • I wrote a simple Helm chart for the proxy, which essentially runs kubectl proxy and enables the DAPR sidecar via pod annotations

Hope this gives you an overview of how to leverage the token validation machinery of Kubernetes for authentication requirements in a custom app. I should mention that this solution obviously only works for tokens that the underlying Kubernetes cluster understands, but assuming you are deploying your app on Kubernetes, this becomes a very powerful paradigm for secure app-to-app communication.

Saurabh Deoras is a software engineer and entrepreneur currently building Kubernetes infrastructure and cloud native stack for edge/IoT and ML workflows.