Working with the Kubernetes dashboard

[ Yes, headlamp is a better choice for this ]

Sometimes when you are working with microk8s, you may want to run the Kubernetes dashboard. We enable it with microk8s enable dashboard, assuming we have already run microk8s enable rbac and microk8s enable metrics-server. The dashboard pod runs in the kube-system namespace.
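
For reference, the addon enablement boils down to:

microk8s enable rbac
microk8s enable metrics-server
microk8s enable dashboard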

To access the dashboard we now create a service account which will be used for logging into the system: kubectl -n kube-system create sa kubernetes-dashboard-george

We bind this account to the cluster-admin role:

# kubernetes-dashboard-george.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-george
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-george
  namespace: kube-system

We apply this with something like kubectl apply -f kubernetes-dashboard-george.yaml

And now we can request a login token to access the dashboard with kubectl -n kube-system create token kubernetes-dashboard-george

We are almost there. From a machine with access to the cluster we can run the port-forward command kubectl -n kube-system port-forward --address=0.0.0.0 svc/kubernetes-dashboard 8443:443

And now all that is left is to access the dashboard. Assuming one of our machines has the IP address 172.31.1.13, we can use the nip.io trick and get to https://ip-172.31.1.13.nip.io:8443/#/pod?namespace=default

Resurrecting a comment from 2010

There are a number of times I have wanted to point people to an old comment I made at another blog, and I always have a hard time finding it. So I am keeping a local copy for posterity. In the general case it is about an organization’s long term thinking vs a person’s thinking during their tenure in said organization:

They are not blind. They simply work within their time-frame of maintaining their job in email marketing. How much is this going to be? Three, Five years? Then they will switch subject and will not care for the ruins left behind. People in marketing and management are always that “blind” because they care more about their bonuses than the lifetime of the company they work for. As for the demise of their previous company, it is never their fault, right?

Obviously there are good people in marketing and management. I aim at the “bad apples” here.

PS: A longer discussion here.

Can I use ingress-nginx as a reverse proxy?

Questions on the Kubernetes Slack are always interesting and sometimes nerd-sniping. One such question came along in the #ingress-nginx-users channel, where a user was trying to make the nginx controller also act as a reverse proxy for a site outside of the Kubernetes cluster in question. The user tried to do this with configuration snippets, without using any Ingress object at all. However, the ingress-nginx maintainers discourage configuration snippets, as they are scheduled to be deprecated.

Now, normally, to solve this problem one would deploy an nginx, configure it as a reverse proxy, create a Service, and link it with an Ingress object. Assuming you run Docker Desktop Kubernetes and you want to reverse proxy api.chucknorris.io, a solution to that would look like this. So nothing really fancy, just typical stuff.
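
Something along these lines, for example (a rough sketch with illustrative names and a bare-bones nginx configuration, not a battle-tested manifest):

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: chucknorris-proxy
data:
  default.conf: |
    server {
      listen 8080;
      location / {
        proxy_pass https://api.chucknorris.io;
        proxy_set_header Host api.chucknorris.io;
        proxy_ssl_server_name on;
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chucknorris-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chucknorris-proxy
  template:
    metadata:
      labels:
        app: chucknorris-proxy
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: conf
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: conf
        configMap:
          name: chucknorris-proxy
---
apiVersion: v1
kind: Service
metadata:
  name: chucknorris-proxy
spec:
  selector:
    app: chucknorris-proxy
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: chucknorris-proxy
spec:
  ingressClassName: nginx
  rules:
  - host: chucknorris.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: chucknorris-proxy
            port:
              number: 80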

Is it possible, though, to achieve this through clever use of annotations, without any deployments? I thought that I could do this with an ExternalName service. Defining such a service is not enough, because ingress-nginx works with Endpoints and not Service objects under the hood, and Endpoints are not created automatically for an ExternalName service. Enter EndpointSlices, with which you can define the endpoints on your own. You can do so even with an address type of FQDN (beware though that this seems to be heading for deprecation, but for now it works). And you end up with a solution that looks like this:

# Assuming Docker Desktop Kubernetes, this is a reverse proxy
# leveraging ingress-nginx to reverse proxy api.chucknorris.io:
# curl -k -v -H "Host: chucknorris.local" https://kubernetes.docker.internal/jokes/random
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: chucknorris
spec:
  selfSigned: {}
---
apiVersion: v1
kind: Service
metadata:
  name: chucknorris
spec:
  type: ExternalName
  externalName: api.chucknorris.io
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: chucknorris-1
  labels:
    kubernetes.io/service-name: chucknorris
addressType: FQDN
ports:
- protocol: TCP
  port: 443
endpoints:
- addresses:
  - "api.chucknorris.io"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: "chucknorris"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "true"
    nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "2"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"
    nginx.ingress.kubernetes.io/proxy-ssl-name: api.chucknorris.io
    nginx.ingress.kubernetes.io/upstream-vhost: api.chucknorris.io
    nginx.ingress.kubernetes.io/proxy-ssl-secret: default/chucknorris-local
  name: chucknorris
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - chucknorris.local
    secretName: chucknorris-local
  rules:
  - host: chucknorris.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: chucknorris
            port:
              number: 443

Writing a tiny reverse proxy in golang

I faced an unusual case at work where I needed to write a reverse proxy that would inject into the request a specific header read from a file. While I do not write professionally in Go, I thought it would be good enough for the task at hand. I did not know where to start and googling did show some other efforts, so I asked at Golang's Slack. When in doubt, always ask. Hopefully someone will pick it up and answer. I was pointed to NewSingleHostReverseProxy. So the simplest reverse proxy in Go can look like this:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	target, _ := url.Parse("https://api.chucknorris.io")
	proxy := httputil.NewSingleHostReverseProxy(target)
	http.Handle("/", proxy)
	err := http.ListenAndServe(":8080", nil)
	if err != nil {
		log.Fatal(err)
	}
}

This is the simplest reverse proxy and if you call it with curl -H 'Host: api.chucknorris.io' http://localhost:8080/jokes/random you are going to get back a joke.

But say we want to add a header, what do we do? We can define a Director function in our program that will allow us to do so:

proxy := httputil.NewSingleHostReverseProxy(target)
proxy.Director = func(req *http.Request) {
	req.Header.Set("X-Foo", "Bar")
}

Now, trying the same curl command, we get the error 2024/11/17 19:41:53 http: proxy error: unsupported protocol scheme "". That is because by defining our own Director function we have lost the default one, and we would have to rebuild the scheme (and a couple of other fields) ourselves. There is, though, a better way to do this: keep track of the previous default Director and call it first:

	proxy := httputil.NewSingleHostReverseProxy(target)
	d := proxy.Director
	proxy.Director = func(req *http.Request) {
		d(req)
		req.Header.Set("X-Foo", "Bar")
	}

And now you are set. I am certain there are better and more performant ways to do this, but for the PoC I was working on, that was good enough.

[*] I used api.chucknorris.io for the blog post because it is one of the simplest open APIs out there to try stuff.
[**] The complete reverse proxy, taking care of error handling and environment variables is 65 lines.
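
For the curious, a rough sketch of that longer version could look like the following. The environment variable names, defaults and the header handling are illustrative, not the exact program from work:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"os"
	"strings"
)

// getenv returns the value of key or a fallback if it is unset.
func getenv(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

func main() {
	// Hypothetical configuration: the upstream, the header to inject and
	// the file that holds the header value.
	upstream := getenv("PROXY_UPSTREAM", "https://api.chucknorris.io")
	headerName := getenv("PROXY_HEADER", "X-Foo")
	headerFile := getenv("PROXY_HEADER_FILE", "/run/secrets/header-value")

	target, err := url.Parse(upstream)
	if err != nil {
		log.Fatal(err)
	}

	value, err := os.ReadFile(headerFile)
	if err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(target)
	d := proxy.Director
	proxy.Director = func(req *http.Request) {
		d(req)
		req.Header.Set(headerName, strings.TrimSpace(string(value)))
	}

	log.Fatal(http.ListenAndServe(getenv("PROXY_LISTEN", ":8080"), proxy))
}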

Notes on deploying DexIdp on Kubernetes

dexidp.io contains valuable information on how to configure and run Dex, but even though a Docker container is provided, there is scarce information on how to actually deploy and run it on Kubernetes.

So let’s create a DexIdp deployment in Docker Desktop Kubernetes:

kubectl create deployment dexidp --image dexidp/dex

We see from the Dockerfile that dex is started by a custom entrypoint written in Go. This essentially executes gomplate. gomplate is yet another templating tool written in Go. It reads /etc/dex/config.docker.yaml and produces a configuration file in /tmp, which is then used to start the server.

So the best way to approach this is to get a local copy of this file, with kubectl cp for example, edit the file as we see fit, and then make it a configMap:

kubectl cp dexidp-79ff7cc5ff-p527s:/etc/dex/config.docker.yaml config.docker.yaml
# edit config.docker.yaml as we see fit, then:
kubectl create cm dex-config --from-file config.docker.yaml

We can now modify the deployment to mount the configMap

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dex
  name: dex
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dex
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: dex
    spec:
      volumes:
      - name: dex-config
        configMap:
          name: dex-config
          items:
          - key: "config.docker.yaml"
            path: "config.docker.yaml"
      containers:
      - image: dexidp/dex:v2.41.1
        name: dex
        volumeMounts:
        - name: dex-config
          mountPath: /etc/dex
        ports:
        - containerPort: 5556
          name: dex
        - containerPort: 5558
          name: telemetry
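
Once the modified deployment is applied, a quick sanity check (a sketch; the deployment name matches the manifest above) that Dex started with the mounted configuration:

kubectl rollout status deployment dex
kubectl logs deployment/dex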

You can proceed from there with any specific configuration your setup requires and even make your own helm charts. I know there are already existing helm charts, but sometimes, when first coming into contact with a new technology, it is best that you do not have to go over helm charts that try to cover all possible angles, as their makers rightfully try to accommodate everybody knowledgeable about their software.

So this is the newbie’s guide to deploying DexIdp on Kubernetes: do this, learn the ropes of the software, then proceed with helm or other deployment styles.

“Management Science Fiction”

In this episode of ArrayCast, Turing Award winner Ken E. Iverson talks about Ian P. Sharp (founder of I.P. Sharp Associates) and shares this:

He was one of the early people in operations research, which came to be called management science. So he knows what this stuff is, but he also likes to speak of management science fiction, which I think reflects the correct thing that those techniques were very much overblown and oversold, at least for a period.

This of course reminded me of Gene Woolsey (again someone well known in operations research), who at the beginning of the book is quoted as saying:

:
5. Does it work?
6. If yes, is there a measurable, verifiable reduction in cost over what was done before, or a measurable, verifiable increase in readiness?
7. If yes, show it to me NOW.

If you think I am trying to take a dig at some current trend of overblown and oversold techniques, using lessons and parallels from the past, you are correct :) In private conversations I have expressed the very same opinion about what is happening with GenAI today as I.P. Sharp had years ago about Management Science.

That made me smile for the rest of the weekend.

A peculiarity with Kubernetes immutable secrets

This post is again something that came from a question that popped up in #kubernetes-users. A user was updating an immutable secret and yet the value was not propagating. But what is an immutable secret in the first place?

Once a Secret or ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data field. You can only delete and recreate the Secret. Existing Pods maintain a mount point to the deleted Secret – it is recommended to recreate these pods.

So the workflow is: Delete old immutable secret, create a new one, restart the pods that use it.

Let’s create an immutable secret from the command line:

% kubectl create secret generic version -o json --dry-run=client --from-literal=version=1  | jq '. += {"immutable": true}' | kubectl apply -f -
secret/version created

If there’s a way to create an immutable secret from the command line without using jq please tell me.
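
One jq-free alternative (whether a heredoc still counts as the command line is up to you) is to pipe a small manifest straight into kubectl; a sketch:

% kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: version
immutable: true
stringData:
  version: "1"
EOF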

Now let’s create a deployment that uses it as an environment variable

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        env:
        - name: VERSION
          valueFrom:
            secretKeyRef:
              name: version
              key: version

Now let’s check the value for $VERSION:

% kubectl exec -it nginx-68bbbd9d9f-rwtfm -- env | grep ^VERSION
VERSION=1

The time has come for us to update the secret with a new value. Since it is immutable, we delete and recreate it:

% kubectl delete secrets version
secret "version" deleted
% kubectl create secret generic version -o json --dry-run=client --from-literal=version=2  | jq '. += {"immutable": true}' | kubectl apply -f -
secret/version created

We restart the pod and check the value of $VERSION again:

% kubectl delete pod nginx-68bbbd9d9f-rwtfm
pod "nginx-68bbbd9d9f-rwtfm" deleted
% kubectl exec -it nginx-68bbbd9d9f-x4c74 -- env | grep ^VERSION
VERSION=1

What happened here? Why does the pod still have the old value? It seems that there is some caching at play and the new immutable secret is not passed to the pod. But let’s try something different now:

% kubectl delete secrets version
secret "version" deleted
% kubectl create secret generic version -o json --dry-run=client --from-literal=version=3  | jq '. += {"immutable": true}' | kubectl apply -f -
secret/version created
% kubectl scale deployment nginx --replicas 0
deployment.apps/nginx scaled
% kubectl scale deployment nginx --replicas 1
deployment.apps/nginx scaled
% kubectl exec -it nginx-68bbbd9d9f-zkj8h -- env | grep ^VERSION
VERSION=3

From what I understand, if you scale your deployment down and then up again, this gives the kubelet enough time to release the old cached value and pass the new, proper one.

Now, all the above were tested with Docker Desktop Kubernetes. I did similar tests with a multinode microk8s cluster: when restarting the pods, the environment variable was updated properly, but if instead you used a volume mount for the secret, it was not, and you needed to scale down to zero first.
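
For reference, a volume-mount variant of the above deployment could look roughly like this (the mount path is illustrative):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: version
          mountPath: /etc/version
          readOnly: true
      volumes:
      - name: version
        secret:
          secretName: version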

Can a pod belong to 2 workloads?

In the Kubernetes Slack an interesting question was posed:

Hi, can a pod belong to 2 workloads? For example, can a pod belong both the a workload and to the control plane workload?

My initial reaction was that, while a Pod can belong to two (or three, or more) services, it cannot belong to two workloads (Deployments, for example). I put my theory to the test by initially creating a pod with some labels:

apiVersion: v1
kind: Pod
metadata:
  name: caddy
  labels:
    apache: ok
    nginx: ok
spec:
  containers:
  - name: caddy
    image: caddy
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443

Sure enough the pod was created

% kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
caddy   1/1     Running   0          2s

Next I created a replicaSet whose pods have a label that the above (caddy) pod also has:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      nginx: ok
  template:
    metadata:
      labels:
        app: nginx
        nginx: ok
    spec:
      containers:
      - name: nginx
        image: bitnami/nginx
        ports:
        - containerPort: 8080

Since the original pod and the replicaSet share a common label (nginx: ok), the pod is assimilated into the replicaSet, which launches only one additional pod:

% kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
caddy         1/1     Running   0          2m52s
nginx-lmmbk   1/1     Running   0          3s
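
One way to verify the adoption is to look at the caddy pod’s ownerReferences, which should now point at the nginx replicaSet:

% kubectl get pod caddy -o jsonpath='{.metadata.ownerReferences[0].name}'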

We can now ask Kubernetes to create an identical replicaSet that launches apache instead of nginx and has the apache: ok label set.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 2
  selector:
    matchLabels:
      apache: ok
  template:
    metadata:
      labels:
        app: apache
        apache: ok
    spec:
      containers:
      - name: apache
        image: bitnami/apache
        ports:
        - containerPort: 8080

If a pod could be shared among workloads, the new replicaSet would adopt the caddy pod too and would only need to start a single apache pod. Does it?

% kubectl get pod
NAME           READY   STATUS    RESTARTS   AGE
apache-8fwdz   1/1     Running   0          4s
apache-9xwhd   1/1     Running   0          4s
caddy          1/1     Running   0          5m17s
nginx-lmmbk    1/1     Running   0          2m28s

As you can see, it starts two apache pods, and the pods carrying the apache: ok label are three:

 % kubectl get pod -l apache=ok
NAME           READY   STATUS    RESTARTS   AGE
apache-8fwdz   1/1     Running   0          6m20s
apache-9xwhd   1/1     Running   0          6m20s
caddy          1/1     Running   0          11m

% kubectl get rs
NAME     DESIRED   CURRENT   READY   AGE
apache   2         2         2       6m21s
nginx    2         2         2       8m45s

So there you have it, a Pod cannot be shared among workloads.

Writing a Jenkinsfile

I like Jenkins a lot. Even with a plethora of systems that have a vastly better web UI, many of them tailored for specific platforms, it is still my first choice. It is not for many other people, and they are right, because you can easily shoot yourself in the foot at the worst of times. That is why, when people are new to Jenkins, I have an opinionated method to start them working with it: you work only with Multibranch pipelines (even with a single branch), and they are always declarative pipelines:

Introduction

Multibranch pipelines, which are what we would like to make use of at work, are driven by Jenkinsfiles. The language to program a Jenkinsfile is a DSL based on the Groovy language. Groovy is based on (and resembles) Java and thus is vast, as are the Jenkins declarative pipeline DSL and the multitude of plugins that are supported. This guide aims to help you write your first Jenkinsfile when you have no prior experience. As such, it is opinionated. You are welcome to deviate from it once you get more experience with the tooling.

So, with your editor open a new file named Jenkinsfile at the top of your repository and let’s start!

Define a pipeline

To define a pipeline simply type

pipeline {
}

That’s it! You have defined a pipeline!

Lock the pipeline

Assuming we do not want two builds of the same project running concurrently, we acquire a temporary lock

pipeline {
  options {
    lock('poc-pipeline')
  }
}

Now if two different people start the same build, the builds will be executed sequentially

But where will the build run?

Builds run on Jenkins agents. Jenkins agents are labeled and we can select them based on their labels. In the general case we run docker-based builds, and as such we need to select an agent that has docker installed and also provide a container image to be launched for the build to run:

pipeline {
  options {
    lock('poc-pipeline')
  }
  
  agent {
    docker {
      label 'docker'
      image 'busybox'
    }
  }
}

So with the above we select a Jenkins node labeled docker, which will launch a docker container inside which all our intended operations will run.

Build stages

Builds in Jenkins happen in stages. As such we define a stages section in the Jenkinsfile

pipeline {
  options {
    lock('poc-pipeline')
  }
  
  agent {
    docker {
      label 'docker'
      image 'busybox'
    }
  }
  
  stages {
    stage("build") {
    }
    stage("test") {
    }
    stage("deploy") {
    }
  }
}

Above we have defined three stages, build, test and deploy, which will run on any of the Jenkins agents labeled docker, and not necessarily on the same one. Because this can lead to confusion, we require, for now, that all of our build runs on the same node. One way to do this is to have “substages” within a stage. The syntax becomes a bit convoluted when you are not very experienced, but let’s see how it transforms:

pipeline {
  options {
    lock('poc-pipeline')
  }
  
  agent {
    docker {
      label 'docker'
      image 'busybox'
    }
  }
  
  stages {
    stage("acquire node") {
      stages {
        stage("build") {
        }
      
        stage("test") {
        }
    
        stage("deploy") {
        }
      }
    } 
  }
}

The stage acquire node is assigned to a node labeled docker and the “sub-stages” build, test and deploy will run within this node.

Each stage has steps

Each stage in a pipeline executes a series of steps

pipeline {
  options {
    lock('poc-pipeline')
  }
  
  agent {
    docker {
      label 'docker'
      image 'busybox'
    }
  }
  
  stages {
    stage("acquire node") {
      stages {
        stage("build") {
          steps {
          }
        }
      
        stage("test") {
          steps {
          }
        }
    
        stage("deploy") {
          steps {
          }
        }
      }
    } 
  }
}

Time to say Hello, World!

It is now time to make something meaningful with the Jenkinsfile, like having it tell us Hello, World!. We will show you two ways to do this: one via a script section, which allows us to run some Groovy code (in case we need to check some logic or something), and one using direct sh commands:

pipeline {
  options {
    lock('poc-pipeline')
  }
  
  agent {
    docker {
      label 'docker'
      image 'busybox'
    }
  }
  
  stages {
    stage("acquire node") {
      stages {
        stage("build") {
          steps {
            script {
              // This is Groovy code here
              println "This is the build stage executing"
            }
          }
        }
      
        stage("test") {
          steps {
            sh """
            echo This is the test stage executing
            """
          }
        }
    
        stage("deploy") {
          steps {
            sh """
            echo This is the deploy stage executing
            """
            script {
              println "Hello, World!"
            }
          }
        }
      }
    } 
  }
}

Congratulations! You have now created a complete Jenkinsfile.

Epilogue

Where do we go from here? You are set for your Jenkins journey. By using the above boilerplate and understanding how it is put together, you can now specify jobs, have them described in code, and run them. Most likely you will need to read about credentials in order to perform operations against services where authentication is needed.
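
As a sketch of how that could look, the deploy stage might wrap its steps in withCredentials (the credentials ID registry-creds below is hypothetical; you would create your own entry in the Jenkins credentials store):

        stage("deploy") {
          steps {
            // 'registry-creds' is a hypothetical username/password credential
            // stored in Jenkins; the bound variables only exist inside this block.
            withCredentials([usernamePassword(credentialsId: 'registry-creds',
                                              usernameVariable: 'REGISTRY_USER',
                                              passwordVariable: 'REGISTRY_PASS')]) {
              sh '''
              echo Deploying as $REGISTRY_USER
              '''
            }
          }
        }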

I understand there is a lot of curly-brace hell, which can be abstracted by extending the pipeline DSL (I am, very slowly, experimenting with Pkl to see how to best achieve this, but here is a book for Groovy DSLs if you like).

Happy SysAdmin Day

The CrowdStrike thing happened a Friday too soon :) Which got me thinking. We give third-party software a lot of permissions in kernel mode, when in fact its developers are most likely not involved in kernel development. And we ship updates to this software that get interpreted and execute actions in kernel space.

The only real difference from malware here is intent. Which reminded me of this old story where the author of an adware program described how they used TinyScheme for their purposes.

Or the case when a friend figured out that a driver was crashing because it was using an XML parser (not designed for kernel space) to parse five lines of XML.

Or when Prolog was used in the Windows NT kernel.

Random thoughts of the day.

Have a lovely weekend.