LLMs are my first go-to person

I have a go-to guy. Everyone should. But he decided to change continents, and this makes it difficult to challenge his mind on a frequent basis. I am now using Claude for this. It is always there and cannot complain, no matter what craziness I ask of it. It has even become my repository of crazy ideas and projects that will never happen. Gone are the notebooks stashed deep in my drawers.

The LLM also never says no. Never denies a thing. It will never say something like “protect your time from such bullshit”. It will reframe the answer in a “positive” way: “While X is not the best technology for Y, it might be possible to achieve …”.

I iterate (“discuss”) a lot. Without code samples.

Maybe when something becomes well formulated, I will pick my go-to guy’s brain. Because it is easy to get high on your own supply.

Happy SysAdmin Day

One more SysAdmin Day. I don’t have any war story to share, so I will copy-paste a comment I posted on LinkedIn:


Let me tell you a story that involves a real (brick & mortar) Architect:

– Sir, I need to fill you in on the latest details of the mansion’s construction, and to ask a favor.
– What is it?
– Sir, can you please ask your wife not to change her mind every two days? We tear down walls and raise them again, and we will never finish that way.
– Look, I pay you a ton of money to deal with my wife’s expectations. Otherwise I’d be doing it.

Forget the crudeness of the client. The moral of the story is that you are being paid to operate in a volatile environment with changing requirements outside your control. You need to accept the fact and learn to navigate it.

On a branch of the prehistory of GRNOG

Originally a LinkedIn post, but I thought it deserved better posterity:

#GRNOG18 marks 10 years of GRNOG. Faidon Liambotis, going through his email archives, traced the roots of GRNOG to two (sometimes overlapping) groups of people: the Greek IPv6 working group, then led by Athanassios Liakopoulos, and the Greek Postmasters mailing list (for which I lent a hand):

It was Angelo Karageorgiou (then leading the technical capability of a now defunct ISP) who reached out to me, because UCEPROTECT was listing Greek address space by the bucket and refusing to delist it, and asked whether we could gather fellow Postmasters to work on the problem. I knew some of them and reached out, and a meeting was held at the OTE building, facilitated by the ever so nice Pandelis Papanikolaou. Faidon and George Kargiotakis were also there on behalf of GRNET – Greek Research & Technology Network and offered to host the mailing list (I think on Sympa?).

Two more meetings / dinners were held by the Greek Postmasters, and eventually traffic on the list died out (ISPs were being absorbed, people, including me, changed jobs, other ISPs outsourced their email to Microsoft). It was then that Andreas Polyrakis took over and, with the rest of the team, provided the necessary energy, merging the two groups and possibly a few more, and the rest, as they say, is history.

Looking back, I have to say that I could never have built such an amazing community as the one Andreas, Faidon, Kostas Zorbadelos, Michalis Oikonomakos, Tasos Karaliotas and whoever else lent them a hand have built.

You guys rock and I am a proud observer of your work. Don’t ever change.

Enough with the pre-history; history is made by those who are doing the work (i.e., not me).

How I install portainer

Portainer is an interesting piece of software that allows you to manage running containers in Docker, Docker Swarm and, of course, Kubernetes. There are instructions on how to run the server part on your system but, as usual, I like to have my own twist on things.

While I am no big fan of helm, we are going to use it here. So let’s add the repository:

helm repo add portainer https://portainer.github.io/k8s/
helm repo update

Now you can peek at the chart’s default values with helm show values portainer/portainer and decide how to install the thing:

helm upgrade --install portainer portainer/portainer \
--create-namespace -n portainer \
--set image.tag=lts \
--set service.type=ClusterIP \
--set persistence.size=8G

You will notice that I do not have helm install an Ingress too. I do this because we may be running different ingress controllers for different things, and we may want to do things that go beyond what the default Ingress object constructed by the helm chart allows. In my case this means using cert-manager:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: portainer
  namespace: portainer
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: "true"
    cert-manager.io/issuer: "letsencrypt-portainer"
    cert-manager.io/duration: "2160h"
    cert-manager.io/renew-before: "360h"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - portainer.example.com
    secretName: portainer-example-com-tls
  rules:
  - host: portainer.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: portainer
            port:
              number: 9000

I hope this helps you start your Portainer journey.

systemd and the picolisp mailing list manager

In the past I have blogged about how one can use picolisp’s mailing list manager (.tgz file) with docker. But this might not be practical for everyone; some will prefer a more “traditional” approach and run it via systemd. For that you need a systemd service file. Assuming that you run list@example.com and you have a user list on your system, it would look like:

[Unit]
Description=MailingList

[Service]
User=list
ExecStart=/home/list/mailing.l
WorkingDirectory=/home/list
Restart=on-failure
RestartSec=30s

[Install]
WantedBy=multi-user.target

As a reminder, you subscribe to such a list by sending email to the list with Subscribe in the subject line and you unsubscribe with Unsubscribe in the subject line. Emails sent to the list by non-members are silently dropped.

Working with the kubernetes dashboard

[ Yes, headlamp is a better choice for this ]

Sometimes when you are working with microk8s, you may want to run the Kubernetes dashboard. We first enable it with microk8s enable dashboard, assuming we have already run microk8s enable rbac and microk8s enable metrics-server. The dashboard pod runs in the kube-system namespace.

To access the dashboard we now create a service account which will be used for logging into the system: kubectl -n kube-system create sa kubernetes-dashboard-george
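
If you prefer manifests over imperative commands, the equivalent ServiceAccount object (same name and namespace as the command above) looks like:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-dashboard-george
  namespace: kube-system
```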

We bind this account to the cluster-admin ClusterRole:

# kubernetes-dashboard-george.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-george
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-george
  namespace: kube-system

We apply this with something like kubectl apply -f kubernetes-dashboard-george.yaml

And now we can request a login token to access the dashboard with kubectl -n kube-system create token kubernetes-dashboard-george

We are almost there. On one of the cluster machines we can run the port-forward command kubectl -n kube-system port-forward --address=0.0.0.0 svc/kubernetes-dashboard 8443:443

And now all that is left is to access the dashboard. Assuming one of our machines has the IP address 172.31.1.13, we can use the nip.io trick and get to https://ip-172-31-1-13.nip.io:8443/#/pod?namespace=default

resurrecting a comment from 2010

There are a number of times when I want to point people to an old comment I made on another blog, and I always have a hard time finding it. So I am keeping a local copy for posterity. In general, it is about an organization’s long-term thinking vs a person’s thinking during their tenure in said organization:

They are not blind. They simply work within their time-frame of maintaining their job in email marketing. How long is this going to be? Three, five years? Then they will switch subject and will not care for the ruins left behind. People in marketing and management are always that “blind” because they care more about their bonuses than the lifetime of the company they work for. As for the demise of their previous company, it is never their fault, right?

Obviously there are good people in marketing and management. I aim at the “bad apples” here.

PS: A longer discussion here.

Can I use ingress-nginx as a reverse proxy?

Questions at the Kubernetes slack are always interesting and sometimes nerd-sniping. One such question came along in the #ingress-nginx-users channel, where a user was trying to make the nginx controller also work as a reverse proxy for a site outside of the Kubernetes cluster in question. The user tried to do this with configuration snippets, without using any Ingress object at all. However, the ingress-nginx maintainers discourage configuration snippets, as they are scheduled to be deprecated.

Now normally, to solve this problem, one would deploy an nginx configured as a reverse proxy, create a Service, and link it with an Ingress object. Assuming you run Docker Desktop Kubernetes and you want to reverse proxy api.chucknorris.io, such a solution would look like this. Nothing really fancy, just typical stuff.

Is it possible, though, to achieve this through clever use of annotations, without any Deployments? I thought I could do it with an ExternalName service. Defining such a service is not enough, because ingress-nginx works with Endpoints, not Service objects, under the hood, and Endpoints are not created automatically for an ExternalName. Enter EndpointSlices, with which you can define the endpoints on your own. You can even do so with an address type of FQDN (beware that this seems to be heading for deprecation, but for now it works). You end up with a solution that looks like this:

# Assuming Docker Desktop Kubernetes, this leverages ingress-nginx
# to reverse proxy api.chucknorris.io:
# curl -k -v -H "Host: chucknorris.local" https://kubernetes.docker.internal/jokes/random
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: chucknorris
spec:
  selfSigned: {}
---
apiVersion: v1
kind: Service
metadata:
  name: chucknorris
spec:
  type: ExternalName
  externalName: api.chucknorris.io
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: chucknorris-1
  labels:
    kubernetes.io/service-name: chucknorris
addressType: FQDN
ports:
- protocol: TCP
  port: 443
endpoints:
- addresses:
  - "api.chucknorris.io"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: "chucknorris"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "true"
    nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "2"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"
    nginx.ingress.kubernetes.io/proxy-ssl-name: api.chucknorris.io
    nginx.ingress.kubernetes.io/upstream-vhost: api.chucknorris.io
    nginx.ingress.kubernetes.io/proxy-ssl-secret: default/chucknorris-local
  name: chucknorris
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - chucknorris.local
    secretName: chucknorris-local
  rules:
  - host: chucknorris.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: chucknorris
            port:
              number: 443

Writing a tiny reverse proxy in golang

I faced an unusual case at work where I needed to write a reverse proxy that would inject into each request a specific header, read from a file. While I do not write professionally in Go, I thought it would be good enough for the task at hand. I did not know where to start, and googling did show some other efforts, so I asked at Golang’s Slack. When in doubt, always ask; hopefully someone will pick it up and answer. I was pointed to NewSingleHostReverseProxy. So the simplest reverse proxy in Go can look like this:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	target, _ := url.Parse("https://api.chucknorris.io")
	proxy := httputil.NewSingleHostReverseProxy(target)
	http.Handle("/", proxy)
	err := http.ListenAndServe(":8080", nil)
	if err != nil {
		log.Fatal(err)
	}
}

This is the simplest reverse proxy and if you call it with curl -H 'Host: api.chucknorris.io' http://localhost:8080/jokes/random you are going to get back a joke.

But say we want to add a header, what do we do? We can define a Director function in our program that will allow us to do so:

proxy := httputil.NewSingleHostReverseProxy(target)
proxy.Director = func(req *http.Request) {
	req.Header.Set("X-Foo", "Bar")
}

Now trying the same curl command we get the error 2024/11/17 19:41:53 http: proxy error: unsupported protocol scheme "". That is because by defining our own Director function we have replaced the default one, so we would have to rebuild the scheme (and a couple of other fields) ourselves. There is, though, a better way to do this: keep a reference to the previous, default Director:

	proxy := httputil.NewSingleHostReverseProxy(target)
	d := proxy.Director
	proxy.Director = func(req *http.Request) {
		d(req)
		req.Header.Set("X-Foo", "Bar")
	}

And now you are set. I am certain there are better and more performant ways to do this, but for the PoC I was working on, that was good enough.

[*] I used api.chucknorris.io for the blog post because it is one of the simplest open APIs out there to try stuff.
[**] The complete reverse proxy, taking care of error handling and environment variables is 65 lines.

Notes on deploying DexIdp on Kubernetes

dexidp.io contains valuable information on how to configure and run DexIdp but, even though they provide a docker container, there is scarce information on how to actually deploy it.

So let’s create a DexIdp deployment in a Docker Desktop Kubernetes:

kubectl create deployment dexidp --image dexidp/dex

We see from the Dockerfile that dex is started by a custom entrypoint written in Go. This essentially executes gomplate, yet another templating tool written in Go. It reads /etc/dex/config.docker.yaml and produces a configuration file in /tmp, which is then used to start the server.

So the best way to approach this is to get a local copy of this file (with kubectl cp, for example), edit the file as we see fit and then make it a ConfigMap:

kubectl cp dexidp-79ff7cc5ff-p527s:/etc/dex/config.docker.yaml config.docker.yaml
# edit config.docker.yaml as needed
kubectl create cm dex-config --from-file config.docker.yaml

We can now modify the deployment to mount the ConfigMap:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dex
  name: dex
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dex
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: dex
    spec:
      volumes:
      - name: dex-config
        configMap:
          name: dex-config
          items:
          - key: "config.docker.yaml"
            path: "config.docker.yaml"
      containers:
      - image: dexidp/dex:v2.41.1
        name: dex
        volumeMounts:
        - name: dex-config
          mountPath: /etc/dex
        ports:
        - containerPort: 5556
          name: dex
        - containerPort: 5558
          name: telemetry
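
Note that kubectl create deployment does not create a Service for dex. If you need to reach it from inside the cluster, a minimal sketch would look like this (the name is my choice; selector and port match the deployment above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dex
spec:
  selector:
    app: dex
  ports:
  - name: dex
    port: 5556
    targetPort: 5556
```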

You can proceed from there with any specific configuration your setup requires and even make your own helm charts. I know there are already existing helm charts but sometimes, when first in contact with a new technology, it is best not to have to go over charts that try to cover all possible angles, as their makers rightfully try to accommodate everybody knowledgeable of their software.

So this is the DexIdp newbie’s guide to deploying on Kubernetes: do this, learn the ropes of the software, then proceed with helm or other deployment styles.