Can I have a serviceless Ingress (part 2)

Now that ingress-nginx is being retired, people are looking for alternatives, and haproxy-ingress is one choice. In a previous post I showed how one can have an Ingress object that is not linked to any Service, leaving any required processing to the ingress controller:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: not-nginx
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      return 302 https://new_web_server_here$request_uri;
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - old_name_here
    secretName: secret_name_here
  rules:
  - host: old_name_here

Unfortunately, in the case of haproxy-ingress the above will not work. The Ingress object needs to be linked to a Service, even if that Service does not exist:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: not-nginx
  annotations:
    haproxy-ingress.github.io/redirect-to: https://new_web_server_here
spec:
  ingressClassName: haproxy
  tls:
  - hosts:
    - old_name_here
    secretName: secret_name_here
  rules:
  - host: old_name_here
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: no-service
            port:
              number: 80

You may also want to use redirect-to-code if you want to set the 3xx status code.
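For instance, setting the status code alongside the target would look something like the fragment below (a sketch; I am assuming the redirect-to-code configuration key as named in the haproxy-ingress docs, with 302 being the default):

```yaml
metadata:
  annotations:
    haproxy-ingress.github.io/redirect-to: https://new_web_server_here
    haproxy-ingress.github.io/redirect-to-code: "301"
```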

A bare whiteboard application

A couple of years ago, a friend commented in a Slack that there is no whiteboard / sketch application readily available in macOS. Maybe there is and we did not know, but even at the time it was suggested that something like https://jspaint.app might do the job. Still, the thing stayed at a corner of my mind.

About a week ago, I watched the ultimate introduction to pygame and the question returned. But now we also have LLMs to play some design and implementation ping pong with, so I started asking Claude a bit about it. It kept writing tons and tons of stuff, when all I wanted was a bare whiteboard with no features at all: something to (hand) draw lines on, no undo, no colors, not even a save-image feature. Here is what I got:

import pygame
import sys
import argparse

options = argparse.ArgumentParser(description='Start a pygame window with configurable dimensions')
options.add_argument('--width', type=int, default=800, help='Window width (default: 800)')
options.add_argument('--height', type=int, default=600, help='Window height (default: 600)')
args = options.parse_args()

pygame.init()

screen = pygame.display.set_mode((args.width, args.height))
pygame.display.set_caption('pygame sketch')
clock = pygame.time.Clock()
a = (-1, -1)  # sentinel: no previous point, i.e. we are not mid-stroke

screen.fill((250, 250, 160))  # paper-like background
while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()

        # releasing the mouse button ends the current stroke
        if event.type == pygame.MOUSEBUTTONUP:
            a = (-1, -1)

    mouse_buttons = pygame.mouse.get_pressed()
    if mouse_buttons[0]:  # left button held down: we are drawing
        if a[0] == -1:
            a = pygame.mouse.get_pos()  # the stroke starts here
        else:
            # connect the previous position to the current one
            b = pygame.mouse.get_pos()
            pygame.draw.line(screen, 'red', a, b, 2)
            a = b

    pygame.display.flip()
    clock.tick(60)  # cap at 60 fps

I do not think I will make anything more out of it. But it was a fun PoC and my first pygame application.

LLMs are my first go-to person

I have a go-to guy. Everyone should. But he decided to change continents. And this makes it difficult to challenge his mind on a frequent basis. I am now using Claude for this. It is always there and cannot complain, whatever crazy I may ask of it. It has even transformed into my repository of crazy ideas / projects that will never happen. Gone are the notebooks stashed deep in my drawers.

The LLM also never says no. It never denies a thing. It will never say something like “protect your time from such bullshit”. It will reframe the answer in a “positive” way: “While X is not the best technology for Y, it might be possible to achieve …”.

I iterate (“discuss”) a lot. Without code samples.

Maybe when something becomes well formulated, I will pick my go-to guy’s brain. Because it is easy to get high on your own supply.

Happy SysAdmin Day

One more SysAdmin Day. I don’t have any war story to share, so I will copy-paste a comment I posted on LinkedIn:


Let me tell you a story that involves a real (brick & mortar) Architect:

– Sir, I need to fill you in on the latest details on the construction of the mansion and to ask for a favor.
– What is it?
– Sir, can you please ask your wife to not change her mind every two days? We tear down walls and raise them and we will never finish that way
– Look, I pay you a ton of money to deal with my wife’s expectations. Otherwise I’d be doing it

Forget the crudeness of the client. The moral of the story is that you are being paid to operate in a volatile environment with changing requirements outside your control. You need to accept the fact and learn to navigate it.

On a branch of the prehistory of GRNOG

Originally a LinkedIn post, but I thought it deserved better posterity:

#GRNOG18 marks 10 years of GRNOG. Faidon Liambotis, going through his email archives, traced the roots of GRNOG to two (sometimes overlapping) groups of people: the Greek IPv6 working group, then led by Athanassios Liakopoulos, and the Greek Postmasters mailing list (for which I lent a hand):

It was Angelo Karageorgiou (then leading the technical capability of a now defunct ISP) who reached out to me, because UCEPROTECT was listing Greek address space by the bucket and refusing to delist it, and asked whether we could gather fellow postmasters to work on the problem. I knew some of them and reached out, and a meeting was held at the OTE building, facilitated by the ever so nice Pandelis Papanikolaou. Faidon and George Kargiotakis were also there on behalf of GRNET – Greek Research & Technology Network and offered to host the mailing list (I think on Sympa?).

Two more meetings / dinners were held by the Greek Postmasters, and eventually traffic on the list died out (ISPs were being absorbed, people, including me, changed jobs, other ISPs outsourced their email to Microsoft). It was then that Andreas Polyrakis took over and, with the rest of the team, provided the necessary energy, merging the two groups and possibly a few more, and the rest is history, as they say.

Looking back, I have to say that I could never have built such an amazing community as the one Andreas, Faidon, Kostas Zorbadelos, Michalis Oikonomakos, Tasos Karaliotas and whoever else lent them a hand have built.

You guys rock and I am a proud observer of your work. Don’t ever change.

Enough with the pre-history; history is made by those who do the work (i.e., not me).

How I install portainer

Portainer is an interesting piece of software that allows you to manage running containers in Docker, Docker Swarm and, of course, Kubernetes. There are instructions on how to run the server part on your system but, as usual, I like to have my own twist on things.

While I am no big fan of helm, we are going to use it here. So let’s add the repository:

helm repo add portainer https://portainer.github.io/k8s/
helm repo update

Now you can peek at the chart’s values with helm show values portainer/portainer and decide how to install the thing:

helm upgrade --install portainer portainer/portainer \
--create-namespace -n portainer \
--set image.tag=lts \
--set service.type=ClusterIP \
--set persistence.size=8G
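If you prefer to keep settings in a file rather than on the command line, the same flags can be expressed as a values file (a sketch; the keys mirror the --set flags above) and passed with -f values.yaml:

```yaml
# values.yaml -- mirrors the --set flags above
image:
  tag: lts
service:
  type: ClusterIP
persistence:
  size: 8G
```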

You will notice that I do not have helm install an Ingress too. This is because we may be running different ingress controllers for different things, and we might want to do stuff that goes beyond what the default Ingress object constructed by the helm chart does. In my case this is using cert-manager:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: portainer
  namespace: portainer
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: "true"
    cert-manager.io/issuer: "letsencrypt-portainer"
    cert-manager.io/duration: "2160h"
    cert-manager.io/renew-before: "360h"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - portainer.example.com
    secretName: portainer-example-com-tls
  rules:
  - host: portainer.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: portainer
            port:
              number: 9000

I hope this helps in starting your Portainer journey.

systemd and the picolisp mailing list manager

In the past I have blogged about how one can use picolisp’s mailing list manager (.tgz file) with docker. But this might not be practical for some people, who might opt for a more “traditional” approach: running it via systemd. For that you need a systemd service file. Assuming that you run list@example.com and you have a user list on your system, it would look like:

[Unit]
Description=MailingList

[Service]
User=list
ExecStart=/home/list/mailing.l
WorkingDirectory=/home/list
Restart=on-failure
RestartSec=30s

[Install]
WantedBy=multi-user.target

Save the unit under /etc/systemd/system/ (say, as mailinglist.service), run systemctl daemon-reload, and then systemctl enable --now mailinglist to start it and have it start at boot. As a reminder, you subscribe to such a list by sending email to the list with Subscribe in the subject line, and you unsubscribe with Unsubscribe in the subject line. Emails sent to the list by non-members are silently dropped.

Working with the kubernetes dashboard

[ Yes, headlamp is a better choice for this ]

Sometimes when you are working with microk8s, you may want to run the Kubernetes dashboard. We first enable it with microk8s enable dashboard. We assume that microk8s enable rbac and microk8s enable metrics-server have already been run. The dashboard pod runs in the kube-system namespace.

To access the dashboard we now create a service account which will be used for logging into the system: kubectl -n kube-system create sa kubernetes-dashboard-george

We bind this account to the cluster-admin role:

# kubernetes-dashboard-george.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-george
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-george
  namespace: kube-system

We apply this with something like kubectl apply -f kubernetes-dashboard-george.yaml

And now we can request a login token to access the dashboard with kubectl -n kube-system create token kubernetes-dashboard-george

We are almost there. On one of the cluster machines we can run the port-forward command kubectl -n kube-system port-forward --address=0.0.0.0 svc/kubernetes-dashboard 8443:443

And now all that is left is to access the dashboard. Assuming one of our machines has the IP address 172.31.1.13, we can use the nip.io trick and go to https://ip-172-31-1-13.nip.io:8443/#/pod?namespace=default
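If port-forwarding feels too ad-hoc, the dashboard can also be exposed with an Ingress. A minimal sketch, assuming the nginx ingress controller and pointing at the kubernetes-dashboard service we port-forwarded above (the hostname is a placeholder; add TLS and authentication to taste before using this anywhere real):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    # the dashboard itself serves HTTPS, so nginx must speak TLS to the backend
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - host: dashboard.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
```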

resurrecting a comment from 2010

There are a number of times I have wanted to point people to an old comment I made on another blog, and I always have a hard time finding it. So I am keeping a local copy for posterity. In general it is about an organization’s long-term thinking vs a person’s thinking during their tenure in said organization:

They are not blind. They simply work within their time-frame of maintaining their job in email marketing. How much is this going to be? Three, Five years? Then they will switch subject and will not care for the ruins left behind. People in marketing and management are always that “blind” because they care more about their bonuses than the lifetime of the company they work for. As for the demise of their previous company, it is never their fault, right?

Obviously there are good people in marketing and management. I aim at the “bad apples” here.

PS: A longer discussion here.

Can I use ingress-nginx as a reverse proxy?

Questions at the Kubernetes Slack are always interesting and sometimes nerd-sniping. One such question came along in the #ingress-nginx-users channel, where a user was trying to make the nginx controller also work as a reverse proxy for a site outside of the Kubernetes cluster in question. The user tried to do this with configuration snippets, without using any Ingress object at all. However, the ingress-nginx maintainers discourage configuration snippets, as they are scheduled to be deprecated.

Now, normally one would solve this problem by deploying an nginx configured as a reverse proxy, creating a Service, and linking it with an Ingress object. Assuming you run Docker Desktop Kubernetes and you want to reverse proxy api.chucknorris.io, a solution would look like this. So nothing really fancy, just typical stuff.

Is it possible, though, to achieve this through clever use of annotations, without any Deployments? I thought that I could do it with an ExternalName Service. Defining such a Service is not enough, because ingress-nginx works with Endpoints and not Service objects under the hood, and Endpoints are not created automatically for an ExternalName. Enter EndpointSlices, where you can define the endpoints on your own. You can even do so with an address type of FQDN (beware that this seems to be heading for deprecation, but for now it works). You end up with a solution that looks like this:

# Assuming Docker Desktop Kubernetes, this leverages ingress-nginx
# to reverse proxy chucknorris.local to api.chucknorris.io. Test with:
# curl -k -v -H "Host: chucknorris.local" https://kubernetes.docker.internal/jokes/random
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: chucknorris
spec:
  selfSigned: {}
---
apiVersion: v1
kind: Service
metadata:
  name: chucknorris
spec:
  type: ExternalName
  externalName: api.chucknorris.io
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: chucknorris-1
  labels:
    kubernetes.io/service-name: chucknorris
addressType: FQDN
ports:
- protocol: TCP
  port: 443
endpoints:
- addresses:
  - "api.chucknorris.io"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: "chucknorris"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "true"
    nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "2"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"
    nginx.ingress.kubernetes.io/proxy-ssl-name: api.chucknorris.io
    nginx.ingress.kubernetes.io/upstream-vhost: api.chucknorris.io
    nginx.ingress.kubernetes.io/proxy-ssl-secret: default/chucknorris-local
  name: chucknorris
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - chucknorris.local
    secretName: chucknorris-local
  rules:
  - host: chucknorris.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: chucknorris
            port:
              number: 443
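Should the FQDN address type indeed go away, the fallback would be an IPv4 EndpointSlice pointing at whatever api.chucknorris.io resolves to. A sketch (203.0.113.10 is a placeholder address; you would have to resolve the name yourself and keep the slice in sync with DNS, which is exactly the chore the FQDN type spares us):

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: chucknorris-1
  labels:
    kubernetes.io/service-name: chucknorris
addressType: IPv4
ports:
- protocol: TCP
  port: 443
endpoints:
- addresses:
  - "203.0.113.10"  # placeholder: whatever api.chucknorris.io resolves to
```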