Can I have a serviceless Ingress (part 2)

Now that ingress-nginx is being retired, people are looking for alternatives, and haproxy-ingress is one such choice. In a previous post I showed how one can have an Ingress object that is not linked to any service, leaving any required processing to the ingress controller:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: not-nginx
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      return 302 https://new_web_server_here$request_uri;
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - old_name_here
    secretName: secret_name_here
  rules:
  - host: old_name_here

Unfortunately, in the case of haproxy-ingress the above won’t work. The Ingress object needs to be linked to a service, even if that service does not exist:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: not-nginx
  annotations:
    haproxy-ingress.github.io/redirect-to: https://new_web_server_here
spec:
  ingressClassName: haproxy
  tls:
  - hosts:
    - old_name_here
    secretName: secret_name_here
  rules:
  - host: old_name_here
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: no-service
            port:
              number: 80

You may also want to use redirect-to-code if you want to control which 3xx status code is returned.
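
For instance, assuming your haproxy-ingress version exposes the redirect-to-code configuration key as an annotation (check the documentation of your release; the 301 below is only illustrative), the relevant metadata could look like this:

metadata:
  annotations:
    haproxy-ingress.github.io/redirect-to: https://new_web_server_here
    haproxy-ingress.github.io/redirect-to-code: "301"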

How I install Portainer

Portainer is an interesting piece of software that allows you to manage running containers in Docker, Docker Swarm and of course, Kubernetes. There are instructions on how to run the server part on your system, but as usual, I like to have my own twist on things.

While I am no big fan of helm, we are going to use it here. So let’s add the repository:

helm repo add portainer https://portainer.github.io/k8s/
helm repo update
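
You can peek at the chart’s default values with:

helm show values portainer/portainer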

Having looked at those, you can decide how to install the thing:

helm upgrade --install portainer portainer/portainer \
--create-namespace -n portainer \
--set image.tag=lts \
--set service.type=ClusterIP \
--set persistence.size=8G

You will notice that I do not have helm install an Ingress too. I do this because we may be running different ingress controllers for different things, and we may want to do stuff that goes beyond what the default Ingress object constructed by the helm chart does. In my case this means using cert-manager:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: portainer
  namespace: portainer
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: "true"
    cert-manager.io/issuer: "letsencrypt-portainer"
    cert-manager.io/duration: "2160h"
    cert-manager.io/renew-before: "360h"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - portainer.example.com
    secretName: portainer-example-com-tls
  rules:
  - host: portainer.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: portainer
            port:
              number: 9000
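
The Ingress above references an Issuer named letsencrypt-portainer. A minimal sketch of such an Issuer, assuming ACME with HTTP-01 solving through the nginx ingress class (the e-mail and the account key secret name are placeholders), could look like this:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-portainer
  namespace: portainer
spec:
  acme:
    # Let's Encrypt production endpoint; use the staging endpoint while testing
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-portainer-account-key
    solvers:
    - http01:
        ingress:
          # on older cert-manager releases this field is called "class"
          ingressClassName: nginx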

I hope this helps you get started on your Portainer journey.

Working with the Kubernetes dashboard

[ Yes, headlamp is a better choice for this ]

Sometimes when you are working with microk8s, you may want to run the Kubernetes dashboard. We first enable it with microk8s enable dashboard. We assume that microk8s enable rbac and microk8s enable metrics-server have already been run. The dashboard pod runs in the kube-system namespace.

To access the dashboard we now create a service account which will be used for logging into the system: kubectl -n kube-system create sa kubernetes-dashboard-george

We bind this account to the cluster-admin role:

# After applying this, a login token can be requested with: kubectl -n kube-system create token kubernetes-dashboard-george
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-george
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-george
  namespace: kube-system

We apply this with something like kubectl apply -f kubernetes-dashboard-george.yaml

And now we can request a login token to access the dashboard with kubectl -n kube-system create token kubernetes-dashboard-george

We are almost there. Within the cluster we can run the port-forward command kubectl -n kube-system port-forward --address=0.0.0.0 svc/kubernetes-dashboard 8443:443

And now all that is left is to access the dashboard. Assuming one of our machines has the IP address 172.31.1.13, we can use the nip.io trick and browse to https://ip-172-31-1-13.nip.io:8443/#/pod?namespace=default

Can I use ingress-nginx as a reverse proxy?

Questions on the Kubernetes Slack are always interesting and sometimes nerd-sniping. One such question came along in the #ingress-nginx-users channel, where a user was trying to make the nginx controller also work as a reverse proxy for a site outside of the Kubernetes cluster in question. The user tried to do this with configuration snippets, without using any Ingress object at all. However, the ingress-nginx maintainers discourage configuration snippets, as they are scheduled to be deprecated.

Now, normally to solve this problem, one would deploy an nginx configured as a reverse proxy, create a Service for it, and link that to an Ingress object. Assuming you run Docker Desktop Kubernetes and you want to reverse proxy api.chucknorris.io, a solution would look roughly like the sketch below. Nothing really fancy, just typical stuff.
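
The nginx configuration at the heart of that conventional setup could be something like the following ConfigMap (names are illustrative); you would mount it under /etc/nginx/conf.d in a stock nginx Deployment, put a Service in front of it, and point an Ingress at that Service:

apiVersion: v1
kind: ConfigMap
metadata:
  name: chucknorris-proxy
data:
  default.conf: |
    server {
      listen 8080;
      location / {
        # forward everything to the external API, keeping Host and SNI correct
        proxy_set_header Host api.chucknorris.io;
        proxy_ssl_server_name on;
        proxy_pass https://api.chucknorris.io;
      }
    }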

Is it possible though that one can achieve this through clever use of annotations, without any deployments? I thought that I could do it with an ExternalName service. Defining such a service is not enough though, because ingress-nginx works with Endpoints and not Service objects under the hood, and Endpoints are not created automatically for an ExternalName service. Enter EndpointSlices: you can define the endpoints on your own, even with an address type of FQDN (beware that the FQDN address type seems to be heading for deprecation, but for now it works). You end up with a solution that looks like this:

# Assuming Docker Desktop Kubernetes, the manifests below leverage ingress-nginx
# to reverse proxy api.chucknorris.io; test with:
# curl -k -v -H "Host: chucknorris.local" https://kubernetes.docker.internal/jokes/random
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: chucknorris
spec:
  selfSigned: {}
---
apiVersion: v1
kind: Service
metadata:
  name: chucknorris
spec:
  type: ExternalName
  externalName: api.chucknorris.io
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: chucknorris-1
  labels:
    kubernetes.io/service-name: chucknorris
addressType: FQDN
ports:
- protocol: TCP
  port: 443
endpoints:
- addresses:
  - "api.chucknorris.io"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: "chucknorris"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "true"
    nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "2"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"
    nginx.ingress.kubernetes.io/proxy-ssl-name: api.chucknorris.io
    nginx.ingress.kubernetes.io/upstream-vhost: api.chucknorris.io
    nginx.ingress.kubernetes.io/proxy-ssl-secret: default/chucknorris-local
  name: chucknorris
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - chucknorris.local
    secretName: chucknorris-local
  rules:
  - host: chucknorris.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: chucknorris
            port:
              number: 443

Notes on deploying DexIdp on Kubernetes

dexidp.io contains valuable information on how to configure and run DexIdp, but even though they provide a Docker container, there is scarce information on how to actually run it on Kubernetes.

So let’s create a DexIdp deployment in Docker Desktop Kubernetes:

kubectl create deployment dexidp --image dexidp/dex

We see from the Dockerfile that dex is started by a custom entrypoint written in Go, which essentially executes gomplate. Gomplate is yet another Go templating tool. The entrypoint reads /etc/dex/config.docker.yaml and produces a configuration file in /tmp, which is then used to start the server.

So the best way to approach this is to get a local copy of this file, for example with kubectl cp, edit the file as we see fit and then make it a ConfigMap:

kubectl cp dexidp-79ff7cc5ff-p527s:/etc/dex/config.docker.yaml config.docker.yaml
# edit config.docker.yaml as needed
kubectl create cm dex-config --from-file config.docker.yaml

We can now modify the deployment to mount the ConfigMap:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dex
  name: dex
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dex
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: dex
    spec:
      volumes:
      - name: dex-config
        configMap:
          name: dex-config
          items:
          - key: "config.docker.yaml"
            path: "config.docker.yaml"
      containers:
      - image: dexidp/dex:v2.41.1
        name: dex
        volumeMounts:
        - name: dex-config
          mountPath: /etc/dex
        ports:
        - containerPort: 5556
          name: dex
        - containerPort: 5558
          name: telemetry
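
Assuming you applied the Deployment above (named dex) and kept the default issuer path /dex from the sample configuration, a quick smoke test could look like this:

kubectl expose deployment dex --port 5556 --target-port 5556
kubectl port-forward svc/dex 5556:5556
# in another terminal:
curl http://127.0.0.1:5556/dex/.well-known/openid-configuration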

You can proceed from there with any specific configuration your setup requires and even make your own helm charts. I know there are already existing helm charts, but sometimes, when first in contact with a new technology, it is best not to have to go over helm charts that try to cover all possible angles, as their makers rightfully try to accommodate everybody knowledgeable about their software.

So this is the DexIdp newbie’s guide to deploying on Kubernetes: do this, learn the ropes of the software, then proceed with helm or other deployment styles.

A peculiarity with Kubernetes immutable secrets

This post again comes from a question that popped up in #kubernetes-users. A user was updating an immutable secret and yet the value was not propagating. But what is an immutable secret in the first place?

Once a Secret or ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data field. You can only delete and recreate the Secret. Existing Pods maintain a mount point to the deleted Secret – it is recommended to recreate these pods.

So the workflow is: Delete old immutable secret, create a new one, restart the pods that use it.

Let’s create an immutable secret from the command line:

% kubectl create secret generic version -o json --dry-run=client --from-literal=version=1  | jq '. += {"immutable": true}' | kubectl apply -f -
secret/version created

If there’s a way to create an immutable secret from the command line without using jq please tell me.
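
The closest alternative I know of is not a one-liner but a small manifest with the immutable field set, which you can then kubectl apply:

apiVersion: v1
kind: Secret
metadata:
  name: version
immutable: true
stringData:
  version: "1"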

Now let’s create a deployment that uses it as an environment variable:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        env:
        - name: VERSION
          valueFrom:
            secretKeyRef:
              name: version
              key: version

Now let’s check the value for $VERSION:

% kubectl exec -it nginx-68bbbd9d9f-rwtfm -- env | grep ^VERSION
VERSION=1

The time has come for us to update the secret with a new value. Since it is immutable, we delete and recreate it:

% kubectl delete secrets version
secret "version" deleted
% kubectl create secret generic version -o json --dry-run=client --from-literal=version=2  | jq '. += {"immutable": true}' | kubectl apply -f -
secret/version created

We restart the pod and check the value of $VERSION again:

% kubectl delete pod nginx-68bbbd9d9f-rwtfm
pod "nginx-68bbbd9d9f-rwtfm" deleted
% kubectl exec -it nginx-68bbbd9d9f-x4c74 -- env | grep ^VERSION
VERSION=1

What happened here? Why does the pod still have the old value? It seems that there is some caching at play here and the new immutable secret is not passed to the pod. But let’s try something different now:

% kubectl delete secrets version
secret "version" deleted
% kubectl create secret generic version -o json --dry-run=client --from-literal=version=3  | jq '. += {"immutable": true}' | kubectl apply -f -
secret/version created
% kubectl scale deployment nginx --replicas 0
deployment.apps/nginx scaled
% kubectl scale deployment nginx --replicas 1
deployment.apps/nginx scaled
% kubectl exec -it nginx-68bbbd9d9f-zkj8h -- env | grep ^VERSION
VERSION=3

From what I understand, if you scale your deployment down and back up again, this gives the kubelet enough time to release the old cached value and pass the new, proper one.

Now, all of the above was tested with Docker Desktop Kubernetes. I did similar tests with a multinode microk8s cluster: when restarting the pods, the environment variable was updated properly, but if instead the secret was consumed via a volume mount, it was not, and you needed to scale down to zero first.
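
A volume-mount variant of the same test could look like this (a sketch; the names and the mount path are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-vol
  name: nginx-vol
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-vol
  template:
    metadata:
      labels:
        app: nginx-vol
    spec:
      volumes:
      - name: version
        secret:
          secretName: version
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: version
          mountPath: /etc/version-secret
          readOnly: true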

Can a pod belong to 2 workloads?

In the Kubernetes Slack an interesting question was posed:

Hi, can a pod belong to 2 workloads? For example, can a pod belong both to a workload and to the control plane workload?

My initial reaction was that, while a Pod can belong to two (or three, or more) Services, it cannot belong to two workloads (Deployments, for example). I put my theory to the test by initially creating a pod with some labels:

apiVersion: v1
kind: Pod
metadata:
  name: caddy
  labels:
    apache: ok
    nginx: ok
spec:
  containers:
  - name: caddy
    image: caddy
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443

Sure enough, the pod was created:

% kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
caddy   1/1     Running   0          2s

Next, I created a ReplicaSet whose pods have a label that the above (caddy) pod also has:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      nginx: ok
  template:
    metadata:
      labels:
        app: nginx
        nginx: ok
    spec:
      containers:
      - name: nginx
        image: bitnami/nginx
        ports:
        - containerPort: 8080

Since the original pod and the ReplicaSet share a common label (nginx: ok), the pod is assimilated into the ReplicaSet, which launches only one additional pod:

% kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
caddy         1/1     Running   0          2m52s
nginx-lmmbk   1/1     Running   0          3s

We can now ask Kubernetes to create a similar ReplicaSet that launches apache instead of nginx and has the apache: ok label set:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 2
  selector:
    matchLabels:
      apache: ok
  template:
    metadata:
      labels:
        app: apache
        apache: ok
    spec:
      containers:
      - name: apache
        image: bitnami/apache
        ports:
        - containerPort: 8080

If a pod can be shared among workloads, then it should start a single apache pod. Does it?

% kubectl get pod
NAME           READY   STATUS    RESTARTS   AGE
apache-8fwdz   1/1     Running   0          4s
apache-9xwhd   1/1     Running   0          4s
caddy          1/1     Running   0          5m17s
nginx-lmmbk    1/1     Running   0          2m28s

As you can see, it starts two apache pods, and the pods carrying the apache: ok label are three:

 % kubectl get pod -l apache=ok
NAME           READY   STATUS    RESTARTS   AGE
apache-8fwdz   1/1     Running   0          6m20s
apache-9xwhd   1/1     Running   0          6m20s
caddy          1/1     Running   0          11m

% kubectl get rs
NAME     DESIRED   CURRENT   READY   AGE
apache   2         2         2       6m21s
nginx    2         2         2       8m45s

So there you have it, a Pod cannot be shared among workloads.
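
The reason is the controller ownerReference: when the nginx ReplicaSet adopted the caddy pod, it set itself as the pod’s controller owner, and a pod can have at most one controller owner, so the apache ReplicaSet ignores it. You can see the owner with:

% kubectl get pod caddy -o yaml | grep -A 7 ownerReferences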

How I set up Rancher these days

Rancher is a very handy interface when you need to manage Kubernetes clusters. In the past I was deploying Rancher on a single VM, running the container as per the getting-started instructions. You could go a long way using a single-machine setup.

However, if you observe how recent versions of the container start, a k3s cluster is launched within the container, which kind of makes it overkill to work this way. Also, the Rancher documentation includes multiple directions on how to run it in a Kubernetes cluster (k3s being their obvious preference) and how to also do certificate management (which is something you need, since otherwise the rancher agent deployed in your clusters won’t be able to communicate via web sockets). Well, I am not a big fan of how the Rancher documentation describes the actions needed to launch it in a Kubernetes cluster, and more importantly I am annoyed at how SSL certificates are managed. You can go a long way using microk8s, and this is what I did in this case.

Assuming you have set up a microk8s cluster (a single node, or three nodes for HA), we are almost ready to start. Rancher deploys its stuff in the cattle-system namespace, so we create this first with kubectl create ns cattle-system. We will use helm to install Rancher and we want to provide some basic values to the installation, so we create a file named values.yaml with the following contents:

auditLog:
  level: 1
bootstrapPassword: A_PASSWORD_HERE
hostname: rancher.example.net
replicas: 1
ingress:
  enabled: false

With the above we instruct helm not to deal with the Ingress, since we will provide it later (we want to manage certificates either on our own or via cert-manager at the Ingress object). Thus we run helm -n cattle-system install rancher rancher-latest/rancher -f values.yaml --version 2.8.3 to install it.
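
This assumes the rancher-latest chart repository is already known to helm; if not, it can be added first (repository URL as per the Rancher docs):

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update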

After some time passes (verified by something like kubectl -n cattle-system get pod), Rancher is installed and we now need to make it accessible from the “outside” world. Microk8s offers nginx-ingress as an add-on (microk8s enable ingress sets this up), or we can use a different ingress controller, for example haproxy-ingress, again via helm: helm -n ingress-haproxy install haproxy-ingress haproxy-ingress/haproxy-ingress -f ./values-haproxy.yaml --version 0.14.6. The contents of values-haproxy.yaml are:

controller:
  hostNetwork: true
  ingressClassResource:
    enabled: true

And now that we have the ingress controller installed, we can also set up the Ingress object for Rancher:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rancher-haproxy
  namespace: cattle-system
  annotations:
    haproxy-ingress.github.io/ssl-redirect: "true"
spec:
  ingressClassName: "haproxy"
  tls:
  - hosts:
    - rancher.example.net
    secretName: example-net
  rules:
  - host: rancher.example.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rancher
            port:
              number: 80

And you are done. You can of course set up a cert-manager Issuer that will help you automate certificate management and issuing.

Happy ranchering.

PS: Assuming that a new version of Rancher is out, you can upgrade with something like helm -n cattle-system upgrade rancher rancher-latest/rancher -f values-rancher.yaml --version 2.8.4

Network Request Failed when configuring OpenLDAP authentication in Rancher

It may be the case that you have installed Rancher in a cluster via helm with something like:

helm install rancher rancher-latest/rancher \
--namespace=cattle-system \
--set hostname=rancher.storfund.net \
--set replicas=1 \
--set bootstrapPassword=PASSWORD_HERE \
--set auditLog.level=1 \
--version 2.8.3

If you try to configure the OpenLDAP authentication (and maybe other directories), you will be greeted with the not at all helpful message Network Request Failed, and in the logs you will see that your OpenLDAP server was never contacted. What gives?

Well, the above helm command installs Rancher with a self-signed certificate, and you have to open the developer tools in the browser to see that a wss:// call failed because of that certificate. The solution of course is to use a certificate that your browser considers valid. First we ask helm to give us the configuration values with helm -n cattle-system get values rancher -o yaml > values.yaml and then we augment values.yaml with:

ingress:
  tls:
    source: secret
privateCA: true

It does not have to be a “really” private CA; I did the above with a certificate issued by Let’s Encrypt. The release can now be upgraded with helm -n cattle-system upgrade rancher rancher-latest/rancher -f values.yaml --version 2.8.3. And now we are ready to add our own working certificate with:

kubectl -n cattle-system delete secret tls-rancher-ingress
kubectl -n cattle-system create secret tls tls-rancher-ingress --key ./key.pem --cert ./cert.pem

Of course, if you are using cert-manager there are other ways to do this.
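
For instance, a Certificate resource that writes into that same secret could be a sketch like this (the ClusterIssuer name is illustrative):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: tls-rancher-ingress
  namespace: cattle-system
spec:
  secretName: tls-rancher-ingress
  dnsNames:
  - rancher.storfund.net
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer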

How many env: blocks per container?

While creating a Deployment earlier today, I faced a weird situation where a specific environment variable that was held as a Secret was not being set. I tried deleting and recreating the secret, with no success. Mind you, this was a long YAML with volumes, volumeMounts, ConfigMaps as environment variables, lots of lines. In the end the issue was pretty simple, and I missed it because kubectl silently accepted the submitted YAML: I had two(!) env: blocks defined for the same container and somehow I missed that. It turns out that when you do so, only the last one gets accepted, and whatever is defined in the previous one is not taken into account. To show this with an example:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: box
  name: box
spec:
  replicas: 1
  selector:
    matchLabels:
      app: box
  template:
    metadata:
      labels:
        app: box
    spec:
      containers:
      - image: busybox
        name: busybox
        env:
        - name: FIRST_ENV
          value: "True"
        - name: ANOTHER_FIRST
          value: "True"
        command:
        - sleep
        - infinity
        env:
        - name: SECOND_ENV
          value: "True"

In the above example, when the Pod starts the container, only SECOND_ENV is set. FIRST_ENV and ANOTHER_FIRST are not.
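
You can verify this against the running pod; with the manifest above, only SECOND_ENV should be printed:

% kubectl exec deploy/box -- env | grep -E 'FIRST_ENV|ANOTHER_FIRST|SECOND_ENV'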

I do not know whether this is a well-known YAML fact or not, but it cost me some head-scratching, and I hope it won’t cost you any.