I sold two books from my library today

There’s this service called metabook that facilitates selling books you no longer need or want, and buying them from their sellers. I had two in excellent condition that had been collecting dust for many years:

  • UNIX Network Programming: Interprocess communications
  • UNIX Network Programming: The sockets networking API

I had purchased them for sentimental reasons. I had read the first edition that belonged to the lab next to mine, and at some point in time I got my own copies. It is always nice to have a complete library.

The warm feeling I used to get just by looking at them was gone, though. But I could see the spark in the eyes of their new owner. He too said he got them for sentimental reasons.

Ubuntu VM on Parallels M1 does not boot after upgrade

So I upgraded an Ubuntu VM running on Parallels on my M1, only to be greeted by a machine that would no longer boot.

This issue does not seem to be unique to Parallels; it also happens with UTM. The quick fix here was to boot the machine with the previous kernel and configure grub with GRUB_DEFAULT=saved and GRUB_SAVEDEFAULT=true for as long as the issue persists.
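For reference, these are the lines I mean in /etc/default/grub; remember to run sudo update-grub afterwards, so that whichever entry you pick at the menu (i.e. the working kernel) becomes the default for the next boot:

GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true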

Granted, this does not work when trying to install from an Ubuntu 20.04.4 ISO image, but you can start from an older image until the issue is fixed.

Update: the following kernel seems to work:

Linux upwork-box 5.4.0-105-generic #119-Ubuntu SMP Mon Mar 7 18:50:13 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux

Canceling all Jenkins jobs in queue

This is a continuation of a previous post where I showed how to disable all configured jobs in a Jenkins server (when, for example, launching a copy for test purposes). It may be the case that you have placed your Jenkins controller in quiet mode to have some ease of mind while examining what goes on with your queue, or you simply want to clean up the queue and have the system start with no jobs submitted. Whatever the reason, if you need to erase all of your Jenkins queue, python-jenkins and a few lines come to your assistance:

import jenkins

# Connect to the controller; USERNAME and PASSWORD (or an API token) are
# placeholders for your own credentials.
server = jenkins.Jenkins('http://127.0.0.1:8080/',
        timeout=3600,
        username=USERNAME,
        password=PASSWORD)

# Cancel every item currently waiting in the build queue.
for item in server.get_queue_info():
    print(item['id'])
    server.cancel_queue(item['id'])
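As an aside, if you also want to toggle quiet mode from the same script, the controller exposes the /quietDown and /cancelQuietDown endpoints. A minimal sketch with requests, reusing the USERNAME/PASSWORD placeholders from above (authenticating with an API token as the password avoids having to deal with a CSRF crumb):

import requests

JENKINS_URL = 'http://127.0.0.1:8080'

# Stop the controller from scheduling new builds...
requests.post(f'{JENKINS_URL}/quietDown', auth=(USERNAME, PASSWORD))
# ...clean up the queue as above, then let it resume.
requests.post(f'{JENKINS_URL}/cancelQuietDown', auth=(USERNAME, PASSWORD))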

Mi TW Earphones 2 Basic not disconnecting when in case

It may happen that when you put your Mi TW Earphones 2 Basic back in their case, they do not disconnect when you close it. The reason seems to be that the lid gets a tiny bit loose, and as such it does not press on the earphones enough for them to register that the lid is closed and that they must disconnect.

Electrical tape to the rescue. Two pieces should be enough.

Update: I am now using BluTack instead.

I am in my fourth attempt at learning Go

I am not particularly fond of Go, but I work with Kubernetes, and Go is to Kubernetes what C is to Unix. So after a point, you have to know some Go in order to understand more of Kubernetes’s implementation, design and other quirks (and why not, implement something too).

My first try was with The Go Programming Language. Up in the mountains, no Internet, just me, the book and my laptop. It felt like reading K&R, only it didn’t. Times have changed. This is not the way. The book is OK to have by your side, but I need something else.

The second time was when I was asked to write a review of Go Systems Programming by Mihalis Tsoukalos. I read the book through and through, and submitted corrections of errors and such, along with my opinion of it, to the author and the publisher. But life happened, the immediate need for Go and Kubernetes went away, and I forgot almost all of the Go I had learned in the process.

I then tried the Exercism track for Go. I’ve tried many languages on Exercism and I consider it a valuable tool for everyone. It is just not for me. That effort faded quickly.

I am now in my fourth attempt. Mind you, I am not trying to become super proficient in Go, or even write idiomatic Go. I want to be able to understand the code I read with relative ease and to write 100 lines of Go that work. This time I’ve chosen Go by Example, and I’m following it one example per day. I had a small hiatus during the holidays, but today I came back. This looks like it may work.

That’s why I am writing this post. It is a sort of public commitment. Like Stickk without Stickk.

RUN --mount=type=ssh is not always easy

Let’s take a very barebones Jenkinsfile and use it to build a docker image that clones something from GitHub (and possibly does other stuff next):

pipeline {
  agent any

  environment {
    DOCKER_BUILDKIT=1
  }

  stages {
    stage('200ok') {
      steps {
        sshagent(["readonly-ssh-key-here"]) {
          script {
            sh 'docker build --ssh default -t adamo/200ok .'
          }
        }
      }
    }
  }
}

We are using the SSH Agent Plugin in order to allow a clone that happens in the Dockerfile:

# syntax=docker/dockerfile:experimental
FROM bitnami/git
RUN mkdir /root/.ssh && ssh-keyscan github.com >> /root/.ssh/known_hosts
RUN --mount=type=ssh git clone git@github.com:a-yiorgos/200ok.git

This builds fine. But what if you need this to be some "rootless" container?

# syntax=docker/dockerfile:experimental
FROM bitnami/git
USER bitnami
WORKDIR /home/bitnami
RUN mkdir /home/bitnami/.ssh && ssh-keyscan github.com >> /home/bitnami/.ssh/known_hosts
RUN --mount=type=ssh git clone git@github.com:a-yiorgos/200ok.git

This will fail with something like:

#14 [7/7] RUN --mount=type=ssh git clone git@github.com:a-yiorgos/200ok.git
#14       digest: sha256:fb15ac6ca5703d056c7f9bf7dd61bf7ff70b32dea87acbb011e91152b4c78ad4
#14         name: "[7/7] RUN --mount=type=ssh git clone git@github.com:a-yiorgos/200ok.git"
#14      started: 2021-12-17 12:00:22.859388318 +0000 UTC
#14 0.572 fatal: destination path '200ok' already exists and is not an empty directory.
#14    completed: 2021-12-17 12:00:23.508950696 +0000 UTC
#14     duration: 649.562378ms
#14        error: "executor failed running [/bin/sh -c git clone git@github.com:a-yiorgos/200ok.git]: exit code: 128"

rpc error: code = Unknown desc = executor failed running [/bin/sh -c git clone git@github.com:a-yiorgos/200ok.git]: exit code: 128

Why is that? Isn’t the SSH agent forwarding working? Well, kind of. Let’s add a couple of commands to the Dockerfile to see what the issue might be:

# syntax=docker/dockerfile:experimental
FROM bitnami/git
USER bitnami
WORKDIR /home/bitnami
RUN mkdir /home/bitnami/.ssh && ssh-keyscan github.com >> /home/bitnami/.ssh/known_hosts
RUN --mount=type=ssh env
RUN --mount=type=ssh ls -l ${SSH_AUTH_SOCK}
RUN --mount=type=ssh git clone git@github.com:a-yiorgos/200ok.git

Then the build output gives us:

:
#13 [6/7] RUN --mount=type=ssh ls -l ${SSH_AUTH_SOCK}
#13       digest: sha256:ce8fcd7187eb813c16d84c13f8d318d21ac90945415b647aef9c753d0112a8a7
#13         name: "[6/7] RUN --mount=type=ssh ls -l ${SSH_AUTH_SOCK}"
#13      started: 2021-12-17 12:00:22.460172872 +0000 UTC
#13 0.320 srw------- 1 root root 0 Dec 17 12:00 /run/buildkit/ssh_agent.0
#13    completed: 2021-12-17 12:00:22.856049431 +0000 UTC
#13     duration: 395.876559ms
:

and subsequently fails to clone. This happens because the socket file /run/buildkit/ssh_agent.0 for the SSH agent forwarding is not accessible by user bitnami and thus no ssh identity is available to it.

I do not know whether it is possible to make use of RUN --mount=type=ssh in combination with USER where the user is not root. Please leave a comment if you know whether/how this can be accomplished.
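For what it is worth, the Dockerfile frontend documentation lists uid, gid and mode options for --mount=type=ssh, so something along the following lines might do the trick, assuming the bitnami user runs with uid/gid 1001 (I have not tested this):

# syntax=docker/dockerfile:experimental
FROM bitnami/git
USER bitnami
WORKDIR /home/bitnami
RUN mkdir /home/bitnami/.ssh && ssh-keyscan github.com >> /home/bitnami/.ssh/known_hosts
# Hand the forwarded agent socket to the bitnami user instead of root.
RUN --mount=type=ssh,uid=1001,gid=1001 git clone git@github.com:a-yiorgos/200ok.git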

On drawing lines and keeping notes

This post was sparked by a discussion I had with friends around “What do you use to keep notes?”.

Well, the tl;dr answer to this for me is Evernote. I got an Evernote subscription when I bought a LiveScribe. The LiveScribe did not stick, but Evernote sure did. Anything that I may use in the future (99% chance I won’t, but I keep it in a bucket just in case) goes there. That is mostly papers and blog posts that I find interesting at the time I happen across them.

But a longer answer includes more than Evernote. I keep handwritten notes in a variety of media:
– I use a Wacom One because when I do on-line teaching, I need to draw figures and share them. Originally I used Jamboard and Whiteboard. Now I am simply using jspaint.app.
– I have a Mi Writing LCD tablet. I use this either for very temporary notes and doodles, or when I am on the sofa late at night watching TV and need to dump an idea that just sparked. Next morning it will either find its way to Evernote or get deleted because it was not as cool as I initially thought it was.
– I was gifted a Rocketbook. I use it for work: whatever notes I keep during the day to keep track of my work, like that ticket I was working on yesterday that still needs something, or other things I must not forget while working on something else. If anything needs permanence, then the Rocketbook app makes sure it gets saved.
– Since I follow Conway’s recipe, I have more than one interesting project open. Some of them are personal. And I need to keep notes and write them on paper. For that I use a Moleskine Smart Writing System (again a gift). Please do make fun of me for using Moleskines.
– Other Moleskines, because I like writing with a fountain pen.
– When I don’t have any of the above handy, I fold a PocketMod just to dump whatever is in my head on paper.

Do you need all that? Does this sound overwhelming and unnecessary? Most likely yes. If you feel that is the case, you only need a few things:
– A book about making better sketches.
– Any pen and paper you can write on.
– Your phone’s camera and a scanning application to capture your thoughts.

Do I have that many great ideas? Most likely not. But I like to write them down in order to evaluate them.

The case of an etcd restore that was not happening

When you provision a Kubernetes cluster with kubeadm, etcd runs as a static pod and its manifest, etcd.yaml, is located in the /etc/kubernetes/manifests/ directory.

Assuming a non-production installation, I happened across a case where a backup of etcd was taken, a deployment was deleted, and then the backup was restored. Naturally, the expected result after editing etcd.yaml so that data-dir pointed to the restored database was for the previously deleted deployment to reappear. It did not! Six restores in a row did not bring it back. Let’s see the steps taken in a test cluster created to replicate what happened:

First, two deployments were created:

$ kubectl create deployment nginx --image nginx
deployment.apps/nginx created

$ kubectl create deployment httpd --image httpd
deployment.apps/httpd created

$ kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
httpd-757fb56c8d-vhftq   1/1     Running   0          4s
nginx-6799fc88d8-xklhw   1/1     Running   0          11s

Next, a snapshot of etcd was requested:

$ kubectl -n kube-system exec -it etcd-ip-10-168-1-35 -- sh -c "ETCDCTL_API=3 \
ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt \
ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key etcdctl --endpoints=https://127.0.0.1:2379 \
snapshot save /var/lib/etcd/snapshot.db "
:
:
{"level":"info","ts":1637177906.1665,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/var/lib/etcd/snapshot.db"}
Snapshot saved at /var/lib/etcd/snapshot.db

Oh my god, we deleted an important deployment!

$ kubectl delete deployment nginx 
deployment.apps "nginx" deleted

$ kubectl get pod
NAME                     READY   STATUS        RESTARTS   AGE
httpd-757fb56c8d-vhftq   1/1     Running       0          53s
nginx-6799fc88d8-xklhw   0/1     Terminating   0          60s

$ kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
httpd-757fb56c8d-vhftq   1/1     Running   0          114s

Quick! Bring it back. First let’s restore the snapshot we have, shall we?

$ kubectl -n kube-system exec -it etcd-ip-10-168-1-35 -- sh -c "ETCDCTL_API=3 \
ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt \
ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key etcdctl --endpoints=https://127.0.0.1:2379 \
snapshot restore --data-dir=/var/lib/etcd/restore /var/lib/etcd/snapshot.db "
:
:
{"level":"info","ts":1637178021.3886964,"caller":"snapshot/v3_snapshot.go:309","msg":"restored snapshot","path":"/var/lib/etcd/snapshot.db","wal-dir":"/var/lib/etcd/restore/member/wal","data-dir":"/var/lib/etcd/restore","snap-dir":"/var/lib/etcd/restore/member/snap"}

And now just edit /etc/kubernetes/manifests/etcd.yaml so that it points to the restored directory:

- --data-dir=/var/lib/etcd/restore

And after kubelet does its thing for a minute or two, it should work, right? No:

$ kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
httpd-757fb56c8d-vhftq   1/1     Running   0          11m

This was the situation I was pointed to and asked to offer an opinion on.

Could there be an issue with etcd?

journalctl -u kubelet | grep etcd reveals nothing.

kubectl -n kube-system logs etcd-ip-10-168-1-35 does not reveal anything:

:
:
2021-11-17 19:50:24.208303 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-11-17 19:50:34.208063 I | etcdserver/api/etcdhttp: /health OK (status code 200)

But look at this:

$ kubectl -n kube-system logs etcd-ip-10-168-1-35 | grep restore
2021-11-17 19:48:34.261932 W | etcdmain: found invalid file/dir restore under data dir /var/lib/etcd (Ignore this if you are upgrading etcd)
2021-11-17 19:48:34.293681 I | mvcc: restore compact to 1121

So there must be something there that directs etcd to read from /var/lib/etcd and not from /var/lib/etcd/restore. What could it be?

# ls /etc/kubernetes/manifests/
etcd.yaml       httpd.yaml           kube-controller-manager.yaml
etcd.yaml.orig  kube-apiserver.yaml  kube-scheduler.yaml

The person who asked for my opinion had thoughtfully wanted to keep a backup of the etcd.yaml file. Only, keeping it in the same directory messed up the setup. Look what happens next:

$ sudo rm /etc/kubernetes/manifests/etcd.yaml.orig 

$ kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
httpd-757fb56c8d-vhftq   1/1     Running   0          16m
nginx-6799fc88d8-xklhw   1/1     Running   0          16m


Note that the nginx pod returned with the exact same name as before.

So the takeaway from this adventure is that kubelet reads all files in /etc/kubernetes/manifests, not only the *.yaml ones. Do not keep older versions of manifests in there, for the results will be unexpected.
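Next time you want to keep a copy of a manifest around, move it outside the static pod path altogether, for example (the destination is just a suggestion):

$ sudo mv /etc/kubernetes/manifests/etcd.yaml.orig /etc/kubernetes/etcd.yaml.orig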

What are you using your RPi for?

This is what a friend whom I had not seen for some time asked me yesterday. Well, these days, not much.

In the past I’ve used it for running OpenELEC, but these days a FireTV and whatever Android box the ISP gives you do the job fine.

I’m currently using it for two things: one involving Groovy and one involving J.

Next project, with no actual ETA because I’m more interested in other stuff right now: run these in k3s.

I’m sure other people have more exciting stuff to do with it, but what little time I have for spare computing right now, I’m using it on Groovy and J.

Using systemd-nspawn to run an older Ubuntu version and Docker images

Work for this post was partly triggered by something I saw at work and partly by a lightning talk I saw on Monday. There was a remark by a friend that even though we love and want to run the latest and greatest supported versions, we cannot always upgrade in time. And as such, sometimes we end up with unsupported OS versions for a significant time. Not unlike the law of welded systems.

But it got me thinking. Assuming, for example, that you have a number of Xenial systems that you cannot upgrade due to library dependencies, is there a middle ground that might be acceptable for a while until you push forward? Using a Xenial docker container and treating it somehow as a lightweight VM could be a solution. But what if you need to run docker containers too? Maybe you need something more elaborate.

I fired up a VM running Ubuntu Focal and decided to figure out how to run a systemd-nspawn Xenial container.

Create the machine:

# debootstrap --arch=amd64 xenial /var/lib/machines/xenial1

Start the machine:

# systemd-nspawn -D /var/lib/machines/xenial1
root@xenial1 # rm /etc/securetty
root@xenial1 # passwd root
root@xenial1 # apt-get update
root@xenial1 # apt-get install dbus resolvconf
root@xenial1 # systemctl enable systemd-resolved
root@xenial1 # cat > /etc/resolvconf/resolv.conf.d/base
nameserver 127.0.0.53
options edns0 trust-ad
search home
ctrl-D
root@xenial1 #

Yes, I know that it is best to fix securetty instead of removing it, but this is a PoC on my VM. You can now proceed to configure systemd to run the container:

# /etc/systemd/system/xenial1.service

[Unit]
Description=Xenial1 Container

[Service]
LimitNOFILE=100000
ExecStart=/usr/bin/systemd-nspawn --machine=xenial1 --directory=/var/lib/machines/xenial1/ --bind /var/run/docker.sock:/var/run/docker.sock --bind /mnt2:/mnt2 -b 
Restart=always

[Install]
Also=dbus.service

In your host system you can now:

root@focal # systemctl daemon-reload
root@focal # systemctl start xenial1

You can of course log into the machine with machinectl login xenial1. Notice above that the nspawn container mounts the docker socket, since we assume that you have docker installed in the Focal host and that you want to "run" docker containers from within the Xenial machine.

We only need to install the docker client in the Xenial container and as such:

root@xenial1 # cat > /etc/apt/sources.list.d/docker.list
deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu xenial stable
ctrl-D
root@xenial1 # apt-get update
root@xenial1 # apt-get install docker-ce-cli=5:18.09.7~3-0~ubuntu-xenial
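One thing to watch out for: the keyring referenced in docker.list above is not present in a freshly debootstrapped Xenial, so apt-get update will complain about a missing signing key until you put it in place. A minimal sketch, assuming curl and gnupg are available in the container:

root@xenial1 # curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg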

Now you’re all set. The docker client inside Xenial starts whatever container you want on the Focal host and does whatever you like.

Suppose that you want a non-root user like ubuntu in the Xenial machine to be able to run docker commands; what do you do? You copy the /etc/group line for the docker group from Focal into Xenial, and in Xenial you simply usermod -aG docker ubuntu.
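Concretely, something along these lines; the gid shown is just an example, what matters is that the gid in the container matches the group that owns /var/run/docker.sock on the Focal host:

root@focal # stat -c '%g %G' /var/run/docker.sock
998 docker
root@xenial1 # echo 'docker:x:998:' >> /etc/group
root@xenial1 # usermod -aG docker ubuntu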

ubuntu@xenial1 > docker run -d -p 8080:8080 bitnami/nginx
:
ubuntu@focal > curl http://127.0.0.1:8080/
:
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
:

Enjoy.