Ubuntu VM on Parallels M1 does not boot after upgrade

So I upgraded a VM running on Parallels on my M1, only to be greeted by a machine that refused to boot.

This issue does not seem to be unique to Parallels; it also happens with UTM. The quick fix was to boot the machine with the previous kernel and then configure grub with GRUB_DEFAULT=saved and GRUB_SAVEDEFAULT=true for as long as the issue persists.
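
For reference, this is roughly what that looks like in /etc/default/grub; remember to regenerate the config with update-grub afterwards:

# /etc/default/grub (the relevant lines)
GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true

and then a run of update-grub.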

Granted, this does not work when installing fresh from an Ubuntu 20.04.4 ISO image, but you can start from an older image until the issue is fixed.

Update: the following kernel seems to work:

Linux upwork-box 5.4.0-105-generic #119-Ubuntu SMP Mon Mar 7 18:50:13 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux

Using systemd-nspawn to run an older Ubuntu version and Docker images

This post was partly triggered by something I saw at work and partly by a lightning talk I attended on Monday. A friend remarked that even though we love and want to run the latest and greatest supported versions, we cannot always upgrade in time. And as such, sometimes we end up with unsupported OS versions for a significant amount of time. Not unlike the law of welded systems.

But it got me thinking. Assuming, for example, that you have a number of Xenial systems that you cannot upgrade due to library dependencies, is there a middle ground that might be acceptable for a while, until you can push forward? Using a Xenial docker container and treating it as a lightweight VM could be a solution. But what if you need to run docker containers too? Then you need something more elaborate.

I fired up a VM running Ubuntu Focal and decided to figure out how to run a systemd-nspawn Xenial container.

Create the machine:

# debootstrap --arch=amd64 xenial /var/lib/machines/xenial1
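
In case the tooling is missing from the Focal host, it comes from the debootstrap and systemd-container packages (the latter provides systemd-nspawn and machinectl):

# apt-get install debootstrap systemd-container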

Start the machine:

# systemd-nspawn -D /var/lib/machines/xenial1
root@xenial1 # rm /etc/securetty
root@xenial1 # passwd root
root@xenial1 # apt-get update
root@xenial1 # apt-get install dbus resolvconf
root@xenial1 # systemctl enable systemd-resolved
root@xenial1 # cat > /etc/resolvconf/resolv.conf.d/base
nameserver 127.0.0.53
options edns0 trust-ad
search home
ctrl-D
root@xenial1 #

Yes, I know that it is best to fix securetty instead of removing it, but this is a PoC on my VM. You can now proceed to configure systemd to run the container:

# /etc/systemd/system/xenial1.service

[Unit]
Description=Xenial1 Container

[Service]
LimitNOFILE=100000
ExecStart=/usr/bin/systemd-nspawn --machine=xenial1 --directory=/var/lib/machines/xenial1/ --bind /var/run/docker.sock:/var/run/docker.sock --bind /mnt2:/mnt2 -b 
Restart=always

[Install]
WantedBy=machines.target
Also=dbus.service

In your host system you can now:

root@focal # systemctl daemon-reload
root@focal # systemctl start xenial1

You can of course log into the machine with machinectl login xenial1. Notice above that the nspawn container bind-mounts the docker socket, since we assume that docker is installed on the Focal host and that you want to "run" docker containers from within the Xenial machine.
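
You can also check on the container from the host; a quick sketch:

root@focal # machinectl list
root@focal # machinectl status xenial1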

We only need to install the docker client in the Xenial container. Note that the signed-by keyring referenced in the sources list below must exist before apt-get update runs, so fetch Docker’s signing key first:

root@xenial1 # apt-get install curl gnupg
root@xenial1 # curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
root@xenial1 # cat > /etc/apt/sources.list.d/docker.list
deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu xenial stable
ctrl-D
root@xenial1 # apt-get update
root@xenial1 # apt-get install docker-ce-cli=5:18.09.7~3-0~ubuntu-xenial

Now you’re all set. The docker client inside Xenial talks to the daemon on the Focal host through the bind-mounted socket, so it starts whatever container you want on Focal and does whatever you like.

Suppose that you want a non-root user like ubuntu in the Xenial machine to be able to run docker commands; what do you do? You replicate the /etc/group line for the docker group from Focal in Xenial (same GID) and then in Xenial you simply usermod -aG docker ubuntu, as sketched below.
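
As a sketch, assuming getent reports GID 998 for docker on the Focal host (yours may differ):

root@focal # getent group docker
docker:x:998:
root@xenial1 # groupadd -g 998 docker
root@xenial1 # usermod -aG docker ubuntu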

ubuntu@xenial1 > docker run -d -p 8080:8080 bitnami/nginx
:
ubuntu@focal > curl http://127.0.0.1:8080/
:
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
:

Enjoy.

Vagrant was unable to mount VirtualBox shared folders

After upgrading my ubuntu/focal64 box I got greeted by this wonderful message:

Vagrant was unable to mount VirtualBox shared folders. This is usually
because the filesystem "vboxsf" is not available. This filesystem is
made available via the VirtualBox Guest Additions and kernel module.
Please verify that these guest additions are properly installed in the
guest. This is not a bug in Vagrant and is usually caused by a faulty
Vagrant box. For context, the command attempted was:

mount -t vboxsf -o uid=1000,gid=1000 vagrant /vagrant

The error output from the command was:

: Invalid argument

The solution was rather simple: a sudo apt-get install -y virtualbox-guest-dkms inside the VirtualBox guest, followed by a vagrant reload on the host.
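
From the host, the whole dance is roughly this (assuming the box is already up):

$ vagrant ssh -c "sudo apt-get install -y virtualbox-guest-dkms"
$ vagrant reload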

In case it matters, the VirtualBox host machine was running Ubuntu 21.04.

Two tricks that make life easier when dual booting between Windows 10 and Ubuntu

So you have your laptop with Windows 10 and you also need to run Ubuntu for some reason. Even if Ubuntu is the main OS, you may want to keep Windows around for the occasional system upgrade (Dell Update comes to mind, for example) and for software that runs exclusively on one of the two platforms (UCINET is such a program for me).

You are then faced with two problems:

  • Choosing the default boot operating system
  • The clock getting desynchronized when rebooting between the two operating systems

StackExchange comes to the rescue. For the first problem you have to modify grub. I have chosen to make it so that, upon reboot, it boots whichever operating system booted last, unless I choose otherwise via the menu. I use the saved method from this answer.
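
For reference, with the saved method in place you can inspect or override the remembered entry by hand (the entry numbers below are illustrative):

$ grep saved_entry /boot/grub/grubenv
saved_entry=0
$ sudo grub-set-default 2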

For the second issue there are a number of answers that usually involve tweaking systemd or the Windows registry, but the easiest thing you can do is to ensure that the Windows time service is started automatically with a delay.
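
One way to do that from an elevated command prompt (a sketch; sc and w32tm are the built-in Windows tools, and the resync is optional):

sc config w32time start= delayed-auto
net start w32time
w32tm /resync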


Resizing a Vagrant box disk

[ I am about to do what others have done before me and blog about it one more time ]

While I do enjoy working with Windows 10, I am still not using WSL (waiting for WSL2) and work with either chocolatey or a vagrant Ubuntu box. It so happens that after pulling a few docker images the default 10G disk is full and you cannot work anymore. So, let’s resize the disk:

The disk on my ubuntu/bionic64 box is a VMDK one, so before resizing we need to convert it to VDI first, which is easier for VirtualBox to handle:

VBoxManage clonehd .\ubuntu-bionic-18.04-cloudimg.vmdk .\ubuntu-bionic-18.04-cloudimg.vdi --format vdi

Now we can resize it, to say 20G:

VBoxManage modifymedium disk .\ubuntu-bionic-18.04-cloudimg.vdi --resize 20000
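
You can double-check the new capacity before moving on:

VBoxManage showmediuminfo disk .\ubuntu-bionic-18.04-cloudimg.vdi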

We’re almost there. We need to tell vagrant to boot from the VDI disk now. To do so, open VirtualBox and visit the storage settings of the vagrant VM. Remove the VMDK disk(s) there and add the VDI on the SCSI0 port. That’s it. Close VirtualBox and vagrant up to boot from the VDI.

Now you have a 20G disk, but still a 10G partition. parted to the rescue:

$ sudo parted /dev/sda
(parted) resizepart 

It will ask you for the partition number; you answer 1 (which is /dev/sda1). It will ask you for the end of the partition; you answer -1 (or 100%, both meaning up to the end of the disk). quit and you’re out.
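
The whole exchange looks roughly like this (the default end value parted offers will differ on your disk):

$ sudo parted /dev/sda
(parted) resizepart
Partition number? 1
End?  [10.7GB]? -1
(parted) quit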

You have changed the partition size, but the filesystem still reports the old size. resize2fs (assuming a compatible ext filesystem) to the rescue:

$ sudo resize2fs /dev/sda1
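
And a quick sanity check afterwards (output elided):

$ df -h /
: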

Now you’re done. You may want to vagrant reload to check whether everything works fine. Once you’re sure of that you can delete the old VMDK disk.

VMWare Fusion and Ubuntu

When I got my Mac, I bought VMWare’s Fusion in order to be able to work with software that exists only in the Windows world. The really nice thing that my good friend Moses pointed out yesterday is that Fusion now supports easy installs for Ubuntu too! I had never taken notice of that, since I run most of my VMs on VirtualBox.

I am an LXDE fan, so I first tried a Lubuntu install. It went fine, but it was not an Easy Install (in Fusion’s terminology). Then I went ahead and installed normal Ubuntu and afterwards (since I cannot do any real work with Unity) installed LXDE. The Easy Install went smoothly and I did not even need to consider keyboard configuration (something I had to do with Debian-LXDE and VirtualBox). I also changed the available RAM for the VM and now I have a machine that just works.

Oh, the fun of using closed software to make working with open source easier.