This year I started my own company: 0to1. To get up and running quickly I decided to stick with technology I already knew, and the hardware I have used for the past 8 years without a single breakdown has been Apple MacBook Pros. The new MacBook Pro with the M1 Pro chip had rave reviews: great performance and long battery life.
Knowing that the processor architecture changed from Intel (x86) to ARM (M1), I figured that after a year things should certainly have been ironed out… right?
After checking https://isapplesiliconready.com/for/m1 I found out that some tools, like VirtualBox, don't work at all, although for most there is an alternative that works without issues.
After installing Rancher Desktop on the M1 I ran into an issue where /usr/local/bin was not created and its ownership was not set to my user. This was easy to resolve with the solution from this issue comment:
https://github.com/rancher-sandbox/rancher-desktop/issues/1155#issuecomment-1004831158
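In case you run into the same thing, the fix from that issue comment boils down to creating the directory yourself and making your user its owner; a minimal sketch, assuming the standard macOS admin group:
# create the directory and hand ownership to the current user
sudo mkdir -p /usr/local/bin
sudo chown $(whoami):admin /usr/local/bin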
Rancher Desktop was working fine, but I needed an etcd node integrated with Kubernetes for a little programming project I have going on. Looking back I should have just used Rancher Desktop to run an etcd cluster locally, but I remembered that Minikube comes with etcd installed automatically, and I wanted to see how easily Minikube could be set up on the M1. So I decided to try it out.
Minikube it is!
brew install minikube
The brew install was easy, but starting minikube gave an error that VirtualBox was not found, since that is the default driver minikube tries to use. As I mentioned above, VirtualBox isn't supported on the M1 and likely never will be, so I needed another driver.
😄 minikube v1.24.0 on Darwin 12.1 (arm64)
✨ Automatically selected the virtualbox driver
❌ Exiting due to DRV_UNSUPPORTED_OS: The driver 'virtualbox' is not supported on darwin/arm64
A list of all supported drivers for macOS can be found here:
https://minikube.sigs.k8s.io/docs/drivers/
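If you already know which driver you want, you can tell minikube explicitly instead of relying on auto-detection; the driver name below is only an example:
# pick a driver for a single start
minikube start --driver=docker
# or persist it as the default for future starts
minikube config set driver docker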
Docker Desktop recently changed its licensing to a paid plan for certain users, which gave me a new challenge:
Can I run Docker without Docker Desktop on the new M1?
All the solutions online gave the Intel answer: install minikube. This quickly became a chicken-and-egg situation. Using Docker as the driver without Docker Desktop is not an option at the moment; the only solution mentioned time and time again seems to be Docker Desktop itself on the M1 Pro.
Surely there are other drivers that I could use?
Other drivers
Since Rancher Desktop uses QEMU on the M1 Pro, surely I could use that too… right? And since KVM uses QEMU, I decided to give the kvm2 driver a try:
Rogiers-MBP:0to1 rogierdikkes$ minikube start --driver=kvm2
😄 minikube v1.24.0 on Darwin 12.1 (arm64)
❌ Exiting due to DRV_UNSUPPORTED_OS: The driver 'kvm2' is not supported on darwin/arm64
No support in Minikube, so I tried hyperkit:
brew install hyperkit
Error: hyperkit: no bottle available!
You can try to install from source with:
brew install --build-from-source hyperkit
Slowly the options started running out. Building from source, while the same package installs effortlessly on Intel MacBooks? I didn't want to go that route. Time to give good ol' Red Hat a try.
Podman to the rescue?
The Podman installation went very smoothly; unfortunately, I didn't make it across the finish line, because I ran into bugs on the M1. I decided to write down what I did anyway, in case the bug I encountered gets fixed.
brew install podman
After installing Podman you have to initialize its machine. Minikube requires 2 CPUs, so give the Podman VM 2 CPUs at init:
podman machine init --cpus 2
Downloading VM image: fedora-coreos-35.20220103.2.0-qemu.aarch64.qcow2.xz: done
Extracting compressed file
Then start the machine:
podman machine start
INFO[0000] waiting for clients…
INFO[0000] listening tcp://127.0.0.1:7777
INFO[0000] new connection from to /var/folders/s1/cythsgh50hd9tl7_38qknb2r0000gn/T/podman/qemu_podman-machine-default.sock
Waiting for VM …
Machine "podman-machine-default" started successfully
As you can see, Podman downloads a qcow2 image to use with QEMU, a similar approach to the one Rancher Desktop takes. After this is completed you can start minikube with the podman driver:
minikube start --driver=podman
😄 minikube v1.24.0 on Darwin 12.1 (arm64)
✨ Using the podman (experimental) driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image …
E0107 13:11:31.539966 90940 cache.go:201] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥 Creating podman container (CPUs=2, Memory=1953MB) …
ERRO[0123] accept tcp [::]:38413: use of closed network connection
ERRO[0123] accept tcp [::]:41795: use of closed network connection
ERRO[0123] accept tcp [::]:36729: use of closed network connection
ERRO[0123] accept tcp [::]:44515: use of closed network connection
ERRO[0123] accept tcp [::]:35925: use of closed network connection
✋ Stopping node "minikube" …
🔥 Deleting "minikube" in podman …
🤦 StartHost failed, but will try again: creating host: create: creating: create kic node: create container: podman run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var:exec --memory-swap=1953mb --memory=1953mb --cpus=2 -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.28: exit status 126
stdout:
stderr:
Error: the requested cgroup controller `cpu` is not available: OCI runtime error
🔥 Creating podman container (CPUs=2, Memory=1953MB) …
😿 Failed to start podman container. Running "minikube delete" may fix it: creating host: create: creating: setting up container node: creating volume for minikube container: podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125
stdout:
stderr:
Error: volume with name minikube already exists: volume already exists
❌ Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: creating: setting up container node: creating volume for minikube container: podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125
stdout:
stderr:
Error: volume with name minikube already exists: volume already exists
What seems to be happening is that the cgroup controller `cpu` is unavailable during the first deployment. There is currently an open issue for this, so Podman was not the way forward:
https://github.com/kubernetes/minikube/issues/13261
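If you do want to retry (for example once the issue is fixed), you first have to clean up what the failed start left behind, otherwise you keep hitting the "volume already exists" error. Roughly, and adjust to whatever minikube actually created on your machine:
# remove the half-created cluster and the leftover podman volume
minikube delete
podman volume rm minikube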
Next: vftool
Since Podman did not work and the Minikube setup on the M1 was broken, I hoped the Ubuntu binaries would fare better. Again, I should have just used Rancher Desktop, but by now I was curious whether there was any way at all to get Minikube working on the M1. The idea: get Ubuntu running on the M1, and since VirtualBox was no longer an option, use vftool to run a virtual machine and run minikube inside it. My hope was that the Ubuntu binaries would not have issues and Minikube would simply install.
First, let's create a folder to isolate our mess, then set up vftool to build a virtual machine.
mkdir -p ~/VM/Ubuntu
cd ~/VM/
git clone git@github.com:evansm7/vftool.git
cd ./vftool
make
file ./build/vftool
cp ./build/vftool /usr/local/bin/
The copy to /usr/local/bin above is optional; it just saves a couple of seconds finding the binary whenever you want to use vftool after the build. The file command should give the following output:
Mach-O 64-bit executable arm64
After this is complete you can move on to setting up the VM that vftool will be using. First, download the ISO for Ubuntu. We also need 7zip to unpack the ISO and get access to the kernel and initrd. We then gunzip the kernel and copy the initrd to our VM folder.
cd ~/VM/Ubuntu
curl --silent -L https://cdimage.ubuntu.com/focal/daily-live/current/focal-desktop-arm64.iso -o ~/VM/Ubuntu/focal.iso
brew install 7zip
7zz x -tiso -y ~/VM/Ubuntu/focal.iso -o"$HOME/VM/Ubuntu/focal-unpacked"
cp ~/VM/Ubuntu/focal-unpacked/casper/initrd ~/VM/Ubuntu/
cp ~/VM/Ubuntu/focal-unpacked/casper/vmlinuz ~/VM/Ubuntu/vmlinuz.gz
gunzip ~/VM/Ubuntu/vmlinuz.gz
After this is done you have the two files you need; the ISO and the focal-unpacked folder are no longer necessary. The next step is to create a block device to use for Ubuntu. I tried using the downloaded ISO directly, but vftool kept giving the error that it was an invalid storage device, so we create an empty block device to use as the disk for the virtual machine:
dd if=/dev/zero of=~/VM/Ubuntu/focal-server-cloudimg-arm64.img seek=60000000 obs=1024 count=0
This creates an empty raw disk of roughly 60G; the reason for the size is that minikube uses around 33G for its installation alone. We will use this volume for the docker images, minikube images, etc. Without it we would run out of disk space when starting minikube. And since the rest of this virtual machine runs entirely in memory, we also need to give it plenty of RAM so minikube can run without getting OOM killed. Now let's start the virtual machine!
vftool -k ~/VM/Ubuntu/vmlinuz -i ~/VM/Ubuntu/initrd -d ~/VM/Ubuntu/focal-server-cloudimg-arm64.img -m 8192 -p 2 -a "console=tty0 console=hvc0 root=/dev/vda1 ds=nocloud"
The following message returns:
2022-01-07 18:11:43.308 vftool[1565:1811117] vftool (v0.3 10/12/2020) starting
2022-01-07 18:11:43.309 vftool[1565:1811117] +++ kernel at /Users/rogierdikkes/VM/Ubuntu/vmlinuz, initrd at /Users/rogierdikkes/VM/Ubuntu/initrd, cmdline 'console=tty0 console=hvc0 root=/dev/vda1 ds=nocloud', 2 cpus, 4096MB memory
2022-01-07 18:11:43.310 vftool[1565:1811117] +++ fd 3 connected to /dev/ttys007
2022-01-07 18:11:43.310 vftool[1565:1811117] +++ Waiting for connection to: /dev/ttys007
2022-01-07 18:11:47.523 vftool[1565:1811117] +++ Attaching disc /Users/rogierdikkes/VM/Ubuntu/focal-server-cloudimg-arm64.img
2022-01-07 18:11:47.523 vftool[1565:1811117] +++ Configuration validated.
2022-01-07 18:11:47.523 vftool[1565:1811117] +++ canStart = 1, vm state 0
2022-01-07 18:11:47.602 vftool[1565:1811140] +++ VM started
With screen we will now connect to this virtual machine. You need to find the tty you can attach to; vftool prints it in its output, and in my case it was /dev/ttys007:
screen /dev/ttys007
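If you started vftool more than once or lost its output, listing the most recently created tty devices is a rough way to find the right one again; this is just my own heuristic, not something vftool documents:
ls -lt /dev/ttys* | head -n 3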
The screen command should open a session with Ubuntu, and you can now finish the Ubuntu install inside the virtual machine. While installing you can keep every default, except for one field: the URL. The default URLs in the installer point to images that do not exist:
http://cdimage.ubuntu.com/releases/focal/release/ubuntu-20.04-live-server-arm64.iso
http://cdimage.ubuntu.com/releases/focal/release/ubuntu-20.04-desktop-arm64.iso
Replace these two URLs with the ISO we downloaded earlier:
https://cdimage.ubuntu.com/focal/daily-live/current/focal-desktop-arm64.iso
After this, the download runs and you get a login prompt. You can log in with the username ubuntu and no password.
Now that this is done, we need to mount the raw disk on /var/lib/docker so all the images end up there:
sudo mkdir -p /var/lib/docker
sudo mkfs.ext4 /dev/vda
sudo mount /dev/vda /var/lib/docker
Once this is done we create a symlink so the minikube files end up in the same location as the docker files. It's a bit messy, but I kept running into out-of-disk-space errors and was done with it…
sudo ln -s /var/lib/docker /var/lib/minikube
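To double-check that the big disk is actually the filesystem Docker and minikube will write to, a quick sanity check (my own addition, not part of the original steps):
df -h /var/lib/docker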
After this is done, let's install the Docker engine on Ubuntu. You can use this guide: https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository
For completeness, here are the steps as a one-liner:
sudo apt update -y && \
sudo apt install -y ca-certificates curl gnupg lsb-release && \
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg && \
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null && \
sudo apt update -y && \
sudo apt install -y docker-ce docker-ce-cli containerd.io conntrack
OverlayFS cannot be mounted on top of the live CD, which gives the following error:
docker: Error response from daemon: error creating aufs mount to /var/lib/docker/aufs/mnt/8017e1251b9eab3500fcd0b677b96c9e5f9430dd5249354b120feb0019aa09e8-init: mount target=/var/lib/docker/aufs/mnt/8017e1251b9eab3500fcd0b677b96c9e5f9430dd5249354b120feb0019aa09e8-init data=br:/var/lib/docker/aufs/diff/8017e1251b9eab3500fcd0b677b96c9e5f9430dd5249354b120feb0019aa09e8-init=rw:/var/lib/docker/aufs/diff/077e60cb2a82a857e71c73b6ec7cd8c667d0f1b15fdc1f5a228eac2e8f852872=ro+wh,dio,xino=/dev/shm/aufs.xino: invalid argument.
In dmesg
you can see the following error:
[ 681.897119] overlayfs: missing 'lowerdir'
You can solve this by using a heredoc to configure the vfs storage driver in Docker's daemon.json file. This has some negative side effects, but it was the only storage driver I got working, since aufs is deprecated and overlayfs doesn't work on the live CD.
sudo bash -c "tee <<EOF > /etc/docker/daemon.json
{
\"storage-driver\": \"vfs\"
}
EOF"
Because we just configured the vfs storage driver, we can no longer run minikube with the driver type none; minikube will fail its preflight check:
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR SystemVerification]: unsupported graph driver: vfs
After adding the vfs storage driver you need to restart the docker service:
sudo systemctl restart docker
Check that hello-world runs:
sudo docker run hello-world
After this is completed correctly you can download minikube:
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-arm64
chmod +x ./minikube
sudo cp ./minikube /usr/local/bin
Adjust the following parameter:
sudo sysctl fs.protected_regular=0
Set a password, otherwise you cannot add yourself to the docker group:
passwd
After setting the password add the ubuntu user to the docker
group:
sudo usermod -aG docker $USER && newgrp docker
After this is complete you can run the minikube command; running it with sudo will make it fail:
minikube start --driver=docker
Since minikube, just like the OS itself, runs entirely in memory, this is the moment where OOM events happen. Now let's check if minikube works:
minikube kubectl -- get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcd69978-dt8cw 1/1 Running 0 12s
kube-system etcd-minikube 1/1 Running 0 32s
kube-system kube-apiserver-minikube 1/1 Running 0 28s
kube-system kube-controller-manager-minikube 1/1 Running 0 25s
kube-system kube-proxy-qc7sm 1/1 Running 0 12s
kube-system kube-scheduler-minikube 1/1 Running 0 28s
kube-system storage-provisioner 1/1 Running 0 20s
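As an extra smoke test you can deploy something small and check that it gets scheduled; the deployment name and image here are just examples:
minikube kubectl -- create deployment hello-nginx --image=nginx
minikube kubectl -- get pods -l app=hello-nginx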
Done?
Halfway through the vftool setup I started to realize that connecting from my MacBook to this Minikube setup would take more and more time, and I started to think about alternatives. I had run Rancher Desktop on Intel before and figured it could be that alternative; there is another post on how to set up Rancher Desktop. Being able to connect directly from my M1 terminal and my VS Code setup was simply not possible with the vftool setup above.
The vftool setup is also not 100% stable: I had dozens of kernel panics while setting it up and figuring out how to get minikube running inside vftool. I even got a kernel oops during the creation of the symlink.