Creating a VirtualBox VM-to-VM network to control Kubernetes
Categories: Docker, Technical

I’ve recently wanted to look at Kubernetes, having found some cases where Docker Compose and Swarm don’t seem to do what I want. I’ve been putting it off for far too long, so I am jumping in. At the time of writing, I am intrigued by this monster article.

The first piece of advice I’ve received is that Kubernetes (or Minikube) really needs to operate in its own hypervisor-level virtual machine. However, that’s trickier for me, since my dev environment is already in a (VirtualBox) virtual machine, and I don’t want to nest them. I was originally expecting to just run Minikube in my existing VM, but I now get the impression that my existing (non-Kube) containers would confuse things. I assume this is because Kube wants to be able to scan the state of each host under its control, in order to make the smallest set of changes to mutate a server fleet from its current state to a desired state.

This means I need to create a new VM, configure it, and create a network between my dev VM and the new Kube VM. This requirement is somewhat niche, and so the below may just be personal notes; nevertheless, it is possible someone else might find some parts useful.

Creating a new VM

To start, I grabbed the ISO of the latest Ubuntu Server LTS release, since a GUI isn’t needed here. Then create a new VM in the VirtualBox GUI; here’s some suggested config:

  • Default video RAM (16M)
  • 1G or 2G RAM, depending on what you can spare
  • 15G disk, dynamically allocated (so it’ll only take what you use, rather than reserving all 15G straight away)
  • 2 CPUs (a Kubernetes requirement – it won’t start with 1)
  • CD-ROM booting to “ubuntu-20.04-live-server-amd64.iso” (or whatever your ISO is called)
  • Boot in the order: Hard disk, Optical (remove Floppy)
  • I run an ultra-high-def display, so I tweak the Display Scale Factor to 2.0, but use what works for you here
  • Shared Folders: I like to map a read/write folder from “/home/myuser/Development/KubeVolume” on the host to “KubeVolume” on the guest. This becomes a nice replacement for the Shared Clipboard feature, which doesn’t work in console-only guests
  • Network: leave the first adapter as it is, and add a second one. Choose a “Bridged Adapter” and then choose the primary network device in your host (this means inter-machine traffic goes through your physical device rather than a virtual one, but that’s fine)
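If you prefer to script the VM creation rather than click through the GUI, the same settings can be applied with VBoxManage. This is only a sketch: the VM name, disk and ISO paths, shared-folder path, and bridge interface name (enp3s0) are all assumptions to adjust for your machine:

```shell
# Create and register the VM (name and paths are examples)
VBoxManage createvm --name k8s --ostype Ubuntu_64 --register

# 2G RAM, 2 CPUs (Kubernetes won't start with 1), default 16M video RAM
VBoxManage modifyvm k8s --memory 2048 --cpus 2 --vram 16

# 15G dynamically allocated disk, attached alongside the install ISO
VBoxManage createmedium disk --filename "$HOME/VirtualBox VMs/k8s/k8s.vdi" --size 15360 --variant Standard
VBoxManage storagectl k8s --name SATA --add sata
VBoxManage storageattach k8s --storagectl SATA --port 0 --device 0 --type hdd --medium "$HOME/VirtualBox VMs/k8s/k8s.vdi"
VBoxManage storageattach k8s --storagectl SATA --port 1 --device 0 --type dvddrive --medium "$HOME/Downloads/ubuntu-20.04-live-server-amd64.iso"

# Boot order: hard disk first, then optical; no floppy
VBoxManage modifyvm k8s --boot1 disk --boot2 dvd --boot3 none --boot4 none

# Shared folder, plus a bridged second network adapter
VBoxManage sharedfolder add k8s --name KubeVolume --hostpath /home/myuser/Development/KubeVolume
VBoxManage modifyvm k8s --nic2 bridged --bridgeadapter2 enp3s0
```

The dev VM gets the same second adapter by repeating the final modifyvm line with its own VM name.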

Then the dev (existing) VM needs to have the same second network adapter added to it. My dev machine is running Mint 19.3 and auto-detected the second network without any manual configuration in the guest (it can vary from one distro to another, as we’ll see later below).

Preparing the VM

The next thing to do is to run through the installer. This should boot from the ISO. I like to update the installer if it offers (the ISO release is usually a bit out of date). Then go through the installation process as one would expect. The network screen should detect both networks, which is why it is very helpful to have configured this already.

I like to call the machine “k8s”, choose a login name to match my dev machine name, and a trivial password (it’s local-only, so it doesn’t need to be secure). Towards the end I agree to install an OpenSSH server, but don’t configure it. I skip the offer of standard server software – this can be done manually later.

Once the installation is done, it should offer a reboot, which you should accept. If you’ve got the boot order set up correctly in the VirtualBox configuration, it’ll detect that the hard disk is a valid boot source, and will boot from that instead of the CD-ROM (“optical”).

Once the boot messages have flown past, it should give you a login prompt. Login here using the credentials set up earlier. My experience was that the login prompt is shown and then a bunch more boot messages are added, making a bit of a mess of the screen. I tend to need to type my username, press return, type the password, press return, without paying much heed to the worsening mess on the screen! I finally do a Ctrl-L to clean it up.

Preparing the guest OS

I tend to start off with bringing the OS up to date:

sudo apt-get update
sudo apt-get upgrade -y

Secondly, let’s get Docker and net-tools (which provides ifconfig) installed:

sudo apt-get install -y docker.io net-tools

Next, let’s get the VirtualBox Guest Additions installed – that’ll enable features like Shared Folders. Click on the Devices menu for the VM, and select “Insert Guest Additions CD image…”. Then you can do:

sudo mkdir /media/cdrom
sudo mount /dev/cdrom /media/cdrom
cd /media/cdrom
sudo apt-get install -y gcc make perl
sudo ./VBoxLinuxAdditions.run

While we are here, let’s add the current user to the Shared Folders and Docker groups:

sudo usermod -a -G vboxsf $USER
sudo usermod -a -G docker $USER

The changes will need a reboot, so do that:

sudo reboot

Checking the network

My initial build of this machine did not have the correct network devices configured when the OS was installed, and thus some manual set-up was required. If your network devices don’t come up automatically, you may find that this /etc/network/interfaces configuration does the trick:

auto lo

auto enp0s3
iface enp0s3 inet dhcp

auto enp0s8
iface enp0s8 inet dhcp
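Note that recent Ubuntu Server releases manage networking with netplan rather than ifupdown by default, so /etc/network/interfaces may be ignored unless the ifupdown package is installed. A hypothetical netplan equivalent (the filename and device names are assumptions) would be:

```yaml
# /etc/netplan/01-netcfg.yaml -- apply with "sudo netplan apply"
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: true
    enp0s8:
      dhcp4: true
```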

However, do check whether ifconfig gives you an IPv4 address for each VM against the secondary adapter (enp0s8 in this case). If it does, you don’t need to change anything further.
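A quick way to inspect just that adapter (the name enp0s8 is what appeared for me; yours may differ):

```shell
# Look for an "inet" line with an IPv4 address on the bridged adapter
ifconfig enp0s8
# The same check with the iproute2 tooling, if you prefer
ip -4 addr show enp0s8
```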

Finally, to ensure that you have connectivity, get the IP address for each machine, and confirm the connection by pinging one from the other. For example, you might have the Dev VM on an address ending .43.178 and the Kube VM on one ending .43.41; thus from the Dev VM, ping the .43.41 address, and from the Kube VM, ping the .43.178 one.
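Concretely, with hypothetical addresses (the 192.168.43.x values are placeholders – substitute whatever ifconfig reports on the bridged adapter):

```shell
# From the Dev VM, ping the Kube VM three times and stop
ping -c 3 192.168.43.41
# From the Kube VM, ping the Dev VM
ping -c 3 192.168.43.178
```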

Installing Kubernetes on the Kube VM

For now, I am going with Minikube; here are the install instructions. Incidentally, I have chosen to ignore the advice to ensure the guest CPU supports virtualisation instructions; I only want to orchestrate Docker containers, and this does not need full-fat virtualisation.

Then let’s start Kube:

minikube start --driver=docker

If there are any problems starting this, I’ve found it’s worth issuing a “minikube delete” and then experimenting with other start options (e.g. --preload=false).
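The retry cycle looks like this; --preload=false (which skips downloading the preloaded cluster-image tarball) is just one example of a flag to try:

```shell
# Throw away any half-built cluster state, then start afresh
minikube delete
minikube start --driver=docker --preload=false
```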

It looks like this will start a Docker-in-Docker (DinD) configuration, which makes the VirtualBox VM somewhat unnecessary. I wondered whether to skip the DinD approach, which I think can be done thus:

minikube delete
sudo apt-get install -y conntrack
sudo minikube start --driver=none

However this failed for me due to permission errors, so I will persist with what works for now.

Installing Kubectl on the Dev VM

On Ubuntu, you can just run:

sudo snap install kubectl --classic
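To check the install worked, ask for the client version (this needs no cluster connection):

```shell
# Prints the kubectl client version only
kubectl version --client
```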

Connecting kubectl to the correct host

On the Kube VM, you can do:

kubectl api-resources

This lists the API resource types available on the Kube machine. I get this:

NAME                             SHORTNAMES  APIGROUP                      NAMESPACED  KIND
bindings                                                                   true        Binding
componentstatuses                cs                                        false       ComponentStatus
configmaps                       cm                                        true        ConfigMap
endpoints                        ep                                        true        Endpoints
events                           ev                                        true        Event
limitranges                      limits                                    true        LimitRange
namespaces                       ns                                        false       Namespace
nodes                            no                                        false       Node
persistentvolumeclaims           pvc                                       true        PersistentVolumeClaim
persistentvolumes                pv                                        false       PersistentVolume
pods                             po                                        true        Pod
podtemplates                                                               true        PodTemplate
replicationcontrollers           rc                                        true        ReplicationController
resourcequotas                   quota                                     true        ResourceQuota
secrets                                                                    true        Secret
serviceaccounts                  sa                                        true        ServiceAccount
services                         svc                                       true        Service
mutatingwebhookconfigurations                admissionregistration.k8s.io  false       MutatingWebhookConfiguration
validatingwebhookconfigurations              admissionregistration.k8s.io  false       ValidatingWebhookConfiguration
customresourcedefinitions        crd,crds    apiextensions.k8s.io          false       CustomResourceDefinition
apiservices                                  apiregistration.k8s.io        false       APIService
controllerrevisions                          apps                          true        ControllerRevision
daemonsets                       ds          apps                          true        DaemonSet
deployments                      deploy      apps                          true        Deployment
replicasets                      rs          apps                          true        ReplicaSet
statefulsets                     sts         apps                          true        StatefulSet
tokenreviews                                 authentication.k8s.io         false       TokenReview
localsubjectaccessreviews                    authorization.k8s.io          true        LocalSubjectAccessReview
selfsubjectaccessreviews                     authorization.k8s.io          false       SelfSubjectAccessReview
selfsubjectrulesreviews                      authorization.k8s.io          false       SelfSubjectRulesReview
subjectaccessreviews                         authorization.k8s.io          false       SubjectAccessReview
horizontalpodautoscalers         hpa         autoscaling                   true        HorizontalPodAutoscaler
cronjobs                         cj          batch                         true        CronJob
jobs                                         batch                         true        Job
certificatesigningrequests       csr         certificates.k8s.io           false       CertificateSigningRequest
leases                                       coordination.k8s.io           true        Lease
endpointslices                               discovery.k8s.io              true        EndpointSlice
events                           ev          events.k8s.io                 true        Event
ingresses                        ing         extensions                    true        Ingress
ingressclasses                               networking.k8s.io             false       IngressClass
ingresses                        ing         networking.k8s.io             true        Ingress
networkpolicies                  netpol      networking.k8s.io             true        NetworkPolicy
runtimeclasses                               node.k8s.io                   false       RuntimeClass
poddisruptionbudgets             pdb         policy                        true        PodDisruptionBudget
podsecuritypolicies              psp         policy                        false       PodSecurityPolicy
clusterrolebindings                          rbac.authorization.k8s.io     false       ClusterRoleBinding
clusterroles                                 rbac.authorization.k8s.io     false       ClusterRole
rolebindings                                 rbac.authorization.k8s.io     true        RoleBinding
roles                                        rbac.authorization.k8s.io     true        Role
priorityclasses                  pc          scheduling.k8s.io             false       PriorityClass
csidrivers                                   storage.k8s.io                false       CSIDriver
csinodes                                     storage.k8s.io                false       CSINode
storageclasses                   sc          storage.k8s.io                false       StorageClass
volumeattachments                            storage.k8s.io                false       VolumeAttachment

However, we want to do this from the dev machine, and by default, if we run the same command from there, we will get the following:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

So we need a way to connect the dev box to the Kube box [todo].


TODO: remove or expand the section on adding the OpenSSH server, depending on whether it was used.

The Kube master needs to publish a public IP that the dev server needs to connect to. I think how to set this up is detailed here, I’ll merge this into the main blog post once I have got this working.

It’d be interesting to set up a separate Kube worker VM in VirtualBox (so in my case I would have my existing dev VM (GUI), a master Kube VM (console), and a worker Kube VM (console)).
