Minikube on macOS
There are many times when I want to test something, do a quick proof of concept, or simply play around with something new. Although I could set up a test cluster, it's easier to have a clean slate without having to worry about anything: deploy the workload, play around with it, break it, fix it, whatever. Then, when you're done, destroy the cluster and move on.
This is where minikube comes into play. It can mimic a production-like cluster without having to manually install a CNI, load balancer, ingress, and so on.
Prerequisites
- Apple Computer with Apple Silicon
- vfkit
Instead of using Docker Desktop, we're going to use vfkit, the preferred driver for running minikube inside a VM. There are no additional requirements if you plan on running a single node. If you want to communicate with other clusters or run a multi-node setup, you will need the vmnet-shared network provided by vmnet-helper.
Installation
First, we're going to install Homebrew if it isn't installed already. Then we're going to install minikube and vfkit.
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install minikube vfkit
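Before moving on, it can help to confirm that everything landed on your PATH. A minimal sketch using nothing but `command -v`:

```shell
# Check that each required tool is on PATH; prints one line per tool.
for tool in brew minikube vfkit; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: missing"
  fi
done
```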
Creating the Cluster
Starting the cluster is simple. In this example I'm going to use Calico, but you can use flannel, cilium, kindnet, or a path to a CNI manifest file. By default this is auto.
minikube start --cni calico --driver vfkit
😄 minikube v1.37.0 on Darwin 15.6.1 (arm64)
✨ Using the vfkit driver based on user configuration
💿 Downloading VM boot image ...
> minikube-v1.37.0-arm64.iso: 387.80 MiB / 387.80 MiB 100.00% 430.03 KiB
👍 Starting "minikube" primary control-plane node in "minikube" cluster
💾 Downloading Kubernetes v1.34.0 preload ...
> preloaded-images-k8s-v18-v1...: 332.38 MiB / 332.38 MiB 100.00% 563.68
🔥 Creating vfkit VM (CPUs=2, Memory=6144MB, Disk=20000MB) ...
❗ Failing to connect to https://registry.k8s.io/ from inside the minikube VM
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
🔗 Configuring Calico (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
minikube profile list
┌──────────┬────────┬─────────┬──────────────┬─────────┬────────┬───────┬────────────────┬────────────────────┐
│ PROFILE │ DRIVER │ RUNTIME │ IP │ VERSION │ STATUS │ NODES │ ACTIVE PROFILE │ ACTIVE KUBECONTEXT │
├──────────┼────────┼─────────┼──────────────┼─────────┼────────┼───────┼────────────────┼────────────────────┤
│ minikube │ vfkit │ docker │ 192.168.64.6 │ v1.34.0 │ OK │ 1 │ * │ * │
└──────────┴────────┴─────────┴──────────────┴─────────┴────────┴───────┴────────────────┴────────────────────┘
Note: If you have multiple instances, you can use the -p or --profile flag to specify which cluster the action will be performed on.
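For example (a sketch; the profile name here is made up):

```shell
# Create a second cluster under its own profile, then operate on it by name.
minikube start -p scratch --driver vfkit   # "scratch" is a hypothetical profile name
minikube stop -p scratch
minikube delete -p scratch
```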
Flag | Description |
---|---|
`--cni` | CNI plug-in to use. Valid options: auto, bridge, calico, cilium, flannel, kindnet, or path to a CNI manifest (default: auto) |
`--container-runtime` | The container runtime to be used. Valid options: docker, cri-o, containerd (default: auto) |
`--cpus` | Number of CPUs allocated to Kubernetes. Use "max" to use the maximum number of CPUs. Use "no-limit" to not specify a limit (Docker/Podman only) (default: 2) |
`--disk-size` | Disk size allocated to the minikube VM (format: <number>[<unit>], where unit = b, k, m or g) (default: 20000mb) |
`-d`, `--driver` | Driver to run Kubernetes in. The list of available drivers depends on the operating system. |
`--ha` | Create a highly available multi-control-plane cluster with a minimum of three control-plane nodes that will also be marked for work. |
`--memory` | Amount of RAM to allocate to Kubernetes (format: <number>[<unit>], where unit = b, k, m or g). Use "max" to use the maximum amount of memory. Use "no-limit" to not specify a limit (Docker/Podman only) |
`-n`, `--nodes` | The total number of nodes to spin up (default: 1) |
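Putting a few of these together, a hypothetical beefier cluster might look like this (adjust the sizes to what your machine can spare):

```shell
# Hypothetical example: three nodes, cilium as the CNI, larger VM sizing.
minikube start \
  --driver vfkit \
  --cni cilium \
  --nodes 3 \
  --cpus 4 \
  --memory 8g \
  --disk-size 40g
```

Keep in mind that a multi-node vfkit cluster needs the vmnet-shared network mentioned in the prerequisites.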
Configuration
By default, the only addons enabled are default-storageclass and storage-provisioner. Once the cluster is up and running you can start enabling more. The first thing we want is an ingress so we don't have to use minikube tunnel or NodePorts. Next, we're going to add ingress-dns so we can quickly and easily access workloads by name.
minikube addons enable ingress
💡 ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
▪ Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
🔎 Verifying ingress addon...
minikube addons enable ingress-dns
💡 ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
▪ Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
🌟 The 'ingress-dns' addon is enabled
sudo mkdir /etc/resolver
minikube ip
192.168.64.6
We're going to configure our machine to resolve the .test top-level domain to the cluster. To do that, we must create the /etc/resolver/minikube.conf file. The file can be named anything, but it needs to exist in the /etc/resolver directory.
cat << EOF | sudo tee /etc/resolver/minikube.conf
domain test
nameserver $(minikube ip)
search_order 1
timeout 5
EOF
After adding the resolver, we should restart the service and flush the cache.
sudo killall -HUP mDNSResponder
sudo dscacheutil -flushcache
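To confirm macOS picked up the new resolver, you can list the configured DNS resolvers and look for the test domain (a macOS-only sketch; the grep pattern may need adjusting to scutil's exact output):

```shell
# List macOS DNS resolvers and show the entry for the "test" domain.
scutil --dns | grep -B 1 -A 3 'domain *: *test'
```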
Finally, as a bonus, we're going to set up the load balancer. For testing purposes this isn't strictly necessary; it's there to mimic a production cluster.
minikube addons enable metallb
❗ metallb is a 3rd party addon and is not maintained or verified by minikube maintainers, enable at your own risk.
❗ metallb does not currently have an associated maintainer.
▪ Using image quay.io/metallb/speaker:v0.9.6
▪ Using image quay.io/metallb/controller:v0.9.6
🌟 The 'metallb' addon is enabled
minikube addons configure metallb
-- Enter Load Balancer Start IP: 192.168.64.10
-- Enter Load Balancer End IP: 192.168.64.30
▪ Using image quay.io/metallb/speaker:v0.9.6
▪ Using image quay.io/metallb/controller:v0.9.6
✅ metallb was successfully configured
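With MetalLB configured, any Service of type LoadBalancer should get an address from the pool. A quick throwaway check (the deployment and service names here are made up):

```shell
# Deploy a scratch app and expose it as a LoadBalancer (names are hypothetical).
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --type=LoadBalancer --port=80
# EXTERNAL-IP should come from the 192.168.64.10-192.168.64.30 pool configured above.
kubectl get service lb-test
# Clean up when done.
kubectl delete service,deployment lb-test
```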
Testing and Troubleshooting
There are several things to test to ensure everything is running as it should:
- You can ping $(minikube ip)
- A test deployment is accessible by name
- hello-john.test and hello-jane.test have DNS records
- Both example apps respond to pings
- Both example apps can be accessed through the browser
ping -c 3 $(minikube ip)
PING 192.168.64.6 (192.168.64.6): 56 data bytes
64 bytes from 192.168.64.6: icmp_seq=0 ttl=64 time=0.575 ms
64 bytes from 192.168.64.6: icmp_seq=1 ttl=64 time=0.580 ms
64 bytes from 192.168.64.6: icmp_seq=2 ttl=64 time=0.685 ms
--- 192.168.64.6 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.575/0.613/0.685/0.051 ms
kubectl apply -f https://raw.githubusercontent.com/kubernetes/minikube/master/deploy/addons/ingress-dns/example/example.yaml
deployment.apps/hello-world-app created
ingress.networking.k8s.io/example-ingress created
service/hello-world-app created
service/hello-world-app created
kubectl get ingresses -A
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
kube-system example-ingress nginx hello-john.test,hello-jane.test 192.168.64.6 80 49s
nslookup hello-john.test $(minikube ip)
Server: 192.168.64.6
Address: 192.168.64.6#53
Non-authoritative answer:
Name: hello-john.test
Address: 192.168.64.6
nslookup hello-jane.test $(minikube ip)
Server: 192.168.64.6
Address: 192.168.64.6#53
Non-authoritative answer:
Name: hello-jane.test
Address: 192.168.64.6
ping -c 3 hello-john.test
PING hello-john.test (192.168.64.6): 56 data bytes
64 bytes from 192.168.64.6: icmp_seq=0 ttl=64 time=0.537 ms
64 bytes from 192.168.64.6: icmp_seq=1 ttl=64 time=1.025 ms
64 bytes from 192.168.64.6: icmp_seq=2 ttl=64 time=0.800 ms
--- hello-john.test ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.537/0.787/1.025/0.199 ms
ping -c 3 hello-jane.test
PING hello-jane.test (192.168.64.6): 56 data bytes
64 bytes from 192.168.64.6: icmp_seq=0 ttl=64 time=0.436 ms
64 bytes from 192.168.64.6: icmp_seq=1 ttl=64 time=0.492 ms
64 bytes from 192.168.64.6: icmp_seq=2 ttl=64 time=0.757 ms
--- hello-jane.test ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.436/0.562/0.757/0.140 ms
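DNS and ping prove resolution, but a request through the ingress confirms the apps actually serve HTTP:

```shell
# Fetch each app through the nginx ingress; -s silences progress, -i includes headers.
# The first line of each response should report a 200 status.
curl -si http://hello-john.test | head -n 1
curl -si http://hello-jane.test | head -n 1
```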
If you can't ping $(minikube ip), it might be because the cluster is using the docker driver.
If you are getting timeout or connection errors when executing kubectl commands, ensure that the cluster is running.
Performing an nslookup should return a response with $(minikube ip) in the server field. If you don't get a response, ensure that the configuration is correct and pointing at the right IP, then try restarting mDNSResponder and flushing the DNS cache again. The same steps apply when pinging a DNS name that isn't resolving to an IP.
Conclusion
Now you have a basic, functional cluster. The start sub-command has many more options, and I strongly recommend reading over all of them, as well as the other sub-commands.
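And when you're done playing, tearing everything down takes seconds:

```shell
# Stop the VM but keep its state for next time ...
minikube stop
# ... or delete the cluster entirely; --all removes every profile.
minikube delete --all
```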