Building a k3s Cluster on Raspberry Pis
A while ago I thought to myself that I’d like to set up yet another thing that will drive me insane when something inevitably (and often inexplicably) goes wrong, and so I bought a few Raspberry Pi 4B computers with the intention of clustering them with kubernetes.
Motivation⌗
I had a few reasons that I wanted to create this cluster:
- I wanted an easy (haha) way to stand up services on my home network, like Pi-hole to block ads across all of my devices, a private Docker registry for my images, Home Assistant for home automation, and many more
- A cluster could serve as a simple test bench for services and applications that I develop with the intention of exposing to the internet through a cloud provider sometime later
- It’s an excellent tool to learn how to set up and maintain big data tools
- I wanted to keep something in my house that was much like a pet in that it required a lot of time and effort to keep it alive and healthy, but without any of the benefits like companionship
After three or four total wipes and complete OS reinstalls of each Pi, two PoE switches, and many late nights spent wondering why in the hell prometheus and/or grafana suddenly stopped working - I eventually got there, so I figured I’d write about it not just for your benefit, but also as a way to help me remember what the hell I did to get this thing working in the first place (I’m not really doing a great job at selling the whole concept of creating a cluster, am I?).
In this post, I’ll be going over the initial setup to get kubernetes up and running. I’ve gone with a flavor of kubernetes called K3s developed by the folks over at Rancher, since it’s much more lightweight and offers pretty much everything I’ll need. There are two ways of getting k3s running; one way is considerably easier than the other, but you won’t score nearly as many cool points with the other ~~masochists~~ people who have set up their own kubernetes clusters at home.
Local setup⌗
It’s not strictly necessary to install kubectl on your local machine, but it’s very nice to have. I’m on a Mac, so I use brew to do it. If you’re also on a Mac, you can just run the following command:
brew install kubernetes-cli
If you’re on a different operating system, then I don’t know what you should run to get that installed. I trust that you probably know your OS better than I do. Do what feels right to you.
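Whichever OS you’re on, a quick sanity check that it landed:

```bash
kubectl version --client
```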
Setting up the Pis⌗
Hardware⌗
At a minimum, you’re going to need:
- One or more Raspberry Pis (duh)
- One microSD card for each Pi
- A power source
- Some way to connect the Pis to the internet
A convenient way to take care of both the power source and internet access is via PoE (Power over Ethernet), which will also require:
- A PoE capable switch
- One PoE HAT (Hardware Attached on Top) for each Pi
- One ethernet cable per Pi, plus another one to connect the router or gateway to the switch
It might cost a little extra, but I think using a PoE HAT makes for a much cleaner look and a much smoother experience.
Operating System⌗
Right off the bat, don’t be like me and install a 32-bit operating system on your first try. You’ll get all the way to the end, and then realize that the reason every pod is entering a CrashLoopBackOff state is that its image was built for ARM64, and you’ll have to start all over again. Download the Raspberry Pi Imager and choose an operating system (I just went with the 64-bit Raspberry Pi OS Lite because I’m not an OS snob). Open up the advanced options to set some things like the hostname of each node, username/password, WiFi, and locale settings. Make sure to enable SSH, set up public key authentication, and disable password authentication. This is not the only chance you have to do this, but the other method is just not as easy. Write the OS with your configuration to some SD cards, put them into the Pis, and plug them in. Once online, you should be able to SSH into one of the nodes.
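Speaking from 32-bit experience, it’s worth checking the architecture the moment you can SSH in. The hostname and username here are just examples; use whatever you set in the imager:

```bash
# should print "aarch64"; "armv7l" means you installed a 32-bit OS and get to start over
ssh pi@pimaster.local 'uname -m'
```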
Setting up the Network⌗
To make things easier, I reserved some IPs for my cluster nodes in my router. I’m sure there are ways to set k3s up with dynamic IPs, but I didn’t want to get that fancy, and I have plenty of IP space in my local network.
Installing k3s the Hard Way⌗
K3s is fairly simple to install in its own right, but doing it manually is still quite a bit more work than the alternative. By default, it will be set up with flannel, klipper, and traefik. I’m going to keep flannel, but I want to use a different service load balancer, and I’m going to configure traefik myself later, so I’m going to configure my k3s installation to not use klipper or traefik. You don’t have to do this if you don’t want to.
The master node⌗
SSH into the node you want to designate as the master node, and run the following command to install the k3s server process:
curl -sfL https://get.k3s.io | sh -s - --disable servicelb --disable traefik --token <hopefully-a-good-token>
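Before logging out, you can confirm the server actually came up. K3s bundles its own kubectl, so this works right on the node with no kubeconfig fiddling:

```bash
sudo systemctl status k3s   # should report active (running)
sudo k3s kubectl get nodes  # the master should show up as Ready
```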
Make sure to keep track of the token, since it will be used to set up the worker nodes. Once this is up and running, log out of the master node, and copy the contents of the k3s configuration file at /etc/rancher/k3s/k3s.yaml into ~/.kube/config on your local machine. Note that k3s.yaml points the server at 127.0.0.1, so you’ll need to change that to the master node’s IP.
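Here’s a rough sketch of that copy, assuming your Pi’s user can sudo without a password (the Raspberry Pi OS default):

```bash
mkdir -p ~/.kube
# k3s.yaml is only readable by root on the node, hence the sudo cat
ssh <user>@<master-node-ip> 'sudo cat /etc/rancher/k3s/k3s.yaml' > ~/.kube/config
# point kubectl at the master node instead of localhost
sed -i.bak 's/127.0.0.1/<master-node-ip>/' ~/.kube/config
```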
You should be able to run `kubectl get nodes` and see:

```
NAME       STATUS   ROLES                  AGE    VERSION
pimaster   Ready    control-plane,master   152d   v1.24.10+k3s1
```
…or, you know, something like that. It probably won’t be 152 days old. Yes I’ve been putting off writing this for that long.
The worker nodes⌗
Technically, k3s can run on a single node, but where’s the fun in that? SSH into one of the other nodes, and run the following command to start the agent process:

curl -sfL https://get.k3s.io | sh -s - agent --server https://<master-node-ip>:6443 --token <the-same-token-from-before>

(The --disable flags from before are server-only options, so they aren’t needed here.)
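The install script registers the agent as a systemd unit, so you can check on it before logging out:

```bash
sudo systemctl status k3s-agent
```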
If all goes well, log out of the node, run `kubectl get nodes` again, and you should see something like:

```
NAME       STATUS   ROLES                  AGE    VERSION
pimaster   Ready    control-plane,master   152d   v1.24.10+k3s1
pinode-1   Ready    <none>                 152d   v1.24.10+k3s1
```
Now all you have to do is just repeat this step on each node. Have fun!
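Or, if typing that on every node gets old, a little SSH loop does the same thing (the hostnames are mine; substitute your own):

```bash
for host in pinode-1 pinode-2 pinode-3; do
  ssh <user>@"$host" "curl -sfL https://get.k3s.io | sh -s - agent --server https://<master-node-ip>:6443 --token <the-same-token-from-before>"
done
```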
Installing MetalLB⌗
When running a kubernetes cluster through a cloud provider, any time a LoadBalancer service is created, an external load balancer with its own IP address is provisioned and can balance traffic to all replicas of a pod across the cluster. Since I’ve opted to disable the service load balancer that k3s comes with out of the box, any LoadBalancer service created on the cluster will be stuck in a pending state indefinitely, since nothing knows how to go and provision the load balancer. MetalLB is a service-level network load balancer for bare-metal clusters (like this one) that aren’t running on an IaaS platform (GCP, AWS, Azure, etc.). It can run in layer 2 or BGP mode, and offers slightly different functionality in each. I’m going to be running MetalLB in layer 2 mode, which doesn’t actually offer any load balancing benefits since it designates a single node to receive all the traffic for a service IP, but it does provide immediate cutover if that node were to ever stop responding (node failure is very common on my cluster). Additionally, any service of type LoadBalancer will have a consistent externally-accessible IP address that persists across cluster reboots.
Once all of the agents have been created, go back into the router settings and adjust the DHCP address pool to block out a set of IPs. These are the IPs that are going to be reserved for MetalLB. I’m installing MetalLB via manifest because it’s easy and I don’t have to think about it, but if you’re the thinking type, there’s a whole bunch of options here.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.10/config/manifests/metallb-native.yaml
Once that’s done, create the IPAddressPool that will hold the configuration for IPs. This should correspond to the block of IPs that your router can no longer assign.
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool # boring name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250 # or whatever your IP range is
```
And also the L2Advertisement configuration:
```yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example # name it something more interesting than this
  namespace: metallb-system
```
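Save both of those to files and apply them; the file names here are just what I happened to call them:

```bash
kubectl apply -f ipaddresspool.yaml -f l2advertisement.yaml
```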
If successful, the MetalLB pods should be up and running:

```
kubectl get pods -n metallb-system
NAME                          READY   STATUS    RESTARTS      AGE
speaker-m2sxs                 1/1     Running   4 (12d ago)   156d
speaker-fzxjg                 1/1     Running   4 (12d ago)   156d
speaker-742f5                 1/1     Running   6 (12d ago)   156d
speaker-nq96n                 1/1     Running   2 (12d ago)   156d
controller-6c58495cbb-lmz7m   1/1     Running   3 (12d ago)   156d
```
(oh cool 6 restarts that’s not concerning haha)
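As a quick smoke test (the nginx deployment is just a stand-in), you can create a LoadBalancer service and watch MetalLB hand it an address:

```bash
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
# EXTERNAL-IP should be an address from the pool, not <pending>
kubectl get svc nginx
```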
Done…?⌗
Yep. If everything’s gone well up until this point, that’s it. The cluster is up and running. Yay?
Installing k3s the Easy Way⌗
Okay, so chances are you’ve probably skipped all of that stuff above this because you don’t really care about doing things “the hard way,” and to be honest neither do I, so here’s the super easy way: running other people’s ansible playbooks. Yeah, I like to put blind faith into someone else’s work.
This will automate the installation of k3s on each node, set up MetalLB, and do some other stuff.
Ansible setup⌗
In order to run ansible, you’re going to need to install it. So, do that. Clone this repository (I really hope you have git). Create a new inventory:
cp -R inventory/sample inventory/my-cluster
Edit inventory/my-cluster/hosts.ini so it looks something like this:
```ini
[master]
<master-ip>

[node]
<node-1-ip>
<node-2-ip>
<node-3-ip>
...

[k3s_cluster:children]
master
node
```
Copy ansible.example.cfg to ansible.cfg and edit the file to point to the new files.
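Before kicking off the playbook, it’s worth confirming ansible can actually reach every node:

```bash
ansible all -i inventory/my-cluster/hosts.ini -m ping
```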
MetalLB setup⌗
Open up your DHCP server settings (usually in your router unless you’re ~*fancy*~) and block out a range of IPs. Open up inventory/my-cluster/group_vars/all.yml and edit the metal_lb_ip_range value to whatever IP range you reserved in your router.
Install⌗
Just run:
ansible-playbook site.yml -i inventory/my-cluster/hosts.ini
Cluster configuration⌗
Copy your cluster config:
scp <user>@<master-ip>:~/.kube/config ~/.kube/config
You should be able to run `kubectl get nodes` and see something like this:

```
NAME       STATUS   ROLES                  AGE    VERSION
pinode-3   Ready    <none>                 156d   v1.24.10+k3s1
pinode-2   Ready    <none>                 156d   v1.24.10+k3s1
pimaster   Ready    control-plane,master   156d   v1.24.10+k3s1
pinode-1   Ready    <none>                 156d   v1.24.10+k3s1
```
And you’re done!
Next Steps⌗
So now that we’ve gotten a cluster up and running, it’s time to start putting it to work. I plan on releasing a few more posts going over the things that I’ve set up on my cluster so far, but really you can do whatever you want with it at this point.