Kubernetes Up & Running on Raspberry Pi

Written on November 10, 2018 by Ron Rivera.

Back in August I bought a couple of Raspberry Pi 3 Model B+ boards to replace my aging Raspberry Pi 1 Model B (Rev 1 & 2) boards. The new Pis are more powerful than the older models, with a 1.4GHz CPU and 1GB of memory, and I was curious what kind of workloads they could handle. An article I came across on the Internet about running Kubernetes on them got me excited enough to try it out.

This post documents the bootstrapping process to get Kubernetes up and running on my Raspberry Pis. The steps are based on this gist from Alex Ellis.

Prerequisites

The following commands need to be executed on all the Raspberry Pis that will participate in the cluster.

Disable swap

sudo dphys-swapfile swapoff && \
  sudo dphys-swapfile uninstall && \
  sudo update-rc.d dphys-swapfile remove
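
To double-check that swap is really gone (an optional sanity check, not part of the original gist):

free -h   # the Swap line should show 0B total after the commands above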

Install Docker

curl -sSL get.docker.com | sh && \
  sudo usermod pi -aG docker
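
Note that the group change only takes effect on a new login, so log out and back in (or run newgrp docker) before testing. An optional check:

docker info   # should print daemon details without a permission error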

Install Kubernetes

Let's configure the apt repos first:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
  echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Fire away!

sudo apt-get update -q && \
  sudo apt-get install -qy kubeadm kubectl kubelet
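
Optionally, pin the packages so a later apt-get upgrade doesn't move the cluster version behind your back, and confirm what was installed:

sudo apt-mark hold kubeadm kubectl kubelet   # optional: prevent unplanned upgrades
kubeadm version -o short                     # confirm the installed version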

Initialise the master

$ sudo kubeadm init --token-ttl=0 --apiserver-advertise-address=192.168.68.101 --apiserver-cert-extra-sans=192.168.11.161
[init] Using Kubernetes version: v1.12.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master1 localhost] and IPs [192.168.68.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master1 localhost] and IPs [192.168.68.101 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.68.101 192.168.11.161]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 88.008650 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master1" as an annotation
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7ivizs.j3u889c9jbq3a8je
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
 
Your Kubernetes master has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
You can now join any number of machines by running the following on each node
as root:
 
  kubeadm join 192.168.68.101:6443 --token 7ivizs.j3u889c9jbq3a8je --discovery-token-ca-cert-hash sha256:92a5cc54dcfc02090bc62cb24eeac4be4d52222b4e16735289933aa8756b07a2
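
A handy extra that isn't part of the run above: if you misplace the join command, kubeadm can regenerate it at any time on the master:

sudo kubeadm token create --print-join-command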

List the running containers:

$ docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
aa21893b644d        cbfd7b701c50           "/usr/local/bin/kube…"   3 hours ago         Up 3 hours                              k8s_kube-proxy_kube-proxy-zztbt_kube-system_4ef31447-2c38-11e9-834d-b827eba8aa5f_0
7ef414efc760        k8s.gcr.io/pause:3.1   "/pause"                 3 hours ago         Up 3 hours                              k8s_POD_kube-proxy-zztbt_kube-system_4ef31447-2c38-11e9-834d-b827eba8aa5f_0
1362f53c6303        e7a8884c8443           "etcd --advertise-cl…"   3 hours ago         Up 3 hours                              k8s_etcd_etcd-k8s-master1_kube-system_0267391d1532ab5038d4d5441cd627b9_0
20834841d2f4        0ddf6718d29a           "kube-scheduler --ad…"   3 hours ago         Up 3 hours                              k8s_kube-scheduler_kube-scheduler-k8s-master1_kube-system_b734fcc86501dde5579ce80285c0bf0c_0
f6f3176cdf29        bd6b57bce692           "kube-controller-man…"   3 hours ago         Up 3 hours                              k8s_kube-controller-manager_kube-controller-manager-k8s-master1_kube-system_c1d5aacbd405dfae53a088bbc880cbba_0
2bdbc7b35b62        c17fe5008018           "kube-apiserver --au…"   3 hours ago         Up 3 hours                              k8s_kube-apiserver_kube-apiserver-k8s-master1_kube-system_8ea4262f61190ca03c0eaacce3e4b8d7_0
b0245cc9220b        k8s.gcr.io/pause:3.1   "/pause"                 3 hours ago         Up 3 hours                              k8s_POD_etcd-k8s-master1_kube-system_0267391d1532ab5038d4d5441cd627b9_0
69d712a6ea7b        k8s.gcr.io/pause:3.1   "/pause"                 3 hours ago         Up 3 hours                              k8s_POD_kube-scheduler-k8s-master1_kube-system_b734fcc86501dde5579ce80285c0bf0c_0
ceab03b0d2fd        k8s.gcr.io/pause:3.1   "/pause"                 3 hours ago         Up 3 hours                              k8s_POD_kube-controller-manager-k8s-master1_kube-system_c1d5aacbd405dfae53a088bbc880cbba_0
82872d75820a        k8s.gcr.io/pause:3.1   "/pause"                 3 hours ago         Up 3 hours                              k8s_POD_kube-apiserver-k8s-master1_kube-system_8ea4262f61190ca03c0eaacce3e4b8d7_0

Run the following commands to be able to communicate with the cluster:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
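
Optionally, you can copy that kubeconfig to another machine on the LAN and drive the cluster from there (the paths below are just an example):

scp pi@192.168.68.101:.kube/config ~/.kube/config-rpi
kubectl --kubeconfig ~/.kube/config-rpi get nodes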

Deploy the pod network

$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created

Verify the components are up and running:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-lmqlz              1/1     Running   0          6h20m
kube-system   coredns-86c58d9df4-ntpfm              1/1     Running   0          6h20m
kube-system   etcd-k8s-master1                      1/1     Running   0          6h20m
kube-system   kube-apiserver-k8s-master1            1/1     Running   0          6h19m
kube-system   kube-controller-manager-k8s-master1   1/1     Running   0          6h19m
kube-system   kube-proxy-zw8pb                      1/1     Running   0          71m
kube-system   kube-proxy-zztbt                      1/1     Running   0          6h20m
kube-system   kube-scheduler-k8s-master1            1/1     Running   0          6h20m
kube-system   weave-net-cbgl8                       2/2     Running   0          71m

Join the worker nodes

Run the following command on each worker node that will participate in the cluster.

$ sudo kubeadm join 192.168.68.101:6443 --token 7ivizs.j3u889c9jbq3a8je --discovery-token-ca-cert-hash sha256:92a5cc54dcfc02090bc62cb24eeac4be4d52222b4e16735289933aa8756b07a2
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.68.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.68.101:6443"
[discovery] Requesting info from "https://192.168.68.101:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.68.101:6443"
[discovery] Successfully established connection with API Server "192.168.68.101:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-worker1" as an annotation
 
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
 
Run 'kubectl get nodes' on the master to see this node join the cluster.

If you get an error that says something like x509: certificate has expired or is not yet valid when joining a worker node to the master, it is likely that the Pi's date is not set correctly. Run this command to fix it (adapt to your timezone accordingly): sudo date --set='TZ="Australia/Sydney" 18 Dec 2018 23:35'
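
Setting the date by hand works once, but the Pi has no real-time clock so it will drift again after a power cycle. A more durable fix, assuming your Raspbian image ships systemd-timesyncd, is to keep the clock synced over NTP:

sudo timedatectl set-ntp true   # enable systemd-timesyncd
timedatectl status              # check that the clock reports as synchronized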

Verify the installation

$ kubectl cluster-info
Kubernetes master is running at https://192.168.68.101:6443
KubeDNS is running at https://192.168.68.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
 
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    master   6h18m   v1.12.2
k8s-worker1   Ready    <none>   69m     v1.12.2
k8s-worker2   Ready    <none>   6m4s    v1.12.2
k8s-worker3   Ready    <none>   4m22s   v1.12.2
$ kubectl get all --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-86c58d9df4-lmqlz              1/1     Running   0          6h19m
kube-system   pod/coredns-86c58d9df4-ntpfm              1/1     Running   0          6h19m
kube-system   pod/etcd-k8s-master1                      1/1     Running   0          6h18m
kube-system   pod/kube-apiserver-k8s-master1            1/1     Running   0          6h18m
kube-system   pod/kube-controller-manager-k8s-master1   1/1     Running   0          6h18m
kube-system   pod/kube-proxy-4s6sl                      1/1     Running   0          5m27s
kube-system   pod/kube-proxy-xkbnp                      1/1     Running   0          7m8s
kube-system   pod/kube-proxy-zw8pb                      1/1     Running   0          70m
kube-system   pod/kube-proxy-zztbt                      1/1     Running   0          6h19m
kube-system   pod/kube-scheduler-k8s-master1            1/1     Running   0          6h18m
kube-system   pod/weave-net-8b8th                       2/2     Running   0          7m8s
kube-system   pod/weave-net-cbgl8                       2/2     Running   0          70m
kube-system   pod/weave-net-f6ksl                       2/2     Running   0          99m
kube-system   pod/weave-net-h9jkk                       2/2     Running   0          5m27s
 
NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP         6h20m
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   6h19m
 
NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/kube-proxy   4         4         4       4            4           <none>          6h19m
kube-system   daemonset.apps/weave-net    4         4         4       4            4           <none>          99m
 
NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   2/2     2            2           6h19m
 
NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-86c58d9df4   2         2         2       6h19m

Install helm and tiller

Now let's install helm, the de facto package manager for kubernetes, following the instructions from the helm documentation.

$ curl -sL -O https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-arm.tar.gz
$ tar -zxvf helm-v2.11.0-linux-arm.tar.gz
$ sudo mv linux-arm/helm /usr/local/bin/helm
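
Before initialising Tiller, it's worth checking that the client binary runs on the Pi (Tiller isn't installed yet, so query only the client):

helm version --client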

There doesn't seem to be an official ARM image for Tiller, as the pod keeps crashing. Fortunately, someone has already built an ARM image, so we'll pass the --tiller-image option as per this post.

Run helm init:

$ helm init --tiller-image=jessestuart/tiller:v2.9.0
$HELM_HOME has been configured at /home/pi/.helm.
 
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
 
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Then create a cluster role binding that grants cluster-admin to the default service account in kube-system:

$ kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
clusterrolebinding.rbac.authorization.k8s.io/add-on-cluster-admin created

To confirm helm is installed correctly, run helm version.
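
A couple of optional extra checks: wait for the Tiller deployment to finish rolling out, then ask both the client and the server for their versions (tiller-deploy is the deployment name created by helm init):

kubectl -n kube-system rollout status deployment/tiller-deploy
helm version   # should report both the Client and Server versions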

Wrapping up

I now have a Kubernetes cluster running on my Raspberry Pis that I can use for experimentation. Documenting the steps to provision it will be useful as I continue to play around with it.

The bootstrap process can be completed in a couple of hours, and if I ever want to reinstall the cluster and start from a clean slate, I can just run the following on all the nodes:

sudo kubeadm reset && \
  sudo apt-get purge -y kubeadm kubectl kubelet kubernetes-cni kube* && \
  sudo apt-get autoremove -y && \
  sudo rm -rf ~/.kube && \
  sudo reboot

That's all folks, 'til the next post.
