K3s
A lightweight Kubernetes distribution suitable for Raspberry Pi and other resource-constrained environments. See the upstream K3s architecture documentation for how server and agent nodes fit together.
Prerequisites#
- Set up your Raspberry Pi devices with `cloud-init` as defined in the Raspberry Pi section.
- Assign static IPs to all the nodes.
- Edit `/etc/hosts` (`sudo nano /etc/hosts`) on each node and add the IPs and hostnames of the other nodes so they can resolve during the join process (example below).
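A minimal sketch of the extra `/etc/hosts` entries, assuming three nodes named pi-server, pi-agent-1, and pi-agent-2 on a 192.168.1.0/24 network; the hostnames and addresses are placeholders for your own:
# /etc/hosts additions (example hostnames and IPs)
192.168.1.10  pi-server
192.168.1.11  pi-agent-1
192.168.1.12  pi-agent-2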
Installation#
A configuration file at `/etc/rancher/k3s/config.yaml` can be used on install instead of CLI arguments; the install script picks it up automatically. `prefer-bundled-bin: true` uses the version of iptables bundled with K3s, and `cluster-init: true` initializes the embedded etcd datastore so additional server nodes can join later.
sudo mkdir -p /etc/rancher/k3s
cat << EOF | sudo tee /etc/rancher/k3s/config.yaml
prefer-bundled-bin: true
cluster-init: true
EOF
Addons#
Any manifest found in `/var/lib/rancher/k3s/server/manifests` is tracked as an `AddOn` custom resource in the `kube-system` namespace. Included: `coredns`, `traefik`, `local-storage`, and `metrics-server`. You can put your own files in the manifests dir for deployment as an `AddOn`.
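As a sketch, any ordinary Kubernetes manifest dropped into that directory is applied automatically; the file name and namespace below are arbitrary examples:
cat << EOF | sudo tee /var/lib/rancher/k3s/server/manifests/my-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-apps
EOF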
Note
The embedded `servicelb` LoadBalancer controller does not have a manifest file, but can be disabled as if it were an `AddOn`.
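Packaged components are disabled with the `disable` option; a sketch of the corresponding `config.yaml` entry (the `--disable` CLI flag is equivalent):
# /etc/rancher/k3s/config.yaml
disable:
  - servicelb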
View any warnings encountered with:
kubectl get event -n kube-system
# or
kubectl describe addon <addon-name> -n kube-system
Important
It's your responsibility to ensure that files stay in sync across server nodes.
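One possible approach is to copy the directory between servers with rsync; a sketch only, with a hypothetical second server hostname (adapt users, hosts, and paths to your setup):
# Copy the manifests directory from this server to another server node (example host)
sudo rsync -av /var/lib/rancher/k3s/server/manifests/ root@pi-server-2:/var/lib/rancher/k3s/server/manifests/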
Datastore#
- Embedded SQLite - Default. For clusters with only 1 server (control plane) node.
- Embedded etcd - Allows HA with multiple server nodes (see the join example after this list).
- External database - Allows HA with multiple server nodes. Supports etcd, MySQL, MariaDB, PostgreSQL.
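A minimal sketch of adding a second server node to the embedded etcd cluster created by `cluster-init: true` above, assuming the first server is reachable as pi-server (from the example hosts entries); replace the hostname and token with your own:
# On an additional server node; the token comes from
# /var/lib/rancher/k3s/server/node-token on the first server
curl -sfL https://get.k3s.io | sh -s - server \
  --server https://pi-server:6443 \
  --token <node-token-value>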
Service#
# On the server node
sudo ufw disable
sudo apt install linux-modules-extra-raspi
curl -sfL https://get.k3s.io | sh -
# Note the values the agents will need
SERVER=`uname -n`
TOKEN=`sudo cat /var/lib/rancher/k3s/server/node-token`
# On each agent node, with SERVER and TOKEN set to the values from the server
sudo ufw disable
sudo apt install linux-modules-extra-raspi
curl -sfL https://get.k3s.io | K3S_URL=https://$SERVER:6443 K3S_TOKEN=$TOKEN sh -
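To confirm the agents registered, check back on the server node (k3s bundles kubectl):
sudo k3s kubectl get nodes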
Flags used in the install commands above:
curl
- `-s` - Silent, no progress meter or error messages.
- `-f` - Fail silently on server errors.
- `-L` - If moved, redo the request on the new Location.
sh
- `-s` - Read commands from standard input.
- A `--` signals the end of options.
- Any arguments after the `--` are treated as filenames and arguments.
- An argument of `-` is equivalent to `--`.
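This is why extra arguments can be passed through to the installer after `sh -s -`; a sketch (the `--disable traefik` flag is just an illustrative choice):
curl -sfL https://get.k3s.io | sh -s - server --disable traefik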
Checking the service with systemd. When running with systemd, K3s logs are written to `/var/log/syslog` and can be viewed with `journalctl -u k3s` (or `journalctl -u k3s-agent` on agents). Pod logs are at `/var/log/pods` and containerd logs at `/var/lib/rancher/k3s/agent/containerd/containerd.log`.
systemctl status k3s
journalctl -u k3s
journalctl -u k3s-agent
Deploying workloads#
cat << EOF > ~/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi
EOF
cat << EOF > ~/pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
    - name: volume-test
      image: nginx:stable-alpine
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: volv
          mountPath: /data
      ports:
        - containerPort: 80
  volumes:
    - name: volv
      persistentVolumeClaim:
        claimName: local-path-pvc
EOF
kubectl create -f ~/pvc.yaml
kubectl create -f ~/pod.yaml
kubectl get pv
kubectl get pvc
# The status should be `Bound` for each.
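To exercise the volume, write a file through the pod and read it back; a quick check, assuming the pod above is running:
kubectl exec volume-test -- sh -c 'echo local-path-test > /data/test && cat /data/test'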
Helm is also supported through a controller that allows using a `HelmChart` CRD.
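A minimal sketch of a `HelmChart` resource; the chart, repo, and values below are placeholders, not part of the original notes:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: grafana
  namespace: kube-system
spec:
  repo: https://grafana.github.io/helm-charts
  chart: grafana
  targetNamespace: default
  valuesContent: |-
    service:
      type: ClusterIP
Dropping a file like this into the manifests directory above deploys it like any other `AddOn`.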