How to set up k0s Kubernetes: A quick and dirty guide
The Kubernetes architecture of k0s consists of a single binary that includes everything you need to run Kubernetes on any system with a Linux kernel. Putting it to use is straightforward:
- Download the k0s binary
- Create a server to instantiate the Kubernetes control plane
- Create a Kubernetes worker
- Access the cluster
This Tech Talk provides a more in-depth recipe for getting started with k0s quickly and easily.
Create a single node Kubernetes cluster with k0s
The first thing we need to do is create a server that will act as the k0s controller. The host itself doesn't have to be huge; for this guide I used an AWS t2.medium instance (2 CPUs, 4GB RAM) running Amazon Linux 2. Just make sure that port 6443 is open so that you can contact the cluster later.

Now you can install k0s with a simple one-line command:

curl -sSLf https://get.k0s.sh | sudo sh

Once the script downloads, all you need to do is install the controller service:

sudo k0s install controller --enable-worker

Wait a few moments and then start the service:

sudo k0s start

You can make sure it's running by checking its status:

sudo k0s status

Now let's access the new cluster.
Access the k0s cluster
The k0s distribution includes kubectl, so actually accessing the cluster requires no extra steps:

sudo k0s kubectl get nodes
NAME               STATUS   ROLES    AGE   VERSION
ip-172-31-29-164   Ready    <none>   97s   v1.20.4-k0s1

However, if you want to access the cluster from somewhere else (or if you want to use an independent install of kubectl), you're going to need the KUBECONFIG file. When you create the server, k0s creates a KUBECONFIG file for you, so copy it to your working directory and point to it:

sudo cp /var/lib/k0s/pki/admin.conf ~/admin.conf
export KUBECONFIG=~/admin.conf

Now you can access the cluster itself:
sudo k0s kubectl get namespaces
NAME              STATUS   AGE
default           Active   5m32s
kube-node-lease   Active   5m34s
kube-public       Active   5m34s
kube-system       Active   5m34s

Notice that if you look for the nodes, there is no master node. Remember, k0s implements the control plane as naked processes:

sudo k0s kubectl get nodes
NAME             STATUS   ROLES    AGE    VERSION
ip-172-31-8-33   Ready    <none>   5m1s   v1.19.3

But what happens if we try to access the cluster from another server, using a tool such as Lens?
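Before pointing an external tool at the cluster, it's worth confirming that port 6443 is actually reachable from outside. A minimal sketch using bash's /dev/tcp (SERVER_IP is a placeholder for your instance's public address):

```shell
# Quick reachability check for the Kubernetes API port.
# SERVER_IP is a placeholder; substitute your instance's public address.
SERVER_IP=127.0.0.1

if timeout 3 bash -c "exec 3<>/dev/tcp/$SERVER_IP/6443" 2>/dev/null; then
  echo "port 6443 open"
else
  echo "port 6443 closed or filtered"
fi
```

If this reports closed, revisit the instance's security group rules before troubleshooting anything on the k0s side.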
Accessing k0s from outside the cluster: Customizing the k0s Kubernetes cluster
Now let's look at accessing the cluster from an external server. We can easily get the KUBECONFIG file by using the key we used to create the server to copy it to our local machine with SCP:

scp -i <SERVER_KEY> ec2-user@<SERVER_IP>:~/admin.conf .

From there, we'll want to use the public IP address of the server rather than localhost, so open the admin.conf file and edit the server address. For example, in my case, the public IP of my server is 52.10.92.152:

apiVersion: v1
clusters:
- cluster:
    server: https://52.10.92.152:6443
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURBRENDQWVpZ0F3SUJBZ0lVRzhGakJZVVNZOFBrOWNjcTVhK3lFenNBNXAwd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0dERVdNQlFHQTFVRUF4TU5hM1ZpWlhKdVpYUmxjeTFqWVRBZUZ3MHlNREV4TWpNd016TXpNREJhR...

Point KUBECONFIG at the edited file:

export KUBECONFIG=admin.conf

Now if we were to test this connection, we'd see something odd:

kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"windows/amd64"}
Unable to connect to the server: x509: certificate is valid for 127.0.0.1, 172.31.8.33, 172.31.8.33, 172.31.8.33, 10.96.0.1, not 52.10.92.152

So we're making the connection, and Kubernetes is working, but the API server's certificate isn't valid for the public IP address. To solve this problem, we need to configure k0s to include the public IP. Note that we could have done this prior to our initial installation (usually that will make setup easier), but we can take care of the issue now as well.
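That error comes down to Subject Alternative Names (SANs): a TLS client only trusts a server certificate if the address it dialed appears in the certificate's SAN list. Here is a local sketch of that check, using a throwaway self-signed certificate with the same illustrative IPs rather than the real cluster:

```shell
# Create a throwaway self-signed cert whose SANs mimic the error above
# (IPs are illustrative; -addext requires OpenSSL 1.1.1 or later).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout san-demo.key -out san-demo.crt \
  -subj "/CN=kubernetes" \
  -addext "subjectAltName=IP:127.0.0.1,IP:172.31.8.33,IP:10.96.0.1"

# List the addresses the cert is valid for; 52.10.92.152 is absent,
# which is exactly why kubectl rejected the connection.
openssl x509 -in san-demo.crt -noout -ext subjectAltName
```

You can run the same `openssl x509` inspection against the real CA and server certificates under /var/lib/k0s/pki if you want to see what your cluster is presenting.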
To start, we can export the actual configuration file k0s will use:
sudo k0s default-config > k0s.yaml

We can then edit that file to add the public IP, and any other address at which we want to call the server:

apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
spec:
  api:
    address: 172.31.8.33
    sans:
    - 172.31.8.33
    - 172.31.8.33
    - 52.10.92.152
    extraArgs: {}
  controllerManager:
    extraArgs: {}
  scheduler:
    extraArgs: {}
  storage:
    type: etcd
    kine: null
    etcd:
      peerAddress: 172.31.8.33
  network:
    podCIDR: 10.244.0.0/16
    serviceCIDR: 10.96.0.0/12
    provider: calico
    calico:
      mode: vxlan
      vxlanPort: 4789
      vxlanVNI: 4096
...
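If you'd rather script the edit than open an editor, a sed one-liner can splice the extra SAN into the sans list. This is a sketch against a stand-in fragment (k0s-demo.yaml is illustrative; run the equivalent against your real k0s.yaml and double-check the result, since sed-based YAML edits are brittle):

```shell
# Stand-in for the relevant slice of k0s.yaml (illustrative values).
cat > k0s-demo.yaml <<'EOF'
spec:
  api:
    address: 172.31.8.33
    sans:
    - 172.31.8.33
EOF

# Splice the public IP in right after the sans: key, keeping indentation
# (GNU sed; the \n in the replacement inserts a new line).
sed -i 's/sans:/sans:\n    - 52.10.92.152/' k0s-demo.yaml
cat k0s-demo.yaml
```

Either way, the goal is the same: every address clients will use must end up under spec.api.sans.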
Next, stop the k0s service:

sudo k0s stop

Clean up the previous installation, which was made without the configuration file:

sudo k0s reset

Reinstall k0s with your new config file:

sudo k0s install controller --enable-worker -c k0s.yaml

Start the k0s service again:

sudo k0s start

If you need to modify your current configuration at some point in the future, you're free to change the config file even while k0s is running, but remember that you will need to restart k0s to apply the changes:

sudo k0s stop
sudo k0s start

From here, everything should Just Work. At this point, you can also access the Kubernetes cluster with Lens by importing the KUBECONFIG.
Add additional nodes to the Kubernetes cluster
Scaling the cluster is just a matter of adding additional worker nodes or controllers. To do that, you're going to need a token so the new server knows where to "phone home." To generate that token, go to the control plane:

k0s token create --role=worker

Obviously, in this case we're creating a new worker node. You'll wind up with a really long string of text such as:

H4sIAAAAAAAC/2yV0Y7iOhKG7/speIGZYycwpxtpLyZgBwIxY8dlJ74LcYZAnBBCGtKs9t1XzcxIu9K5K1f9+n7Lsup/ybujKvvr8dzOJzf8Urj361D21/nLl8nvev4ymUwm17K/lf18Ug1Dd53/9Rf+2/vq46+vX31//m069Z+iouyH489jkQ/ll/x9qM79cfj4YvMhn0+2CRq2CV4IsJE8BkuhIkjARBxREM8ZGhY1jhIQgSBsybXqDKJ+AlFgkFPiUYV5HRmlmNnRoN9pdiqkqja+o2XLApZ+v1skhIZuu8ddlK/MSdZUCLhvuKOBRZYIZRl3dMUlVQLoVMKsirHptKs2VnUZNOOplPSilQgMGD9eOSaImkrdMYvwN5l2TJCoEm1xLw/dnxn935mEqC2JuXClFgbNNK8pU0SkdknNXplZ1mAd0y6TqTBxrXzjWZoDDmJil+DT1cKxKDsxAqAWXFHFgaAEImIRfWoTRG+fbwIgNuJUj4k2Yo/GSwzFQ5Ea21ZV3DdqJ6000rV5YxIFh41Br57Ba79MFTXSabuk5zxUZ5nG9xyGkyDTMVHfe1YrY0IcMXe4JSSiuXIKlhHOkIEyZKBSg8BxnD/Mujyc77BSjx3N+iKluVTnj3gVxBqmvvGpA1dgvXRLvYp7mYohTxnX4YjzU7RV3ut9j88986b3Ag0CMGNlas+2ji6LpvA2XpUomX2opTE2HJZlSo86XE/F8TruqHvfEZpmzYzJZjzHYOKSBlJoK/K22pQy7uNavPNH5vPU1SDXnsDFJoHDNCe4YvUbk+HhpkI+TaRI9aprdaN2GV57WetcDEWfLzOUeW871bzds1MQ5pDdWWqrzUPFWw/PRBtFW4+J/HsHVkbpHhSTsJ7tidMljQabKmN0NLNt8MOc3FWmNMlQtEjUYcz8SNnQcBMKynyC42X0zrVlvKaB8DqR11GwqHHAiA1ipWqxspQf33wAFVjkFrzpAiBRK51ZQ40XXGdTARFwEAHA4SZhfReIjEoLYjBNeR2B1vG0COtNvhIQO3HM0niqaJerlE/L5hWXZNorQne8sX2hqz6HYmYfwecffIiaBhKx4NM/98+ocGvPtsGuOA5Ek1mjDt2Ce+NHhkRrH8zFyjUK22P2MXgQ2ladTMZTty5OgnKotCbDKFJz2hM1JqvgaFD30ErdsjS7m4fd7pYCWczWi5MZEvJm2GIIslZxtjSyeAhPhfHNYuILNDttUYUV5ahsA1FqGPWK+rIRIDxbs1asi1YEpol6CKuLaSgkTbbJfSvLpR2s300zn8LeZzf5cLdd6pgO6WVP7h97sKMljoJUs7zmD4nED+1oLGp6grDok6UxQNmHQviy02tPfe9kTsa7BJtlaTHdNxneK/deoA52cL1tvegae2+UUbvereBum8IT8HaL26CRtUmVDC5GsiYmHS1klTJZjDtpr4vm9RajyN/iIGLp4WFOxlmCRrMUUxsO25KwXqRUJ83IchJhaCqyRdW3QkcO2i4FhO7xyhL14A+r3yIZWpw0fLMPVZVj5f+QaPN7N1NZ8wNHKlHEhQmwQBF47uUtP//rZTJp86acT2p0fSnO7VCOw6+Y+FX/iok/mfFUfTber8/T+7505fBlfz4P16HPu/+nvfd92Q5f/pCezfrY2vlkcW5/Hg8vXV/+LPuyLcrrfPLv/7x8Up/mvyH/gH8aP68wnOuynU9qN15n+Ovl56OyaLj8ffC+9f3x9PLfAAAA////I+m0AwcAAA==

This may seem excessive, but it's actually just a KUBECONFIG that's been gzip-compressed and Base64-encoded. The benefit here is that you can put the worker node anywhere, as long as it can access the control plane over the network.
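You can see the encoding for yourself: the leading H4sI in the token is just the Base64 form of gzip's magic bytes. The round trip below uses a tiny stand-in kubeconfig rather than real cluster credentials:

```shell
# Build a stand-in kubeconfig (illustrative, not a real cluster's).
printf 'apiVersion: v1\nkind: Config\nclusters: []\n' > demo-kubeconfig.yaml

# Encode it the way k0s packages join tokens: gzip, then Base64.
token=$(gzip -c demo-kubeconfig.yaml | base64 -w0)
echo "$token"

# Decoding a token is the same pipeline in reverse.
echo "$token" | base64 -d | gunzip
```

The same `base64 -d | gunzip` pipeline will show you the contents of a real join token if you're ever curious what a worker is being handed.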
To create the worker, instantiate a new server (if necessary) and download k0s:
curl -sSLf https://get.k0s.sh | sudo sh

On your new worker host, create a text file to store the long join token you just generated. Then go ahead and install the worker with the join token, and start it:

sudo k0s install worker --token-file /path/to/token/file
sudo k0s start

Now if you were to go back to kubectl and check for nodes, you'd see the new node in your list, as in:

kubectl get nodes
NAME               STATUS   ROLES    AGE   VERSION
ip-172-31-14-157   Ready    <none>   81s   v1.19.3
ip-172-31-8-33     Ready    <none>   11h   v1.19.3

You can also increase the robustness of the cluster by creating additional controller nodes. For more details on control plane high availability, see the k0s documentation.