Kubernetes Cheat Sheet
Often you know what you want to do, but you just can't remember the Kubernetes terminology, vocabulary, or syntax for how to do it. This Kubernetes cheat sheet is designed to help solve that problem.
Kubernetes orchestrates containers to provide a robust, cloud native environment. The architecture looks something like this:
Kubernetes Terms
Kubernetes terminology with which you should be familiar:
Cluster - Group of physical or virtual servers on which Kubernetes is installed
Node (Master) - Physical or virtual server that controls the Kubernetes cluster
Node (Worker) - Physical or virtual servers where workloads run in a given container technology
Pods - Group of containers and volumes which share the same network namespace
Labels - User-defined key:value pairs associated with Pods
Master - Control plane components that provide the access point for admins to manage cluster workloads
Service - An abstraction which serves as a proxy for a group of Pods performing a “service”
List of Kubernetes objects
Kubernetes enables you to control and orchestrate various types of objects, either by their full name or their "shortname". These objects include:
Workloads
Container
CronJob / cronjobs / cj
DaemonSet / daemonsets / ds
Deployment / deployments / deploy
Job / jobs
Pod / pods / po
ReplicaSet / replicasets / rs
ReplicationController / replicationcontrollers / rc
StatefulSet / statefulsets / sts
Services
Endpoints / endpoints / ep
EndpointSlice / endpointslices
Ingress / ingresses / ing
IngressClass / ingressclasses
Service / services / svc
Config & Storage
ConfigMap / configmaps / cm
CSIDriver / csidrivers
CSINode / csinodes
Secret / secrets
PersistentVolumeClaim / persistentvolumeclaims / pvc
StorageClass / storageclasses / sc
CSIStorageCapacity
Volume
VolumeAttachment / volumeattachments
Clusters
APIService / apiservices
Binding / bindings
CertificateSigningRequest / certificatesigningrequests / csr
ClusterRole / clusterroles
ClusterRoleBinding / clusterrolebindings
ComponentStatus / componentstatuses / cs
FlowSchema / flowschemas
Lease / leases
LocalSubjectAccessReview / localsubjectaccessreviews
Namespace / namespaces / ns
NetworkPolicy / networkpolicies / netpol
Node / nodes / no
PersistentVolume / persistentvolumes / pv
PriorityLevelConfiguration / prioritylevelconfigurations
ResourceQuota / resourcequotas / quota
Role / roles
RoleBinding / rolebindings
RuntimeClass / runtimeclasses
SelfSubjectAccessReview / selfsubjectaccessreviews
SelfSubjectRulesReview / selfsubjectrulesreviews
ServiceAccount / serviceaccounts / sa
StorageVersion
SubjectAccessReview / subjectaccessreviews
TokenRequest
TokenReview / tokenreviews
Metadata
ControllerRevision / controllerrevisions
CustomResourceDefinition / customresourcedefinitions / crd,crds
Event / events / ev
LimitRange / limitranges / limits
HorizontalPodAutoscaler / horizontalpodautoscalers / hpa
MutatingWebhookConfiguration / mutatingwebhookconfigurations
ValidatingWebhookConfiguration / validatingwebhookconfigurations
PodTemplate / podtemplates
PodDisruptionBudget / poddisruptionbudgets / pdb
PriorityClass / priorityclasses / pc
PodSecurityPolicy / podsecuritypolicies / psp
In general, the kubectl commands that follow work with any of these objects. So rather than:
kubectl get pods
You can use:
kubectl get deployments
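The shortnames work anywhere the full name does, so these pairs are equivalent:
kubectl get po      # same as: kubectl get pods
kubectl get deploy  # same as: kubectl get deployments
kubectl get svc     # same as: kubectl get services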
Now let's look at getting started.
Get Started
Setting up a Kubernetes Environment is straightforward.
Setup Kubernetes with k0s
There are many ways to create a Kubernetes cluster; in this case we are assuming you are using k0s with a single-node configuration. The minimum requirements for this install are:
1 vCPU (2 vCPU recommended)
1 GB of RAM (2 GB recommended)
1.7 GB of free disk space
To perform the installation, run these commands on a Linux host:
curl -sSLf https://get.k0s.sh | sudo sh
sudo k0s install controller --single
sudo systemctl start k0scontroller
sudo systemctl enable k0scontroller
mkdir -p ~/Documents
sudo cp /var/lib/k0s/pki/admin.conf ~/Documents/kubeconfig.cfg
sudo chown $USER ~/Documents/kubeconfig.cfg
export KUBECONFIG=~/Documents/kubeconfig.cfg
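At this point you can confirm the cluster is up; it may take a minute after the service starts before the node reports Ready:
sudo k0s status
sudo k0s kubectl get nodes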
For more information on installing k0s, including making your clusters available from other machines, see this guide to getting started with k0s.
Install kubectl
You can get kubectl, the standard Kubernetes client, in multiple ways.
Use the k0s kubectl instance
The k0s install comes with its own copy of kubectl, so no separate installation is required. You simply add "k0s" to the front of each command, as in:
k0s kubectl get nodes
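If you'd rather not type the prefix every time, a simple shell alias works (this is just a convenience, not part of k0s itself):
alias kubectl='k0s kubectl'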
Install kubectl manually
Installing kubectl on Linux is a simple matter of downloading the binary and adding it to your path:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
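You can verify the install with:
kubectl version --client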
For more information on installing kubectl or for instructions for other operating systems, see the kubectl documentation.
Install Lens Kubernetes IDE
The easiest way to get access to kubectl is to install Lens, which includes kubectl. It also, however, provides alternative ways to accomplish most of these tasks without actually using kubectl.
You can download and install Lens here, then use the kubectl tool from the terminal.
Accessing the terminal via the Lens IDE:
Managing Kubernetes resources
Now that you have your software, we can look at actual tasks.
Start a single instance of a pod
kubectl run mywebserver --image=nginx
Create a resource from the command line:
kubectl create deployment myotherwebserver --image=nginx
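Either way, you can confirm that the new workload is running; kubectl create deployment labels its pods with app=<name>, so for the example above:
kubectl get deployment myotherwebserver
kubectl get pods -l app=myotherwebserver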
Accessing the terminal via the Lens IDE and creating a resource via terminal:
Create resource(s) such as pods, services or daemonsets from a YAML definition file:
kubectl create -f ./my-manifest.yaml
Note that the file itself has a format such as:
---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88
For more information on creating YAML documents, see this Introduction to YAML.
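Once created, you can check the pod and confirm that both containers defined in the manifest came up:
kubectl get pod rss-site
kubectl get pod rss-site -o jsonpath='{.spec.containers[*].name}'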
Create or apply changes to a resource
kubectl apply -f ./my-manifest.yaml
Leveraging Lens IDE, clicking into a pod and making configuration changes:
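Before applying, you can preview what would change on the live cluster (kubectl diff exits with a non-zero code when there are differences):
kubectl diff -f ./my-manifest.yaml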
Scale a resource
kubectl scale --replicas=3 deployment.apps/myotherwebserver
or
kubectl scale --replicas=3 -f my-manifest.yaml
Viewing a deployment and leveraging the Lens IDE UI to scale that deployment:
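After scaling, you can confirm the new replica count and wait for the rollout to finish:
kubectl get deployment myotherwebserver
kubectl rollout status deployment/myotherwebserver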
Connect to a running container
kubectl attach mywebserver -c mynginx -i
Accessing Lens IDE terminal to connect to a running container:
Run a command in a single container pod
kubectl exec mywebserver -- /home/user/myscript.sh
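You can also open an interactive shell inside the pod, assuming the image ships a shell at /bin/sh (the official nginx image does):
kubectl exec -it mywebserver -- /bin/sh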
Delete a resource
kubectl delete pod/mywebserver
or
kubectl delete -f ./my-manifest.yaml
Accessing a pod via Lens IDE and deleting the pod via the User Interface:
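You can also delete by label rather than by name; for example, the rss-site pod from the earlier manifest carries the label app: web:
kubectl delete pods -l app=web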
Viewing resources
Once you have all of these objects, you'll need a way to see what's going on.
View the cluster and client configuration
kubectl config view
Accessing terminal via Lens IDE to view the config:
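A couple of related commands that are handy when you work with more than one cluster:
kubectl config current-context
kubectl config get-contexts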
List all services in the default namespace
kubectl get services
Filtering resources through namespaces via Lens IDE
List all pods in a specific namespace
kubectl get pods -n my-app
Filtering resources through namespaces via Lens IDE
List all pods in all namespaces in wide format
kubectl get pods -o wide --all-namespaces
Opening Lens IDE terminal to list all resources in all namespaces in wide format:
List all pods in JSON (or YAML) format
kubectl get pods -o json
Opening terminal via Lens IDE and viewing all resources in JSON format:
Describe resource details
kubectl describe pods
kubectl describe pod mywebserver
Opening terminal via Lens IDE and running a command to describe pods within your cluster
Get documentation for a resource
kubectl explain pods
kubectl explain pod.spec.containers
Opening terminal via Lens IDE and reviewing documentation for a specific resource, ex. Pods:
List resources sorted by name
kubectl get services --sort-by=.metadata.name
Opening terminal via Lens IDE and listing resources by name
List resources sorted by restart count
kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'
Opening terminal via Lens IDE to view resources sorted by “Restart count”:
Rolling update pods for a resource
The old kubectl rolling-update command has been removed from current kubectl releases; rolling updates are now driven through the rollout subcommands. After applying an updated manifest, for example:
kubectl apply -f my-manifest.yaml
kubectl rollout status deployment/echoserver
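Related rollout commands let you trigger a fresh rollout, inspect its history, and roll back if an update misbehaves (using the same example deployment name):
kubectl rollout restart deployment/echoserver
kubectl rollout history deployment/echoserver
kubectl rollout undo deployment/echoserver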
Networking
Kubernetes networking is an entire topic on its own, but here are a few commands that come in handy when building and accessing applications.
Types of services
First, it's important to understand (and remember) the different types of services, because each has a different set of behaviors.
ClusterIP is the default ServiceType. ClusterIP services have a cluster-internal IP address, so they can only be reached by other cluster components.
NodePort enables you to create a service that's available from outside the cluster by exposing the service on the same port for every node. For example, the same service might be available on host1.example.com:32768, host2.example.com:32768, and host3.example.com:32768.
LoadBalancer requires coordination with your cloud provider's load balancer, which automatically routes requests to the service. For this reason, not all distributions of Kubernetes will support LoadBalancer services.
ExternalName maps the service to an external DNS name; the cluster DNS answers with a CNAME record for that name rather than proxying traffic to Pods. Examples of creating the first three types are sketched below.
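As a rough sketch, here is how you might expose the myotherwebserver deployment created earlier as each of the first three types; the service names are only illustrative, and LoadBalancer behavior depends on your cluster and cloud provider:
kubectl expose deployment myotherwebserver --port=80 --name=web-clusterip
kubectl expose deployment myotherwebserver --port=80 --type=NodePort --name=web-nodeport
kubectl expose deployment myotherwebserver --port=80 --type=LoadBalancer --name=web-lb
kubectl get services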
Port vs. targetPort
One aspect of Kubernetes networking that frequently gets confusing is the notion of port versus targetPort. Here's the difference:
port: the port on which the Service itself receives the request
targetPort: the port on the container to which the Service forwards the request
Think of the request as entering through the Service's port and landing on the container's targetPort.
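For example, here is a minimal Service sketch that accepts traffic on port 8080 and forwards it to targetPort 80, assuming the rss-site Pod from the earlier manifest (label app: web, nginx listening on container port 80); the service name is hypothetical:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: rss-site-svc      # hypothetical name for this example
spec:
  selector:
    app: web              # matches the label on the rss-site Pod
  ports:
    - port: 8080          # port the Service listens on
      targetPort: 80      # port on the container receiving the traffic
EOF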
Here are some commands that reference ports (and targetPorts):
Forward local port 5000 to port 6000 on a pod, service, or deployment
kubectl port-forward mywebserver 5000:6000
kubectl port-forward svc/my-service 5000:6000
kubectl port-forward deploy/my-deployment 5000:6000
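kubectl port-forward keeps running in the foreground, so while it is active you can test the tunnel from another terminal (assuming something is actually listening on the remote port):
curl http://localhost:5000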
Create a service that directs requests on port 80 to container port 8000
kubectl expose deployment nginx --port=80 --target-port=8000 --type=LoadBalancer
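To see the port mapping and the external address assigned by the cloud load balancer (the EXTERNAL-IP column may show <pending> until the provider provisions it):
kubectl get service nginx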
Logs
Finally, you need to know what's going on inside your application.
Dump resource logs to stdout
kubectl logs myotherwebserver-8458cdb575-s6cp4
Stream logs for a specific container within a pod
kubectl logs -f mywebserver -c mynginx
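A few other commonly used variations, all standard kubectl logs flags:
kubectl logs --tail=50 mywebserver       # only the last 50 lines
kubectl logs --since=1h mywebserver      # only the last hour
kubectl logs deploy/myotherwebserver     # logs from a pod belonging to the deployment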
Leveraging Lens UI to view pod logs without writing any command line: