How to deploy Spinnaker on Kubernetes: a quick and dirty guide
We've recently provided an even easier way to install Spinnaker on Kubernetes, but this article gives you a good look at how things work.
In general, we're going to take the following steps:
- Create a Kubernetes cluster. (We'll use a Google Kubernetes Engine cluster, but any cluster that meets the requirements should work.)
- Create the Kubernetes objects Spinnaker will need to run properly.
- Create a single pod that will be used to coordinate the deployment of Spinnaker itself.
- Configure the Spinnaker deployment.
- Deploy Spinnaker.
Create a Kubernetes cluster
You can deploy Spinnaker in a number of different environments, including on OpenStack and on your local machine, but for the sake of simplicity (and because a local deployment of Spinnaker is a bit of a hefty beast) we're going to do a distributed deployment on a Kubernetes cluster. In our case, we're going to use a Kubernetes cluster spun up on Google Kubernetes Engine, but the only requirement is that your cluster has:
- at least 2 vCPU available
- approximately 13GB of RAM available (the default of 7.5GB isn't quite enough)
- at least one schedulable (that is, untainted) node
- functional networking (so you can reach the outside world from within your pod)
To create a suitable cluster on Google Kubernetes Engine:
- Create an account on http://cloud.google.com and make sure you have billing enabled.
- Configure the Google Cloud SDK on the machine you'll be working with to control your cluster.
- Go to the Console and scroll the left panel down to Compute->Kubernetes Engine->Kubernetes Clusters.
- Click Create Cluster.
- Choose an appropriate name. (You can keep the default.)
- Under Machine Type, click Customize.
- Allocate at least 2 vCPU and 10GB of RAM.
- Change the cluster size to 1.
- Keep the rest of the defaults and click Create.
- After a minute or two, you'll see your new cluster ready to go.
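If you'd rather script the cluster creation than click through the console, the steps above map to a single gcloud command. This is a sketch, not part of the original walkthrough: the cluster name and zone are assumptions you should adjust, and custom-2-13312 is GCE's custom machine type syntax for 2 vCPUs and 13312 MB (13 GB) of memory.

```shell
# Hypothetical CLI equivalent of the console steps: one node, 2 vCPU, 13 GB RAM.
gcloud container clusters create spinnaker-cluster \
    --zone us-central1-a \
    --num-nodes 1 \
    --machine-type custom-2-13312
```

As with the console route, give it a minute or two to come up before moving on.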
Create the Kubernetes objects Spinnaker needs
In order for your deployment to go smoothly, it will help to prepare the way by creating some objects ahead of time. These include namespaces, accounts, and services that you'll use later to access the Spinnaker UI.
- Start by configuring kubectl to access your cluster. How you do this will depend on your setup; to configure kubectl for a GKE cluster, click Connect on the Kubernetes clusters page, then click the Copy icon to copy the command to your clipboard.
- Paste the command into a command line window:
gcloud container clusters get-credentials cluster-2 --zone us-central1-a --project nick-chase
Fetching cluster endpoint and auth data.
kubeconfig entry generated for cluster-2.
- Next we're going to create the accounts that Halyard, Spinnaker's deployment tool, will use. First create a text file called spinacct.yaml and add the following to it:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinnaker-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spinnaker-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- namespace: default
  kind: ServiceAccount
  name: spinnaker-service-account
This file creates an account called spinnaker-service-account, then assigns it the cluster-admin role. You will, of course, want to tailor this approach to your own security situation.
Save and close the file.
- Create the account by running the script with kubectl:
kubectl create -f spinacct.yaml
serviceaccount "spinnaker-service-account" created
clusterrolebinding "spinnaker-role-binding" created
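If cluster-admin is more power than you want to hand Spinnaker, one way to tailor this to a tighter security posture is to bind a narrower role in a single namespace instead of a cluster-wide one. This is a sketch, not tested in this walkthrough; the binding name is hypothetical, and the built-in admin ClusterRole grants admin rights only within the bound namespace:

```yaml
# Hypothetical narrower grant: admin rights in the default namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spinnaker-ns-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: spinnaker-service-account
  namespace: default
```

Note that a distributed Spinnaker install does create objects in more than one namespace, so expect to add bindings (or stick with cluster-admin) accordingly.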
- We can also create accounts from the command line. For example, use these commands to create the account we'll need later for Helm:
kubectl -n kube-system create sa tiller
serviceaccount "tiller" created
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding "tiller" created
- In order to access Spinnaker, you have two choices: you can either use SSH tunnelling, or you can expose your installation to the outside world. BE VERY CAREFUL IF YOU'RE GOING TO DO THIS, as Spinnaker doesn't have any authentication attached to it; anybody who has the URL can do whatever your Spinnaker user can do, and remember, we made that user a cluster-admin. For the sake of simplicity, and because this is a "quick and dirty" guide, we're going to go ahead and create two services: one for the front end of the UI, and one for the scripting that takes place behind the scenes. First, create the spinnaker namespace:
kubectl create namespace spinnaker
namespace "spinnaker" created
- Now you can go ahead and create the services. Create a new text file called spinsvcs.yaml and add the following to it:
apiVersion: v1
kind: Service
metadata:
  namespace: spinnaker
  labels:
    app: spin
    stack: gate
  name: spin-gate-np
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 8084
    protocol: TCP
  selector:
    load-balancer-spin-gate: "true"
---
apiVersion: v1
kind: Service
metadata:
  namespace: spinnaker
  labels:
    app: spin
    stack: deck
  name: spin-deck-np
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 9000
    protocol: TCP
  selector:
    load-balancer-spin-deck: "true"
Here we're creating two load balancers, one on port 9000 and one on port 8084; if your cluster doesn't support load balancers, you will need to adjust accordingly or just use SSH tunnelling.
- Create the services:
kubectl create -f spinsvcs.yaml
service "spin-gate-np" created
service "spin-deck-np" created
Prepare to configure the Spinnaker deployment
Spinnaker is configured and deployed through a configuration management tool called Halyard. Fortunately, Halyard is easy to get, as it's available as a Docker image.
- Create a deployment to host Halyard:
kubectl create deployment hal --image gcr.io/spinnaker-marketplace/halyard:1.5.0
deployment "hal" created
- It will take a minute or two for Kubernetes to download the image and instantiate the pod; in the meantime, you can edit the hal deployment to use the new spinnaker account. First execute the edit command:
kubectl edit deploy hal
- Depending on the operating system of your kubectl client, you'll either see the configuration in the command window, or a text editor will pop up. Either way, you want to add the serviceAccountName to the spec just above the containers:
...
spec:
  serviceAccountName: spinnaker-service-account
  containers:
  - image: gcr.io/spinnaker-marketplace/halyard:stable
    imagePullPolicy: IfNotPresent
    name: halyard
    resources: {}
...
- Save and close the file; Kubernetes will automatically edit the deployment and start a new pod with the new credentials.
deployment "hal" edited
- Get the name of the pod by executing:
kubectl get pods
NAME                   READY     STATUS              RESTARTS   AGE
hal-65fdf47fb7-tq4r8   0/1       ContainerCreating   0          23s
Notice that the container isn't actually running yet; wait until it is before you move on.
kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
hal-65fdf47fb7-tq4r8   1/1       Running   0          4m
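Rather than polling kubectl get pods by hand, you can ask Kubernetes to block until the deployment has finished rolling out. This sketch assumes the deployment is named hal, as above, and needs the live cluster to run:

```shell
# Blocks until all replicas of the hal deployment are available.
kubectl rollout status deployment hal
```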
- Connect to bash within the container:
kubectl exec -it <POD-NAME> bash
So in my case, it would be
kubectl exec -it hal-65fdf47fb7-tq4r8 bash
This will put you into the command line of the container. Change to the spinnaker user's home directory:
spinnaker@hal-65fdf47fb7-tq4r8:/workdir# cd
spinnaker@hal-65fdf47fb7-tq4r8:~#
- We'll need to interact with Kubernetes, but fortunately kubectl is already installed; we just have to configure it:
kubectl config set-cluster default --server=https://kubernetes.default --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl config set-context default --cluster=default
token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl config set-credentials user --token=$token
kubectl config set-context default --user=user
kubectl config use-context default
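A quick sanity check that this in-container kubectl configuration works, and that the service account really did pick up cluster-admin rights. Both commands need the live cluster, so treat this as a sketch rather than part of the required steps:

```shell
# Should list the cluster's nodes if the token and CA are wired up correctly.
kubectl get nodes

# Should print "yes", since we bound the service account to cluster-admin.
kubectl auth can-i create deployments --all-namespaces
```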
- Another tool we're going to need is Helm; fortunately that's also exceedingly straightforward to install:
spinnaker@hal-65fdf47fb7-tq4r8:~# curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6689  100  6689    0     0  58819      0 --:--:-- --:--:-- --:--:-- 59194
- The script needs some quick updates to run without root or sudo access:
sed -i 's/\/usr\/local\/bin/\/home\/spinnaker/g' get_helm.sh
sed -i 's/sudo //g' get_helm.sh
export PATH=/home/spinnaker:$PATH
- Now go ahead and run the script:
spinnaker@hal-65fdf47fb7-tq4r8:~# chmod 700 get_helm.sh
spinnaker@hal-65fdf47fb7-tq4r8:~# ./get_helm.sh
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.8.2-linux-amd64.tar.gz
Preparing to install into /usr/local/bin
helm installed into /usr/local/bin/helm
Run 'helm init' to configure helm.
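The two sed edits above simply retarget the install directory and strip sudo from the script. You can see their effect on a throwaway file containing the same patterns; the sample file and its contents here are illustrative, not part of the real install script:

```shell
# Create a sample file containing the patterns the guide's seds rewrite.
cat > /tmp/sample.sh <<'EOF'
HELM_INSTALL_DIR="/usr/local/bin"
sudo cp helm "$HELM_INSTALL_DIR"
EOF

# Same substitutions as in the guide: retarget /usr/local/bin, drop sudo.
sed -i 's/\/usr\/local\/bin/\/home\/spinnaker/g' /tmp/sample.sh
sed -i 's/sudo //g' /tmp/sample.sh

# Shows the rewritten file:
#   HELM_INSTALL_DIR="/home/spinnaker"
#   cp helm "$HELM_INSTALL_DIR"
cat /tmp/sample.sh
```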
- Next we'll have to run it against the actual cluster. We want to make sure we use the tiller account we created earlier, and that we upgrade to the latest version:
helm init --service-account tiller --upgrade
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy. For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
Configure the Spinnaker deployment
Deploying Spinnaker involves defining the various choices you're going to make, such as the Docker repos you want to access or the persistent storage you want to use, then telling Halyard to go ahead and do the deployment. In our case, we're going to define the following choices:
- Distributed installation on Kubernetes
- Basic Docker repos
- Minio (an AWS S3-compatible project) for storage
- Access to Kubernetes
- Version 1.8.1 of Spinnaker itself
- UI accessible from outside the cluster
- We'll start by setting up the Docker registry. In this example, we're using Docker Hub; you can find instructions on using other registries here. In addition, we're specifying just one public repo, library/nginx. From inside the halyard container, execute the following commands:
ADDRESS=index.docker.io
REPOSITORIES=library/nginx
hal config provider docker-registry enable
hal config provider docker-registry account add my-docker-registry \
    --address $ADDRESS \
    --repositories $REPOSITORIES
As you can see, we're enabling the docker-registry provider, then configuring it using the environment variables we set:
+ Get current deployment
  Success
+ Add the my-docker-registry account
  Success
+ Successfully added account my-docker-registry for provider dockerRegistry.
- Now we need to set up storage. The first thing that we need to do is set up Minio, the storage provider. We'll do that by first pointing at the Mirantis Helm chart repo, where we have a custom Minio chart:
helm repo add mirantisworkloads https://mirantisworkloads.storage.googleapis.com
"mirantisworkloads" has been added to your repositories
- Next you need to actually install Minio:
helm install mirantisworkloads/minio
NAME:   eating-tiger
LAST DEPLOYED: Sun Mar 25 07:16:47 2018
NAMESPACE: default
STATUS: DEPLOYED
Make note of the internal URL; we're going to need it in a moment.
RESOURCES:
==> v1beta1/StatefulSet
NAME                DESIRED  CURRENT  AGE
minio-eating-tiger  4        1        0s

==> v1/Pod(related)
NAME                  READY  STATUS             RESTARTS  AGE
minio-eating-tiger-0  0/1    ContainerCreating  0         0s

==> v1/Secret
NAME                TYPE    DATA  AGE
minio-eating-tiger  Opaque  2     0s

==> v1/ConfigMap
NAME                DATA  AGE
minio-eating-tiger  1     0s

==> v1/Service
NAME                    TYPE       CLUSTER-IP   EXTERNAL-IP  PORT(S)         AGE
minio-svc-eating-tiger  ClusterIP  None         <none>       9000/TCP        0s
minio-eating-tiger      NodePort   10.7.253.69  <none>       9000:31235/TCP  0s
NOTES:
Minio chart has been deployed.

Internal URL:
  minio: minio-eating-tiger:9000

External URL:
Get the Minio URL by running these commands:
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services minio-eating-tiger)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
- Set the endpoint by appending the namespace (default, in this case) to the service name in the internal URL you saved a moment ago. For example, my internal URL was:
minio: minio-eating-tiger:9000
So I'd set my endpoint as follows:
ENDPOINT=http://minio-eating-tiger.default:9000
- Set the access key and password, then configure Halyard with your storage choices:
MINIO_ACCESS_KEY=miniokey
MINIO_SECRET_KEY=miniosecret
echo $MINIO_SECRET_KEY | hal config storage s3 edit --endpoint $ENDPOINT \
    --access-key-id $MINIO_ACCESS_KEY \
    --secret-access-key
hal config storage edit --type s3
- Now we're ready to set it to use Kubernetes:
hal config provider kubernetes enable
hal config provider kubernetes account add my-k8s-account --docker-registries my-docker-registry
hal config deploy edit --type distributed --account-name my-k8s-account
- The last standard parameter we need to define is the version:
hal config version edit --version 1.8.1
+ Get current deployment
  Success
+ Edit Spinnaker version
  Success
+ Spinnaker has been configured to update/install version "1.8.1".
  Deploy this version of Spinnaker with `hal deploy apply`.
- At this point we can go ahead and deploy, but if we do, we'll have to use SSH tunnelling. Instead, let's configure Spinnaker to use those services we created way back at the beginning. First, we'll need to find out what IP addresses they've been assigned:
kubectl get svc -n spinnaker
NAME           CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE
spin-deck-np   10.7.254.29    35.184.29.246    9000:30296/TCP   35m
spin-gate-np   10.7.244.251   35.193.195.231   8084:30747/TCP   35m
- We want to set the UI to the EXTERNAL-IP for port 9000, and the API to the EXTERNAL-IP for port 8084, so in my case it would be:
hal config security ui edit --override-base-url http://35.184.29.246:9000
hal config security api edit --override-base-url http://35.193.195.231:8084
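Rather than reading the IPs off the table and pasting them in by hand, you can pull them with kubectl's jsonpath output. This sketch assumes the services received actual load-balancer IPs (on some providers you get a hostname instead, under .ingress[0].hostname):

```shell
# Extract each service's external load-balancer IP from its status.
UI_IP=$(kubectl -n spinnaker get svc spin-deck-np \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
API_IP=$(kubectl -n spinnaker get svc spin-gate-np \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Feed the discovered addresses to Halyard.
hal config security ui edit --override-base-url http://$UI_IP:9000
hal config security api edit --override-base-url http://$API_IP:8084
```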
Deploy Spinnaker
Now that we've done all of our configuration, deployment is paradoxically easy:
hal deploy apply
Once you execute this command, Halyard will begin cranking away for quite some time. You can watch the console to see how it's getting along, but you can also check in on the pods themselves by opening a second console window and looking at the pods in the spinnaker namespace:
kubectl get pods -n spinnaker
This will give you a running look at what's happening. For example:
kubectl get pods -n spinnaker
NAME                                    READY     STATUS    RESTARTS   AGE
spin-clouddriver-bootstrap-v000-pdgqr   1/1       Running   0          1m
spin-orca-bootstrap-v000-xkhhh          0/1       Running   0          36s
spin-redis-bootstrap-v000-798wm         1/1       Running   0          2m
As you can see, the pods come up as Halyard gets to them. The entire process can take half an hour or more, but eventually, you will see that all pods are running and ready.
kubectl get pods -n spinnaker
NAME                                    READY     STATUS    RESTARTS   AGE
spin-clouddriver-bootstrap-v000-pdgqr   1/1       Running   0          2m
spin-orca-bootstrap-v000-xkhhh          1/1       Running   0          49s
spin-redis-bootstrap-v000-798wm         1/1       Running   0          2m
spin-redis-v000-q9wzj                   1/1       Running   0          7s

kubectl get pods -n spinnaker
NAME                                    READY     STATUS    RESTARTS   AGE
spin-clouddriver-bootstrap-v000-pdgqr   1/1       Running   0          2m
spin-orca-bootstrap-v000-xkhhh          1/1       Running   0          54s
spin-redis-bootstrap-v000-798wm         1/1       Running   0          2m
spin-redis-v000-q9wzj                   1/1       Running   0          12s

kubectl get pods -n spinnaker
NAME                                    READY     STATUS              RESTARTS   AGE
spin-clouddriver-bootstrap-v000-pdgqr   1/1       Running             0          2m
spin-clouddriver-v000-jswbg             0/1       ContainerCreating   0          3s
spin-deck-v000-nw629                    0/1       ContainerCreating   0          5s
spin-echo-v000-m5drt                    0/1       ContainerCreating   0          4s
spin-front50-v000-qcpfh                 0/1       ContainerCreating   0          3s
spin-gate-v000-8jk8d                    0/1       ContainerCreating   0          4s
spin-igor-v000-xbfvh                    0/1       ContainerCreating   0          4s
spin-orca-bootstrap-v000-xkhhh          1/1       Running             0          1m
spin-orca-v000-9452p                    0/1       ContainerCreating   0          4s
spin-redis-bootstrap-v000-798wm         1/1       Running             0          2m
spin-redis-v000-q9wzj                   1/1       Running             0          18s
spin-rosco-v000-zd6wj                   0/1       Pending             0          2s
NAME                                    READY     STATUS    RESTARTS   AGE
spin-clouddriver-bootstrap-v000-pdgqr   1/1       Running   0          8m
spin-clouddriver-v000-jswbg             1/1       Running   0          6m
spin-deck-v000-nw629                    1/1       Running   0          6m
spin-echo-v000-m5drt                    1/1       Running   0          6m
spin-front50-v000-qcpfh                 1/1       Running   1          6m
spin-gate-v000-8jk8d                    1/1       Running   0          6m
spin-igor-v000-xbfvh                    1/1       Running   0          6m
spin-orca-bootstrap-v000-xkhhh          1/1       Running   0          7m
spin-orca-v000-9452p                    1/1       Running   0          6m
spin-redis-bootstrap-v000-798wm         1/1       Running   0          8m
spin-redis-v000-q9wzj                   1/1       Running   0          6m
spin-rosco-v000-zd6wj                   1/1       Running   0          6m
When that happens, point your browser to the UI URL you configured in the last section; it's the address for port 9000. For example, in my case it is:
http://35.184.29.246:9000
You should see the Spinnaker "Recently Viewed" page, which will be blank because you haven't done anything yet:
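If you opted for SSH tunnelling instead of exposed services, Halyard can (we believe; check hal's own help output to confirm) set the tunnels up for you from the machine where it runs:

```shell
# Port-forwards the Spinnaker UI (9000) and API (8084) to localhost.
hal deploy connect
```

With the tunnels up, the UI would be at http://localhost:9000 instead of an external IP.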
To make sure everything's working, choose Actions->Create Application:
Enter your name and email address and click Create.
You should find yourself on the Clusters page for your new app:
So that's it! Next time, we'll look at actually creating a new pipeline in Spinnaker.
(Thanks to Andrey Pavlov for walking me through the mysteries of how to make this work!)
Want to learn more?
Check out our Spinnaker Fundamentals course, from code check-in to production in one day.