Spinnaker Canary Pipelines: How to set up Kayenta with Prometheus
Our goal is to make it possible for you to create a canary deployment using Prometheus for an existing Kubernetes application.
We're making a couple of assumptions:
- You already understand the basics of how to use Spinnaker, how to create pipelines, and so on.
- You've got a general idea of how canary deployments work and how they're configured. You can get more information and background here.
- You've already created (or have access to) a Kubernetes cluster big enough to handle both Spinnaker and Prometheus. In our example, we used a Google Kubernetes Engine cluster with 4 vCPU and 26 GB of RAM.
- You've already installed Helm and initialized it with a service account. Check out this article for information on installing Helm. The easiest way to do the initialization is:
kubectl -n kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
- You have an existing application you want to track. If you need something to work with, feel free to use this YAML file for a sample app (a short sketch for deploying it follows this list):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: demo-app
    release: canary
  name: demo-app-canary
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: demo-app
        release: canary
    spec:
      containers:
      - image: kshatrix/canary-demo
        name: demo-app
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 2
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/path: /
    prometheus.io/port: "8000"
    prometheus.io/scrape: "true"
  labels:
    app: demo-app
    release: canary
  name: demo-app-canary
spec:
  ports:
  - name: http
    port: 80
  - name: http-metrics
    port: 8000
  selector:
    app: demo-app
- You have Prometheus installed. If you don't, you can install it on your Kubernetes cluster using:
helm install -n prom stable/prometheus --namespace hal
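To get the sample app running, create a namespace for it and apply the manifest above; you can sanity-check the Prometheus install at the same time. This is a minimal sketch: the demo-app.yaml filename and the prod namespace are just the choices we use in this walkthrough, and the label selector assumes the chart's standard release label.

# Create the namespace we'll use for the demo workloads
kubectl create namespace prod

# Apply the sample app manifest shown above
kubectl apply -f demo-app.yaml -n prod

# Confirm the Prometheus pods from the Helm release are up
kubectl get pods -n hal -l release=prom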
Install and configure Spinnaker
The first step is to install Spinnaker. Spinnaker is installed and configured using Halyard, so we're going to install that using Helm. Earlier, when we installed Spinnaker using Helm, we did the Spinnaker deployment right away, but this time we want to tell Halyard to wait for instructions so we can do some additional configuration before the actual deployment:

helm repo add mirantisworkloads https://mirantisworkloads.storage.googleapis.com
helm install -n hal mirantisworkloads/halyard --namespace hal --set command="sleep infinity" --set daemon=true

You should see a series of pods in the hal namespace:
kubectl get pods -n hal
NAME                                                READY   STATUS    RESTARTS   AGE
hal-hal-apply-6967cbdb4d-s457x                      1/1     Running   0          4m
jenkins-hal-5549f48b6b-scq9k                        1/1     Running   0          4m
minio-hal-0                                         1/1     Running   0          5m
minio-hal-1                                         1/1     Running   0          5m
minio-hal-2                                         1/1     Running   0          5m
minio-hal-3                                         1/1     Running   0          5m
prom-prometheus-alertmanager-794b44784f-bchbh       2/2     Running   0          7m
prom-prometheus-kube-state-metrics-7b5b5f55-kfb28   1/1     Running   0          7m
prom-prometheus-node-exporter-dsq9d                 1/1     Running   0          7m
prom-prometheus-pushgateway-78d69775d8-hgk4l        1/1     Running   0          7m
prom-prometheus-server-689675c8f4-wbtzp             2/2     Running   0          7m

Now let's get into the Halyard pod to configure Kayenta. Start by getting a command line on it. (Obviously, substitute your own pod name here):
kubectl -n hal exec -it hal-hal-apply-6967cbdb4d-s457x bash

Now you can execute these commands inside the hal pod:
- Enable canary deployments in general so they're available in the UI:
hal config canary enable
- Configure canary to use Prometheus as a source of metrics:
hal config canary prometheus enable
hal config canary prometheus account add my-prometheus --base-url http://prom-prometheus-server:80
- Configure canary to use Minio as the storage backend. We'll use the Minio instance deployed during the halyard chart installation, the same instance that stores the Spinnaker configuration itself:
hal config canary aws enable
echo "miniosecret" | hal config canary aws account add my-minio --bucket spin-bucket --endpoint http://minio-hal:9000 --access-key-id miniokey --secret-access-key
hal config canary aws edit --s3-enabled=true
- Optionally set the defaults for created accounts:
hal config canary edit --default-metrics-store prometheus
hal config canary edit --default-metrics-account my-prometheus
hal config canary edit --default-storage-account my-minio
- Deploy the instance of Spinnaker:
hal deploy apply
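Once hal deploy apply finishes, Halyard rolls out the Spinnaker microservices. As a quick sanity check, and assuming the chart deploys everything into the same hal namespace as in our setup, you can watch for the new Spinnaker service pods (Deck, Gate, Kayenta, and so on) to reach Running:

# Watch the hal namespace until all of the Spinnaker pods are Running
kubectl get pods -n hal -w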
If you don't want to apply configuration through the hal CLI commands, you can adjust your halconfig with the canary section and pass it to the halyard chart directly:

canary:
  enabled: true
  serviceIntegrations:
  - name: prometheus
    enabled: true
    accounts:
    - name: my-prometheus
      endpoint:
        baseUrl: http://prom-prometheus-server:80
      supportedTypes:
      - METRICS_STORE
  - name: aws
    enabled: true
    accounts:
    - name: my-minio
      bucket: spin-bucket
      rootFolder: kayenta
      endpoint: http://minio-hal:9000
      accessKeyId: miniokey
      secretAccessKey: miniosecret
      supportedTypes:
      - CONFIGURATION_STORE
      - OBJECT_STORE
    s3Enabled: true
  reduxLoggerEnabled: true
  defaultMetricsAccount: my-prometheus
  defaultStorageAccount: my-minio
  defaultJudge: NetflixACAJudge-v1.0
  defaultMetricsStore: prometheus
  stagesEnabled: true
  templatesEnabled: true
  showAllConfigsEnabled: true
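For example, assuming you've saved that section into a values file (canary-values.yaml here is our placeholder name, and the exact key it should live under is defined by the chart's values.yaml, so check there first), you'd pass it to Helm at install time:

# Pass the canary configuration to the halyard chart as custom values
helm install -n hal mirantisworkloads/halyard --namespace hal -f canary-values.yaml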
You can get more details on how to use the halyard chart in How to Deploy Spinnaker on Kubernetes: A Quicker and Dirtier Guide.
Create the test workloads
We need an application to provide the metrics that we're going to analyze. You're typically going to deploy two versions (canary and baseline) during your canary pipeline execution, so you'll have something like two deployments in a "prod" namespace, such as:

kubectl get deployment -n prod
NAME                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
demo-app-baseline   1         1         1            1           31m
demo-app-canary     1         1         1            1           31m

Note that you're typically not going to deploy these manually; they're part of your pipeline. So you'll need to make sure that they (or the services backed by them) have the appropriate annotations that tell Prometheus to scrape them.
In the case of the sample app, these deployments are running two versions of a simple Flask application that exports just a few metrics in the Prometheus format.
We also have two services on top of them:
kubectl get svc -n prod
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
demo-app-baseline   ClusterIP   10.27.243.147   <none>        80/TCP,8000/TCP   32m
demo-app-canary     ClusterIP   10.27.253.11    <none>        80/TCP,8000/TCP   32m

Metrics are exported on port 8000, so the following annotations have been added so that the services are scraped by the Prometheus server:
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/path: /
    prometheus.io/port: "8000"
    prometheus.io/scrape: "true"
  ...
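If you want to confirm that Prometheus is actually scraping the services, one quick check is to query the built-in up metric in the Prometheus web UI. This assumes the stable/prometheus chart's default Kubernetes service discovery, which attaches the kubernetes_namespace label we rely on below:

# In the Prometheus expression browser; a value of 1 means the target is being scraped
up{kubernetes_namespace="prod"}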
Create the Spinnaker Application
Now let's go to the Spinnaker UI and create an application. (Look for the hal-deck-hal service, on port 9000.) Start by creating an application:

Once we've created the application, we need to enable "Canary" in the Features tab of the application config:
If you can see the new Canary options under Delivery in your Spinnaker UI, you're ready to create your first configuration. Click Canary Configs, then Add Configuration.
The process of creating a canary configuration is well described in the Spinnaker docs (https://www.spinnaker.io/guides/user/canary/config), so we will focus only on the Prometheus-related parts here.
Now we will take one of the exported metrics and add it to our canary configuration. The sample application is designed to demonstrate the flask_request_count metric, but if you're running a different app, use a metric you know is being emitted by your application.
Let's query the metric in the Prometheus web view to get its labels:
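For the sample app, the query result looks something like the following. These series are illustrative rather than copied from a live cluster, so your label values may differ:

# Querying flask_request_count returns one series per instance, for example:
flask_request_count{app="demo-app",kubernetes_namespace="prod",release="baseline",...}
flask_request_count{app="demo-app",kubernetes_namespace="prod",release="canary",...}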
For the configuration, we need labels that differentiate the baseline and canary instances. As you can see by looking at the name-value pairs for each instance, we can use a combination of "release" and "kubernetes_namespace" to identify the instance of our application. Let's take a look at the fields that we have in the Metric Scope of the Canary Analysis stage:
The values of the Baseline and Canary fields will match our metric's release labels, while Baseline Location and Canary Location will match the kubernetes_namespace labels.
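For the sample app, assuming the baseline deployment carries release: baseline (analogous to the canary manifest shown earlier), the mapping works out to:
- Baseline: baseline and Canary: canary, matched against the release label
- Baseline Location: prod and Canary Location: prod, matched against the kubernetes_namespace label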
Based on this, we can go back to our Canary Configuration and create a template for our metric.
To do that, click the Add New Template button under Filter Templates.
In our template, we've assumed that we're only interested in responses with a 500 code. We've also used two variable bindings (scope and location) that are implicitly available for all templates.
As you might guess, depending on the target for which the query is made, ${scope} will be interpolated with the value of either the "Baseline" or "Canary" field, and ${location} with "Baseline Location" or "Canary Location".
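Since Kayenta's Prometheus filter templates are a comma-separated list of label matchers inserted into the query selector, a template along these lines would express that intent. This is our illustrative reconstruction; in particular, the status label name depends on how your app exports the response code, so verify it against your own metric's labels:

release="${scope}",kubernetes_namespace="${location}",status="500"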
Now let’s add the metric itself:
A good way to verify that Kayenta can communicate with Prometheus is to check the autocompletion in the "Metric Name" field. Let's give our metric a name, choose the previously created "Filter Template", and select the metric itself in the "Metric Name" field:
From this point, configuration is pretty much the same as for any other provider. You might want to add additional metrics, create groups of metrics, and so on. Don't forget to assign proper weights to the groups (the weights must add up to 100) and save the configuration. When you're done, the canary config is ready to be used in "Canary Analysis" stages.
So that's how to create a canary configuration using Kayenta and Prometheus. You can then use it in your canary deployments.
Intrigued? Want to learn more about Spinnaker? Check out our SPIN50 class to learn how to get from code check-in to production using Spinnaker in just one day of focused lectures and labs.