Keep calm and Observe — Part 2

Sam J. —
5 min read · Sep 7, 2020


Consuming Gitlab webhooks in MetaControl


In part 1 we sketched our overall scenario: running a sample microservice in AKS while using Gitlab to host sources, configurations and pipelines for CI/CD and infrastructure.

In this part we will learn how to:

  • add the sample service workload
  • build and trigger pipelines that post webhooks
  • map webhook payloads in MetaControl
  • connect Gitlab to AKS, deploy via pipeline and see the results in MetaControl

If you jumped into the series, have a look at part 1 for basic information.

Workload: Spring Boot Petclinic

Our sample application is the Spring Boot Petclinic. The project is extended with skaffold for quick builds, containerization and deployments on k8s, and comes with pipeline instructions for Gitlab to automate it.

The pipeline uses skaffold and the jib-maven-plugin to compile the project, containerize it with Google distroless base images and push the image to Gitlab’s container registry. Finally it will try to deploy to the specified k8s cluster.
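A build job in .gitlab-ci.yml for such a setup could be sketched roughly as follows (the job and stage names, the skaffold image tag and the --default-repo wiring are assumptions, not the exact file from the repository):

```yaml
# Hypothetical sketch of the build part of .gitlab-ci.yml.
# CI_REGISTRY_IMAGE is a predefined Gitlab CI variable pointing
# at the project's container registry path.
stages:
  - build
  - deploy

build:
  stage: build
  image: gcr.io/k8s-skaffold/skaffold:latest
  script:
    # jib builds and pushes the image without needing a Docker daemon
    - skaffold build --default-repo "$CI_REGISTRY_IMAGE"
```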

You can fork the project into your Gitlab account and give the pipeline a spin. The last step will fail since no cluster is configured yet, but the pipeline is already good enough to link Gitlab with MetaControl and display events.

Make sure you adapt the image name to your project repository (files: skaffold.yaml, k8s/deployment.yaml).
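For orientation, the image name in skaffold.yaml sits in the build.artifacts section; a hypothetical excerpt (the registry path is a placeholder you replace with your own):

```yaml
# skaffold.yaml (excerpt) -- point the image at your own registry path
build:
  artifacts:
    - image: registry.gitlab.com/<your-namespace>/spring-petclinic
      jib: {}   # delegate the build to the jib-maven-plugin
```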

Creating a Gitlab Webhook

We want to learn more about our pipeline states. Most of the time pipelines will succeed, and we want to know when that is not the case. Luckily Gitlab provides per-project webhooks whose payloads can be inspected.

Create a new webhook under Settings → Webhooks and select only “pipeline events”. The POST URL should follow the pattern <webhookconfig>/<tenantId>/bucket/<bucketId>.

Try it out and examine the payload.

Gitlab webhook configuration testing

The result is a lengthy description of the pipeline run. You can preview the payload with a request-inspection tool of your choice; just replace the URL accordingly.
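For orientation, an abridged pipeline event payload looks roughly like this (values are illustrative; the real payload contains many more fields, e.g. a builds array):

```json
{
  "object_kind": "pipeline",
  "object_attributes": {
    "id": 123456,
    "ref": "master",
    "status": "failed",
    "duration": 312
  },
  "commit": {
    "message": "Bump spring-boot version"
  },
  "project": {
    "name": "spring-petclinic"
  }
}
```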

Mapping the payload

Picking the right information from the payload is a bit of an exercise.

Below you find an example of how your webhook mapping could look. The mapping section defines the MetaControl Event Model in the keys and the mapping to the payload in the values. Essentially you can either select data from the payload via JSON-path expressions (starting with a $) or add static strings.

If a field expects a boolean outcome, its expressions are processed top-down and the last successful expression wins.

If a field expects a data value and accepts multiple of them in the form of an array, those values will be concatenated.

Status: checks object_attributes.status with the expressions in the given order. The last true expression wins.

Event: we use the git commit message as the headline

Severity: optional, can be 1–6. We could compute this, but we simply set it to “3” as the average. You could also make it dependent on the pipeline state or on single stages, of course.

Category, Ident, Message: these fields are lists and their entries are concatenated.

MessagePropertySection: creates key-value sections, using JSON-path for selection in the payload. This way it is easy to collect meta information and attach it to the event.
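Putting the field rules above together, a mapping along these lines could go into webhooks.yaml (the exact MetaControl syntax is an assumption here; the keys and payload paths follow the descriptions above):

```yaml
mapping:
  status:
    # boolean field: processed top-down, last true expression wins
    - "$.object_attributes.status == 'success'"
    - "$.object_attributes.status == 'failed'"
  event: "$.commit.message"          # git commit message as the headline
  severity: "3"                      # static average severity (1-6)
  category:                          # array entries are concatenated
    - "pipeline/"
    - "$.project.name"
  ident:
    - "$.object_attributes.id"
  message:
    - "Pipeline on ref "
    - "$.object_attributes.ref"
  messagePropertySection:
    pipeline:                        # key-value section attached to the event
      status: "$.object_attributes.status"
      duration: "$.object_attributes.duration"
```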

Open the config repository you created in part 1 and edit the file webhooks.yaml. Paste the above configuration and commit it. MetaControl will pick up changes from your config repo periodically.

Failed pipeline event in MetaControl (details)

With the mapping finished we can now execute a webhook call from Gitlab and test our setup.

Send a pipeline event and check the result in MetaControl.

If everything is configured correctly and you push new payloads via the Gitlab test button, you should see events popping up in the MetaControl UI. Open an event and examine its details.

The header will display our mapped fields, and the collapsible dropdown sections below will describe all messagePropertySections we have chosen in our mapping.

This way you can incorporate plenty of additional data that might help to figure out a root cause.

Deploying to Kubernetes

Now that we have some visibility and see our deployment pipeline fail, let’s add and configure a Kubernetes cluster on Azure and finish the last bit of this story.

In order to deploy our service to AKS we need to integrate it first with Gitlab and also create a secret to enable AKS to access our docker image repository on Gitlab.

Gitlab Kubernetes Integration

Setting up the integration between AKS and Gitlab is not straightforward and requires several steps. You can find a good step-by-step guide in the Gitlab docs. Make sure you also set up a Gitlab environment so the KUBE_CONFIG variable is accessible from the CI/CD pipeline (it will be empty otherwise).
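As an illustration, a deploy job tied to an environment could look like this hypothetical sketch (the job name, environment name and the kubectl sanity check are assumptions):

```yaml
# Hypothetical deploy job: only jobs with an environment get KUBE_CONFIG.
deploy:
  stage: deploy
  environment:
    name: production          # without this, KUBE_CONFIG stays empty
  script:
    - export KUBECONFIG="$KUBE_CONFIG"
    - kubectl get nodes       # quick sanity check of the cluster connection
    - skaffold run --default-repo "$CI_REGISTRY_IMAGE"
```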

Kubernetes Pull-secret

Kubernetes needs to authenticate to our Gitlab container registry to be able to pull the image. The pullSecret referenced in our deployment.yaml file is called petclinic-pull-gitlab. Let’s create it on the cluster.
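For orientation, the relevant part of k8s/deployment.yaml would look roughly like this excerpt (the container name and image path are placeholders):

```yaml
# k8s/deployment.yaml (excerpt) -- pod spec referencing the pull secret
spec:
  template:
    spec:
      imagePullSecrets:
        - name: petclinic-pull-gitlab
      containers:
        - name: petclinic
          image: registry.gitlab.com/<your-namespace>/spring-petclinic
```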

We can request a new Gitlab access token in the personal settings and give it the read_registry scope. In our case we use “petclinic” as the token name. Now we are able to create our secret on AKS:

kubectl create secret docker-registry petclinic-pull-gitlab --docker-server=registry.gitlab.com --docker-username=petclinic --docker-password=<masked>


Final pipeline check

With the deployment configuration set up we are now able to run our pipeline completely and hopefully see some green, successful events in the MetaControl UI.

Next: In part 3 we will look at our sample application, send custom events from the business logic, scrape actuator endpoints and logs.

Stay tuned.
