Prerequisites

  • A Kubernetes-based platform (OpenShift, vanilla Kubernetes, etc.)
  • A Git-based repository (GitHub, GitLab, etc.)
  • Tekton or OpenShift Pipelines installed on the cluster

Introduction

Many companies take a mono-repository approach when organizing their projects in Git. This typically means that one repository contains many different types of projects; for example, a front-end Node.js application and a back-end Java-based API server sit side by side in the same parent directory. Existing CI/CD frameworks like Jenkins have plug-ins to handle this Git pattern. OpenShift Pipelines (Tekton), however, follows the philosophy that every aspect of the pipeline is a first-class citizen of a Kubernetes-based platform. This makes it possible to pull down existing solutions from community sources or to roll out a custom implementation, as the following solution shows.

This article describes the process of using triggers within a single project while employing a repository that contains many different projects. These may have similar or different build processes.

Understanding EventListeners

EventListeners are how OpenShift Pipelines exposes triggers externally to the CI/CD pipeline. These triggers run inside the cluster and accept different types of webhooks from external sources, such as GitLab or GitHub. When an EventListener is applied to a namespace, the OpenShift Pipelines Operator generates a new pod whose name includes the EventListener's name.
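
For GitLab or GitHub to deliver webhooks to that pod, the Service created alongside it (conventionally named el-<EventListener name>) has to be reachable from outside the cluster. The following is a minimal sketch of an OpenShift Route that could expose it; the Service name and port name are assumptions based on that naming convention and should be verified in the target namespace before applying.

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-chris-eventlistener
spec:
  port:
    # the generated EventListener Service typically names its port http-listener;
    # confirm against the Service in your namespace
    targetPort: http-listener
  to:
    kind: Service
    # assumed to follow the el-<EventListener name> convention
    name: el-hello-chris-eventlistener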

These EventListeners pair with the different trigger mechanisms that come with the trigger extensions of OpenShift Pipelines. The big advantage of how OpenShift Pipelines structures triggers is that it allows CI/CD resources that follow the same abstractions to be reused. This contrasts with the pattern often seen in Jenkins pipelines, where files repeat the same block of code throughout an organization, making existing CI/CD processes difficult to modify and maintain. The following examples highlight the new pattern of abstraction for OpenShift Pipelines.

The following code is an example of an EventListener using the desired pattern described above:

apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: hello-chris-eventlistener
spec:
  serviceAccountName: pipeline
  triggers:
  - name: hello-chris-trigger
    interceptors:
    - cel:
        filter: >-
          body.commits[0].title.split(" ")[0] == "hello-chris"
        overlays:
        - key: working-dir
          expression: 'body.commits[0].title.split(" ")[0]'
    bindings:
    - kind: ClusterTriggerBinding
      ref: hello-chris-binding
    template:
      ref: hello-chris-template
  - name: hello-dave-trigger
    interceptors:
    - cel:
        filter: >-
          body.commits[0].title.split(" ")[0] == "hello-dave"
        overlays:
        - key: working-dir
          expression: 'body.commits[0].title.split(" ")[0]'
    bindings:
    - kind: ClusterTriggerBinding
      ref: hello-chris-binding
    template:
      ref: hello-chris-template

The fact that the EventListener allows a split to take place is key. In this case, that split comes between two projects that exist in the same repository: the hello-chris and the hello-dave project. Each operates according to its own definition of how the CI/CD process should proceed. For more information on TriggerBinding or TriggerTemplate, see this trigger documentation and the example listed in the resources section. The following segment describes the core of how this split works using the interceptor mechanism.
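
Before getting to the interceptors, here is a hedged sketch of what the ClusterTriggerBinding referenced above (hello-chris-binding) could look like. The GitLab field paths (project.git_http_url, checkout_sha) and the way the working-dir overlay is referenced are assumptions to check against your Triggers version; newer releases expose overlay keys under $(extensions.*) rather than adding them to the body.

apiVersion: triggers.tekton.dev/v1alpha1
kind: ClusterTriggerBinding
metadata:
  name: hello-chris-binding
spec:
  params:
  # standard fields from the GitLab push payload
  - name: git-url
    value: $(body.project.git_http_url)
  - name: git-revision
    value: $(body.checkout_sha)
  # value added to the payload by the CEL overlay; newer Triggers releases
  # would reference this as $(extensions.working-dir)
  - name: working-dir
    value: $(body.working-dir)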

Interceptors

Interceptors are the EventListener mechanism that makes it possible to parse the Git webhook payload and split the pipeline. This article focuses on the CEL (Common Expression Language) interceptor, but other interceptors can be found in the Tekton and OpenShift Pipelines documentation. CEL is an expression language designed for parsing and validating content passed between web services.

{
  "object_kind": "push",
  "before": "95790bf891e76fee5e1747ab589903a6a1f80f22",
  "after": "da1560886d4f094c3e6c9ef40349f7d38b5d27d7",
  ...
  "commits": [
    {
      "id": "b6568db1bc1dcd7f8b4d5a946b0b91f9dacd7327",
      "message": "updated the README with instructions on how to test the pipeline",
      "title": "hello-chris updated README.",
      ...
    }
  ]
}

Example of the JSON payload that comes from GitLab

The commit title here is critical; it is what the CEL filter expression parses when it examines the payload to decide whether to fire the trigger and add the overlay.

body.commits[0].title.split(" ")[0] == "hello-chris"

The CEL expression shown above splits the commit title on the space character and checks the first word of the resulting list: if it is hello-chris or hello-dave, the corresponding part of the repository is chosen to trigger the build process. For the title "hello-chris updated README." in the payload above, the first word is hello-chris, so the hello-chris-trigger fires.

One aspect of the CEL implementation to keep in mind is that, currently, there is not a good way to check whether hello-chris or hello-dave exists anywhere within the array of commits. In this example, the CEL expression always looks at the first commit being pushed and ignores any following commits. This makes squashing commits prior to a push critical; only one build can be triggered per push event.

Once the interceptor has been triggered by the expression, additional information can be parsed and sent along to the Pipeline via the overlays. The example above shows the same text validated in the filter being parsed into parameters by the TriggerBinding. Ultimately, that information is passed to the TriggerTemplate, which instantiates a PipelineRun based on the Pipeline definition so the values can be used within the individual Tasks.
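
To round out the flow, the following is a hedged sketch of what a TriggerTemplate named hello-chris-template could look like: it accepts the bound parameters (including the working-dir overlay) and instantiates a PipelineRun for the Pipeline shown in the next section. The registry path, workspace size, and the reuse of working-dir as the deployment name are illustrative assumptions rather than values from the example project, and older Triggers releases reference parameters as $(params.name) instead of $(tt.params.name).

apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: hello-chris-template
spec:
  params:
  - name: git-url
  - name: git-revision
    default: master
  - name: working-dir
  resourcetemplates:
  - apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      generateName: hello-chris-pipeline-run-
    spec:
      pipelineRef:
        name: hello-chris-pipeline
      params:
      - name: git-url
        value: $(tt.params.git-url)
      - name: git-revision
        value: $(tt.params.git-revision)
      - name: working-dir
        value: $(tt.params.working-dir)
      # assumption: the deployment and image are named after the project directory
      - name: deployment-name
        value: $(tt.params.working-dir)
      - name: IMAGE
        value: image-registry.openshift-image-registry.svc:5000/my-namespace/$(tt.params.working-dir)
      workspaces:
      - name: shared-workspace
        volumeClaimTemplate:
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi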

Pipeline Abstraction

The pattern used in the EventListener allows for an additional layer of abstraction that makes OpenShift Pipelines a powerful CI/CD tool. The example above shows the hello-dave project running the same CI/CD process as the hello-chris project by using the same Pipeline. Because they both use the same Pipeline process, with different content, the hello-chris Pipeline is a reusable abstraction rather than being limited to one project. Since both hello-chris and hello-dave are Node.js-based web servers, the TriggerBinding, TriggerTemplate, and Pipeline could instead be prefixed with web-server, allowing for a domain-specific abstraction.

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-chris-pipeline
spec:
  workspaces:
  - name: shared-workspace
  params:
  - name: deployment-name
    type: string
    description: name of the deployment to be patched
  - name: git-url
    type: string
    description: url of the git repo for the code of deployment
  - name: git-revision
    type: string
    description: revision to be used from repo of the code for deployment
    default: "master"
  - name: IMAGE
    type: string
    description: image to be built from the code
  - name: TLSVERIFY
    type: string
    default: "false"
    description: tls verification
  - name: working-dir
    type: string
    default: ""
    description: default working directory to build from
  tasks:
  - name: fetch-repository
    taskRef:
      name: git-clone
      kind: ClusterTask
    workspaces:
    - name: output
      workspace: shared-workspace
    params:
    - name: url
      value: $(params.git-url)
    - name: deleteExisting
      value: "true"
    - name: revision
      value: $(params.git-revision)
  - name: build-image
    taskRef:
      name: buildah
      kind: ClusterTask
    params:
    - name: IMAGE
      value: $(params.IMAGE)
    - name: TLSVERIFY
      value: $(params.TLSVERIFY)
    - name: DOCKERFILE
      value: "./$(params.working-dir)/Dockerfile"
    workspaces:
    - name: output
      workspace: shared-workspace
    - name: source
      workspace: shared-workspace
    runAfter:
    - fetch-repository

If hello-dave were a different type of application, such as a UI (user interface) rather than a web server, there could be a different TriggerTemplate, TriggerBinding, and Pipeline. This would establish a CI/CD process separate from hello-chris, allowing the EventListener to trigger different processes from the entry point of the incoming webhook.
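
As a hedged illustration of that split, the hello-dave entry in the EventListener's spec.triggers list could simply reference UI-specific resources instead of the shared web-server ones; the hello-dave-ui-trigger, ui-binding, and ui-template names below are hypothetical.

# hypothetical replacement for the hello-dave trigger entry in spec.triggers
- name: hello-dave-ui-trigger
  interceptors:
  - cel:
      filter: >-
        body.commits[0].title.split(" ")[0] == "hello-dave"
      overlays:
      - key: working-dir
        expression: 'body.commits[0].title.split(" ")[0]'
  bindings:
  - kind: ClusterTriggerBinding
    ref: ui-binding
  template:
    ref: ui-template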

Further Limitations

As referenced above, the current EventListener CEL implementation only processes a single commit rather than looping through all the commits contained in a push event. This means that only one CI/CD process can be triggered at a time. Returning to the API web server and UI application example, in which each has its own CI/CD process, if both were modified in the same commit, only one pipeline would be triggered, based on the first word in the commit title.

Conclusion

OpenShift Pipelines is a powerful tool that allows significant abstractions to be made when developing CI/CD processes. Combined with the mechanisms within the EventListener, this lets developers decide which project to build based on different parameters read from the webhook payload. The example here can be expanded into complex CI/CD processes driven by events upstream in Git, enabling automation that can significantly improve velocity for organizations looking to embrace modern development methodologies.

Additional Resources

Example Project

https://github.com/cnuland/hello-chris-tekton

GitLab webhook payload overview

https://docs.gitlab.com/ee/user/project/integrations/webhooks.html

Tekton CEL Documentation

https://tekton.dev/vault/triggers-v0.7.0/cel_expressions/


About the authors

Christopher Nuland is a cloud architect for Red Hat services. He helps organizations with their cloud-native migrations, with a focus on the Red Hat OpenShift product.
