This post was originally written by Andrew Block on his blog: http://blog.andyserver.com/2016/02/jenkins-slaves-openshift-external-jenkins-environment/?ref=dzone

In part 1 and part 2 of the series on Jenkins and OpenShift, we used OpenShift as the execution environment to run a Jenkins master instance and a set of either statically defined or dynamically provisioned slave instances.

However, many organizations already have an existing Jenkins infrastructure in place to act as the backbone of their continuous integration and continuous delivery pipelines, but they may still desire the ability to take advantage of the elasticity OpenShift can provide.

The following outlines the steps necessary for integrating an external Jenkins environment with OpenShift to run jobs.

First, let’s review the high-level architecture from the first two posts. The master and slave instances each run in Docker containers and are deployed to OpenShift as pods. When creating a static set of slave instances, each slave is configured to use Kubernetes services to communicate with the master and register itself in Jenkins.

The use of Kubernetes services provides a level of abstraction over the actual location of the master, since pods, like Docker containers, can come and go. Once the slaves have registered themselves with the master, they are able to take on pending jobs.

When leveraging the Kubernetes Jenkins plugin to dynamically provision slave instances, many of the same steps described previously apply; however, instead of statically deploying a set of slave pods, the Jenkins master communicates with the OpenShift API to manage the lifecycle of slave instances. In both paradigms, two Kubernetes services are used to locate and communicate with the master.

This is the key area that needs to change when integrating an external Jenkins instance. Instead of pointing to a Jenkins master inside the OpenShift cluster, the service will be configured to point to the location of the external instance.
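
Under the covers, an external service of this kind is commonly expressed as a Kubernetes Service without a pod selector, paired with an Endpoints object that carries the external address. The following is a minimal sketch of that pattern, not necessarily the template's exact contents; the jenkins name, port 8080, and the 10.0.0.1 address are placeholder assumptions:

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "jenkins" },
  "spec": { "ports": [ { "port": 8080 } ] }
}

{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": { "name": "jenkins" },
  "subsets": [
    {
      "addresses": [ { "ip": "10.0.0.1" } ],
      "ports": [ { "port": 8080 } ]
    }
  ]
}

Because the Service defines no selector, OpenShift does not populate its endpoints from pods; requests to the service are instead forwarded to the address listed in the Endpoints object.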

[Image: overview of the external Jenkins environment integrated with OpenShift]

The resources are once again found on GitHub. Clone the repository to your local machine or update it if it is already present:

git clone https://github.com/sabre1041/ose-jenkins-cluster

Next, create a new project in OpenShift called jenkins, either in the web console or on the command line using the oc client, which will house the resources that will be created:

oc new-project jenkins

Enter the directory containing the Git repository cloned previously and add the three templates to the newly created project:

oc create -f support/jenkins-cluster-persistent-template.json,support/jenkins-cluster-ephemeral-template.json,support/jenkins-external-services-template.json

Note: If you followed the steps from an earlier post and would like to reuse the same project, you can either remove or replace the existing templates using the oc delete template <name> or oc replace -f <files> commands.
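
As an optional check, confirm the templates were added to the project:

oc get templates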

The jenkins-cluster-persistent and jenkins-cluster-ephemeral templates are almost identical to those from the previous posts. A new third template is available to create the service objects that support existing Jenkins instances within the enterprise. Instead of load balancing a set of pods running in OpenShift, the template uses external services to reference the location of the Jenkins master outside of OpenShift.

Let’s once again instantiate the template to create a Jenkins master and slave infrastructure in OpenShift. You may be wondering why we would leverage a template that creates the Jenkins master in OpenShift when we will be communicating with an externally facing instance. To support both use cases, where the master may be running either in OpenShift or externally, we use the same template; when an external instance is chosen, the OpenShift objects relating to the master can simply be deleted.

oc new-app --template=jenkins-cluster-ephemeral

The master and slave resources should now be created. Since the master components will not be used in OpenShift, let’s go ahead and delete them. The oc delete command can be used to remove objects from an OpenShift project, and the -l parameter can be used to target a subset of objects, so for our use case only the master components will be deleted. The template added labels to each of the instantiated components in the form application=jenkins to represent the master and application=jenkins-slave to represent the slaves. Execute the following command to remove the objects for the Jenkins master:

oc delete all -l application=jenkins
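
To confirm only the slave components remain, list the objects using the slave label applied by the template:

oc get all -l application=jenkins-slave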

With the existing Jenkins master components now removed, let’s instantiate another template which will create the services necessary to communicate with the externally facing Jenkins instance. The template takes in a parameter called JENKINS_IP which specifies the location of the external Jenkins instance. Run the following command to instantiate the template specifying the IP address of the externally facing Jenkins instance to create the new services:

oc new-app --template=jenkins-external-services -p JENKINS_IP=<ip_address>

Note: If a pod containing a slave instance is currently running, it must be deleted in order to inject the correct service address and port referring to the external Jenkins instance.

Configuring the Jenkins Master

With the OpenShift components in place, let’s focus on the existing Jenkins environment and the steps necessary to configure the master to leverage OpenShift to run slave instances. First, there must be open communication channels between OpenShift and the Jenkins master. This includes communication from OpenShift to Jenkins on the port exposed by the web console, as well as on a TCP port used by the JNLP slaves to communicate with the master. Port 50000 is the recommended JNLP slave port, but a different port can be utilized if necessary.

The Jenkins master must also be able to communicate with the OpenShift API when leveraging the Kubernetes Jenkins plugin to dynamically provision slaves. On the Jenkins master, ensure the requisite firewall ports are opened. If firewalld is the firewall implementation being used, execute the following commands to enable communication on ports 8080 and 50000 and to reload the configuration:

firewall-cmd --zone=public --add-port=8080/tcp --permanent
firewall-cmd --zone=public --add-port=50000/tcp --permanent
firewall-cmd --reload

The majority of the functionality enabling the dynamic capabilities between Jenkins and OpenShift is provided by Jenkins plugins. The following needs to be installed in the Jenkins environment:

• Kubernetes Plugin (Jenkins resolves its dependencies automatically)

Plugins can be installed in Jenkins by logging into the master web interface, selecting the Manage Jenkins link on the lefthand side, and then selecting Manage Plugins. On the Available tab, mark the checkboxes next to the plugins listed above, scroll to the bottom of the page, and select Download now and install after restart. This will download and install the plugins and then restart the Jenkins instance once no jobs are actively running.
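
Alternatively, if the Jenkins CLI jar is available, the plugin can be installed from the command line. A sketch, assuming the master is reachable at localhost:8080 and that any authentication options required by your security configuration are appended:

java -jar jenkins-cli.jar -s http://localhost:8080/ install-plugin kubernetes -restart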

[Image: installing plugins from the Jenkins plugin manager]

Next, configure the JNLP port that slave instances use to communicate with the Jenkins master. Select the Manage Jenkins link on the lefthand side and select Configure Global Security. Check Enable Security if it is not already checked, and next to TCP port for JNLP slave agents, select the Fixed radio button and enter 50000 (or the value of the chosen JNLP port as described above). Click Save to apply the changes.

[Image: global security settings with a fixed JNLP slave agent port]

To completely disable jobs from executing on the master and to solely utilize slaves, the executor count on the master needs to be set to 0. This value is set on the Jenkins system configuration page, which can be accessed by selecting Manage Jenkins on the lefthand side of the master landing page and selecting Configure System. Next to # of executors, enter 0 and then hit Save to apply the changes.

[Image: setting the number of executors on the master to 0]

At this point, the configuration of both OpenShift and Jenkins to utilize statically defined slave instances is complete. Using either the OpenShift web console or the OpenShift CLI, verify at least one slave instance is currently running. The slave instance should now be visible in the list of executors on the lefthand side of the Jenkins landing page.
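
The running slave can also be verified from the command line using the label the template applied to the slave components:

oc get pods -l application=jenkins-slave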

[Image: the slave instance registered in the Jenkins executor list]

Select an existing job or create a new one and start a build to validate that jobs run on the slave instance within OpenShift.

[Image: a job running on a slave instance in OpenShift]

Now that jobs have been validated against statically defined slave instances within OpenShift from an externally facing Jenkins master, let’s cover the second use case, where Jenkins leverages the Kubernetes plugin to dynamically provision slave instances in OpenShift. The first step is to scale down the statically defined instances so the dynamic slave provisioning functionality can be validated:

oc scale dc jenkins-slave --replicas=0

Next, let’s configure the settings for the Kubernetes plugin on the system configuration page. Once again, it can be accessed by selecting Manage Jenkins on the landing page and then selecting Configure System. Scroll down towards the bottom of the page and locate the Cloud section.

Select Add a new cloud and choose Kubernetes, which will generate a new section of configuration options.

Under Kubernetes, enter a name for the configuration, such as OpenShift, and then enter the URL of the OpenShift API. You can either enter the server certificate key used for HTTPS communication or select Disable HTTPS Security Check to disable this verification. Since a Kubernetes namespace is equivalent to an OpenShift project, enter jenkins in the text box next to the Kubernetes namespace field.

[Image: Kubernetes plugin cloud configuration]

When Jenkins communicates with OpenShift, it must provide authentication in order to interact with the API. In Jenkins, these values are stored as credentials. In the previous post, we leveraged an OpenShift service account to provide the authentication details, since the Jenkins master was running in OpenShift. As Jenkins is now running outside of OpenShift, a username and password for an account with access to the jenkins project must be provided.
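
If the account does not yet have access to the project, a user with sufficient privileges can grant it. A minimal sketch, assuming the edit role provides sufficient rights to manage slave pods and using jenkins-user as a placeholder username:

oc policy add-role-to-user edit jenkins-user -n jenkins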

Click the Add button to start the credential creation process, and in the dialog box next to Kind, use the dropdown to select OpenShift OAuth Access Token. Enter the username, password, and description into the text boxes. If desired, select the Advanced button and enter a unique ID for the credential; otherwise, one will be generated by default.

[Image: creating the OpenShift OAuth Access Token credential]

Finally, click Add to create the credential, then select it from the dropdown box next to Credentials.

Validate that the master is able to successfully communicate with the OpenShift API by clicking the Test Connection button, which should return a Connection successful message.
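
If the connection test fails, a quick check from the Jenkins master can help isolate basic connectivity issues; the host and port below are placeholders for your OpenShift master:

curl -k https://openshift.example.com:8443/api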

[Image: successful connection test to the OpenShift API]

Next, we will specify the addresses the dynamically provisioned slaves will use to communicate back to the Jenkins master. Two OpenShift capabilities will be used within these addresses. First, SkyDNS provides a mechanism for reaching OpenShift resources using domain names, including Kubernetes services. Earlier, Kubernetes services were created to reference the location of the Jenkins master; by referencing a service, the master can be reached by the slaves. Service addresses in SkyDNS take the form <service>.<project>.svc.cluster.local.
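
For example, the two services created earlier in the jenkins project are reachable at the following names:

jenkins.jenkins.svc.cluster.local
jenkins-slave.jenkins.svc.cluster.local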

In the Jenkins URL field, enter http://jenkins.jenkins.svc.cluster.local:8080, which leverages the jenkins Kubernetes service. For the Jenkins tunnel field, enter jenkins-slave.jenkins.svc.cluster.local:50000 to use the jenkins-slave Kubernetes service, which provides a communication channel between the slave and master over the JNLP port. Click Apply to save the changes.

When a new slave instance is provisioned by the master, it communicates with OpenShift to create a pod with the appropriate Docker image and settings to execute the job. These configurations are specified by defining a new Pod Template.

In the Kubernetes plugin configuration, select Add Pod Template and then Kubernetes Pod Template to add the fields to the configuration page. Only three fields require input. First, give the template a name, such as slave.

Next, the Docker image that will be used for the slave needs to be specified. When the template was instantiated earlier, a new build of a slave image was started, completed and then pushed to the integrated Docker registry. To determine the location of the image in the integrated registry, execute the following command:

oc get is jenkins-slave --no-headers | awk '{ print $2 }'
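
The output will resemble the following, although the registry address varies from cluster to cluster (the value below is illustrative):

172.30.x.x:5000/jenkins/jenkins-slave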

Insert the response into the textbox next to Docker image.

Finally, set the Jenkins slave root directory text box to match the home directory specified in the slave Dockerfile:

/opt/jenkins-slave

Click Save to apply the changes and complete the required configuration for provisioning dynamic slaves.

[Image: Kubernetes pod template configuration]

To validate the configuration is successful, first scale down any statically defined slave instances by running the following command:

oc scale dc jenkins-slave --replicas=0

Start a new build and confirm the job is running in a pod on OpenShift.
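
The dynamically provisioned slave pod can be observed as it is created and, upon job completion, terminated by watching the pods in the project:

oc get pods -w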

[Image: a dynamically provisioned slave running a job]

With the successful execution of Jenkins jobs from both statically defined and dynamically provisioned slaves, we were able to demonstrate how an existing Jenkins environment can integrate with OpenShift. The ability to leverage the elastic resources of a platform as a service, such as OpenShift, to support the continuous integration and continuous delivery of applications gives businesses additional opportunities to handle the demands of the modern world.

Author

Andrew Block
Senior Consultant,
Red Hat Consulting

Specialities:

• Integration
○ Apache Camel
○ JBoss Fuse
• Cloud Technologies
○ OpenShift/OpenStack
• Automation
○ Continuous Integration/Continuous Delivery
○ Configuration management (Chef, Ansible)

OSS Contributor:

• Jenkins Ecosystem
• Apache Camel

@sabre1041

https://www.facebook.com/sabre1041

