OpenShift Operators provide full control over the life cycle of complex applications. They use custom resources to define an application’s desired state, and they implement controllers that watch those custom resources and reconcile the cluster’s actual state with that desired state.

The Operator SDK provides first-class support for developing OpenShift Operators using Go, Ansible, or Helm charts. But it is also possible to develop OpenShift Operators in other languages, either by accessing the Kubernetes API directly or by using one of the official Kubernetes client libraries.

In this article, we will use the JavaScript Kubernetes client and the Red Hat ubi8/nodejs-14 container image to build an OpenShift Operator that implements the memcached Operator example from the OpenShift documentation in TypeScript.

The full source code for this project can be found here.

Introduction

Memcached is a general-purpose, distributed, in-memory key-value store. A Memcached image is available on Docker Hub that we can run in a pod on OpenShift, and we can spin up several pods running that image to create a Memcached cluster. To showcase the use of OpenShift Operators, we will create a custom resource named “Memcached” that defines the size of our Memcached cluster, along with the corresponding Operator that creates the Memcached cluster based on the values found in the custom resource.

For example, the custom resource below describes a Memcached cluster named “memcached-sample” with a desired size of 2 pods:

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  size: 2

Setting up Our TypeScript Environment

We will start by cloning the source code of the ts-operator sample application associated with this project:

git clone git@github.com:nodeshift-blog-examples/operator-in-JavaScript.git
cd operator-in-JavaScript

We can then tell npm to install the project dependencies. This will install the TypeScript compiler and the @kubernetes/client-node library which we use to interact with the OpenShift API:

npm install

 

Setting up Our OpenShift Environment

Let’s start by logging into our OpenShift cluster. We will need to use an account with cluster-admin privileges. If you are using Red Hat CodeReady Containers, you can type:

oc login -u kubeadmin https://api.crc.testing:6443

 

Our Operator is named ts-operator and will watch custom resources of kind Memcached, defined by a custom resource definition (CRD). It will maintain a Deployment resource based on the size of the Memcached cluster defined in our custom resource.

We will need the following OpenShift resources in place for our ts-operator:

  • A ts-operator namespace where our operator will be running
  • A custom resource definition that defines our new Memcached resource
  • A ts-operator service account that will be running our Operator pod
  • A role that defines the permissions we want to give to our ts-operator
  • A ClusterRoleBinding that assigns our role to our ts-operator service account
  • An ImageStream that will track the container image of our Operator
  • A deployment that will deploy our Operator pod and run it using our ts-operator service account
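Among these, the custom resource definition is the piece that teaches OpenShift about our new Memcached kind. As a reference sketch (the actual file lives in the repository's resources directory; the plural name and the exact schema here are assumptions), it might look roughly like this:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: memcacheds.cache.example.com
spec:
  group: cache.example.com
  names:
    kind: Memcached
    plural: memcacheds
    singular: memcached
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      subresources:
        status: {}          # lets the operator update status separately
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer
            status:
              type: object
              properties:
                pods:
                  type: array
                  items:
                    type: string
```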

All of those resources are available in the GitHub repository for this article and can be created using the following command:

oc apply -k resources

We can then switch to the ts-operator project:

oc project ts-operator

With our TypeScript and our OpenShift environments ready, we can start to add some logic to our Operator in our index.ts file.

Implementing the Operator Logic

Let’s go over the code in our index.ts file to understand the logic of our operator.

The first step is to import our Kubernetes client:

import * as k8s from "@kubernetes/client-node";

 


We then create three TypeScript interfaces to represent our custom resource. We have a size property in the spec that defines the number of pods in our Memcached cluster, and the status field will list the names of the pods in the Memcached deployment:

interface MemcachedSpec {
  size: number;
}

interface MemcachedStatus {
  pods: string[];
}

interface Memcached {
  apiVersion: string;
  kind: string;
  metadata: k8s.V1ObjectMeta;
  spec?: MemcachedSpec;
  status?: MemcachedStatus;
}
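Note that spec and status are declared optional, so code that reads them should guard against their absence. A small self-contained sketch (the metadata type is simplified to a plain name, and the default size of 1 is an assumption made for illustration only):

```typescript
// Simplified Memcached shape (metadata reduced to a name) so the
// sketch is self-contained; spec is optional, as in the article.
interface Memcached {
  apiVersion: string;
  kind: string;
  metadata: { name?: string };
  spec?: { size: number };
}

// Guard against a missing spec with optional chaining; the default
// of 1 is an illustrative assumption, not taken from the repository.
function desiredSize(obj: Memcached): number {
  return obj.spec?.size ?? 1;
}

// The memcached-sample custom resource expressed as a typed object.
const sample: Memcached = {
  apiVersion: "cache.example.com/v1",
  kind: "Memcached",
  metadata: { name: "memcached-sample" },
  spec: { size: 2 },
};
```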

We can then load our kubeconfig:

const kc = new k8s.KubeConfig();
kc.loadFromDefault();

This automatically loads our Kubernetes token from ~/.kube/config or /run/secrets/kubernetes.io/serviceaccount, depending on whether our code runs on a workstation or from within a pod.

Our Operator will be using three Kubernetes APIs:

  • The AppsV1 API to create a Deployment object and update its replica count
  • The CustomObjects API to watch our custom resource and update its status
  • The CoreV1 API to get the names of the pods created by our Deployment

We therefore create a client for each of those APIs:

const k8sApi = kc.makeApiClient(k8s.AppsV1Api);
const k8sApiMC = kc.makeApiClient(k8s.CustomObjectsApi);
const k8sApiPods = kc.makeApiClient(k8s.CoreV1Api);

Since the Kubernetes client comes with TypeScript type definitions, we can use auto-completion to browse the APIs and figure out which methods we need, for example with VS Code’s auto-complete feature on the pods API.

Our Operator’s core job is to watch our custom Memcached resources. We start by creating a Watch object and a function that watches the URL of our Memcached resources:

const watch = new k8s.Watch(kc);

async function watchResource(): Promise<any> {
  log("Watching API");
  return watch.watch(
    `/apis/${MEMCACHED_GROUP}/${MEMCACHED_VERSION}/namespaces/${NAMESPACE}/${MEMCACHED_PLURAL}`,
    {},
    onEvent,
    onDone,
  );
}
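The watch path above is built from a few constants describing our CRD. Here is a sketch of plausible values (the group and version match the sample resource; the plural memcacheds and the namespace are assumptions, so check the repository for the actual definitions):

```typescript
// Constants describing the CRD; these values are illustrative assumptions.
const MEMCACHED_GROUP = "cache.example.com";
const MEMCACHED_VERSION = "v1";
const MEMCACHED_PLURAL = "memcacheds";
const NAMESPACE = "ts-operator";

// The resulting watch path, as interpolated in watchResource() above.
const watchPath = `/apis/${MEMCACHED_GROUP}/${MEMCACHED_VERSION}/namespaces/${NAMESPACE}/${MEMCACHED_PLURAL}`;
```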

The onDone callback function is called when we get disconnected from the API server. This can happen, for example, when the master node running the API server pod we are connected to is rebooted. Since we want to maintain a constant connection, we simply reconnect on that event:

function onDone(err: any) {
  log(`Connection closed. ${err}`);
  watchResource();
}

 


The onEvent callback function is called when one of our custom resources is created, modified, or deleted. We call our reconcile function when we receive the ADDED and MODIFIED events, and delete our Memcached Deployment resource when we receive the DELETED event:

async function onEvent(phase: string, apiObj: any) {
  log(`Received event in phase ${phase}.`);
  if (phase == "ADDED") {
    scheduleReconcile(apiObj);
  } else if (phase == "MODIFIED") {
    scheduleReconcile(apiObj);
  } else if (phase == "DELETED") {
    await deleteResource(apiObj);
  } else {
    log(`Unknown event type: ${phase}`);
  }
}

We use our k8sApi object created earlier to delete the Deployment object:

async function deleteResource(obj: Memcached) {
  log(`Deleted ${obj.metadata.name}`);
  return k8sApi.deleteNamespacedDeployment(obj.metadata.name!, NAMESPACE);
}

Now let’s go over our reconciliation logic. The first thing we do is wait one second before executing the reconcileNow function. Because our reconciliation logic is idempotent, there is little benefit in processing events that arrive almost at the same time; instead, we coalesce them and reconcile at most once per second:

let reconcileScheduled = false;

function scheduleReconcile(obj: Memcached) {
  if (!reconcileScheduled) {
    setTimeout(reconcileNow, 1000, obj);
    reconcileScheduled = true;
  }
}
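This coalescing behavior can be sketched independently of Kubernetes. The delay is parameterized here (the real code uses a fixed 1,000 ms) so the pattern is easy to test: a burst of calls while a reconcile is pending collapses into a single run.

```typescript
// Generic version of the scheduleReconcile debounce: while a reconcile
// is already scheduled, further requests are dropped, so a burst of
// events results in exactly one reconciliation.
let reconcileScheduled = false;
let reconcileCount = 0;

function reconcileNow(): void {
  reconcileScheduled = false;
  reconcileCount += 1;
}

function scheduleReconcile(delayMs: number = 1000): void {
  if (!reconcileScheduled) {
    reconcileScheduled = true;
    setTimeout(reconcileNow, delayMs);
  }
}
```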

We then either update/replace the deployment object if it already exists, or create it if it does not. We have a template of the deployment object stored in the memcached-deployment.json file:

async function reconcileNow(obj: Memcached) {
  reconcileScheduled = false;
  const deploymentName: string = obj.metadata.name!;
  // Check if the deployment exists; create it if it doesn't.
  try {
    const response = await k8sApi.readNamespacedDeployment(deploymentName, NAMESPACE);
    // Patch the existing deployment.
    const deployment: k8s.V1Deployment = response.body;
    deployment.spec!.replicas = obj.spec!.size;
    await k8sApi.replaceNamespacedDeployment(deploymentName, NAMESPACE, deployment);
  } catch (err) {
    // Create the deployment from the template.
    const newDeployment: k8s.V1Deployment = JSON.parse(deploymentTemplate);
    newDeployment.metadata!.name = deploymentName;
    newDeployment.spec!.replicas = obj.spec!.size;
    newDeployment.spec!.selector!.matchLabels!["deployment"] = deploymentName;
    newDeployment.spec!.template!.metadata!.labels!["deployment"] = deploymentName;
    await k8sApi.createNamespacedDeployment(NAMESPACE, newDeployment);
  }

 

Finally, we get the list of pods in the deployment and set this as our custom resource’s status:

const status: Memcached = {
  apiVersion: obj.apiVersion,
  kind: obj.kind,
  metadata: {
    name: obj.metadata.name!,
    resourceVersion: obj.metadata.resourceVersion,
  },
  status: {
    pods: await getPodList(`deployment=${obj.metadata.name}`),
  },
};
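Since the CustomObjects client is what ultimately writes this status back to the API server, it can help to isolate the construction of the status object in a pure helper. The helper below is not in the original code; it is an equivalent, easily testable formulation of the object literal above, with the types simplified so the sketch is self-contained:

```typescript
// Simplified metadata shape so the sketch stands alone.
interface MetaLike {
  name?: string;
  resourceVersion?: string;
}

interface MemcachedLike {
  apiVersion: string;
  kind: string;
  metadata: MetaLike;
}

// Build the status payload for a Memcached resource from a pod list.
function buildStatus(obj: MemcachedLike, pods: string[]) {
  return {
    apiVersion: obj.apiVersion,
    kind: obj.kind,
    metadata: {
      name: obj.metadata.name!,
      resourceVersion: obj.metadata.resourceVersion,
    },
    status: { pods },
  };
}
```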

The current list of pods is obtained by calling the pods API:

async function getPodList(podSelector: string): Promise<string[]> {
  try {
    const podList = await k8sApiPods.listNamespacedPod(
      NAMESPACE,
      undefined,
      undefined,
      undefined,
      undefined,
      podSelector,
    );
    return podList.body.items.map((pod) => pod.metadata!.name!);
  } catch (err) {
    log(err);
  }
  return [];
}

Running Our Operator

We can run our operator in two ways:

  1. For real-world scenarios, we run the operator in a pod in the ts-operator namespace.
  2. For experimentation and testing, we can also run the operator from our workstation, as a traditional Node.js application.

In the first configuration, the operator uses the token mounted in the pod under /run/secrets/kubernetes.io/serviceaccount to authenticate to the OpenShift API from within the cluster.

The OpenShift resources we created earlier include BuildConfig and Deployment resources that already run the operator in this manner. You can verify that the operator pod is running by typing the following command:

oc logs -f deployment/ts-operator

This will tail the logs of the operator pod.


In the second configuration, the operator uses the token found in our ~/.kube/config file. To run the operator this way, start by scaling down the ts-operator deployment running in the ts-operator namespace (otherwise you would have two instances of the operator running):

oc scale --replicas=0 deployment/ts-operator

We can then build and run the operator using the following command:

npm run build && npm run start

If your cluster uses self-signed certificates, you can run the following command before npm run start to tell Node.js to ignore certificate validation errors (for development only):

export NODE_TLS_REJECT_UNAUTHORIZED=0

The npm run build command runs the TypeScript compiler to produce the dist/index.js file, and npm run start runs node dist/index.js to execute the operator.
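For reference, the scripts section of package.json presumably looks something like the following (a sketch; the actual script definitions are in the repository):

```json
{
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  }
}
```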

Seeing the Operator in Action

Now that our operator is running, we can create custom resources of kind Memcached to see it in action. A sample resource is provided in the git repository and can be created using the following command:

oc create -f resources/memcached-sample.yaml

This sample resource has the attribute size: 2, so our operator will create two Memcached pods. The corresponding events appear in our ts-operator pod logs.


We can also see the names of the pods listed in the status field of our custom resource.


We can then modify the size attribute of our custom resource and watch the operator change the size of the cluster:

oc edit memcached memcached-sample
#Replace size:2 with size:5, save and exit your text editor

Then watch your operator create new pods:

oc get pods -w

Cleaning up

To delete all the resources created in this project:

oc delete -k resources

Conclusion

In this article, we showed how to use JavaScript and TypeScript to develop a simple OpenShift Operator. We showed how to set up a JavaScript and OpenShift environment and how to interact with the OpenShift API to implement the operator’s logic.


About the author

Guillaume Radde is an architect within the Red Hat Services organization. Since 2011, he has been supporting Red Hat customers all over the United States, helping them build solutions using Red Hat technology. Guillaume is passionate about software development and containers. He holds a master’s degree in software engineering and is both a Red Hat Certified Architect in Infrastructure and a Red Hat Certified Architect in Enterprise Applications.
