Windows Operating System in a container? Who would have thought?! If you had asked me that question a few years back, I would have told you with conviction that it would never happen. But if you ask me now, I will answer with a big, emphatic yes, and even show you how to do it!

In this article, I will demonstrate how you can run Windows workloads in OpenShift 4.6 by deploying a Windows container on a Windows worker node. I will then highlight some of the issues and challenges that I see from a system administrator's perspective.

But before diving straight into it, let us discuss the underlying technology that makes Windows containers a reality.

A Brief History of Windows Containers

In 2016, Microsoft partnered with Docker to create a container engine implementing the Docker specification. This made it easy to run containers natively on Windows with the tools you were already using:

docker run -it microsoft/windowsservercore cmd

During its early years of Windows container exploration, Microsoft released two implementations:

  1. Process Containers (also known as Windows Server Containers — WSC)
  2. Hyper-V Containers

Windows Server Containers share a kernel with the container host and all the containers running on that host. In contrast, each Hyper-V Container runs inside a lightweight virtual machine with its own kernel, so the kernel of the container host is not shared.
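Docker on Windows makes this distinction concrete through its isolation modes. A quick illustration, assuming Docker on a Windows Server host (the image tag below is an example and should match your host build):

# Process isolation: the container shares the host kernel (WSC)
docker run -it --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd

# Hyper-V isolation: the container gets its own kernel inside a utility VM
docker run -it --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2019 cmd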

Note that for this discussion, we will only talk about WSC. The following is the high-level architecture of Windows Containers:

[Figure: high-level architecture of Windows Containers. Source: DockerCon 2016]

Limitations of Windows Containers

  1. Windows image size: Compared to their Linux counterparts, Windows images are much larger. A team at Microsoft is working on reducing the overall size, building on the work done for Nano Server. However, several features were removed in that stripped-down version, as described here. The most notable loss is PowerShell Core, a reduced-footprint edition of PowerShell built on .NET Core that runs on the reduced-footprint editions of Windows.
  2. GUI applications: In the Windows world, many applications are designed with a UI in mind. Containers are headless, so this paradigm does not carry over.
  3. Limited Windows Server version support: Windows containers let you share the Windows kernel without an issue, but you cannot completely isolate an application from the system services and DLLs. As a result, you are limited to running containerized workloads whose Windows version matches that of the underlying Windows nodes (see the version-check sketch after this list). Linux containers do not have this limitation.
  4. Windows anti-virus: Windows Server requires antivirus software in production, which adds resource overhead on the Windows worker nodes.
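To illustrate limitation 3, you can compare the Windows node's build number against the os.version a given image targets. A sketch (the build numbers shown are illustrative, and docker manifest inspect may require experimental CLI features to be enabled):

# On the Windows node: report the host OS build
PS C:\> cmd /c ver
Microsoft Windows [Version 10.0.17763.1518]

# From any Docker client: inspect the build the image targets
$ docker manifest inspect mcr.microsoft.com/windows/servercore:ltsc2019 | grep os.version
        "os.version": "10.0.17763.1518"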

Windows Containers in OpenShift 4.6

For customers who have heterogeneous environments with a mix of Linux and Windows workloads, the announcement of supported Windows containers in OpenShift 4.6 is exciting news. As of this writing, the supported workloads are .NET Core applications, traditional .NET Framework applications, and other Windows applications that run on Windows Server.

So when did the work to make Windows containers run on top of OpenShift start? In 2018, Red Hat and Microsoft announced a joint engineering collaboration with the goal of bringing supported Windows containers to OpenShift.

Now, why would you want to run your Windows containers in OpenShift?

The easy answer to that question is that you can now run your Windows applications in a scheduled, orchestrated, and managed manner, automatically inheriting all the benefits of running a workload in OpenShift, including:

  • Faster time to market
  • Accelerated application development
  • No public cloud lock-in
  • DevOps enablement and collaboration
  • Self-service provisioning

Take a Dip into Windows Containers in OpenShift 4.6

To try Windows containers in OpenShift, the assumption is that you have a running OCP 4.6 cluster on either AWS or Azure. The following steps were tested on OCP 4.6 on AWS.

Pre-requisites:

  • OCP 4.6 installed and configured
  • Windows node(s) added to the OCP 4.6 cluster
  • Working login to the Windows nodes via ssh-bastion-containers (a sample command follows this list)
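For that last prerequisite, logging in via the ssh bastion tunnels through a bastion host to the Windows node's internal IP. A minimal sketch, assuming the bastion is already deployed; <bastion-host> and <windows-node-internal-ip> are placeholders you must fill in:

# Tunnel through the bastion to reach the Windows node's internal IP
ssh -t -o StrictHostKeyChecking=no \
    -o ProxyCommand='ssh -A -W %h:%p core@<bastion-host>' \
    Administrator@<windows-node-internal-ip>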

Note: If you are interested in the steps necessary for adding a Windows Node on your existing OCP cluster, see the following link for more information.

First, let us verify the OCP version:

[mcalizo@bastion ~]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2020-10-03-051134   True        False         59m     Cluster version is 4.6.0-0.nightly-2020-10-03-051134
[mcalizo@localhost ~]$

Then, let us make sure we have a working Windows worker node:

[mcalizo@bastion ~]$ oc get nodes -l kubernetes.io/os=windows -o wide
NAME                                             STATUS   ROLES    AGE   VERSION                            INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION    CONTAINER-RUNTIME
ip-10-0-140-53.ap-southeast-1.compute.internal   Ready    worker   45m   v1.19.0-rc.2.1023+f5121a6a6a02dd   10.0.140.53   <none>        Windows Server 2019 Datacenter   10.0.17763.1518   docker://19.3.12

Notice the Windows Worker node version above. We are using Windows Server 2019 Datacenter for this demonstration.

Now that we have done the pre-checks, let us deploy a Windows container!

Next, create a project/namespace where you want to deploy your Windows containers:

[mcalizo@localhost ~]$ oc new-project windows-containers

Let us deploy a Windows container:

[mcalizo@bastion ~]$ oc create -f windows-sample.yaml
service/win-webserver created
deployment.apps/win-webserver created

The windows-sample.yaml manifest instantiated a service and a deployment; the source can be found here.
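If you do not have the manifest handy, here is a minimal sketch modeled on the upstream Kubernetes Windows web server sample. The image tag, the taint/toleration pair, and the inline PowerShell listener are assumptions that may need adjusting for your cluster:

apiVersion: v1
kind: Service
metadata:
  name: win-webserver
  labels:
    app: win-webserver
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: win-webserver
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-webserver
  labels:
    app: win-webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-webserver
  template:
    metadata:
      labels:
        app: win-webserver
    spec:
      # Schedule only onto Windows nodes; tolerate the Windows taint if your
      # MachineSet applies one (the key/value below are an assumption)
      nodeSelector:
        kubernetes.io/os: windows
      tolerations:
      - key: "os"
        value: "Windows"
        effect: "NoSchedule"
      containers:
      - name: windowswebserver
        image: mcr.microsoft.com/windows/servercore:ltsc2019
        imagePullPolicy: IfNotPresent
        # Minimal HTTP listener in PowerShell that serves a static page
        command:
        - powershell.exe
        - -command
        - "$l = New-Object System.Net.HttpListener; $l.Prefixes.Add('http://*:80/'); $l.Start(); while ($l.IsListening) { $c = $l.GetContext(); $b = [Text.Encoding]::UTF8.GetBytes('<html><body><H1>Windows Container Web Server</H1></body></html>'); $c.Response.OutputStream.Write($b, 0, $b.Length); $c.Response.Close() }"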

After a few minutes, you will see the container running. Immediately after creation, it will still show as ContainerCreating:

[mcalizo@bastion ~]$ oc get pods
NAME                             READY   STATUS              RESTARTS   AGE
win-webserver-549cd7495d-6xxfl   0/1     ContainerCreating   0          5s

Now, locate the service that was created so that we can expose access to the application:

[mcalizo@bastion ~]$ oc get service
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP                                                                   PORT(S)        AGE
win-webserver   LoadBalancer   172.30.232.23   adbfbbd697cf443da8092bbe34fab053-444873145.ap-southeast-1.elb.amazonaws.com   80:30221/TCP   2m32s

Exposing the service win-webserver:

[mcalizo@bastion ~]$ oc expose svc win-webserver
route.route.openshift.io/win-webserver exposed

Verifying the newly created route:

[mcalizo@bastion ~]$ oc get routes
NAME            HOST/PORT                                                                                 PATH   SERVICES        PORT   TERMINATION   WILDCARD
win-webserver   win-webserver-bwindowscontainers.apps.cluster-wel-4783.wel-4783.sandbox1545.opentlc.com          win-webserver   80                   None

Testing if we can access the route:

[mcalizo@bastion ~]$ curl -k win-webserver-bwindowscontainers.apps.cluster-wel-4783.wel-4783.sandbox1545.opentlc.com
<html><body><H1>Windows Container Web Server</H1></body></html>
[mcalizo@bastion ~]$

Scaling Using the Windows Machine Config Operator

Windows nodes are installed and managed by the Windows Machine Config Operator (WMCO) in much the same fashion as any other node. This means that we can scale the Windows worker nodes up and down as desired.
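Under the hood, the Windows nodes come from a MachineSet whose machine template carries a Windows OS label, which is also what we query below. A trimmed sketch of the relevant fields, with provider-specific details omitted and the taint being an assumption based on common setups:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: cluster-wel-4783-mvtd2-windows-ap-southeast-1a
  namespace: openshift-machine-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        # Label that marks the resulting machines as Windows machines
        machine.openshift.io/os-id: Windows
    spec:
      # Taint so that only workloads with a matching toleration land here
      taints:
      - key: "os"
        value: "Windows"
        effect: "NoSchedule"
      # providerSpec (AMI, instance type, subnets, etc.) omitted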

Let us verify the Windows MachineSets:

[mcalizo@bastion ~]$ oc get machinesets -n openshift-machine-api | grep windows
cluster-wel-4783-mvtd2-windows-ap-southeast-1a   1         1         1       1           95m

We also need to verify the Windows machine:

[mcalizo@bastion ~]$ oc get machines -n openshift-machine-api  -l machine.openshift.io/os-id=Windows
NAME                                                   PHASE     TYPE        REGION           ZONE              AGE
cluster-wel-4783-mvtd2-windows-ap-southeast-1a-mt6xq   Running   m5a.large   ap-southeast-1   ap-southeast-1a   99m

With the information above, we can now scale the MachineSet up to 2 replicas:

[mcalizo@bastion ~]$ oc scale machineset cluster-wel-4783-mvtd2-windows-ap-southeast-1a --replicas=2 -n openshift-machine-api
machineset.machine.openshift.io/cluster-wel-4783-mvtd2-windows-ap-southeast-1a scaled

After that, we can verify the number of Windows machine replicas. Below, you will notice 2 machines, one of which is being provisioned:

[mcalizo@bastion ~]$ oc get machines -n openshift-machine-api  -l machine.openshift.io/os-id=Windows
NAME                                                   PHASE         TYPE        REGION           ZONE              AGE
cluster-wel-4783-mvtd2-windows-ap-southeast-1a-bqj8z   Provisioned   m5a.large   ap-southeast-1   ap-southeast-1a   66s
cluster-wel-4783-mvtd2-windows-ap-southeast-1a-mt6xq   Running       m5a.large   ap-southeast-1   ap-southeast-1a   104m
[mcalizo@bastion ~]$

Observations

Browsing the OCP UI, I noticed that unlike Linux containers, Windows containers do not have a working dashboard for monitoring and metrics. In addition, the pod terminal is not working due to this bug, although it is already fixed upstream and will be delivered in OCP 4.7.

Conclusions

Windows containers on OpenShift are now possible, and the potential use cases are really exciting. This is especially true for customers who have heterogeneous environments with a mix of Linux and Windows workloads: they can now reap the benefits of Kubernetes, which provides a consistent platform for their applications and helps them transform and evolve with market needs.

There are still a number of challenges that must be overcome before we can say that Windows containers are ready for production deployment. It is only a matter of time, though: as the technology continues to mature through the efforts of the Kubernetes community, Windows containers will come to be deemed enterprise ready. I am positive that we will see wider adoption of this feature very soon!

I expect antivirus operators to be the next step…
