The eternal struggle of application development is choosing between paying down technical debt and adding new features. Why not do both? This article explores modernization strategies enabled by OpenShift Virtualization that can help you do just that.

Legacy applications are usually monolithic and run on one or more virtual machines. Some applications are easier to modernize because they have well-established counterparts in the containerized world (EAP, Spring Boot, etc.). Large classic .NET applications running on IIS on Windows Server are much harder to modernize in one shot. OpenShift Virtualization allows you to import your existing VM workloads into OpenShift and modernize your application in stages.

Virtual machines are first-class citizens in OpenShift and have access to all the artifacts that pods do, including the ability to access, and be accessed through, service endpoints. Once you have the VM running in your OpenShift project, you can start to modernize and extend the functionality of your application.

Shift and Modernize

In the following use case, we will take a .Net application running on IIS on a Windows Server VM and import it into OpenShift Virtualization. Then we will go through the stages of containerizing each of the logical layers of the application. Note that this strategy can be used with other OS and Middleware combinations.

iterative-modernization

Stage 1 - Import VM to OpenShift Virtualization

You can import a VM into OpenShift in several ways. For VMware VMs, you can use the built-in migration tool, which lets you connect to a vSphere instance and import VMs directly into OpenShift Virtualization. For other platforms, including Hyper-V and Xen, you can use virt-v2v to convert the VM image to one usable by OpenShift Virtualization. Once converted, you can import the VM image using the methods explained in my previous blog post, "Getting Started with OpenShift Virtualization".
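As a rough sketch, converting a Hyper-V disk image with virt-v2v might look like the following; the input file name is hypothetical, and the output location is just an example:

```shell
# Convert a hypothetical Hyper-V VHDX disk to qcow2 for OpenShift Virtualization.
# -i disk     : treat the input as a raw guest disk image
# -o local    : write the converted image to a local directory
# -of qcow2   : produce a qcow2-format output image
virt-v2v -i disk win2019-iis.vhdx \
  -o local -os /var/tmp \
  -of qcow2
```

The converted image in /var/tmp can then be uploaded to the cluster using one of the import methods covered in the earlier post.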

The VM is spun up using a configuration based on this sample YAML. The YAML contains a VM definition as well as Service and Route objects. It also contains feature configurations that will make the Windows VM more performant on OpenShift Virtualization.

Relevant section:

        features:
          acpi: {}
          apic: {}
          hyperv:
            reenlightenment: {}
            ipi: {}
            synic: {}
            synictimer: {}
            spinlocks:
              spinlocks: 8191
            reset: {}
            relaxed: {}
            vpindex: {}
            runtime: {}
            tlbflush: {}
            frequencies: {}
            vapic: {}
            evmcs: {} # do not use evmcs if testing with nested virt

After applying the YAML, we can access the Windows UI through the VNC console provided by OpenShift Virtualization.

stage1-vm

We can now also access the application via the exposed route.

stage1-app-screenshot

Stage 2 - Containerize App UI

It's not always feasible to migrate everything at once, especially for large applications. There might be dependencies on libraries that only exist in classic .NET Framework, and additional effort would be required to move to the equivalent libraries available for .NET Core.

Luckily, we don't have to migrate in one shot. In this case, we will start with the web UI. Once the UI layer is migrated to .NET Core, we can create a container image to be used in OpenShift. The new UI pod will consume the API provided by the existing API site running in IIS on the VM.
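A minimal Deployment for the new UI pod might look like the following sketch; the image path, namespace, and the API_BASE_URL variable name are assumptions, with the URL pointing at the service that fronts the IIS VM:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clientlist-ui
  namespace: stage2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: clientlist-ui
  template:
    metadata:
      labels:
        app: clientlist-ui
    spec:
      containers:
      - name: ui
        # Example internal-registry image path; adjust for your cluster
        image: image-registry.openshift-image-registry.svc:5000/stage2/clientlist-ui:latest
        ports:
        - containerPort: 8080
        env:
        # Hypothetical variable name; points at the service for the IIS VM
        - name: API_BASE_URL
          value: http://stage2-win2019-iis-vm.stage2.svc.cluster.local
```

The in-cluster DNS name in API_BASE_URL follows the standard `<service>.<namespace>.svc.cluster.local` pattern, so the UI pod reaches the VM the same way it would reach any other pod.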

This Dockerfile will compile your .NET Core source code and create a runtime container image. This can also be done via S2I builds.

# https://hub.docker.com/_/microsoft-dotnet-core
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /source

# copy csproj and restore as distinct layers
COPY source/*.csproj ./
RUN dotnet restore

# copy and publish app and libraries
COPY source/ ./
RUN dotnet publish -c release -o /app --no-restore

# final stage/image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "ClientListUI.dll"]

You can use podman to build and push the image to your registry. Once done, you can deploy the web UI pod alongside the application VM.
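The build-and-push step can be sketched with podman as follows; the registry host, namespace, and image name are placeholders for your own values:

```shell
# Build the runtime image from the Dockerfile above
podman build -t clientlist-ui:latest .

# Tag and push to the cluster's exposed registry route (example host)
podman tag clientlist-ui:latest \
  default-route-openshift-image-registry.apps.example.com/stage2/clientlist-ui:latest
podman push \
  default-route-openshift-image-registry.apps.example.com/stage2/clientlist-ui:latest
```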

stage2-diagram

Stage 3 - Containerize API

Following the same process as the UI migration, migrate the API project to .NET Core and deploy the API pod. At this stage, the original VM is only used to host the SQL Server database, so the Service for the VM is updated to expose only the database port.

kind: Service
apiVersion: v1
metadata:
  name: stage3-win2019-db-vm
  namespace: stage3
  labels:
    kubevirt.io: virt-launcher
    kubevirt.io/domain: stage3-win2019-iis-vm
spec:
  ports:
  - protocol: TCP
    port: 1433
    targetPort: 1433
  selector:
    kubevirt.io: virt-launcher
    kubevirt.io/domain: stage3-win2019-iis-vm

Once deployed, the app UI pod will use the service URL of the app API pod to make its API calls. The API app will then use the service URL for the VM to make its database queries.
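In the API pod's Deployment, that database connection can be expressed as environment variables; the secret, key, and connection-string names here are hypothetical (.NET maps the double underscore to the ":" configuration separator):

```yaml
env:
# SA password pulled from a hypothetical secret
- name: SA_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mssql-secret
      key: SA_PASSWORD
# Hypothetical key name; "__" becomes ":" in .NET configuration,
# and the server name is the Service fronting the database VM
- name: ConnectionStrings__ClientListDb
  value: "Server=stage3-win2019-db-vm,1433;Database=ClientList;User Id=sa;Password=$(SA_PASSWORD)"
```

Because the connection string references only the Service name, later repointing the API at a different database backend is a matter of changing this one value.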

stage3-diagram

Stage 4 - Containerize the database

Yes, you read that right. Migrate to an MS SQL instance running in a RHEL container. While this statement might have been unthinkable 5 years ago, today it's a real option and one that works well. Please visit the Red Hat Catalog page for MS SQL for more information on how to configure and run MS SQL in a container.
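A minimal Deployment for the SQL Server pod might look like this sketch; it uses Microsoft's public container image as a stand-in for the Red Hat Catalog image, and the secret and PVC names are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sql2019
  namespace: stage4
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sql2019
  template:
    metadata:
      labels:
        app: sql2019
    spec:
      containers:
      - name: mssql
        # Example image; the Red Hat Catalog entry provides a RHEL-based equivalent
        image: mcr.microsoft.com/mssql/server:2019-latest
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql-secret
              key: SA_PASSWORD
        volumeMounts:
        # SQL Server stores its data under /var/opt/mssql
        - name: mssql-data
          mountPath: /var/opt/mssql
      volumes:
      - name: mssql-data
        persistentVolumeClaim:
          claimName: mssql-data
```

Backing the data directory with a PVC keeps the databases intact across pod restarts.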

Once the SQL pod is up you can use port forwarding to access the database through MS SQL Management Studio or Azure Data Studio.

└──╼ oc get pod | grep sql | grep Running
sql2019-1-1-gp29h 1/1 Running 0 39h

└──╼ oc port-forward sql2019-1-1-gp29h 1433:1433
Forwarding from 127.0.0.1:1433 -> 1433
Forwarding from [::1]:1433 -> 1433
Handling connection for 1433

You can now migrate the database from the MS SQL instance running in the Windows VM to the SQL Server container. Once that's done, update the connection string for the App API pod to point to the service for the new MS SQL pod.
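One way to move the data is a backup-and-restore through the pod; the pod, file, and database names below are placeholders matching the earlier examples:

```shell
# Copy a backup taken on the Windows VM into the SQL Server pod's data directory
oc cp ClientList.bak sql2019-1-1-gp29h:/var/opt/mssql/data/ClientList.bak

# Restore it inside the container with sqlcmd
oc exec sql2019-1-1-gp29h -- /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U sa -P "$SA_PASSWORD" \
  -Q "RESTORE DATABASE ClientList FROM DISK = '/var/opt/mssql/data/ClientList.bak'"
```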

At this point, the app is completely containerized. So it's time to salute that VM for its service and shut it down one last time.

stage4-diagram

Conclusion

By migrating VM workloads using the shift-and-modernize strategy, we can reduce technical debt, minimize the regressions that can occur during the process, and allow new features to be developed simultaneously.