We’re often asked which workloads or application components are suitable to run as containers. Some people simply assume anything and everything should be run as a container. Shoving everything wholesale into containers without careful consideration of suitability is a path fraught with danger (and dumpster fires!).

A particular item that demands careful attention is the suitability of containerized Relational Database Management System (RDBMS) workloads on OpenShift. This is the topic we will discuss here, highlighting the important points to consider. Unfortunately, the answer isn’t a simple “yes” or “no,” but rather an “it depends.” We begin with some potential challenges to keep in mind, and then present ways to address them and the circumstances under which they apply.

  • Stateful Workloads

    Statefulness takes many forms. In application design, it means any data about the client session that is shared between requests for later use. Of particular concern are applications that save their state to local storage. Because container orchestration technologies were originally designed with limited consideration for stateful workloads, such workloads remain in many ways second-class citizens in the container landscape. Keep this point in mind when assessing workloads. Fortunately, Kubernetes and OpenShift have made stateful workloads easier to support through persistent volumes (PVs) and StatefulSet objects.
    A wide array of technologies can provide supported (and unsupported) storage for persistent volumes. The storage technology needs to be examined because it directly impacts the database, including the durability of the data, access speeds, and concurrent reads and writes.
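
    As an illustration, a PersistentVolumeClaim for a database might look like the following minimal sketch (the claim name and the storage class fast-block are assumptions; use whatever block-storage class your cluster provides):

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: mydb-data                # hypothetical claim name
      spec:
        accessModes:
          - ReadWriteOnce              # single-node read/write; the typical mode for an RDBMS
        storageClassName: fast-block   # assumption: a block-storage class offered by your cluster
        resources:
          requests:
            storage: 20Gi              # size for the dataset plus headroom for growth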

    With regard to the backing technology: if you or someone you know is using native Linux NFS to back PVs, this is a TERRIBLE idea, and no workload should use it in production, let alone a database workload. This is one of the few circumstances with a clear-cut answer, and that answer is to nuke it from orbit.

  • Disaster Recovery (DR)
    Given the above storage considerations, how well the RDBMS can recover from disasters is a major consideration. In our experience, most RDBMSs do not react well to abrupt failures and often require some sort of “manual” intervention that is antithetical to modern container practices.
  • OpenShift Load Balancing
    By default, the Kubernetes/OpenShift scheduler makes a reasonable attempt at balancing compute resources across available nodes and will sometimes shift workloads when an individual node becomes saturated and bogged down. This can cause a running pod containing an RDBMS to be halted and its workload shifted to another node. As with the DR concerns, this often doesn’t behave predictably, and the RDBMS frequently restarts in an error or faulty state that requires undesirable manual intervention.
    A related situation arises when a worker node is marked unschedulable and its pods are evacuated for regular maintenance activities such as patching. One way to guard against these voluntary disruptions is sketched below.
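
    The minimal PodDisruptionBudget sketch below (the name and the app label are assumptions) refuses voluntary eviction of a single-replica database pod, so a node drain waits until the database is shut down in a controlled fashion:

      apiVersion: policy/v1
      kind: PodDisruptionBudget
      metadata:
        name: mydb-pdb               # hypothetical name
      spec:
        maxUnavailable: 0            # never voluntarily evict the pod; drains will block and wait
        selector:
          matchLabels:
            app: mydb                # assumption: the database pods carry this label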
  • Auto-scaling
    A big selling point for OpenShift is its built-in ability to auto-scale pods and load-balance incoming requests as demand increases, by monitoring a pod’s CPU and memory utilization. Unfortunately, most RDBMS technologies don’t react well to being scaled up or down on demand. In addition, the backing persistent volume would need to support multiple pods mounting the same volume (the ReadWriteMany access mode), which many storage backends do not offer. This can be mitigated by the techniques discussed below, chiefly running a single replica and disabling autoscaling.
  • Long-running Processes
    One of the big promises of Kubernetes/OpenShift is the ability to scale containers up and down to meet demand. Because a database is by nature a long-running process that cannot safely be scaled out or scaled to zero, you incur the overhead of Kubernetes managing the container without gaining that elasticity. The result is CPU cycles spent managing a container that will never be scaled up or down.
  • Dataset Size
    Containers historically hosted smaller workloads with correspondingly small memory and CPU footprints; Kubernetes resource requests, for example, are often measured in millicores. Larger workloads are certainly possible to host on OpenShift, but they raise concerns about load distribution across the cluster and appropriate worker node sizing.
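
    For context, requests and limits are declared per container; the fragment below (values are purely illustrative) shows how a larger database pod dwarfs the millicore-scale requests typical of microservices:

      resources:
        requests:
          cpu: 2000m        # two full cores; many microservices request only 100m-500m
          memory: 8Gi
        limits:
          cpu: 4000m
          memory: 16Gi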
  • Red Hat Provided Images
    The Red Hat-provided RDBMS container images may have limited support and generally fall under the category of “commercially reasonable support.” This should be carefully weighed if the database is to be trusted with production data, since direct support for the RDBMS itself may be limited.
Given the above considerations, there are some advantages and paths forward to using RDBMS deployed via containers on OpenShift:

  • Developer Self-Provisioning
    Using RDBMS images for developer self-provisioning via the OpenShift catalog is a marked advantage. Developers can experiment with database technologies they may be unfamiliar with and attempt novel techniques in a setting that is lower risk than a self-managed RDBMS, and they can create instances on the fly without reaching out to another team. This reduces the workload of the database teams and lowers the barrier to entry for experimentation. It also gives developers their own temporary database instances to use while they develop, reducing the need for coordination between developers and teams.
  • Non-Production Environments
    Tying in with the idea of self-provisioning, using containerized databases in non-production environments promises similar benefits. With containers it is easy to provision new instances, so databases can be created from scratch before each test run and destroyed when the run completes. This guarantees that the testing conditions are the same between runs.
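
    As a minimal sketch of such a throwaway instance (the image, names, and credentials are illustrative), an emptyDir volume means the data vanishes with the pod, which is exactly what repeatable test runs want:

      apiVersion: v1
      kind: Pod
      metadata:
        name: test-postgres        # hypothetical per-test-run instance
      spec:
        containers:
          - name: postgres
            image: registry.redhat.io/rhel9/postgresql-15   # assumption: adjust to your registry and tag
            env:
              - name: POSTGRESQL_USER
                value: test
              - name: POSTGRESQL_PASSWORD
                value: test        # throwaway test credentials; never do this in production
              - name: POSTGRESQL_DATABASE
                value: testdb
            volumeMounts:
              - name: data
                mountPath: /var/lib/pgsql/data
        volumes:
          - name: data
            emptyDir: {}           # ephemeral storage, discarded when the pod is deleted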
  • The following Specific Techniques should also be considered:
    • Deploy as a StatefulSet
      StatefulSet objects are a newer workload resource designed specifically as the Kubernetes API object for managing stateful applications. With Deployments there was no guarantee of start order for individual pods, nor of their uniqueness, making them entirely ill-suited to orchestrating stateful applications such as databases.

      StatefulSet objects are designed to manage applications that require one or more of the following, most of which a database requires (a minimal manifest sketch follows the list):
      • Stable, unique network identifiers.
      • Stable, persistent storage.
      • Ordered, graceful deployment and scaling.
      • Ordered, automated rolling updates.
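
      Putting those properties together, a minimal StatefulSet sketch for a database might look like the following (the names, image, and sizes are assumptions, and credentials would come from a Secret in real use):

        apiVersion: apps/v1
        kind: StatefulSet
        metadata:
          name: mydb
        spec:
          serviceName: mydb            # headless Service supplying the stable network identity
          replicas: 1                  # a single replica; see the autoscaling note below
          selector:
            matchLabels:
              app: mydb
          template:
            metadata:
              labels:
                app: mydb
            spec:
              containers:
                - name: postgres
                  image: registry.redhat.io/rhel9/postgresql-15   # assumption: adjust as needed
                  env:
                    - name: POSTGRESQL_ADMIN_PASSWORD
                      value: changeme              # assumption: use a Secret reference in real use
                  ports:
                    - containerPort: 5432
                  volumeMounts:
                    - name: data
                      mountPath: /var/lib/pgsql/data
          volumeClaimTemplates:        # gives each pod its own stable, persistent PVC
            - metadata:
                name: data
              spec:
                accessModes: ["ReadWriteOnce"]
                resources:
                  requests:
                    storage: 20Gi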
    • Dedicated Worker Nodes (pinning)
      Given the generally important role of databases and their data, it’s recommended to provide dedicated worker nodes. This can be done by steering the pod scheduler with node selectors or node affinity rules, typically combined with taints on the dedicated nodes.
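
      A sketch of that pinning (the node label and taint are assumptions you would first apply to the dedicated nodes with oc label node and oc adm taint nodes) combines a nodeSelector in the pod template with a matching toleration, so database pods land on the dedicated nodes and everything else stays off them:

        # Added to the StatefulSet pod template spec:
        nodeSelector:
          workload/database: "true"    # assumption: a label applied to the dedicated nodes
        tolerations:
          - key: workload/database     # assumption: a NoSchedule taint on those same nodes
            operator: Equal
            value: "true"
            effect: NoSchedule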
    • Running a Single Replica/Disable Autoscaling
      As mentioned previously, databases usually do not behave well with autoscaling. For this reason, it’s generally recommended not to use autoscaling and to set the replica count to 1. It is possible to write logic that allows controlled scaling of a StatefulSet, but the time and cost are likely better spent on work that matters, namely the application logic itself.
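
      Concretely, that means pinning the replica count in the StatefulSet spec and simply never creating a HorizontalPodAutoscaler that targets it:

        spec:
          replicas: 1    # one pod, always; no HorizontalPodAutoscaler targets this StatefulSet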
  • Partner-Provided Operators and Container Images
    The single best recommendation is to use a Red Hat partner-provided Operator and container image. Red Hat is historically not a database company, nor does it possess the particular expertise required to operate databases at scale. Fortunately, there is an ecosystem of Red Hat partners who have that knowledge and who have built knowledge bases and Operators designed to support more traditional production workloads. These partner-supported Operators encode that expertise to help support production workloads.
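
    Installing such an Operator through the Operator Lifecycle Manager typically comes down to a Subscription object like the sketch below (the package name and channel are placeholders; use the values OperatorHub shows for your chosen partner operator):

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: example-db-operator        # placeholder: your chosen partner operator
        namespace: openshift-operators
      spec:
        channel: stable                  # placeholder: the channel the vendor documents
        name: example-db-operator        # placeholder: package name from OperatorHub
        source: certified-operators      # the certified partner catalog
        sourceNamespace: openshift-marketplace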

About the Author:

Kevin Franklin is currently a member of the Container Native Application Development team, a part of Red Hat's North American Public Sector Services organization. This is intended as the first in a series of blog posts and articles from this group.

Further Reading: