TL;DR: Securing your app with Istio, SSO, Vault. Step-by-step without coding! Assembling security aspects using cloud native patterns.

Today, securing your apps is a “must have”, but it’s difficult to introduce security without modifying code if you didn’t think about it at the very beginning. Luckily, the new cloud native patterns brought by containers and platforms like OpenShift/Kubernetes offer simple ways to address security concerns without touching code.

When talking about container platforms, the focus is usually put on the security requirements of running containers. We talk a lot about capability restrictions, resource scheduling and isolation, network zones, infrastructure certificate issuing and renewal, registry segmentation and so on… On top of that, we sometimes neglect the application layers that could be enhanced from a security point of view. Yet doing so can be easy, using new techniques and infrastructure services that container platforms now make readily available.

These last months I explored some of these security facets and experimented with customers on how to apply them to existing applications. I wrote a series of articles on how to secure an existing application made by Average Joe, where security was clearly an afterthought. This one is a step-by-step summary of how to add new security facets to achieve a state-of-the-art secured application. After reading it, and going through the details, you will be able to cherry-pick recipes to apply security where it makes sense for your own apps and context.

Beyond some features we use from Red Hat products among others, we will conclude with some thoughts on architecture trends. Now that we perceive the power of cloud native patterns, we will discuss how concepts used in app design can also be used to design distributed systems « in the large ».

Preamble: Containerize and deploy your app with TLS Ingress

A prerequisite to all the rest of the discussion is – of course – the containerization of an existing application. And it’s worth mentioning that OpenShift has plenty of advantages for easily realizing this: from the extensive container image catalog to the Source-to-Image mechanism, including the Buildah, Podman and Skopeo tools shipped with the distribution, as well as the huge ISV community (see Red Hat).

We can sum up the benefits of this first transformation, from the security point of view, that way:

  • Components deployed as containers can take advantage of the multi-tenancy, isolation and resource densification features of the underlying host system. Have a look at the Ten Layers of Container Security white paper for full details on that topic,
  • Kubernetes / OpenShift allows fine-grained control over what’s exposed to other services and to the outer world. Deployment units (aka Pods) are no longer directly exposed and addressable,
  • Database credentials are managed as Secrets, independently from the application deployment. They can be viewed/edited by dedicated Ops people using a powerful RBAC model,
  • Exposure to the outer world is controlled via an OpenShift Route with TLS support (see the sketch after this list).
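
As an illustration, here is a minimal sketch of such a Route with edge TLS termination; the service name and target port are assumptions for our sample app, not values taken from the original manifests:

kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: fruits-catalog
spec:
  to:
    kind: Service
    # Assumed name of the Service fronting the app Pods
    name: fruits-catalog
  port:
    targetPort: 8080
  tls:
    # TLS is terminated at the router; plain HTTP requests are redirected
    termination: edge
    insecureEdgeTerminationPolicy: Redirect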

The detailed post on how to achieve that on your application – as a warm-up for the next parts – can be found here: Adding security layers to your App on OpenShift — Part 1: Deployment and TLS Ingress. Below is a simple schema representing the app we use as a sample, once containerized:

Step 0 – Our sample App containerized on OpenShift

Authentication and authorization

Adding proper authentication to an application may be cumbersome, as implementing standard and interoperable authentication flows like OpenID Connect is a challenging task. So Keycloak comes to the rescue! For those of you who’d rather know the productized version: Keycloak is the upstream community project of RH SSO.

Compared to traditional ways of implementing security through embedded frameworks and UI, Keycloak is a game changer. It is lightweight, easily deployable and provides an extensive set of adapters for security interceptors and authentication flows. For any new application, it is a no-brainer to delegate all the authentication and authorization concerns to a building block such as Keycloak. Moreover, it can also easily be added to an already existing application with no change to the existing code, just by embedding a new dependency.

Considering authentication and authorization, applying it to our application is as simple as adding:

  • A new backend dependency to include a Keycloak adapter that will intercept calls and protect backend services,
  • A new frontend dependency to include a Keycloak JS adapter that will implement the authentication flow,
  • A configuration file for connection details and endpoint authorization regarding users’ roles. One can easily externalize this file using a Kubernetes ConfigMap mounted with the deployment (see the sketch after this list).
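
As an illustration, here is a minimal sketch of such an externalized adapter configuration; the realm, client and server URL values are assumptions for our sample app, not the exact ones from the detailed post:

kind: ConfigMap
apiVersion: v1
metadata:
  name: fruits-catalog-keycloak-config
data:
  # keycloak.json is mounted into the app container and read by the Keycloak adapter
  keycloak.json: |
    {
      "realm": "fruits-catalog",
      "auth-server-url": "https://keycloak.example.com/auth",
      "ssl-required": "external",
      "resource": "fruits-catalog",
      "public-client": true,
      "confidential-port": 0
    }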

The detailed post on how to achieve that on your application can be found here: Adding security layers to your App on OpenShift — Part 2: Authentication and Authorization with Keycloak.

Step 1 – Adding Authentication & Authorization facet

As you may figure out, the Authentication & Authorization facet is quite decoupled from the core business logic of the application. It can be added without touching the code, and all the configuration can be externalized using the ConfigMap Kubernetes primitive. We have started adding a new cross-cutting concern to our containerized application in a way that is maintainable and sustainable.

Secret management

The second facet that can be enhanced in our application is the way we manage secrets and sensitive data. When talking about security, people mostly think of two topics: IAM and communication security. They do not naturally think of secrets management, which is all about minimizing the sprawl or leakage of sensitive data across the applications and people of the organisation. They tend to think that secret management is not a concern; yet few of them know how to efficiently create, store, renew and revoke secrets.

Secret management solutions like HashiCorp Vault have become increasingly popular lately, being easy to deploy and integrate in a Kubernetes world. Vault is an open source solution for protecting sensitive data and managing secrets. It can serve multiple purposes when used in an organisation: you can use Vault for storing secrets, API keys, or infrastructure tokens and credentials like AWS IAM credentials, database credentials, …

In Adding security layers to your App on OpenShift — Part 3: Secret Management with Vault, we detailed how to deploy and integrate your application with Vault, delegating the management of secrets to it with no impact on the code or the packaging. For this facet, we are extensively using new cloud native patterns and components. More precisely, we used:

  • A ServiceAccount to ensure the account running the app is authorized to access the sensitive data stored in Vault (see the sketch after this list),
  • An Init Container to retrieve the sensitive data and place it in a file before the app main container is started.
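
Here is a minimal sketch of the Kubernetes side of that first point; the service account name is an assumption, and the matching Vault Kubernetes-auth role would reference it on the Vault side:

kind: ServiceAccount
apiVersion: v1
metadata:
  name: fruits-catalog-vault
---
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
[...]
spec:
  template:
    spec:
      # The Pod runs under the service account trusted by Vault
      serviceAccountName: fruits-catalog-vault
[...]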

A year ago, implementing this kind of integration was a little bit like doing “high flying trapeze”. But things are changing fast, and it is now just a matter of including some vault.hashicorp.com/* annotations in your deployment:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
[...]
spec:
  template:
    metadata:
      annotations:
        # Enable Vault Agent injection and run the injected init container before any other one
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-init-first: "true"
        # Render the secret found at this Vault path into a file named application.properties
        vault.hashicorp.com/agent-inject-secret-application.properties: "database/creds/fruit-catalog-role"
        vault.hashicorp.com/agent-inject-template-application.properties: |
          {{- with secret "database/creds/fruit-catalog-role" -}}
          spring.data.mongodb.uri=mongodb://{{ .Data.username }}:{{ .Data.password }}@mongodb/sampledb
          {{- end }}
        # Mount the rendered file under /deployments/config/ in the app container
        vault.hashicorp.com/secret-volume-path-application.properties: "/deployments/config/"
        # Init container only: do not keep a long-running Vault Agent sidecar
        vault.hashicorp.com/agent-pre-populate-only: "true"
        # Vault role used to authenticate through the Kubernetes auth method
        vault.hashicorp.com/role: "fruits-catalog"
[...]

These annotations are indeed processed by a MutatingAdmissionWebhook – another cloud native technique – that injects a correctly configured Vault Agent container into our app Pod. So you see that adding this new security facet to our application can easily be done through annotations, in a declarative way, decoupled from our core business logic. Init Containers are powerful components that may embed useful primitives.

Dynamic credentials

As an extension of the previous facet, one can use HashiCorp Vault to take care of everything related to the creation, renewal and revocation of database credentials. Vault has an advanced feature called Dynamic Secrets: ephemeral database credentials that are programmatically generated when accessed and revoked immediately after use. Using this feature drastically reduces the risk of someone stealing and reusing credentials to access the database.

In Adding security layers to your App on OpenShift — Part 4: Dynamic secrets with Vault, we detailed how to enrich the previous static configuration to manage the renewal of the credentials lease and its revocation. For that, we used two other cloud native techniques and components:

  • A Sidecar container, a container that runs alongside our app main container and makes sure the dynamic secret lease is renewed,
  • A PreStop lifecycle hook to make sure that, just before the app main container is stopped, the credentials are revoked (see the sketch after this list).
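
To make the second point concrete, here is a minimal, hypothetical sketch of such a hook on the app container; the container name and the revocation script path are purely illustrative and not taken from the detailed post:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
[...]
spec:
  template:
    spec:
      containers:
        - name: fruits-catalog
          lifecycle:
            preStop:
              exec:
                # Hypothetical script asking Vault to revoke the current credentials lease
                command: ["/bin/sh", "-c", "/opt/vault/revoke-lease.sh"]
[...]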

Things are changing fast, and applying these configurations is now just a matter of annotations placed on the deployment. Remember the previous vault.hashicorp.com/agent-pre-populate-only: "true" annotation? Well, you just have to change its value to false, add a new vault.hashicorp.com/agent-revoke-on-shutdown: "true" annotation, and the MutatingAdmissionWebhook will take care of injecting an additional Sidecar container that keeps the credentials lease renewed after the init phase and revokes it on shutdown.
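
Concretely, the relevant part of the deployment annotations then looks like this (a minimal sketch showing only the annotation that changes and the one that gets added):

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
[...]
spec:
  template:
    metadata:
      annotations:
        # ... the other vault.hashicorp.com/* annotations stay unchanged ...
        # Keep a long-running Vault Agent sidecar to renew the credentials lease
        vault.hashicorp.com/agent-pre-populate-only: "false"
        # Revoke the lease when the Pod shuts down
        vault.hashicorp.com/agent-revoke-on-shutdown: "true"
[...]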

Step 2 – Adding Secret Management facet

In the schema above, we represent the second security facet added to our application. We’ve done it with loose coupling, declarative configuration and no impact on the application code or design. The separation of concerns is fully respected so far.

Zero-trust network with Service Mesh

Cloud native patterns are not only about containers! They come from the inherently highly distributed nature of applications that may be hosted on hybrid infrastructures (different cloud providers or on-premise) and from architectural styles like Service Oriented Architecture and its latest evolution, Microservices. From the security point of view, cloud native adoption also means a shift in the security model to apply. We see a transition from perimeter security dissociating untrusted and trusted zones (a model also called castle-and-moat) to zero-trust networking. This shift is necessary because the obvious man-in-the-middle vulnerability is exacerbated by the facts that:

  • More and more companies have data spread across hybrid infrastructure, making it difficult to have a single security control zone,
  • Financial optimisation concerns imply consolidation, sharing and elasticity of the application resources, making it even more difficult to continuously adapt and monitor the security zones.

Luckily, we now have some new tools to address that! A Service Mesh is a key component of container and distributed architectures, as it is all about addressing the Fallacies of distributed computing and implementing zero-trust network policies. At a high level, a service mesh secures and manages communication between application services. It provides features such as traffic routing, load balancing, service discovery, encryption, authentication and authorization.

And this is the concern we address in Adding security layers to your App on OpenShift — Part 5: Mutual TLS with Istio: applying mutual TLS service identity checks and encryption between our application Pod and the database. Here again, it’s just a matter of adding the sidecar.istio.io/inject: "true" annotation to your deployment manifest:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
[...]
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
[...]

This will trigger the correct MutatingAdmissionWebhook, which will add a Sidecar container acting as a proxy on both sides of the application/database communication link to apply the mTLS handshake and protocol:

Step 3 – Adding MTLS security in Service Mesh

We are completing the picture here with a new concern: in-mesh communication security. We added it to our application as a new aspect, with no coupling, in a declarative fashion, avoiding any tangling with application development concerns.
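
Note that, depending on your Istio version, enforcing mutual TLS can also be made explicit with a mesh policy resource. Here is a minimal sketch using the PeerAuthentication resource available in recent Istio releases; the namespace and label selector are assumptions for our sample app:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: fruits-catalog-mtls
  namespace: fruits-catalog
spec:
  # Only workloads matching this selector are concerned
  selector:
    matchLabels:
      app: fruits-catalog
  mtls:
    # Reject any plain-text traffic: mutual TLS is mandatory
    mode: STRICT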

PKI as a service for apps

This facet can be seen as an extension of the very first one, but it was indeed easier to introduce it after the previous ones… In the preamble of this article, we talked about how we can easily secure access to the application via a Route with TLS support. However, we did not dive into the details of how TLS certificates are issued and dispatched… We assumed the default configuration that uses a wildcard certificate, which is a pretty bad idea for mission-critical platforms.

However, managing certificate issuing, renewal and revocation for each and every application, especially in a highly volatile environment, can be a huge task… if not automated. And that’s the point where Cert Manager enters the picture! Cert Manager is a piece of software for automating certificate management in a cloud native world. It allows issuing, refreshing and revoking certificates, and it integrates very well with HashiCorp Vault, where we would like to store our Root and Intermediate Certification Authorities.

Using this combo, it is really easy to build a fully automated Private Key Infrastructure as a service, so that fast-moving cloud native applications, or even existing ones, always have a dedicated, automatically renewed certificate. Such a service makes your developers fully autonomous in requesting and using certificates tailor-made for their applications. Moreover, those requests and usages are managed through regular Kubernetes resources that can easily be versioned and secured using a GitOps approach!

This is what we have detailed in Adding security layers to your App on OpenShift – Part 6: PKI as a Service with Vault and Cert Manager if you want implementation and deployment details. Here again, it’s a matter of creating new Kubernetes custom resources, like a Certificate request, and adding the correct annotations on the Route or Ingress objects.
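
Here is a minimal sketch of such a Certificate request; the API version, issuer name and hostname are assumptions and may differ from the ones used in the detailed post, while the Secret name matches the one referenced by the Route annotation below:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: fruits-catalog
spec:
  # The issued certificate and key end up in this Secret
  secretName: fruits-catalog-route-secret
  dnsNames:
    - fruits-catalog.apps.example.com
  # Issuer backed by the Vault PKI secrets engine (assumed name)
  issuerRef:
    name: vault-issuer
    kind: Issuer

And here’s below an example of an annotated Route: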

kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: fruits-catalog
  annotations:
    cert-utils-operator.redhat-cop.io/certs-from-secret: fruits-catalog-route-secret
spec:
[...]

The same MutatingAdmissionWebhook technique is used here again to inject into our Route the Secret containing the certificate details issued by Cert Manager and Vault.

Step 4 – Adding a PKI as a Service for our Route

We just completed the picture by adding a new « Certificates as a Service » facet to our Kubernetes Ingress resource. You can add this facet after the resource creation, with no impact on how the application is designed. Smart components leveraging Kubernetes Controllers or Operators automate the whole process.

Wrap-up

In this article, we walked through a set of stages – or recipes – that allow adding different security concerns to your application. The techniques leverage Kubernetes primitives and cloud native patterns like ConfigMap, Secret, ServiceAccount, Init container, Sidecar container, PreStop lifecycle hook, MutatingAdmissionWebhook and Service Mesh.

The main benefit of this approach is the ability to manage these additional facets in a declarative way, with no coupling to the business logic of the application. Moreover, there is little to no adherence to the technology used. Your business logic will always be unique and change at its own pace; the added security capabilities are independent, specialized and make extensive reuse of bullet-proof libraries and solutions. Business logic and added capabilities have no visibility into nor control over each other. They can be managed by different personas within an organization.

While working on this series these last months, it came to my mind that what we are doing here at the distributed system level is actually very analogous to what we were trying to achieve « in the small » with Aspect-Oriented Programming. Those of us who were coding in the early 2000s may have already met the AOP paradigm. It was one of the foundations of annotation-driven development as we find it in frameworks like Spring. For the others, let’s say that AOP’s motivation was to implement an application by assembling its different cross-cutting concerns – implemented as Aspects – using weaving or injection techniques. Aspects were meant to be easier to implement because they were focused on specific concerns and thus reusable.

Recently, Bilgin Ibryam – a colleague at Red Hat – released a brilliant article and coined the term Multi-Runtime Microservices Architecture, or Mecha runtime architecture. These terms describe the new trend of adding off-the-shelf mechanics to an existing micrologic runtime in order to enrich its capabilities. His demonstration and illustrations were focused on integration capabilities. These principles should also apply to security concerns, as I explained in this article. This architectural trend applied to large distributed systems is very powerful and suited to both:

  • new microservices applications designed for change, to rapidly bring them security capabilities,
  • existing applications you have to modernize and bring up to new security standards.

Where AOP adoption was made difficult by being too restrictive and too complex (all aspects in the same technology, obscure invocation flow), the Mecha runtime architecture has better chances of being successful. The general spread of distributed system requirements, the rapid pace of innovation, multi-technology support, and simple, convenient tooling (it’s all about YAML) adapted to more coarse-grained aspects are the reasons I think this trend is here to stay. Moreover, it can be applied to monoliths, microservices and function models alike. It’s here, and it’s getting easier to apply. You have no reason not to have a secured app anymore!
