
TL;DR: Securing your app with Istio, SSO and Vault, step by step and without coding! Assembling security facets using cloud native patterns.

Today, securing your apps is a “must have”, but it is difficult to introduce security without modifying code if you did not think about it from the very beginning. Luckily, the new cloud native patterns brought by containers and platforms like OpenShift/Kubernetes offer simple ways to address security concerns without touching code.

When talking about container platforms, the focus is usually put on the security requirements and features involved in running containers. We talk a lot about capability restrictions, resource scheduling and isolation, network zones, infrastructure certificate issuing and renewal, registry segmentation and so on… On top of all of that, we sometimes neglect the application layers, which could also be enhanced from a security point of view using new techniques and infrastructure services that are now easily available on container platforms.

In the last few months I have explored some of these application security facets and experimented with customers on how to apply them to existing applications. I wrote a series of blog posts on how to secure an existing application made by Average Joe, where security was clearly an afterthought 😉. This article is a summary of how to add new security facets, one step at a time, to achieve a state-of-the-art secured application. After reading it, and going through the detailed posts, you will be able to cherry-pick recipes to apply security where it makes sense for your own apps and context.

 

Some application security facets

Beyond some advanced features we use from Red Hat products among others, we will conclude this post with some thoughts on emerging architectural trends. Since we can now perceive the power of cloud native patterns, we will discuss how concepts once used in the internal design of applications can now be used to design distributed systems « in the large ».

 

Preamble: Containerize and deploy your app with TLS Ingress

A prerequisite to all the rest of the discussion is - of course - the containerization of an existing application. It is worth mentioning that OpenShift has plenty of advantages for easily realizing this: the extensive container image catalog, the Source-to-Image mechanism, the Buildah, Podman and Skopeo tools included in the distribution, as well as the huge ISV community (see the Red Hat catalog).

The benefits of this first transformation from the security point of view can be summed up as follows:

  • Components deployed as containers can take advantage of the multi-tenancy, isolation and resource densification features of the underlying host system. Have a look at the Ten Layers of Container Security white paper for full details on that topic,
  • Kubernetes / OpenShift allows fine-grained control over what is exposed to other services and to the outside world. Deployment units (aka Pods) are no longer exposed nor directly addressable,
  • Database credentials are managed as Secrets, independently from the application deployment. They can be viewed/edited by dedicated Ops people using a powerful RBAC model,
  • Exposure to the outside world is controlled via an OpenShift Route with TLS support (see the sketch below).
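To make this more concrete, here is a minimal sketch of such a TLS-enabled Route. The `fruits-catalog` service name, target port and edge termination policy are assumptions taken for illustration, not values from the detailed post:

kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: fruits-catalog
spec:
  to:
    kind: Service
    name: fruits-catalog
  port:
    targetPort: 8080
  tls:
    # Terminate TLS at the router and redirect plain HTTP to HTTPS.
    termination: edge
    insecureEdgeTerminationPolicy: Redirect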

 

 

Step 0: Containerizing application and TLS transport

The detailed post on how to achieve that on your application - as a warm-up for the next parts - can be found here: Adding security layers to your App on OpenShift — Part 1: Deployment and TLS Ingress

Authentication and authorization

Adding proper authentication to an application may be cumbersome, as correctly implementing standard, interoperable authentication flows like OpenID Connect is really a challenging task. So Keycloak is here to the rescue! For those of you who are more familiar with the productized version: Keycloak is the upstream community project of RH SSO.

Compared to traditional ways of implementing security through embedded frameworks and UI, Keycloak is a game changer as it is lightweight, can be easily deployed and provides an extensive set of adapters for applying security interceptors and authentication flows. For any new application using a microservices architecture, it is a no-brainer to delegate all the authentication and authorization concerns to a building block such as Keycloak. Moreover, it can also easily be used to retrofit an already existing application with no change to existing code, just by embedding a new dependency.

Adding authentication and authorization to our application is as simple as:

  • Adding a new dependency on the backend side to include a Keycloak adapter that will intercept calls and protect backend services,
  • Doing the same thing on the frontend side to include a Keycloak JS adapter that will implement the authentication flow,
  • Writing a configuration file holding the connection details and the endpoint authorization rules based on user roles. That configuration file can easily be externalized using a ConfigMap in a Kubernetes deployment of your app (see the sketch below).
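As an illustration, here is a minimal sketch of such an externalized configuration: a ConfigMap wrapping the `keycloak.json` adapter file that the backend mounts at startup. The realm, resource and server URL below are placeholders, not the values used in the detailed post:

kind: ConfigMap
apiVersion: v1
metadata:
  name: fruits-catalog-keycloak-config
data:
  # Standard Keycloak adapter configuration, mounted into the backend container.
  keycloak.json: |
    {
      "realm": "fruits-catalog",
      "auth-server-url": "https://keycloak.example.com/auth",
      "ssl-required": "external",
      "resource": "fruits-catalog-backend",
      "bearer-only": true,
      "use-resource-role-mappings": true
    }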

The detailed post on how to achieve that on your application can be found here: Adding security layers to your App on OpenShift — Part 2: Authentication and Authorization with Keycloak

 

 

Step 1: Authentication & Authorization facet

As you may have figured out, the Authentication & Authorization facet is quite decoupled from the core business logic of the application. It can be added without touching the code, and all the configuration can be externalized using the `ConfigMap` Kubernetes primitive. We have started adding a new cross-cutting concern to our containerized application in a way that is maintainable and sustainable.

Secret management

The second facet that can be enhanced in our application is the way we manage secrets and sensitive data. When talking about security, people mostly think of two topics: IAM and communication management. They do not naturally think of the secrets management part — that is, minimizing the sprawl or leakage of sensitive data across the applications and people in the organisation. They tend to consider that secret management is not a concern; yet few know how to efficiently create, store, renew and revoke secrets.

Secret management solutions like HashiCorp Vault have become increasingly popular lately, being easy to deploy and integrate in a Kubernetes world. Vault is an open source solution for protecting sensitive data and managing secrets. It can serve multiple purposes in an organisation: storing application secrets in a Key/Value store, holding API keys, or managing infrastructure tokens and credentials such as AWS IAM credentials, database credentials, and so on.

In Adding security layers to your App on OpenShift — Part 3: Secret Management with Vault we have detailed how to deploy Vault and integrate your application with it, delegating the management of secrets to Vault with no impact on the code or the packaging. For this facet, we are extensively using new cloud native patterns and components. Specifically, we used:

  • A ServiceAccount to make sure the identity running the app is authenticated and authorized to access the sensitive data stored in Vault,
  • An Init Container to make sure the sensitive data is retrieved and placed in a known location before the app's main container is started (see the sketch below).
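As a rough sketch of how these two pieces fit together when wired by hand, the deployment could look roughly like the following. The ServiceAccount name, images, agent configuration and mount paths are purely hypothetical placeholders:

kind: ServiceAccount
apiVersion: v1
metadata:
  name: fruits-catalog
---
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: fruits-catalog
spec:
  template:
    spec:
      # Vault's Kubernetes auth method authenticates the Pod using this ServiceAccount token.
      serviceAccountName: fruits-catalog
      initContainers:
        # Hypothetical init container: authenticates against Vault and renders the
        # retrieved secrets into a shared volume before the main container starts
        # (the agent configuration itself is omitted for brevity).
        - name: vault-init
          image: vault:1.6.2
          command: ["vault", "agent", "-config=/vault/agent-config.hcl", "-exit-after-auth"]
          volumeMounts:
            - name: app-config
              mountPath: /deployments/config
      containers:
        - name: fruits-catalog
          image: quay.io/example/fruits-catalog:latest
          volumeMounts:
            - name: app-config
              mountPath: /deployments/config
              readOnly: true
      volumes:
        - name: app-config
          emptyDir: {}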

A year ago, implementing this kind of integration was, to be honest, a little bit like doing high-flying trapeze 😉. But things are changing fast, and for a few months now it has just been a matter of including some `vault.hashicorp.com/*` annotations in your deployment manifest:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
[...]
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-init-first: "true"
        vault.hashicorp.com/agent-inject-secret-application.properties: "database/creds/fruit-catalog-role"
        vault.hashicorp.com/agent-inject-template-application.properties: |
          {{- with secret "database/creds/fruit-catalog-role" -}}
          spring.data.mongodb.uri=mongodb://{{ .Data.username }}:{{ .Data.password }}@mongodb/sampledb
          {{- end }}
        vault.hashicorp.com/secret-volume-path-application.properties: "/deployments/config/"
        vault.hashicorp.com/agent-pre-populate-only: "true"
        vault.hashicorp.com/role: "fruits-catalog"

These annotations are processed by a `MutatingAdmissionWebhook` - another cloud native technique - that injects a correctly configured Vault Agent container into our app Pod. So you see that adding this new security facet to our application can easily be done through annotations, in a declarative way, decoupled from our core business logic. `Init Containers` once again prove to be powerful components for embedding this kind of bootstrap logic.

Dynamic credentials

As an extension of the previous facet, one can use HashiCorp Vault to take care of everything related to the creation, renewal and revocation of database credentials. Vault has an advanced feature called Dynamic Secrets: credentials that are ephemeral, generated programmatically when accessed, and revoked immediately after use. Using this feature drastically reduces the risk of someone stealing and reusing credentials to access the database.

In Adding security layers to your App on OpenShift — Part 4: Dynamic secrets with Vault, we detailed how to enrich the previous static configuration to manage the renewal of the credential lease and its revocation. For that we used two other cloud native techniques and components:

  • A Sidecar container that runs alongside our app's main container and makes sure the dynamic secret lease is renewed,
  • A PreStop lifecycle hook to make sure that, just before the application's main container is stopped, the credentials are revoked.

Things are changing fast, and applying these configurations is now just a matter of annotations placed on the deployment. Remember the previous vault.hashicorp.com/agent-pre-populate-only: "true" annotation? Well, you just have to change its value to false and add a new vault.hashicorp.com/agent-revoke-on-shutdown: "true" annotation, and the MutatingAdmissionWebhook will take care of injecting an additional Sidecar container that keeps the credential lease renewed after the init phase and revokes it on shutdown.
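As a minimal sketch, the annotation block from the previous manifest would then look like this (only the changed and added annotations are shown):

spec:
  template:
    metadata:
      annotations:
        [...]
        # Keep the Vault Agent running as a sidecar so it can renew the credential lease.
        vault.hashicorp.com/agent-pre-populate-only: "false"
        # Revoke the dynamic credentials when the Pod shuts down.
        vault.hashicorp.com/agent-revoke-on-shutdown: "true"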

 

Step 2: Secret Management facet

To sum up this part and the previous one, the schema above represents a second security facet added to our application with loose coupling, declarative configuration, and no impact on the application code or design. The separation of concerns is fully respected so far.

Zero-trust network with Service Mesh

Cloud native patterns are not only about containers! They come from the inherently distributed nature of applications that may be hosted on hybrid infrastructures — different cloud providers or on-premise — and from architectural styles like Service Oriented Architecture and its latest evolution, Microservices. From the security point of view, cloud native adoption also means a shift in the security model to apply: from perimeter security separating untrusted and trusted zones — a model also called castle-and-moat — to a zero-trust network. This shift is necessary because man-in-the-middle vulnerabilities are exacerbated by the facts that:

  • More and more companies have data spread across hybrid infrastructure, making it difficult to have a single security control zone,
  • Financial optimisation concerns imply consolidation, sharing and elasticity of the application resources, making it even more difficult to continuously adapt and monitor the security zones.

Luckily we have some new tools to address that! A Service Mesh is a key component of container-based and distributed architectures, as it is all about addressing the Fallacies of distributed computing and implementing zero-trust network policies. At a high level, a service mesh manages communication between application services. It provides features such as traffic routing, load balancing, service discovery, encryption, authentication, and authorization.

And this is the concern we address in Adding security layers to your App on OpenShift — Part 5: Mutual TLS with Istio: applying mutual TLS service identity checks and encryption between our application Pod and the database. Here again it's just a matter of adding the sidecar.istio.io/inject: "true" annotation to your deployment manifest:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
[...]
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
        [...]

This will trigger the corresponding `MutatingAdmissionWebhook`, which will add a `Sidecar container` acting as a proxy on both sides of the application/database communication link to apply the mTLS handshake and encryption.
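The detailed post walks through the Istio resources that enforce this. As a rough sketch using current Istio APIs — the resource name and the control plane namespace are assumptions — a mesh-wide strict mTLS policy can be declared like this:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  # Require mutual TLS for all workload-to-workload traffic in the mesh.
  mtls:
    mode: STRICT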

 

Step 3: Service Mesh security facet

We are completing the picture here with a new concern: in-mesh communication security. We added it as a new facet with no coupling, in a declarative fashion, avoiding any tangling with application development concerns.

PKI as a service for apps

This facet can be seen as an extension of the very first one, but it was easier to introduce it after the previous ones… In the preamble of this article, we talked about how we can easily secure access to the application via a Route with TLS support. However, we did not dive into the details of how TLS certificates were issued and dispatched… We assumed the default configuration, which uses a wildcard certificate — and which may be a pretty bad idea for mission-critical platforms.

However, managing certificate issuing, revocation and renewal for each and every application — especially in a highly volatile environment — can be a really huge task… if not automated. And that's where Cert Manager enters the picture! Cert Manager is software for automating certificate management in a cloud native world. It allows issuing, refreshing and revoking certificates and integrates very well with HashiCorp Vault, where we would like to store our Root and Intermediate Certificate Authorities.

Using this combo it is really easy to build a fully automated Public Key Infrastructure (PKI) as a service, so that fast-moving cloud native applications, or even existing ones, always have a dedicated, automatically renewed certificate. Such a service makes your developers fully autonomous in requesting and using certificates tailor-made for their applications. Moreover, those requests are managed through regular Kubernetes resources that can be easily versioned and secured using a GitOps approach!
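To give an idea of what that wiring looks like, here is a rough sketch of a Cert Manager issuer backed by Vault's PKI secrets engine. The server URL, PKI path, role and secret names are placeholders, and the exact fields may vary with your cert-manager version:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault-issuer
spec:
  vault:
    # Vault server hosting the Root and Intermediate Certificate Authorities.
    server: https://vault.vault-infra.svc:8200
    # PKI secrets engine path and role used to sign certificates.
    path: pki_int/sign/fruits-catalog
    auth:
      kubernetes:
        mountPath: /v1/auth/kubernetes
        role: cert-manager
        secretRef:
          name: cert-manager-vault-token
          key: token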

This is what we detailed in Adding security layers to your App on OpenShift - Part 6: PKI as a Service with Vault and Cert Manager, if you want implementation and deployment details. Here again it's a matter of creating new Kubernetes custom resources, like a `Certificate` request, and adding the correct annotations on the `Route` or `Ingress` objects. Below is an example of an annotated `Route`:

kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: fruits-catalog
  annotations:
    cert-utils-operator.redhat-cop.io/certs-from-secret: fruits-catalog-route-secret
spec:
  [...]

The same pattern applies here again: the cert-utils-operator processes this annotation and injects the certificate details issued by Cert Manager and Vault — stored in a `Secret` — into our `Route`.
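For completeness, here is a minimal sketch of the `Certificate` request that would produce the `fruits-catalog-route-secret` referenced above. The DNS name and issuer reference are assumptions for illustration:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: fruits-catalog
spec:
  # Cert Manager stores the issued certificate and key in this Secret,
  # which the Route annotation above points to.
  secretName: fruits-catalog-route-secret
  dnsNames:
    - fruits-catalog.apps.example.com
  issuerRef:
    # Hypothetical Vault-backed issuer, as sketched earlier.
    kind: ClusterIssuer
    name: vault-issuer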

 

 

Step 4: Certificate as a Service facet

We just finalized the picture by adding a new « Certificates as a Service » feature to our Kubernetes Ingress resource. This concern can be added after the resource creation, with no impact on how the application is designed: all the provisioning processes are automated and encoded into smart components leveraging Kubernetes Controllers or Operators.

Wrap-up

In this article we walked through a set of stages - or recipes - that allow adding different security concerns to your application. The techniques used here leverage Kubernetes primitives and cloud native patterns like ConfigMap, Secret, ServiceAccount, Init container, Sidecar container, PreStop lifecycle hook, MutatingAdmissionWebhook and Service Mesh.

The main benefit of this approach is the ability to manage these additional facets in a declarative way, with no coupling to the business logic of the application and little to no adherence to the technology used. Your business logic will always be unique and change at its own pace; the added security capabilities are independent, specialized and make extensive reuse of battle-tested libraries and solutions. Business logic and added capabilities have no visibility or control over each other, and they can be managed by different personas within an organization.

While working on this series, it occurred to me that what we are doing here at the distributed system level is actually very analogous to what we were trying to achieve « in the small » with Aspect-Oriented Programming (AOP). Those of us who were coding in the early 2000s may have already met the AOP paradigm: it was one of the foundations of annotation-driven development as we find it in frameworks like Spring. For the others, let's say that the motivation of AOP was to implement an application by assembling its different cross-cutting concerns - implemented as Aspects - using weaving or injection techniques. Aspects were meant to be easier to implement because they focus on specific concerns, and thus to be reusable.

Recently, Bilgin Ibryam - a colleague at Red Hat - released a brilliant article and coined the term Multi-Runtime Microservices Architecture, or Mecha architecture, to describe this new trend of adding off-the-shelf mechanics to an existing micrologic runtime in order to enrich its capabilities. His demonstration and illustrations focused on integration capabilities, but these principles also apply to security concerns, as I explain in this article. This architectural trend applied to large distributed systems is very powerful and suited to both:

  • New microservices applications designed for change, to rapidly bring them additional capabilities,
  • Existing applications you have to modernize and bring into compliance with new security standards.

Where AOP adoption was hampered by being too restrictive - all aspects in the same programming language - and too complex because of obscure invocation flows, the Mecha architecture has a good chance of being more successful. The general spread of distributed system requirements, the rapid pace of innovation, multi-technology support for easy reuse, and simple, convenient tooling (it's all about YAML 😉) adapted to more coarse-grained aspects are the reasons I think this trend is here to stay. And it will apply to monoliths, microservices and function models alike.

Whatever your architecture model, adding security concerns to an application may now just be a matter of some annotations. No excuse for not producing secure applications!

 

