Introduction

This is the fourth post of the blog series on HashiCorp Vault.

The first post proposed a custom orchestration to more securely retrieve secrets stored in the Vault from a pod running in Red Hat OpenShift.

The second post improved upon that approach by using the native Kubernetes Auth Method that Vault provides.

The third post showed how the infrastructure can provide the Vault secret functionality for an application that cannot communicate directly with the Vault.

It has been almost two years since the last post in this series, and a lot has changed and improved. The approach is no longer just for legacy applications: this runtime-agnostic architecture can be extended to any application that requires secret management.

The Vault Agent

As shown in the previous post, Vault provides a Vault Agent, and the latest release has been enhanced with the Template functionality.

The Vault Agent performs three functions:

  1. It authenticates with Vault using a configured authentication method; in this example, the Kubernetes authentication method.
  2. It stores the Vault token in a sink file, such as /var/run/secrets/vaultproject.io/token, and keeps the token valid by renewing it at the appropriate time.
  3. It renders Vault secrets to files using Consul Template markup, via the template feature introduced in the latest release.

Example Agent Configuration

The following example shows the template secret feature:

vault {
  ca_path = "/vault/ca/service-ca.crt"
  address = "https://vault.hashicorp.svc.cluster.local:8200"
}

pid_file = "/var/run/secrets/vaultproject.io/pid"

auto_auth {
  method "kubernetes" {
    type       = "kubernetes"
    mount_path = "auth/kubernetes"
    config = {
      role = "example"
      jwt  = "@/var/run/secrets/kubernetes.io/serviceaccount/token"
    }
  }

  sink "file" {
    type = "file"
    config = {
      path = "/var/run/secrets/vaultproject.io/token"
    }
  }
}

template {
  source      = "/vault/config/template.ctmpl"
  destination = "/var/run/secrets/vaultproject.io/application.properties"
}

Example Template File

Below is a sample template.ctmpl used to render the Vault secret:

 
{{ with secret "secret/example" }}
password = {{ .Data.password }}
{{ end }}

It uses the secret function of Consul Template. The agent retrieves the secret from Vault and renders an application.properties file.
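To make the data flow concrete, here is a local simulation of that pattern, using hypothetical paths and the secret value pwd configured later in this post; in the real setup, the Vault Agent renders the file:

```shell
# Local simulation only: the Vault Agent normally renders this file from the
# template. Paths and the secret value are illustrative.
mkdir -p /tmp/vaultdemo
printf 'password = pwd\n' > /tmp/vaultdemo/application.properties

# An application (or its entrypoint script) can then read the key:
grep '^password' /tmp/vaultdemo/application.properties | cut -d'=' -f2- | tr -d ' '
```

The application never talks to Vault; it only reads a local file that the agent keeps up to date.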

You can configure the Vault Agent to run as an init or sidecar container and share the directory into which the token and rendered secrets are written with the application, using an in-memory shared volume.

The architecture would look similar to the following:

 

Here is a fragment of how an application pod would be instrumented to use the described approach:

initContainers:
# Vault Agent Init
- image: vault:1.3.2
  name: vault-agent-init
  ports:
  - containerPort: 8200
    name: vaultport
    protocol: TCP
  args:
  - agent
  - -log-level=debug
  - -config=/vault/config/agent.config
  - -exit-after-auth
  env:
  - name: SKIP_SETCAP
    value: 'true'
  volumeMounts:
  - mountPath: /vault/config
    name: vault-config
  - mountPath: /vault/ca
    name: vault-cabundle
  - mountPath: /var/run/secrets/vaultproject.io
    name: vault-agent-volume
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m
containers:
# Vault Agent
- image: vault:1.3.2
  name: vault-agent
  ports:
  - containerPort: 8200
    name: vaultport
    protocol: TCP
  args:
  - agent
  - -log-level=debug
  - -config=/vault/config/agent.config
  env:
  - name: SKIP_SETCAP
    value: 'true'
  volumeMounts:
  - mountPath: /vault/config
    name: vault-config
  - mountPath: /vault/ca
    name: vault-cabundle
  - mountPath: /var/run/secrets/vaultproject.io
    name: vault-agent-volume
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m
  lifecycle:
    preStop:
      exec:
        command:
        - /bin/sh
        - -c
        - sleep 5 && kill -SIGTERM $(pidof vault)
# App container ...

Automating the Injection of the Sidecar Containers

As the previous example illustrates, the init and sidecar container definitions can be quite long and add a bit of noise to the pod manifest.

While there is nothing wrong with that approach, you can make improvements by automatically injecting the sidecar containers using a Kubernetes mutating admission controller.

The Mutating Webhook Vault Agent Sidecar Injector can be used for this purpose.

The Vault Kubernetes (vault-k8s) binary includes first-class integrations between Vault and Kubernetes.

This mutating admission controller monitors for newly created pods and will inject the above sidecars into the pods that request it via the following annotation:

vault.hashicorp.com/agent-inject: 'true'
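For example, a Deployment's pod template only needs to carry the annotation; this minimal, illustrative fragment is enough to trigger injection:

```yaml
# Illustrative fragment: only the annotation is required to request injection.
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
```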

The improved architecture looks similar to the following:

 

Installation

To install this solution in your own environment, first clone this repository (which contains more details on this process as well as more examples):

git clone https://github.com/openlab-red/hashicorp-vault-for-openshift
cd hashicorp-vault-for-openshift

For this walkthrough, you will install a single instance to explain all of the components involved. Nevertheless, the repository also contains a High Availability deployment, which is the recommended approach for a production environment.

More information can be found here: Vault Reference Architecture | Vault.

(Note: The official way to install Hashicorp Vault is to use vault-helm charts.)

oc new-project hashicorp
oc apply -f ./vault/standalone/install/

The following Kubernetes components will be created:

  • vault-server-binding ClusterRoleBinding
  • vault ServiceAccount
  • vault-storage PersistentVolumeClaim with 10Gi size
  • vault-config ConfigMap
  • vault Service
  • vault Deployment
  • vault Route
  • vault NetworkPolicy

Initialize Vault

These steps should be manually executed for increased security:

POD=$(oc get pods -lapp.kubernetes.io/name=vault --no-headers -o custom-columns=NAME:.metadata.name)
oc rsh $POD
vault operator init -tls-skip-verify -key-shares=1 -key-threshold=1

Save the generated key and token that were provided by the previous command:

Unseal Key 1: vMIVXLRMgK3duZnjTbPQVerJKHzus+/EIsgbnYLajSk=
Initial Root Token: s.dHqf2R7ql3gOOp9wDDkvZPkE

Export the saved keys as environment variables for later use:

export KEYS=vMIVXLRMgK3duZnjTbPQVerJKHzus+/EIsgbnYLajSk=
export ROOT_TOKEN=s.dHqf2R7ql3gOOp9wDDkvZPkE
export VAULT_TOKEN=$ROOT_TOKEN

At this point, unseal Vault to make it usable:

vault operator unseal -tls-skip-verify $KEYS

However, this manual process can become unwieldy with many Vault clusters, many different key holders, and many different keys. For this reason, Vault introduced the option to auto-unseal using the Transit Secrets Engine.

Kubernetes Auth Method

Still inside the Vault pod, enable and configure the Kubernetes authentication method so that pods can authenticate with their service account tokens:

JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
KUBERNETES_HOST=https://${KUBERNETES_PORT_443_TCP_ADDR}:443

vault auth enable -tls-skip-verify kubernetes

vault write -tls-skip-verify auth/kubernetes/config token_reviewer_jwt=$JWT kubernetes_host=$KUBERNETES_HOST kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

 

Configure a Vault Secret

From your local environment:

export VAULT_ADDR=https://$(oc get route vault --no-headers -o custom-columns=HOST:.spec.host)

Create a simple Vault policy to represent a set of permissions to read and write secrets:

vault policy write -tls-skip-verify policy-example policy/policy-example.hcl
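As a rough sketch of what policy/policy-example.hcl might contain (the actual file ships with the repository, so treat the capabilities below as illustrative):

```hcl
# Hypothetical sketch of policy/policy-example.hcl: read/write access to the
# example secret path used in this post.
path "secret/example" {
  capabilities = ["create", "read", "update", "list"]
}
```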

Bind this policy to principals that authenticate via the previously configured Kubernetes authentication method. In particular, this example restricts the policy to service accounts named default in the app namespace:

vault write -tls-skip-verify auth/kubernetes/role/example \
bound_service_account_names=default \
bound_service_account_namespaces='app' \
policies=policy-example \
ttl=2h

Finally, initialize a sample secret protected by the above policy:

vault write -tls-skip-verify secret/example password=pwd

At this point, you need to install the Mutating Webhook Vault Injector.

Vault Injector

The Vault Injector project code has been forked to make two improvements:

  • Upgraded to the MutatingWebhookConfiguration v1 API, which is GA as of Kubernetes 1.16, the version OpenShift 4.3 is based on.
  • Changed the Vault Agent RunAsUser so that it defaults to the same RunAsUser defined in the application container.

Meanwhile, we are working with the upstream project to make these improvements part of the official release.

To install the modified Vault Injector:

oc project hashicorp
oc apply -f ./vault/injector/install/

The following Kubernetes components will be created:

  • vault-injector ClusterRole
  • vault-injector ClusterRoleBinding
  • vault-injector ServiceAccount
  • vault-injector Deployment
  • vault-injector Service
  • vault-injector NetworkPolicy
  • vault-injector MutatingWebhookConfiguration

Deploy the Application

At this point, you can finally deploy the application.

In this example, you are going to use Quarkus, but other runtimes are available within the repository:

oc new-project app

Label the app namespace with vault.hashicorp.com/agent-webhook=enabled to enable the injection:

oc label namespace app vault.hashicorp.com/agent-webhook=enabled

Note: On the MutatingWebhookConfiguration, a namespaceSelector has been added to limit which requests for namespaced resources are intercepted.

Build the application:

oc new-build --name=quarkus-example registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift~https://github.com/openlab-red/hashicorp-vault-for-openshift --context-dir=/examples/quarkus-example

Deploy the Application:

oc apply -f examples/quarkus-example/quarkus-inject.yaml

Notice how the agent template is automatically applied simply via annotation configuration:

vault.hashicorp.com/agent-inject: 'true'
vault.hashicorp.com/ca-key: /var/run/secrets/kubernetes.io/serviceaccount/ca-bundle/service-ca.crt
vault.hashicorp.com/agent-inject-secret-application.properties: secret/example
vault.hashicorp.com/secret-volume-path: /deployments/config
vault.hashicorp.com/agent-inject-template-application.properties: |
 {{- with secret "secret/example" -}}
   secret.example.password = {{ .Data.password }}
 {{- end }}
vault.hashicorp.com/role: example

The /deployments/config directory is the shared volume used by the Vault Agent containers to share secrets with the other containers in the pod.
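With the secret created earlier (password=pwd) and the annotation template above, the rendered /deployments/config/application.properties would contain something similar to:

```properties
# Rendered by the Vault Agent from the agent-inject-template annotation.
secret.example.password = pwd
```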

With the Vault Injector and these annotations, you no longer need to explicitly define the init and sidecar containers.

Conclusion

In this article, you explored how to enable every application to retrieve secrets from Vault without worrying about how to connect with it.

Applications only need to read a file in which the secrets are stored, and the format of that file can be decided by the application, thanks to the agent template functionality.

This application runtime-agnostic approach can enable broader adoption of Vault and help simplify the management of credentials. This is true especially for hybrid cloud deployments, in which Vault can also be used to share secrets between applications deployed in multiple OpenShift clusters as well as applications deployed outside of OpenShift.

Last but not least, if you are wondering how these nice diagrams were made, look at this awesome Open Source project: Diagrams · Diagram as Code.