This is the third post of our blog series on HashiCorp Vault. In the first post, we proposed a custom orchestration to more securely retrieve secrets stored in the Vault from a pod running in Red Hat OpenShift.

In the second post, we improved upon that approach by using the native Kubernetes Auth Method that Vault provides.

Both of the previous approaches assumed that the application knew how to handle the renewal of Vault tokens and how to retrieve secrets from Vault. In all of our examples we used Spring Boot, which, in our view, has a sophisticated, out-of-the-box Vault integration.

In this post, we are going to add further improvements with the purpose of enabling applications that cannot integrate directly with Vault. We will assume that these applications (henceforth referred to as legacy applications) can read a file to retrieve their secrets.

The Vault Agent

A recent release of Vault introduced the Vault Agent.

The Vault Agent performs two functions:

  1. It authenticates with Vault using a configured authentication method (in our case, the Kubernetes authentication method).
  2. It stores the Vault token in a sink (a directory) and keeps it valid by refreshing it at the appropriate time.
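The agent is driven by a small configuration file, mounted as /vault/config/agent.config in the pod fragment shown later in this post. The following is a minimal sketch of what such a file might contain, assuming the Kubernetes auth role named example that we create during the installation steps below (the actual file ships with the repository):

pid_file = "/tmp/vault-agent.pid"

auto_auth {
  # Authenticate using the pod's service account token against the Kubernetes auth method
  method "kubernetes" {
    mount_path = "auth/kubernetes"
    config = {
      role = "example"
    }
  }

  # Write the resulting Vault token to the shared in-memory volume
  sink "file" {
    config = {
      path = "/var/run/secrets/vaultproject.io/token"
    }
  }
}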

We can configure the Vault Agent to run as a sidecar container and share the directory into which the token is written with our application via an in-memory volume. The architecture would look similar to the following:

The Vault Secret Fetcher

Legacy applications cannot retrieve secrets from Vault, even with a valid token, because they were not designed to integrate with it. We need another piece of functionality that, given a valid token, retrieves the secrets on their behalf. The Vault Secret Fetcher, a small program written in Go, serves this purpose.

The Vault Secret Fetcher uses a Vault token to retrieve secrets from Vault and store them in a file.
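Conceptually, and ignoring renewals and error handling, the fetcher behaves roughly like the following shell snippet; this is only an illustration, not the fetcher's actual implementation (paths and the secret name match the pod fragment below):

# Read the token written by the Vault Agent from the shared in-memory volume
TOKEN=$(cat /var/run/secrets/vaultproject.io/token)
# Fetch the secret over the Vault HTTP API and store its data as a JSON properties file
curl --silent --cacert /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt \
     --header "X-Vault-Token: ${TOKEN}" \
     https://vault.hashicorp-vault.svc:8200/v1/secret/example \
  | jq '.data' > /var/run/secrets/vaultproject.io/application.json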

As previously described, we can use the sidecar pattern to keep this functionality out of our application code and share the retrieved secrets with the application via the same in-memory volume. The architecture would look as follows:


Here is a fragment of how an application pod would be instrumented to use the described approach:

      containers:
        # Vault Agent
        - args:
            - agent
            - '-log-level=debug'
            - '-config=/vault/config/agent.config'
          env:
            - name: SKIP_SETCAP
              value: 'true'
            - name: VAULT_ADDR
              value: 'https://vault.hashicorp-vault.svc:8200'
            - name: VAULT_CAPATH
              value: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          image: 'vault:latest'
          imagePullPolicy: Always
          name: vault-agent
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /vault/config/agent.config
              name: vault-config
              subPath: agent.config
            - mountPath: /var/run/secrets/vaultproject.io
              name: vault-agent-volume
        # Secret Fetcher
        - args:
            - start
          env:
            - name: LOG_LEVEL
              value: DEBUG
            - name: VAULT_ADDR
              value: 'https://vault.hashicorp-vault.svc:8200'
            - name: VAULT_CAPATH
              value: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
            - name: VAULT_TOKEN
              value: /var/run/secrets/vaultproject.io/token
            - name: VAULT_SECRET
              value: secret/example
            - name: PROPERTIES_FILE
              value: /var/run/secrets/vaultproject.io/application.json
            - name: PROPERTIES_TYPE
              value: json
          image: vault-secret-fetcher
          imagePullPolicy: Always
          name: vault-secret-fetcher
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /var/run/secrets/vaultproject.io
              name: vault-agent-volume
        # App container ...
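The fragment above omits the volume definitions. They would look roughly like the following: an emptyDir backed by memory, so that the token and the rendered secrets never touch the node's disk, plus the ConfigMap that carries agent.config (the ConfigMap name below is illustrative; the repository's example manifests contain the exact definitions):

      volumes:
        # In-memory volume shared by the agent, the fetcher, and the application
        - name: vault-agent-volume
          emptyDir:
            medium: Memory
        # ConfigMap holding agent.config (name is illustrative)
        - name: vault-config
          configMap:
            name: vault-agent-config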

Automating the injection of the sidecar containers

As we can see from the previous example, the two sidecar container definitions can be quite long and add a bit of noise to the pod manifest. While there is nothing wrong with that approach, we can make improvements by automatically injecting the sidecar containers using a Kubernetes mutating admission controller.

The Mutating Webhook Vault Agent can be used for this purpose. This mutating admission controller watches for newly created pods and injects the above sidecars into pods that request it via the following annotation: sidecar.agent.vaultproject.io/inject.
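For example, a deployment would opt in by carrying the annotation in its pod template metadata. The fragment below is a sketch; the annotation value shown is an assumption, and the repository's thorntail-inject.yaml example shows the exact form:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: thorntail-example
spec:
  template:
    metadata:
      annotations:
        # Request injection of the Vault Agent and Secret Fetcher sidecars
        sidecar.agent.vaultproject.io/inject: 'true'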

The improved architecture looks similar to the following:

Installation

To install this solution in your own environment, first clone the repository (in it you can find more details on this process as well as more examples):

git clone https://github.com/openlab-red/hashicorp-vault-for-openshift

cd hashicorp-vault-for-openshift

Then install Vault:

oc new-project hashicorp-vault
oc adm policy add-scc-to-user privileged -z default
oc create configmap vault-config --from-file=vault-config=./vault/vault-config.json
oc create -f ./vault/vault.yaml
oc create route reencrypt vault --port=8200 --service=vault

Then we need to initialize Vault:

(Note: these steps should be manually executed for increased security)

export VAULT_ADDR=https://$(oc get route vault --no-headers -o custom-columns=HOST:.spec.host)
vault operator init -tls-skip-verify -key-shares=1 -key-threshold=1

Save the generated key and token that were provided by the previous command:

Unseal Key 1: NRvJGYdLeUc9emtX+eWJfa+JV7I0wzLb2lTlOcK5lmU=
Initial Root Token: 4Zh3yRX5orXFqdQUXdKrNxmg

Export the saved keys as environment variables for later use:

export KEYS=NRvJGYdLeUc9emtX+eWJfa+JV7I0wzLb2lTlOcK5lmU=
export ROOT_TOKEN=4Zh3yRX5orXFqdQUXdKrNxmg
export VAULT_TOKEN=$ROOT_TOKEN

At this point, we can unseal Vault, which makes it ready for use:

vault operator unseal -tls-skip-verify $KEYS

Configure the Kubernetes Auth Method:

oc create sa vault-auth
oc adm policy add-cluster-role-to-user system:auth-delegator system:serviceaccount:hashicorp-vault:vault-auth
reviewer_service_account_jwt=$(oc serviceaccounts get-token vault-auth)
pod=$(oc get pods -lapp=vault --no-headers -o custom-columns=NAME:.metadata.name)
oc exec $pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt > /tmp/ca.crt
vault auth enable -tls-skip-verify kubernetes
export OPENSHIFT_HOST=https://openshift-master.openlab.red
vault write -tls-skip-verify auth/kubernetes/config token_reviewer_jwt=$reviewer_service_account_jwt kubernetes_host=$OPENSHIFT_HOST kubernetes_ca_cert=@/tmp/ca.crt

Create a simple Vault policy representing a set of permissions to read and write secrets:

vault policy write -tls-skip-verify policy-example policy/policy-example.hcl
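The repository provides policy/policy-example.hcl. A sketch of what such a policy might contain (the exact paths and capabilities are defined in the actual file):

# Allow reading and writing the example secret
path "secret/example" {
  capabilities = ["create", "read", "update", "list"]
}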

Bind this policy to principals that authenticate via the previously configured Kubernetes authentication method. In particular, we restrict the policy to service accounts named default in the app namespace:

vault write -tls-skip-verify auth/kubernetes/role/example \
    bound_service_account_names=default \
    bound_service_account_namespaces='app' \
    policies=policy-example \
    ttl=2h

Finally, we initialize a sample secret protected by the above policy:

vault write -tls-skip-verify secret/example password=pwd
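To confirm that the secret was stored, it can be read back:

vault read -tls-skip-verify secret/example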

At this point, we need to install the Mutating Webhook Vault Agent. Clone this repo:

cd ..
git clone https://github.com/openlab-red/mutating-webhook-vault-agent
cd mutating-webhook-vault-agent

Build the Mutating Webhook Vault Agent container:

oc project hashicorp-vault
oc apply -f openshift/webhook-build.yaml

Create the configuration:

oc apply -f openshift/sidecar-configmap.yaml

Process the webhook template:

pod=$(oc get pods -lapp=vault --no-headers -o custom-columns=NAME:.metadata.name)
export CA_BUNDLE=$(oc exec $pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt | base64 | tr -d '\n')

oc process -f openshift/webhook-template.yaml -p CA_BUNDLE=${CA_BUNDLE} | oc apply -f -
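Once the template has been applied, the webhook registration can be verified with:

oc get mutatingwebhookconfigurations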

At this point, we can finally deploy our application. We are going to use Thorntail in this example, but other runtimes are available within the repository:

cd ..
cd hashicorp-vault-for-openshift
oc new-project app

Vault needs to be accessible from outside of its project so that it can later be reached by the sidecar agents.

With SDN Multi Tenant:

oc adm pod-network make-projects-global hashicorp-vault

With SDN Network Policy:

oc apply -f vault/app-allow-vault.yaml -n hashicorp-vault

Label the app namespace with vault-agent-webhook=enabled to enable the injection:

oc label namespace app vault-agent-webhook=enabled

Build the application:

oc new-build --name=thorntail-example registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift~https://github.com/openlab-red/hashicorp-vault-for-openshift --context-dir=/examples/thorntail-example

Deploy the application:

oc apply -f examples/thorntail-example/thorntail-inject.yaml
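After the rollout completes, we can verify that the webhook injected the sidecars and that the secrets file was rendered. The label selector below is an assumption based on the build name; adjust it to match the example's manifests:

# Find the application pod and list its container names; the two sidecars should appear alongside the app
pod=$(oc get pods -l app=thorntail-example --no-headers -o custom-columns=NAME:.metadata.name)
oc get pod $pod -o jsonpath='{.spec.containers[*].name}'
# Inspect the rendered secrets file from inside the pod
oc exec $pod -c vault-secret-fetcher -- cat /var/run/secrets/vaultproject.io/application.json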

Conclusion

In this article, we have explored how to enable applications that were not originally designed to work with Vault to retrieve secrets from it. The only requirement is that these applications can read a file in which the secrets will be stored.

This application-runtime-agnostic approach can enable broader adoption of Vault and help simplify the management of credentials, especially for hybrid cloud deployments, in which Vault can also be used to share secrets between applications deployed across multiple OpenShift clusters as well as applications deployed outside of OpenShift.

About the authors

Raffaele is a full-stack enterprise architect with 20+ years of experience. Raffaele started his career in Italy as a Java Architect, then gradually moved to Integration Architect and later Enterprise Architect. He subsequently moved to the United States, eventually becoming an OpenShift Architect for Red Hat consulting services and acquiring, in the process, knowledge of the infrastructure side of IT.
