Introduction

This article addresses a concern shared by many organizations: maintaining container image integrity as images are moved around an organization. Specifically, organizations that have an air-gapped cluster for production environments, with limited external connectivity, will want to be certain that the images they use come from a trusted source, such as their own development team. The development team may work within a cloud environment or a separate isolated network in which their clusters have different security requirements. Such clusters are used for innovation and experimentation to facilitate the investigation of new frameworks and technology. A development cluster may also be used by third-party contractors and partner organizations. As a result, the development clusters need a reasonable degree of openness and access to external resources, while maintaining a sensible security posture.

Creating a mechanism for the safe handover of images from development to production, with the assurance that images are genuine, will improve the security of containers and will reduce the amount of validation work required within the production environment. This article aims to show how image signatures can be created, transported, stored, and used for the validation of images within a production environment. The starting point for this article is an excellent blog post by Luis Javier Arizmendi Alonso, Container Image Signatures in OpenShift 4, which can be found here. Some of that material is summarized below as the building blocks of the process, but for the details of each step, please read Luis' article, which goes into much more depth.

Images in the Development Environment

The diagram in figure 1 (below) shows the process for creating images and getting them ready for migration to the production environment. The process begins with source code, taken from a secure repository, which is used as input to a build process. For the containerized application, the process includes the injection of a base container image into the build pipeline. The base image receives the built application, and the resulting image begins its journey to production. The built image is unsigned at this stage, and it is stored in an image stream in the development cluster ready for development-centric testing. Many images created this way will be discarded because they fail tests. Eventually an image will be selected to go forward as a production candidate, and it is this image that will be signed and moved to the development image repository. This repository holds long-lived releases of containers ready to be pulled into the production environment. The signing is performed by copying the image from the image stream within the development cluster to the development image repository. This step results in two outputs: the signed image, which is stored in the development image repository, and the signature, which is stored on the local file system of the machine where the image copy (and signing) took place. The next phase of the process is to pull the signed image into the secure environment while validating that the image was signed by the development team.

Figure 1 : Image and signature creation

Creating and Managing Signatures

Before describing the process for moving the image and signature towards the production environment, it is important to describe the software used for image signing and the signing process.

Container image signature keys

To sign and validate container images requires an OpenPGP key pair that is created, in this example, using the Linux gpg2 utility:

gpg2 --gen-key

The user is asked for further information, including an email address and full name. It is the email address that will be used in subsequent commands to identify the key pair.

To list the key pairs use the command below:

gpg2 --list-keys

The result is shown below:

/home/mark/.gnupg/pubring.kbx
-----------------------------
pub   rsa2048 2020-12-16 [SC] [expires: 2022-12-16]
    788D24882A504100B64C548CD68A9C0D8DBF4FBE
uid           [ultimate] Mark Roberts <mroberts@redhat.com>
sub   rsa2048 2020-12-16 [E] [expires: 2022-12-16]
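
The public key from this pair will later be needed by any cluster that validates the signatures (the signer-key.pub file referenced in the cluster configuration section below). A minimal sketch of exporting it, assuming the email address shown above:

gpg2 --armor --export mroberts@redhat.com > signer-key.pub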

Managing Where Signatures Are Stored

When using skopeo to manage signatures, a configuration file in the /etc/containers/registries.d directory controls where the signatures are stored. The default file in that directory includes the following important setting:

sigstore-staging: file:///var/lib/containers/sigstore

There are (at least) three choices for how to make sure you can write to the above location:

  1. Change it to be a location to which the user can write (see the sketch after this list).
  2. Change the permissions on the location so that the user can write to it and leave the file unchanged.
  3. Run the commands that write to it with ‘sudo -E’. The -E option is required to preserve the environment of the current user to retain the status of being logged into a registry to which the signed image is written.
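
For option 1, a minimal sketch of the edit, assuming the default configuration file is /etc/containers/registries.d/default.yaml, that the sigstore-staging setting sits under the default-docker scope as in the default file, and that a user-writable path such as /home/mark/sigstore is acceptable:

default-docker:
  sigstore-staging: file:///home/mark/sigstore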

Creating the Signed Image

To create a signed container image within the quay.io repository based on a previously unsigned version of the image, use the skopeo copy function as shown below:

skopeo copy --sign-by mroberts@redhat.com \
  docker://quay.io/marrober/layers:latest \
  docker://quay.io/marrober/layers:latest-signed

Note that as part of a build process, an image stored within an image stream on OpenShift could be pushed to quay.io and signed in a single action.
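
A sketch of that single action, assuming the internal registry's default route has been exposed, that the image stream lives in a project called dev-project, and that the current oc session is used to authenticate the pull (the route host and project name below are placeholders):

skopeo copy --sign-by mroberts@redhat.com \
  --src-creds "$(oc whoami):$(oc whoami -t)" \
  docker://default-route-openshift-image-registry.apps.example.com/dev-project/layers:latest \
  docker://quay.io/marrober/layers:latest-signed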

Signatures are stored separately from the image itself, located within the directory indicated in the default configuration file above (the sigstore-staging parameter). The first level directory is the name of the account on quay.io, and the second level directory is the image name and digest. Within this directory, a single file exists, called signature-1, containing the image signature, as shown in the example in figure 2:

Figure 2 : Example signatures
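
In text form, the layout under the sigstore directory looks broadly like the following sketch, which reuses the account name from this article and the image digest that appears later in the section on the signature server:

/var/lib/containers/sigstore/
└── marrober/
    └── layers@sha256=4172b83ff61d4e75cad042d2b99854cf283daf8e59d521796dd86ee8917efa37/
        └── signature-1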

The signatures need to be copied from the development server where they are created to the directory structure of an application server that will host the signatures for a consuming OpenShift cluster. The application server is described later. The directory structure that appears under the sigstore location must be preserved. In a secure environment, the mechanism for transferring the files could involve a password-protected zip file that is copied over a VPN link or transferred on physical media. Ultimately, an organization must create a mechanism for transferring and validating the content that satisfies its security assessors.
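
As one illustrative sketch of such a hand-off, assuming a password-protected zip archive is acceptable to the security assessors (the -e option makes zip prompt for a password, and -r recurses into the directory):

cd /var/lib/containers/sigstore
zip -er signatures-$(date +%Y%m%d).zip marrober/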

Moving the Image to the Secure Environment

The transfer of the container image from the development image repository to a repository within the production environment can be done in a number of different ways. However, note that when a container image is copied from repository to repository, it will have a new repository location (and potentially a new sha256 image digest), and therefore any signature created against the image in the old location will not be valid in the new location. The only current solution to this problem is to re-sign the image in the production environment. The original signature is validated when the image is pulled from the development repository, and this is what assures the team that the image genuinely came from the development team. The DevOps team within the secure environment can then take control of the image, and if it passes the QA phase of testing, they can re-sign it to confirm that it is acceptable for production.
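
A sketch of the re-signing step, assuming a separate production key pair identified by prod-signer@example.com and a production registry at registry.prod.example.com (both placeholders; only the source image reference comes from the earlier example):

skopeo copy --sign-by prod-signer@example.com \
  docker://quay.io/marrober/layers:latest-signed \
  docker://registry.prod.example.com/marrober/layers:latest-signed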

The process is shown in figure 3 (below). An image is pushed to the development repository, and a signature (red box) is created at the same time. The image is pulled into the development cluster, and initial validation steps are performed. If the tests are successful, the signature is moved to the appropriate directory and pushed to the development signature Git repository. The signature is then served from the httpd server within the QA environment.

Since the QA cluster is configured to only accept signed images from the development repository, the image cannot be used in QA until the signature has been added to the development signature Git repository and refreshed into the QA cluster httpd server. This makes sure that even though the QA cluster has visibility of the new image once it has been pushed, the QA cluster cannot use the image until the signature has been moved through the process described above. This allows the team to create a human workflow process providing a level of approval or review as required.

Once the signature has been made available, the image has been pulled into the QA cluster, and QA tests have been performed, the image can progress through to the production image repository.

When the image is copied to the production image repository, a new signature is created (green box). This signature is copied to the production signature Git repository from which signatures are pulled by the signature server in the production cluster. The image may then be pulled and verified in the production cluster which is configured to only allow images to be pulled from the production image repository. As described for the QA cluster, a human process can be applied to the green signature movement to create a further review / approval process.

Figure 3 also shows the one-time transfer of the GPG public key used to create the image signatures. This needs to be transferred to the cluster that will consume the signatures and added to the cluster configuration process described later.

Figure 3 : Image progression to production

Moving the Signature to the Secure Environment

The OpenShift cluster is configured to look for signatures on a specific web server. As a consequence, the signatures need to be hosted on a web server that is accessible from the cluster that will pull the images for use. The Apache HTTP server (httpd) provided by Red Hat and delivered via the OpenShift catalog is used as the web server in this example. The web server is built from the ‘httpd’ image and runs within a project called ‘signature-server.’ Files to be served from the web server are stored in the /opt/app-root/src directory within the container image. Further configuration work is required to instruct the OpenShift cluster to look for signatures on the web server. This configuration is described in a later section.

Assessing the Security Requirements of Signatures

A mechanism must be found to get the signatures onto the web server in a manner that is repeatable, automatable, and audited. When considering how to move and host the signatures, the security requirements need to be assessed. The signatures enable a cluster that requires signed images to use the image, provided that the cluster has been configured with the public key associated with the image signatures. If a third party has access to the image, they can still use it on a cluster that does not require signatures. In short, the signature is not a key that unlocks the image for use on all clusters, but only on those clusters that require the validation of the signature and that have the associated public key. This may call into question the value of signed images: the signature simply proves that the image was signed by the organization that holds the private key of the signing key pair. If this private key is managed appropriately, it is possible to assert that a signed image came from the identified organization. Protection over which images are signed using the private key is provided by the fact that a password must be entered to use the key pair to sign an image. This devolves the signing authority to an individual, or team of individuals, within an organization who hold the password for the key pair. In accordance with all good practice, this password should be stored in a secure password vault.

As a consequence of the above, it is fair to store the signatures in a GitHub repository. This provides an immediate audit trail of all operations on the signatures, together with webhooks from GitHub that can be used to update the server hosting the signatures whenever signatures are added or removed (a push action). The GitHub repository will have a security profile that governs which users are allowed to use it. Following a GitOps model of storing all configuration data in a GitHub repository, and driving application or configuration change within the environments from GitHub webhooks, makes the process fit into the working practices that are being adopted by more and more teams.

Copying Signatures to the GitHub Repository

Copy the files from the sigstore directory shown in figure 2 to a local clone of the GitHub repository as shown in figure 4:

Figure 4 : Copy signatures to the git repository

Step 1 - Create the signature at the default location

Step 2 - Copy the signature to the local clone of the GitHub repository

The command below was used to copy from a signature store created under the id ‘marrober’ into a local clone of the Git repository (in the directory signature-repo), which already contains an images directory with a subdirectory for the username ‘marrober’:

cp -R /var/lib/containers/sigstore/marrober/* \
  /home/mark/data/git-repos/signature-repo/images/marrober

Step 3 - Commit and push the changes to the GitHub repository
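
A minimal sketch for this step, assuming the clone location used in step 2 and a default branch named main:

cd /home/mark/data/git-repos/signature-repo
git add images/marrober
git commit -m "Add image signature"
git push origin main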

Step 4 - Create or update the signature server using the actions described below

Step 5 - The signature will be stored at the indicated physical location

Step 6 - The signature will be served using the indicated URL

Creating the Signature Server

Image signatures need to be available on a signature HTTP server that is accessible from the consuming cluster.

The commands below will create the httpd application using the indicated GitHub repository as the source of content to be served:

oc new-project signature-server
oc new-app httpd~https://github.com/marrober/signature-repo.git \
--name signature-server
oc expose service/signature-server
oc scale deployment/signature-server --replicas=4

The above commands will do the following:

  1. Create a new project to host the application.
  2. Create the httpd application server pulling content from the GitHub repository.
  3. Expose the service associated with the application to create an accessible route URL.
  4. Scale the deployment so that there are four pods running the application. This provides resilience since updates are performed as rolling updates, and the application will always be available on at least three pods.

Directory Structure on the Signature Server

The directory structure shown above in figure 2 must be reproduced on the signature server so that the signatures are stored in the following location for the httpd server to serve them, and for the cluster configuration to be able to find them:

/opt/app-root/src/images/marrober/layers@sha256=4172b83ff61d4e75cad042d2b99854cf283daf8e59d521796dd86ee8917efa37/signature-1

The steps described above for the creation of the server, based on the content held within the GitHub repository, will achieve this.
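
One way to sanity-check that a signature is being served, assuming the project, route, and image digest shown above (curl reports only the HTTP status code, which should be 200):

SIG_HOST=$(oc get route/signature-server -n signature-server -o jsonpath='{.spec.host}')
curl -s -o /dev/null -w '%{http_code}\n' \
  http://$SIG_HOST/images/marrober/layers@sha256=4172b83ff61d4e75cad042d2b99854cf283daf8e59d521796dd86ee8917efa37/signature-1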

Automation of the Application Update

It is useful to automate the process of rebuilding the signature server whenever new signatures are committed to GitHub. For this, Tekton can be used, and an example configuration of the required pipeline assets is available here. If you want to take a look at this content, either browse it on GitHub or clone the repository locally. The resources presented and their interactions are shown in figure 5:

Figure 5 : Tekton pipeline tasks for manual and automated execution of the build

To use this process, install the OpenShift Pipelines operator on an OpenShift cluster. It is recommended that the build automation and trigger resources be created in the same namespace as the signature-server application.

Build Automation

The build pipeline (automation/pipeline/pipeline.yaml) uses a cluster task that is available in OpenShift called ‘openshift-client’. This executes the ‘oc’ command line utility, with input parameters supplied by an array of command arguments. The arguments are supplied by the pipelineRun resource (automation/pipelines/pipelineRun.yaml), with the specific arguments to run the build for the signature-server deployment shown below:

  params:
  - name: args
    value:
    - start-build
    - signature-server
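
With these arguments, the openshift-client task effectively runs the following command against the cluster, rebuilding the signature server so that it picks up the new content from the GitHub repository:

oc start-build signature-server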

Trigger the Build From Webhook

To trigger the build from the GitHub push action, create the trigger resources defined in the directory automation/triggers. This directory contains a Kustomization file, so the assets can be created with a single command, run from within that directory:

oc create --save-config -k .

The above command will create the event listener, event listener route, trigger template, and trigger binding resources. The route created by the above commands to expose the event listener application is called ‘signature-server-listener-route’. To identify the full URL of the route to put into the webhook, execute the command:

oc get route/signature-server-listener-route \
-o jsonpath='{"http://"}{.spec.host}{"\n"}'

Copy the result and add that to a new webhook for the GitHub repository to which you are planning to push signatures. The example Git repository here has a directory called images to which the image signatures are pushed using step 2 in the section ‘Copying Signatures to the GitHub Repository’ above.

Using Tekton pipelines is described in the blog post here, and the details of triggering builds are explained here.

Configuring OpenShift to Require Signed Images

Changes must be made to the OpenShift nodes using the Machine Config Operator. This allows cluster administrators to make changes to the nodes in a controlled and structured manner, making sure that all nodes are updated in a single action. For consistency and efficiency, this is preferable to connecting to each node individually to make a change. In this example, quay.io hosts the images to be used. This is a registry-as-a-service facility provided by Red Hat that can be used to host public images for free, or customers can opt for a paid option that allows them to create a private registry in the Red Hat-managed and supported infrastructure. Users are able to use role-based security, image vulnerability scanning, and automated image building to create a complete image management system. Further information on Red Hat Quay can be found here. In a secure environment, it is expected that private image registries will be within the same private data centers as the clusters.

The complete process for creating the configuration files used in this process is explained in Luis Javier Arizmendi Alonso’s blog post here.

To simplify the process as much as possible, most of the required files have been stored in the GitHub repository here, within the directory cluster-config. The public key (signer-key.pub) created using the gpg commands described above must be added to the files in this directory, and each of the files, called default.yaml and policy.json, should be reviewed to see whether they require any changes. In particular, the route in the default.yaml file will need to change.
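
For orientation, the two files have roughly the following shape; the registry scope, route host, and key path below are placeholders, so the actual files in the cluster-config directory should be treated as the reference. The registries.d entry (default.yaml) tells the nodes where to fetch signatures from, assuming a scope of quay.io/marrober and the signature server route created earlier:

docker:
  quay.io/marrober:
    sigstore: http://signature-server-signature-server.apps.example.com/images

The policy.json entry then requires images from that scope to be signed with the exported public key; the keyPath is a placeholder for wherever the MachineConfig places signer-key.pub on the nodes:

{
  "default": [{"type": "insecureAcceptAnything"}],
  "transports": {
    "docker": {
      "quay.io/marrober": [
        {"type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/containers/signer-key.pub"}
      ]
    }
  }
}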

The Perl script cluster-config/create-machine-config.pl has been created to automate the processing of the files described in Luis’ blog. Run the script and send the output to a file called machine-config.yaml using the command below, from within the cluster-config directory:

perl create-machine-config.pl > machine-config.yaml

Apply the above file to the cluster as the cluster administration user with the command:

oc apply -f machine-config.yaml

This will take a few minutes to propagate out to the worker nodes. Progress can be checked using the command ‘oc get machineconfigpool’.
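
For example, to watch the worker pool until it reports that all machines are updated (the -w option streams changes as they happen):

oc get machineconfigpool worker -w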

Testing the Image Pull Process

An Unsigned Image

To test the process, first try to pull a container image that has not been signed. This can be done using the command line interface or the web user interface of OpenShift. When using the web user interface, select the Developer view, then select ‘Add’ from the left-hand side menu. Select container image, and then provide the pull URL for an unsigned image. Provide a name for the application, and press the blue ‘Create’ button. Right-click on the application in the topology view, then select the ‘Resources’ tab on the pop-out menu. Select the pod and then select the events sub-menu for the pod, as shown in figure 6:

Figure 6 : Attempting to pull an unsigned image

As shown in figure 6 above, the cluster fails to pull the image because a signature is required but does not exist. The logs for the Apache server hosting the signatures will show an entry for the attempt to retrieve a signature that does not exist. This results in an HTTP 404 error, as shown below in the extract from the logs:

10.131.0.5 - - [22/Dec/2020:15:50:36 +0000] "GET /images/marrober/simplerest@sha256=ffbfad39fe1fc93948b0bbb0e141e7b8d7e77e7cc9286eac8c2332d56ae7b6f2/signature-1 HTTP/1.1" 404
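
The same failing pull can also be reproduced from the command line; a minimal sketch, assuming quay.io/marrober/layers:latest (the tag that was not signed earlier) is still unsigned and that unsigned-test is an acceptable pod name:

oc run unsigned-test --image=quay.io/marrober/layers:latest
oc describe pod unsigned-test

The pod events should show the image pull being rejected because the required signature cannot be found.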

A Signed Image

Repeat the above process for a signed image.

The events for the pod creation should be similar to those shown in figure 7 (below), showing a successful pull of the container image from the registry.

Figure 7 : Successfully pulling a signed image

The logs for the Apache server hosting the signatures will show an entry for the retrieval of the signature, which does exist. This results in an HTTP 200 response, as shown below in the extract from the logs:

10.131.0.5 - - [22/Dec/2020:16:02:48 +0000] "GET /images/marrober/simplerest@sha256=0f74f80f22418492d99af44223ba0db888bf58f4b35ba6ff351e1188cc4b616e/signature-1 HTTP/1.1" 200