Red Hat OpenShift Container Platform 4 has introduced Red Hat Enterprise Linux (RHEL) CoreOS as a base operating system for the platform. Compliance scanning of traditional RHEL is well understood; however, there are some procedural nuances when it comes to dealing with CoreOS. In this post, we will explore a mechanism for quickly scanning a CoreOS host powering your OpenShift 4 cluster.

RHEL can be scanned for compliance using OpenSCAP, a tool included in RHEL to evaluate Security Content Automation Protocol (SCAP) content. The good news is that the accompanying SCAP Security Guide content provides security profiles for CoreOS as well.

Prerequisites

First, you will need access to an OpenShift 4 cluster. Don’t have one yet? Not a problem: Visit the Red Hat CodeReady Containers page and spin up an OpenShift 4 cluster on your local machine. You will also need administrator access to the cluster.

Scan a Host and Generate an HTML Report

OpenShift 4 introduces a new paradigm when it comes to managing the underlying host machines (nodes). Rather than manually using SSH to log into nodes and change their configurations (which is error prone and leads to configuration drift), the new philosophy is to let OpenShift maintain the hosts’ configurations. This is accomplished through the use of MachineConfigs and the MachineConfig Operator:

In OpenShift Container Platform, the Machine Config Operator handles operating system upgrades. Instead of upgrading individual packages, as is done with yum upgrades, rpm-ostree delivers upgrades of the OS as an atomic unit. The new OS deployment is staged during upgrades and goes into effect on the next reboot. If something goes wrong with the upgrade, a single rollback and reboot returns the system to the previous state. RHCOS upgrades in OpenShift Container Platform are performed during cluster updates.

Red Hat Enterprise Linux CoreOS documentation

To maintain this philosophy, SSH access is disabled to CoreOS nodes by default. Furthermore, due to the nature of CoreOS, we are not able to install OpenSCAP and its associated tools to perform a scan.

Fortunately we can leverage the oc debug node command to get around these issues. The oc debug command allows us to mount the CoreOS node’s root file system within a pod. By installing the OpenSCAP utilities in this pod, we can perform a scan and export the results. The following procedure describes this in detail:

 1. Log in to the cluster as an admin user, via oc login:

[user@bastion ~] $  oc login --insecure-skip-tls-verify https://api.myocp4.com

Authentication required for https://api.myocp4.com (openshift)
Username: admin
Password: *******************
Login successful.

You have access to 57 projects, the list has been suppressed. You can list all projects with 'oc projects'


Using project "default".

 2. Create a new namespace for running the scan via oc new-project node-scan:

[user@bastion ~] $  oc new-project node-scan

Now using project "node-scan" on server "https://api.myocp4.com".

 3. Use the oc get nodes command to retrieve the list of nodes in the cluster:

[user@bastion ~] $  oc get nodes
NAME                                         STATUS    ROLES     AGE       VERSION
ip-10-0-131-103.us-east-2.compute.internal   Ready     master    22h       v1.16.2
ip-10-0-132-192.us-east-2.compute.internal   Ready     worker    22h       v1.16.2
ip-10-0-151-88.us-east-2.compute.internal    Ready     master    22h       v1.16.2
ip-10-0-156-1.us-east-2.compute.internal     Ready     worker    22h       v1.16.2
ip-10-0-171-117.us-east-2.compute.internal   Ready     master    22h       v1.16.2

 4. Use the oc debug node/mynode-fqdn --image=ubi7 command to gain access to the node’s file system, replacing mynode-fqdn with a node name retrieved from the previous step.

Make sure you are using version 4 of the oc command. We are using the --image=ubi7 flag to spin up a Red Hat Universal Base Image (UBI) 7 container to mount the node’s filesystem. This will allow us to install the OpenSCAP tools in the pod. The CoreOS node’s filesystem will be mounted at /host:

[user@bastion ~] $ oc debug node/ip-10-0-131-103.us-east-2.compute.internal --image=ubi7

Starting pod/ip-10-0-131-103us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.131.103
If you don't see a command prompt, try pressing enter.

 5. Verify the node is running CoreOS using cat /host/etc/system-release:

sh-4.2# cat /host/etc/system-release
Red Hat Enterprise Linux CoreOS release 4.3

 6. Enable the latest OpenSCAP package repository in the UBI 7 container:

sh-4.2# curl -L \
http://copr.fedoraproject.org/coprs/openscapmaint/openscap-latest/repo/epel-7/openscapmaint-openscap-latest-epel-7.repo -o \
/etc/yum.repos.d/openscapmaint-openscap-latest-epel-7.repo

 7. Install the required OpenSCAP packages in the debug pod (the UBI 7 container) via yum install -y openscap openscap-utils scap-security-guide --skip-broken:

sh-4.2# yum install -y openscap openscap-utils scap-security-guide --skip-broken
.
.
.

Complete!

 8. Perform scans using the appropriate security guide content and profile. The oscap-chroot command allows us to scan the /host directory, which mounts our CoreOS operating system root directory. A few example scan types are listed in the table below:

 

[DRAFT] DISA STIG for Red Hat Enterprise Linux 8:

oscap-chroot /host/ xccdf eval \
--profile xccdf_org.ssgproject.content_profile_stig \
--fetch-remote-resources \
--report report.html \
/usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml

Criminal Justice Information Services (CJIS) Security Policy:

oscap-chroot /host/ xccdf eval \
--profile xccdf_org.ssgproject.content_profile_cjis \
--fetch-remote-resources \
--report report.html \
/usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml

Health Insurance Portability and Accountability Act (HIPAA):

oscap-chroot /host/ xccdf eval \
--profile xccdf_org.ssgproject.content_profile_hipaa \
--fetch-remote-resources \
--report report.html \
/usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml

PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 8:

oscap-chroot /host/ xccdf eval \
--profile xccdf_org.ssgproject.content_profile_pci-dss \
--fetch-remote-resources \
--report report.html \
/usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml

NIST National Checklist for Red Hat Enterprise Linux CoreOS:

oscap-chroot /host/ xccdf eval \
--profile xccdf_org.ssgproject.content_profile_coreos-ncp \
--fetch-remote-resources \
--report ocp_report.html \
/usr/share/xml/scap/ssg/content/ssg-ocp4-ds.xml

Open Computing Information Security Profile for OpenShift Master Node:

oscap-chroot /host/ xccdf eval \
--profile xccdf_org.ssgproject.content_profile_opencis-master \
--fetch-remote-resources \
--report report.html \
/usr/share/xml/scap/ssg/content/ssg-ocp4-ds.xml

For example, running the DISA STIG scan produces output like the following:

sh-4.2# oscap-chroot /host/ xccdf eval \
--profile xccdf_org.ssgproject.content_profile_stig \
--fetch-remote-resources \
--report report.html \
/usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml

Downloading: https://www.redhat.com/security/data/oval/com.redhat.rhsa-RHEL8.xml ... ok
.
.
.
Title   Disable SSH Root Login
Rule    xccdf_org.ssgproject.content_rule_sshd_disable_root_login
Ident   CCE-80901-2
Result  pass

 9. Retrieve the reports with the following commands (corresponding to the report file names in the example above). Perform these in a separate terminal, while leaving the debug pod running:

[user@bastion ~] $ oc cp $(oc get pods -n node-scan --no-headers | awk '{print $1}'):report.html report.html
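If you are scanning several nodes, it helps to derive each debug pod’s name rather than querying for it. Based on the "Starting pod/..." line shown in step 4, oc debug appears to name the pod after the node with the dots removed plus a "-debug" suffix; the following sketch relies on that observed pattern:

```shell
# Sketch: derive the debug pod name from the node FQDN so the copy
# can be scripted per node. Pattern assumed from the oc debug output above.
node="ip-10-0-131-103.us-east-2.compute.internal"
pod="$(echo "$node" | tr -d '.')-debug"
echo "$pod"   # ip-10-0-131-103us-east-2computeinternal-debug

# With the debug pod still running in another terminal:
# oc cp -n node-scan "$pod":report.html "report-$node.html"
```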

 10. Open the report.html file in your favorite browser and review the results:

 

What’s Next?

We have demonstrated a technique for performing OpenSCAP scans on CoreOS nodes within an OpenShift 4 installation. The next logical question is: how do we remediate these findings?

While there are manual steps that can be performed to remediate the nodes (for example, generate the bash remediation scripts from OpenSCAP, copy them to the nodes, and run them), a more elegant approach would be to scan the nodes on a regular schedule and automatically remediate them when they fall out of compliance. Fortunately, such an approach is in the works in the form of the OpenShift Compliance Operator. Stay tuned!
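As a sketch of that manual route, OpenSCAP can emit a bash remediation script from the same data stream and profile used for the scan via oscap xccdf generate fix (part of the tooling installed in the debug pod). The command is only assembled and echoed here, and remediation.sh is an arbitrary output name:

```shell
# Sketch: build the command that generates a bash remediation script
# for the STIG profile. Run it inside the debug pod, then review the
# script carefully before applying it to any node.
PROFILE="xccdf_org.ssgproject.content_profile_stig"
DATASTREAM="/usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml"
FIX_CMD="oscap xccdf generate fix --profile $PROFILE \
--output remediation.sh $DATASTREAM"

echo "$FIX_CMD"
```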

