This is part three of our four-part OpenShift security blog series. Don’t forget to check out our previous blog posts in the series:
Adhering to best practices for running your workloads in OpenShift is critical to keeping the cluster and all its workloads safe. While Kubernetes provides several capabilities that can help protect your workloads, it’s up to you to use them to safeguard your cloud-native applications.
Follow our guidance below to protect your running workloads on Red Hat OpenShift.
Use Projects and Namespaces
Kubernetes namespaces provide scoping for cluster objects, allowing fine-grained cluster object management. Kubernetes Role-based Access Control (RBAC) rules for most resource types apply at the namespace level. Controls like network policies and many add-on tools and frameworks like service meshes are also often scoped to the namespace level.
OpenShift expands on the Kubernetes namespace functionality with OpenShift projects. A project is simply a Kubernetes namespace with additional annotations, and just like namespaces, users must be granted access to create and use project resources.
Projects allow the user to add more metadata to a Kubernetes namespace and provide more context in the OCP web console. Projects have a name, displayName, and description, with displayName and description being optional inputs that give a more specific name and description in the web console.
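As a quick illustration, a project can be requested through the API with this metadata (the project name, display name, and description below are hypothetical examples):

```yaml
# A ProjectRequest creates a project: a namespace plus OpenShift annotations.
# name, displayName, and description are example values.
apiVersion: project.openshift.io/v1
kind: ProjectRequest
metadata:
  name: payments
displayName: Payments Team
description: Workloads and configuration for the payments service
```

The same result can be achieved on the command line with `oc new-project payments --display-name="Payments Team" --description="Workloads and configuration for the payments service"`.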
When creating OpenShift projects, plan out how you will separate them in the cluster. Since RBAC and network policies are scoped to Kubernetes namespaces, it is essential to evaluate how your namespace layout will affect your applications. To start, configuring one namespace per application provides the best opportunity to implement strong security controls. The fine-grained control offered by network policies and RBAC adds slightly more management overhead, but it scales more securely. Also, avoid using the default namespace in any cluster outside of a development cluster. Each application should have its own namespace and should not be deployed into default merely for ease of deployment.
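Once each application has its own namespace, a default-deny network policy is a common starting point on top of which explicit allow rules are layered. A minimal sketch, assuming a hypothetical `payments` namespace:

```yaml
# Deny all ingress and egress traffic for every pod in the namespace by
# default; each application then gets its own explicit allow policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments
spec:
  podSelector: {}   # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```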
Use Kubernetes RBAC
Kubernetes Role-Based Access Control provides the standard method for managing authorization for the Kubernetes API endpoints. Creating and managing comprehensive RBAC roles that follow the principle of least privilege provides some of the most critical protections possible for your OCP clusters. By applying the principle of least privilege and auditing regularly, you can limit the damage from bad actors, internal misconfigurations, and accidents.
RBAC is enabled by default in OpenShift clusters and includes the same set of default cluster roles that can be found in Kubernetes clusters. Configuring RBAC effectively and securely requires some understanding of the Kubernetes API. You can start with the official documentation, read about some best practices, and you can also find an in-depth explanation from the OpenShift documentation.
When working with OCP, we want to create all necessary RBAC resource objects for the cluster workloads and test them in a non-production environment. Once your team has a solid working knowledge of RBAC, create some internal policies and guidelines. Make sure you also regularly audit your Role permissions and RoleBindings. Pay special attention to minimizing the use of ClusterRoles and ClusterRoleBindings, as these apply globally across all namespaces and to resources that do not support namespaces. (You can use the output of kubectl api-resources in your cluster to see which resources are not namespace-scoped.)
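To make the least-privilege pattern concrete, here is a sketch of a namespace-scoped Role and RoleBinding that grant a single service account read-only access to pods (the namespace, role, and service account names are hypothetical):

```yaml
# A Role is scoped to one namespace and grants only the verbs listed.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: payments
rules:
  - apiGroups: [""]          # "" is the core API group (pods, services, ...)
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# The RoleBinding ties the Role to a specific service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: payments
subjects:
  - kind: ServiceAccount
    name: payments-app
    namespace: payments
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because this is a Role rather than a ClusterRole, the permissions stop at the namespace boundary.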
Limit Container Runtime Privileges
Most containerized applications will not need any special host privileges on the node to function correctly. By following the principle of least privilege and minimizing the capabilities of your cluster's running containers, you can significantly reduce the risk of exploitation by malicious containers and of accidental damage by misbehaving applications. With Kubernetes, the PodSpec Security Context is used to define the exact runtime requirements for each workload. With RHOCP, Security Context Constraints (SCCs) are used to restrict privileges for pods.
Similar to how RBAC resources control user access, administrators can use SCCs to control permissions for pods. SCCs are OpenShift resources; they define a set of conditions (or rules) that a pod must satisfy to be created (or admitted into the cluster). These controls can limit the resources, system calls, and filesystem access of the pods running in the cluster. Using SCCs, administrators control the level of privilege for each application and, if needed, grant it more permissive or more restrictive privileges.
Red Hat outlines the capabilities of SCCs in their documentation:
- Whether a pod can run privileged containers.
- The capabilities that a container can request.
- The use of host directories as volumes.
- The SELinux context of the container.
- The container user ID.
- The use of host namespaces and networking.
- The allocation of an FSGroup that owns the pod’s volumes.
- The configuration of allowable supplemental groups.
- Whether a container requires the use of a read-only root file system.
- The usage of volume types.
- The configuration of allowable seccomp profiles.
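A custom SCC exercising several of these controls might look like the following sketch, modeled loosely on the built-in restricted SCC (the name is hypothetical, and the exact settings should be tuned to your workload):

```yaml
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: myapp-restricted
allowPrivilegedContainer: false      # no privileged containers
allowPrivilegeEscalation: false
allowHostDirVolumePlugin: false      # no host directories as volumes
allowHostNetwork: false              # no host networking
allowHostPID: false                  # no host namespaces
allowHostIPC: false
allowHostPorts: false
readOnlyRootFilesystem: true
requiredDropCapabilities: ["ALL"]    # drop all Linux capabilities
runAsUser:
  type: MustRunAsRange               # container UID from the project's range
seLinuxContext:
  type: MustRunAs                    # SELinux context is enforced
fsGroup:
  type: MustRunAs                    # FSGroup that owns the pod's volumes
supplementalGroups:
  type: RunAsAny
volumes:                             # allowable volume types
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
```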
By default, resources deployed in a project inherit a default security context associated with the authenticated user's role. An OpenShift cluster contains eight default SCCs that can be applied to authenticated users: anyuid, hostaccess, hostmount-anyuid, hostnetwork, node-exporter, nonroot, privileged, and restricted.
Make sure not to tamper with these default SCCs since they are used for essential cluster functions. Instead, create new SCCs for specific users and limit their capabilities.
Some guidelines when creating new SCCs include:
- Do not allow containers to run as root. Running as root creates the most significant risk since root access in a container is equal to root access on the underlying node.
- Do not use the host network or process space. Again, these settings create the potential for compromising the node and every container running on it.
- Do not allow privilege escalation.
- Use a read-only root filesystem in the container.
- Use the default (masked) /proc filesystem mount.
- Drop unused Linux capabilities and do not add optional capabilities that your application does not require. (Available capabilities depend on the container runtime in use on the nodes; OpenShift nodes use CRI-O.)
- Use SELinux options for more fine-grained process controls.
- Give each application its own Kubernetes Service Account rather than sharing or using the namespace’s default service account.
- Do not mount the service account token in a container if it does not need to access the Kubernetes API.
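Many of these guidelines map directly onto a pod's securityContext. The following sketch shows one way to apply them (the pod, image, and service account names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  namespace: payments
spec:
  serviceAccountName: myapp-sa         # dedicated service account per app
  automountServiceAccountToken: false  # no API token if the app doesn't need it
  containers:
    - name: myapp
      image: registry.example.com/myapp:1.0
      securityContext:
        runAsNonRoot: true             # refuse to start as root
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]                # drop unused Linux capabilities
        seccompProfile:
          type: RuntimeDefault         # default (masked) seccomp profile
```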