Egress IP is an OpenShift feature that allows an IP address (the egress IP) to be assigned to a namespace so that all outbound traffic from that namespace appears to originate from that IP address (technically, it is NATed with the specified IP).
This feature is useful in many enterprise environments because it allows firewall rules to be established between namespaces and services outside of the OpenShift cluster. The egress IP becomes the network identity of the namespace and of all the applications running in it. Without an egress IP, traffic from different namespaces would be indistinguishable because, by default, outbound traffic is NATed with the IPs of the nodes, which are normally shared among projects.
To clarify the concept, the diagram above shows two namespaces (A and B), each running two pods (A1, A2, B1, B2). A is a namespace whose applications can connect to a database in the company’s network. B is not authorized to do so. The A namespace is configured with an egress IP, so all of its pods’ outbound connections egress with that IP. A firewall is configured to allow connections from that IP to an enterprise database. The B namespace is not configured with an egress IP, so its pods egress using the nodes’ IPs. Those IPs are not allowed by the firewall to connect to the database.
However, enabling this feature requires some manual configuration steps. Also, when running on cloud providers, additional configuration is needed.
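To give an idea of what the manual steps look like with the OpenShift SDN network plugin, egress IPs are configured by patching the NetNamespace and HostSubnet resources (the project name, node name, and IP below are hypothetical placeholders):

```shell
# Assign an egress IP to the project's NetNamespace
oc patch netnamespace project-a --type=merge -p '{"egressIPs": ["10.0.128.5"]}'

# Designate a node to carry that egress IP via its HostSubnet
oc patch hostsubnet node1 --type=merge -p '{"egressIPs": ["10.0.128.5"]}'
```

Keeping these two resources consistent with each other, and with the cloud provider when applicable, is exactly the kind of repetitive work an operator can take over.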
While reasoning through this problem with a customer, we realized that there was an opportunity to automate the entire process with an operator.
The purpose of the egressip-ipam-operator is to manage the assignment of egressIPs (IPAM) to namespaces and to ensure that the necessary configuration in OpenShift and the underlying infrastructure is consistent.
IPs can be assigned to namespaces via an annotation or the egressip-ipam-operator can select one from a preconfigured CIDR range.
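The IPAM logic is conceptually simple. As an illustration only (this is not the operator’s actual code), a minimal Python sketch of picking the next free IP from a preconfigured CIDR might look like this:

```python
import ipaddress

def next_free_ip(cidr, assigned):
    """Return the next host IP in the CIDR that is not already assigned, or None."""
    taken = {ipaddress.ip_address(ip) for ip in assigned}
    # hosts() excludes the network and broadcast addresses
    for host in ipaddress.ip_network(cidr).hosts():
        if host not in taken:
            return str(host)
    return None  # CIDR exhausted

# Hypothetical example: .1 and .2 are already taken by other namespaces
print(next_free_ip("10.0.128.0/20", {"10.0.128.1", "10.0.128.2"}))  # prints 10.0.128.3
```

The operator performs this bookkeeping per cidrAssignment and records the result on the namespace, so the assignment survives restarts.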
For a bare metal deployment, the configuration would be similar to the example below:
```yaml
apiVersion: redhatcop.redhat.io/v1alpha1
kind: EgressIPAM
metadata:
  name: egressipam-baremetal
spec:
  cidrAssignments:
    - labelValue: "true"
      CIDR: 220.127.116.11/24
  topologyLabel: egressGateway
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""
```
This configuration states that the nodes selected by the nodeSelector should be divided into groups based on the topology label, and that each group will receive egress IPs from the specified CIDR.
In this example, we have only one group, which in most cases will be enough for a bare metal configuration. Multiple groups are needed when nodes are spread across multiple subnets, where different CIDRs are required to make the addresses routable. This is exactly what happens with multi-AZ deployments in cloud providers (more on this below).
Users can opt in to having their namespaces receive egress IPs by adding the following annotation to the namespace:
So, in the case of the example above, the annotation would take the form:
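As a sketch of what this looks like on the namespace (the annotation key below is recalled from the project README and should be verified there; the namespace name is a hypothetical placeholder), the value references the EgressIPAM resource by name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a   # hypothetical namespace name
  annotations:
    # Assumed annotation key; the value is the name of the EgressIPAM resource
    egressip-ipam-operator.redhat-cop.io/egressipam: egressipam-baremetal
```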
When this occurs, the namespace is assigned an egress IP per cidrAssignment.
In the case of bare metal, a node is selected by OpenShift to carry that egress IP.
It is also possible for the user to specify which egress IPs a namespace should have. In this case, a second annotation is needed with the following format:
The annotation value is a comma separated array of IPs. There must be exactly one IP per cidrAssignment.
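A sketch of a namespace using both annotations for the bare metal example (the annotation keys are recalled from the project README and should be verified there; the IP is a hypothetical value within the example CIDR):

```yaml
metadata:
  annotations:
    # Assumed annotation keys; verify against the operator's documentation
    egressip-ipam-operator.redhat-cop.io/egressipam: egressipam-baremetal
    egressip-ipam-operator.redhat-cop.io/egressips: "220.127.116.10"
```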
The egressip-ipam-operator can also work with Amazon Web Services (AWS). In this case, the operator has additional tasks to perform because it needs to configure the EC2 instances to carry the additional IPs. As with most cloud providers, AWS must be told which IPs are assigned to which VMs.
For the AWS use case, the EgressIPAM configuration appears as follows:
```yaml
apiVersion: redhatcop.redhat.io/v1alpha1
kind: EgressIPAM
metadata:
  name: egressipam-aws
spec:
  cidrAssignments:
    - labelValue: "eu-central-1a"
      CIDR: 10.0.128.0/20
    - labelValue: "eu-central-1b"
      CIDR: 10.0.144.0/20
    - labelValue: "eu-central-1c"
      CIDR: 10.0.160.0/20
  topologyLabel: topology.kubernetes.io/zone
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""
```
Here, we can see multiple cidrAssignments, one per availability zone in which the cluster is installed. Also, notice that the topologyLabel must be specified as topology.kubernetes.io/zone to identify the availability zone. The CIDRs must be the same as the CIDRs used for the node subnets.
When a project with the opt-in annotation is created, the following actions occur:
- One IP per cidrAssignment is assigned to the namespace.
- One VM per zone is selected to carry the corresponding IP.
- The OpenShift nodes corresponding to the AWS VMs are configured to carry that IP.
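The per-zone bookkeeping behind the first step can be illustrated with a small Python sketch (purely illustrative, not the operator’s actual code), using the cidrAssignments from the AWS example above:

```python
import ipaddress

# cidrAssignments from the AWS example above (zone label -> CIDR)
CIDR_ASSIGNMENTS = {
    "eu-central-1a": "10.0.128.0/20",
    "eu-central-1b": "10.0.144.0/20",
    "eu-central-1c": "10.0.160.0/20",
}

def assign_egress_ips(cidr_assignments, already_assigned):
    """Pick one free IP per zone, mimicking the one-IP-per-cidrAssignment rule."""
    chosen = {}
    for zone, cidr in cidr_assignments.items():
        taken = {ipaddress.ip_address(ip) for ip in already_assigned.get(zone, ())}
        chosen[zone] = next(
            str(host)
            for host in ipaddress.ip_network(cidr).hosts()
            if host not in taken
        )
    return chosen

# Hypothetical state: the first IP in each zone is already in use
used = {"eu-central-1a": ["10.0.128.1"], "eu-central-1b": ["10.0.144.1"]}
print(assign_egress_ips(CIDR_ASSIGNMENTS, used))
```

The second and third steps (attaching the chosen secondary IPs to the EC2 instances and to the corresponding OpenShift nodes) are what distinguish the cloud case from bare metal.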
For detailed instructions on how to install the egressip-ipam-operator, see the GitHub repository.
Every time there is an automation opportunity around OpenShift, we should consider capturing the automation as an operator and, possibly, open sourcing the result. In this case, we automated the operations around egress IPs.
Keep in mind that this operator is not officially supported by Red Hat; it is currently managed by the Container Community of Practice (CoP) at Red Hat, which will provide best effort support. Feedback and contributions (for example, supporting additional cloud providers) are welcome.