Introduction

In our experience implementing Red Hat OpenShift Service Mesh, we found that the mesh behaves like a special zone within an enterprise network. Inside it, configurations are explicit and strictly enforced. We have no control over what lies outside of it, and we should assume that anything can be there.

With proper training and education, you can quickly master how to configure traffic policies inside the service mesh using the standard Istio API and apply them via CI/CD pipelines and/or GitOps workflows.

Perhaps unexpectedly, we noticed that most of our time was spent designing the edge of the mesh: the boundary where coexistence with the external world must be configured.

In this article, we present a set of design patterns around inbound and outbound traffic to and from the mesh.

General Considerations on Inbound and Outbound Traffic

To better understand the next paragraphs, it’s useful to recall how traffic routing decisions are made in Istio (the model that we are about to present works well for HTTP traffic). Consider the following diagram:

Moving from the left to the right:

  1. When a connection hits a member of the mesh (let’s imagine an ingress gateway, but it works the same for every member), all the routing decisions are made based on the hostname (the Host header field).
  2. Ingress gateways are configured to listen for connections on certain ports and for certain hostnames based on Gateway objects. A gateway configuration selects the gateway pods to which it’s applied based on a label selector. Gateway objects should be defined in the same namespace where the gateway pods reside.
  3. By default, ingress gateways are not aware of the services in the mesh. To make an ingress gateway aware of a service in the mesh, a VirtualService must be defined and bound to the Gateway object. When the VirtualService is not in the same namespace as the Gateway object, which should be the case in most situations, the Gateway object should be referenced in NamespacedName format (<namespace>/<gateway>). Here is an example.
  4. A VirtualService may be coupled with a DestinationRule for fine-grained traffic management.
  5. A VirtualService will then route to an active member of a Kubernetes service (auto-discovered) or to a ServiceEntry. ServiceEntries (here is an example) provide the ability to manually define endpoints that cannot be auto-discovered and may represent destinations outside of the mesh (location: MESH_EXTERNAL). A configuration sketch of these objects follows below.
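To make this concrete, here is a minimal sketch of these three objects working together. The hostnames, namespaces, and the control plane namespace (istio-system) are hypothetical, and the networking.istio.io/v1alpha3 API version is just one possibility depending on your Istio release.

```yaml
# Gateway: defined in the namespace where the ingress gateway pods run,
# selecting them via a label selector.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - myapp.example.com
---
# VirtualService: defined in the tenant namespace, referencing the Gateway
# in NamespacedName format (<namespace>/<gateway>).
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
  namespace: tenant-a
spec:
  hosts:
  - myapp.example.com
  gateways:
  - istio-system/my-gateway
  http:
  - route:
    - destination:
        host: myapp.tenant-a.svc.cluster.local
        port:
          number: 8080
---
# ServiceEntry: manually registers an external destination with the mesh.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-api
  namespace: tenant-a
spec:
  hosts:
  - api.external.example.com
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 443
    name: https
    protocol: TLS
```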

Designing Ingress Traffic

For inbound traffic, it’s generally a good practice to drive traffic through one or more ingress gateways before letting it hit the services.

When designing how to deploy the ingress gateways of a service mesh, two main considerations are needed:

  1. How many ingress gateways are needed?
  2. What is the relationship between the ingress gateways and OpenShift routers?

One ingress gateway per mesh (OpenShift Service Mesh supports multiple service meshes deployed in a single OpenShift cluster) should be enough in most cases.

However, you might have scenarios in which additional gateways are needed: for example, when two radically different configurations need to be supported, or when two kinds of traffic need to be kept physically separated (here is an example of this configuration). Another case is when ingress gateways need to be owned by individual tenants within their own namespaces.

The below diagram captures three ingress gateway deployment patterns:

 

The last configuration is not currently supported by the OpenShift Service Mesh operator and requires manual setup by tenants.

The other important decision is whether to expose the ingress gateway behind an OpenShift router or directly to external traffic via a LoadBalancer service.

 

In this diagram, we can see two scenarios: one with the OpenShift router and an ingress gateway chained together, and one with the ingress gateway directly exposed.

Conceptually, the router is the entry point for traffic into the OpenShift SDN, and the Service Mesh ingress gateway is the entry point for traffic into the mesh. Chaining the router and the ingress gateway may introduce additional hops and add latency to service calls.

The former scenario is more appropriate when the traffic is HTTP(S) or SNI+TLS, as these are the types of traffic supported by the router, and when the added latency is not an issue. The Service Mesh control plane can be configured to automatically create routes (ior_enabled: true) consistent with the Gateway objects being defined.
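As an illustration, the following is a minimal sketch of enabling automatic route creation, assuming the OpenShift Service Mesh 1.x ServiceMeshControlPlane API (maistra.io/v1); the field layout may differ in other releases.

```yaml
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
  namespace: istio-system
spec:
  istio:
    gateways:
      istio-ingressgateway:
        # Automatically create OpenShift Routes matching the Gateway objects
        ior_enabled: true
```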

The latter scenario is more appropriate in situations where the traffic is of a type not supported natively by the router or where low latency is important. In these scenarios, the mesh administrator needs to configure the LoadBalancer service along with the proper DNS records. DNS record creation can be automated with the aid of the externalDNS operator (here is an example).
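The following is a minimal sketch of exposing the ingress gateway directly with a LoadBalancer service. The hostname annotation (consumed by externalDNS), the service name, and the target port are hypothetical and depend on how your gateway deployment is set up.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway-lb
  namespace: istio-system
  annotations:
    # Hypothetical hostname; externalDNS can create the DNS record from it
    external-dns.alpha.kubernetes.io/hostname: mesh.apps.example.com
spec:
  type: LoadBalancer
  selector:
    istio: ingressgateway
  ports:
  - name: https
    port: 443
    targetPort: 8443   # target port depends on the gateway deployment
```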

Enforcing Ingress Traffic

The above considerations help to design the shape of our ingress traffic, but we might also want to enforce that the ingress pathways we have created are the only allowable form of traffic. NetworkPolicy is the right tool for this job (a good, but not 100% accurate, mental model is to think of NetworkPolicies as a way to enforce traffic rules at layer 4 and Istio configurations as a way to enforce traffic rules at layer 7). The service mesh control plane already deploys a set of network policies that allow traffic only from the mesh or from the default router. For tenant namespaces, we need to ensure that traffic comes only from the mesh. There does not currently seem to be a way to configure the service mesh control plane to disallow traffic from the router (RFE).

We can still enforce that traffic only enters through the Service Mesh by removing the permission to create routes via RBAC. We also need to make sure that tenants cannot change the generated network policies (again, via RBAC).
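As a sketch, a tenant-namespace NetworkPolicy along these lines restricts ingress to mesh member namespaces. The maistra.io/member-of label and its value are assumptions based on how OpenShift Service Mesh labels member namespaces; adjust them to what your installation actually applies.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-mesh-only
  namespace: tenant-a
spec:
  podSelector: {}        # applies to all pods in the tenant namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          # Assumed label applied to mesh member namespaces; value is the
          # control plane namespace in this hypothetical installation
          maistra.io/member-of: istio-system
```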

When configuring these enforcement rules, also keep in mind that in OpenShift Service Mesh, tenants in a service mesh namespace still need to opt in to Istio sidecar injection. Without the injected sidecar, a pod is essentially out of the mesh and therefore not subject to the policies enforced by it. Kubernetes-level policies can still be configured (for example, via Open Policy Agent) to enforce that all pods are part of the mesh.
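For reference, opting a workload into the mesh is done with an annotation on the pod template; the deployment below is a minimal, hypothetical example.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: tenant-a
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        # Opt in to sidecar injection in OpenShift Service Mesh
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: myapp
        image: quay.io/example/myapp:latest   # hypothetical image
```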

Designing Egress Traffic

For egress traffic, we again have to decide how many egress gateways we need. In some cases, the answer could be zero. However, in most cases, having one egress gateway will be useful.

Outbound traffic from a pod in the mesh always traverses the pod’s Envoy sidecar proxy, so there is a level of control on that traffic even when it doesn’t flow through an egress gateway. However, egress gateways can be used to achieve the following (a configuration sketch follows the list):

  • TLS origination and trust domain transition. This allows two PKI domains to coexist: we can use the egress gateway to terminate the TLS connections from the service mesh’s internal PKI and initiate new connections using certificates from the external PKI.
  • Using a known egress IP. If outbound connections from services in a mesh need to originate from a known IP so that firewall rules can be applied, an option is to divert all outbound connections to an egress gateway and then define an egress IP on the namespace where the egress gateway is deployed.
  • Similar to the egress IP use case, an organization might have requirements by which all outbound traffic needs to originate from a specific set of nodes. Forcing the traffic through an egress gateway and making sure that the egress gateway pods are deployed onto those nodes is a way to meet that requirement.
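Below is a hedged sketch of routing outbound TLS traffic through an egress gateway, following the pattern Istio documents for TLS passthrough: a Gateway bound to the egress gateway pods and a VirtualService that first sends mesh traffic to the egress gateway and then on to the external host. The hostname is hypothetical, and a ServiceEntry for it (like the one shown earlier) is also required.

```yaml
# Gateway attached to the egress gateway pods for the external host
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: egress-example
  namespace: istio-system
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    hosts:
    - api.external.example.com
    tls:
      mode: PASSTHROUGH
---
# VirtualService: traffic from mesh sidecars is first directed to the
# egress gateway, and from there to the external destination.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: route-via-egressgateway
  namespace: istio-system
spec:
  hosts:
  - api.external.example.com
  gateways:
  - mesh
  - egress-example
  tls:
  - match:
    - gateways: ["mesh"]
      port: 443
      sniHosts: ["api.external.example.com"]
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 443
  - match:
    - gateways: ["egress-example"]
      port: 443
      sniHosts: ["api.external.example.com"]
    route:
    - destination:
        host: api.external.example.com
        port:
          number: 443
```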

Enforcing Egress Traffic

EgressNetworkPolicies can be created to enforce that no traffic leaves the cluster except from the namespace where the egress gateways are deployed. We can also enforce that traffic leaving the mesh pods stays in the mesh by using network policies with egress rules (again, we need to guarantee that users cannot manage network policies).
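As an illustration, an OpenShift SDN EgressNetworkPolicy like the following, applied to each tenant namespace (but not to the egress gateway namespace), denies all traffic leaving the cluster from that namespace; the namespace name is hypothetical.

```yaml
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: deny-external-egress
  namespace: tenant-a
spec:
  egress:
  # Deny all traffic to destinations outside the cluster from this namespace
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
```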

Additionally, Istio can be configured to forbid the routing of addresses unknown to the mesh. Normally, if an application attempts to open a connection to an address that is unknown to the mesh, Istio will use DNS to resolve the address and pass the request through. With the global.outboundTrafficPolicy mode option set to REGISTRY_ONLY, we can configure Istio to only allow connections to addresses known to the mesh’s service registry (that is, Kubernetes services and hosts defined via a ServiceEntry).
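A minimal sketch of this setting, again assuming the OpenShift Service Mesh 1.x ServiceMeshControlPlane API:

```yaml
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
  namespace: istio-system
spec:
  istio:
    global:
      outboundTrafficPolicy:
        # Only allow outbound connections to hosts known to the mesh registry
        mode: REGISTRY_ONLY
```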

Configuring OAuth Authentication for Edge Traffic

Istio can verify the validity of an OAuth token as part of its end-user authentication and authorization policies. However, it cannot handle an OIDC authentication workflow for requests that do not carry an OAuth token.

Istio assumes the initial authentication (where the token is created) will be facilitated outside of the mesh, but clearly the two use cases (authentication flow and token validation) are closely related.

In fact, for many applications, the expected behavior is that when a token is not available, the user should be redirected to the authentication flow.

One way to handle this use case is to add an OAuth proxy capable of handling the authentication workflow in front of the ingress gateway. This container could even be deployed as a sidecar, as depicted in the following diagram:

An example of this scenario in practice can be found here.

You could add more gateways to this service mesh deployment if you need to handle unauthenticated traffic or traffic using a different authentication method.

Once the token is created via the authentication workflow, you can configure the ingress gateway to verify it. Additionally, you can configure all of the services in the mesh to re-verify the token, increasing your security. In order for this to work, your services need to forward the token at every step.
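As a sketch, token verification at the ingress gateway can be expressed with the Istio security API shown below (older control planes use the authentication.istio.io Policy API instead). The issuer and JWKS URL are hypothetical placeholders for your identity provider.

```yaml
# Validate JWTs presented at the ingress gateway
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: ingress-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  jwtRules:
  - issuer: https://sso.example.com/auth/realms/myrealm            # hypothetical issuer
    jwksUri: https://sso.example.com/auth/realms/myrealm/certs     # hypothetical JWKS endpoint
---
# Reject requests that do not carry a valid token
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]
```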

Configuring TLS and mTLS for Edge Traffic

Ingress and egress gateways can be useful for configuring how TLS connections should be handled between different PKI trust domains. Istio manages its own PKI with an internal trust domain by default, and it is safe to assume that there are other trust domains outside of the mesh.

The following is a diagram capturing this scenario:

In the diagram above, we can see that the application consuming a service within the service mesh is in trust domain A. In this case, mTLS has been configured between the consumer and the ingress gateway (here are instructions on how this is accomplished). Then, an application inside the mesh makes an outbound call to an external service, which belongs to trust domain B. mTLS is configured by deploying the correct certificates on the egress gateway (here are instructions on this portion).
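For the ingress side, a hedged sketch of a Gateway terminating mTLS with clients in trust domain A might look like the following. The hostname and certificate paths are hypothetical and assume the certificate material is mounted on the gateway pods (an SDS credentialName can be used instead on newer versions).

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mtls-ingress
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - myapp.example.com
    tls:
      mode: MUTUAL
      # Server certificate/key plus the CA bundle for trust domain A,
      # mounted on the gateway pods at hypothetical paths
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      caCertificates: /etc/istio/ingressgateway-ca-certs/ca.crt
```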

By managing the certificates at the gateway level, we achieve the following:

  1. Easier configuration: There is not an easy way to deploy an additional certificate to a sidecar, but it is relatively easy to add certificate material to the gateways.
  2. Centralized configuration: The certificate configuration can be reused by all the services in the mesh.

Naturally, we could have set up a regular TLS deployment by deploying CA bundles instead of client certificates; this decision does not impact the service mesh tenants.
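For the outbound leg, TLS or mTLS origination at the egress gateway can be expressed with a DestinationRule along these lines: mode SIMPLE with only caCertificates gives plain TLS, while mode MUTUAL with client certificates gives mTLS. The host, namespace, and certificate paths are hypothetical.

```yaml
# The egress gateway originates mTLS toward the external host using client
# certificates from trust domain B mounted on its pods.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-mtls-external-api
  namespace: istio-system
spec:
  host: api.external.example.com
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: MUTUAL
        clientCertificate: /etc/istio/egressgateway-certs/tls.crt
        privateKey: /etc/istio/egressgateway-certs/tls.key
        caCertificates: /etc/istio/egressgateway-ca-certs/ca.crt
        sni: api.external.example.com
```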

These kinds of setups are greatly simplified if there is a mechanism to automatically provision certificates. The cert-manager operator is an ideal option in this space.

Detailed instructions on how to setup TLS/mTLS for gateways can be found here and here.

Configuring Rate Limiting for Edge Traffic

In certain scenarios, it might be necessary to rate limit the outbound traffic from the mesh. These types of traffic controls are useful when an upstream service imposes limits based on a pricing tier, or when a legacy system can only handle a certain number of requests or concurrent connections over a period of time.

Furthermore, different SLAs might apply to traffic originating from different sources, creating a need to rate limit these traffic types in different ways.

Destination rules and traffic policies can be used in conjunction with circuit breakers to manage how inbound requests with different SLAs are prioritized when propagating out of the mesh.

In the above diagram, we demonstrate how two inbound requests assigned to different SLA classes (for example, by setting a header) can be mapped to different destination rules and corresponding traffic policies (here is an example). These methods can be used to maintain a healthy upstream system and allow the services in the mesh to continue functioning, or to apply circuit breaker patterns when the limits are reached.
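As an illustration, the sketch below shows a DestinationRule limiting concurrent connections and pending requests toward a hypothetical upstream host and ejecting it when errors accumulate (a circuit breaker). Per-SLA behavior can be obtained by attaching different traffic policies to different subsets and routing to them based on the SLA header.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: external-api-limits
  namespace: istio-system
spec:
  host: api.external.example.com   # hypothetical upstream host
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 20             # cap concurrent connections
      http:
        http1MaxPendingRequests: 50    # cap queued requests
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutiveErrors: 5             # trip the circuit breaker after 5 errors
      interval: 30s
      baseEjectionTime: 60s
```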

Conclusions

Besides the use cases that were described in this article, there seems to be significant work being done “at the edge of the mesh” in the larger service mesh ecosystem.

The ability to extend the mesh beyond one Kubernetes cluster or to external endpoints will likely use some level of gateway federation. For example, Istio supports federating meshes in this fashion, and a recent release of HashiCorp Consul mesh also gained this level of support via the introduction of WAN mesh gateways.

Also, the ability to federate meshes from different vendors will likely make use of the concept of gateways. Mesh-hub from solo.io is a project that is experimenting in this space.

Finally, with the introduction of the ability to plug new capabilities into the Envoy proxy via WASM proxy extensions, we believe that we are going to see more and more features deployed at the edge of the mesh in gateways.

For example, most of the capabilities deployed in a DMZ, such as IDS/IPS, WAF, DDoS mitigation systems, feeding access events to a SIEM, and user authentication systems (we have seen the OIDC authentication example, but other mechanisms could be supported in the future), could simply become additional Envoy filters. This could reduce the need for DMZs and potentially even eliminate them completely.

Another example is around API gateway functionality. Service meshes and API gateways overlap in some of their functionality, but there is a definite set of capabilities that are API gateway specific (for example, the developer portal and API pricing/monetization). We believe the trend is that these API gateway capabilities will become available as WASM extensions, and it will be possible to use the mesh ingress gateways to create API gateways by enabling the right set of capabilities.


About the authors

Raffaele is a full-stack enterprise architect with 20+ years of experience. Raffaele started his career in Italy as a Java Architect, then gradually moved to Integration Architect and then Enterprise Architect. Later he moved to the United States, eventually becoming an OpenShift Architect for Red Hat consulting services and acquiring, in the process, knowledge of the infrastructure side of IT.
