In the upcoming OpenShift 4.8 release, our team will deliver complete provider networks support when deploying on Red Hat OpenStack. We had to introduce a number of new features to fully support provider networks, including:

  • bring your own networks (BYON)
  • additional networks plugged to the machines
  • additional security groups assigned to the machines
  • installations without floating IPs (FIPless)

You can learn more about these features in the OpenShift documentation. Together, they enable you to install OpenShift on a provider network as the primary network in the cluster, as well as a secondary one. Let’s discover what you need to know about it!

What’s a provider network?

Provider networks are created by the OpenStack cloud administrator and map directly to existing physical networks in the data center. Very often, they are associated with a VLAN or a flat network and use a physical router as the subnet gateway. As a result, cloud administrators don't need Neutron routers or floating IP addresses/NAT.

To learn more about provider networks, check out this upstream guide.
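To make this concrete, here is a minimal sketch of how an administrator might create a VLAN-based provider network and its subnet with the OpenStack CLI. The physical network name (`physnet1`), VLAN ID, project name, and CIDR are placeholders; adjust them to your environment.

```shell
# Create a provider network mapped to VLAN 101 on physnet1,
# owned by the project that will deploy OpenShift (placeholder names)
openstack network create \
  --provider-network-type vlan \
  --provider-physical-network physnet1 \
  --provider-segment 101 \
  --project openshift \
  vlan-provider-net

# Create a subnet on the provider network; the gateway is the
# physical router in the data center
openstack subnet create \
  --network vlan-provider-net \
  --subnet-range 192.0.2.0/24 \
  --gateway 192.0.2.1 \
  vlan-provider-subnet
```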


Provider networks vs. tenant networks

|                                               | Provider networks                                                                       | Tenant networks                                                          |
|-----------------------------------------------|-----------------------------------------------------------------------------------------|--------------------------------------------------------------------------|
| Owner                                         | Cloud administrator                                                                     | Cloud user (tenant)                                                      |
| Model                                         | Pre-created admin networks                                                              | Self-service                                                             |
| Segmentation (e.g. VXLAN, VLAN, Geneve, none) | Selected by the admin, but in most cases VLAN or flat                                   | Selected by Neutron                                                      |
| Overlapping IPs                               | Not possible within the network                                                         | Possible                                                                 |
| Advantages for admins                         | Full control of the network architecture; can re-use an existing infrastructure network | No maintenance                                                           |
| Advantages for tenants                        | No maintenance                                                                          | Flexibility in network architecture                                      |
| Disadvantages for admins                      | Maintenance costs                                                                       | Requires specific Neutron services and configs (depending on the driver) |
| Disadvantages for tenants                     | Restricted permissions on the network                                                   | Maintenance costs                                                        |

Source of inspiration: Spot the difference: Tenant, provider and external Neutron networks - Superuser

Use cases

Provider networks are useful in many scenarios. Let's walk through a few examples:

  • As a cloud provider for enterprise workloads, I would like to provide layer 2 connectivity between a provider network and OpenShift workloads running in pods. It is possible to plug the pods onto a provider network. For example, if you have smart devices that can be controlled over the network on a specific VLAN, you could control the device from an application in OpenShift by connecting the pod to the provider network that the device is connected to.
  • In telco edge space, some workloads need to run on dual-stack networks (both IPv4 and IPv6). Thanks to the provider network, you can plug your pods into these networks.
  • As a cloud provider for an enterprise data center, I would like to have a single provider network per tenant with multiple segments/subnets, where each segment is a rack, so that I can support spine-leaf deployments in the data center with L2 networks to the leaf (top of rack) and routed L3 networks from spine to leaf to connect the leaf subnets.
  • As a cloud provider for the telco edge, I would like to have a single provider network spanning multiple edge sites, where each site has a single segment and subnet. Subnets can be IPv4 or IPv6, which enables dual-stack for the workloads running in OpenShift.
  • As a cloud provider, I would like to provide networking access to the OpenShift bare metal workers.

Requirements

To deploy OpenShift clusters on provider networks:

  • Provider networks must be owned by the OpenStack project that is used to create the OpenShift cluster. Otherwise, Nova won't be able to request a port from that network (since it's a network of "external" type).
  • Neutron ports must not be pre-created; the OpenShift installer creates them for you.
  • The provider network must be able to reach the metadata IP address (169.254.169.254). Depending on which Neutron driver you're using, you may not have to worry about this: the route is usually created automatically when the subnet is created.
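If your driver does not create the metadata route for you, you can add it explicitly as a host route when creating the subnet. This is a sketch; the network name, CIDR, and gateway address are placeholders for your environment.

```shell
# Create the provider network subnet with an explicit host route
# so instances can reach the Nova metadata service (169.254.169.254)
openstack subnet create \
  --network vlan-provider-net \
  --subnet-range 192.0.2.0/24 \
  --gateway 192.0.2.1 \
  --host-route destination=169.254.169.254/32,gateway=192.0.2.1 \
  vlan-provider-subnet
```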

Primary or secondary interface?

When using the Bring Your Own Network (BYON) feature, you have to specify a `machinesSubnet` in your install-config. The machines will get an IP address from that subnet, whose CIDR is set by the `machineNetwork` parameter. The pods, on the other hand, are assigned IP addresses from the range defined by `clusterNetwork`, and services from the range defined by `serviceNetwork`.
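As a sketch, the relevant `networking` stanza of the install-config.yaml might look like this. The machine CIDR is a placeholder for your subnet; the pod and service CIDRs shown are the usual OpenShift defaults, so adjust them to your environment.

```yaml
networking:
  machineNetwork:
  - cidr: 192.0.2.0/24       # CIDR of the machinesSubnet; machines get IPs here
  clusterNetwork:
  - cidr: 10.128.0.0/14      # pod IP range (OpenShift default)
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16            # service IP range (OpenShift default)
```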

Most of the time, you will want to plug your machines onto one or multiple additional networks, including provider networks, by using the `additionalNetworkIDs` parameter. When deploying your pods, you'll be able to plug them onto these additional networks.

You have the choice between plugging the cluster onto one or multiple provider networks.

Let’s see two examples of what you could do:

  • Cluster nodes with primary network interfaces using tenant networking (OpenShift SDN or Kuryr), and one or more secondary interfaces plugged onto the provider network(s) used by the OpenShift workloads
  • Cluster nodes with primary network interfaces on a provider network (e.g. a dedicated VLAN for machines in the data center), and secondary interfaces on one or more other provider networks for OpenShift workloads

Provider network for the primary (and unique) interface

If you meet the requirements described above, you can deploy your cluster on a provider network.

There are four relevant parameters to set for the installer. In the `install-config.yaml` file, set:

  • `platform.openstack.apiVIP` to the IP address for the API VIP.
  • `platform.openstack.ingressVIP` to the IP address for the Ingress VIP.
  • `platform.openstack.machinesSubnet` to the subnet ID of the provider network subnet.
  • `networking.machineNetwork.cidr` to the CIDR of the provider network subnet.

Example of install-config.yaml (snippet):

platform:
  openstack:
    apiVIP: <IP address in the provider network reserved for the API VIP>
    ingressVIP: <IP address in the provider network reserved for the Ingress VIP>
    machinesSubnet: <provider network subnet ID>
networking:
  machineNetwork:
  - cidr: <provider network subnet CIDR>

Note: Do not set `externalNetwork` or `externalDNS`; they are not wanted when the primary interface is on a provider network and could cause installation errors.

Provider network for the secondary interface

In the case of plugging the secondary interface onto a provider network, the machines will be plugged onto that network, but it's up to the operator to connect their workloads to it.

Typically, the primary interface is connected to the tenant network that the installer creates, and the secondary interface is connected to the provider network.

To plug additional networks to the machines, you’ll need to add the provider network's UUID to the install-config.yaml file under `platform.openstack.additionalNetworkIDs`. You can find the network's UUID by running `openstack network list` on a command line.

Example of install-config.yaml (snippet):

platform:
  openstack:
    additionalNetworkIDs: ['cc4e7c0b-e1f5-4e73-918f-d23a8bddadbb']

After you deploy OpenShift, you can add a pod to an additional network. The pod continues to send normal cluster-related network traffic over the default network. You can follow the official documentation to do it.
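As an illustrative sketch, assuming an additional network attachment named `provider-net` has already been defined in the cluster (for example through the Cluster Network Operator's `additionalNetworks` configuration; the name is hypothetical), a pod can request it with the standard Multus annotation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: provider-net-demo
  annotations:
    # Name of the network attachment definition (assumed to exist)
    k8s.v1.cni.cncf.io/networks: provider-net
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
```

The pod then receives a second interface on the provider network while keeping its default cluster network interface for normal traffic.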

To learn how to set up a secondary network for pods, refer to this excellent article about using Multus CNI in OpenShift.

Known limitations

For security reasons, OpenStack Neutron only allows the owner of a network to set port attributes such as fixed IP addresses or allowed address pairs.

We wouldn't want a user from another tenant to create a port with allowed address pairs, which could cause a security breach on the network.

That is why one provider network can only be used by a single OpenStack tenant. If the OpenShift cluster is not deployed with the "admin" tenant, the provider network must be owned by the project; otherwise, the installer will fail to create the ports and their advanced attributes, such as predictable IP addresses and allowed address pairs.

This limitation means that you need one provider network per OpenStack project.

In other words, if you don't plan to use the admin tenant to deploy your OpenShift clusters, each VLAN needs a provider network dedicated to a single tenant. If you plan to have multiple provider networks for different VLANs, you need more tenants.
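Before deploying, you can sanity-check that the provider network is owned by the project you will deploy from. The network and project names below are placeholders.

```shell
# Project that owns the provider network
openstack network show vlan-provider-net -c project_id -f value

# ID of the project you will use to deploy OpenShift;
# the two values must match
openstack project show openshift -c id -f value
```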

Kuryr or not Kuryr?

Kuryr is a CNI plugin that uses OpenStack Neutron to provide networking services to the containers that OpenShift manages.

Using a provider network on a secondary interface makes sense when Kuryr orchestrates the networking via Neutron for the machine, service, and cluster networks. It allows OpenShift workloads to communicate between the tenant networks managed by OpenStack and the provider networks connected to other workloads in the data center.

Conclusion

Provider networks give cloud administrators agility in the data center: they can build their network architecture the way they need it, and let OpenShift, deployed on OpenStack as a workload, consume it.

We hope that this article was useful to you. Please don’t hesitate to reach out if you have any feedback!

Also, we encourage you to take a look at the installer documentation for provider networks: https://github.com/openshift/installer/blob/master/docs/user/openstack/provider_networks.md

Thank you to the reviewers: Adolfo Duarte, Assaf Muller, Eric Duen, Martin André, Max Bridges, Pierre Prinetti, Tom Barron and Udi Shkalim.


About the author

Emilien is a French citizen living in Canada (Quebec) who has been contributing to OpenStack since 2011, when it was still a young project. While his major focus has been the installer, his work has helped customers have a better experience when deploying, upgrading, and operating OpenStack at large scale. A technical and team leader at Red Hat, he is developing leadership skills with a passion for teamwork and technical challenges. He loves sharing his knowledge and often gives talks at conferences.
