With the advent of OpenShift 4, the installer-provisioned infrastructure (IPI) workflow has created a very smooth install process for OpenShift on AWS. Answer a few questions, and openshift-install will build you a fully working cluster in about 30 minutes. Nothing could be easier. If you have not yet done this yourself, or seen a deployment of OpenShift 4, it is well worth the seven minutes to see the ease of installation here.

The default set of configuration options is great for setting up a quick cluster, but most organizations will want to make changes to the install. To see a full list of the options, check out the AWS IPI install docs here. One such customization is using an existing VPC for your OpenShift cluster install. This may be required to connect properly with your existing infrastructure or to fit within your organization’s standards. We will talk about this type of install today.

A VPC (or Virtual Private Cloud) is a virtual network in the AWS Cloud. It resembles a traditional network that you would operate in your own data center, with the benefit of running on AWS infrastructure. By default, when using the OpenShift IPI workflow, the installer will provision a VPC for you, as well as all the corresponding subnets, gateways, and routes required to make your cluster functional. If you decide to use an existing VPC, the OpenShift installer no longer creates the following items:

  • Internet gateways
  • NAT gateways
  • Subnets
  • Route tables
  • VPCs
  • VPC DHCP options
  • VPC endpoints

In order to have a successful install, we will need to create these items ourselves or, where necessary, configure our setup to use existing items such as the Internet Gateway and VPC.

Before We Begin

There are a few things that must be present in your existing configuration when using an existing VPC:

  • The VPC’s CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.
  • You cannot reuse any existing IP address ranges currently in use within your VPC.
  • The VPC must not use the kubernetes.io/cluster/.*: owned tag.
  • You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC so that the cluster can use the Route53 zones that are attached to the VPC to resolve the cluster’s internal DNS records.
  • If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses.

You should discuss these requirements with your AWS VPC administrator prior to attempting this type of install as it will NOT work without the above requirements being met.

The remainder of this post will be using Terraform to configure our AWS VPC and subnets. The base configuration as well as the additional subnets that we add could have just as easily been configured via the AWS Console or Ansible. Use the tools that you feel most comfortable with.

Let’s Begin

To show how this can work, we are going to start with an existing VPC that has been defined with a 10.0.0.0/16 IPv4 CIDR. This CIDR satisfies the requirement of containing our Networking.MachineCIDR range. We have also ensured that “enableDnsSupport” and “enableDnsHostnames” have been set to true. We will refer to this existing network as the “Corporate VPC.”
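
If you are building a test VPC of your own to follow along, a minimal Terraform sketch of a VPC with both DNS attributes enabled might look like this (the CIDR and name tag are just our example values):

resource "aws_vpc" "corporate" {
  cidr_block           = "10.0.0.0/16"

  # Both attributes are required so the cluster can resolve its
  # internal DNS records via the Route53 zones attached to the VPC.
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "corporate-vpc"
  }
}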

The “Corporate VPC” is based on one of the documented VPC scenarios from AWS (https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html) and is built using the following Terraform code: https://github.com/xphyr/ocpbyovpc. Our example Corporate VPC looks a little like this:

[Diagram: the example Corporate VPC, following AWS Scenario 2 (a VPC with public and private subnets and a NAT gateway)]

You may find that your existing VPC setup is more complex than this one; however, the general principle of what we will be setting up still applies.

Per the OpenShift documentation, we are going to need public and private subnets in “between 1 and 3 availability zones.” We will create one private and one public subnet for this example and leverage the existing Internet Gateway to show component reuse. The public subnet will be used for the NAT Gateway that we will create, as well as for the load balancers created by the OpenShift installer. The private subnet will contain all our OpenShift hosts, including the master and worker nodes. Our new VPC configuration will look like this:

[Diagram: the Corporate VPC with the new OpenShift public and private subnets added]

NOTE: This is NOT a good production configuration. You should use multiple availability zones when creating an OpenShift cluster to ensure high availability in your deployment. The steps below can be repeated to create subnets in multiple availability zones for an optimal configuration.

So how do we accomplish this? With a little Terraform. In order to use the Terraform scripts, we will need to gather some information about the existing VPC configuration. The easiest way to do this is to log into the AWS console and select the VPC service. Record the VPC ID as shown below:

[Screenshot: the VPC console listing the VPC and its VPC ID]

We are also going to need your existing Internet Gateway. While still in the VPC area, select “Internet Gateways” from the VPC menu and record the Internet Gateway ID for the VPC you are using, like so:

[Screenshot: the Internet Gateways view showing the gateway ID and its attached VPC]
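
If you prefer the command line to the console, the AWS CLI can retrieve both IDs (this assumes the AWS CLI is installed and configured; the VPC ID in the filter is a placeholder):

$ aws ec2 describe-vpcs --query 'Vpcs[].[VpcId,CidrBlock]' --output table
$ aws ec2 describe-internet-gateways \
    --filters Name=attachment.vpc-id,Values=vpc-0123456789abcdef0 \
    --query 'InternetGateways[].InternetGatewayId' --output text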

If you have not already, go ahead and clone the following repo: https://github.com/xphyr/ocpbyovpc. We will use the files in the “ocpNets” directory to create our new public and private subnets, as well as the required gateway and security groups.

Start by editing the ocpNets/variables.tf file. Update the “vpc_id” and “aws_internet_gateway_id” variables with the information you gathered earlier. Also review the public and private subnet CIDRs to ensure that they are in the proper range for your environment.
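
As a rough sketch, variables.tf will contain something along these lines (the subnet CIDR variable names here are illustrative and the default IDs are placeholders; check the file in the repo for the actual definitions):

# IDs gathered from the AWS console (or CLI) above
variable "vpc_id" {
  description = "ID of the existing Corporate VPC"
  default     = "vpc-0123456789abcdef0"
}

variable "aws_internet_gateway_id" {
  description = "ID of the Internet Gateway already attached to the VPC"
  default     = "igw-0123456789abcdef0"
}

# Subnet CIDRs must fall inside the VPC CIDR (10.0.0.0/16 in our example)
variable "public_subnet_cidr" {
  default = "10.0.2.0/24"
}

variable "private_subnet_cidr" {
  default = "10.0.3.0/24"
}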

Now let’s take a look at the ocpNets/newNets.tf file to see how this is going to be built. The first section will create a “public” subnet and an associated route and route table pointing to the existing Internet Gateway. The second section will create our private subnet, route, and routing table, and will also stand up a NAT gateway to allow the machines built on the private subnet to reach the Internet for things like pulling down container images.
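
Condensed down, those first two sections follow this pattern (resource names are illustrative; see the repo for the complete file):

# Section 1: public subnet routed through the existing Internet Gateway
resource "aws_subnet" "ocp_public" {
  vpc_id     = var.vpc_id
  cidr_block = var.public_subnet_cidr
}

resource "aws_route_table" "ocp_public" {
  vpc_id = var.vpc_id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = var.aws_internet_gateway_id
  }
}

resource "aws_route_table_association" "ocp_public" {
  subnet_id      = aws_subnet.ocp_public.id
  route_table_id = aws_route_table.ocp_public.id
}

# Section 2: NAT gateway in the public subnet, and a private subnet
# that sends its outbound traffic through it
resource "aws_eip" "nat" {
  vpc = true
}

resource "aws_nat_gateway" "ocp_nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.ocp_public.id
}

resource "aws_subnet" "ocp_private" {
  vpc_id     = var.vpc_id
  cidr_block = var.private_subnet_cidr
}

resource "aws_route_table" "ocp_private" {
  vpc_id = var.vpc_id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.ocp_nat.id
  }
}

resource "aws_route_table_association" "ocp_private" {
  subnet_id      = aws_subnet.ocp_private.id
  route_table_id = aws_route_table.ocp_private.id
}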

The last section sets up a new security group, which handles the remaining configuration we need for incoming and outgoing network traffic. The rules listed here are based on those documented in the “Network access control” section of the install document here. We will be creating inbound rules for ports 80 and 443 for the OpenShift console and API access, as well as SSH.
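
A condensed sketch of such a security group (the resource name is illustrative, and the rules are trimmed to the ports called out above):

resource "aws_security_group" "ocp_ingress" {
  name   = "ocp-ingress"
  vpc_id = var.vpc_id

  # OpenShift console and API access
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # SSH
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}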

We can go ahead and apply these changes to our VPC now. If this is a freshly cloned repo, run terraform init first to download the required provider plugins, and then apply:

$ terraform init
$ terraform apply

Terraform will output a large amount of data. We are looking for the subnet IDs that are created for our public and private subnets. See the example below, and record this information; we will need it shortly for the OpenShift configuration.
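
At the end of the run, look for output along these lines (the output names and subnet IDs here are illustrative):

Outputs:

private_subnet_id = subnet-0123456789abcdef0
public_subnet_id = subnet-0fedcba9876543210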

Now that we have our two new subnets, let’s create an OpenShift cluster that leverages them. Following the default instructions, create an install-config file using the following command:

$ ./openshift-install create install-config --dir=<installation_directory>

If this is your first time using the openshift-install command, see the detailed instructions here.

Once your base install-config file is created, we will update it to use the new subnets. Using your favorite editor, open <installation directory>/install-config.yaml and find the “platform” section. Add a new subsection under the aws heading called “subnets” and then, following the format shown below, add the two new subnet IDs that we got from running the terraform command.
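
The platform section should end up looking something like this (the region and subnet IDs are placeholders; substitute your own values):

platform:
  aws:
    region: us-east-1
    subnets:
    - subnet-0123456789abcdef0
    - subnet-0fedcba9876543210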

We are now ready to install OpenShift using the existing VPC on the new subnets that we allocated for our use.

Run the OpenShift installer, pointing it at the installation directory that contains the install-config.yaml file we just updated, and wait (usually about 30 minutes):

$ ./openshift-install create cluster --dir=install
INFO Credentials loaded from the "default" profile in file "/Users/markd/.aws/credentials"
INFO Consuming Install Config from target directory
INFO Creating infrastructure resources...
INFO Waiting up to 20m0s for the Kubernetes API at https://api.testcluster.aws2.xphyrlab.net:6443...
INFO API v1.18.3+b74c5ed up
INFO Waiting up to 40m0s for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 30m0s for the cluster at https://api.testcluster.aws2.xphyrlab.net:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/markd/source/ocpbyovpc/testcluster/install/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.testcluster.aws2.xphyrlab.net
INFO Login to the console with user: "kubeadmin", and password: "byKIN-63ftx-tzf2a-iytZA"
INFO Time elapsed: 25m51s

That is it! You can now log into your OpenShift cluster.
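
For example, to verify access from the command line (the kubeconfig path will match your installation directory):

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
$ oc get nodes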

Conclusion

As you can see from this post, it is possible to install OpenShift using an existing VPC, with new subnets allocated for the cluster’s use.