The Assisted Installer is a project that simplifies OpenShift Container Platform (OCP) installation on a number of different platforms, with a focus on bare metal deployments. The service provides discovery and validation of the targeted hardware and greatly improves installation success rates. It also provides some of the benefits of an installer-provisioned infrastructure (IPI) installation, including simplified DNS requirements and VRRP-managed virtual IPs for the ingress and API entry points. It lacks other IPI benefits, such as machine-api and cloud-provider configuration (storage and load balancer features), but those can be added as day-2 cluster operations. The Assisted Installer application can be hosted on podman or an existing OpenShift deployment, or accessed via the SaaS portal offered by Red Hat.

This article uses Red Hat's SaaS portal to deploy an OCP cluster that runs its control plane nodes on VMware virtual machines and its compute resources on bare metal Dell PowerEdge servers. The benefits of virtualization include high availability of control plane nodes via affinity rules, resource consolidation on physical hosts, and ease of backup and restore. This reserves the bare metal assets for compute workloads, where application performance is the focus.

Prerequisites

  1. Shared VLAN: The assisted installer requires all nodes to be on the same VLAN. This is a consequence of using a virtual IP for both ingress and API.
  2. DHCP: As with an IPI installation, DHCP is required on the VLAN above.
  3. DNS: Records for ingress and API are required to access the cluster. The assisted installer can create these records for the user via Route 53, but for this article the following DNS records were created in advance.
[root@rh8-tools ipi]# host -l e2e.bos.redhat.com | grep hybrid
api.hybrid-ocp.e2e.bos.redhat.com has address 10.x.y.40
*.apps.hybrid-ocp.e2e.bos.redhat.com has address 10.x.y.41
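Before starting an installation, it is worth confirming that both records resolve. The sketch below is a hypothetical pre-flight check (not part of the assisted installer itself); the cluster name and base domain match this article's example, and getent is used instead of host so the check works without bind-utils:

```shell
#!/bin/bash
# Hypothetical pre-flight DNS check; values match this article's example cluster.
CLUSTER_NAME="hybrid-ocp"
BASE_DOMAIN="e2e.bos.redhat.com"

# The API record plus any name under the wildcard ingress record must resolve.
for record in "api.${CLUSTER_NAME}.${BASE_DOMAIN}" \
              "console-openshift-console.apps.${CLUSTER_NAME}.${BASE_DOMAIN}"; do
    if getent hosts "${record}" >/dev/null; then
        echo "OK:   ${record}"
    else
        echo "FAIL: ${record} does not resolve"
    fi
done
```

If either lookup fails, fix DNS before generating the discovery ISO; the installation validations will flag unreachable VIP records later regardless.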

Accessing the Assisted Installer service and Creating a New Cluster to Deploy

First, log on to the portal using Red Hat credentials. This auto-populates the pull secrets required to pull container images associated with the account. Once logged in, click the “Create New Cluster” button to begin configuring the new OCP install.

 

Fill in the desired cluster name and the version of OpenShift to deploy. In the DNS example above, hybrid-ocp is the cluster name. The next screen is where the bulk of the configuration for the cluster installation is entered.


The base domain will be the same one used in the previous DNS query, and once hosts have been discovered, the available subnets will be listed in the dropdown. Enter the SSH key that should be added to the cluster nodes via the ignition process, then generate the discovery ISO using the button.

 

Once the discovery image is created, a wget command is provided to download it. Stage the file in /tmp/images and name it installer-image.iso.

[root@rh8-tools images]# cd /tmp/images/
[root@rh8-tools images]# wget -O 'installer-image.iso' 'https://s3.us-east-1.amazonaws.com/assisted-installer/discovery-image-5c8165b0-c973-4485-9765-5825f405d989.iso?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA52ZYGBOVI2P2TOEQ%2F20201005%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201005T155756Z&X-Amz-Expires=14400&X-Amz-SignedHeaders=host&X-Amz-Signature=37c8945e8eedd1a66c111489d551d7a250da9650e19554d9245bc646ae8658fc'
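As a quick sanity check, confirm the image staged correctly before the automation tries to use it (a hypothetical helper; the path matches the convention above):

```shell
#!/bin/bash
# Verify the discovery image was staged where the automation expects it.
ISO="/tmp/images/installer-image.iso"

if [ -s "${ISO}" ]; then
    echo "staged: ${ISO} ($(du -h "${ISO}" | cut -f1))"
else
    echo "missing or empty: ${ISO} -- re-run the wget above"
fi
```

Note the signed S3 URL expires (X-Amz-Expires), so a stale link will download an error body rather than the ISO; the size check catches that case.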

Discovering the Bare Metal Resources and Creating the Target VMs

Clone the assisted-test-infra repository and create the Ansible inventory used to discover and create the nodes. The playbook uploads the discovery ISO to a VMware datastore specified in the inventory file, and it also hosts the discovery ISO on a web server container so the Dell servers can boot it via the iDRAC. The bare metal hosts take longer to show up in the inventory because of reboots and hardware self-tests; the virtual machines should show up very quickly.

[root@rh8-tools ~]# git clone https://github.com/openshift/assisted-test-infra
[root@rh8-tools ~]# cd assisted-test-infra/ansible-bm-install

A base Ansible inventory is included in the assisted-test-infra repo to use for booting the target hosts. Modify this inventory to include the required VMware variables and iDRAC IP addresses and login information for the Dell servers. Note the VMware VMs are in the masters group while the bare metal servers are in the workers group.

[root@rh8-tools assisted-test-infra]# cp hosts_without_os.sample ~/hybrid_cluster
[root@rh8-tools assisted-test-infra]# vim ~/hybrid_cluster
[all:vars]

vcenter_hostname="vcsa.vcenter.e2e.bos.redhat.com"
vcenter_username="administrator@vsphere.local"
vcenter_password="password"
vcenter_datacenter="Boston"
vcenter_datastore="aos-vsphere"
vcenter_cluster="e2e"

[assisted_installer]
# This will be the address of the deployment host
10.19.0.1

[masters]
hybrid-vm-master-0 role=master vendor=VMware
hybrid-vm-master-1 role=master vendor=VMware
hybrid-vm-master-2 role=master vendor=VMware
[workers]
hybrid-bm-worker-0 role=worker bmc_user=root bmc_password="drac-password" bmc_address="10.x.y.84" vendor=Dell
hybrid-bm-worker-1 role=worker bmc_user=root bmc_password="drac-password" bmc_address="10.x.y.85" vendor=Dell
hybrid-bm-worker-2 role=worker bmc_user=root bmc_password="drac-password" bmc_address="10.x.y.86" vendor=Dell
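Each Dell entry in the workers group needs bmc_user, bmc_password, and bmc_address for the iDRAC virtual-media boot, so a small pre-run check can catch an omission early. This is a hypothetical helper, not part of the repository:

```shell
#!/bin/bash
# Scan the [workers] section of the inventory for entries missing BMC details.
INV="${HOME}/hybrid_cluster"

if [ -f "${INV}" ]; then
    awk '/^\[workers\]/ {w=1; next}
         /^\[/          {w=0}
         w && NF && !/bmc_address/ {print "missing bmc_address: " $1}' "${INV}"
    echo "worker BMC check complete"
else
    echo "inventory not found at ${INV}"
fi
```

A missing bmc_address will otherwise only surface mid-playbook, after the VM creation tasks have already run.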

Now that the inventory is complete, run the playbook to create and boot the nodes. The playbook offers other automation as well, but here the “host_iso” tag stages the discovery image into containers that the “boot_iso” tag then uses to boot the target hosts.

[root@rh8-tools ansible-bm-install]# ansible-playbook -i ~/hybrid_cluster playbook_assisted_installer_without_os.yml --tags "host_iso, boot_iso"

..omitted..

=======================================================================
okd.assisted_installer.host_iso : Remove the VMs if present to clear locks on mounted ISO --------------------------------------------------------------------------------------------------------------------- 45.10s
okd.assisted_installer.host_iso : Upload Discovery ISO to aos-vsphere --------------------------------------------------------------------------------------------------------------------- 21.76s
okd.assisted_installer.boot_iso : Discovery iDRAC versions for Dell hardware --------------------------------------------------------------------------------------------------------------------- 8.45s
okd.assisted_installer.boot_iso : Discovery iDRAC versions for Dell hardware --------------------------------------------------------------------------------------------------------------------- 8.43s
okd.assisted_installer.boot_iso : Discovery iDRAC versions for Dell hardware --------------------------------------------------------------------------------------------------------------------- 8.03s
okd.assisted_installer.host_iso : Install podman --------------------------------------------------------------------------------------------------------------------- 5.88s
Gathering Facts --------------------------------------------------------------------------------------------------------------------- 4.79s
okd.assisted_installer.boot_iso : Create a virtual machine to boot from the discovery ISO --------------------------------------------------------------------------------------------------------------------- 4.50s
okd.assisted_installer.boot_iso : Create a virtual machine to boot from the discovery ISO --------------------------------------------------------------------------------------------------------------------- 4.38s
okd.assisted_installer.boot_iso : Create a virtual machine to boot from the discovery ISO --------------------------------------------------------------------------------------------------------------------- 3.97s
okd.assisted_installer.host_iso : Start samba container --------------------------------------------------------------------------------------------------------------------- 3.54s
okd.assisted_installer.host_iso : Start RHCOS image cache container --------------------------------------------------------------------------------------------------------------------- 3.05s
okd.assisted_installer.host_iso : Enable samba and samba-client for podman container --------------------------------------------------------------------------------------------------------------------- 2.59s
Gathering Facts --------------------------------------------------------------------------------------------------------------------- 2.24s
okd.assisted_installer.boot_iso : Racadm container to mount and boot to discovery ISO --------------------------------------------------------------------------------------------------------------------- 2.13s
okd.assisted_installer.boot_iso : Racadm container to mount and boot to discovery ISO --------------------------------------------------------------------------------------------------------------------- 2.12s
okd.assisted_installer.boot_iso : Racadm container to mount and boot to discovery ISO --------------------------------------------------------------------------------------------------------------------- 2.11s
okd.assisted_installer.host_iso : Check for Discovery ISO in aos-vsphere --------------------------------------------------------------------------------------------------------------------- 2.06s
okd.assisted_installer.host_iso : Enable Services (firewalld) --------------------------------------------------------------------------------------------------------------------- 1.73s
okd.assisted_installer.host_iso : Open port 8080/tcp, zone public, for podman container --------------------------------------------------------------------------------------------------------------------- 1.50s

Once the playbook finishes, the host inventory in the portal should resemble the following.

 

Assign the appropriate roles and hostnames to the systems, then scroll down and add the API and ingress VIPs below. The assisted installer can auto-assign roles based on hardware; in this case, specify the virtual machines as the control plane. There should be three control plane nodes for high availability of the OCP cluster. Select “Validate & Save Changes” and the cluster will be ready to install. Now click “Install Cluster” and the bare metal/VM cluster installation will begin.

 

Finishing the deployment and exploring the new OpenShift Cluster

One of the masters is used to bootstrap the cluster and is then converted into a regular master node. Once the cluster installation is complete, the kubeconfig is available to download, along with a link to the cluster console.

 

[root@rh8-tools ~]# oc get nodes
NAME                  STATUS   ROLES    AGE     VERSION
hybrid-bm-compute-0   Ready    worker   8m36s   v1.19.0-rc.2+f71a7ab-dirty
hybrid-bm-compute-1   Ready    worker   8m33s   v1.19.0-rc.2+f71a7ab-dirty
hybrid-bm-compute-2   Ready    worker   8m16s   v1.19.0-rc.2+f71a7ab-dirty
hybrid-vm-master-0    Ready    master   21m     v1.19.0-rc.2+f71a7ab-dirty
hybrid-vm-master-1    Ready    master   21m     v1.19.0-rc.2+f71a7ab-dirty
hybrid-vm-master-2    Ready    master   8m17s   v1.19.0-rc.2+f71a7ab-dirty

[root@rh8-tools ~]# oc version
Client Version: 4.5.5
Server Version: 4.6.0-0.nightly-2020-08-31-220837
Kubernetes Version: v1.19.0-rc.2+f71a7ab-dirty

[root@rh8-tools ~]# oc adm top node --heapster-namespace=openshift-infra --heapster-scheme=https
NAME                  CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%  
hybrid-bm-compute-0   606m         1%     4827Mi          3%       
hybrid-bm-compute-1   493m         1%     4648Mi          3%       
hybrid-bm-compute-2   439m         1%     4071Mi          3%       
hybrid-vm-master-0    1127m        32%    5235Mi          35%      
hybrid-vm-master-1    1442m        41%    5335Mi          35%      
hybrid-vm-master-2    1139m        32%    3991Mi          26%

[root@rh8-tools ~]# oc describe node hybrid-vm-master-0 | grep -A5 Capacity
Capacity:
 cpu:                4
 ephemeral-storage:  156734444Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             16410844Ki

[root@rh8-tools ~]# oc describe node hybrid-bm-compute-0 | grep -A5 Capacity
Capacity:
 cpu:                40
 ephemeral-storage:  585509828Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             131781812Ki

The output above shows the difference in hardware between the bare metal and virtual nodes. The Ansible role for deploying the virtual machines has tunable variables if the memory or CPU allocation for the VMs needs to be adjusted. Additionally, anti-affinity rules could be configured to keep the master VMs on separate hypervisors.
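As a sketch of that last point, an anti-affinity rule for the three master VMs could be applied with the community.vmware collection. This task is an assumption, not part of the assisted-test-infra playbooks; the hostname, credentials, and VM names mirror the inventory earlier in the article:

```yaml
# Hypothetical day-2 task: keep the control plane VMs on separate ESXi hosts.
- name: Create anti-affinity rule for the master VMs
  community.vmware.vmware_vm_vm_drs_rule:
    hostname: "vcsa.vcenter.e2e.bos.redhat.com"
    username: "administrator@vsphere.local"
    password: "password"
    cluster_name: "e2e"
    drs_rule_name: "hybrid-ocp-master-anti-affinity"
    vms:
      - hybrid-vm-master-0
      - hybrid-vm-master-1
      - hybrid-vm-master-2
    affinity_rule: false   # false = anti-affinity (keep the VMs apart)
    enabled: true
```

With this in place, a single hypervisor failure takes down at most one member of the etcd quorum.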

In closing, the Assisted Installer is an excellent tool for deploying OpenShift clusters, bridging the gap between user-provisioned and installer-provisioned infrastructure. In the use case above, we combine the benefits of an existing virtualization environment with the brawn and capacity of bare metal for intensive compute jobs.

To get started with the assisted installer today, visit the portal and get deploying! For more information on the assisted installer, check out the following YouTube video.