There are a number of ways to install a local OpenShift cluster. Minishift is a popular tool for provisioning and installing OpenShift on a VM. For a lighter-weight solution, the oc client tool can start OpenShift directly inside your machine's Docker engine.

While these are great options for a quick and simple installation, lately I've found myself playing with more complicated installations, including multi-node clusters and custom DNS entries for each application. Not surprisingly, these sorts of installations require more in the way of prerequisites. One such prerequisite is the need for a DNS server.

At KubeCon earlier this year, the Cloud Native Computing Foundation featured all of the projects under their umbrella. Given OpenShift's relationship to Kubernetes, I wanted to continue playing with the CNCF catalog and give CoreDNS a try.

The Setup

My goal was to set up the following environment:

  • 1x DNS Server
  • 1x Master (will host infrastructure as well)
  • 2x Application Nodes

Because my manager won't give me a blank check for hardware, I'll be doing this using VMs on a single machine in my lab. Keep in mind that I work from home, so the term "lab" is used pretty loosely here.

Each VM is running CentOS 7 Minimal and is bridged to my home network for simplicity (details on the IPs are found below in the DNS configuration).

CoreDNS

CoreDNS installation is pretty straightforward. I downloaded the v006 release (note: v007 has since been released and I've seen entirely too few James Bond references to it) from their GitHub releases page. The release contains a single binary, which is both kinda cool and very handy for a local cluster.
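
Getting it onto the machine amounts to pulling the Linux tarball from the release page and unpacking the binary; something like the following, where <asset-name> stands in for whichever Linux tarball the v006 release actually lists:

# wget https://github.com/coredns/coredns/releases/download/v006/<asset-name>
# tar xzf <asset-name>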

Not surprisingly, that binary doesn't do much without some sort of configuration. The main configuration is done in a file named Corefile that I keep in the same directory as the coredns binary. The contents of my Corefile are below:

doblabs.io:53 {
    log stdout
    file /root/core/doblabs.io
}

.:53 {
    proxy . 8.8.8.8:53
    log stdout
}

In this case, I'm defining the domain doblabs.io because:

  • .io domains are very fashionable right now
  • If Batman gets Wayne Tech and Iron Man gets Stark Tech, it placates the geek in me to show the same narcissism and include a portion of my last name in the domain

The general breakdown of this file is to serve anything under doblabs.io according to the zone data in /root/core/doblabs.io and to forward all other requests on to Google's public DNS server.

That means that the more interesting parts are found in /root/core/doblabs.io:

$TTL    1M
$ORIGIN doblabs.io.

openshift IN A 192.168.1.40
master IN A 192.168.1.40
node1 IN A 192.168.1.41
node2 IN A 192.168.1.42

*.apps IN CNAME master

There aren't too many surprises in that file. My VMs each have static IPs and I've included an extra record to call my master node "openshift" because old habits die hard.

The CNAME entry is for serving my deployed applications. It's defined as a wildcard that resolves anything under the .apps.doblabs.io domain to the master node, where my router is deployed (for simplicity, I didn't split off a separate infrastructure node).

Before starting the server, I made sure the firewall was configured to allow incoming DNS traffic.
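On CentOS 7 that means poking a hole with firewalld; a minimal sketch (assuming the interface sits in the default zone) looks like:

# firewall-cmd --permanent --add-service=dns
# firewall-cmd --reload

With port 53 open, the last step is to run the executable to start the DNS server: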

# ll
total 32272
-rwxrwxr-x. 1 1001 1001 33034304 Feb 22 16:24 coredns
-rw-r--r--. 1 root root 111 Apr 12 09:57 Corefile
-rw-r--r--. 1 root root 210 May 23 08:53 doblabs.io

# ./coredns
doblabs.io.:53
.:53
2017/05/23 10:05:24 [INFO] CoreDNS-006
CoreDNS-006

Being the disciplined engineer that I am (read: I don't trust myself), I ran a few tests on the server to make sure things were resolving correctly:

# dig @192.168.1.5 openshift.doblabs.io A +short
192.168.1.40

# dig @192.168.1.5 master.doblabs.io A +short
192.168.1.40

# dig @192.168.1.5 node1.doblabs.io A +short
192.168.1.41

# dig @192.168.1.5 batman.apps.doblabs.io A +short
master.doblabs.io.
192.168.1.40

# dig @192.168.1.5 ironman.apps.doblabs.io A +short
master.doblabs.io.
192.168.1.40

The first three tests show that my static A records resolve correctly. The last two show the CNAME wildcard resolution for the .apps.doblabs.io domain. That will allow all of my deployed applications to resolve to the OpenShift router, which can then forward to the appropriate service.

The last step is to ensure that all of the VMs are configured to use the CoreDNS server:

# cat /etc/resolv.conf
nameserver 192.168.1.5
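
One gotcha worth flagging: on CentOS 7, NetworkManager may rewrite /etc/resolv.conf on the next boot, so a hand edit isn't guaranteed to stick. A sketch of persisting the setting through NetworkManager, assuming a connection named eth0 (substitute your own connection name):

# nmcli con mod eth0 ipv4.dns 192.168.1.5
# nmcli con up eth0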

OpenShift Ansible Installer

I won't go into a ton of detail on using the OpenShift Ansible Installer in this post. My full inventory configuration can be found on my GitHub account and more information on configuring the installer can be found in the OpenShift documentation.

In that file, there are two lines that are relevant to this post:

openshift_public_hostname=openshift.doblabs.io
openshift_master_default_subdomain=apps.doblabs.io

The first sets the DNS name that will be used to access the APIs and web console.

The second configures the default router that is created during the installation. When a new route is created, if the hostname is not explicitly set, OpenShift will generate one using the project name, service name, and the subdomain specified here. This subdomain should refer to the wildcard record defined in the CoreDNS configuration.
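
For context, both of those variables live in the [OSEv3:vars] section of the inventory. The sketch below shows only the shape of the hosts layout for this topology; it leaves out most of the required variables (deployment type, authentication, node labels, and so on), so treat it as an outline rather than a working inventory:

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_public_hostname=openshift.doblabs.io
openshift_master_default_subdomain=apps.doblabs.io

[masters]
master.doblabs.io

[etcd]
master.doblabs.io

[nodes]
master.doblabs.io openshift_schedulable=true
node1.doblabs.io
node2.doblabs.io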

Deploying an Application

Once the installer finished, I deployed a simple test application to verify that everything worked.

# oc new-project web
Now using project "web" on server "https://openshift.doblabs.io:8443".

You can add applications to this project with the 'new-app' command. For example, try:

oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.

# oc new-app jdob/python-web
--> Found Docker image f983418 (5 months old) from Docker Hub for "jdob/python-web"

* An image stream will be created as "python-web:latest" that will track this image
* This image will be deployed in deployment config "python-web"
* Port 8080/tcp will be load balanced by service "python-web"
* Other containers can access this service through the hostname "python-web"

--> Creating resources ...
imagestream "python-web" created
deploymentconfig "python-web" created
service "python-web" created
--> Success
Run 'oc status' to view your app.

# oc expose service python-web
route "python-web" exposed

# oc get routes
NAME         HOST/PORT                        PATH      SERVICES     PORT       TERMINATION   WILDCARD
python-web   python-web-web.apps.doblabs.io             python-web   8080-tcp                 None

# curl http://python-web-web.apps.doblabs.io
Hello World

Notice that OpenShift generated the route hostname using the subdomain configured in the installer.
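
The generated name isn't mandatory, by the way. Since the wildcard CNAME catches anything under apps.doblabs.io, a route can be given an explicit hostname instead, along these lines (the --name flag avoids colliding with the existing python-web route):

# oc expose service python-web --name=batman --hostname=batman.apps.doblabs.io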

Conclusion

This is obviously a very simple, minimal DNS configuration and there is much more that can be done with CoreDNS. But for the purposes of a local, non-production cluster, it was quick to stand up and customize for my environment.