Why another blog about Kubernetes? What would a middle-aged Frenchman living in Australia have to say about it that’s not already been done to death?
Bonjour. A couple of years ago, through a lucky encounter, I started to get really interested in software and the open source ecosystem.
Before that, my journey in the IT industry had been very typical. I began 20 years ago, at the bottom of the OSI model as an optical network engineer (in the early days of WDM systems) and slowly and painfully climbed my way up the stack, following the technology as it evolved. I learned about Ethernet and IP, then MPLS, and then data centers, virtualization, and storage technologies.
For most of my career, I had to rely on vendors to do the work for me, and had to compromise on designs, architectures, and implementations because specific product features were always missing (“It’s in the next release of the roadmap.”), or my use cases were supposedly too unique to be developed.
Then, through this one lucky encounter, I realized that by using open source software and developing or personally tweaking some of the bits for my use cases, I could do the whole lot myself! I was also able to do it a lot more quickly than what I had been used to for years.
The only secret was to be ready to learn, and to learn how to learn again. I just had to be willing to take the first step!
So, as part of my recent move to Red Hat (one month exactly today), I have decided to start writing blogs about my experience with technology and how I have gone about learning it. I hope this will be beneficial to other people. Maybe you face a situation similar to what I encountered a few years ago.
Today and for this first blog, I’d like to talk a bit about Kubernetes and the Red Hat OpenShift platform around it.
All right, so what’s this Kubernetes thing that everyone keeps talking about?
One of the key requirements of my current role at Red Hat is to explain and demonstrate the benefits of Kubernetes and the Red Hat platform built around it called OpenShift.
So let’s go for a very quick description of what Kubernetes is from the official documentation.
At a high level, Kubernetes is an orchestration platform for containers. It is split between worker nodes (which carry the containerized workloads/applications) and master nodes (the “brain,” which keep the environment running by managing the workers).
Sitting on those nodes are Pods (groups of one or more containers). Pods are effectively the containerized applications and the base unit that Kubernetes deals with: it places them on nodes and starts and stops them.
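To make that concrete, here is a minimal sketch of a Pod definition. The names and the image are placeholders I picked for illustration, not anything from a real deployment:

```yaml
# A minimal Pod: a single container running an nginx web server.
# (Illustrative only: the name, labels, and image are arbitrary examples.)
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
```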
Those Pods are grouped and exposed to the rest of the Kubernetes environment via Services. A Service is basically an abstraction that defines a logical set of Pods and a policy by which to access them.
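As a sketch of that idea (again with placeholder names), a Service selects Pods by label and gives them a single stable endpoint inside the cluster:

```yaml
# A Service that selects all Pods labelled app=hello and
# exposes them inside the cluster on port 80.
# (Illustrative only: names and labels are arbitrary examples.)
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 80
```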
Finally, end users need to access the applications running on Kubernetes, and for this we use an Ingress. An Ingress exposes HTTP and HTTPS routes from outside the cluster to Services within the cluster.
This is described in the following figure:
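To tie these objects together, here is a hedged sketch of a minimal Ingress. The hostname and Service name are placeholders of my own, and a real cluster would also need an Ingress controller deployed for this to take effect:

```yaml
# An Ingress routing HTTP traffic for hello.example.com
# to a hypothetical Service named hello-svc.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  rules:
  - host: hello.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-svc
            port:
              number: 80
```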
What is interesting to note is that Kubernetes itself does not tell you how to deploy source code (from a repository like GitHub, for example) or how to build your application. It does not even dictate what to use for logging, monitoring, or alerting.
It basically leaves this all up to you!
Here comes OpenShift ...
And this is where the Red Hat OpenShift platform comes in handy. It is a wrapper around Kubernetes that makes managing it easier, removing the need for platform owners (for example, you) to integrate all the required management tools on their own.
It is entirely open source but has been tested and validated with various technologies and third-party solutions. It also contains an OperatorHub where Kubernetes operators (methods of packaging, deploying, and managing a Kubernetes application) are made available for deployment.
This is highlighted in the following figure (taken from here):
As you can see, it is built with Kubernetes as its core component, but it wraps around it many capabilities that the teams managing the environment would otherwise have to assemble separately.
OpenShift also allows developers to bring their favorite tools and processes to the environment. They can pick from a choice of applications (look in the application services box) that can be deployed as well as the languages and software tools (look in the developer services box) they normally use to do their work.
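Operators from the OperatorHub mentioned above are typically installed by creating a Subscription object. Here is a hedged sketch; the operator name, channel, and catalog source below are placeholders, so browse the OperatorHub catalog in your own cluster for real values:

```yaml
# A Subscription asking the Operator Lifecycle Manager to install
# and keep updated a (hypothetical) operator from a catalog source.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: openshift-operators
spec:
  channel: stable            # update channel to follow (example value)
  name: my-operator          # operator package name (placeholder)
  source: redhat-operators   # catalog the operator comes from
  sourceNamespace: openshift-marketplace
```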
Now, for the practical side, what do you need to install OpenShift?
The first two things you need to be able to install OpenShift are:
- A domain name: any domain will do; you can use cloud services like Route 53 on AWS, Cloudflare, or GCP, anything really.
- Some infrastructure to deploy it on: bare metal, virtual machines in a private environment (using KVM or ESXi, for example), or even cloud resources (like EC2 instances on AWS).
Once you’ve got those two sorted, you then have to try and deploy it. And this is where the fun starts (at least for me).
For OpenShift, there are essentially two modes of deployment:
- IPI (Installer Provisioned Infrastructure)
- UPI (User Provisioned Infrastructure)
IPI is essentially supported on cloud environments (like AWS, Azure, or Google Cloud) and is an automated way of deploying OpenShift. You simply download an installer on your machine and follow the instructions.
The install itself can take between 20 and 40 minutes but is very intuitive, especially if you decide to do a “default option” install without customisations (number of nodes, networking, etcetera).
I’m sharing here an example of an IPI install on AWS (you’ll need a Red Hat account to access this URL) [https://cloud.redhat.com/openshift/install/aws/installer-provisioned]. I’m using the melbourneocp.net domain and I have named my OpenShift cluster ocpaws:
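Roughly, the IPI flow on my machine looked like the sketch below. The directory name matches my cluster, but everything here should be adapted to your own domain, cluster name, and pull secret, and the commands only make sense once you have downloaded the installer from the Red Hat console:

```shell
# Run the installer; it prompts for platform (aws), region,
# base domain (melbourneocp.net in my case), cluster name (ocpaws),
# and your Red Hat pull secret, then provisions everything.
./openshift-install create cluster --dir=ocpaws --log-level=info

# When you are done, the same state directory lets you
# tear everything down again:
# ./openshift-install destroy cluster --dir=ocpaws
```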
For those who do not have access to cloud services or simply would rather do a build on their own hardware, the UPI method is another alternative. It is a lot more complex, especially for people beginning with OpenShift, as it requires many manual steps.
Luckily, there are some environments that have been nicely automated. Check this one if you’re interested.
Following the successful deployment of an OpenShift cluster, you get a URL (https://console-openshift-console.apps.clustername.domainname), a username (kubeadmin), and a password (a long, strange-looking string) to use for the initial login.
You can access the cluster via the CLI (for those of you who enjoy it) or via a web interface (GUI). For the CLI, you will need to install a client, while the GUI does not require one. My recommendation would always be to have the client installed, as it offers greater flexibility in some situations than the GUI alone.
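As a sketch of that first CLI session (the API URL below is from my own cluster, so substitute your cluster name and domain, and these commands obviously need a running cluster):

```shell
# Log in with the kubeadmin credentials printed at the end of the install.
oc login https://api.ocpaws.melbourneocp.net:6443 -u kubeadmin -p <password>

# A couple of quick sanity checks once logged in:
oc get nodes       # list the master and worker nodes
oc get pods -A     # list all Pods across all namespaces
```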
All right, now I’ve got OpenShift up and running; what should I do?
I think one of the first things I did after setting up my first OpenShift cluster was to get familiar with the web interface. You can switch between the Administrator and Developer modes (more on this in another blog). Because of my background (which is essentially around infrastructure), I have found myself more comfortable with the Administrator one, as it displays all of the layers of Kubernetes that we talked about (the nodes, the Pods, the Services, etcetera).
I am sharing here some screen captures of the OpenShift console view. And just to show how it links back to the Kubernetes concepts, notice that the console itself is a set of Pods running in the environment and exposed via Services and Routes (the Red Hat equivalent of a Kubernetes Ingress):
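Since Routes may be new to readers coming from vanilla Kubernetes, here is a hedged sketch of what one looks like. The Service name is a placeholder of my own; on a real cluster, OpenShift generates a hostname for the Route automatically if you do not specify one:

```yaml
# An OpenShift Route: the Red Hat equivalent of an Ingress,
# exposing a hypothetical Service named hello-svc outside the cluster.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-route
spec:
  to:
    kind: Service
    name: hello-svc
  port:
    targetPort: 80
```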
After exploring the console, I tried to deploy some simple applications in the environment. I have added a link for what I think is a good set of applications to explore (it will also show you some of the developer concepts), but there is plenty of great content available on the web for you to get started. Go and explore! The world of Kubernetes is all yours!