“Where is my server?” That is the joke I always hear from a colleague every time we talk about serverless. But all kidding aside, I believe this is what most people think about when the serverless topic comes up, and it leads me to wonder whether we all really understand the concept and the technologies that enable it.
In this article, I will try to provide some answers about what serverless is, and touch on some technologies behind the buzzword. In addition, I will introduce OpenShift Serverless, along with its features, and discuss why it should be considered as the preferred platform for your serverless workloads.
To understand OpenShift Serverless better, see this article for an OpenShift Serverless demo.
What is Serverless?
Serverless computing as we know it is a concept where you can build and run an application without having to manage a server, hence the quip “where is my server?”
As per the CNCF definition, serverless describes a finer-grained deployment model in which applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at that moment.
But serverless computing does not mean that you really don’t need servers and other components, such as networking, firewalls, and storage, to host and run your applications. Serverless computing still needs an operations team to maintain and manage the servers and networking components, and to perform provisioning, maintenance, updates, scaling, and capacity planning.
Instead, serverless computing provides a platform that abstracts away these operational requirements so the consumers of the platform - especially developers - can focus on writing business-critical applications while the operations team focuses on business-critical tasks.
Furthermore, serverless is actually a combination of Function as a Service (FaaS) and Backend as a Service (BaaS). FaaS is simply event-driven computing: simple code written by a developer that can be deployed without depending on the implementation of related parts. BaaS, in turn, is defined as an API-based service that provides infrastructure that can autoscale.
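To make the FaaS idea concrete, here is a minimal sketch of what a developer actually writes: a single handler that the platform invokes whenever an event arrives. The function name and event shape are illustrative, not tied to any specific FaaS provider.

```python
# A FaaS-style handler: the platform receives an event (an HTTP request,
# a Kafka message, a timer tick) and calls this function with its payload.
# The developer writes only this; scaling and invocation are the platform's job.
def handle(event: dict) -> dict:
    # Hypothetical event shape: {"name": "..."}
    name = event.get("name", "world")
    return {"message": f"Hello, {name}!"}
```

Everything around this function, such as provisioning, routing, and scale-to-zero, is handled by the platform, which is exactly the separation of concerns described above.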
Looking at the major public cloud providers, each has its own version of a serverless offering, and the options keep growing over time. The information below is based on the following survey:
IBM Cloud Functions: GA in 2016. IBM Cloud Functions supports JavaScript, Go, Python, Ruby, Java, and C#, and can be triggered by HTTP requests, messages from Apache Kafka or Message Hub topics, changes in Cloudant NoSQL DB tables, scheduled events, and virtually any type of external system, provided an integration has been created for it.
So what is OpenShift Serverless, and why should you consider it as the platform for your serverless workloads?
OpenShift Serverless is based on the Knative project and supports almost any containerized application, as it is designed to build on many of the baseline features of OpenShift. Beyond auto-scaling for HTTP requests, you can trigger serverless containers from a variety of event sources - Kafka messages, file uploads to storage, timers for recurring jobs, and more than 100 others such as Salesforce, ServiceNow, and email - powered by Camel K.
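Because OpenShift Serverless is Knative-based, deploying a serverless container comes down to a single Knative Service manifest. The sketch below uses the standard `serving.knative.dev/v1` API; the service name and image are illustrative placeholders, not part of any real project.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter                 # illustrative name
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/greeter:latest   # hypothetical image
          env:
            - name: TARGET
              value: "OpenShift Serverless"
```

Applying this manifest is enough to get a routable, auto-scaling (including scale-to-zero) service; Knative manages the underlying Deployment, Route, and Revision objects for you.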
Unlike the serverless offerings from the major public cloud providers, OpenShift Serverless removes lock-in concerns while letting you benefit from the features developed by the open source community.
In addition, when you think about serverless, you need an application that starts up quickly, responds quickly, and requires little memory and disk space. Quarkus is an ideal framework for this use case. Quarkus is a full-stack, Kubernetes-native Java framework made for Java virtual machines (JVMs) and native compilation, optimizing Java specifically for containers and enabling it to become an effective platform for serverless, cloud, and Kubernetes environments. More information about Quarkus can be found here.
OpenShift Serverless provides a comprehensive serverless platform that enables speed and agility with a low resource footprint. In short, it offers more than just the serverless primitives, because the features inherent to OpenShift are automatically inherited: deploying new application features or revisions, and performing canary, A/B, or blue-green testing with gradual traffic rollout, can all be done easily.
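As a sketch of how such a gradual rollout looks in practice, Knative Services support a `traffic` block that splits requests between named revisions. The service name, revision names, and image below are illustrative only; the field names follow the standard Knative Serving API.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter                 # illustrative name
spec:
  template:
    metadata:
      name: greeter-v2          # hypothetical new revision
    spec:
      containers:
        - image: quay.io/example/greeter:v2   # hypothetical image
  traffic:
    - revisionName: greeter-v1  # existing revision keeps most traffic
      percent: 90
    - revisionName: greeter-v2  # canary revision receives 10%
      percent: 10
```

Shifting the `percent` values over successive updates turns this into a canary or blue-green rollout with no extra tooling.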
Using OpenShift Serverless, developers enjoy a simplified experience for deploying applications and code on serverless containers, with the infrastructure abstracted away so they can focus on the code that matters. This also makes hybrid cloud readiness attainable, because you can run OpenShift on premises or on any public cloud. Most importantly, with the use of Operators, you can build loosely coupled, distributed applications that connect with a variety of built-in or third-party event sources and connectors.
The serverless space is rapidly evolving, which means serverless capabilities will likely change and grow with ongoing innovation from the open source community. To make sure you benefit from this rapid, open source innovation and to avoid lock-in, you should consider OpenShift Serverless as the platform of choice for serverless applications.
With OpenShift and the immutable architecture it brings to the table, you can attain a serverless implementation that handles more complex orchestration and integration patterns, combined with some level of state management. Essentially, serverless becomes just another application, since most enterprises will want to run a combination of serverless and non-serverless workloads.