The OpenShift Commons Gathering at KubeCon Seattle last week was packed with information on the past, present, and future of Red Hat OpenShift in all its forms. More than 350 people from over 115 companies around the world gathered at the event to hear about the future of the platform. The event even included the first live demo of Red Hat OpenShift 4.0, which is currently in development.
This was the first time the outside world got a glimpse of the OpenShift 4.0 platform in action. The goal for the platform, said Derek Carr, senior principal software engineer at Red Hat, is similar to the original goal of Kubernetes. While Kubernetes was built to enable a 10-fold increase in the velocity of application operations, the goal of OpenShift 4.0 is to provide a 10-fold increase in velocity for Kubernetes-based operations.
This can be seen in the refactoring of many platform services as Operators. OpenShift 4.0 is built from the inside out with Operators, providing a platform intended to be more amenable to rolling updates without causing service outages.
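The Operator pattern underpinning this is straightforward to sketch: a control loop watches a resource's observed state and reconciles it toward the declared desired state. The following toy reconciler is a conceptual illustration only, with hypothetical names; real Operators work against the Kubernetes API rather than plain dictionaries.

```python
# Toy illustration of the Operator pattern: a reconcile loop that
# compares desired state (the spec) with observed state (the status)
# and returns the actions needed to converge them.
# All names here are hypothetical; this is not OpenShift code.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to move observed state toward desired state."""
    actions = []
    want = desired.get("replicas", 0)
    have = observed.get("replicas", 0)
    if have < want:
        actions.append(f"scale up by {want - have}")
    elif have > want:
        actions.append(f"scale down by {have - want}")
    if desired.get("version") != observed.get("version"):
        # A rolling update replaces instances one at a time,
        # which is how updates can proceed without a service outage.
        actions.append(f"rolling update to {desired.get('version')}")
    return actions

print(reconcile({"replicas": 3, "version": "4.0"},
                {"replicas": 2, "version": "3.11"}))
```

A real Operator would apply these actions and then be re-triggered by the next watch event, looping until `reconcile` returns nothing, which is what lets the platform drive its own upgrades.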
Chris Wright, Red Hat CTO, said that one of the new trends OpenShift 4.0 is tracking is the move inside the community away from smaller numbers of large clusters, towards more numerous smaller clusters. This change in the ecosystem has spurred work on multi-cluster federation and coordination within the OpenShift community. While this work is not yet complete, Wright noted that this is a planned focus for future releases.
Keeping the Hot Side Hot
Behind all of the plentiful talk of the future of OpenShift were some interesting use case presentations from OpenShift customers USAA, GE, and Progressive, among others. Perhaps the most unusual of these use cases was from Vattenfall, the Swedish state-owned power company.
Balancing power generation might not initially sound like a software problem, but Schulze detailed the difficulties of prediction in power generation: the system must meet 100% of demand at all times, and storing power for later use is simply not a carbon-neutral option, due to the pollution created by the manufacturing of lithium-ion batteries, said Schulze. Thus, power consumption across Sweden must be made more predictable in order to provide reliable energy generation from less predictable sources, such as wind and solar.
Another factor affecting Vattenfall was the proliferation of data centers in Sweden. As the country is quite cold, data centers are easier to cool there. That very fact sparked a remarkable realization inside Vattenfall: CPUs convert nearly all of the power they draw into heat, an almost 1-to-1 ratio. That means every data center in Sweden was already generating megawatts worth of heat as part of its daily operations.
The resulting project at Vattenfall saw heat-harvesting devices installed in its data center racks. In order to handle all this custom hardware, Vattenfall is running OpenShift on OpenStack. “We decided we would build a test bed,” said Schulze. “We first went to Red Hat and said, ‘Maybe we can solve this purely on the software side.’ They immediately jumped on board because they could really align with our vision to build a sustainable digital infrastructure. They said, ‘You can do this with OpenShift,’ and we also found some other partners, like Cloud&Heat, and Nvidia was also excited: for them cooling these GPUs is a big problem.”
Building this heat-harvesting system required some new metrics and goals. “How do we define a sustainable digital infrastructure?” asked Schulze. “For us, we try to reuse 80% of the heat. We cool the chips with hot water, and it flows at an incredible speed. It is 55°F at intake, and it is 140 degrees at outflow. To manage this required distributing workloads; the datacenter doesn’t have a flat workload all the time. We had to manipulate the workloads. Sometimes when we really need heat, we ramp up artificial workloads to make heat. We want to solve this problem by concentrating workloads on the machines to generate heat.”
Finally, there was one last step in the process at Vattenfall, and it encompassed the last mile of datacenter technology: the physical servers. “In order to make the datacenter efficient you need to physically shut down systems,” said Schulze.
That means shutting down actual servers at the UPS level. The Vattenfall team has now accomplished this, and can shut down physical hardware on demand via automated processes in OpenShift.
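The article does not describe Vattenfall's exact tooling, but a drain-then-power-off flow can be sketched. In the sketch below, `kubectl drain` and `ipmitool` are real tools, while the command sequence itself is an assumption for illustration, not Vattenfall's implementation; the helper only builds the commands rather than executing them.

```python
# Hypothetical sketch of an automated node power-down flow:
#   1. cordon and drain the node so the cluster reschedules its workloads,
#   2. power the chassis off out-of-band via the server's BMC.
# kubectl and ipmitool are real CLIs, but this sequence is an assumed
# illustration, not Vattenfall's actual pipeline.

def shutdown_commands(node, bmc_host, bmc_user):
    """Return the shell commands (as argv lists) to drain a node and power it off."""
    return [
        # Evict workloads; DaemonSet pods can't be evicted, so skip them.
        ["kubectl", "drain", node,
         "--ignore-daemonsets", "--delete-emptydir-data"],
        # Out-of-band chassis power-off through the BMC over IPMI.
        ["ipmitool", "-I", "lanplus", "-H", bmc_host, "-U", bmc_user,
         "chassis", "power", "off"],
    ]

# Print the plan instead of running it (hypothetical node and BMC address).
for argv in shutdown_commands("worker-7", "10.0.0.7", "admin"):
    print(" ".join(argv))
```

In a real pipeline these commands would be executed by an automation step, and the draining order matters: workloads must be rescheduled before power is cut, or the shutdown itself would cause the very outage the platform is trying to avoid.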
One of the speakers at the OpenShift Commons Gathering who was, perhaps, a tad uneasy about the forthcoming OpenShift 4.0 was Jackie Chute, senior site reliability engineer at GE Digital. She said that, while she is excited about OpenShift 4.0, it may put her out of a job. That’s because Jackie currently does the work of setting up virtual machines for use inside OpenShift. OpenShift 4.0 begins laying the groundwork for automating and managing virtual machines, taking over much of her day-to-day work.
She’s part of a small team at GE Digital that has spent the past few years bringing on-demand cloud style provisioned systems to the broader GE organization. At the OpenShift Commons Gathering, she took the stage alongside fellow SRE Timothy Oliver, and staff infrastructure architect Jay Ryan. The three detailed the GE journey toward hybrid cloud infrastructure.
Jay said that the implementation of OpenShift at GE Digital was done in a services model. “We have fully automated OpenShift on AWS. We’re using Amazon Elastic Block Store. One of the things that made this such a great choice is that Red Hat lays out the architecture. They show you how it should run in production.”
Timothy said that their work is not the flashiest part of their day job. “Orchestration is not sexy: it’s just running containers. But that’s what you want. You want it back there doing the job,” he said, adding that reliable infrastructure for running cloud services enables innovation to happen inside each individual GE department. “Our customers are innovating,” said Jay. “The customers we have in the environment today are teaching us about Kubernetes, and asking about Operators and Helm and wondering how they can get in on the ground floor.”
More Ways To Win
Other talks at the OpenShift Commons Gathering covered topics ranging from continuous deployment, to security, to turning a monolithic application into microservices. Ankur Lamba, technical architect at USAA, detailed some of the work his team has done to bring security to their cloud-based applications. The effort took several steps along the way, but ended with OpenShift hosting the services that manage certificates across thousands of applications.
James McShane, on the other hand, said that HealthPartners has used OpenShift to speed up its software development processes. As a result, the company can now push an application to production in just 18 minutes, saving time for developers, operators, and everyone else with a stake in the application.
You can find all of the slides and full videos of the presentations from this OpenShift Commons Gathering elsewhere on the OpenShift Blog. If you missed out on this gathering in Seattle, your next chance to attend in person is at the OpenShift Commons Gathering in London on January 30 at Savoy Place. More information can also be found at Commons.OpenShift.org.