Tushar Katarki, Katherine Dube, William Markito, Ramon Acedo Rodriguez, Scott Berens, Jamie Longmuir, Mark Russell, Robert Love, Rob Szumski, Marc Curry, Jake Lucky, Duncan Hardie, Ali Mobrem, Tony Wu, Miguel Perez Colino, Christian Heidenreich, Kirsten Newcomer, Siamak Sadeghianfar, Patrick Strick, Karena Angell, Anandnatraj Chandramohan, Serena Nichols, Mike Barrett, Peter Lauterbach
Chris! Is this thing on? We're doing this! Welcome to the presentation, everybody. So some quick housekeeping tasks. Q&A, comments: do them in your YouTube, do them in your Twitch, in your Facebook, do them in the BlueJeans, doesn't matter where you're doing them, we're gonna get 'em answered. And if we don't know the answer, we're going to figure it out and send it to you after. This is a recorded call, so if you don't like that, go ahead and hang up. Next slide. So what is this presentation? Well, this presentation is the living release notes. It's the animated changelog, it's the "just tell me what's in the release." And it's always great. Now, it's not just me, I'm just kicking us off. I brought some of my favorite people; they're going to pop in, they're going to pop out. They're going to keep it interesting and lively. But I will warn you, once they start talking, they love to talk. So we're going to do our best to give you an on-time arrival; bear with us. Next slide. There's an awful lot in any given OpenShift release, but it's good to pull the gems out and just talk about the gems for a second. In this particular release, there's some amazing stuff in the installer, we tackle some really hard ground with the government clouds, and we do this bare metal IPI that, I tell you, when you hear what this thing does, it's unbelievable. OVN, the new networking; real monitoring for the users. It's all coming together in 4.6. Next slide. In terms of planning, and helping you figure out where we are in the larger release map, we're right here at 4.6. Now 4.6 is going to hit probably the end of October, beginning of November, so right in that time frame, right there in quarter four. We'll have two releases in the first half of next year, and then more in the second half. So definitely take your time, look at this slide, and then come back to us if you have any questions later on. Next slide. I will warn you that this is an exciting EUS release.
So everybody understands the OpenShift 4 release cycle: we support three releases at any given time. So if we're talking 4.6, traditionally, when 4.9 comes out in the second half of next year, 4.6 would drop out of support. With the EUS, we elongate that release cycle into 2022, so you get about 18 months. Now the layered products that sit on top, they'll either release an EUS release, or they'll make sure they still support 4.6 during the 4.6 lifecycle. So that's how that comes out. Next slide. Kubernetes: it's good to know the version of Kubernetes. This is Kubernetes 1.19. There are some really clever things that the community got into this particular release. I love the restricted admission controller, I love pod topology spread, and I love the scheduling profiles. But don't take my word for it; we're going to go into even more exciting ground in the coming slides, so let's not spend a lot of time here. Next slide. Spotlights! Chris, does this show not have music? Come on. This is the section where we're going to spotlight the features that you definitely need to know, so just pay attention. And with that, I'll hand it over to Ramon.
Ramon Acedo Rodriguez
Thank you, Mike. As Mike was introducing, in this release we are GA'ing bare metal IPI for OpenShift, which essentially is going to allow us to use the installer as we are used to with other cloud providers, but to deploy clusters on bare metal. This is something that we are GA'ing in 4.6, but that we have been building since 4.3, and since then we have been running pre-release candidates for bare metal IPI and improving the experience and the use case coverage. And as I was saying, the process is very similar to other cloud providers, right? What we are doing is effectively making a cloud provider out of plain bare metal nodes: we are exposing bare metal nodes as if they were virtual machines, to be used with the Machine API as you would with any other platform. Okay, in this process, and just to comment on the diagram we have here, what you are going to do is, given a provisioning node that will run the bootstrap VM, something that you're probably familiar with, we're going to run the installer, and the installation process will finish with the provisioning of the nodes into an OpenShift cluster. Let's move to the next slides and talk about how we do this magic. Essentially, bare metal management is power management and a few other operations, for example changing the boot order. When we say that we are managing a bare metal node, we are controlling these bare metal nodes out of band, through the BMCs, right? And how do we do this? Well, we use standard protocols, and we use underlying technologies that you probably have heard about. Metal³ (Metal Kubed) is our bare metal operator, right. And then we have OpenStack Ironic, used by Metal³, which is fully integrated into the bare metal operator, so we are able to use the experience that we gathered with the Ironic project over the years to manage bare metal nodes. And we will manage them using Redfish or IPMI. The installation happens over the network.
For provisioning over the network you can use PXE booting, DHCP, etc.: you have a provisioning network and you get the nodes provisioned in a way that I'm sure is very familiar for many of you. But you can also use virtual media, and virtual media is actually amazing, because you don't need a provisioning network. All you do, in fact, all the installer does, is present the installation media remotely to the nodes through the BMC as well. You probably have done this from your laptop when you're provisioning a node somewhere else, remotely. And then you proceed with the installation; the installer does that. This also works, and this is a very common question, that's why I mention it here explicitly, on disconnected installations: we have documented for you how to do the deployment when you don't have direct access to the internet as well. And here on the right, you can have a look at how we specify the hosts, how we tell the installer: hey, these are my nodes, and this is how you manage them, this is the URL for IPMI, for example, and my credentials. And with that, I'm gonna hand it over to Katherine.
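The host specification Ramon describes lives in install-config.yaml; here is a minimal sketch, assuming IPMI-managed nodes (the host names, MACs, IPs, and credentials are illustrative placeholders):

```yaml
# Illustrative fragment of install-config.yaml for bare metal IPI.
platform:
  baremetal:
    apiVIP: 192.168.111.5
    ingressVIP: 192.168.111.4
    hosts:
      - name: openshift-master-0
        role: master
        bootMACAddress: 52:54:00:aa:bb:cc
        bmc:
          address: ipmi://192.168.111.20:623   # or redfish://... for virtual media
          username: admin
          password: secret
```

Each entry under `hosts` tells the installer how to reach that node's BMC out of band, which is all it needs to power the machine on and provision it.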
Thank you. Thanks, Ramon. So, new for 4.6: US government agencies and their partners can now deploy OpenShift on AWS GovCloud, in particular the US East and US West regions. The install process is largely the same as deploying OpenShift in a commercial region, with two notable differences. The first one is that the AWS GovCloud region must be manually configured in the install config, since RHCOS images aren't published in those regions at this time, so you can't select the regions from the guided workflow. And the second difference is that the RHCOS images must be manually uploaded by the user prior to deploying OpenShift. All you do is take the AMI ID that you got from the upload and specify it in the install config, and the process for importing images will be outlined in the OpenShift product documentation. Next slide. So in conjunction with support for AWS GovCloud, we're also going to be enabling Microsoft Azure Government as well. Once again, the installation process is largely the same as deploying OpenShift in a commercial region, but with two minor differences. The first one is that the Azure Government cloud instance must be manually set in the install config using a new parameter called cloudName. And then the second one is that the region you'd like to deploy OpenShift to must be manually configured in the install config, and that's because we don't include a way to set the cloud instance from the guided workflow. However, you can still leverage both the installer-provisioned as well as the user-provisioned deployment methods once you customize the install config, so the installation can continue as normal. Next slide. So we're planning to introduce an on-premise version of Red Hat's hosted update service, called OpenShift Update Service. It's going to be released as an optional operator that you can deploy from the OperatorHub.
And it will allow users to host update graph information for clusters residing in a restricted network. The service itself is comprised of two underlying services. The first one is called the graph builder; this actually fetches OpenShift release payload information that's hosted in your local container registry and builds an upgrade graph based on the valid edges that are available. And the second one is the policy engine, which is responsible for selectively serving updates to clusters based on a set of filters, so if you want to exclude an edge, for instance. There's already a developer preview release available today; that's been announced in a recent blog post that is linked on the slide, and it's also available on the OpenShift blog website. The GA release is planned for a few weeks after, in about 60 days, so keep a lookout for that and it should be available soon. And then over to Tushar.
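Once the operator is installed, configuring the service is done through a custom resource. A rough sketch based on the developer preview follows; the API group, field names, and registry paths here are illustrative and may change by GA:

```yaml
apiVersion: updateservice.operator.openshift.io/v1
kind: UpdateService
metadata:
  name: update-service
  namespace: openshift-update-service
spec:
  replicas: 2
  # Mirror of the OpenShift release payloads in your restricted network
  releases: registry.example.com/ocp/release
  # Container image carrying the update graph metadata
  graphDataImage: registry.example.com/ocp/graph-data:latest
```

The graph builder reads release payloads from the mirror registry, and clusters in the restricted network point their upstream update URL at the service's policy engine.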
Thank you. The next one I'm talking about is the specifications for remote worker nodes. As many of you know, real-world remote locations are constrained by space and power and cooling to run servers and hardware, and as a result, OpenShift clusters are also constrained. This is common in telecom, but also in retail and transportation. With 4.6, we are documenting the specifications for remote workers. Here you see supervisor nodes in a central location managing workers at remote sites. The basic requirements for remote worker nodes are identical to any worker node: workers and supervisors should be part of the same routable network. And the network requirements have little to do with bandwidth or latency, but reliability has a huge impact, and that's where the caveats come into the picture. Also, you could have at any site as many workers as needed, right. So really, if a worker is disconnected from the supervisors, the workloads keep on working locally until the connection is back. When the connection comes back, if it comes back before a certain configurable time period, then the pods keep running and nothing happens. However, after that period, pods are basically rescheduled and therefore, you know, they may not be running on that particular remote location. To mitigate this behavior, we list here a set of options to consider: things such as zones with disruption budgets, taints and tolerations, daemon sets, and static pods. Each one of them has its own advantages and trade-offs for you to consider, and you'll find the trade-offs in our documentation. One thing to note, really, is that the primary testing for this has been done with UPI, although IPI will also work, and for bare metal. With that, over to the next speaker.
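The "configurable time period" Tushar mentions is a standard Kubernetes mechanism: a pod can tolerate the unreachable-node taint for a bounded time before it is evicted and rescheduled. A minimal pod-spec sketch (the 600-second value is just an example):

```yaml
# Pod spec fragment: keep the pod bound to a disconnected remote worker
# for 10 minutes before the control plane evicts and reschedules it.
tolerations:
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 600
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 600
```

Raising `tolerationSeconds` trades faster recovery elsewhere for stability at the remote site during short network blips.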
Okay, thank you, Tushar. Next up is to announce the general availability and support for a new primary cluster networking plugin: Open Virtual Network, or OVN. Now, this plugin is not the default OpenShift networking plugin for 4.6; out of the box that remains the same as it was in 4.5. However, OVN is easily implemented at install time by simply swapping out the plugin name that's in the install config from OpenShiftSDN to OVNKubernetes. For bare metal deployments, where it's not as easy as the public cloud to stand up new host instances, we provide migration tooling to go from the default OpenShift 4.6 networking to OVN. OVN will become the default networking plugin in a follow-on release. There are many reasons to choose this particular Kubernetes networking plugin, some of which are listed here in the lower left of the slide, but ultimately we were looking for a modern project with an active community that consolidated the networking across our different platforms, and that would also complement the existing capabilities of OVS. So is this a major change for our customers that they'll need to react to? Not really. If you compare some of the technical highlights of the two solutions, as depicted in the table on the right side of this slide, you'll see there are a lot of similarities between the two. The biggest differences are that OVN uses the Geneve overlay instead of VXLAN, and also that the solution eliminates a known scaling limitation, iptables. So that's not a bad thing. Overall, OVN represents a vastly improved networking platform for our customers that have advanced networking needs, especially and including the telco industry, and it's where all of our future development is happening. Okay, next slide, and Kirsten.
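Swapping the plugin name at install time is essentially a one-line change in install-config.yaml; a sketch with typical default CIDRs:

```yaml
networking:
  networkType: OVNKubernetes   # the default in 4.6 remains OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
```

This must be set before installation; for existing bare metal clusters, the migration tooling mentioned above handles the change instead.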
Hi, everybody. I'm really excited to announce that with 4.6 we will be shipping the OpenShift compliance operator, allowing you to automate the audit and control of technical controls to meet certain regulatory frameworks or security requirements. We'll be leveraging the same approach that you might be familiar with from RHEL. The operator, once deployed on your cluster, will use OpenSCAP in combination with a compliance profile that you select: you scan your cluster, assess whether or not the technical controls in that compliance framework are configured properly, and you get a report back as to whether they are or are not. Remediation of those technical controls is also automated through the compliance operator. At 4.6 GA, we will have a limited set of content available, focused primarily on technical controls for RHEL CoreOS. We will provide more details about exactly what those controls are in a future briefing, and we will be able to deliver additional compliance profiles incrementally in between releases, so you can expect an update of new content and new profiles about every six to eight weeks. Our next major target for a profile is the CIS benchmark for OpenShift. And I'd like to hand off to the next speaker.
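To give a feel for the workflow Kirsten describes, kicking off scans is done by binding a profile to scan settings; a rough sketch, where the profile and setting names are illustrative:

```yaml
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: moderate-compliance
  namespace: openshift-compliance
profiles:
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: rhcos4-moderate   # example RHEL CoreOS profile
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default
```

The operator then runs the OpenSCAP scans, produces per-check results, and can apply the associated remediations automatically.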
Thank you. So kicking off my list is probably one of the most desired extensions to our monitoring experience: leveraging our monitoring tools and infrastructure to monitor the workloads running in user-defined namespaces. I'm really happy to announce that this will be generally available for everyone. So how are we doing this? We will extend our stack with multiple new components and provide a multi-tenant interface that allows you to add your own scrape targets and query everything as if you were running your own Prometheus. Before this, customers had to implement their own stack for monitoring their workloads. And basically what that means is, if they had use cases where they have to constantly correlate metrics across the different layers, including the infrastructure components, they had to do it all themselves, so they couldn't get that full monitoring experience. Now they don't have to do that; we provide the infrastructure for them. A huge plus for customers. The way you enable that is as an admin. We still have the out-of-the-box monitoring experience, but it will only deliver the platform experience that we always shipped in previous releases. An administrator now has to enable the new infrastructure that is responsible for taking care of workload monitoring in user-defined namespaces, and there's only a flag that you need to add; then automatically our operator spins up every component that is necessary for you to use. So in the end, a developer only has to add a new ServiceMonitor into a user-defined namespace and we automatically scrape that information from the metrics endpoint exposed by your application. Then you can easily go to the developer perspective, switch to any of your namespaces, and query metrics from that same experience, and also look at alerts, and you can create silences. This is a truly multi-tenant experience for your monitoring.
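The admin flag and the developer's ServiceMonitor look roughly like this sketch (the app name, namespace, and label selector are illustrative; check the 4.6 docs for the exact flag name):

```yaml
# Admin side: enable monitoring for user-defined projects
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
---
# Developer side: scrape the application's metrics endpoint
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
  - port: metrics
```

Once the flag is set, the operator spins up the user-workload monitoring components, and any ServiceMonitor in a user namespace is picked up automatically.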
And we're even making it a little bit easier: if you've never done this before with ServiceMonitors, we actually added a quickstart guide into the console that you can use to understand how to easily add monitoring, or scraping, for your services. Next slide, please. Logging has gone through significant changes since OpenShift 4. I'm really happy to announce that we will finally make our new log forwarding API generally available as well, which includes a lot of exciting features, such as Kafka support and support for forwarding messages to an external Elasticsearch. We also added better and more secure syslog support. And, you know, to close out all these new great features, we also added a very simple mechanism where you can specify which logs from which namespaces go to their own location; that has also been a very frequent request. The goal for this API is to provide an easier, less failure-prone option to configure log forwarding than hacking through fluentd. Previously, you had to go to fluentd directly, you had to write some fluentd syntax code. That was not easy, you had to learn how to do it, and obviously there were no guarantees that if fluentd changed its protocol there would be backwards compatibility. All of that we will ensure with our new API, making it really easy for you to just keep using it even if we change protocols, underlying technologies, or anything else. So I'm pretty excited about that. And that's it from my side; I hand it over to William. Thank you.
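A rough sketch of the new log forwarding API, sending application logs to an external Kafka (the broker URL, topic, and names are placeholders):

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: app-kafka
    type: kafka
    url: tls://kafka.example.com:9093/app-topic
  pipelines:
  - name: application-logs
    inputRefs:
    - application        # could also be: infrastructure, audit
    outputRefs:
    - app-kafka
```

Pipelines connect log categories (or individual namespaces) to one or more outputs, which is how the per-namespace routing mentioned above is expressed.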
Thank you, Christian. We are very excited to announce the general availability of the eventing model of OpenShift Serverless, coming with OpenShift Serverless 1.11, released alongside OpenShift 4.6. The idea here is really to provide a mechanism for developers to build event-driven applications that can connect microservices, functions, or pretty much any containerized workload. Eventing provides that through, I'd say, these powerful constructs. One of them is the event source, which is the thing that translates the event from whatever native format it is in today to a CloudEvent, the CNCF spec that makes sure the event format is very consistent. And then from there, you can implement different patterns, consuming those events through a broker, which allows you to do routing and filtering based on event types and attributes; it can handle multiple event types and it's multi-tenant out of the box. And then you have another construct that is the channel, which essentially allows you to receive multiple events but then fan out, to send the same event types to all your applications, represented here by a sink. Next slide, please. Here you're going to see a very short animation of that experience. Again, you have a list of event sources provided out of the box that is extended also by Camel K. So I'm picking Telegram here as one example, and then from there you can receive those events and wire them through a channel, which can be backed by an in-memory implementation, very good for development, or an implementation like Kafka, which is of course more production-ready and can be easily deployed on your OpenShift cluster through AMQ Streams. With that, again, you can just drag and drop using the UI in the developer console, or use the kn CLI to build the same event-driven pipelines for your microservices. It's been a great pleasure to work on this feature, and I'm happy to see what you build with it.
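To make the broker-and-trigger pattern concrete, a filtered subscription might look like this sketch (the event type and service names are made up for illustration):

```yaml
# Deliver only 'order created' CloudEvents from the default broker
# to a Knative service acting as the sink.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-created-trigger
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created   # illustrative CloudEvent type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-handler
```

Each consuming application gets its own Trigger, so the broker can fan events out by type and attribute without the applications knowing about each other.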
Now I will transition to Scott from the ACM team.
Hey, thanks, William. Very exciting format. We're thrilled to announce our 2.1 release, which comes out in about a month. In that release, we'll be including an observability pillar, which is an exciting new opportunity for cluster health in the multi-cluster space. Next slide, please, Tushar. There's so much coming, and I've tried to narrow it down to just one slide. In the multi-cluster lifecycle space, we're bringing GA for vSphere and bare metal; that means deploying multiple clusters out using the Hive API with a fully declarative model. In the policy space, with governance, risk, and compliance, we're bringing an enhanced OPA integration, and look for that to continue with more work in the 2.2 release coming in February. We have an open source policy repository that helps you get the day 2 configurations right out of the box; things like OAuth, role bindings, and IAM are already there, ready to go, in a multi-cluster way. In the advanced application space, so many new things. The new and improved application experience really simplifies the entry point, where you define an application with placement rules that define how it spreads out to multiple clusters, and the subscription-and-channel model really speaks to the GitOps opportunity we have, in a multi-cluster way. A very nice entry point for any customers looking to start off with their GitOps in ACM. Looking at a broader set of features on the portfolio integration side, we bring Ansible pre and post hooks into the same application space. So as you're building an application, you have context for load balancers, ServiceNow tickets, things that touch your traditional IT, and you have Ansible tasks ready to go; we can integrate that, in tech preview, as part of the 2.1 release. Stay tuned next week, as AnsibleFest has a live demo and recordings that you can check out for more information on that Ansible integration.
And then finally, our fourth pillar, which is a revamp of our cluster health monitoring, brings in a well-architected and well-proven capability with Thanos to handle the scalability and long-term retention of metrics, as well as optimization and capabilities to view it all in the Grafana dashboard. We're bringing all that together in the 2.1 release; again, that's coming out in the early part of November. Next slide, please, Tushar. Just to blow your mind with an architecture slide: this is what it looks like with the pre and post hooks as we build Ansible into the application lifecycle. As you can see, those resources are located in a Git repository, and the subscription-and-channel model brings them out to the managed cluster. In this method, the OpenShift platform really gets to start speaking to Ansible in a way that makes sense from a standard infrastructure perspective, without having to focus on just the Kubernetes layer. Bringing these pieces together with pre and post hooks allows you to execute that Tower job in the same context as deploying an application, in a Kubernetes construct. This is a beautifully architected solution. Again, you'll see that demo next week during AnsibleFest, and we look forward to your feedback as we continue to roll this into the other lifecycle aspects of ACM in the future. With that, I'll hand it over to Patrick, who's going to take you through the managed services.
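The subscription-and-channel model Scott describes can be sketched as a pair of resources; the repository URL, namespaces, and placement rule name below are illustrative:

```yaml
# Channel: points at a Git repository holding the application resources
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: app-repo
  namespace: app-ns
spec:
  type: Git
  pathname: https://github.com/example-org/app-config.git
---
# Subscription: pulls from the channel and spreads to managed clusters
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: app-sub
  namespace: app-ns
spec:
  channel: app-ns/app-repo
  placement:
    placementRef:
      kind: PlacementRule
      name: all-clusters
```

The placement rule selects which managed clusters receive the application, which is what makes the entry point multi-cluster and GitOps-friendly.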
Thanks, Scott. Next slide, please. We have some really exciting news to share about managed OpenShift offerings. This includes the full spectrum: OpenShift Dedicated, Azure Red Hat OpenShift, Red Hat OpenShift on IBM Cloud, and the upcoming Amazon Red Hat OpenShift service. The big change is that all the worker nodes will be sold on a vCPU basis rather than per node. Additionally, the total price has been reduced by up to 75%. More details will be shared in a blog post over on blog.openshift.com, so you'll be able to read about it there, and you can also see details at the link that I have in the slide, openshift.com/pricing. For OpenShift Dedicated, we're also reducing the minimum number of worker nodes required to provision a cluster; in combination with the price change, it's now more affordable to get started with OSD. And then finally, we're increasing the SLA to 99.95%. These changes bring consistency across the full suite of our managed service offerings. Next slide, please. For OpenShift Dedicated and the upcoming Amazon Red Hat OpenShift, we're getting a few new features as well. First is a new UI so that you can schedule cluster upgrades to occur at the time and date that works best for you and your company: you pick the upgrade version, and you can schedule when you want that upgrade to kick off on your cluster. We're also adding the ability to create machine pools; these are availability-zone-aware machine sets, and they allow you to have mixed instance types in your OpenShift Dedicated cluster. The cluster history log that you may have already seen in OpenShift Cluster Manager is now integrated with our email notification service to help keep cluster admins aware of important cluster events. And last, you also have the ability to use your own encryption keys for disks used by the cluster. And with that, I'm going to hand it over to Jake Lucky.
Thanks, Patrick. So with OpenShift 4.6, we'll be bringing the same easy-to-install, managed OpenShift experience into the Azure Government cloud. Along with Microsoft Azure Government support, we're going to be looking at enabling features that support enterprise deployments. That includes allowing customers to lock down outbound traffic via firewall rules; bring-your-own-key disk encryption for both persistent volumes and operating system disks; and larger VM sizes, including, for the first time, dedicated instances. And finally, we'll be making OpenShift clusters even easier to deploy by bringing a new cluster-create user interface into the Azure portal. And with that, I'm going to hand it over to Rob to talk about the Operator Framework.
Thanks, Jake. So moving into our workload section, the big change on the operator front is a new bundle format. If you go to the next slide: this moves metadata about operators and their catalogs into a format that is built and shipped as a container image. This is versus the old App Registry API that we used before. This makes it super easy to build these catalogs with our opm tool, which you can see an example of on the slide, and all the default operators that ship with OpenShift have been moved over to this new format. Last, with this change we've made a shift in paradigm from a shared catalog that's available across all OpenShift versions to a catalog that is specific to each operator and an OpenShift release. This makes it really intentional: when a partner, a Red Hat team, or a community ships an operator on OpenShift, they know that it's supported and tested well on that release, versus the shared catalog that was shared across all of the OpenShift versions before. So we're really excited about that; it'll just bring a little bit more control for our partners, Red Hat teams, and other folks. With that, I'm gonna hand it over to Karena.
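For reference, a bundle is mostly plain manifest files plus an annotations file like the sketch below (the package and channel names are placeholders); an index image is then assembled from bundles with the opm tool:

```yaml
# metadata/annotations.yaml inside an operator bundle image
annotations:
  operators.operatorframework.io.bundle.mediatype.v1: registry+v1
  operators.operatorframework.io.bundle.manifests.v1: manifests/
  operators.operatorframework.io.bundle.metadata.v1: metadata/
  operators.operatorframework.io.bundle.package.v1: example-operator
  operators.operatorframework.io.bundle.channels.v1: stable
  operators.operatorframework.io.bundle.channel.default.v1: stable

# Example catalog build, run as a command rather than applied as YAML:
#   opm index add \
#     --bundles quay.io/example/example-operator-bundle:v0.1.0 \
#     --tag quay.io/example/example-index:latest
```

Because both the bundle and the index are ordinary container images, they can be mirrored and versioned with the same tooling as any other image.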
Thanks, Rob. So continuing on, bringing you more workloads: we've added Helm 3 support to OpenShift, which GA'ed in OpenShift 4.4, and now in 4.6 we bring you Helm 3.3 support, as well as support for multiple repositories in the developer catalog. Also, now you can select a chart version on install, and we will continue to add Helm features to the console and to the back end, so please stay tuned. Next slide, please. Red Hat Application Services: they are continuing on the path to bring you cloud-native support for Java through Quarkus, and Quarkus has now GA'ed native compilation support; everybody's been waiting for that. And remember to go to Quarkus.io for all things Quarkus. Another key feature is that the OpenShift extensions have now GA'ed. You can add the OpenShift extensions directly into your Maven project, and the extension by default is configured to use container image source-to-image (S2I); however, it can also be configured to use non-S2I builds, so that's really important as well. And Java Flight Recorder: you can use it with Java Mission Control to more effectively monitor and profile your applications; that's now in the Red Hat build of OpenJDK. For the Red Hat Integration team, one of the key features is bringing you air-gap support for your deployments. So now you can deploy 3scale API Management and Fuse into your air-gapped environments, whether you're in government, finance, or using it in another way. Also, you can now monitor, back up, and restore your 3scale API Management deployments. Another key feature is Camel K for serverless; that's tech preview and is covered in the serverless section. On the Process Automation side, there are enhancements to employee rostering for OptaPlanner. If you're using OptaPlanner, that's awesome: you can now select different rotations for your employee rostering.
And then also there's a standalone dashboard builder, so go play with the dashboard builder; you can create multiple dashboards. They put a lot of work into that, so go check it out. And now on to you, Miguel, for migrations.
Miguel Perez Colino
Hello, everyone. Thank you, Karena. We launched, some weeks ago, the Migration Toolkit for Applications, to analyze Java applications, whether binary or source code, and be able to bring them to new platforms, whether that is OpenJDK, JBoss Enterprise Application Platform, or containerizing them to bring them to OpenShift. So if you want to modernize your applications, this is the tool to go to. It will review your application and tell you, okay, what changes you need to make to bring it to RHEL, OpenJDK, OpenShift, or JBoss EAP, and you will be able to modify your application yourself and modernize it. We have also added new rules for Camel 2 to Camel 3: we have seen quite a lot of interest in Camel 3 and Camel K in several ways, and probably you have your own applications in Camel 2 and you want to modernize them, so we have added 147 rules in 13 rule sets to help there. If you want to consume this, you can do it in a web console that can be deployed on your laptop or on OpenShift. We also have a CLI in case you want to automate the checks and the reviews, and a Maven plugin to embed it in Maven itself, so whenever you do a build, you have all this analysis done. And of course your own IDE, whether it is Eclipse, CodeReady Studio, Visual Studio Code, or Eclipse Che, which is included as CodeReady Workspaces in OpenShift. So that's it, and now back to you, Peter.
Thanks, Miguel. OpenShift Virtualization is the ability to run virtual machines inside of OpenShift. This is a feature we introduced in OpenShift 4.5; we're getting a lot of traction with this, and the team's been super busy, so I want to highlight a couple of things that they've been focused on. The first one is part of the core platform: we actually have a very dedicated and, I must say, very awesome performance team that's looking at the performance of virtual machines on every Red Hat platform, so OpenStack, RHV, and now OpenShift. We're making sure that all of the things we use to get the best performance and lowest latency out of the platform apply to virtual machines that you migrate into OpenShift. For networking, there are a couple of changes: we now support two new bonding modes for virtual machines, so you can have a more elaborate or complex set of networking configurations to integrate with your virtual machines and your containers. And we've also extended the CNI certifications: we had our network partners, such as Tigera and Cisco, qualify to make sure that VMs, since they're first-class citizens inside of OpenShift, are treated as part of the certification test suite, to make sure that virtual machines and containers work well with the partners' network operators. The storage team has been super busy. Probably the most visible thing you'll see is the workflow when you deploy a common RHEL or Windows image using templates; we're making sure that workflow is much smoother and much more natural when you use virtual machines inside of OpenShift. And then there's a lot of work done in terms of just making certain storage operations faster, either imports or cloning; we're actually leveraging the CSI abstractions and efficiencies that we get from the storage providers there. One last thing is offline snapshots.
As you can imagine, data protection inside of Kubernetes is a fairly dynamic space. We're actually working very closely with the OpenShift Container Storage team to make sure that not just our product but our full platform works, and that all of the snapshotting you can do with any of the CSI providers that Red Hat partners with will work well with virtual machines in your environment. One last thing to mention: OpenShift Virtualization is a separate operator that ships just a couple of weeks after OpenShift itself. It is generally available, so make sure you look for us right after 4.6 lands. Now I'm going to turn it over to Jamie for service mesh.
Thanks, Peter. Just to introduce myself quickly, my name is Jamie Longmuir and I'm the new product manager for OpenShift Service Mesh, taking on the role from Brian Harrington, who's taking on another role at Red Hat. We have a big release coming up in the form of OpenShift Service Mesh 2.0. This release will be based on Istio 1.6 and includes some pretty big changes. The control plane has been completely rearchitected around a single daemon known as Istiod. There have been significant improvements in how certificates are distributed and rotated amongst the proxies in Istio, using Envoy's Secret Discovery Service. One of the big benefits of service mesh is the ability to automatically obtain service metrics. This functionality has been completely rearchitected as well, moving to Istio's new telemetry v2 architecture. Finally, WebAssembly extensions are the new way of extending the mesh's functionality. This replaces the Mixer component, which has been deprecated and will be removed in the future 2.1 release. For Service Mesh 2.0, we're introducing WebAssembly extensions as a tech preview feature. Next slide. The slide shows the consolidated Istio control plane that I mentioned, with the single Istiod binary that encapsulates the control plane components Pilot, Citadel, and Mixer. This change simplifies installation, upgrades, and management of the control plane. It also reduces resource consumption and improves the performance of the control plane. The Secret Discovery Service I mentioned is the new way of distributing certificates to the sidecar proxies. It's both more secure and more performant than the previous method that used Kubernetes secrets. It also enables us to integrate with third-party certificate managers such as Vault, which has been a common request. The new telemetry v2 architecture brings a substantial reduction in metrics collection latency and resource consumption, so it's another good change. Next slide.
On the user experience side, we've introduced a new version of the ServiceMeshControlPlane resource, which will help streamline the configuration of OpenShift Service Mesh. In Kiali, a significant addition is distributed tracing, which you can see here in the screen capture. This lets users visualize requests between services, and this view allows you to drill down into the more detailed tracing view that you would get with Jaeger. We also have a few new wizards that make it easier to configure service timeouts and retries and to run various fault injection scenarios. Finally, in Jaeger, we now support an external Elasticsearch cluster that you can use with Jaeger. And on OpenTelemetry, the OpenTelemetry Collector is a tech preview feature. This allows developers to instrument their code with vendor-agnostic APIs and avoid vendor lock-in when doing instrumentation. I'll now pass it on to William, who will discuss serverless.
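To give a feel for the new control plane resource, here is a minimal sketch of a v2 ServiceMeshControlPlane; the field layout follows the Maistra v2 API, but the specific names and values shown are illustrative assumptions:

```yaml
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic                 # hypothetical control plane name
  namespace: istio-system
spec:
  version: v2.0               # the new Istiod-based control plane
  tracing:
    type: Jaeger
  addons:
    kiali:
      enabled: true
    jaeger:
      install:
        storage:
          type: Memory        # swap for Elasticsearch in production
```

The point of the v2 resource is that installation, addons, and telemetry are configured in one consolidated spec instead of scattered component settings.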
Thank you. Yeah, so with serverless, we continue to integrate with the portfolio, right? Here on this short animation, you can see how one can import a project from Git and automatically also have a template for pipelines created, so that you have your CI/CD solution already integrated out of the box with ease. Again, this is just starting the pipeline using the console. Other than that, we have brought back the integration with OpenShift Service Mesh, which allows you to inject policies for authentication with JWT, or also configure some of your domains, so you have custom domains for your Knative services. Another important thing that we are enabling is, of course, the CLI commands for eventing. Really, tying all things together is one of the key aspects of what we're doing in serverless right now. Next slide, please. Now, this is one of the most exciting features that we have to announce here. It's currently a developer preview of our functions experience. This is essentially being built on top of everything we have built with Knative so far, but it's up-leveling the user experience to a point where you have a local developer experience that you can use to build and test your applications locally. It's based on buildpacks. Out of the box, we are enabling three runtimes: Quarkus, Node, and Go. And of course, once you are done with the development locally, as you build and deploy those applications, they are going to be deployed as serverless applications with Knative. Next slide, please; that's going to show a brief animation of that entire workflow. So again, you can very quickly get a project started and trigger the build. I'm going to show very quickly the source code, which is very simple, very intuitive; this is a Node app. On the left side, as we deploy the application, you see that we're pretty much reusing all the experience that we have built with OpenShift Serverless so far.
So you can, for example, simply wire a channel to that function now, and with that you're going to start receiving events, leveraging the same event sources that we have in OpenShift Serverless. We are very excited about this and looking forward to your feedback. Now I'm passing to the next speaker, that is Siamak.
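For readers who haven't used Knative on OpenShift, a deployed function or app ends up as an ordinary Knative Service that scales to zero when idle; here is a minimal sketch (the service name and image are hypothetical):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-fn                               # hypothetical service name
spec:
  template:
    spec:
      containers:
      - image: quay.io/example/hello-fn:latest # hypothetical image
        env:
        - name: TARGET
          value: "world"
```

Eventing resources such as channels and triggers can then be wired to this service, which is exactly what the demo shows in the console.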
Thank you, William. We'll be talking about CI/CD and GitOps on OpenShift 4.6. Our first release is OpenShift Pipelines 1.2, which is still tech preview; it will be released in the preview channel of the operator later this month. A couple of new things are available within this release. One is the addition of templates for Knative, like William displayed. These are modifiable by the customer, so customers can have their own default templates; when they onboard new projects and new applications, they can generate pipelines automatically for those applications. The templates have also shifted to workspaces instead of pipeline resources. There is also the improvement of being able to define a default workspace, so users don't have to select a workspace every time they want to execute a pipeline. The task library that we ship is expanded: helm, skopeo, and a task for being able to trigger a Jenkins job through a Tekton pipeline are added to the library, for use cases where customers are migrating jobs from Jenkins to Tekton and want to be able to continue the rest of their CI/CD flow in Jenkins. Support for disconnected clusters is added in this release as well. We'll also start to gather metrics in Prometheus, for use in cluster monitoring, from pipeline executions: the average execution time, the number of pipeline runs, and so on. Quickstarts are added in the Dev console to help familiarize users with the pipeline capabilities in the console, and new enhancements are added to the CLI as new Tekton releases come along. Next slide, please. The VS Code extension is also updated to match the Pipelines operator release; there is a new wizard for starting a pipeline directly from VS Code.
So it gives you a visual way to fill in parameters and start a pipeline. You can add triggers to a pipeline, and you have direct access to the documentation of Tekton while you're modifying the YAML or authoring a pipeline in VS Code. There's also a restart pipeline action so you can rerun a pipeline that you have executed before. Next slide, please. Another exciting addition to OpenShift, which is planned as a developer preview on OpenShift 4.6 with GA later in the year, is OpenShift GitOps. It's a new add-on alongside OpenShift Pipelines, OpenShift Builds, OpenShift Serverless, and Service Mesh. It aims to enable teams and customers to adopt GitOps practices for application delivery and cluster configuration. It builds on top of Argo CD: it gives customers support for Argo CD, an opinionated way to do GitOps, as well as an application manager CLI that helps customers bootstrap an entire project, generate CI/CD, and configure Argo CD. From nothing, they would get a number of repos with generated artifacts; then they can start coding and pushing applications through the GitOps flow that we are advocating for them. And this will be integrated into the console. This is available as part of the OpenShift SKU: a separate product, but available under the same entitlement. Next slide, please. This addition really brings our DevOps portfolio on OpenShift to another level. It's a really comprehensive offering for hybrid cloud: the combination of OpenShift Builds, for simpler automation from source code to building an image and deploying it, to more complex CI/CD flows for application delivery with Tekton and Jenkins in OpenShift Pipelines, and OpenShift GitOps really takes it all the way to production with a GitOps way of continuous delivery on OpenShift. Next slide, please. Talking more about the CodeReady portfolio of tools and dev tools on OpenShift, next slide, please: we'll have a tech preview release.
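The shift from pipeline resources to workspaces mentioned above looks roughly like this in a pipeline definition; the pipeline name, repo URL, and workspace name are illustrative assumptions:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy        # hypothetical pipeline name
spec:
  workspaces:
  - name: shared-data           # replaces the old pipeline resources
  tasks:
  - name: fetch-source
    taskRef:
      name: git-clone           # task from the shipped task catalog
    workspaces:
    - name: output
      workspace: shared-data    # the task writes the clone here
    params:
    - name: url
      value: https://github.com/example/app.git   # hypothetical repo
```

When a PipelineRun is started, the workspace is bound to a concrete volume (PVC, config map, etc.), which is also where the "default workspace" improvement comes in.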
So, the Service Binding Operator: this operator is a replacement for the binding capability that existed in the service brokers. It essentially allows you to make a link between a service provider, which could be a database, the AWS operator, or the Azure operator, and an application consuming its credentials. Those credentials automatically get injected into those applications based on labels, and this actually goes beyond just deployments: you can do it also in Knative, and you can even point at secrets and configs, so there's no need for an operator or Helm chart on the other side to generate these credentials for you. With that, I will hand it over to Serena.
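As a hedged sketch of what a binding might look like (the Service Binding Operator's API shape evolved across its tech preview releases, and every name here is hypothetical):

```yaml
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: app-to-db                   # hypothetical binding name
spec:
  application:                      # the workload that consumes credentials
    group: apps
    version: v1
    resource: deployments
    name: my-app
  services:                         # the operator-backed service providing them
  - group: postgresql.example.com   # hypothetical CRD group
    version: v1alpha1
    kind: Database
    name: my-db
```

The operator watches this resource and projects the service's connection details into the application, so neither side needs hand-written glue.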
Great, thanks, Siamak. I'll continue on with some more of our developer tools. On the next slide, we're talking about CodeReady Workspaces, which is scheduled to be released in early November. 2.5 includes devfile support as well as experimental support for using IntelliJ as an IDE client running locally and connecting to a remote workspace. This is a huge win, since most of the Java community is using IntelliJ; it's another example of us meeting the devs where they are. To go on to the next slide, we'll talk about OpenShift's developer-focused CLI, odo. odo 2.0 was recently released, and it's now aligned on devfile support, which originated from Eclipse Che. It's worth highlighting that this new model showcases rapid iterative development. The new deployment model is available for a number of languages and enables Quarkus, which is a big one. Devfiles also provide the ability to leverage sample starters, so scaffolding new projects is really easy, and people should definitely go try that out. odo's integration with Kubernetes provides a consistent development experience: you can write applications from scratch, iterate the development inner loop, and commit your code to Git, all within the same environment. And with this release, the odo debug command also graduates from tech preview. Going on to the next slide, we're talking about CodeReady Containers. This allows developers to run OpenShift on your laptop. The main thing to highlight here is the integration with the VS Code OpenShift Connector extension. With this extension, developers now have an easy guided flow to create and start an OpenShift single-node cluster on your laptop or workstation using Red Hat CodeReady Containers. CRC for OCP 4.6 is targeted to be released on October 22. Now I'll pass it over to Ali for the console discussions.
Hey, everybody, I'm over here. So as you've already seen, we've had a ton of great enhancements to the console. There's a couple of other ones we want to showcase for you. Right here, we've improved the over-the-air update experience. On the left-hand side, you can see we've added visuals to really show and give you the best recommended upgrade path; you'll be able to see how many releases you're behind, and you'll see all your release notes for each of the versions. Next to that, you see that we've broken out the progress checklist dashboard for when you're actually upgrading. The reason we did this is that in OpenShift, we not only upgrade your Kubernetes, we see the system holistically. So we wanted to show you when the cluster operators are getting upgraded versus the master nodes or the worker nodes. The last part I want to show here is that we've added alerts to notify you when there's a new patch release, minor release, or channel available. The cool thing about this is you can come and set up alert receivers to send notifications not only to your notification dashboard in the console, but to Slack, email, or PagerDuty. Next slide, please. I'm now going to hand it off to Tony Wu.
Thanks, Ali. So on the operator UI front, the goal is always to help users manage operators with ease. The first highlight in this release: a new annotation introduced in operators' CSV files allows an operator to specify a unique custom resource. After installing the operator using the console, the user will be prompted to create that CR, so users can easily make the operator-backed services fully functional. Second, in the middle, on the operand instance page, the console improved the UI by better grouping properties per the CRD's schema structure. The new popover on each property field also shows more schema information, so users can directly see it in the UI without the need to check the CRD manifest. Lastly, on the right, the console now supports showing users whether a Kubernetes resource is owned or managed by an operator. This way, users can easily go ahead and apply changes directly to the owners without those changes being reverted. With that, I'll hand it over to Serena to talk more about the dev console side.
Great, thanks, Tony. So let's talk about what we've done to improve the getting started experience in 4.6. To onboard developers, we've introduced a number of new features. In the past, we've heard there's a discoverability issue with the perspective switcher and being able to go back and forth from admin to dev. To help improve this, the first time a non-privileged, non-cluster-admin user logs into the OpenShift console, they'll be brought directly to the developer perspective by default. In addition to that, we brought back an improved guided tour, which is offered the first time any user enters the developer perspective. Users who opt into this tour are guided through specific areas of the UI to help with onboarding and discoverability. We've also added samples to the Add page, which you can see in the right-hand column in the dev perspective. This provides an easy and extremely efficient flow for users to quickly get a sample app created, so that they can kick the tires with an app running on OpenShift. Now I'm going to focus on the middle section, which is around quickstarts. Quickstarts are being introduced as the focal point of our onboarding process in the OpenShift console, both on the admin and the dev sides. These quickstarts help guide customers through user flows, focusing on educating users on how to best utilize the console. In 4.6, the console ships with seven quickstarts. Two of these are specifically for the admin, focused on installing the OpenShift Pipelines and OpenShift Serverless operators. We also have five additional quickstarts available for the developer, focusing on creating a serverless app, adding a pipeline to your app, creating a sample app, adding health checks, and monitoring your sample app. Keep your eyes open for what will be coming in 4.7 in this area, as we've got some great improvements lined up already, and we are really excited to see the value it brings to our customers.
On the next slide, we're going to focus on application topology; we've made some big changes there. We now have two modes in the application topology graph view. The connectivity mode allows devs to focus on the application composition, both on how it's managed (whether it was installed by Helm charts or operators, etc.) as well as how things are connected: whether there are service binding connectors, whether there are multiple revisions inside of a Knative service, etc. Alternatively, in the center column, you can see the consumption mode, which allows developers to focus solely on resource consumption: how many pods do I have up and running, what's the status of those pods, etc. There are really no connectors or groupings shown in that specific mode. We've also focused on providing parity between the topology graph and list views in support for additional resource types. So in 4.6, we add support for jobs and cron jobs, as well as some of the eventing resources like brokers and channels, as William spoke about earlier. As you know, scalability is a concern with a view like topology, so depending on your application size and number of components, the filter and find features are paramount. These features now persist when switching between the graph and the list view. The find feature increases the discoverability of components inside of the application, highlighting the components whose names match the search string; this helps with projects that have a large number of resources. The filter feature, which is also new, helps with scalability as well: it allows you to indicate which resource types to display in the view while the rest are hidden. Finally, on the admin side, the workload tab of the project details page has an increased feature set; it's now sharing the implementation with the list view of topology on the dev side.
So all the new features that we've added on the dev side in the list view are now available on the admin side. Post-4.6, we're also talking about enabling access to the topology graph view from here as well. On the next slide, let's move on to the application stages view. As Siamak mentioned earlier, this dev preview feature is just the start of some of the GitOps work that we'll be doing in the future. This view empowers developers with visibility of applications across all their environments. Once configured, developers are provided with a view of the applications they have access to. Inside of this application stages view, they're able to drill into an application and have visibility into that app across all of the environments. The available details for each environment may differ based on whether that specific user has access to the namespace the environment is residing in or the cluster the environment is in. Now, with that, I'll hand it off to Christian to cover observability.
Thank you, Serena. We already talked about a lot of critical, highly important features in the spotlight section, but there's more that we actually exposed in 4.6 logging. The first feature is tuning Fluentd. Basically, what that means is that we now expose specific fields inside our ClusterLogging CR that you can use to optimize the performance of Fluentd, specifically around how we deliver messages from our logging stack to a third-party system. This feature is really for advanced administrators that truly understand what those fields actually mean, and what impact on the stack setting specific knobs actually has. Usually, out of the box, we have default values for all of this that should be completely enough; this is really just to fine-tune, if necessary, the possible settings around things like memory usage and flushing and output behavior. Next slide, please. Something else we improved in 4.6 is the overall observability of logging, meaning observability of the different components we ship with our stack: Fluentd and Elasticsearch. For those two critical components, we added a few more observability tools, like dashboards, into our monitoring dashboard sections inside the administrator perspective, as well as improved alerts. And we overhauled the metrics that we expose from the logging stack itself. It is important to note that we removed some index-level metrics, since they introduced very high cardinality, which is obviously not good for the monitoring stack. We will continue to improve metrics in later releases, and we will bring back some of these index-level metrics as we figure out the best way to expose them without impacting the monitoring stack too much. That's it from my side, and I hand it over to Katherine. Thank you.
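The tuning knobs described above live under the forwarder section of the ClusterLogging CR; here is a minimal sketch (the specific values are examples only, not recommendations, and the exact set of fields may vary by release):

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  forwarder:
    fluentd:
      buffer:                          # advanced delivery tuning
        chunkLimitSize: 8m             # example value only
        flushMode: interval
        flushInterval: 5s
        retryType: exponential_backoff
```

Leaving this section out entirely keeps the defaults, which is the right choice for most clusters.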
Thanks, Christian. Next slide. So for OpenShift 4, there are two installation experiences, if you're not aware yet. We have our full stack automation, where the installer controls all areas of the installation, including the infrastructure provisioning, with an opinionated, best practice deployment of OpenShift. Then we have our pre-existing infrastructure deployments, where administrators are responsible for creating and managing their own infrastructure; that allows them greater customization and operational flexibility. For 4.6, the supported provider list largely stays the same as 4.5, with the exception of extending bare metal support to an installer-provisioned infrastructure workflow, the new government region support for AWS and Azure that I mentioned earlier, and support for deploying OpenShift on VMware vSphere 7.0, in addition to 6.5 and 6.7. I'll hand it over to Ramon to talk about OpenStack now. Next slide.
Thank you. So for OpenStack in 4.6, we have a number of new features, and this release is supported on Red Hat OpenStack Platform 13 and Red Hat OpenStack Platform 16.1. I would like to highlight the first one that I mentioned here, the OpenStack bare metal (Ironic) integration, because with this feature you can now deploy with IPI (and also with UPI, if you want) on bare metal nodes provided by OpenStack. You know that OpenStack, thanks to Ironic, has the capability of managing bare metal nodes, and you can deploy with the installer on OpenStack regardless of whether you deploy to VMs or bare metal nodes. This is super cool; you can even have a mixed environment between virtual machines and bare metal nodes. Beyond that, we have worked on a number of new features highlighted here. The installer now understands what OpenStack availability zones are, and you can specify availability zones to the installer so that you can deploy where you want. On Kuryr IPv6 support, we're still working; we haven't finished the integration with the installer, but this is something that's been taking a long time for us to develop. And next, floating IPs: if any of you is familiar with deploying OpenShift on OpenStack, you will have noticed that we always required floating IPs, and starting with 4.6, you don't need them anymore. There are some users that can't use floating IPs, maybe because the admin hasn't set them up in their external network, or for other reasons, or they are simply not available, and now we help them get OpenShift installed on OpenStack. And with that, I'll pass it over to Peter.
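Specifying availability zones, as described above, happens in the install config; a minimal sketch with hypothetical zone names:

```yaml
controlPlane:
  name: master
  platform:
    openstack:
      zones:
      - az-1            # hypothetical availability zone name
compute:
- name: worker
  platform:
    openstack:
      zones:
      - az-1            # workers spread across two hypothetical zones
      - az-2
```

The installer then places the machines it provisions into the listed zones instead of leaving placement entirely to the OpenStack scheduler.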
Thanks. So for OpenShift on Red Hat Virtualization, full stack automation is something that we introduced back in OpenShift 4.4, and it's very popular. One of the things that we wanted to add is dynamic storage provisioning, which was one of the bigger gaps that we had; that's now fully available. The operator and drivers are automatically installed when you deploy using the full stack automation, and that will allow you to allocate RHV-controlled storage domains to your OpenShift clusters. The other thing that we've added and extended is the ability to auto-scale by adding worker nodes: as the workload within the cluster becomes busier, you can automatically deploy new infrastructure components to take those workloads on new machines. The last thing we've added is disconnected or restricted installations, where you're not connected to the internet. That's really important for public sector and customers that just don't have direct connections from their internal infrastructure to the main internet. As we're coming into OpenShift 4.6, we've also been very busy with new versions of RHV as well, and I want you to pay attention here. Currently, OpenShift supports being deployed on both RHV 4.3 and RHV 4.4, which went generally available a few months ago. But in terms of timing and compatibility, what we're going to need to do is really focus on going with RHV 4.4 going forward. So customers that are currently running OpenShift 4.5 on a RHV 4.3 cluster will need to upgrade their RHV cluster first before upgrading to OpenShift 4.6; that's really to get into a supported configuration as far as testing goes.
One other thing to note is we did have a user-provisioned infrastructure workflow that was actually targeted for 4.6; that didn't quite make the release and is currently targeted for OpenShift 4.7. And now, Katherine, I think I'm turning it back over to you.
Yep, thanks, Peter. So I'm going to talk about the Cloud Credential Operator changes. For those of you who don't know, it is designed to satisfy credential requests from OpenShift components by granting them fine-grained credentials for a specific cloud provider, instead of leveraging the admin credential, which has elevated permissions. There's now a new field in the install config that defines how the credential requests are handled for components requiring cloud API access; this is applicable to AWS, Azure, and GCP. There are three modes. The first one is mint mode, which is the default today: it actually mints new fine-grained credentials that are a subset of the user-provided admin credential. We now have a passthrough mode, where credentials are used as-is: you're taking the user-provided credential and passing that to the components. And finally, we have a manual mode, where credential requests are manually created and supplied by the user. Cases where this would be useful would be where you have access restriction problems with the cloud identity and access management (IAM), so you have to provide the credentials yourself since you can't look them up with an API endpoint, or where the admin prefers not to store administrator-level credentials in the cluster namespace. If any of these are set in the install config, rather than left as default, the installer will also not attempt to check the provided credential for proper permissions. This is useful in certain situations where there may be additional policies on the user credentials that the cloud policy simulator can't properly test; we've seen that happen in certain cases with GCP. So this will allow you to get around that by skipping the policy simulator check. Next slide.
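The new install config field is `credentialsMode`; a minimal AWS sketch (the domain, cluster name, and region are hypothetical):

```yaml
apiVersion: v1
baseDomain: example.com         # hypothetical
metadata:
  name: my-cluster              # hypothetical
credentialsMode: Passthrough    # Mint (default), Passthrough, or Manual
platform:
  aws:
    region: us-east-1
```

With `Manual`, the credential request secrets are created by the administrator out of band before the install proceeds.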
So we have a new field, serviceEndpoints, in the install config, which allows users to specify a list of custom endpoints to override the default service endpoints on AWS. Custom endpoints can be specified for any of the services that we use: S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS. It's worth mentioning that this is really only needed for very specific cases where you want to override an AWS endpoint; an example would be the EC2 endpoint, for instance. It's not needed for deploying OpenShift to any of the known regions, since all those endpoints are actually found in the AWS SDK, which OpenShift already utilizes. So you shouldn't need to set this to install into a known region; it's only for endpoints the SDK doesn't already have that you need to specify them. Next slide. The OpenShift installer now supports user-defined routing as the egress strategy on Azure. This allows users to choose their own outbound routing for internet access, leveraging a pre-existing network, rather than defaulting to the OpenShift-recommended way of using public IPs and public load balancers. This can be important for customers who don't want to expose any external endpoints on the cluster, so for restricted environments. For user-defined routing to work properly, users have to have a pre-existing network in place where the outbound routing has already been configured prior to deploying OpenShift. The OpenShift installer itself is not responsible for configuring this as part of the deployment workflow; you must have this set up on your own for it to work and to use a different egress strategy than what we default to. Next slide. In addition to these, we now support specifying disk type and size for both the control plane and compute nodes on Azure and GCP. This change introduces two new fields in the install config: one for the disk size, specified in gigabytes, and one for the disk type. Next slide, and I'll hand it over to Marc.
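The three install config additions above can be sketched as separate fragments (the endpoint URL and disk values are illustrative assumptions):

```yaml
# AWS: override specific service endpoints
platform:
  aws:
    serviceEndpoints:
    - name: ec2
      url: https://ec2.example.internal   # hypothetical endpoint
---
# Azure: use pre-configured outbound routing instead of public load balancers
platform:
  azure:
    outboundType: UserDefinedRouting
---
# Azure: set control plane disk size and type
controlPlane:
  platform:
    azure:
      osDisk:
        diskSizeGB: 512           # example size in GB
        diskType: Premium_LRS     # example disk type
```

Each fragment would be merged into a full install-config.yaml rather than used standalone.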
Great, thank you, Katherine. So one of those seemingly small, on the surface, improvements we made in OpenShift 4.6 was to optimize the recovery time of a master or supervisor node that experienced a hard shutdown, whether purposeful or not. There will always be some delay in the recovery of a supervisor node in Kubernetes, but prior to this release, it could be 15 minutes or more before the endpoints were reconciled and the cluster was able to detect and adapt to the loss of that supervisor node. So, simply put, in OpenShift 4.6 we've dramatically reduced the recovery time of the control plane for these scenarios from 15 minutes or more down to about 90 seconds. Next slide, please.
Alright, I'll cover pod topology spread constraints. This allows pods to be spread across a cluster among various failure domains, such as regions, zones, or nodes, by specifying topology spread constraints in your pod spec. To illustrate this with the example you can see here, we have a four-node cluster where pods with a given label are located on nodes one, two, and three respectively. When there is an incoming pod with the same matching label and you want it to be spread evenly across the zones, you can specify the spec that you see here on the right. The pod topology spread constraints feature is useful as a tool in your toolset to achieve application high availability, as well as efficient resource utilization.
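The spec described above can be sketched like this; the pod name, label, and image are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  topologySpreadConstraints:
  - maxSkew: 1                                  # allow at most 1 pod of imbalance
    topologyKey: topology.kubernetes.io/zone    # spread across zones
    whenUnsatisfiable: DoNotSchedule            # hard constraint; ScheduleAnyway softens it
    labelSelector:
      matchLabels:
        app: my-app                             # pods counted toward the spread
  containers:
  - name: my-app
    image: quay.io/example/my-app:latest        # hypothetical image
```

With this constraint, the scheduler places the incoming pod in the zone with the fewest matching pods, keeping the zone counts within the allowed skew.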
Thanks, Tushar. Hi, everyone. For cluster infrastructure, we've covered some of these areas already, so let me take our time together to go into just one feature. In this release, we're adding support for spot instances (or preemptible instances, as Google calls them) for both GCP and Azure. For those of you not familiar with spot instances, they're a way to get access to cheaper resources from the cloud providers, though those resources can be terminated at short notice, so user beware. For GCP and Azure, you configure this just like you do for AWS, via the MachineSet YAML file, but there are a few differences. Naming differs between the providers, and the Google option itself is actually a Boolean: you just set true or false. Also, make sure you watch out for the shorter Google and Azure termination notices: they're 30 seconds each, compared with two minutes for AWS. And with that, I'm going to hand you over to Mark, who is going to take us through Red Hat Enterprise Linux CoreOS.
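The provider-specific naming mentioned above shows up in the MachineSet's providerSpec; hedged fragments showing only the relevant fields:

```yaml
# Azure MachineSet: an empty spotVMOptions block requests spot VMs
providerSpec:
  value:
    spotVMOptions: {}
---
# GCP MachineSet: the option is a plain boolean
providerSpec:
  value:
    preemptible: true
```

In both cases the rest of the MachineSet (replicas, instance type, and so on) stays exactly as it would for on-demand instances.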
Hey, thanks, Duncan. Next slide. So as mentioned, 4.6 will be an extended update release with a stable, supported feature set for 18 months, and that extends down to the RHEL CoreOS layer. To that end, 4.6 will be based on RHEL 8.2 content for the entire release. Please note that 4.4 and 4.5 will also stay on RHEL 8.2 content until the end of their support lives. RHEL CoreOS 4.6 is the first version that will support the use of the v3 Ignition spec. Among other things, it unlocks the ability to place all of /var on a separate disk. This allows users to place the read-only root filesystem on one device and the read-write /var partition on a different physical or virtual device that may have different performance or reliability characteristics. The extension system has also landed. This allows us to ship certain carefully selected packages for CoreOS outside of the base image. In 4.6 you now have the option to enable the usbguard utility, to manage and disable USB ports in environments with higher physical security requirements. Next slide. For those of you installing by bare metal UPI, we have some really nice improvements around the ISO and PXE images and the coreos-installer itself. A few of the features I want to call out here are: one, the new ability of the installer to optionally preserve existing data partitions through a redeployment of the node (the default behavior stays the same); and two, the ability to embed the Ignition configuration inside a custom ISO, for environments where having an Ignition endpoint is not allowed. Finally, the ISO and PXE images are now a RHEL CoreOS live environment, which you can boot into interactively for installation or troubleshooting. That also allows you to discover hardware details like interface names. One particularly useful reason to do this is on the next slide.
The improved networking experience for UPI installs. In the live environment, you can run nmtui or nmcli to configure bonds and static networking settings, and then persist that configuration into the installed node. And for VMware, we now have the ability to do static networking with OVA files, by passing ip= syntax through the guestinfo fields. And with that, I'll pass it back over to Marc for more networking.
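Going back to the Ignition v3 feature Mark mentioned, placing all of /var on a separate disk, here is a rough sketch of what such a config fragment might look like; the device name /dev/sdb and the partition sizing are assumptions for illustration, not exact product guidance.

```python
import json

# Rough sketch of an Ignition v3 config fragment that carves a partition on a
# second disk and mounts it at /var. Device name /dev/sdb is an assumption.
ignition_cfg = {
    "ignition": {"version": "3.1.0"},
    "storage": {
        "disks": [
            {
                "device": "/dev/sdb",
                # sizeMiB 0 conventionally means "use the rest of the disk"
                "partitions": [{"label": "var", "sizeMiB": 0}],
            }
        ],
        "filesystems": [
            {
                "device": "/dev/disk/by-partlabel/var",
                "format": "xfs",
                "path": "/var",
            }
        ],
    },
}
serialized = json.dumps(ignition_cfg)
```

This separation is what lets the read-only root stay on one device while /var lives on hardware with different performance or reliability characteristics.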
Great, thank you, Mark. So in OpenShift 4.6 we've made many networking improvements, a small number of which are highlighted over the next few slides. First up, we've further enhanced our SR-IOV support. For our customers with high-throughput, low-latency internodal cluster traffic requirements, as AI/ML applications tend to have, who need the flexibility of something other than RDMA over Converged Ethernet, or RoCE, we now provide native InfiniBand support. Second, we support a new plugin named Whereabouts. Whereabouts' specific purpose is to provide a mechanism for assigning IP addresses via a standard Kubernetes API for Multus-fueled pod secondary interfaces. IP addresses can be assigned from a predefined range, any overlaps are handled, and no DHCP server is required for those secondary interfaces. Next slide please. Another networking enhancement I want to mention that we made in OpenShift 4.6 is to move OVS from a container that ran inside the cluster to the host. The reason we did this is to eliminate any network flow disruption during cluster upgrades or restarts, since OVS remains active during those events. Also, we have some large customers that quickly consume the number of available node ports in the cluster, so we made an enhancement to provide a mechanism to easily expand the service node port range for Kubernetes services of type NodePort. Of course, you'll still need to expand the number of open ports at the infrastructure layer when you're doing this, but the ability to modify that range exists. And lastly, a similar improvement was made for egress firewall: simply put, the number of rules per policy was increased from 50 to 1000 to accommodate our high-end customers. Next slide, please. Okay, HAProxy configuration enhancements.
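To illustrate the Whereabouts plugin just described, here is a hedged sketch of a NetworkAttachmentDefinition whose CNI config uses Whereabouts for IPAM; the interface name, address range, and exclusions are purely illustrative.

```python
import json

# Hedged sketch: a NetworkAttachmentDefinition with a macvlan secondary
# interface that gets its IPs from the whereabouts IPAM plugin.
# The range and master interface below are hypothetical examples.
nad = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "macvlan-whereabouts"},
    "spec": {
        "config": json.dumps({
            "cniVersion": "0.3.1",
            "type": "macvlan",
            "master": "eth0",
            "ipam": {
                "type": "whereabouts",
                # addresses handed out from this range, no DHCP server needed
                "range": "192.168.2.0/24",
                # keep the gateway address out of the pool
                "exclude": ["192.168.2.1/32"],
            },
        })
    },
}
ipam = json.loads(nad["spec"]["config"])["ipam"]
```

Because the assignments go through a Kubernetes API, two pods on different nodes can't be handed overlapping addresses from the same range.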
So customer requirements and use cases, as always, were the driving factor for several improvements we made to HAProxy to allow for specific customizations. Without going into a lot of detail on each individual one (you'll be able to get the specifics from our 4.6 product documentation), let me briefly address each of these. The first one, HTTP forwarded header policy: this helps customers that have the requirement that our Ingress controller pass along, unmodified, a set X-Forwarded-For header for an application's route. The second one, Ingress TLS termination policy: we now support two Ingress policy options, reencrypt and passthrough, for encrypted traffic types. The third one, HTTP cookie capture: we have customers that use specific named cookies in their HTTP traffic for reasons of business analytics and auditing, so we now support logging those cookies to satisfy those requirements. The next one, HTTP header capture: we also have customers that, for very similar reasons to the cookie enhancement I just mentioned, want to log specific HTTP request and response headers for routes. Next one, HTTP unique ID header: this enhancement provides the ability to configure the Ingress controller to inject an HTTP header with a uniquely defined request ID into HTTP requests, which our customers can then use to trace cluster traffic; it gives them an improved understanding and observability of their internal cluster traffic. And the last one on this slide, HTTP path rewriting: this simply provides support for path rewriting on incoming traffic, to direct that traffic as required. Next slide, please. Configuring Ingress controllers for AWS NLB: in OpenShift 4.6, we now support the ability to modify the load balancer type used when deploying to AWS, from a Classic Load Balancer to AWS's Network Load Balancer, or NLB.
The NLB is simply another AWS-offered tool that can be used in deployments to distribute traffic across multiple cloud resources, to reduce latency and provide higher application throughput. Next slide, please.
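Several of the items just covered, the forwarded-header policy and the NLB selection among them, are settings on the IngressController resource; here is a hedged sketch combining two of them, with field names approximated from the talk rather than copied from the product docs.

```python
# Hedged sketch of an IngressController spec showing two of the features
# described: the forwarded-header policy and AWS NLB selection.
# Field names are approximations; consult the 4.6 docs for exact syntax.
ingress_controller = {
    "apiVersion": "operator.openshift.io/v1",
    "kind": "IngressController",
    "metadata": {"name": "default", "namespace": "openshift-ingress-operator"},
    "spec": {
        "httpHeaders": {
            # pass along an existing X-Forwarded-For header unmodified
            "forwardedHeaderPolicy": "IfNone",
        },
        "endpointPublishingStrategy": {
            "type": "LoadBalancerService",
            "loadBalancer": {
                "scope": "External",
                # switch from the default Classic ELB to an NLB on AWS
                "providerParameters": {"type": "AWS", "aws": {"type": "NLB"}},
            },
        },
    },
}
```

The cookie-capture, header-capture, unique-ID, and path-rewrite options follow the same pattern of per-IngressController (or per-route annotation) configuration.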
Thanks, Mark. And so we come to storage. Next slide, please. With 4.6, we continue to push to be ready for the eventual changeover to a world of CSI plugins everywhere instead of our in-tree drivers. But it's worth noting up front that nothing is changing as far as our supported storage in 4.6: we still support all the in-tree storage drivers you've come to know and love. A couple of things are worth mentioning on the new front. We've done a lot of work on plumbing, so it might not be too visible, but things like the CSI operator library are really going to help us quickly develop the CSI drivers we need. And second, while CSI snapshots will remain in Tech Preview, we're enabling this feature to be fully supported when it's used in conjunction with CNV and OCS environments; this gives you what's well known as crash-consistent snapshots. Next slide, please. The OCS team, the OpenShift Container Storage team, has also been extremely busy with their release. You can see all the main features listed on the slide, but again, let me highlight a few for you. The first is encryption support. This is a frequently asked for, highly desirable feature which will land in the 4.6 release. But that's not all: this is not just ordinary encryption support, it's encryption support for the entire cluster. Secondly, you can see on the right-hand side of the slide our much-extended platform support available in OCS. And even though it's hiding at the bottom of the feature list, I should take time to mention that OCS will be following in OCP's footsteps by making itself available on IBM Z and Power systems. And with that, I think we're ready for some stories about telco, so I'm going to hand over to Robert.
Thanks, Duncan. I'm pleased to announce that we are providing support for real-time kernels and low-latency workloads for RAN use cases. Real-time kernels allow low latency, consistent response times, and workload determinism. This is handled by something called the Performance Addon Operator, which will allow you to define a performance profile. That profile will install the real-time kernel and tune your systems such that cores are isolated and dedicated for workloads, so the operating system will not interrupt them, and it will also do NUMA alignment for your devices and memory, and all the other internals that you need for a high-performance workload. Next slide please. I'm also pleased to announce that we will be making our cloud-native network function tests available. This is a container image that ensures your platform is ready to run your container network functions. It will check your Precision Time Protocol, your SR-IOV, your SCTP, your DPDK, and the Performance Addon Operator, to make sure everything is as it should be so that you can run your CNF. And with that said, I'll pass it over to Kirsten Newcomer for security and compliance.
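As a rough illustration of the performance profile Robert describes, here is a sketch with illustrative CPU sets; the API group, field names, and core ranges are approximations for this talk, not exact product syntax.

```python
# Hedged sketch of a PerformanceProfile consumed by the Performance Addon
# Operator. CPU ranges and the profile name are hypothetical examples.
performance_profile = {
    "apiVersion": "performance.openshift.io/v1",  # approximated
    "kind": "PerformanceProfile",
    "metadata": {"name": "low-latency"},
    "spec": {
        "cpu": {
            "reserved": "0-1",   # housekeeping cores left to the OS
            "isolated": "2-15",  # cores dedicated to workloads, not interrupted
        },
        # switch the node onto the real-time kernel
        "realTimeKernel": {"enabled": True},
        # keep CPU, memory, and devices on the same NUMA node
        "numa": {"topologyPolicy": "single-numa-node"},
    },
}
```

One profile captures all three knobs Robert lists: the RT kernel, core isolation, and NUMA alignment.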
Hi folks, next slide, please. So, continuing our investment to automate security and compliance across the OpenShift cluster, including RHEL CoreOS: in addition to the Compliance Operator, 4.6 will include the OpenShift File Integrity Operator. This leverages AIDE, the Advanced Intrusion Detection Environment, and will help our customers meet compliance and security requirements by doing file integrity checking on the host. You'll be able to specify the list of files that you want to ensure there are no changes to; that will create a database with a hash. When you run the File Integrity Operator after that's been created, it will scan every node in the cluster and check to see whether those files have changed. Note the database is per node, as there may be some differences on each node; some might have GPUs, etc. Admins can examine scan results for status and share that information with auditors as needed. We will be looking to also integrate output from this solution into dashboards and alerts in the future. And just a quick note back on the Compliance Operator: many questions were asked about which particular frameworks we'll be addressing. Again, we'll be focusing on CIS for delivery in late October or early November. We will have, with 4.6 GA, some RHEL CoreOS controls that are part of the FISMA Moderate profile from NIST 800-53. We'll be continuing after that to invest in FISMA Moderate, and then we'll be focusing on PCI DSS. So, looking forward to continued feedback. Next slide. Also, we will be integrating both sets of capabilities with ACM, so that you can leverage these across multiple clusters, and ACM Policy and Governance will be able to take advantage of the capabilities. The first focus, of course, is on the Compliance Operator. Next slide, please. And over to Anandnatraj.
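A hedged sketch of the kind of FileIntegrity custom resource that drives these AIDE scans; the API version, names, and fields here are assumptions for illustration rather than exact product syntax.

```python
# Hypothetical sketch of a FileIntegrity custom resource. The API version,
# namespace, and ConfigMap reference are illustrative assumptions.
file_integrity = {
    "apiVersion": "fileintegrity.openshift.io/v1alpha1",  # approximated
    "kind": "FileIntegrity",
    "metadata": {
        "name": "example-fileintegrity",
        "namespace": "openshift-file-integrity",
    },
    "spec": {
        # run the integrity scan on every worker node
        "nodeSelector": {"node-role.kubernetes.io/worker": ""},
        # optional ConfigMap holding a custom AIDE configuration,
        # i.e. the list of files whose hashes should be tracked
        "config": {"name": "my-aide-conf", "key": "aide.conf"},
    },
}
```

Because the AIDE database is built per node, a GPU node and a plain worker can legitimately differ without triggering a failure on each other's baselines.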
Thanks, Kirsten. So over the next couple of slides, I want to talk about the latest and greatest security enhancements in OpenShift 4.6, especially for security-minded customers, starting with customizing the audit config. With 4.6, you can now control the amount of information that's logged in the node audit logs by choosing the right audit log policy profile. These profiles let you define how to log requests that come to the OpenShift API server, the Kubernetes API server, and the OAuth API server. Prior to 4.6, OCP used only the default audit log profile; with 4.6 you now have two additional profiles, WriteRequestBodies and AllRequestBodies. WriteRequestBodies, in addition to the default logging, which logs only the metadata and not the read and write request bodies, lets you log request bodies for every write request to the API server: create, update, and patch. The next profile, AllRequestBodies, in addition to logging metadata for all requests, lets you log request bodies for every read and write request that gets to the API server, including get, list, create, update, and patch. Note that, beyond the default, the other two profiles, WriteRequestBodies and AllRequestBodies, will have some additional resource overhead in terms of CPU and memory, but our performance studies have shown us it is not too much. Essentially, you can modify the APIServer object and just select whatever profile you want. And mind you, we will not log anything beyond the metadata level for security-sensitive resources such as secrets, routes, and OAuth clients. Next slide, please. The next one is setting a token inactivity timeout for the OAuth server, essentially, if you want a token to time out after a specific period of inactivity.
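Selecting the profile is a one-field change on the cluster APIServer object, as in this minimal sketch; in practice you'd apply it with `oc edit` or `oc patch`.

```python
# Minimal sketch of the cluster APIServer object with an audit profile set.
# The three profile names mirror the ones described in the talk.
apiserver = {
    "apiVersion": "config.openshift.io/v1",
    "kind": "APIServer",
    "metadata": {"name": "cluster"},
    "spec": {
        # one of: Default, WriteRequestBodies, AllRequestBodies
        "audit": {"profile": "WriteRequestBodies"},
    },
}
valid_profiles = {"Default", "WriteRequestBodies", "AllRequestBodies"}
```

Whatever profile is chosen, security-sensitive resources such as secrets stay at metadata-level logging only.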
For instance, let's say you log into the console, you perform some activities, and then you don't take any actions within a particular interval of time; when you next try to do something, it should log you out. That's essentially what this is: you modify the OAuth object by adding the value spec.tokenConfig.accessTokenInactivityTimeout, set to whatever time you want. Then, like I said, you go to the console, you perform a certain set of actions, you wait for 400-plus seconds, you go back and try to do some actions, and it should give you an error saying you must be logged in to the server to perform these actions and you're unauthorized. Next slide please. The next one is securing OAuth resource storage. OAuthAccessToken and OAuthAuthorizeToken use the object name for security-sensitive token information, so encrypting the value only in etcd means that the token itself is stored as plain text in the etcd database and in unencrypted backups; that is, it's unprotected. This enhancement is about migrating to a different storage format where the token object name is insensitive and therefore can be stored in plain text without any risk. The key thing to note: if you're upgrading from prior to 4.6, say migrating from 4.5, you will still have some of those old tokens lying around, and if the admin has changed the expiration of those tokens to something greater, that is still something you need to be aware of; the sensitive data could be exposed at that point. Essentially, you'll have to wait until all the old tokens from 4.5 have either expired or have been deleted by the administrator. Next slide, please. Okay, so the Windows community operator. This is about bringing Windows worker node support to OpenShift; we announced the preview sometime in April during the Red Hat Virtual Summit.
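The inactivity timeout just walked through is a single field on the cluster OAuth object; here is a sketch using the 400-second figure from the example, with the field name taken from the talk.

```python
# Sketch of the cluster OAuth object with a token inactivity timeout.
# 400s mirrors the walkthrough above; any duration could be used.
oauth = {
    "apiVersion": "config.openshift.io/v1",
    "kind": "OAuth",
    "metadata": {"name": "cluster"},
    "spec": {
        "tokenConfig": {
            # token is invalidated after this much idle time
            "accessTokenInactivityTimeout": "400s",
        },
    },
}
```

After the timeout elapses without activity, the next request fails with an unauthorized error and the user must log in again.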
And we are now planning to GA this sometime at the end of this year, hopefully in the December timeframe. But before that, we want our users, our community, and our field to get a flavor of what the Windows operator is going to look like. In the next couple of days, maybe a week or so, we will release a community version of the Windows operator, so you can start playing around with it prior to GA. This is essentially an entry point for those OpenShift customers that want to run Windows workloads on OpenShift clusters. If you want to bring in a Windows worker node, run Windows containers, and run Windows workloads like .NET Framework, IIS web servers, SQL Server, and so on, you should be able to do that using this new feature. The intent of this feature is to allow cluster administrators to add Windows compute nodes as a day-2 operation. The prerequisite for this is going to be a 4.6 cluster configured with hybrid OVN. So if you already have a cluster running with a different networking setup, like OpenShift SDN or something else, you will have to build a new cluster on OpenShift 4.6 with hybrid OVN enabled. In terms of the environments we're going to support for GA, it's going to be cloud first: we're going to support AWS and Azure. We're trying our level best to see if vSphere could be supported. The issue with vSphere is there's a lot of upstream dependency, so we're working with them on things like the storage components, and working with Microsoft on the logging and monitoring components. There's a little bit of upstream dependency here as far as vSphere is concerned, but we will see what we can do to get vSphere supported, hopefully either by GA or a little after. And last but not least, the Red Hat certified operator, as I mentioned, will be available in December; give it a couple of weeks into December.
So I would say by mid-December. The community operator should be out by, I would say, mid-October, and you can try it from the cluster's OperatorHub. In terms of the release cycle, for the Red Hat certified operator we will try to keep cadence with OpenShift releases: 4.8, 4.9, and so on and so forth. But the community operator will move at a faster pace, maybe every couple of sprints, which is every three weeks each, so either every couple of sprints or every couple of months, definitely faster than the Red Hat certified operator. This gives us a chance to release updates to the operator faster and get valuable feedback from the users as to what's working and what's not. Next slide please. So that's just a differentiation between what the community operator and the Red Hat operator for Windows have to offer. The community operator will be offered in-cluster through OperatorHub, the Red Hat operator through the Marketplace. Available date for the community operator is mid-October; for the Red Hat operator, mid-December. Platforms supported: AWS and Azure for community; for Red Hat, it's going to be AWS, Azure, and possibly vSphere. And like I said, for release cycle, the Red Hat operator will follow every upstream OCP release, and the community operator will follow a faster cadence, every couple of sprints, maybe a month or two. And that's going to be the workflow of what the Windows Machine Config Operator does. This is really the meat of this feature: basically, we will pick up whatever Microsoft delivers upstream in terms of the kubelet, the kube-proxy, the CNI, the hybrid overlay, and whatnot, nicely package everything up in an operator, and the operator will automate all the steps that are needed to get the Windows node ready.
That means transferring the binaries, configuring the kubelet, setting up the networking plumbing like the hybrid overlay CNI and kube-proxy, and whatnot, essentially preparing the Windows node so it can be bootstrapped into the cluster and be a happy citizen of the OpenShift cluster. It can then communicate with the other Linux nodes, communicate with other Windows nodes, get ingress traffic from the outside, make egress calls outside the cluster, and so on and so forth. This is really the magic of the feature: the Windows operator, which packages and automates all the necessary plumbing work that's needed for the Windows worker node to be part of a cluster.
Thanks. And last, but by no means least, the multi-architecture side. We continue to focus on IBM Power and Z systems, and there have been two main pushes in the group. The first, and great news, is that we're aligning with the x86 releases. Before, you had to wait a little bit to pick us up in that stream, but that's no longer the case from the 4.6 release. Secondly, storage. Previously, we only had support for NFS as a storage option, and while that's okay, NFS isn't appropriate for all situations, so we're now expanding support to cover all the key storage types you would need. And the addition of non-NFS storage gives us the additional bonus that it brings logging into the picture: Elasticsearch isn't supported on NFS file systems, but now that we have other options, you can run it all in a supported configuration. We also fixed an OpenJDK version issue while we were at it. And with that, I'm going to hand you back to the host with the most and give it back to Mike for a quick wrap-up.
Thanks, Duncan. Wow, that was a lot of information. And don't worry, because we have strategically placed men and women all over the world that know OpenShift and know you, and we would love the opportunity to talk to you about how you can accelerate whatever project you're working on with OpenShift. Now, I will say that working in an open source community on CNCF projects is what we like to do best, because we get to move thoughts and ideas forward together in the community. There's no room in cloud for proprietary software. 2020 has been hard, and working on projects like OpenShift helps the people here at Red Hat get through those times, because you're there.