
Overview

Cartridges in OpenShift v2 were the focal point for building applications. Each cartridge provided the required libraries, source code, build mechanisms, connection logic, and routing logic along with a preconfigured environment to run the components of your applications.

However, with cartridges there was no clear distinction between the developer content and the cartridge content. For example, one limitation was that you did not have ownership of the home directory on each gear of your application. Another disadvantage was that cartridges were a poor distribution mechanism for large binaries: while you could pull external dependencies into a cartridge, doing so sacrificed the benefits of encapsulation.

Dependencies

Cartridge dependencies were defined with Configure-Order or Requires in the cartridge manifest. OpenShift v3 uses a declarative model in which pods bring themselves in line with a predefined state. Explicit dependencies are enforced at runtime rather than expressed only as install-time ordering.

For example, you might require that another service be available before your component starts. Such a dependency check applies at all times, not only when you first create the two components. Expressing the dependency as a runtime check therefore enables the system to stay healthy over time.
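
One way to express such a runtime check in v3 is a readiness probe, which is re-evaluated continuously rather than only at creation time; a pod is added to (and kept in) service endpoints only while the probe succeeds. The following is a minimal sketch in which the pod name, image, and the /healthz endpoint (assumed to report success only once the services it depends on are reachable) are all hypothetical:

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-pod                # hypothetical name
    spec:
      containers:
      - name: web
        image: myapp:latest          # hypothetical image
        ports:
        - containerPort: 8080
        readinessProbe:              # runtime check, re-evaluated for the life of the pod
          httpGet:
            path: /healthz           # assumed endpoint that verifies the dependency
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10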

Collocation

Whereas cartridges in OpenShift v2 could be collocated within gears, images in OpenShift v3 are mapped 1:1 with containers. Containers use pods as their collocation mechanism.
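
For example, two components that might have shared a gear in v2 can be declared as two containers in one pod, which guarantees they are scheduled onto the same node and share a network identity. A minimal sketch with hypothetical names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: collocated-pod           # hypothetical name
    spec:
      containers:                    # each image maps 1:1 to a container...
      - name: web
        image: myapp-web:latest      # hypothetical images
      - name: log-agent
        image: myapp-logger:latest   # ...and the pod collocates the containers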

Source Code

In OpenShift v2, applications were required to have at least one web framework with one Git repository. With OpenShift v3, you can choose which images are built from source, and that source can be located outside of OpenShift itself. Because the source is disconnected from the images, the choice of image and the choice of source are distinct operations, with source being optional.
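
For example, a v3 BuildConfig pairs a builder image with a source repository as two independent choices. In this sketch, the name, the repository URL, and the builder image are hypothetical placeholders:

    apiVersion: v1
    kind: BuildConfig
    metadata:
      name: myapp                                     # hypothetical name
    spec:
      source:
        type: Git
        git:
          uri: https://github.com/example/myapp.git   # source hosted outside OpenShift
      strategy:
        type: Source
        sourceStrategy:
          from:
            kind: ImageStreamTag
            name: ruby:latest                         # builder image chosen independently of the source
      output:
        to:
          kind: ImageStreamTag
          name: myapp:latest                          # where the built image is published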

Build

In OpenShift v2, builds happened in place as part of your application gears. This meant downtime for non-scaled applications due to resource constraints. In v3, builds happen in separate containers.

Another change is how build results are transmitted between containers. In v2, build results were rsynced between gears. In v3, build results are first committed as an immutable image and published to an internal registry. That image is then available to launch on any of the nodes in the cluster, or to roll back to at a future date.
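
For instance, the published image can drive deployments anywhere in the cluster. Continuing the hypothetical names above, a DeploymentConfig with an image change trigger redeploys whenever a new build is pushed to myapp:latest, and a command such as oc rollback dc/myapp can return to an earlier image. A minimal sketch:

    apiVersion: v1
    kind: DeploymentConfig
    metadata:
      name: myapp                    # hypothetical name
    spec:
      replicas: 1
      selector:
        app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: myapp:latest      # resolved from the internal registry
      triggers:
      - type: ImageChange            # redeploy when a new immutable image is published
        imageChangeParams:
          automatic: true
          containerNames:
          - myapp
          from:
            kind: ImageStreamTag
            name: myapp:latest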

All of this means that builds are no longer part of the cartridges and are provided as distinct choices.

Routing

Routing is another area that has undergone a lot of formalization in OpenShift v3. In v2, you had to choose up front whether your application was scalable, and make a secondary choice as to whether the routing layer for your application was enabled for high availability. In OpenShift v3, routes are first-class objects that are HA-capable simply by scaling your application component up to two or more replicas. There is never a need to recreate your application or change its DNS entry.
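
As an illustration, a route is just another object pointing at a service; the names below are hypothetical, and when no host is specified the router generates one. Scaling the backing component, for example with oc scale dc/frontend --replicas=2, is all that is needed for availability, with no change to the route or to DNS:

    apiVersion: v1
    kind: Route
    metadata:
      name: frontend           # hypothetical name
    spec:
      to:
        kind: Service
        name: frontend         # hypothetical service backing the route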

Routes have also been moved up a level and are disconnected from the images themselves. Previously, cartridges defined a default set of routes, and you could add additional aliases to your applications. With v3, you can use templates to set up 0-N routes for any image. These routes let you modify the scheme, host, and paths exposed as desired, and there is no distinction between system routes and user aliases.
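
For example, a second route to the same hypothetical service can expose only one path over HTTPS; the hostname, path, and names below are placeholders:

    apiVersion: v1
    kind: Route
    metadata:
      name: frontend-api       # hypothetical second route to the same service
    spec:
      host: www.example.com    # assumed hostname
      path: /api               # only this path is exposed through this route
      tls:
        termination: edge      # scheme: HTTPS, with TLS terminated at the router
      to:
        kind: Service
        name: frontend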