Introduction

Platform-specific cloud providers in Kubernetes are being migrated to a more agnostic approach. An important part of any provider is the persistent container storage that the cloud supplies. The Kubernetes Container Storage Interface (CSI) provides a standard way to expose arbitrary block and file storage systems to containerized workloads in Kubernetes.

While the framework exists in upstream Kubernetes, each provider is in charge of its own driver. This means timely updates can ship with the platform itself rather than with Kubernetes. This article explains the process of installing and configuring CSI for vSphere and discusses some of the benefits of migrating to CSI from the in-tree vsphere-volume cloud provider.

This installation can run in tandem with an existing vsphere-volume storage class. If the new storage class is preferred, it can be made the default, replacing the vSphere cloud provider.

Platform Requirements

The required platform for vSphere CSI is at least vSphere 6.7 U3. This particular update includes vSphere's Cloud Native Storage (CNS), which provides visibility into container volumes directly from the vCenter console.
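The vCenter version can be confirmed quickly with govc (assuming the GOVC_URL and credential environment variables are already exported):

# govc about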

Additionally, the cluster VMs need the disk.enableUUID parameter enabled and VM hardware version 15 or higher.

# govc vm.change -vm '/datacenter/vm/ocp/control-plane-0' -e="disk.enableUUID=1"

# govc vm.upgrade -version=15 -vm '/datacenter/vm/ocp/control-plane-0'
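Both settings must be applied to every VM in the cluster, not just one node. A minimal loop sketch (the VM names and inventory path are illustrative and environment-specific; note that upgrading the virtual hardware generally requires the VM to be powered off):

# for vm in control-plane-0 control-plane-1 control-plane-2 compute-0 compute-1; do
    govc vm.change -vm "/datacenter/vm/ocp/${vm}" -e="disk.enableUUID=1"
    govc vm.upgrade -version=15 -vm "/datacenter/vm/ocp/${vm}"
  done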

Installing the vSphere Cloud Provider Interface

The first step for installing CSI on OpenShift is to create a secret that holds the CSI configuration and the credentials used to access the vCenter API:

# vim csi-vsphere.conf


[Global]
cluster-id = "csi-vsphere-cluster"

[VirtualCenter "vcsa67.cloud.example.com"]
insecure-flag = "true"
user = "Administrator@vsphere.local"
password = "SuperPassword"
port = "443"
datacenters = "RDU"


# oc create secret generic vsphere-config-secret --from-file=csi-vsphere.conf --namespace=kube-system
# oc get secret vsphere-config-secret --namespace=kube-system

NAME                    TYPE     DATA   AGE
vsphere-config-secret   Opaque   1      43s

If you have some experience with the vSphere cloud provider, the above format should be familiar. The following VMware article discusses this method of configuration and installation on a vanilla Kubernetes platform in more detail.
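To double-check what was stored, the configuration file can be read back out of the secret (a quick sanity check; note this prints the credentials in clear text):

# oc extract secret/vsphere-config-secret --namespace=kube-system --to=-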

Storage Tags for Storage Policy Names

One of the biggest limitations of the old vSphere cloud provider was the one-to-one relationship between a storage class and a datastore. This made multiple storage classes a requirement and imposed inherent limitations on datastore clustering as well. With the CSI driver, a storage policy name can be used instead to distribute volumes across multiple datastores.

A tag-based placement rule was applied to the aos-vsphere datastore in question. Multiple datastores could have been added to this policy by assigning the same tag, as sketched below.
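The tag and its datastore assignment can be created in the vCenter console or with govc. A minimal sketch, using a hypothetical category and tag name (the storage policy itself is then built on top of the tag in the vCenter storage policy wizard):

# govc tags.category.create -d "Storage tiers" storage-tier
# govc tags.create -c storage-tier -d "Gold tier datastores" gold
# govc tags.attach gold /datacenter/datastore/aos-vsphere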

Now that the vSphere infrastructure has been prepared, the CSI drivers can be installed.

Install the vSphere CSI Driver

First, install the required RBAC permissions:

# oc create -f https://raw.githubusercontent.com/dav1x/ocp-vsphere-csi/master/csi-driver-rbac.yaml
serviceaccount/vsphere-csi-controller created
clusterrole.rbac.authorization.k8s.io/vsphere-csi-controller-role created
clusterrolebinding.rbac.authorization.k8s.io/vsphere-csi-controller-binding created
securitycontextconstraints.security.openshift.io/vsphere-csi-scc created

Next, install the StatefulSet for the CSI controller and the DaemonSet for the per-node driver:

# oc create -f https://raw.githubusercontent.com/dav1x/ocp-vsphere-csi/master/csi-driver-deploy-sts.yaml
statefulset.apps/vsphere-csi-controller created
csidriver.storage.k8s.io/csi.vsphere.vmware.com created

# oc create -f https://raw.githubusercontent.com/dav1x/ocp-vsphere-csi/master/csi-driver-deploy-ds.yaml
daemonset.apps/vsphere-csi-node created

After the installation, verify success by querying one of the new resource types:

# oc get CSINode
NAME              CREATED AT
control-plane-0   2020-03-02T18:21:44Z
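It is also worth confirming that the controller and node pods are running. The example manifests above deploy into the kube-system namespace (adjust the namespace if your manifests differ):

# oc get pods --namespace=kube-system | grep vsphere-csi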

 

Storage Class Creation and PVC Deployment

Now the storage class can be created and tested. If the vSphere cloud provider storage class is also present, the new class can be created and used in tandem with it for migrations and workloads. First, define the storage class:

# vi csi-sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: csi.vsphere.vmware.com
parameters:
  StoragePolicyName: "GoldVM"



# oc create -f csi-sc.yaml

# oc get sc
NAME             PROVISIONER                    AGE
csi-sc           csi.vsphere.vmware.com         6m44s
thin (default)   kubernetes.io/vsphere-volume   3h29m
Now, test the new storage class with a persistent volume claim:

# vi csi-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-pvc2
  annotations:
    volume.beta.kubernetes.io/storage-class: csi-sc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi




# oc create -f csi-pvc.yaml



# oc get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
csi-pvc2   Bound    pvc-8892e718-d89a-4267-9826-e2beb362e723   30Gi       RWO            csi-sc         9m27s
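To exercise the bound volume end to end, a pod can mount the claim. A minimal sketch (the pod name, image, and mount path are illustrative):

kind: Pod
apiVersion: v1
metadata:
  name: csi-test-pod
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: csi-vol
          mountPath: /data
  volumes:
    - name: csi-vol
      persistentVolumeClaim:
        claimName: csi-pvc2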

 

The corresponding volume creation events should appear in the events log in vCenter.

Conclusion

If the new CSI storage class is the preferred one, make it the default by patching both classes:

# oc patch storageclass thin -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
# oc patch storageclass csi-sc -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

In closing, this article has laid out directions for deploying and using the CSI driver on vSphere with OpenShift. CSI driver support and Cloud Native Storage greatly simplify the deployment and management of container workloads with persistent volumes.

Additionally, the disks are no longer tied to the virtual machine inventory; they are provisioned as First Class Disks that exist independently of any VM.
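Because the volumes are First Class Disks, they can be listed directly against the backing datastore, for example with govc (the datastore name is environment-specific):

# govc disk.ls -ds aos-vsphere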

For more information on CSI and OpenShift, check out this solution brief I worked on with VMware.

 

