The rest of the restore

We still have a little way to go to get my cluster restored. My next step is installing Portworx. Portworx is a software-defined storage layer for Kubernetes that enables a few nice functions for stateful applications (migrations, DR, automatic provisioning, etc.). I’ll have more to say about that later (and full disclosure: I work for Portworx). Portworx also has an Essentials version that is perfect for home labs.

We can install Portworx by building a spec here: https://central.portworx.com/landing/login

The spec builder will ask you a bunch of questions; rather than walk through each screen, I will document my setup by showing you the cluster provisioning manifest it generated:

# SOURCE: https://install.portworx.com/?operator=true&mc=false&kbver=&b=true&kd=type%3Dthin%2Csize%3D32&vsp=true&vc=vcenter.lab.local&vcp=443&ds=esx2-local3&s=%22type%3Dthin%2Csize%3D42%22&c=px-cluster-e54c0601-a323-4000-8440-b0f642e866a2&stork=true&csi=true&mon=true&tel=false&st=k8s&promop=true
kind: StorageCluster
apiVersion: core.libopenstorage.org/v1
metadata:
  name: px-cluster-e54c0601-a323-4000-8440-b0f642e866a2 # you should change this value
  namespace: kube-system
  annotations:
    portworx.io/install-source: "https://install.portworx.com/?operator=true&mc=false&kbver=&b=true&kd=type%3Dthin%2Csize%3D32&vsp=true&vc=vcenter.lab.local&vcp=443&ds=esx2-local3&s=%22type%3Dthin%2Csize%3D42%22&c=px-cluster-e54c0601-a323-4000-8440-b0f642e866a2&stork=true&csi=true&mon=true&tel=false&st=k8s&promop=true"
spec:
  image: portworx/oci-monitor:2.11.1
  imagePullPolicy: Always
  kvdb:
    internal: true
  cloudStorage:
    deviceSpecs:
    - type=thin,size=42 # thin-provisioned 42 GB disks for Portworx to create in vSphere
    kvdbDeviceSpec: type=thin,size=32 # dedicated 32 GB disk for the internal key-value database (KVDB)
  secretsProvider: k8s
  stork:
    enabled: true # Stork adds storage-aware scheduling; used for migrations and backups
    args:
      webhook-controller: "true"
  autopilot:
    enabled: true # Autopilot can automatically resize volumes and pools based on rules
  csi:
    enabled: true # expose Portworx volumes through the CSI interface
  monitoring:
    prometheus:
      enabled: true # deploy Prometheus and export Portworx metrics to it
      exportMetrics: true
  env:
  - name: VSPHERE_INSECURE
    value: "true"
  - name: VSPHERE_USER
    valueFrom:
      secretKeyRef:
        name: px-vsphere-secret #this is the secret that contains my vcenter creds
        key: VSPHERE_USER
  - name: VSPHERE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: px-vsphere-secret
        key: VSPHERE_PASSWORD
  - name: VSPHERE_VCENTER
    value: "vcenter.lab.local"
  - name: VSPHERE_VCENTER_PORT
    value: "443"
  - name: VSPHERE_DATASTORE_PREFIX
    value: "esx2-local3" #this will match esx2-local3* for provisioning
  - name: VSPHERE_INSTALL_MODE
    value: "shared"

There is a lot to unpack here, so look at the comments. It is important to understand that I will be letting Portworx do the disk provisioning for me by talking to my vCenter server.

Before I apply the above, there are three things I need to do:

First, install the operator; without it, the StorageCluster CRD will not exist:

kubectl apply -f 'https://install.portworx.com/?comp=pxoperator'
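
Before continuing, it is worth verifying that the operator is running and that the CRD now exists. A minimal check, assuming the operator lands in kube-system under the deployment name portworx-operator:

# operator deployment should show READY 1/1
kubectl -n kube-system get deploy portworx-operator
# the StorageCluster CRD should now be registered
kubectl get crd storageclusters.core.libopenstorage.org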

Next, we need to create our secrets file. The username and password need to be encoded in base64, so run the following (note the -n flag, which stops echo from appending a trailing newline that would corrupt the encoded value):

echo -n '<vcenter-server-user>' | base64
echo -n '<vcenter-server-password>' | base64

And put the output into the following file:

apiVersion: v1
kind: Secret
metadata:
  name: px-vsphere-secret
  namespace: kube-system
type: Opaque
data:
  VSPHERE_USER: YWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2Fs
  VSPHERE_PASSWORD: cHgxLjMuMEZUVw==

Apply the above with:

kubectl apply -f px-vsphere-secret.yaml
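
As an alternative to hand-encoding the values, kubectl can create an equivalent secret in one step and do the base64 encoding itself; a sketch using the same name, namespace, and keys as the file above:

# creates the same secret without the manual base64 step
kubectl -n kube-system create secret generic px-vsphere-secret \
  --from-literal=VSPHERE_USER='<vcenter-server-user>' \
  --from-literal=VSPHERE_PASSWORD='<vcenter-server-password>'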

Lastly, we need to tell Portworx not to install on the control plane nodes:

kubectl label node rke1 px/enabled=false --overwrite
kubectl label node rke2 px/enabled=false --overwrite
kubectl label node rke3 px/enabled=false --overwrite
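
To confirm the exclusions landed, kubectl can display the label as its own column (px/enabled should read false on the three control plane nodes):

kubectl get nodes -L px/enabled

With the nodes labeled, we can apply the StorageCluster manifest:
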
kubectl apply -f pxcluster.yaml

The above will take a few minutes, and towards the end of the process you will see VMDKs get created and attached to your virtual machines. Of course, it is also possible for Portworx to use any block device that is presented to your virtual machines; see the builder URL above, or leave me a comment and I’m happy to provide a tutorial.
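
To watch the rollout, something like the following should work (a sketch, assuming the kube-system namespace from the manifest; Portworx pods carry the name=portworx label):

# watch the Portworx pods come up on the worker nodes
kubectl -n kube-system get pods -l name=portworx -o wide --watch
# check the overall cluster state
kubectl -n kube-system get storagecluster

The StorageCluster should eventually report a status of Online.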

Install PX-Backup

Now that Portworx is installed, we will see a few additional storage classes created; we will be using px-db for our persistent storage claims. We can generate a customized set of install commands by visiting the URL at the beginning of this article, but the commands I used were:

helm repo add portworx http://charts.portworx.io/ && helm repo update
helm install px-central portworx/px-central --namespace central --create-namespace --version 2.2.1 \
  --set persistentStorage.enabled=true,persistentStorage.storageClassName="px-db",pxbackup.enabled=true
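
Rather than polling by hand, kubectl can block until the pods settle; a one-liner, assuming everything in the central namespace should eventually become Ready:

kubectl -n central wait --for=condition=Ready pod --all --timeout=15m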

This will take a few minutes (we can always check progress with kubectl get all -n central). When finished, we should see a number of services running, two of which should have grabbed IP addresses from our load balancer:

ccrow@ccrow-virtual-machine:~$ kubectl get svc -n central
NAME                                     TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)               AGE
px-backup                                ClusterIP      10.43.16.171    <none>        10002/TCP,10001/TCP   6h15m
px-backup-ui                             LoadBalancer   10.43.118.195   10.0.1.92     80:32570/TCP          6h15m
px-central-ui                            LoadBalancer   10.43.50.164    10.0.1.91     80:30434/TCP          6h15m
pxc-backup-mongodb-headless              ClusterIP      None            <none>        27017/TCP             6h15m
pxcentral-apiserver                      ClusterIP      10.43.135.127   <none>        10005/TCP,10006/TCP   6h15m
pxcentral-backend                        ClusterIP      10.43.133.234   <none>        80/TCP                6h15m
pxcentral-frontend                       ClusterIP      10.43.237.87    <none>        80/TCP                6h15m
pxcentral-keycloak-headless              ClusterIP      None            <none>        80/TCP,8443/TCP       6h15m
pxcentral-keycloak-http                  ClusterIP      10.43.194.143   <none>        80/TCP,8443/TCP       6h15m
pxcentral-keycloak-postgresql            ClusterIP      10.43.163.70    <none>        5432/TCP              6h15m
pxcentral-keycloak-postgresql-headless   ClusterIP      None            <none>        5432/TCP              6h15m
pxcentral-lh-middleware                  ClusterIP      10.43.88.142    <none>        8091/TCP,8092/TCP     6h15m
pxcentral-mysql                          ClusterIP      10.43.27.2      <none>        3306/TCP              6h15m

Let’s visit the px-backup-ui IP address (10.0.1.92 in the output above). I would do this now and set a username and password (the default credentials were printed to your console during the helm install).
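
If the external IP or the default credentials have scrolled out of your terminal, both are recoverable; a sketch, assuming the release name and namespace used above:

# print the load balancer IP of the backup UI
kubectl -n central get svc px-backup-ui -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# re-print the chart notes, which include the default credentials
helm status px-central -n central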