Working with multiple clusters

So for a while, I had a very backward way of accessing multiple clusters: I would either set the KUBECONFIG environment variable or swap out the default file. If I had bothered to learn the first thing about contexts, I could have avoided the confusion of keeping track of multiple files.

When a cluster is created, we often get a basic config file for accessing it. I had always looked at these files as a black box of access. Here is an example from my rancher cluster:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpVNE56Z3lNVEUzTUI0WERUSXlNRGN5TlRJd05EZ3pOMW9YRFRNeU1EY3lNakl3TkRnegpOMW93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZMU9EYzRNakV4TnpCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJNeGZhZjJsVHYzeWMrZkpZWmh5dENZQXhoZ09HYVgwMTU5QzdkYUQKaGxwL1h0OXpuVVdscWV1L21hQnlLa1RTdVZSUHc3MC83b2tKeGh4S0k3SU0vaHFqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCU1BUV29BQWNEd0trT29RdWpQClZOTjlxK2lMY3pBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlCNGJraFhEK1JIL0ZrQlRtWkRnbmJtNHpZMXh4TDYKeG5JM1pSdzcyRUt4NWdJaEFNVXpNbW1peURQZTZiZmx2NUJ0K1Q4RjVIblNMekZCWjJWYThRdUwwZkxvCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://rke1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRpZ0F3SUJBZ0lJYktNTk5pV2xMQXd3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkxT0RjNE1qRXhOekFlRncweU1qQTNNalV5TURRNE16ZGFGdzB5TXpBMwpNalV5TURRNE16ZGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUXBybkxHQllpVUdIcFkKYWJqN1ptV2VXand1VVQ5U0xnMWFQSTlLSGFBc3VlMTBtb0RqUTdFNWxGamFBVWhKbnloT2pWOXd0NXo5OTlSZwpWelMyUThXU28wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVVYZDUyRkZtRVUwa01LU0o1U1NOeklxK1lBVnd3Q2dZSUtvWkl6ajBFQXdJRFJ3QXcKUkFJZ01LMlUzS2V2THkxOWFZMExuOGZvcjFZNUdRVk4vQzhHWmhCWEdTSmlqR1lDSUIvU3paQ3dGK0E2cGtIeApKeGNkTU5mU2FuNENBeWlia0V6WjlmSkx2T2IzCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxXTnMKYVdWdWRDMWpZVUF4TmpVNE56Z3lNVEUzTUI0WERUSXlNRGN5TlRJd05EZ3pOMW9YRFRNeU1EY3lNakl3TkRnegpOMW93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTFqYkdsbGJuUXRZMkZBTVRZMU9EYzRNakV4TnpCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJPWUN6d0FtOW5kK09FcUxPYXhMMGRrNnUvQm5kem1Fa1k0YzkvNlYKNXNOaFQrcE1HQmV2VDlBQlNUYUNOZ1p2QUR5TlIyVEVuaFp3MGxCK1hkbWZSZnVqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUmQzbllVV1lSVFNRd3BJbmxKCkkzTWlyNWdCWERBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlFQTlSOHcxSVk2dUw2SGNUQXpXQ0ROTGdSVFk5T24Kb05CMjJZVFBrVDU3NzdzQ0lFY2I1c3NsV200Sm9GTk9RZ3ZwQUZkYjR1RG5QMzFTbE5SZ1JuYk5Sd1hWCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSVBRSzFKUURKSUttQlFWemFTR0liZDR4N2YyU0NwMDl4M3ROdXNPaVJPeUVvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFS2E1eXhnV0lsQmg2V0dtNCsyWmxubG84TGxFL1VpNE5XanlQU2gyZ0xMbnRkSnFBNDBPeApPWlJZMmdGSVNaOG9UbzFmY0xlYy9mZlVZRmMwdGtQRmtnPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
```

Thanks to the official documentation (RTFM, folks), I think it has finally clicked. The config above contains lists of three different object types:
– Cluster: the connection to the cluster (a CA certificate and an API endpoint)
– User: the identity, here client certificate data and key data
– Context: ties a cluster and a user together (plus a namespace, if we want)

Contexts allow me to keep multiple configurations in one file and switch between them with the kubectl config use-context command. My goal is to have a connection to both my openshift cluster and my rancher cluster, so I combined the two files (renaming some elements along the way):

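Below is a minimal sketch of the merged result; the cluster names, the openshift server URL, and all the credential values are illustrative placeholders (assuming a certificate-based user for rancher and a token-based user for openshift):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <rancher-ca-data>
    server: https://rke1:6443
  name: rancher
- cluster:
    certificate-authority-data: <openshift-ca-data>
    server: https://api.openshift.example.com:6443
  name: openshift
contexts:
- context:
    cluster: rancher
    user: rancher
  name: rancher
- context:
    cluster: openshift
    user: openshift
  name: openshift
current-context: rancher
preferences: {}
users:
- name: rancher
  user:
    client-certificate-data: <rancher-cert-data>
    client-key-data: <rancher-key-data>
- name: openshift
  user:
    token: <openshift-token>
```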

If we understand a little YAML, combining the files is easy. Now switching between my clusters is simple:

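The kubectl config subcommands do all the work (the context names here match the sketch above):

```shell
# List every context in the kubeconfig; the asterisk marks the current one
kubectl config get-contexts

# Point kubectl at the openshift cluster...
kubectl config use-context openshift

# ...and back at rancher
kubectl config use-context rancher
```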

Installing Portworx on Openshift

Today I decided to try installing Portworx on Openshift, with the goal of being able to move applications there from my production RKE2 cluster. I previously installed openshift using installer-provisioned infrastructure (rebuilding it will be a post for another day). It is a basic cluster with 3 control plane nodes and 3 worker nodes.

Of course, I need a workstation with the Openshift client (oc) installed to interact with the cluster. I will admit that I am about as dumb as a post when it comes to openshift, but we all have to start somewhere! Log in to the openshift cluster and make sure kubectl works:

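Something along these lines, with a placeholder API endpoint and user (oc prompts for the password and writes the resulting credentials into the kubeconfig):

```shell
# Log in with the OpenShift client; the server URL and user are placeholders
oc login https://api.openshift.example.com:6443 -u kubeadmin

# oc stores a token in the kubeconfig, so plain kubectl works too
kubectl get nodes
```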

Next, I went over to PX-Central to create a spec. One important note! Unlike installing Portworx on other distros, openshift needs you to install the Portworx operator using the Openshift OperatorHub. Being lazy, I used the web console:

I was a little curious about the version (v2.11 is the current version of Portworx as of this writing); what you are seeing here is the version of the operator that gets installed, and the operator is what makes the StorageCluster object available. Skipping the operator install (and just blindly clicking links in the spec generator) generates the following error when we go to install Portworx:

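The exact wording varies by client version, but it is the standard complaint about a missing CRD, roughly:

```shell
# With no operator (and therefore no StorageCluster CRD) the apply fails.
# Paraphrased output, not an exact capture:
oc apply -f px-spec.yaml
# error: unable to recognize "px-spec.yaml": no matches for kind
# "StorageCluster" in version "core.libopenstorage.org/v1"
```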

Again, I chose to let Portworx automatically provision VMDKs for this installation (I was less than excited about cracking open the black box of the OpenShift worker nodes).

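Applying the generated spec and watching the pods come up looks roughly like this (assuming the generated spec targets the kube-system namespace; adjust if yours uses a different one):

```shell
# Apply the StorageCluster spec generated by PX-Central
oc apply -f px-spec.yaml

# Watch the Portworx node pods come up (-l name=portworx selects them)
oc -n kube-system get pods -l name=portworx -o wide

# The StorageCluster should eventually report a status of Online
oc -n kube-system get storagecluster
```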

Success!

We can also check on the cluster with pxctl status. In this case, I would like to run the command directly from the pod, so I will create an alias using the worst bit of bash hacking known to mankind (any help would be appreciated):

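A common pattern from the Portworx docs is to look up one of the node pods and exec pxctl inside it; I am assuming the kube-system namespace here:

```shell
# Grab the name of the first Portworx node pod
PX_POD=$(kubectl get pods -n kube-system -l name=portworx \
  -o jsonpath='{.items[0].metadata.name}')

# Alias pxctl to an exec inside that pod (the pod name expands when defined)
alias pxctl="kubectl exec -n kube-system ${PX_POD} -- /opt/pwx/bin/pxctl"

# Now the status command runs inside the pod
pxctl status
```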

For the next bit of housekeeping, I want to get a kubeconfig so I can add this cluster into PX Backup. Because of the black magic that happened when I logged in with the oc command, I'm going to export the kubeconfig with:

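A sketch of the export; --minify keeps only the current context and --flatten inlines the certificate data into one portable file:

```shell
# Dump the active context as a self-contained kubeconfig.
# Without --raw, the token is printed as REDACTED.
oc config view --minify --flatten > openshift-kubeconfig.yaml
```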

Notice that the token above is redacted; you will need to substitute your own token from oc when pasting the config into PX Backup.

And as promised, the spec I used to install:

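Below is a sanitized sketch in the shape of the Portworx vSphere cloud-drive examples; the vCenter hostname, secret name, datastore prefix, and disk size are placeholders rather than my actual values:

```yaml
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: kube-system
spec:
  image: portworx/oci-monitor:2.11.0
  imagePullPolicy: Always
  kvdb:
    internal: true
  # Let Portworx carve VMDKs out of vSphere instead of using local disks;
  # the size (in GB) is a placeholder
  cloudStorage:
    deviceSpecs:
    - type=thin,size=150
  env:
  - name: VSPHERE_INSECURE
    value: "true"
  # Credentials come from a secret (px-vsphere-secret is a placeholder name)
  - name: VSPHERE_USER
    valueFrom:
      secretKeyRef:
        name: px-vsphere-secret
        key: VSPHERE_USER
  - name: VSPHERE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: px-vsphere-secret
        key: VSPHERE_PASSWORD
  - name: VSPHERE_VCENTER
    value: "vcenter.example.com"
  - name: VSPHERE_DATASTORE_PREFIX
    value: "px-ds"
```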