Plexstack Part 5 – Installing Radarr

There are a couple more concepts I want to cover before turning folks loose on a github repo:

  1. Instead of a hostPath, we should be using a PVC (persistent volume claim) and PV (persistent volume).
  2. What if we need to give a pod access to an existing and external dataset?

Radarr (https://radarr.video/) is a program that manages movies. It can request them using a download client, and can then rename and move them into a shared movies folder. As such, our pod will need to have access to 2 shared locations:

  1. A shared downloads folder.
  2. A shared movies folder.

NFS Configuration

We need to connect to our media repository. This could be a direct mount from the media server, or from a central NAS. In any case, our best bet is to use NFS. I won’t cover setting up the NFS server here (ping me in the comments if you get stuck), but I will cover how to connect to an NFS host.

This bit of code needs to be run on the Kubernetes node itself if you happen to run kubectl from a separate management box. If you have been following these tutorials on a single Linux server, everything is already on the same machine, so feel free to ignore this paragraph.

# Install NFS client
sudo apt install nfs-common -y

# edit /etc/fstab
sudo nano /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sysvg/root during curtin installation
/dev/disk/by-id/dm-uuid-LVM-QkcnzyIuoI6q532Z4OIYgQCxWKfPFEQM11kT2U143DtREAzGzzsoDCYbD2h7Ijke / xfs defaults 0 1
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/890c138e-badd-487e-9126-4fd11181cf5c /boot xfs defaults 0 1
# /boot/efi was on /dev/sda1 during curtin installation
/dev/disk/by-uuid/6A88-778F /boot/efi vfat defaults 0 1
# /home was on /dev/sysvg/home during curtin installation
/dev/disk/by-id/dm-uuid-LVM-QkcnzyIuoI6q532Z4OIYgQCxWKfPFEQMZoJ5IYUmfVeAlOMYoeVSU3WStycNW6MX /home xfs defaults 0 1
# /opt was on /dev/sysvg/opt during curtin installation
/dev/disk/by-id/dm-uuid-LVM-QkcnzyIuoI6q532Z4OIYgQCxWKfPFEQM1Vgg9WyNh823YnysItHcwA4kc0PAzrAq /opt xfs defaults 0 1
# /tmp was on /dev/sysvg/tmp during curtin installation
/dev/disk/by-id/dm-uuid-LVM-QkcnzyIuoI6q532Z4OIYgQCxWKfPFEQMRA3d1jDZr8n9R23N2t4o1yxCyz2hiD3q /tmp xfs defaults 0 1
# /var was on /dev/sysvg/var during curtin installation
/dev/disk/by-id/dm-uuid-LVM-QkcnzyIuoI6q532Z4OIYgQCxWKfPFEQMnhsacKjBubhXMyv1tK8D3umR3mnzSjbp /var xfs defaults 0 1
# /var/log was on /dev/sysvg/log during curtin installation
/dev/disk/by-id/dm-uuid-LVM-QkcnzyIuoI6q532Z4OIYgQCxWKfPFEQM1IyfBAleLuw7m0G3UC9KNLrtmVAodTqu /var/log xfs defaults 0 1
# /var/audit was on /dev/sysvg/audit during curtin installation
/dev/disk/by-id/dm-uuid-LVM-QkcnzyIuoI6q532Z4OIYgQCxWKfPFEQMsrZUFWfY77xrwFBu3vSgbUfnJIp3AKA6 /var/audit xfs defaults 0 1
/swap.img       none    swap    sw      0       0

#added nfs mounts to the end of the file
10.0.1.8:/volume1/movies /mnt/movies nfs defaults 0 0
10.0.1.8:/volume1/downloads /mnt/downloads nfs defaults 0 0

The two NFS mounts at the end of the file (below the comment) are the additions. Be sure to change the IP address and export paths. Create the mount points and mount the exports:

sudo mkdir -p /mnt/movies /mnt/downloads
sudo mount /mnt/movies
sudo mount /mnt/downloads
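
A quick sanity check (nothing here beyond the mount points defined above) confirms both exports actually mounted:

# both NFS exports should show up with their sizes
df -h /mnt/movies /mnt/downloads
mount | grep nfs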

PVC and Radarr configuration

Next, we don’t want to use a hostPath under most circumstances, so we need to get in the habit of using a PVC with a provisioner to manage volumes. This will make our architecture much more portable in the future.

A CSI driver allows automated provisioning of storage. That storage is often external to the Kubernetes nodes, which becomes essential when we have a multi-node cluster. I would encourage everyone to read this article from Red Hat. The provisioner we will be using is rather simple: it creates a path on the host and stores files there. The outcome is the same as a hostPath, but the difference is how we get there. Go ahead and install the local-path provisioner:

# Install the provisioner
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.24/deploy/local-path-storage.yaml

# Patch the newly created storage class
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
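
Before moving on, it is worth confirming that the provisioner is running and that local-path is now the default storage class (the namespace below is the one the provisioner manifest creates):

# confirm the provisioner pod and the default storage class
kubectl -n local-path-storage get pods
kubectl get storageclass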

Now take a look at this manifest for Radarr (as always, a copy of this manifest is out on github: https://github.com/ccrow42/plexstack):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: radarr-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-path
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: radarr-deployment
  labels:
    app: radarr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: radarr
  template:
    metadata:
      labels:
        app: radarr
    spec:
      containers:
        - name: radarr
          image: ghcr.io/linuxserver/radarr
          env:
            - name: PUID
              value: "999"
            - name: PGID
              value: "999"
          ports:
            - containerPort: 7878
          volumeMounts:
            - mountPath: /config
              name: radarr-config
            - mountPath: /downloads
              name: radarr-downloads
            - mountPath: /movies
              name: radarr-movies
      volumes:
        - name: radarr-config
          persistentVolumeClaim:
            claimName: radarr-pvc
        - name: radarr-downloads
          hostPath:
            path: /mnt/downloads
        - name: radarr-movies
          hostPath:
            path: /mnt/movies
---
kind: Service
apiVersion: v1
metadata:
  name: radarr-service
spec:
  selector:
    app: radarr
  ports:
  - protocol: TCP
    port: 7878
    targetPort: 7878
  type: LoadBalancer
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-radarr
  annotations:
    cert-manager.io/cluster-issuer: selfsigned-cluster-issuer #use a self-signed cert!
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - radarr.ccrow.local #using a local DNS entry. Radarr should not be public!
      secretName: radarr-tls
  rules:
    - host: radarr.ccrow.local 
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: radarr-service
                port:
                  number: 7878

Go through the above. At a minimum, the two radarr.ccrow.local host entries in the Ingress should be changed to match your own DNS name. You will also notice that our movies and downloads directories are mounted from under the /mnt folder on the host.
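
Applying it works the same way as the other apps in this series. I am assuming here that you saved the manifest as radarr.yaml and want it in the plexstack namespace from the other posts; adjust as needed:

# create the namespace if you haven't already
kubectl create namespace plexstack

# apply the manifest and watch the pod come up
kubectl -n plexstack apply -f radarr.yaml
kubectl -n plexstack get pods -w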

You can connect to the service in one of two ways:

  1. LoadBalancer: run ‘kubectl get svc’ and record the IP address of the radarr-service, then connect with: http://<IPAddress>:7878
  2. Connect to the host name (provided you have a DNS entry that points to the k8s node)

That’s it!

Deploying Rancher clusters

Update January 5th 2023

We all get older and wiser, and although the below procedure works, a co-worker asked me: “Why not just use the cloud init image?” Information and downloads can be found here.

  • Grab the OVA
  • Deploy the OVA to vSphere
  • Mark it as a template

The rest of the article continues…

After a long while of playing with templates, I finally have a working configuration that I am documenting to ensure that I don’t forget what I did.

Step 1: packer

In trying to get a usable image, I ended up using Packer, following this tutorial: https://github.com/vmware-samples/packer-examples-for-vsphere. No dice, so I made sure I had all of the packages listed here: https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template. The only thing missing was growpart.

I tried prepping the template from the above, but ended up using the following script: https://github.com/David-VTUK/Rancher-Packer/blob/main/vSphere/ubuntu_2204/script.sh

# Apply updates and cleanup Apt cache

apt-get update ; apt-get -y dist-upgrade
apt-get -y autoremove
apt-get -y clean
# apt-get install docker.io -y

# Disable swap - generally recommended for K8s, but otherwise enable it for other workloads
echo "Disabling Swap"
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Reset the machine-id value. This has known to cause issues with DHCP
#
echo "Reset Machine-ID"
truncate -s 0 /etc/machine-id
rm /var/lib/dbus/machine-id
ln -s /etc/machine-id /var/lib/dbus/machine-id

# Reset any existing cloud-init state
#
echo "Reset Cloud-Init"
rm /etc/cloud/cloud.cfg.d/*.cfg
cloud-init clean -s -l

and I was off to the races… only to hit another problem.

Troubleshooting

I found the following reddit thread that was rather helpful: https://www.reddit.com/r/rancher/comments/tfxnzr/cluster_creation_works_in_rke_but_not_rke2/

export KUBECONFIG=/etc/rancher/rke2/rke2.yaml; export PATH=$PATH:/var/lib/rancher/rke2/bin
kubectl get pods -n cattle-system
kubectl logs <cattle-cluster-agent-pod> -n cattle-system

The above describes an easy way to test nodes that are coming up. Keep in mind that RKE2 turns up in a very different way than RKE: after the cloud-init stage, the RKE2 binaries and containerd are deployed, and only then do the cattle-cluster-agent pods appear. Being able to watch those agent pods come up is what makes the commands above useful.
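
Before kubectl is even usable on a new node, the service logs on the node itself are usually the fastest signal. A couple of commands I lean on (the containerd log path is where RKE2 normally writes it; adjust if your version differs):

# watch the RKE2 server service while the node turns up
sudo journalctl -u rke2-server -f

# containerd logs live under the RKE2 data directory
sudo tail -f /var/lib/rancher/rke2/agent/containerd/containerd.log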

The last issue I encountered was that my /var filesystem didn’t have enough space. After fixing my template I now have a running RKE2 cluster!

PlexStack Part 4 – Our first app: Tautulli

We are now at a point where we can build our first application that requires some persistence. We are going to start with Tautulli, an application that provides statistics about your Plex server.

We assume that you only have a single server. The state of Kubernetes storage is interesting. The easiest approach is to simply pass a host path into the pod, but that doesn’t work when you have multiple nodes. Incidentally, what I do for my day job (Portworx Cloud Architect) is solve these problems for customers. More on that later.

We first need to specify a location to store configuration data. I will use /opt/plexstack/tautulli as an example.

sudo mkdir -p /opt/plexstack/tautulli

Next, let’s take a look at the manifest to install tautulli:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tautulli-deployment
  labels:
    app: tautulli
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tautulli
  template:
    metadata:
      labels:
        app: tautulli
    spec:
      containers:
        - name: tautulli
          image: ghcr.io/linuxserver/tautulli
          env:
            - name: PUID
              value: "999"
            - name: PGID
              value: "999"
            - name: TZ
              value: "America/Los_Angeles"
          ports:
            - containerPort: 8181
          volumeMounts:
            - mountPath: /config
              name: tautulli-config
      volumes:
        - name: tautulli-config
          hostPath:
            path: /opt/plexstack/tautulli
---
kind: Service
apiVersion: v1
metadata:
  name: tautulli-service
spec:
  selector:
    app: tautulli
  ports:
  - protocol: TCP
    port: 8181
    targetPort: 8181
  type: LoadBalancer
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-tautulli
  namespace: plexstack
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - tautulli.ccrow.org
      secretName: tautulli-tls
  rules:
    - host: tautulli.ccrow.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tautulli-service
                port:
                  number: 8181

There is a lot to unpack here:

  • The first section is the deployment, which defines the application that will run. The image field specifies the container image.
  • The PUID, PGID, and TZ environment variables configure tautulli.
  • The volumeMounts and volumes sections map the /config directory inside the container to a host path on the node.
  • The next section is the service, which looks for pods with an app selector of tautulli.
  • We are also going to provision a load balancer IP address to help with troubleshooting. This could be changed to ClusterIP to be internal only. After all, why go to an IP address when we can use an ingress?
  • tautulli.ccrow.org must resolve to our rancher node through the firewall (a step we already did in the last blog post).

Let’s apply the manifest with:

# create the namespace
kubectl create namespace plexstack

# apply the manifest
kubectl -n plexstack apply -f tautulli.yaml

# check on the deployment
kubectl -n plexstack get all -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
pod/tautulli-deployment-b4d5485df-f28px   1/1     Running   0          45s   10.42.2.30   rke04   <none>           <none>

NAME                       TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE   SELECTOR
service/tautulli-service   LoadBalancer   10.43.36.8   10.0.1.55     8181:31154/TCP   45s   app=tautulli

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                         SELECTOR
deployment.apps/tautulli-deployment   1/1     1            1           45s   tautulli     ghcr.io/linuxserver/tautulli   app=tautulli

NAME                                            DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                         SELECTOR
replicaset.apps/tautulli-deployment-b4d5485df   1         1         1       45s   tautulli     ghcr.io/linuxserver/tautulli   app=tautulli,pod-template-hash=b4d5485df

Notice the external IP address that was created for the tautulli-service. You can connect to the app from that IP (be sure to add the 8181 port!) instead of the DNS name.

All configuration data will be stored under /opt/plexstack/tautulli on your node.

Bonus Application: SMTP

In order for tautulli to send email, we need to set up an SMTP server. This will really show off the power of kubernetes configurations. Take a look at this manifest:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smtp-deployment
  labels:
    app: smtp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smtp
  template:
    metadata:
      labels:
        app: smtp
    spec:
      containers:
        - name: smtp
          image: pure/smtp-relay
          env:
            - name: SMTP_HOSTNAME
              value: "mail.ccrow.org"
            - name: RELAY_NETWORKS
              value: "10.0.0.0/8"
          ports:
            - containerPort: 25
---
kind: Service
apiVersion: v1
metadata:
  name: smtp-service
spec:
  selector:
    app: smtp
  ports:
  - protocol: TCP
    port: 25
    targetPort: 25
  type: ClusterIP

You can apply the above manifest. Be sure to change the SMTP_HOSTNAME and RELAY_NETWORKS values to match your network. Please note: “your network” really means your internal kubernetes network. After all, why would we relay email from an external source? (Well, unless you want to, in which case change the Service type from ClusterIP to LoadBalancer.)

kubectl -n plexstack apply -f smtp.yaml

We now have a working SMTP server! The coolest part of kubernetes service discovery is being able to simply use the name of our service for any application in the same namespace:

Using the service name means that this configuration is portable; there is no need to plug in the cluster IP address that was assigned.
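
For example, from inside the plexstack namespace the relay is reachable simply as smtp-service on port 25 (or smtp-service.plexstack.svc.cluster.local from elsewhere in the cluster). If you want to sanity-check the name resolution, a throwaway busybox pod does the trick:

# resolve the service name from inside the namespace
kubectl -n plexstack run dns-test --rm -it --restart=Never --image=busybox -- nslookup smtp-service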

PlexStack Part 3 – External Services and Ingresses

Now we will finally start to get into some useful configurations for our home PlexStack: ingresses and external services.

An Ingress is a kubernetes-managed reverse proxy that typically routes requests based on their hostname. It turns out that your new Rancher cluster is already listening on ports 80 and 443, but if you have tried to connect by IP address, you will be greeted with a 404 error. An ingress essentially routes a web connection for a particular URL to a service. This means that you will need to configure your DNS service, and likely your router. Let’s look at an example to explain:

I have a service called Uptime Kuma (an excellent status dashboard with alerting) that runs on a Raspberry Pi. The trouble is, I want to secure the connection with SSL. Now, I could install a cert on the Pi, but how would I automatically renew the 90-day cert from Let’s Encrypt? More importantly, how do I put multiple named services behind a single IP address? Ingresses.

For my example, I have a DNS entry for status.ccrow.org that points to the external IP of my router. I then forward ports 80 and 443 (TCP) to my Rancher node. If I have more than one node, it turns out I can port forward to ANY rancher node.

Next, I have a yaml file that defines 3 things:

  1. A service – an internal construct that kubernetes uses to connect to pods and other things
  2. An endpoint – a kubernetes object that points the service at an external address
  3. An ingress – a rule that looks for incoming connections to status.ccrow.org and routes them to the service, and from there to the endpoint. It also contains the SSL cert configuration.

apiVersion: v1
kind: Service
metadata:
  namespace: externalsvc
  name: pikuma-svc
spec:
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 3001
      targetPort: 3001
---
apiVersion: v1
kind: Endpoints
metadata:
  namespace: externalsvc
  name: pikuma-svc
subsets:
  - addresses:
    - ip: 10.0.1.4
    ports:
    - port: 3001
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-pikuma
  namespace: externalsvc
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx

spec:
  tls:
    - hosts:
        - status.ccrow.org
      secretName: status-tls
  rules:
    - host: status.ccrow.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: pikuma-svc
                port:
                  number: 3001

A few important elements in the above I will explain:

  • All 3 of these objects will be stored in the externalsvc namespace (which will need to be created!)
  • The ingress >(points to)> the service >(points to)> the endpoint
  • The name of the service and the endpoint must match; pikuma-svc appears in all three objects
  • The service type of ClusterIP is interesting. If it were set to LoadBalancer, an IP address (from the range that we defined in the previous blog post) would be provisioned for the service. No sense in doing that here.
  • The cert-manager.io/cluster-issuer annotation defines which cert provisioner we are using. Per our previous blog post, your choices are letsencrypt-prod, letsencrypt-staging, and selfsigned-cluster-issuer. Only use letsencrypt-prod if you are ready to go live. You can certainly use the self-signed issuer if you are using an internal DNS name, or if you don’t mind a self-signed certificate.
  • The host under tls and the host under rules must match; they define the DNS name that incoming connections will use.

Apply the config with:

kubectl create namespace externalsvc
kubectl apply -f uptimekuma-external.yaml
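
Once applied, a quick look at the objects (including the certificate that cert-manager creates for the TLS secret) will tell you whether everything wired up:

kubectl -n externalsvc get svc,endpoints,ingress
kubectl -n externalsvc get certificate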

If you decided to go with the Let’s Encrypt cert, some verification has to happen. It turns out that cert-manager will create a certificate request, which will create an order, which will create a challenge, which will spawn a new pod with a key that the Let’s Encrypt servers will try to connect to. Of course, if the DNS name or firewall hasn’t been configured, this process will fail.

This troubleshooting example is an excellent reference for tracking down issues (Credit):

$ kubectl get certificate
NAME               READY   SECRET     AGE
acme-certificate   False   acme-tls   66s

$ kubectl describe certificate acme-certificate
[...]
Events:
  Type    Reason     Age   From          Message
  ----    ------     ----  ----          -------
  Normal  Issuing    90s   cert-manager  Issuing certificate as Secret does not exist
  Normal  Generated  90s   cert-manager  Stored new private key in temporary Secret resource "acme-certificate-tr8b2"
  Normal  Requested  89s   cert-manager  Created new CertificateRequest resource "acme-certificate-qp5dm"

$ kubectl describe certificaterequest acme-certificate-qp5dm
[...]
Events:
  Type    Reason        Age    From          Message
  ----    ------        ----   ----          -------
  Normal  OrderCreated  7m17s  cert-manager  Created Order resource default/acme-certificate-qp5dm-1319513028

$ kubectl describe order acme-certificate-qp5dm-1319513028
[...]
Events:
  Type    Reason   Age    From          Message
  ----    ------   ----   ----          -------
  Normal  Created  7m51s  cert-manager  Created Challenge resource "acme-certificate-qp5dm-1319513028-1825664779" for domain "example-domain.net"

$ kubectl describe challenge acme-certificate-qp5dm-1319513028-1825664779
[...]
Status:
  Presented:   false
  Processing:  true
  Reason:      error getting clouddns service account: secret "clouddns-accoun" not found
  State:       pending
Events:
  Type     Reason        Age                    From          Message
  ----     ------        ----                   ----          -------
  Normal   Started       8m56s                  cert-manager  Challenge scheduled for processing
  Warning  PresentError  3m52s (x7 over 8m56s)  cert-manager  Error presenting challenge: error getting clouddns service account: secret "clouddns-accoun" not found

Nine times out of ten, the issue will be in the challenge: Let’s Encrypt can’t connect to the pod to verify you are who you say you are.
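
If you want to check that piece directly, list the challenges and then try the HTTP-01 solver URL yourself from outside your network (the token below is just a placeholder for whatever the challenge shows):

kubectl get challenges --all-namespaces
curl http://status.ccrow.org/.well-known/acme-challenge/<token>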

Now the above isn’t very cool if you only have one service behind your firewall, but if you have half a dozen, it can be very useful because you can put all of your web services behind a single IP. We will build on the ingress next by deploying our first application to our cluster.

PlexStack Part 2 – Installing Metallb and Cert-Manager on your new node.

Well, after a lengthy break involving a trip to Scotland, we are back in business! I also learned that I don’t remember as much about VMware troubleshooting as I used to when I encountered a failed vCenter server, but that is a story for another time.

In this post we will be installing a couple of bits of supporting software. MetalLB is a load balancer that will allow us to hand out a block of IP addresses to K8s services, which is a fairly easy way to interact with kubernetes services. Cert-manager is a bit of software that will allow us to create SSL certificates through Let’s Encrypt.

MetalLB

There are a couple of things that are worth getting familiar with. First, be comfortable with a text editor. I will be posting a number of files that you will need to copy and modify. Second, I would learn a little about git. I have a repository that you can feel free to clone here.

To install Metallb, we will first install the manifest.

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.4/config/manifests/metallb-native.yaml

Note the static URL above, it may be worth heading over to https://metallb.universe.tf/installation/ for updated instructions.

Next, we need to configure MetalLB by editing the following file:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.0.221-192.168.0.229
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system

Edit the above and change the addresses. The binding is handled by the L2Advertisement. Because there is no ipAddressPools selector that calls out first-pool, all pools are advertised. Obviously, your addresses should be in the same subnet as your K8s nodes. You can apply the config with:

kubectl apply -f metallb-config.yaml
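
To confirm MetalLB picked everything up, check that the controller and speaker pods are running and that the pool and advertisement objects exist:

kubectl -n metallb-system get pods
kubectl -n metallb-system get ipaddresspools,l2advertisements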

That’s it, on to cert-manager.

Cert-Manager

The cert-manager installation is best done with Helm. Helm is similar to a package manager for kubernetes. Installation is rather straightforward on Ubuntu. Of course, snap seems to be a rather hated tool, but it does make things easy:

sudo snap install helm --classic

And the installation of cert-manager can be done with:

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
   cert-manager jetstack/cert-manager \
   --namespace cert-manager \
   --create-namespace \
   --set installCRDs=true
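
Helm should leave you with three running pods in the cert-manager namespace (the controller, webhook, and cainjector); a quick check before moving on:

kubectl -n cert-manager get pods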

That’s it! Now we just need to configure it. Configuration is handled with certificate issuers, which simply tell cert-manager how to generate certificates. Don’t worry about the specific network plumbing just yet (we will cover that in the next post). I use 3 issuers: prod (Let’s Encrypt), staging, and self-signed. Take a look at the following and edit as needed:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: chris@ccrow.org
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: chris@ccrow.org
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-cluster-issuer
spec:
  selfSigned: {}

The emails above should be changed. It is also worth noting that I have combined 3 different manifests into one file by separating them with ‘---’. You can apply the config with:

kubectl apply -f cert-issuer.yaml
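
The issuers are cluster-scoped, so you can confirm all three registered (the READY column should show True once the ACME accounts are set up) with:

kubectl get clusterissuers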

That will do it! We are ready to move on to configuring our first service.

PlexStack Part 1 – Installing a single node Kubernetes Cluster

In our last post, I provided an overview of what we are trying to accomplish, so we will dive right into creating a single node Kubernetes cluster.

We are going to use Rancher RKE2 running on Ubuntu 20.04. I will admit that a lot of these choices are due to familiarity. There are a few other options for advanced users.

  • For a multi-node Rancher RKE2 cluster, check out A Return of Sorts
  • For a slightly more manual way, consider using kubeadm (I really liked this post)

We will need to start with a serviceable Ubuntu 20.04 machine. You can install this on your hypervisor of choice. I would recommend giving your VM 4 vCPUs, 12GB of RAM, and a 60GB root drive. Head over to ubuntu.com and grab a manual install of 20.04. The installation is fairly easy: enable SSH and give your VM a static IP address. (And comment if you get stuck and I will set up a tutorial.)

Advanced Tip: For those that want to build a ubuntu 20.04 template using VMware customizations, check out this post at oxcrag.net

We should now have a running Ubuntu 20.04 VM that we can SSH to. I will be installing all of the client tools and configurations on this same VM.

Let’s update our VM and install some client tools:

# Update and reboot our server
sudo apt update
sudo apt upgrade -y
sudo reboot

# install git
sudo apt install git

# install kubectl 
sudo snap install kubectl --classic

Installing RKE2

Up until now, I have been a little loose with the terms Rancher and RKE2. Rancher is a management platform that can be installed on top of any Kubernetes flavor and acts as a bit of a manager of managers. RKE2 is the Rancher Kubernetes Engine 2, a lightweight Kubernetes distro that is easy to install and work with.

Install RKE2 with:

curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_CHANNEL=v1.23 sh -

sudo systemctl enable rke2-server.service
sudo systemctl start rke2-server.service

Now let’s install and configure some client tools.

# Snag the configuration file
mkdir ~/.kube
sudo cp /etc/rancher/rke2/rke2.yaml ~/.kube/config
sudo chown -R $USER:$USER ~/.kube

# Test Kubectl
kubectl get nodes
NAME         STATUS   ROLES                       AGE   VERSION
ubuntutest   Ready    control-plane,etcd,master   15m   v1.23.9+rke2r1

That’s it! We have a single node Kubernetes cluster!

Introducing PlexStack

After a hiatus due to my own stupidity of not adding this website to my backup set (which is somehow the greater sin than me destroying my Kubernetes cluster in a rage without bothering to check on said backup), I’m going to start documenting PlexStack.

PlexStack is a collection of configurations to bring a single node Kubernetes cluster online to do a few things that can start to be difficult if we were to set them up separately:

  • An ingress that can provide access to different internal web pages from a single IP address.
  • SSL certificate management using Let’s Encrypt and endpoint termination at ingress
  • A place to easily run some applications to support your plex infrastructure:
    • Monitoring with Uptime Kuma
    • SMTP relays
    • Apps like Radarr, Sonarr, OMBI, etc

The goal is that, with a little Linux and networking knowledge, you will be able to provide external resources to the world that are encrypted, as well as have an easy-to-maintain, secure place to run many of the applications we all use to automate plex infrastructure.

OMBI running in a container with a proper SSL cert

The full list of applications that we will be spinning up:

  • OMBI
  • Radarr
  • Sonarr
  • qBittorrent
  • Tautulli
  • SMTP relay
  • Uptime-Kuma
  • Varken
  • Jackett

What do we need to get started?

We will need a single Ubuntu 20.04 server with:
– 4 to 6 cores
– 16gb of RAM
– 80gb root drive
– A static IP address
– (optional) a block of IP addresses for those that would like to deploy a load balancer.

It is outside of the scope of this series to build and deploy an Ubuntu template, but if you wish to use VMware for deployment, I would recommend this excellent blog post. Otherwise, just install the server by hand. I would also get used to SSHing into the box (and consider setting up a key).

Working with multiple clusters

So for a while, I have had a very backward way of accessing multiple clusters: I would set the KUBECONFIG environment variable, or change the default file. If I had bothered to learn the first thing about contexts, I could have avoided the confusion of keeping track of multiple files.

When a cluster is created, we often get a basic config file to access the cluster. I had often looked at these as a black box of access. Here is an example below from my rancher cluster:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://rke1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

Thanks to the official documentation (RTFM folks) I think it has finally clicked. We have lists of 3 different object types in the above config:
– Cluster: the connection to the cluster (contains a CA and endpoint)
– User: Identified with the client cert data and key data
– Context: Ties the above together (also namespaces if we want)

Contexts allow me to have multiple configurations and switch between them using the kubectl config use-context command. My goal is to have a connection to both my openshift cluster, and my rancher cluster. So I combined (and renamed some elements) the configuration:

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://api.oc1.lab.local:6443
  name: api-oc1-lab-local:6443
- cluster:
    certificate-authority-data: REDACTED
    server: https://rke1:6443
  name: rancher
contexts:
- context:
    cluster: api-oc1-lab-local:6443
    namespace: default
    user: kube:admin/api-oc1-lab-local:6443
  name: default/api-oc1-lab-local:6443/kube:admin
- context:
    cluster: rancher
    user: rancherdefault
  name: rancher
current-context: rancher
kind: Config
preferences: {}
users:
- name: kube:admin/api-oc1-lab-local:6443
  user:
    token: REDACTED
- name: rancherdefault
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

If we understand a little YAML, we can easily combine the files. Now it is simple to switch between my clusters:

kubectl config get-contexts
CURRENT   NAME                                        CLUSTER                  AUTHINFO                            NAMESPACE
          default/api-oc1-lab-local:6443/kube:admin   api-oc1-lab-local:6443   kube:admin/api-oc1-lab-local:6443   default
*         rancher                                     rancher                  rancherdefault
kubectl config use-context default/api-oc1-lab-local:6443/kube:admin
Switched to context "default/api-oc1-lab-local:6443/kube:admin".
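
A couple of related kubectl config tricks are handy here (both are standard kubectl subcommands; the rancher.yaml filename is just an example): merging kubeconfig files without hand-editing YAML, and pinning a default namespace to the current context.

# merge a second kubeconfig into the default one
KUBECONFIG=~/.kube/config:~/rancher.yaml kubectl config view --flatten > /tmp/merged && mv /tmp/merged ~/.kube/config

# set a default namespace for the current context
kubectl config set-context --current --namespace=default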

Installing Portworx on Openshift

Today I decided to see about installing Portworx on Openshift with the goal of being able to move applications there from my production RKE2 cluster. I previously installed openshift using the Installer provisioned infrastructure (rebuilding this will be a post for another day). It is a basic cluster with 3 control nodes and 3 worker nodes.

Of course, I need to have a workstation with Openshift Client installed to interact with the cluster. I will admit that I am about as dumb as a post when it comes to openshift, but we all have to start somewhere! Log in to the openshift cluster and make sure kubectl works:

oc login --token=****** --server=https://api.oc1.lab.local:6443

kubectl get nodes

NAME                     STATUS   ROLES    AGE   VERSION
oc1-g7nvr-master-0       Ready    master   17d   v1.23.5+3afdacb
oc1-g7nvr-master-1       Ready    master   17d   v1.23.5+3afdacb
oc1-g7nvr-master-2       Ready    master   17d   v1.23.5+3afdacb
oc1-g7nvr-worker-27vkp   Ready    worker   17d   v1.23.5+3afdacb
oc1-g7nvr-worker-2rt6s   Ready    worker   17d   v1.23.5+3afdacb
oc1-g7nvr-worker-cwxdm   Ready    worker   17d   v1.23.5+3afdacb

Next, I went over to px central to create a spec. One important note! Unlike installing Portworx on other distros, openshift needs you to install the portworx operator using the Openshift Operator Hub. Being lazy, I used the console:

I was a little curious about the version (v2.11 is the current version of Portworx as of this writing). What you are seeing here is the version of the operator that gets installed, which is what provides the StorageCluster object. Skipping the operator install (and just blindly clicking links in the spec generator) will generate the following error when we go to install Portworx:

error: resource mapping not found for name: "px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c" namespace: "kube-system" from "px-operator-install.yaml": no matches for kind "StorageCluster" in version "core.libopenstorage.org/v1"

Again, I chose to let Portworx automatically provision vmdks for this installation (I was less than excited about cracking open the black box of the OpenShift worker nodes).
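
The contents of px-vsphere-secret.yaml aren’t shown here, but the secret only needs to carry the VSPHERE_USER and VSPHERE_PASSWORD keys that the StorageCluster spec at the end of this post references. A minimal sketch with placeholder credentials (creating it imperatively is equivalent to applying the yaml):

kubectl -n kube-system create secret generic px-vsphere-secret \
  --from-literal=VSPHERE_USER='administrator@vsphere.local' \
  --from-literal=VSPHERE_PASSWORD='YourVCenterPassword'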

kubectl apply -f px-vsphere-secret.yaml
secret/px-vsphere-secret created

kubectl apply -f px-install.yaml
storagecluster.core.libopenstorage.org/px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c created
kubectl -n kube-system get pods

NAME                                                    READY   STATUS    RESTARTS   AGE
autopilot-7958599dfc-kw7v6                              1/1     Running   0          8m19s
portworx-api-6mwpl                                      1/1     Running   0          8m19s
portworx-api-c2r2p                                      1/1     Running   0          8m19s
portworx-api-hm6hr                                      1/1     Running   0          8m19s
portworx-kvdb-4wh62                                     1/1     Running   0          2m27s
portworx-kvdb-922hq                                     1/1     Running   0          111s
portworx-kvdb-r9g2f                                     1/1     Running   0          2m20s
prometheus-px-prometheus-0                              2/2     Running   0          7m54s
px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c-4h4rr   2/2     Running   0          8m18s
px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c-5dxx6   2/2     Running   0          8m18s
px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c-szh8m   2/2     Running   0          8m18s
px-csi-ext-5f85c7ddfd-j7hfc                             4/4     Running   0          8m18s
px-csi-ext-5f85c7ddfd-qj58x                             4/4     Running   0          8m18s
px-csi-ext-5f85c7ddfd-xs6wn                             4/4     Running   0          8m18s
px-prometheus-operator-67dfbfc467-lz52j                 1/1     Running   0          8m19s
stork-6d6dcfc98c-7nzh4                                  1/1     Running   0          8m20s
stork-6d6dcfc98c-lqv4c                                  1/1     Running   0          8m20s
stork-6d6dcfc98c-mcjck                                  1/1     Running   0          8m20s
stork-scheduler-55f5ccd6df-5ks6w                        1/1     Running   0          8m20s
stork-scheduler-55f5ccd6df-6kkqd                        1/1     Running   0          8m20s
stork-scheduler-55f5ccd6df-vls9l                        1/1     Running   0          8m20s

Success!

We can also get the pxctl status. In this case, I would like to run the command directly from the pod, so I will create an alias using the worst bit of bash hacking known to mankind (any help would be appreciated):

alias pxctl="kubectl exec $(kubectl get pods -n kube-system | awk '/px-cluster/ {print $1}' | head -n 1) -n kube-system -- /opt/pwx/bin/pxctl"
pxctl status
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: f3c9991f-9cdb-43c7-9d39-36aa388c5695
        IP: 10.0.1.211
        Local Storage Pool: 1 pool
        POOL    IO_PRIORITY     RAID_LEVEL      USABLE  USED    STATUS  ZONE    REGION
        0       HIGH            raid0           42 GiB  2.4 GiB Online  default default
        Local Storage Devices: 1 device
        Device  Path            Media Type              Size            Last-Scan
        0:1     /dev/sdb        STORAGE_MEDIUM_MAGNETIC 42 GiB          27 Jul 22 20:25 UTC
        total                   -                       42 GiB
        Cache Devices:
         * No cache devices
        Kvdb Device:
        Device Path     Size
        /dev/sdc        32 GiB
         * Internal kvdb on this node is using this dedicated kvdb device to store its data.
Cluster Summary
        Cluster ID: px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c
        Cluster UUID: 73368237-8d36-4c23-ab88-47a3002d13cf
        Scheduler: kubernetes
        Nodes: 3 node(s) with storage (3 online)
        IP              ID                                      SchedulerNodeName       Auth            StorageNode     Used    Capacity        Status  StorageStatus        Version         Kernel                          OS
        10.0.1.211      f3c9991f-9cdb-43c7-9d39-36aa388c5695    oc1-g7nvr-worker-2rt6s  Disabled        Yes             2.4 GiB 42 GiB          Online  Up (This node)       2.11.1-3a5f406  4.18.0-305.49.1.el8_4.x86_64    Red Hat Enterprise Linux CoreOS 410.84.202206212304-0 (Ootpa)
        10.0.1.210      cfb2be04-9291-4222-8df6-17b308497af8    oc1-g7nvr-worker-cwxdm  Disabled        Yes             2.4 GiB 42 GiB          Online  Up  2.11.1-3a5f406   4.18.0-305.49.1.el8_4.x86_64    Red Hat Enterprise Linux CoreOS 410.84.202206212304-0 (Ootpa)
        10.0.1.213      5a6d2c8b-a295-4fb2-a831-c90f525011e8    oc1-g7nvr-worker-27vkp  Disabled        Yes             2.4 GiB 42 GiB          Online  Up  2.11.1-3a5f406   4.18.0-305.49.1.el8_4.x86_64    Red Hat Enterprise Linux CoreOS 410.84.202206212304-0 (Ootpa)
Global Storage Pool
        Total Used      :  7.1 GiB
        Total Capacity  :  126 GiB

For the next bit of housekeeping, I want to get a kubeconfig so I can add this cluster into PX Backup. Because of the black magic when I used the oc command to log in, I’m going to export the kubeconfig:
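
Something like the following will flatten the current context into a standalone file (standard kubectl config flags; the filename is just an example), producing the config shown below:

kubectl config view --flatten --minify > oc1-kubeconfig.yaml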

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://api.oc1.lab.local:6443
  name: api-oc1-lab-local:6443
contexts:
- context:
    cluster: api-oc1-lab-local:6443
    namespace: default
    user: kube:admin/api-oc1-lab-local:6443
  name: default/api-oc1-lab-local:6443/kube:admin
current-context: default/api-oc1-lab-local:6443/kube:admin
kind: Config
preferences: {}
users:
- name: kube:admin/api-oc1-lab-local:6443
  user:
    token: REDACTED

Notice that the token above is redacted; you will need to add your own token from the oc login when pasting this into PX Backup.

And as promised, the spec I used to install:

# SOURCE: https://install.portworx.com/?operator=true&mc=false&kbver=&b=true&kd=type%3Dthin%2Csize%3D32&vsp=true&vc=vcenter.lab.local&vcp=443&ds=esx2-local3&s=%22type%3Dthin%2Csize%3D42%22&c=px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c&osft=true&stork=true&csi=true&mon=true&tel=false&st=k8s&promop=true
kind: StorageCluster
apiVersion: core.libopenstorage.org/v1
metadata:
  name: px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c
  namespace: kube-system
  annotations:
    portworx.io/install-source: "https://install.portworx.com/?operator=true&mc=false&kbver=&b=true&kd=type%3Dthin%2Csize%3D32&vsp=true&vc=vcenter.lab.local&vcp=443&ds=esx2-local3&s=%22type%3Dthin%2Csize%3D42%22&c=px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c&osft=true&stork=true&csi=true&mon=true&tel=false&st=k8s&promop=true"
    portworx.io/is-openshift: "true"
spec:
  image: portworx/oci-monitor:2.11.1
  imagePullPolicy: Always
  kvdb:
    internal: true
  cloudStorage:
    deviceSpecs:
    - type=thin,size=42
    kvdbDeviceSpec: type=thin,size=32
  secretsProvider: k8s
  stork:
    enabled: true
    args:
      webhook-controller: "true"
  autopilot:
    enabled: true
  csi:
    enabled: true
  monitoring:
    prometheus:
      enabled: true
      exportMetrics: true
  env:
  - name: VSPHERE_INSECURE
    value: "true"
  - name: VSPHERE_USER
    valueFrom:
      secretKeyRef:
        name: px-vsphere-secret
        key: VSPHERE_USER
  - name: VSPHERE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: px-vsphere-secret
        key: VSPHERE_PASSWORD
  - name: VSPHERE_VCENTER
    value: "vcenter.lab.local"
  - name: VSPHERE_VCENTER_PORT
    value: "443"
  - name: VSPHERE_DATASTORE_PREFIX
    value: "esx2-local4"
  - name: VSPHERE_INSTALL_MODE
    value: "shared"

The rest of the restore – part 2

With the last post getting a little long, we will pick up where we left off. Our first task is to set up something called a proxy volume. A proxy volume is a Portworx-specific feature that allows me to create a PVC that is backed by an external NFS share, in this case my minio export. It should be noted that I wiped the minio configuration from the export by deleting the .minio.sys directory, but you won’t need to worry about that with a new install.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-proxy-volume-miniok8s
provisioner: kubernetes.io/portworx-volume
parameters:
  proxy_endpoint: "nfs://10.0.1.8"
  proxy_nfs_exportpath: "/volume1/miniok8s"
  mount_options: "vers=3.0"
allowVolumeExpansion: true
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: minio
  name: minio-data
  labels:
    app: nginx
spec:
  storageClassName: portworx-proxy-volume-miniok8s
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2T

The above does a couple of things. First, note the ‘---’: this is a way of combining yaml files into one file. The first section creates a new storage class that points to my nfs export. The second section creates a PVC called minio-data that we will use later. Why not just mount the nfs export to the worker node? Because I don’t know which worker node my pod will be deployed on, and I would rather not mount my minio export to every node (as well as needing to update fstab anytime I do something like this!)

The PVC lives in the minio namespace, so create the namespace first, then apply the manifest:

kubectl create namespace minio
kubectl apply -f minio-pvc.yaml
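
The proxy volume does not need to wait on a provisioner, so the claim should bind almost immediately; worth confirming before moving on:

kubectl -n minio get pvc minio-data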

Install Minio

To install minio, we will be using helm again, and we will be using a values.yaml file for the first time. The chart lives in MinIO’s chart repository, so we add that first. Let’s get ready:

# add the minio chart repository
helm repo add minio https://charts.min.io/
helm repo update

# write out the default chart values so we can edit them
helm show values minio/minio > minio-values.yaml

The helm show values command writes an example values file to minio-values.yaml. Take the time to read through the file, but I will show you some important lines:

32 mode: standalone
...
81 rootUser: "minioadmin"
82 rootPassword: "AwsomeSecurePassword"
...
137 persistence:
138   enabled: true
139   annotations: {}

  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
144   existingClaim: "minio-data"
...
316 users:
322   - accessKey: pxbackup
323     secretKey: MyAwesomeKey
324     policy: readwrite

Be careful copying the above as I am manually writing in the line numbers so you can find them in your values file. It is also possible to create buckets from here. There is a ton of customization that can happen with a values.yaml file, without you needing to paw through manifests. Install minio with:

helm -n minio install minio minio/minio -f minio-values.yaml
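
Give the chart a minute, then check that the pod is running and that the two services (minio and minio-console) were created; those service names are what the ingresses below point at:

kubectl -n minio get pods
kubectl -n minio get svc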

Minio should be up and running, but we don’t have a good way of getting to it. Now is the time for all of our prep work to come together. We first need to plumb a couple of networking things out.

First, configure your firewall to forward ports 80 and 443 to the IP of any node in your cluster.

Second, configure a couple of DNS entries. I use:

  • minio.ccrow.org – the S3 API endpoint. Point this at the external IP of your router.
  • minioconsole.lab.local – my internal DNS name to manage minio. Point this at any node in your cluster.

Now for our first ingress:

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-minio
  namespace: minio
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
spec:
  tls:
    - hosts:
        - minio.ccrow.org
      secretName: minio-tls
  rules:
    - host: minio.ccrow.org #change this to your DNS name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: minio
                port:
                  number: 9000
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-minioconsole
  namespace: minio
  annotations:
    cert-manager.io/cluster-issuer: selfsigned-cluster-issuer
    kubernetes.io/ingress.class: nginx

spec:
  tls:
    - hosts:
        - minioconsole.lab.local
      secretName: minioconsole-tls
  rules:
    - host: minioconsole.lab.local # change this to your DNS name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: minio-console
                port:
                  number: 9001

The above will create 2 ingresses in the minio namespace: one to point minioconsole.lab.local to the minio-console service that the helm chart created, and a second to point minio.ccrow.org to the minio service.

We haven’t talked much about services, but they are a way for containers running on kubernetes to talk to each other. An ingress listens for an incoming hostname (think old webservers with virtual hosts) and routes traffic to the appropriate service, and because of all of the work we have done before, these ingresses will automatically get certificates from Let’s Encrypt. Apply the above with:

kubectl apply -f minio-ingress.yaml
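
As with the earlier ingresses, cert-manager will kick off a certificate request behind the scenes; you can watch both pieces with:

kubectl -n minio get ingress
kubectl -n minio get certificate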

There are a few things that can go wrong here, and I will update this post when questions come in. At this point, it is easy to configure PX backup from the GUI to point at minio.ccrow.org:

And point PX Backup at your cluster:

You can export your kubeconfig with the command above.

We have to click on the ‘All backups’ link (which will take a few minutes to scan), but:

Sweet, sweet backups!!!

Again, sorry for the cliff notes version of these installs, but I wanted to make sure I documented this!

And yes, I backed up this WordPress site this time…