Automating Ingress whitelists with Plex

I recently needed to tackle a problem for one of my users. I employ whitelists for more sensitive services, which include some Dropbox-like functionality. One of my users was unable to use a VPN, and their IP address kept rotating, so any static whitelist entry quickly went stale.

How can I update nginx whitelists as well as firewall rules automatically (and maybe somewhat safely)? Read on for my latest crime against Kubernetes.

How is access handled with Synology Drive?

Synology Drive in my environment has two components. The first is the web interface, which is front-ended by a Kubernetes ingress with an associated service and endpoint (because the actual web interface is, of course, on the Synology). The basic configuration looks like this:

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-synology
  namespace: externalsvc
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod-dns01
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,136.226.0.0/16,67.183.150.241/32,71.212.140.237/32,71.212.91.169/32,73.42.224.105/32,76.22.86.230/32"
spec:
...

The important line is the nginx.ingress.kubernetes.io/whitelist-source-range annotation, which controls who can connect: requests from IP addresses outside the listed ranges get a 403 error.
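A quick way to confirm the behavior from outside the whitelist (the hostname here is just a placeholder for whatever your ingress serves):

# Expect an HTTP 403 from an address outside the ranges above, and a normal
# response once the address has been added.
curl -sI https://drive.example.org | head -n 1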

The second piece is a firewall rule to allow access to 6690, which is the TCP port that Synology Drive uses for transmitting data.

Luckily, my firewall is configured by a script that flushes all rules and then rebuilds them with plain iptables commands (I am that old). An example rule looks like this:

iptables -A FORWARD -s "$SRC" -d "$HOST_NAS01" -p tcp --dport "$PORT" -j ACCEPT

Reading the latest IP address from Tautulli

Because my user sometimes watches videos on Plex, Tautulli (which monitors my Plex server) keeps a log of stream metadata, including the client IP address. It turns out it has a decent API I could use.

curl -s "https://tautulli.ccrow.org/api/v2?apikey=<TAUTULLI_API_KEY>&cmd=get_history&user=<USER>&length=1" \
    | jq -r '.response.data.data[0].ip_address // empty'

The above curl command uses the API to run the get_history command for a given user, which returns a JSON payload. The length parameter ensures only a single item comes back, which I opted for instead of parsing all of the user's watch history entries locally. We pipe that into jq to extract the IP address, returning nothing if it isn't found, which is helpful for our bash script.

The goal of our script is to produce a text file with one CIDR per line that we can use to update both the firewall rules and the ingress whitelist. We need to do a few things to the result, like deduplicating the entries. There seems to be a shell command for everything these days:

# Loop over users and fetch most recent IP
for user in "${USERS[@]}"; do
  ip=$(curl -s "${TAUTULLI_URL}?apikey=${APIKEY}&cmd=get_history&user=${user}&length=1" \
    | jq -r '.response.data.data[0].ip_address // empty')

  if [[ -n "$ip" ]]; then
    echo "${ip}/32" >> "$CIDR_FILE"
  else
    echo "No IP found for $user" >&2
  fi
done

### Deduplicate and sort
sort -u "$CIDR_FILE" -o "$CIDR_FILE"

echo "CIDR list built at $CIDR_FILE"

Updating our Ingress whitelist

We can then loop over a list of ingress YAML definitions and rewrite the whitelist annotation in each:

# --- Update ingress files ---

# Create a comma-separated string of CIDRs
CIDR_LIST=$(paste -sd, "$CIDR_FILE")

for file in "${INGRESS_FILES[@]}"; do
  if [[ -f "$file" ]]; then
    sed -i "s#nginx.ingress.kubernetes.io/whitelist-source-range:.*#nginx.ingress.kubernetes.io/whitelist-source-range: \"$CIDR_LIST\"#" "$file"
    echo "Updated whitelist-source-range in $file"
  else
    echo "File not found: $file"
  fi
done

Why use sed? Some of my YAML files contain multiple manifests delimited by ---, and I was not very uniform about their layout. sed is a simple way to find and replace the existing annotation line.
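If the manifests were more uniform, a structured tool would be safer than a regex. Here is a minimal sketch using mikefarah's yq v4 (an alternative I have not actually wired into the script, so treat the exact expression as an assumption) that only touches Ingress documents in a multi-manifest file:

# Update the annotation only on Ingress documents; other documents in the
# same file pass through unchanged. strenv() keeps the comma-separated list
# as a plain string.
export CIDR_LIST
yq -i '(select(.kind == "Ingress")
        | .metadata.annotations."nginx.ingress.kubernetes.io/whitelist-source-range")
       = strenv(CIDR_LIST)' "$file"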

We then simply check our changes into git for ArgoCD to sweep up and apply.

Updating our firewall rules

I mentioned that my firewall is simply a script that is run over SSH. I ensured that our $CIDR_FILE is copied alongside the script and updated my iptables script to include a simple loop:

### Clients allowed to connect to Synology Drive
CIDR_FILE="cidrs.txt"

# --- Ports to allow ---
PORTS=(6690 6281)  # You can adjust per requirement

# --- Apply iptables rules ---
if [[ ! -f "$CIDR_FILE" ]]; then
  echo "CIDR file not found: $CIDR_FILE"
  exit 1
fi

while IFS= read -r SRC; do
  for PORT in "${PORTS[@]}"; do
    iptables -A FORWARD -s "$SRC" -d "$HOST_NAS01" -p tcp --dport "$PORT" -j ACCEPT
    echo "Added rule: $SRC -> $HOST_NAS01 port $PORT"
  done
done < "$CIDR_FILE"

The above appends two rules per client CIDR, allowing TCP ports 6690 and 6281. We read the file line by line into the SRC variable, then loop through the PORTS array to add each rule. This part will look very different depending on your firewall; an nftables equivalent is sketched below.
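For anyone running nftables instead of iptables, the equivalent rule inside the same loop would look roughly like this (a sketch that assumes an existing inet filter table with a forward chain; untested on my firewall):

# Hypothetical nftables version of the rule added in the loop above.
nft add rule inet filter forward ip saddr "$SRC" ip daddr "$HOST_NAS01" tcp dport "$PORT" accept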

For those who want to see the update script, which is (sadly) triggered by a cron job, in all of its glory:

#!/usr/bin/env bash
set -euo pipefail

### --- Config ---
source /root/secrets.sh
APIKEY=$TAUTULLI_API_TOKEN
TAUTULLI_URL="https://tautulli.ccrow.org/api/v2"

CIDR_FILE="$HOME/personal/homelab/tautulli-api/cidrs.txt"
FIREWALL_DIR="$HOME/personal/firewall"

# Static CIDRs
STATIC_CIDRS=(
  "136.226.0.0/16"
  "10.0.0.0/8"
)

# Usernames to query
USERS=(
  "User1"
  "User2"
)

# GitOps repo base directory
BASEDIR="$HOME/personal/gitops-cd"
INGRESS_FILES=(
  "$BASEDIR/manifests/externalsvc/synology.yaml"
  "$BASEDIR/manifests/kiwix/kiwix.yaml"
)

### --- Build CIDR file ---

# Start fresh so stale client IPs don't accumulate across runs
: > "$CIDR_FILE"

# Add static CIDRs
for cidr in "${STATIC_CIDRS[@]}"; do
  echo "$cidr" >> "$CIDR_FILE"
done

# Loop over users and fetch most recent IP
for user in "${USERS[@]}"; do
  ip=$(curl -s "${TAUTULLI_URL}?apikey=${APIKEY}&cmd=get_history&user=${user}&length=1" \
    | jq -r '.response.data.data[0].ip_address // empty')

  if [[ -n "$ip" ]]; then
    echo "${ip}/32" >> "$CIDR_FILE"
  else
    echo "No IP found for $user" >&2
  fi
done

### Deduplicate and sort
sort -u "$CIDR_FILE" -o "$CIDR_FILE"

echo "CIDR list built at $CIDR_FILE"

# --- Update ingress files ---

# Create a comma-separated string of CIDRs
CIDR_LIST=$(paste -sd, "$CIDR_FILE")

for file in "${INGRESS_FILES[@]}"; do
  if [[ -f "$file" ]]; then
    sed -i "s#nginx.ingress.kubernetes.io/whitelist-source-range:.*#nginx.ingress.kubernetes.io/whitelist-source-range: \"$CIDR_LIST\"#" "$file"
    echo "Updated whitelist-source-range in $file"
  else
    echo "File not found: $file"
  fi
done

echo "updating git in 10 sec"
sleep 10

# --- Git commit changes ---
cd "$BASEDIR"
#git pull || { echo "ERROR: can't pull"; exit 1; }
if [[ -n "$(git status --porcelain)" ]]; then
  git add .
  git commit -m "automated whitelist update"
  echo "Changes committed in $BASEDIR"
else
  echo "No changes to commit in $BASEDIR"
fi

git push

echo "updating firewall in 10 sec"
sleep 10
cd "$FIREWALL_DIR"
./install_fw2.sh

Never underestimate the work ethic of a lazy system administrator!

Installing Portworx on OpenShift

Today I decided to see about installing Portworx on OpenShift, with the goal of being able to move applications there from my production RKE2 cluster. I previously installed OpenShift using installer-provisioned infrastructure (rebuilding this will be a post for another day). It is a basic cluster with 3 control-plane nodes and 3 worker nodes.

Of course, I need a workstation with the OpenShift client (oc) installed to interact with the cluster. I will admit that I am about as dumb as a post when it comes to OpenShift, but we all have to start somewhere! Log in to the OpenShift cluster and make sure kubectl works:

oc login --token=****** --server=https://api.oc1.lab.local:6443

kubectl get nodes

NAME                     STATUS   ROLES    AGE   VERSION
oc1-g7nvr-master-0       Ready    master   17d   v1.23.5+3afdacb
oc1-g7nvr-master-1       Ready    master   17d   v1.23.5+3afdacb
oc1-g7nvr-master-2       Ready    master   17d   v1.23.5+3afdacb
oc1-g7nvr-worker-27vkp   Ready    worker   17d   v1.23.5+3afdacb
oc1-g7nvr-worker-2rt6s   Ready    worker   17d   v1.23.5+3afdacb
oc1-g7nvr-worker-cwxdm   Ready    worker   17d   v1.23.5+3afdacb

Next, I went over to PX-Central to create a spec. One important note: unlike installing Portworx on other Kubernetes distributions, OpenShift needs you to install the Portworx Operator from the OpenShift OperatorHub. Being lazy, I used the console:

I was a little curious about the version (2.11 is the current version of Portworx as of this writing). What you are seeing here is the version of the operator that gets installed, which is what provides the StorageCluster object. Skipping the operator install (and just blindly clicking links in the spec generator) will produce the following error when we go to install Portworx:

error: resource mapping not found for name: "px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c" namespace: "kube-system" from "px-operator-install.yaml": no matches for kind "StorageCluster" in version "core.libopenstorage.org/v1"
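If you hit that error, a quick sanity check before re-applying the spec is to confirm that the operator actually registered the CRD (this check is my addition, not part of the original flow):

# The StorageCluster CRD only exists once the Portworx Operator is installed.
kubectl get crd storageclusters.core.libopenstorage.org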

Again, I chose to let Portworx automatically provision VMDKs for this installation (I was less than excited about cracking open the black box of the OpenShift worker nodes).
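The px-vsphere-secret.yaml referenced below simply holds the vCenter credentials that the StorageCluster spec (included at the end of this post) pulls in via secretKeyRef. If you would rather not write the YAML by hand, something like this should produce an equivalent secret (the credential values are placeholders):

# Create the vCenter credentials secret referenced by the StorageCluster env vars.
kubectl -n kube-system create secret generic px-vsphere-secret \
  --from-literal=VSPHERE_USER='administrator@vsphere.local' \
  --from-literal=VSPHERE_PASSWORD='changeme'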

kubectl apply -f px-vsphere-secret.yaml
secret/px-vsphere-secret created

kubectl apply -f px-install.yaml
storagecluster.core.libopenstorage.org/px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c created

kubectl -n kube-system get pods

NAME                                                    READY   STATUS    RESTARTS   AGE
autopilot-7958599dfc-kw7v6                              1/1     Running   0          8m19s
portworx-api-6mwpl                                      1/1     Running   0          8m19s
portworx-api-c2r2p                                      1/1     Running   0          8m19s
portworx-api-hm6hr                                      1/1     Running   0          8m19s
portworx-kvdb-4wh62                                     1/1     Running   0          2m27s
portworx-kvdb-922hq                                     1/1     Running   0          111s
portworx-kvdb-r9g2f                                     1/1     Running   0          2m20s
prometheus-px-prometheus-0                              2/2     Running   0          7m54s
px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c-4h4rr   2/2     Running   0          8m18s
px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c-5dxx6   2/2     Running   0          8m18s
px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c-szh8m   2/2     Running   0          8m18s
px-csi-ext-5f85c7ddfd-j7hfc                             4/4     Running   0          8m18s
px-csi-ext-5f85c7ddfd-qj58x                             4/4     Running   0          8m18s
px-csi-ext-5f85c7ddfd-xs6wn                             4/4     Running   0          8m18s
px-prometheus-operator-67dfbfc467-lz52j                 1/1     Running   0          8m19s
stork-6d6dcfc98c-7nzh4                                  1/1     Running   0          8m20s
stork-6d6dcfc98c-lqv4c                                  1/1     Running   0          8m20s
stork-6d6dcfc98c-mcjck                                  1/1     Running   0          8m20s
stork-scheduler-55f5ccd6df-5ks6w                        1/1     Running   0          8m20s
stork-scheduler-55f5ccd6df-6kkqd                        1/1     Running   0          8m20s
stork-scheduler-55f5ccd6df-vls9l                        1/1     Running   0          8m20s

Success!

We can also get the pxctl status. In this case, I would like to run the command directly from the pod, so I will create an alias using the worst bit of bash hacking known to mankind (any help would be appreciated; one possible cleanup follows the output below):

alias pxctl="kubectl exec $(kubectl get pods -n kube-system | awk '/px-cluster/ {print $1}' | head -n 1) -n kube-system -- /opt/pwx/bin/pxctl"
pxctl status
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: f3c9991f-9cdb-43c7-9d39-36aa388c5695
        IP: 10.0.1.211
        Local Storage Pool: 1 pool
        POOL    IO_PRIORITY     RAID_LEVEL      USABLE  USED    STATUS  ZONE    REGION
        0       HIGH            raid0           42 GiB  2.4 GiB Online  default default
        Local Storage Devices: 1 device
        Device  Path            Media Type              Size            Last-Scan
        0:1     /dev/sdb        STORAGE_MEDIUM_MAGNETIC 42 GiB          27 Jul 22 20:25 UTC
        total                   -                       42 GiB
        Cache Devices:
         * No cache devices
        Kvdb Device:
        Device Path     Size
        /dev/sdc        32 GiB
         * Internal kvdb on this node is using this dedicated kvdb device to store its data.
Cluster Summary
        Cluster ID: px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c
        Cluster UUID: 73368237-8d36-4c23-ab88-47a3002d13cf
        Scheduler: kubernetes
        Nodes: 3 node(s) with storage (3 online)
        IP              ID                                      SchedulerNodeName       Auth            StorageNode     Used    Capacity        Status  StorageStatus        Version         Kernel                          OS
        10.0.1.211      f3c9991f-9cdb-43c7-9d39-36aa388c5695    oc1-g7nvr-worker-2rt6s  Disabled        Yes             2.4 GiB 42 GiB          Online  Up (This node)       2.11.1-3a5f406  4.18.0-305.49.1.el8_4.x86_64    Red Hat Enterprise Linux CoreOS 410.84.202206212304-0 (Ootpa)
        10.0.1.210      cfb2be04-9291-4222-8df6-17b308497af8    oc1-g7nvr-worker-cwxdm  Disabled        Yes             2.4 GiB 42 GiB          Online  Up  2.11.1-3a5f406   4.18.0-305.49.1.el8_4.x86_64    Red Hat Enterprise Linux CoreOS 410.84.202206212304-0 (Ootpa)
        10.0.1.213      5a6d2c8b-a295-4fb2-a831-c90f525011e8    oc1-g7nvr-worker-27vkp  Disabled        Yes             2.4 GiB 42 GiB          Online  Up  2.11.1-3a5f406   4.18.0-305.49.1.el8_4.x86_64    Red Hat Enterprise Linux CoreOS 410.84.202206212304-0 (Ootpa)
Global Storage Pool
        Total Used      :  7.1 GiB
        Total Capacity  :  126 GiB
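On the "any help would be appreciated" front, a shell function is a little sturdier than the alias, since it resolves the pod name at call time rather than when the alias is defined, so it survives pod restarts. A sketch, assuming the Portworx pods carry the usual name=portworx label:

# Run pxctl inside the first Portworx pod; the pod is looked up on every call.
pxctl() {
  local pod
  pod=$(kubectl -n kube-system get pods -l name=portworx \
        -o jsonpath='{.items[0].metadata.name}')
  kubectl -n kube-system exec "$pod" -- /opt/pwx/bin/pxctl "$@"
}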

For the next bit of housekeeping, I want to grab a kubeconfig so I can add this cluster into PX Backup. Because of the black magic that happens when the oc command logs in, I'm going to export the kubeconfig, which looks like this:

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://api.oc1.lab.local:6443
  name: api-oc1-lab-local:6443
contexts:
- context:
    cluster: api-oc1-lab-local:6443
    namespace: default
    user: kube:admin/api-oc1-lab-local:6443
  name: default/api-oc1-lab-local:6443/kube:admin
current-context: default/api-oc1-lab-local:6443/kube:admin
kind: Config
preferences: {}
users:
- name: kube:admin/api-oc1-lab-local:6443
  user:
    token: REDACTED

Notice that the token above is redacted; you will need to add your own token from oc when pasting the kubeconfig into PX Backup.
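For anyone wanting to generate a flattened file like that themselves, something along these lines should do it (the exact flags are my assumption rather than what I originally ran; --raw keeps kubectl from masking the token):

# Export the current context (the one oc login just created) as a
# self-contained kubeconfig, token included.
kubectl config view --minify --flatten --raw > oc1-kubeconfig.yaml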

And as promised, the spec I used to install:

# SOURCE: https://install.portworx.com/?operator=true&mc=false&kbver=&b=true&kd=type%3Dthin%2Csize%3D32&vsp=true&vc=vcenter.lab.local&vcp=443&ds=esx2-local3&s=%22type%3Dthin%2Csize%3D42%22&c=px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c&osft=true&stork=true&csi=true&mon=true&tel=false&st=k8s&promop=true
kind: StorageCluster
apiVersion: core.libopenstorage.org/v1
metadata:
  name: px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c
  namespace: kube-system
  annotations:
    portworx.io/install-source: "https://install.portworx.com/?operator=true&mc=false&kbver=&b=true&kd=type%3Dthin%2Csize%3D32&vsp=true&vc=vcenter.lab.local&vcp=443&ds=esx2-local3&s=%22type%3Dthin%2Csize%3D42%22&c=px-cluster-f51bdd65-f8d1-4782-965f-2f9504024d5c&osft=true&stork=true&csi=true&mon=true&tel=false&st=k8s&promop=true"
    portworx.io/is-openshift: "true"
spec:
  image: portworx/oci-monitor:2.11.1
  imagePullPolicy: Always
  kvdb:
    internal: true
  cloudStorage:
    deviceSpecs:
    - type=thin,size=42
    kvdbDeviceSpec: type=thin,size=32
  secretsProvider: k8s
  stork:
    enabled: true
    args:
      webhook-controller: "true"
  autopilot:
    enabled: true
  csi:
    enabled: true
  monitoring:
    prometheus:
      enabled: true
      exportMetrics: true
  env:
  - name: VSPHERE_INSECURE
    value: "true"
  - name: VSPHERE_USER
    valueFrom:
      secretKeyRef:
        name: px-vsphere-secret
        key: VSPHERE_USER
  - name: VSPHERE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: px-vsphere-secret
        key: VSPHERE_PASSWORD
  - name: VSPHERE_VCENTER
    value: "vcenter.lab.local"
  - name: VSPHERE_VCENTER_PORT
    value: "443"
  - name: VSPHERE_DATASTORE_PREFIX
    value: "esx2-local4"
  - name: VSPHERE_INSTALL_MODE
    value: "shared"