Automating Ingress whitelists with Plex

I recently needed to tackle a problem for one of my users. I employ whitelists for more sensitive services, which include some Dropbox-like functionality. One of my users was unable to use a VPN, and their IP address kept rotating.

How can I update nginx whitelists as well as firewall rules automatically (and maybe somewhat safely)? Read on for my latest crime against Kubernetes.

How is access handled with Synology Drive?

Synology Drive in my environment has two components. The first is the web interface, which is front-ended by a Kubernetes ingress with an associated service and endpoint (because the actual web interface is, of course, on the Synology). The basic configuration looks like this:

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-synology
  namespace: externalsvc
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod-dns01
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,136.226.0.0/16,67.183.150.241/32,71.212.140.237/32,71.212.91.169/32,73.42.224.105/32,76.22.86.230/32"
spec:
...

The important line is the nginx.ingress.kubernetes.io/whitelist-source-range annotation, which controls who can connect. Clients whose source IP is not in that list get a 403 error.
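A quick way to see the annotation doing its job: from an address outside the whitelist, nginx rejects the request before it ever reaches the Synology. The hostname below is a placeholder for your actual ingress host.

curl -s -o /dev/null -w '%{http_code}\n' https://drive.example.org/
# 403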

The second piece is a firewall rule to allow access to 6690, which is the TCP port that Synology Drive uses for transmitting data.

Luckily, my firewall is configured by a script that flushes all rules and then rebuilds them as a series of iptables commands (I am that old). An example rule looks like this:

iptables -A FORWARD -s "$SRC" -d "$HOST_NAS01" -p tcp --dport "$PORT" -j ACCEPT

Reading the latest IP address from Tautulli

Because my user sometimes watches videos on Plex, I have a log of some metadata from the stream. This includes the IP address. It turns out Tautulli has a decent API I could use.

curl -s "https://tautulli.ccrow.org/api/v2?apikey=<TAUTULLI_API_KEY>&cmd=get_history&user=<USER>&length=1" \
    | jq -r '.response.data.data[0].ip_address // empty'

The above curl command uses the API to run the get_history command for a given user, which returns a JSON payload. The length parameter ensures only a single item comes back, which I opted for instead of parsing all of the user’s watch history entries locally. We pipe that into jq to extract the IP address and return nothing if it isn’t found, which is helpful for our bash script.

The goal of our script is to produce a text file that contains one line per CIDR address that we can use to update our firewall rule, as well as the ingress whitelist. We need to do a few things to the result, like deduplicating the CIDR addresses. There seems to be a shell command for everything these days:

# Loop over users and fetch most recent IP
for user in "${USERS[@]}"; do
  ip=$(curl -s "${TAUTULLI_URL}?apikey=${APIKEY}&cmd=get_history&user=${user}&length=1" \
    | jq -r '.response.data.data[0].ip_address // empty')

  if [[ -n "$ip" ]]; then
    echo "${ip}/32" >> "$CIDR_FILE"
  else
    echo "No IP found for $user" >&2
  fi
done

### Deduplicate and sort
sort -u "$CIDR_FILE" -o "$CIDR_FILE"

echo "CIDR list built at $CIDR_FILE"

Updating our Ingress whitelist

We can then loop over a list of ingress YAML definitions and update the whitelist annotation in each:

# --- Update ingress files ---

# Create a comma-separated string of CIDRs
CIDR_LIST=$(paste -sd, "$CIDR_FILE")

for file in "${INGRESS_FILES[@]}"; do
  if [[ -f "$file" ]]; then
    sed -i "s#nginx.ingress.kubernetes.io/whitelist-source-range:.*#nginx.ingress.kubernetes.io/whitelist-source-range: \"$CIDR_LIST\"#" "$file"
    echo "Updated whitelist-source-range in $file"
  else
    echo "File not found: $file"
  fi
done

Why use sed? Some of my YAML files contain multiple manifests delimited by ---, and I was not very uniform about formatting them. sed is a simple way to find and replace the existing string.

We then simply check our changes into git for ArgoCD to sweep up and apply.

Updating our firewall rules

I mentioned that my firewall is simply a script that is run over SSH. I ensured that our $CIDR_FILE is copied alongside the script and updated my iptables script to include a simple loop:

### Clients I allow to connect to synology drive
CIDR_FILE="cidrs.txt"

# --- Ports to allow ---
PORTS=(6690 6281)  # You can adjust per requirement

# --- Apply iptables rules ---
if [[ ! -f "$CIDR_FILE" ]]; then
  echo "CIDR file not found: $CIDR_FILE"
  exit 1
fi

while IFS= read -r SRC; do
  for PORT in "${PORTS[@]}"; do
    iptables -A FORWARD -s "$SRC" -d "$HOST_NAS01" -p tcp --dport "$PORT" -j ACCEPT
    echo "Added rule: $SRC -> $HOST_NAS01 port $PORT"
  done
done < "$CIDR_FILE"

The above appends two rules per client CIDR to allow TCP ports 6690 and 6281. We read the file line by line into the SRC variable, then loop through the PORTS array to add each rule. This implementation will look very different depending on your firewall.
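A quick way to spot-check that the rules landed after the script runs:

iptables -S FORWARD | grep -E '6690|6281'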

For those who want to see the update script (which is, sadly, just triggered by a cron job) in all of its glory:

#!/usr/bin/env bash
set -euo pipefail

### --- Config ---
source /root/secrets.sh
APIKEY=$TAUTULLI_API_TOKEN
TAUTULLI_URL="https://tautulli.ccrow.org/api/v2"

CIDR_FILE="$HOME/personal/homelab/tautulli-api/cidrs.txt"
FIREWALL_DIR="$HOME/personal/firewall"

# Static CIDRs
STATIC_CIDRS=(
  "136.226.0.0/16"
  "10.0.0.0/8"
)

# Usernames to query
USERS=(
  "User1"
  "User2"
)

# GitOps repo base directory
BASEDIR="$HOME/personal/gitops-cd"
INGRESS_FILES=(
  "$BASEDIR/manifests/externalsvc/synology.yaml"
  "$BASEDIR/manifests/kiwix/kiwix.yaml"
)

### --- Build CIDR file ---

# Add static CIDRs
for cidr in "${STATIC_CIDRS[@]}"; do
  echo "$cidr" >> "$CIDR_FILE"
done

# Loop over users and fetch most recent IP
for user in "${USERS[@]}"; do
  ip=$(curl -s "${TAUTULLI_URL}?apikey=${APIKEY}&cmd=get_history&user=${user}&length=1" \
    | jq -r '.response.data.data[0].ip_address // empty')

  if [[ -n "$ip" ]]; then
    echo "${ip}/32" >> "$CIDR_FILE"
  else
    echo "No IP found for $user" >&2
  fi
done

### Deduplicate and sort
sort -u "$CIDR_FILE" -o "$CIDR_FILE"

echo "CIDR list built at $CIDR_FILE"

# --- Update ingress files ---

# Create a comma-separated string of CIDRs
CIDR_LIST=$(paste -sd, "$CIDR_FILE")

for file in "${INGRESS_FILES[@]}"; do
  if [[ -f "$file" ]]; then
    sed -i "s#nginx.ingress.kubernetes.io/whitelist-source-range:.*#nginx.ingress.kubernetes.io/whitelist-source-range: \"$CIDR_LIST\"#" "$file"
    echo "Updated whitelist-source-range in $file"
  else
    echo "File not found: $file"
  fi
done

echo "updating git in 10 sec"
sleep 10

# --- Git commit changes ---
cd "$BASEDIR"
#git pull || { echo "ERROR: can't pull"; exit 1; }
if [[ -n "$(git status --porcelain)" ]]; then
  git add .
  git commit -m "automated whitelist update"
  echo "Changes committed in $BASEDIR"
else
  echo "No changes to commit in $BASEDIR"
fi

git push

echo "updating firewall in 10 sec"
sleep 10
cd "$FIREWALL_DIR"
./install_fw2.sh

Never underestimate the work ethic of a lazy system administrator!

The NMState Operator and Policy based routes

I recently needed to have containers egress through a different public IP due to some issues with archiving YouTube videos (they don’t seem to want you to do that, for some reason).

I’m always looking for an excuse to put on my network administrator hat, so I grabbed some coffee and got to work. What resulted was perhaps the craziest Kubernetes and networking rabbit hole I have been down in a while. I won’t spoil the actual solution (that will be for a future post), but I did learn a neat trick with the NMState operator after some searching and some reverse engineering.

The NMState operator configures networking using NetworkManager on Kubernetes clusters. It is declarative, and allows the configuration of interfaces, sub-interfaces and bridges. This cuts down on management of the actual Linux endpoints, and is all but required for immutable distros like CoreOS (which was my first exposure to the operator).

For example, if I want to configure a VLAN sub-interface and create a bridge interface that is suitable for virtual machines, I can apply the following:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: example-config
spec:
  desiredState:
    interfaces:
    - name: enp72s0f3.10
      description: VLAN 10 on enp72s0f3
      type: vlan
      state: up
      vlan:
        id: 10
        base-iface: enp72s0f3


    - name: br10
      description: Linux bridge with enp72s0f3.10 as a port
      type: linux-bridge
      state: up
      ipv4:
        enabled: true
        dhcp: false
        address:
        - ip: 10.0.10.10
          prefix-length: 24
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: enp72s0f3.10

The above creates a VLAN sub-interface on enp72s0f3 and a new bridge interface called br10. The result is exactly what we would expect if we had created the VLAN and bridge interfaces by hand, without any of the pesky nmtui, Ansible, or tool-of-choice steps.

server ~ > ip addr show enp72s0f3.10
444: enp72s0f3.10@enp72s0f3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br10 state UP group default qlen 1000
    link/ether f4:ce:46:a5:c0:d3 brd ff:ff:ff:ff:ff:ff

Policy-Based Routing

Now, for my initial problem, I needed to send packets down a different route, not based on their destination, but based on their source (or other criteria). This, in essence, is what policy-based routing is for. Linux allows us to create separate routing tables and assign interfaces to them.

Keep in mind that these are separate routing tables, so none of the entries one takes for granted on a Linux system get populated automatically. Just take a look at what I need to do on my firewall to create and assign a routing table:

    ip route add default via ${WAN1_GW} dev ${WAN1_IF} table lab
    ip route add ${CLIENT_NET} dev ${CLIENT_IF} table lab
    ip route add ${STORAGE_NET} dev ${STORAGE_IF} table lab
    ip route add ${K8SPROD_NET} dev ${K8SPROD_IF} table lab
    ip route add ${VMPROD_NET} dev ${VMPROD_IF} table lab
    ip route add ${VMLAB_NET} dev ${VMLAB_IF} table lab
    ip route add ${CLIENT2_NET} dev ${CLIENT2_IF} table lab
    ip route add ${IOT_NET} dev ${IOT_IF} table lab
    ip route add ${GUEST_NET} dev ${GUEST_IF} table lab
    ip route add ${ADM_NET} dev ${ADM_IF} table lab
    ip route add ${VPN_NET} dev ${VPN_IF} table lab
    ip route add ${VMLAB2_NET} dev ${VMLAB2_IF} table lab
    ip route add ${SECURE_CLIENT_NET} dev ${SECURE_CLIENT_IF} table lab


    ip rule add from ${VMLAB_NET} dev ${VMLAB_IF} lookup lab priority 100
    ip rule add from ${VMLAB2_NET} dev ${VMLAB2_IF} lookup lab priority 100

A routing table is a series of routes, paired with a set of rules that determine which traffic will use it. In the case of the above, any traffic from ${VMLAB_NET} and ${VMLAB2_NET} will use the lab routing table. This allows the lab traffic to exit through a different default gateway than the rest of my traffic. With the proper NAT rules, my lab traffic now leaves using a different interface and gateway on the firewall.

Note that in the above example from my firewall, I needed to add all of the routes, even the directly connected ones.
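A couple of quick checks I find handy after (re)building the table; the table name comes from the script above:

ip route show table lab    # the default route and every connected network should be listed
ip rule show | grep lab    # the source-based rules that select the table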

NMState Operator and Policy-based routes

We can achieve the same result using the NMstate operator. Admittedly, this has less value on a Kubernetes node, but when combined with Cilium’s egress policies, we can really do some cool stuff.

Here is the example from my NodeNetworkConfigurationPolicy:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: example-config
spec:
  desiredState:
...
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: 10.0.10.1
        next-hop-interface: br10
        table-id: 101
      - destination: 10.0.10.0/24
        next-hop-interface: br10
        table-id: 101        
    route-rules:
      config:
      - ip-from: 10.0.10.10/32
        route-table: 101

The above creates a new route table with an ID of 101 and adds a couple of routes. It also creates a rule matching any traffic that originates from the 10.0.10.10 IP address, which is the address assigned to br10 in the earlier policy.

The net result is the following:

server ~ > ip route show table 101
default via 10.0.10.1 dev br10 proto static 
10.0.10.0/24 dev br10 proto static scope link 

server ~ > ip rule show
...
30000:     from 10.0.10.10 iif br10 lookup 101 proto static
...

server ~ > curl ifconfig.me
131.191.104.138
server ~ > curl --interface br10 ifconfig.me
131.191.55.228

The path my packets take now depends on the source address (and interface) they were assigned.

The other valid route-rule fields in the API are:

  • family
  • state
  • ip-from
  • ip-to
  • priority
  • route-table
  • fwmark
  • fwmask
  • action
  • iif
  • suppress-prefix-length
  • suppress_prefixlength

I have been fighting with the egress policy, but I figured a quick post to document how to create route rules would not hurt, as I didn’t find much in the way of documentation.

I’m still investigating, so drop me a line if you have an addition or correction.

Running a Valheim server with Password Rotation on Kubernetes

It has been a long while since I have posted, mostly because work had enough fun projects that my lab became a sort of second job. That isn’t to say I wasn’t tinkering, just that most of my time was going to the Kubernetes equivalent of weed pulling (provided that moving off of VMware to KubeVirt counts as weed pulling). I don’t have the energy to document that journey just yet.

But then a bolt of inspiration struck after the kids wrecked a portion of our majestic mountain castle:


“If only you could change the password automatically”

In fairness to the kids “Dave” and “Normol the Red”… they only led the stone golem to the base. After a couple of play sessions of finger pointing and Comic Book Guy-style bans (there is nothing funnier than a kid building a village just to ban the adults and his brother), I decided to get to work. This sounds like a job for someone with more time than sense…

Building on lloesche’s excellent work

None of this would be possible without standing on the shoulders of the giant that is lloesche and his excellent Valheim server container. Seriously, star his repo and buy him a coffee and a puppy.

Prerequisites

Besides the obvious Kubernetes cluster, we are going to need a few things for our server to work correctly:

First, we are going to need a place to store persistent data. I currently use the local-path-provisioner from the SUSE Rancher folks, which simply creates a local directory for every new PVC/PV. You are welcome to use anything here.

Second, we need a load balancer (or you can configure a NodePort if you would like). My lab uses MetalLB.

Deploy Valheim to Kubernetes

I don’t plan to show all of the YAML required to get this going, so instead, let’s download the repo from github.com:

git clone git@github.com:ccrow42/valheim-k8s-server.git
cd valheim-k8s-server

We can apply these files in order to deploy the Valheim server container:

ccrow--MacBookPro18:deploy ccrow$ ls deploy
01-namespace.yaml
02-valheim-pvc.yaml
03-valheim-deployment.yaml
04-valheim-service.yaml

k apply -f deploy/.

This will apply all of the manifests required to get Valheim running.

You may wish to change storageClassName in the 02-valheim-pvc.yaml file:

...
spec:
  storageClassName: local-path
...

and the service configuration in 04-valheim-service.yaml if you are not using a load balancer:

...
spec:
  ports:
  - name: gameport
    nodePort: 30742
    port: 2456
    protocol: UDP
  - name: queryport
    nodePort: 32422
    port: 2457
    protocol: UDP
  selector:
    app: valheim-server
  type: LoadBalancer
...

The type can easily be changed to NodePort, as shown in the sketch below. Take note of the ports required for Valheim to operate.
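If you would rather not run MetalLB, you can either edit the type in 04-valheim-service.yaml before applying, or patch the live service. A quick sketch of the latter (namespace and service name match the manifests above; adjust if yours differ):

kubectl -n valheim patch svc valheim-server -p '{"spec":{"type":"NodePort"}}'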

We now need to create a secret to make this work correctly. The secret reference can be found in the 03-valheim-deployment.yaml file (we don’t need to modify anything, but it is important to know the name and key of our secret):

...
spec:
  template:
    spec:
      containers:
      - name: valheim-server
        env:
        - name: SERVER_PASS
          valueFrom:
            secretKeyRef:
              name: valheim-pass
              key: SERVER_PASS

Now let’s create the secret. Don’t worry about what you set the secret to as the entire point of this article is to be able to cycle the password automatically:

k create secret generic -n valheim valheim-pass --from-literal=SERVER_PASS=lumberjack

Now let’s check on our service and port information so we know how to configure our firewall:

ccrow--MacBookPro18:deploy ccrow$ k get svc -n valheim
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
valheim-server   LoadBalancer   10.43.14.176   172.42.3.32     2456:30742/UDP,2457:32422/UDP   166d

We would of course forward 2456-2457 UDP on our firewall to 172.42.3.32. We can stop now if all we want to do is build a Valheim server on Kubernetes, but you’re here for the shenanigans.

Rotating the Valheim server password

How to rotate this password was a hotly debated topic. Should I use a gitops pipeline to update a sealed secret and make sure ArgoCD bounces the pod? This seems like a lot of work to persist secrets in a git repo (I’m not a fan of this, even with sealed secrets). Besides, our deployment shouldn’t be hard-coding a password in the first place. Although the real reason is I suck at writing gitlab actions.

Should I use an external secrets provider? This is probably the correct way: configure a provider and tell it to cycle the password. That did bring up the question of how to get the Valheim pod to notice the change and restart (I suspect the answer is a sidecar). Either way, I don’t have an external secrets provider configured… yet.

At the end of the day, any solution is going to require some custom scripting so that I can notify people of the password change. I decided to go with a Kubernetes CronJob and a custom image to do the work.

I decided to use a simple list of dictionary words for the password. I also decided to use Discord for notifications to a private channel.

It is funny how often things devolve to bash…

Building a custom image

Our custom image is going to have a couple of helpful tools installed to do a password rotation. I was also lazy and baked the word list into the image itself. I have also added a little script to notify Discord of the password change. It is a generic script that reads the Discord webhook URL from an environment variable (which we will feed from a secret). If you haven’t generated a Discord webhook yet, see these instructions.

Let’s take a look at a couple more files in our repo:

#!/usr/bin/env bash

set -ex

MESSAGE="$*"

# Safely encode message and send
curl -k -X POST -H "Content-Type: application/json" \
  -d "$(jq -nc --arg content "$MESSAGE" '{content: $content}')" \
  "$DISCORD_WEBHOOK"

This file will be included in our image. Let’s take a look at the Dockerfile:

FROM debian:bookworm-slim

RUN apt-get update && \
    apt-get install -y --no-install-recommends bash jq curl python3 ffmpeg gettext-base && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
COPY words.txt .
COPY notify_discord.sh /usr/local/bin
COPY update_youtube_channels.sh /usr/local/bin
RUN chmod +x /usr/local/bin/notify_discord.sh
RUN curl -k -LO "https://dl.k8s.io/release/$(curl -k -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
RUN mv kubectl /usr/local/bin
RUN curl -k -L https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp -o /usr/local/bin/yt-dlp
RUN chmod a+rx /usr/local/bin/yt-dlp
RUN chmod +x /usr/local/bin/kubectl

# Set bash as default shell
SHELL ["/bin/bash", "-c"]

We can now build and push our image:

docker build -t registry.lan.ccrow.org/debian-custom:latest debian-custom/.
docker push registry.lan.ccrow.org/debian-custom:latest

Of course you will need to change the location where you are storing your image. My registry is not public.

Finally, let’s create our webhook secret:

k -n valheim create secret generic discord-password-url --from-literal=DISCORD_WEBHOOK=https://discord.com/api/webhooks/aVeryLongStringofThings

Be sure to use the webhook URL you created earlier.

Creating the CronJob

Our cronjob is really the glue that makes this whole thing work. It starts by updating the password in the secret we created earlier. It then restarts the Valheim deployment (I don’t bother to check whether folks are in the server; if you are still playing at 4am, then this is your hint to go to bed). Lastly, it posts the password to the Discord channel using the URL we stored in the above secret.

Because we are messing with Kubernetes objects from our container, we need to create a service account with the proper permissions. Take a moment to review the configuration in the valheim-password-rotation/service-account.yaml file and apply it:

k apply -f  valheim-password-rotation/service-account.yaml
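If you just want the gist without opening the file: the rotation job needs to read and patch the password secret and restart the deployment. A rough imperative equivalent of those permissions might look like this (a sketch based on the names used above, not the contents of the actual manifest):

kubectl -n valheim create serviceaccount valheim-password-rotator
kubectl -n valheim create role valheim-password-rotator \
  --verb=get,list,create,patch,update --resource=secrets,deployments.apps
kubectl -n valheim create rolebinding valheim-password-rotator \
  --role=valheim-password-rotator --serviceaccount=valheim:valheim-password-rotator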

Now let’s take a look at the CronJob itself:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: valheim-password-rotate
  namespace: valheim
spec:
  schedule: "0 4 * * *"
  timeZone: "US/Pacific"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: valheim-password-rotator
          restartPolicy: OnFailure
          containers:
          - name: rotate-password
            image: registry.lan.ccrow.org/debian-custom:latest
            imagePullPolicy: Always
            env:
            - name: SECRET_NAME
              value: valheim-pass
            - name: DISCORD_WEBHOOK
              valueFrom:
                secretKeyRef:
                  name: discord-password-url
                  key: DISCORD_WEBHOOK
            - name: SECRET_KEY
              value: SERVER_PASS
            - name: DEPLOYMENT_NAME
              value: valheim-server
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            command:
            - /bin/bash
            - -c
            - |
              set -ex
              PASSWORD=$(shuf -n 1 words.txt | tr -d '\r\n')

              echo "New password: $PASSWORD"

              kubectl get secret "$SECRET_NAME" -n "$NAMESPACE" -o json | \
                jq --arg pw "$(echo -n "$PASSWORD" | base64)" \
                   '.data[$ENV.SECRET_KEY] = $pw' | \
                kubectl apply -f -

              notify_discord.sh "new valheim server password is $PASSWORD"

              kubectl rollout restart deployment/"$DEPLOYMENT_NAME" -n "$NAMESPACE"

We pass a number of configuration variables, which you shouldn’t need to change if you have been following this guide.

Be sure to update the image field in the container spec to point at your own registry.

The PASSWORD=$(shuf -n 1 words.txt | tr -d '\r\n') line is where the magic happens. This logic would be easy to update if you would like a different password policy; see the sketch below.
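For example, a slightly longer passphrase could glue two random words together with a two-digit number, assuming the same words.txt baked into the image:

# Hypothetical alternative policy: two words plus a random two-digit number, e.g. "otter-granite-42"
PASSWORD="$(shuf -n 2 words.txt | tr -d '\r' | paste -sd- -)-$((RANDOM % 90 + 10))"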

Once you are satisfied, let’s apply the manifests and run a test job:

k apply -f valheim-password-rotation/rotate-password-cron.yaml
kubectl create job --from=cronjob/valheim-password-rotate valheim-password-rotate-test -n valheim

sleep 60

kubectl logs job/valheim-password-rotate-test -n valheim -f

Wrapping up

If all went well, you should see a Discord notification trigger:

I swear I did not plan that password…

This was a fun little project that I could wrap my walnut around given the arrival of our new baby (which also explains the numerous spelling and grammar mistakes).

One more fun idea… Don’t give your kids the new password until they have used the previous password in a sentence:

And there you have it. May your fires be warm and your homes unmolested by children!

Migrating a VM from vSphere to Openshift Virtualization

Portworx is a solution for data management on Kubernetes, but one area that surprised me was that our support extended to the KubeVirt project. KubeVirt is funded by Red Hat, and a number of my customers have asked about the feasibility of using Portworx for virtualization.

KubeVirt is an open-source project, and although I have configured it on generic Kubernetes installations, Red Hat OpenShift has the best integration I have found so far. For this article I want to chronicle the first part of my journey: how would I move a virtual machine from my VMware environment to an OpenShift Virtualization environment?

I started with an OpenShift cluster that was running on virtual machines in my environment (using virtualization passthrough). I then installed Portworx (more on the why of that later).

VMware Migration Prerequisites

In order to convert VMware VMs, we need to do two things. First, we need to capture the SHA1 fingerprint of our vCenter certificate. Run the following to get it; you will need it later.

echo | openssl s_client -connect 10.0.1.10:443 | openssl x509 -noout -fingerprint -sha1
...
SHA1 Fingerprint=EF:82:09:1D:C2:69:80:F3:A3:00:3B:53:F6:EC:86:E3:8C:98:83:20

Next, we will need to build a quick container that contains the Virtual Disk Development Toolkit (VDDK). Ensure you have docker or podman (or something similar) and are connected to a registry. Download and extract the VDDK:

tar zxfv ./VMware-vix-disklib-7.0.3-20134304.x86_64.tar.gz

Create a new file called Dockerfile in the same directory that you extracted the VDDK into, and place the following content in the file:

FROM registry.access.redhat.com/ubi8/ubi-minimal
USER 1001
COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
RUN mkdir -p /opt
ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]

Now let’s build and push our new container to a repo:

docker build . -t ccrow42/vddk:latest
docker push ccrow42/vddk:latest

Obviously, replace the tag with your own repo (or hell, use my uploaded image and save yourself some steps!)

Installing the Migration Toolkit

I should mention that this article is not designed to be a step-by-step tutorial but to simply document the overview and resources I have used.

The first step was to read through the documentation here. (Just kidding, but I wanted to cite my sources.)

I then installed the operator. This installation will prompt you to install the forklift controller.

Reload your web interface and you will see a migration section on the menu. Let’s head over to the virtualization providers. Be sure to change your project to openshift-mtv (if that is indeed where you installed the operator):

Let’s connect Openshift to VMware by clicking the Create Provider button:

Last, we simply need to create a migration plan: head over to the Plans for Migration section and select Create Plan.

This process is straightforward, just select the source and destination. If you are not familiar with Portworx, just use the px-db storage class for now.

There are two ways of importing VMs: a cold migration, or a warm migration (which requires Changed Block Tracking, or CBT, on the source VMs).
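If you plan on warm migrations and have govc handy, I believe CBT can be toggled by adding the usual ctkEnabled keys to the VM's advanced settings while it is powered off. Treat the following as a sketch and double-check against the VMware documentation; the VM name and disk key are placeholders:

govc vm.change -vm my-source-vm -e ctkEnabled=true -e scsi0:0.ctkEnabled=true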

Although this covered the migration steps, there are a few considerations around storage and networking that I will cover in a later article.

Why would I use Portworx Enterprise for this?

Portworx provides the same benefits to OpenShift Virtualization as it does to other container workloads. Two are very important:

  • Migration and DR: The ability to take a VM and move it to a new cluster, or to create DR plans for the VM.
  • Live migration: because Portworx supports RWX access on block devices, live migration of virtual machines is possible.

How software development has changed in my life

I didn’t notice myself getting older; it snuck up on me in many ways. Similar to watching children grow up, it’s a slow and subtle process.

However, unlike observing children grow, my return to light development work was quite shocking. Like many others, I started with BASIC on MS DOS, then moved on to Perl, briefly entertained the idea of becoming a C++ and Java developer (a quick glance at my profile will reveal how well that worked out for me), and eventually settled into the gentle scripting of a sysadmin. But throughout my journey, I did acquire one trait: I became lazy.

Trigger Warning for Developers: Prepare for Criminal Inefficiency that may cause an aneurysm.

In the past, when I used to develop, I would spend time setting up my Very Special* brand laptop with the necessary Perl modules. I would build virtual machines to replicate production environments and data services. And then, due to several misplaced semicolons, I would find myself mashing the save button 50 times an hour. When I started using containers, I quickly retooled my workflows to be more container-based. It was great to have every module and customization be immutable and packaged. But now, every time I mashed that save button, I had to go through the following steps:

  1. Check in my code to github.
  2. Download the code on my docker host (don’t ask me why).
  3. Build and upload the image to dockerhub.
  4. Update my deployment to incorporate the new image (in a testing environment, of course!).
  5. Only to realize that I missed the Python equivalent of a semi-colon (which, I suppose, is a space).

The above process was maddening. However, I learned two crucial things when I attended a developer user group hosted by DevZero:

  1. VS Code has an SSH plugin
  2. There are tools available for Kubernetes service insertion.

Remote Development with VS Code

Remote Development with VS Code became a game-changer for me. I had a Linux host with all the necessary tools (kubectl, pxctl, etc.) installed and ready. I had been using this host for Kubernetes administration, but when all you have is VI (which, I must add, would make my father roll over in his grave, by which I mean his nice rambler in the country, as I type this), any complex change can be daunting.

For more information on using VS Code with SSH, refer to: https://code.visualstudio.com/docs/remote/ssh. In short, after installing the plugin, press F1 and run:

  • Remote-SSH: Add New SSH host
  • Remote-SSH: Connect to SSH host

Once the connection is complete, you will be able to navigate your remote server from the file browser, use git remotely, and use the remote terminal.

Of course, since many programs require a web browser for testing, remote-ssh also facilitates port tunneling through the SSH connection (similar to the “-L” option in SSH for experienced users). Whenever a program sets up a new port on my remote machine, a prompt appears, enabling me to forward the port and access it from my local laptop.

This only addresses the initial aspect of my problem. The subsequent issue is that I have a tendency to excessively press the save button while attempting to achieve proper spacing in Python (or nowadays, when I ask ChatGPT to write a Python script for me). Additionally, the program I was working on required a connection to MongoDB, which was running in my Kubernetes cluster. I could run Mongo locally, but it wouldn’t have a copy of my production data.

Telepresence – and other tools like it

Once again, I am fairly sure DevZero told me about this tool (or at least the concept): Telepresence.

Telepresence establishes a connection to a Kubernetes cluster, enabling connections to Kubernetes services and service insertion, which permits other Kubernetes objects to interact with my local program. This significantly simplifies the process of debugging.

kubectl config use-context MyStagingCluster
telepresence helm install
telepresence connect
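A couple of sanity checks after connecting never hurt. The MongoDB service name and namespace below are placeholders for whatever your app actually talks to:

telepresence status                                # confirm the connection to the traffic manager
nc -zv mongodb.default.svc.cluster.local 27017     # in-cluster DNS names should now resolve locally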

And my Flask app has tested a connection to MongoDB successfully! To summarize:

  • I did the above from my laptop (which ONLY has VSCode installed).
  • I was connected to a Linux server in my house with all of the development tools I use.
  • My Linux server ran the code and was connected to an Azure AKS staging cluster that was running a copy of my production application.
  • I then connected to my Flask application from my web browser on my laptop, which was connected to the Linux server with a dynamic SSH tunnel, which then connected to the MongoDB instance running in Azure.

Plexstack Part 5 – Installing Radarr

There are a couple more concepts I want to cover before turning folks loose on a github repo:

  1. Instead of a hostpath, we should be using a PVC (persistent volume claim) and PV (persistent volume).
  2. What if we need to give a pod access to an existing and external dataset?

Radarr (https://radarr.video/) is a program that manages movies. It can request them using a download client, and can then rename and move them into a shared movies folder. As such, our pod will need to have access to 2 shared locations:

  1. A shared downloads folder.
  2. A shared movies folder.

NFS Configuration

We need to connect to our media repository. This could be a direct mount from the media server, or from a central NAS. In any case, our best bet is to use NFS. I won’t cover setting up the NFS server here (ping me in the comments if you get stuck), but I will cover how to connect to an NFS host.

This bit of code needs to be run from the Kubernetes node if you happen to use kubectl on a management box. If you have been following these tutorials and using a single Linux server, then feel free to ignore this paragraph.

# Install NFS client
sudo apt install nfs-common -y

# edit /etc/fstab
sudo nano /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sysvg/root during curtin installation
/dev/disk/by-id/dm-uuid-LVM-QkcnzyIuoI6q532Z4OIYgQCxWKfPFEQM11kT2U143DtREAzGzzsoDCYbD2h7Ijke / xfs defaults 0 1
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/890c138e-badd-487e-9126-4fd11181cf5c /boot xfs defaults 0 1
# /boot/efi was on /dev/sda1 during curtin installation
/dev/disk/by-uuid/6A88-778F /boot/efi vfat defaults 0 1
# /home was on /dev/sysvg/home during curtin installation
/dev/disk/by-id/dm-uuid-LVM-QkcnzyIuoI6q532Z4OIYgQCxWKfPFEQMZoJ5IYUmfVeAlOMYoeVSU3WStycNW6MX /home xfs defaults 0 1
# /opt was on /dev/sysvg/opt during curtin installation
/dev/disk/by-id/dm-uuid-LVM-QkcnzyIuoI6q532Z4OIYgQCxWKfPFEQM1Vgg9WyNh823YnysItHcwA4kc0PAzrAq /opt xfs defaults 0 1
# /tmp was on /dev/sysvg/tmp during curtin installation
/dev/disk/by-id/dm-uuid-LVM-QkcnzyIuoI6q532Z4OIYgQCxWKfPFEQMRA3d1jDZr8n9R23N2t4o1yxCyz2hiD3q /tmp xfs defaults 0 1
# /var was on /dev/sysvg/var during curtin installation
/dev/disk/by-id/dm-uuid-LVM-QkcnzyIuoI6q532Z4OIYgQCxWKfPFEQMnhsacKjBubhXMyv1tK8D3umR3mnzSjbp /var xfs defaults 0 1
# /var/log was on /dev/sysvg/log during curtin installation
/dev/disk/by-id/dm-uuid-LVM-QkcnzyIuoI6q532Z4OIYgQCxWKfPFEQM1IyfBAleLuw7m0G3UC9KNLrtmVAodTqu /var/log xfs defaults 0 1
# /var/audit was on /dev/sysvg/audit during curtin installation
/dev/disk/by-id/dm-uuid-LVM-QkcnzyIuoI6q532Z4OIYgQCxWKfPFEQMsrZUFWfY77xrwFBu3vSgbUfnJIp3AKA6 /var/audit xfs defaults 0 1
/swap.img       none    swap    sw      0       0

#added nfs mounts to the end of the file
10.0.1.8:/volume1/movies /mnt/movies nfs defaults 0 0
10.0.1.8:/volume1/downloads /mnt/downloads nfs defaults 0 0

The last two lines (the NFS mounts) were added to the end of the file. Be sure to change the IP address and export paths. Go ahead and mount the exports:

mount /mnt/movies
mount /mnt/downloads
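A quick sanity check that the exports are visible and mounted (showmount ships with the nfs-common package we just installed):

showmount -e 10.0.1.8
df -h /mnt/movies /mnt/downloads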

PVC and Radarr configuration

Second, we don’t want to use host path under most circumstances, so we need to get in the habit of using a PVC with a provisioner to manage volumes. This will effectively make our architecture much more portable in the future.

A CSI driver allows automated provisioning of storage. Storage is often external to the Kubernetes nodes, which is essential when we have a multi-node cluster. I would encourage everyone to read this article from Red Hat. The provisioner we will be using is rather simple: it creates a path on the host and stores files there. The outcome is the same as a hostPath, but the difference is how we get there. Go ahead and install the local provisioner:

# Install the provisioner
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.24/deploy/local-path-storage.yaml

# Patch the newly created storage class
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Now take a look at this manifest for Radarr (as always, a copy of this manifest is out on GitHub: https://github.com/ccrow42/plexstack):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: radarr-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-path
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: radarr-deployment
  labels:
    app: radarr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: radarr
  template:
    metadata:
      labels:
        app: radarr
    spec:
      containers:
        - name: radarr
          image: ghcr.io/linuxserver/radarr
          env:
            - name: PUID
              value: "999"
            - name: PGID
              value: "999"
          ports:
            - containerPort: 7878
          volumeMounts:
            - mountPath: /config
              name: radarr-config
            - mountPath: /downloads
              name: radarr-downloads
            - mountPath: /movies
              name: radarr-movies
      volumes:
        - name: radarr-config
          persistentVolumeClaim:
            claimName: radarr-pvc
        - name: radarr-downloads
          hostPath:
            path: /mnt/downloads
        - name: radarr-movies
          hostPath:
            path: /mnt/movies
---
kind: Service
apiVersion: v1
metadata:
  name: radarr-service
spec:
  selector:
    app: radarr
  ports:
  - protocol: TCP
    port: 7878
    targetPort: 7878
  type: LoadBalancer
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-radarr
  annotations:
    cert-manager.io/cluster-issuer: selfsigned-cluster-issuer #use a self-signed cert!
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - radarr.ccrow.local #using a local DNS entry. Radarr should not be public!
      secretName: radarr-tls
  rules:
    - host: radarr.ccrow.local 
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: radarr-service
                port:
                  number: 7878

Go through the above. At a minimum, the two radarr.ccrow.local host entries (under tls and rules) should be changed to match your DNS. You will also notice that our movies and downloads directories are under the /mnt folder.

You can connect to the service in one of two ways:

  1. LoadBalancer: run ‘kubectl get svc’ and record the IP address of the radarr-service, then connect with: http://<IPAddress>:7878
  2. Connect to the host name (provided you have a DNS entry that points to the k8s node)

That’s it!

PlexStack Part 1.6 – Installing Plex

Due to the last post getting a bit lengthy, I’m going to cover installing Plex in a separate post. However you end up with a Linux VM, you can simply log in to the box.

This is probably the worst time to tell people, but you can easily run Plex on Windows; that would not, however, allow you to run Plex on a Raspberry Pi.

Log in to your VM.

Next, we need to get the Plex installation package. Head over to plex.tv, select Linux, and click the “Choose Distribution” option.

We are now going to do something tricky: right-click on the Ubuntu Intel/AMD 64-bit or the Ubuntu ARMv8 (if you use a Raspberry Pi) and select “Copy Link”. After all, we want the software on our Linux box!

If you haven’t already, get an application called PuTTY. This will allow you to connect to a terminal on your new Linux server and, most importantly, paste commands! Launch the app:

Plug in that IP that you wrote down

And then type in your username and password at the prompt.

At the prompt, let’s download and install Plex:

#get the plexmediaserver package
wget https://downloads.plex.tv/plex-media-server-new/1.28.0.5999-97678ded3/debian/plexmediaserver_1.28.0.5999-97678ded3_amd64.deb

#install plex
sudo dpkg -i plexmediaserver_1.28.0.5999-97678ded3_amd64.deb

Keep in mind that the first time you run a command with sudo (which allows you to become an administrator for just that command) you will have to type your password in again.

You are set! Plex is done! Access it at: http://<YOURIPADDRESS>:32400/web

Getting media over is a separate task. It can be as simple as getting a drive from Costco. Consider formatting the drive on the Linux machine and transferring data using a tool like WinSCP.
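If you go the external drive route, the rough shape of it looks like the following sketch. It assumes the new drive shows up as /dev/sdb; check with lsblk first, because formatting is destructive:

lsblk                          # find the new drive, e.g. /dev/sdb
sudo mkfs.ext4 /dev/sdb        # DESTRUCTIVE: formats the whole drive
sudo mkdir -p /mnt/media
sudo mount /dev/sdb /mnt/media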

Drop a comment if you get this far and I can update the post.

PlexStack Part 1.5 – Installing Ubuntu and Plex Media Server

An earlier post sparked enough questions from folks that I figured I would write a separate article: If I just want a Plex server, how would I go about installing that?

So far, my posts have assumed that my readers have a degree of skill using Linux, and that they were able to install a Linux server fairly easily. Not everyone falls into the above category, so I figured I would write a quick post to hopefully point people in the right direction.

What do I need to set up a Linux server?

The short answer is: a place to install a Linux server. This could be any of the following:
– A Raspberry Pi
– Running as a virtual machine on your desktop (you should have a bit of RAM for this!)
– An old computer or laptop you have lying around

I will cover each of these to hopefully provide some resources.

A Raspberry Pi

Getting Linux installed on a Raspberry Pi is probably the simplest of all the above options. You will of course need a Raspberry Pi as well as a power supply and SD card (look for a bundle in the store if it is your first time doing this). You will also need a way to put Linux on the SD card for the Raspberry Pi to boot; consider something like a USB SD card adapter.

Once you have the parts, plug the SD card into the USB adapter. Download the following program: https://www.raspberrypi.com/software/. This program will download and install Raspberry Pi OS to the SD card. Launch the application, and select “Choose OS”. I would select “Raspberry Pi OS (other)” and then “Raspberry Pi OS Lite” so we don’t install a desktop. You can install a desktop later if you would like, but getting comfortable with the CLI on Linux is essential.

Next, select the SD card device and click “Write”. You can then plug in the SD card and power on the Raspberry Pi.

Running on a Virtual Machine

Because I run ESXi and VMware workstation at home, I’m going to have the least info on how to do this, but I would recommend installing VirtualBox on your PC. This will allow you to create a virtual machine:

The above is an example of a virtual “hardware” configuration

However you arrive at it, you can see that we connect a “virtual” CD/DVD drive. You can get the .ISO file here: https://ubuntu.com/download/server.

You will also need to ensure that your network type is set to “bridge” so that other computers can access the VM (and therefore, your Plex server).

Install on an old desktop or laptop

In order to install Linux on an old computer, we will need to boot from some installation media. Grab an old USB drive and download Rufus and Ubuntu.

Rufus is a tool that writes an ISO to a USB drive so you can boot your computer from the USB drive to install Linux. Keep in mind that installing Linux is DESTRUCTIVE to your old computer. Fire up Rufus and point it to your ISO file and your USB drive.

Insert the USB drive and reboot your computer (keep in mind that you may need to tell your computer to boot from the USB drive, this can usually be done by pressing F11 or F12 when the computer first powers on, but it depends on the computer).

Install Linux (Finally)

We can now run through the Linux install (if you chose a Raspberry Pi, skip this section).

  1. Pick your language
  2. Don’t bother updating the installer
  3. Write down this IP address! This is how you will get to your Plex server and SSH
  4. If you don’t know if you are running a proxy, you aren’t
  5. Use the defaults here
  6. Use the defaults here
  7. Set a computer name, username, and password. Be sure to document it!
  8. Check the box to install the SSH server

That is it. The server will reboot and you should be able to log in using a keyboard and mouse.

This post is getting long, so I’m going to save the plex install for the next post.

Deploying Rancher clusters

Update January 5th 2023

We all get older and wiser, and although the below procedure works, a co-worker asked me: “Why not just use the cloud init image?” Information and downloads can be found here.

  • Grab the OVA
  • Deploy the OVA to vSphere
  • Mark it as a template

The rest of the article continues…

After a long while of playing with templates, I finally have a working configuration that I am documenting to ensure that I don’t forget what I did.

Step 1: packer

In trying to get a usable image, I ended up using Packer, following this tutorial: https://github.com/vmware-samples/packer-examples-for-vsphere. No dice, so I made sure I had all of the packages listed here: https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template; the only missing package was growpart.

I tried prepping the template from the above, but ended up using the following script: https://github.com/David-VTUK/Rancher-Packer/blob/main/vSphere/ubuntu_2204/script.sh

# Apply updates and cleanup Apt cache

apt-get update ; apt-get -y dist-upgrade
apt-get -y autoremove
apt-get -y clean
# apt-get install docker.io -y

# Disable swap - generally recommended for K8s, but otherwise enable it for other workloads
echo "Disabling Swap"
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Reset the machine-id value. This has known to cause issues with DHCP
#
echo "Reset Machine-ID"
truncate -s 0 /etc/machine-id
rm /var/lib/dbus/machine-id
ln -s /etc/machine-id /var/lib/dbus/machine-id

# Reset any existing cloud-init state
#
echo "Reset Cloud-Init"
rm /etc/cloud/cloud.cfg.d/*.cfg
cloud-init clean -s -l

and I was off to the races… only to hit another problem.

Troubleshooting

I found the following reddit thread that was rather helpful: https://www.reddit.com/r/rancher/comments/tfxnzr/cluster_creation_works_in_rke_but_not_rke2/

export KUBECONFIG=/etc/rancher/rke2/rke2.yaml; export PATH=$PATH:/var/lib/rancher/rke2/bin
kubectl get pods -n cattle-system
kubectl logs <cattle-cluster-agent-pod> -n cattle-system

The above is an easy way to check on nodes as they come up. Keep in mind that RKE2 comes up very differently than RKE: after the cloud-init stage, the RKE2 binaries and containerd are deployed. It is helpful to be able to monitor the cattle-cluster-agent pods as they start.

The last issue I encountered was that my /var filesystem didn’t have enough space. After fixing my template I now have a running RKE2 cluster!
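If you hit the same wall, a quick check will tell you; RKE2 pulls its images and containerd state under /var/lib/rancher, so /var fills up fast:

df -h /var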

PlexStack Part 4 – Our first app: Tautulli

We are now at a point where we can build our first application that requires some persistence. We are going to start with Tautulli, an application that provides statistics about your Plex server.

We assume that you only have a single server. The state of Kubernetes storage is interesting: the easiest approach is to simply pass a host path into the pod, but that doesn’t work when you have multiple nodes. Incidentally, solving these problems for customers is what I do for my day job (Portworx Cloud Architect). More on that later.

We first need to specify a location to store configuration data. I will use /opt/plexstack/tautulli as an example.

mkdir -p /opt/plexstack/tautulli

Next, let’s take a look at the manifest to install tautulli:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tautulli-deployment
  labels:
    app: tautulli
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tautulli
  template:
    metadata:
      labels:
        app: tautulli
    spec:
      containers:
        - name: tautulli
          image: ghcr.io/linuxserver/tautulli
          env:
            - name: PUID
              value: "999"
            - name: PGID
              value: "999"
            - name: TZ
              value: "America/Los_Angeles"
          ports:
            - containerPort: 8181
          volumeMounts:
            - mountPath: /config
              name: tautulli-config
      volumes:
        - name: tautulli-config
          hostPath:
            path: /opt/plexstack/tautulli
---
kind: Service
apiVersion: v1
metadata:
  name: tautulli-service
spec:
  selector:
    app: tautulli
  ports:
  - protocol: TCP
    port: 8181
    targetPort: 8181
  type: LoadBalancer
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-tautulli
  namespace: plexstack
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - tautulli.ccrow.org
      secretName: tautulli-tls
  rules:
    - host: tautulli.ccrow.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tautulli-service
                port:
                  number: 8181

There is a lot to unpack here:

  • The first section is the deployment, which defines the application that will run; the image field specifies the container image.
  • The env block sets environment variables that configure Tautulli.
  • The volumeMounts and volumes sections map the /config directory inside the container to a host path.
  • The next section is the service, which looks for pods with an app selector of tautulli.
  • We are also going to provision a load balancer IP address to help with troubleshooting. This could be changed to ClusterIP to be internal only. After all, why go to an IP address when we can use an ingress?
  • tautulli.ccrow.org must resolve to our rancher node through the firewall (a step we already did in the last blog).

Let’s apply the manifest with:

# create the namespace
kubectl create namespace plexstack

# apply the manifest
kubectl -n plexstack apply -f tautulli.yaml

# check on the deployment
kubectl -n plexstack get all -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
pod/tautulli-deployment-b4d5485df-f28px   1/1     Running   0          45s   10.42.2.30   rke04   <none>           <none>

NAME                       TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE   SELECTOR
service/tautulli-service   LoadBalancer   10.43.36.8   10.0.1.55     8181:31154/TCP   45s   app=tautulli

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                         SELECTOR
deployment.apps/tautulli-deployment   1/1     1            1           45s   tautulli     ghcr.io/linuxserver/tautulli   app=tautulli

NAME                                            DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                         SELECTOR
replicaset.apps/tautulli-deployment-b4d5485df   1         1         1       45s   tautulli     ghcr.io/linuxserver/tautulli   app=tautulli,pod-template-hash=b4d5485df

Notice the external IP address that was created for the tautulli-service. You can connect to the app from that IP (be sure to add the 8181 port!) instead of the DNS name.

All configuration data will be stored under /opt/plexstack/tautulli on your node.

Bonus Application: SMTP

In order for Tautulli to send email, we need to set up an SMTP server. This will really show off the power of Kubernetes configurations. Take a look at this manifest:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smtp-deployment
  labels:
    app: smtp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smtp
  template:
    metadata:
      labels:
        app: smtp
    spec:
      containers:
        - name: smtp
          image: pure/smtp-relay
          env:
            - name: SMTP_HOSTNAME
              value: "mail.ccrow.org"
            - name: RELAY_NETWORKS
              value: "10.0.0.0/8"
          ports:
            - containerPort: 25
---
kind: Service
apiVersion: v1
metadata:
  name: smtp-service
spec:
  selector:
    app: smtp
  ports:
  - protocol: TCP
    port: 25
    targetPort: 25
  type: ClusterIP

You can apply the above manifest. Be sure to change the SMTP_HOSTNAME and RELAY_NETWORKS values to match your network. Please note: “your network” really means your internal Kubernetes network. After all, why would we send an email from an external source (well, unless you want to, in which case, change the service type from ClusterIP to LoadBalancer)?

kubectl -n plexstack apply -f smtp.yaml

We now have a working SMTP server! The coolest part of Kubernetes service discovery is being able to simply use the name of our service from any application in the same namespace:

Using the service name means that this configuration is portable: there is no need to plug in the cluster IP address that was assigned.
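If you want to see service discovery in action, a throwaway pod can resolve the service by name. A quick sketch using busybox:

kubectl -n plexstack run dns-test --rm -it --image=busybox --restart=Never -- nslookup smtp-service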