PlexStack Part 3 – External Services and Ingresses

Now we will finally start to get into some useful configurations for our home PlexStack: ingresses and external services.

An ingress is a Kubernetes-managed reverse proxy rule that is typically selected by hostname. Your new Rancher cluster is already listening on ports 80 and 443, but if you have tried to connect by IP address, you were greeted with a 404 error. An ingress essentially routes a web connection for a particular URL to a service. This means that you will need to configure your DNS, and likely your router. Let's look at an example to explain:
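You can see that 404 for yourself. In the quick check below, 10.0.1.21 is just a stand-in for one of your node IPs, and the -k is needed because the ingress controller serves a self-signed default certificate until we give it real ones:

curl -ik https://10.0.1.21/
# expect an HTTP 404 response from the ingress controller's default backend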

I have a service called Uptime Kuma (an excellent status dashboard with alerting) that runs on a Raspberry Pi. The trouble is, I want to secure the connection with SSL. I could install a cert on the Pi, but how would I automatically renew the 90-day Let's Encrypt cert? More importantly, how do I put multiple named services behind a single IP address? Ingresses.

For my example, I have a DNS entry for status.ccrow.org that points to the external IP of my router. I then forward ports 80 and 443 (TCP) to my Rancher node. If I have more than one node, it turns out I can port forward to ANY Rancher node.
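A couple of quick DNS and reachability checks are worth running before going further (dig and nc are common tools; swap in your own hostname):

# should return your router's external (public) IP
dig +short status.ccrow.org
# from outside your network, confirm the forwarded port answers
nc -zv status.ccrow.org 443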

Next, I have a YAML file that defines three things:

  1. A service – an internal construct that Kubernetes uses to route traffic to pods (and, as here, other things)
  2. An endpoint – a Kubernetes object that resolves to an address outside the cluster (my Pi, in this case)
  3. An ingress – a rule that matches incoming connections for status.ccrow.org and routes them to the service, which in turn points at the endpoint. It also carries the SSL certificate configuration

apiVersion: v1
kind: Service
metadata:
  namespace: externalsvc
  name: pikuma-svc
spec:
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 3001
      targetPort: 3001
---
apiVersion: v1
kind: Endpoints
metadata:
  namespace: externalsvc
  name: pikuma-svc
subsets:
  - addresses:
    - ip: 10.0.1.4
    ports:
    - port: 3001
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-pikuma
  namespace: externalsvc
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx

spec:
  tls:
    - hosts:
        - status.ccrow.org
      secretName: status-tls
  rules:
    - host: status.ccrow.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: pikuma-svc
                port:
                  number: 3001

A few important elements in the above are worth explaining:

  • All three of these objects will be stored in the externalsvc namespace (which will need to be created!)
  • The ingress points to the service, which points to the endpoint
  • The name of the service and the endpoint must match – pikuma-svc appears in the Service metadata, the Endpoints metadata, and the ingress backend
  • The service type (ClusterIP) is interesting. If it were set to LoadBalancer, an IP address (from the range we defined in the previous blog post) would be provisioned for the service. No sense in doing that here.
  • The cert-manager.io/cluster-issuer annotation defines which cert provisioner we are using. Per our previous blog post, your choices are letsencrypt-prod, letsencrypt-staging, and selfsigned-cluster-issuer. Only use letsencrypt-prod if you are ready to go live. You can certainly use the self-signed issuer if you are using an internal DNS name, or if you don't mind a self-signed certificate.
  • The host under tls and the host under rules must match; this is the DNS name that incoming connections will be matched against.

Apply the config with:

kubectl create namespace externalsvc
kubectl apply -f uptimekuma-external.yaml
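
A quick sanity check after the apply, using the names from the manifest above:

kubectl -n externalsvc get svc,endpoints,ingress
kubectl -n externalsvc get certificate
# the certificate should flip to READY=True once Let's Encrypt completes its challenge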

If you decided to go with the Let's Encrypt cert, some verification has to happen. cert-manager will create a certificate request, which will create an order, which will create a challenge, which will spawn a new pod with a key that the Let's Encrypt servers will try to connect to. Of course, if the DNS name or firewall hasn't been configured, this process will fail.

This troubleshooting example is an excellent reference for tracking down issues (credit: the cert-manager troubleshooting documentation):

$ kubectl get certificate
NAME               READY   SECRET     AGE
acme-certificate   False   acme-tls   66s

$ kubectl describe certificate acme-certificate
[...]
Events:
  Type    Reason     Age   From          Message
  ----    ------     ----  ----          -------
  Normal  Issuing    90s   cert-manager  Issuing certificate as Secret does not exist
  Normal  Generated  90s   cert-manager  Stored new private key in temporary Secret resource "acme-certificate-tr8b2"
  Normal  Requested  89s   cert-manager  Created new CertificateRequest resource "acme-certificate-qp5dm"

$ kubectl describe certificaterequest acme-certificate-qp5dm
[...]
Events:
  Type    Reason        Age    From          Message
  ----    ------        ----   ----          -------
  Normal  OrderCreated  7m17s  cert-manager  Created Order resource default/acme-certificate-qp5dm-1319513028

$ kubectl describe order acme-certificate-qp5dm-1319513028
[...]
Events:
  Type    Reason   Age    From          Message
  ----    ------   ----   ----          -------
  Normal  Created  7m51s  cert-manager  Created Challenge resource "acme-certificate-qp5dm-1319513028-1825664779" for domain "example-domain.net"

$ kubectl describe challenge acme-certificate-qp5dm-1319513028-1825664779
[...]
Status:
  Presented:   false
  Processing:  true
  Reason:      error getting clouddns service account: secret "clouddns-accoun" not found
  State:       pending
Events:
  Type     Reason        Age                    From          Message
  ----     ------        ----                   ----          -------
  Normal   Started       8m56s                  cert-manager  Challenge scheduled for processing
  Warning  PresentError  3m52s (x7 over 8m56s)  cert-manager  Error presenting challenge: error getting clouddns service account: secret "clouddns-accoun" not found

Nine times out of ten the issue will be in the challenge: Let's Encrypt can't connect to the pod to verify you are who you say you are.
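
Two checks I find useful when a challenge hangs. Both are just probes, and the curl token below is made up – any path under /.well-known/acme-challenge/ works as a reachability test, even if it comes back as a 404:

kubectl get challenges -A
# run this from OUTSIDE your network (a phone hotspot works well):
curl -i http://status.ccrow.org/.well-known/acme-challenge/test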

Now the above isn't very cool if you only have one service behind your firewall, but if you have half a dozen, it can be very useful because you can put all of your web services behind a single IP. We will build on this ingress next by deploying our first application to our cluster.

The rest of the restore – part 2

With the last post getting a little long, we will pick up where we left off. Our first task is to set up something called a proxy volume. A proxy volume is a Portworx-specific feature that allows me to create a PVC that is backed by an external NFS share, in this case my minio export. It should be noted that I wiped the minio configuration from the export by deleting the .minio.sys directory, but you won't need to worry about that with a new install.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-proxy-volume-miniok8s
provisioner: kubernetes.io/portworx-volume
parameters:
  proxy_endpoint: "nfs://10.0.1.8"
  proxy_nfs_exportpath: "/volume1/miniok8s"
  mount_options: "vers=3.0"
allowVolumeExpansion: true
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: minio
  name: minio-data
spec:
  storageClassName: portworx-proxy-volume-miniok8s
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2T

The above does a couple of things. First, note the '---': this is how multiple YAML documents are combined in one file. The first section creates a new storage class that points to my NFS export. The second section creates a PVC called minio-data that we will use later. Why not just mount the NFS export to the worker node? Because I don't know which worker node my pod will be deployed on, and I would rather not mount my minio export to every node (as well as needing to update fstab any time I do something like this!)

Create the minio namespace (the PVC above lives there, and we will use it for the rest of this section), then apply the manifest:

kubectl create namespace minio
kubectl apply -f minio-pvc.yaml
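
The PVC should bind almost immediately, since the proxy volume just points at the existing NFS export. A quick check using the names from the manifest above:

kubectl get sc portworx-proxy-volume-miniok8s
kubectl -n minio get pvc minio-data
# STATUS should show Bound before moving on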

Install Minio

To install minio, we will be using helm again. We will be using a values.yaml file for the first time. Let’s get ready:

helm show values minio/minio > minio-values.yaml
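
The minio/minio chart reference assumes the chart repo is already added to your helm client; if it isn't, something like the following should do it (charts.min.io is the upstream MinIO community chart repo):

helm repo add minio https://charts.min.io/
helm repo update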

The helm show values command writes an example values file to minio-values.yaml. Take the time to read through the whole file, but here are some important lines:

32 mode: standalone
...
81 rootUser: "minioadmin"
82 rootPassword: "AwsomeSecurePassword"
...
137 persistence:
138   enabled: true
139   annotations: {}

  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
144   existingClaim: "minio-data"
...
316 users:
322   - accessKey: pxbackup
323     secretKey: MyAwesomeKey
324     policy: readwrite

Be careful copying the above, as I have manually written in the line numbers so you can find them in your values file. It is also possible to create buckets from here. There is a ton of customization that can happen in a values.yaml file without you needing to paw through manifests. Install minio with:

helm -n minio install minio minio/minio -f minio-values.yaml
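
Before moving on, a quick sanity check that the release is up, and a look at the two services the chart created (we will point ingresses at these shortly):

kubectl -n minio get pods
kubectl -n minio get svc
# expect a 'minio' service on port 9000 (the S3 API) and 'minio-console' on 9001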

Minio should be up and running, but we don’t have a good way of getting to it. Now is the time for all of our prep work to come together. We first need to plumb a couple of networking things out.

First, configure your firewall to forward ports 80 and 443 to the IP of any node in your cluster.

Second, configure a couple of DNS entries. I use:
minio.ccrow.org – the S3 API endpoint; this should point to the external IP of your router
minioconsole.lab.local – my internal DNS name for managing minio; point this to any node in your cluster

Now for the minio ingresses:

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-minio
  namespace: minio
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
spec:
  tls:
    - hosts:
        - minio.ccrow.org
      secretName: minio-tls
  rules:
    - host: minio.ccrow.org #change this to your DNS name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: minio
                port:
                  number: 9000
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-minioconsole
  namespace: minio
  annotations:
    cert-manager.io/cluster-issuer: selfsigned-cluster-issuer
    kubernetes.io/ingress.class: nginx

spec:
  tls:
    - hosts:
        - minioconsole.lab.local
      secretName: minioconsole-tls
  rules:
    - host: minioconsole.lab.local # change this to your DNS name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: minio-console
                port:
                  number: 9001

The above will create two ingresses in the minio namespace: one points minioconsole.lab.local to the minio-console service that the helm chart created, and the other points minio.ccrow.org to the minio service.

We haven't talked much about services, but they are a way for containers running on Kubernetes to talk to each other. An ingress listens for an incoming hostname (think old web servers with virtual hosts) and routes traffic to the appropriate service. Because of all the work we did earlier, these ingresses will automatically get certificates: Let's Encrypt for minio.ccrow.org, and a self-signed cert for the internal console. Apply the above with:

kubectl apply -f minio-ingress.yaml
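
To confirm everything landed, check the ingresses and certificates (minio-tls and minioconsole-tls come from the secretName fields above):

kubectl -n minio get ingress
kubectl -n minio get certificate
# both certificates should eventually show READY=True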

There are a few things that can go wrong here, and I will update this post as questions come in. At this point, it is easy to configure PX-Backup from the GUI: point it at minio.ccrow.org as the backup target, then point PX-Backup at your cluster by exporting your kubeconfig and handing it to the GUI.

Click on the 'All backups' link (the initial scan will take a few minutes), and:

Sweet, sweet backups!!!

Again, sorry for the CliffsNotes version of these installs, but I wanted to make sure I documented this!

And yes, I backed up this WordPress site this time…