Live Pod Deployments

I implement three different kinds of deployments:
  1. A serviceless deployment that reaches out as a client to an API to retrieve data for processing.  This deployment is also constrained to run on only two of the nodes, keeping it off the node that is running the video source (a sketch of that constraint follows this list).
  2. A DaemonSet that runs on all nodes and is exposed to the "outside" (technically the outside in this case is just the rest of my office ;-) ) via the MetalLB load balancer.  This service is stateless, since each use is a single request/response.
  3. A single-service, multi-process deployment that serves a web page for selecting a live stream of a timestamped color-bar screen with a clock and an audible timer.
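
The BOINC manifests themselves live in the GitHub repository linked below rather than in this post, but the node pinning described in item 1 can be expressed with a nodeAffinity rule along these lines (the label key and value here are hypothetical; substitute whatever labels your nodes actually carry):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: boinc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: boinc
  template:
    metadata:
      labels:
        app: boinc
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              # hypothetical label: keep BOINC off the video-source node
              - key: role
                operator: NotIn
                values:
                - video-source
      containers:
      - name: boinc
        image: boinc/client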

BOINC deployment

Now that the underlying cluster is working, we can start it doing some work. I decided to use the BOINC client as my demo workload. There are two projects that I'm currently running: "MilkyWay@home" and "World Community Grid/Mapping Cancer Markers". I figured I needed a demo workload, and what better than one that's doing some actually needed work?  The manifests I'm using can be found at https://github.com/cstradtman/kube-demo.git.  I've tried to put comments in the manifests to make them self-explanatory.  I'm using Kubernetes secrets to store the BOINC RPC password used with the GUI and the "weak key" assigned to me by each project.  Since the BOINC folks actually publish an "official" container on Docker Hub, I'm not doing the full CI workflow.  I've done a kind of "poor man's CD" that I'll describe on another page.
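
As a sketch of how those secrets can be wired up (the secret and key names here are hypothetical, not copied from my actual manifests), the values are loaded into a Secret:

kubectl create secret generic boinc-secrets \
  --from-literal=rpc-password='<BOINC RPC password>' \
  --from-literal=weak-key='<project weak account key>'

and then injected into the container as environment variables, e.g. for the RPC password (the variable name assumes the convention of the official boinc/client image):

        env:
        - name: BOINC_GUI_RPC_PASSWORD   # env var the boinc/client image reads for the RPC password
          valueFrom:
            secretKeyRef:
              name: boinc-secrets
              key: rpc-password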

whoami deployment

The second deployment is a simple program that reflects all the information about the HTTP GET request back to the requester.  It's a simple container image that also lives on Docker Hub: https://hub.docker.com/r/containous/whoami.  I have this running as a DaemonSet so that it runs on all three of the nodes.  This serves as a lightweight health check for me to use in testing during the setup of this Kubernetes installation.  The DaemonSet YAML configuration is below.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: whoami-deployment
spec:
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: containous/whoami
        ports:
        - containerPort: 80
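
The DaemonSet only schedules the pods; the MetalLB exposure mentioned in the list at the top comes from a separate Service of type LoadBalancer. That manifest isn't shown in this post, but a minimal sketch would be:

apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  type: LoadBalancer   # MetalLB assigns an external IP from its pool
  selector:
    app: whoami
  ports:
  - port: 80
    targetPort: 80

Once MetalLB hands out an address, curl http://<external-ip> echoes back the request headers plus the serving pod's hostname, which is what makes it useful as a quick health check.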

Live Video Source deployment

gstreamer

The third deployment consists of a pair of containers that serve a web page capable of generating live streams. These streams feature a color bar screen with a timestamp and a timed audio beep (or audio timer), and are available in either HLS or DASH format.
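
The specifics of the pipeline inside that container are covered on the page linked below; purely as an illustration of the idea, a GStreamer pipeline that generates this kind of source and pushes it to an RTMP ingest (the nginx container below listens on the standard RTMP port, 1935) might look roughly like this (the stream path and encoder choices are hypothetical):

gst-launch-1.0 -e \
  videotestsrc is-live=true pattern=smpte ! clockoverlay ! \
  x264enc tune=zerolatency ! h264parse ! flvmux name=mux streamable=true ! \
  rtmpsink location=rtmp://nginx:1935/live/stream \
  audiotestsrc is-live=true wave=ticks ! audioconvert ! voaacenc ! mux.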

The domain ghcr.io refers to the GitHub Container Registry, a service provided by GitHub for storing and managing Docker and OCI container images.  For specifics on the creation of this container, see here.
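
Both manifests below reference an imagePullSecret named ghcrpullcred so that the kubelet can authenticate to ghcr.io when pulling the images. A registry secret like that is created from a GitHub personal access token (it needs the read:packages scope):

kubectl create secret docker-registry ghcrpullcred \
  --docker-server=ghcr.io \
  --docker-username=<github-username> \
  --docker-password=<personal-access-token>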

The deployment YAML for gstreamer is below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gstreamer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gstreamer
  template:
    metadata:
      labels:
        app: gstreamer
    spec:
      imagePullSecrets:
      - name: ghcrpullcred
      # The init container runs to completion before the main container starts,
      # holding gstreamer back until nginx is reachable.
      initContainers:
      - name: wait-for-nginx
        image: busybox
        command: ['sh', '-c', 'until nc -z nginx 8080; do echo waiting for nginx; sleep 2; done;']
      containers:
      - name: gstreamer
        image: ghcr.io/cstradtman/gstreamer_livesource:main

The deployment YAML for nginx is below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:
      - name: ghcrpullcred
      containers:
      - name: nginx
        image: ghcr.io/cstradtman/gstreamer_webserver:main
        ports:
        - containerPort: 8080  # HTTP side: player page and stream segments
        - containerPort: 1935  # standard RTMP port, fed by the gstreamer container
        volumeMounts:
        - mountPath: /live
          name: live-storage
      volumes:
      # scratch space for the live stream; contents vanish with the pod
      - name: live-storage
        emptyDir: {}

So what is happening here is that we are defining two deployments in these YAML stanzas, one named nginx and the other named gstreamer.  Within the Kubernetes cluster the deployments are referred to by their defined names.  As written, there isn't any guarantee about the order in which the containers start.
In this scenario gstreamer provides content to nginx, or conversely, nginx is a client of gstreamer.  To work around the dependency ordering, the initContainers stanza is added.  The container image being used for this is busybox.  As per its home page, "BusyBox combines tiny versions of many common UNIX utilities into a single small executable."  This gives us an extremely lightweight container to use for testing the liveness of the nginx container.  Here's a breakdown of the shell snippet.
  • until nc -z nginx 8080;
    • nc: Netcat, a utility for testing network connections.
      • -z: Tells Netcat to check only whether the specified port is open (without sending any data).
    • nginx: the hostname of the nginx service, resolved by the cluster's DNS (see the Service sketch below).
    • 8080: The port being checked.
    • The command runs in a loop until the nginx server is detected as available on port 8080.
  • do echo waiting for nginx; sleep 2;
    • While the port is not open, it outputs the message "waiting for nginx" and pauses for 2 seconds.
  • done;
    • Ends the loop once the nginx service responds on port 8080.
Once the loop finishes, the init container exits and Kubernetes starts the main gstreamer container from ghcr.io/cstradtman/gstreamer_livesource:main.
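
One detail worth calling out: nc -z nginx 8080 only succeeds because the name nginx resolves through the cluster's DNS, and that requires a Service named nginx in the same namespace. That Service manifest isn't reproduced in this post, but a minimal sketch would look like:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: rtmp
    port: 1935
    targetPort: 1935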
