Poor Man's Continuous Deployment

Since I'm using the official BOINC client container from Docker Hub, I don't really need any CI work at the moment.  I looked at using Jenkins as a CD engine for this project, but discovered that the recommended memory for Jenkins alone was around 40% of the memory in the ENTIRE cluster.  So I decided to "roll my own".

The logic isn't really all that difficult:

- check the hash of the official container image on Docker Hub
- if it matches the previously saved hash, exit
- else, do a "kubectl patch" with a timestamp to force the cluster to do a rolling update

Then package this script up and stick it in a cron job.
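A crontab entry like the one below is all the scheduling you need. The interval and path here are placeholders rather than what I actually use, so adjust them for wherever the script lives and how often you want to poll Docker Hub.

# hypothetical crontab entry: run the check every 30 minutes
*/30 * * * * /path/to/pmcd.sh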

This little snippet of YAML in the manifest results in the container ALWAYS being pulled from the registry on an update. Otherwise, Kubernetes will simply check the tag, say "Yep, we're good," and use the cached image that's already stored locally.

spec:
  template:
    spec:
      containers:
      - name: boinc
        image: boinc/client
        imagePullPolicy: Always
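As a quick sanity check, you can confirm the pull policy actually landed on the running deployment. This is just a sketch, assuming the rosetta-deployment name used later in the script:

# print the effective imagePullPolicy of the first container in the deployment
kubectl get deployment rosetta-deployment -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}'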
Here is the source code of the script I'm using for my deployment.  It doesn't have a nice dashboard or lots of features, but it gets the basics done.

#!/bin/bash
# This script is fairly simple. To implement a cheap CD pipeline it should be run by cron every so often.
# It does the following:
# 1 pulls a git repo containing a python script that pulls data from docker repositories
# 2 retrieves the hash for the official boinc/client container
# 3 checks the most recent official container hash against the previously pulled hash
# 4 if there is no difference, it exits
# 5 else it saves the new hash value as the future old hash
# 6 pulls the "kube-demo" manifest repository
# 7 applies the latest rosetta and world community grid manifests to the kubernetes cluster
# 8 patches the running deployments with a timestamp indicating the latest manifest pull
tmpfile=$(mktemp /tmp/pmcd.XXXXXX)
#this line pulls the latest code from the repo if it exists locally, otherwise it does a fresh clone of the repo
git -C docker-repo-info pull >>$tmpfile 2>&1 || git clone https://github.com/cstradtman/docker-repo-info.git >>$tmpfile 2>&1
#this pulls the latest container hash value from the container repo for the boinc client with the "latest" tag
newhash=$(python3 docker-repo-info/etagcli.py index.docker.io boinc/client latest 2>>$tmpfile)
# missing file on the first run is fine; oldhash just ends up empty
oldhash=$(cat ~/.pmcicd/currentboinchash.txt 2>/dev/null)
if [ "$newhash" = "$oldhash" ]; then
    echo "no difference" >>$tmpfile 2>&1
    mail -s "short cd run" pmcd@sysnetinc.com < "$tmpfile"
    rm "$tmpfile"
    exit
else
    echo "$newhash" > ~/.pmcicd/currentboinchash.txt
    git -C kube-demo pull >>$tmpfile 2>&1 || git clone https://github.com/cstradtman/kube-demo.git >>$tmpfile 2>&1
    deploydate=$(date +%s)
    kubectl apply -f kube-demo/rosetta.yaml >>$tmpfile 2>&1
# by patching running deployment with the timestamp, it forces a rolling update to the pods in the
# deployment because the manifest has changed
    kubectl patch deployment rosetta-deployment -p "{\"spec\": {\"template\": {\"metadata\": { \"labels\": { \"redeploy\": \"$deploydate\"}}}}}" >>$tmpfile 2>&1
    kubectl apply -f kube-demo/world-community-grid.yaml >>$tmpfile 2>&1
    kubectl patch deployment wcg-deployment -p "{\"spec\": {\"template\": {\"metadata\": { \"labels\": { \"redeploy\": \"$deploydate\"}}}}}" >>$tmpfile 2>&1
    mail -s "deployment cd run" pmcd@sysnetinc.com < "$tmpfile"
    rm "$tmpfile"
fi
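If you want to confirm that the patch actually triggered a rolling update, a couple of kubectl commands are enough. This is only a sketch; <timestamp> stands in for whatever value the script wrote into the redeploy label:

# watch the rolling update kicked off by the patch until it finishes
kubectl rollout status deployment/rosetta-deployment
# list the pods carrying the new redeploy label (substitute the real timestamp)
kubectl get pods -l redeploy=<timestamp>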

The combination of these two, plus cron, results in the Kubernetes cluster being updated to the latest container image whenever a new image is published with the "latest" tag on Docker Hub.


