Status Server
The Ingress example I wanted to use didn't work. In fact I had misconfigured it, but at the time I thought it wasn't responding to the /healthz "liveness" requests, so I looked to write my own microservice. Writing a simple microservice is interesting in itself, of course.
So, we need to author a microservice and push it into our private Docker registry; it won't look a million miles away from https://github.com/txn2/ok.git.
I'm using a previously created system with Docker installed, completely separate from the Kubernetes cluster.
Go
Unfamiliar as I am with Go, my cheatsheet looks like:
install Go: dnf install go
create a working area:
$ mkdir status
$ cd status
initialise the Go module:
$ go mod init example.com/app/status
which creates go.mod if nothing else (see the example after this list)
edit status.go -- see below
refresh the module:
$ go mod tidy
which creates/updates go.sum
build the module (which may tidy as well):
$ go build [example.com/app/status]
run the app:
$ IP=0.0.0.0 ./status
Easy!
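For reference, the go.mod you end up with looks something like this (the Go release and the dependency versions are whatever your toolchain and go mod tidy choose, so treat them as illustrative; tidy also appends a require block of // indirect dependencies which I've omitted):

module example.com/app/status

go 1.21

require (
    github.com/gin-gonic/gin v1.9.1
    github.com/google/uuid v1.3.0
)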
Source
Here's the source code:
package main

// following https://github.com/txn2/ok.git

import (
    //"log"
    "net/http"
    "os"
    "time"

    "github.com/gin-gonic/gin"
    "github.com/google/uuid"
)

func getEnv (key, def string) string {
    val, exists := os.LookupEnv (key)
    if exists {
        return val
    } else {
        return def
    }
}

var (
    // useful values from the service's deployment
    nodeName       = getEnv ("NODE_NAME", "(undefined)")
    podName        = getEnv ("POD_NAME", "(undefined)")
    podNamespace   = getEnv ("POD_NAMESPACE", "(undefined)")
    podIP          = getEnv ("POD_IP", "(undefined)")
    serviceAccount = getEnv ("SERVICE_ACCOUNT", "(undefined)")

    ip      = getEnv ("IP", "127.0.0.1")
    port    = getEnv ("PORT", "8080")
    message = getEnv ("MESSAGE", "status service message")
)

var Version = "0.0"
var Service = "status"

func main () {
    gin.SetMode (gin.ReleaseMode)
    rtr := gin.Default ()
    rtr.SetTrustedProxies (nil)

    count := 1
    svcUUID := uuid.New ()

    rtr.GET ("/", func (c *gin.Context) {
        callUUID := uuid.New ()
        //c.String (http.StatusOK, "Hello World from /")
        c.JSON (http.StatusOK, gin.H{
            "time":            time.Now (),
            "count":           count,
            "svc-uuid":        svcUUID,
            "call-uuid":       callUUID,
            "client-ip":       c.ClientIP (),
            "node-name":       nodeName,
            "pod-name":        podName,
            "pod-namespace":   podNamespace,
            "pod-ip":          podIP,
            "pod-port":        port,
            "service-account": serviceAccount,
        })
        count++
    })

    rtr.GET ("/healthz", func (c *gin.Context) {
        c.String (http.StatusOK, Service + " is Healthy")
    })

    // pseudo-wildcard - one path deep: /foo but not /foo/bar
    rtr.GET ("/:path", func (c *gin.Context) {
        path := c.Param ("path")
        c.String (http.StatusOK, "Hello World from /" + path)
    })

    // anything else, eg. /not/the/path/you/were/looking/for
    rtr.NoRoute (func (c *gin.Context) {
        path := c.Request.URL.Path
        c.String (http.StatusNotFound, "%s Not Found", path)
    })

    rtr.Run (ip + ":" + port)
}
The actually interesting things here are the three GET calls on the router:
- the first one handles the base requests to / and reports on some state
- the second one, the missing /healthz, merely reports StatusOK, which is all the health check needs
- the third handles simple path component wildcards, eg. /foo, /baz etc.
The fourth router call handles anything else, eg. /not/the/path/you/were/looking/for, /secret/sauce, etc.
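To convince myself of that routing behaviour, here's a throwaway sketch (a separate main, not part of status.go; it re-declares cut-down copies of the /healthz, /:path and NoRoute handlers) driving the same route shapes through net/http/httptest:

package main

import (
    "fmt"
    "net/http"
    "net/http/httptest"

    "github.com/gin-gonic/gin"
)

func main () {
    gin.SetMode (gin.ReleaseMode)
    rtr := gin.New ()

    // cut-down copies of the handlers above
    rtr.GET ("/healthz", func (c *gin.Context) {
        c.String (http.StatusOK, "status is Healthy")
    })
    rtr.GET ("/:path", func (c *gin.Context) {
        c.String (http.StatusOK, "Hello World from /" + c.Param ("path"))
    })
    rtr.NoRoute (func (c *gin.Context) {
        c.String (http.StatusNotFound, "%s Not Found", c.Request.URL.Path)
    })

    // /healthz hits the static route, /foo the one-deep wildcard,
    // and the longer path falls through to NoRoute
    for _, path := range []string{"/healthz", "/foo", "/not/the/path/you/were/looking/for"} {
        w := httptest.NewRecorder ()
        rtr.ServeHTTP (w, httptest.NewRequest (http.MethodGet, path, nil))
        fmt.Printf ("%-40s -> %d %q\n", path, w.Code, w.Body.String ())
    }
}

Running it should print a 200 (and the greeting) for /healthz and /foo, and a 404 for the deeper path.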
The other thing to note is the requirement to set IP in the environment, otherwise the service will listen on 127.0.0.1, which makes it tricky to connect to. But secure.
Dockerfile
The Dockerfile isn't too exciting. Not being a Docker person, I find the arrangement for the nobody user a little clumsy. Maybe it's Docker Art:
FROM alpine:latest AS util
RUN echo "nobody:x:65534:65534:Nobody:/:" > /etc_passwd

FROM scratch
ENV PATH=/bin
COPY status /bin/
COPY --from=util /etc_passwd /etc/passwd
WORKDIR /
USER nobody
ENTRYPOINT ["/bin/status"]
Typing it in by hand I, naturally, fell into the COPY status /bin trap and spent a while wondering why I was getting a weird filesystem error: with no existing /bin directory in the scratch image and no trailing slash, the binary is copied as a regular file called /bin, so /bin/status doesn't exist. How has Docker managed to persist with a stomping-over-directories implementation?
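In other words (my reading of the COPY semantics; only the second line is what the Dockerfile above actually uses), the trailing slash is all that separates the trap from the intent:

# no trailing slash and no existing /bin directory: the binary is written as a file called /bin
COPY status /bin
# trailing slash: /bin/ is treated as a directory, giving /bin/status
COPY status /bin/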
Deployment
Here, the scratch base image in the Dockerfile causes the Go binary an issue: by default it is dynamically linked against the C library, which doesn't exist in the image, so when we run it we get a no such file or directory error (it's the dynamic loader that's missing, not our binary).
However, we can placate Go by setting CGO_ENABLED=0 in the environment, which produces a statically linked binary:
$ CGO_ENABLED=0 go build
$ sudo docker build [tags] .
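As a quick sanity check (my own aside, not needed for the build), ldd shows the difference: the default build lists the shared libraries it depends on, whereas the CGO_ENABLED=0 build should be reported as not a dynamic executable:

$ ldd status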
Now for the pseudo-magic: we need to re-tag our (local) image for the remote registry before pushing it there:
$ name=status
$ sudo docker image tag ${name}:latest docker-registry:5000/${name}:latest
$ sudo docker push docker-registry:5000/${name}
Non-Persistent Running
We can run the image to test it out.
Docker Run
If we did want to run the status service locally we could use:
$ sudo docker run --env IP=0.0.0.0 ${name}
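If you want to reach it from the host rather than via the container's own address, publishing the port is the usual approach (a variant of the above, not something the original steps needed):

$ sudo docker run --env IP=0.0.0.0 -p 8080:8080 ${name}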
Kubernetes Run
$ kubectl run status-pod --image=docker-registry:5000/${name}:latest \
    --overrides='{"apiVersion":"v1","spec":{"imagePullSecrets":[{"name":"reg-cred-secret"}]}}' \
    --env=IP=0.0.0.0
Testing
Connecting to port 8080 (or whatever you've used) should give appropriate responses: the JSON state report from /, the health message from /healthz, a greeting from single-component paths and a 404 from anything deeper.
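For example, against the Docker container published on localhost (or, for the bare pod, after something like kubectl port-forward status-pod 8080:8080; the hostname is illustrative):

$ curl http://localhost:8080/healthz
status is Healthy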
Kubernetes
For Kubernetes we need a Deployment and a Service.
Deployment
$ kubectl create -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: status
  labels:
    app: status
    system: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: status
  template:
    metadata:
      labels:
        app: status
        system: example
    spec:
      containers:
      - name: status
        image: docker-registry:5000/example.com/app/status
        imagePullPolicy: Always
        env:
        - name: IP
          value: "0.0.0.0"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        ports:
        - name: status-port
          containerPort: 8080
      imagePullSecrets:
      - name: reg-cred-secret
EOF
where we:
- reference the docker registry: docker-registry:5000/example.com/app/status
- and also the Secret required to access it:
  imagePullSecrets:
  - name: reg-cred-secret
- set IP in the Deployment's environment
- gather some metadata from the Deployment to expose as environment variables too (to be picked up by the microservice in due course)
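Once created, a quick check that the Deployment and its Pod are happy (using the app label from above):

$ kubectl get deployment,pods -l app=status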
Service
$ kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: status
  labels:
    app: status
    system: test
spec:
  selector:
    app: status
  ports:
  - protocol: "TCP"
    port: 8080
    targetPort: 8080
  type: NodePort
EOF
Now we should have a Service we can query on port 8080 from within the cluster in the usual way; since it's a NodePort Service, it is also exposed on a (high, automatically assigned) port on every node.
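For example (the node name and the assigned NodePort are placeholders; kubectl get service shows the real port mapping):

$ kubectl get service status
$ curl http://<node>:<node-port>/healthz
status is Healthy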