User Access
There aren't any users per se in Kubernetes; rather, a user's configuration grants them access to certain facilities through Role Based Access Control (RBAC).
The second aspect is that those permissions are time limited: essentially, the certificate encoding those rights is given an expiry date, which might be daily, say.
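Given that daily expiry, it's handy to be able to ask a certificate how long it has left. A minimal sketch using openssl's -enddate and -checkend; here we generate a throwaway one-day certificate to stand in for a real user certificate (names and paths are illustrative):

```shell
# Generate a throwaway one-day certificate -- in practice you'd point
# these checks at the client certificate from the user's kubeconfig.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout ${tmpdir}/user.key -out ${tmpdir}/user.crt \
    -subj "/CN=bob/O=devops" 2>/dev/null

# Report the expiry date...
openssl x509 -noout -enddate -in ${tmpdir}/user.crt

# ...and test whether it survives the next hour: -checkend exits 0 if
# the certificate will still be valid that many seconds from now.
if openssl x509 -checkend 3600 -in ${tmpdir}/user.crt > /dev/null ; then
    echo "certificate valid for at least another hour"
else
    echo "certificate expires within the hour -- time to renew"
fi
rm -rf ${tmpdir}
```

A cron job wrapping that -checkend test is one way to know renewal day has arrived before kubectl starts refusing you.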
RBAC
RBAC is implemented in Kubernetes through Roles and RoleBindings.
Note, though, that these are NameSpace-specific. That's easy enough to discover when you only have a few NameSpaces and the one you want is missing, but rather harder when the same entity names are re-used across multiple NameSpaces.
Roles
A Role combines verbs (get, create, etc.) with Resources (Pods, ConfigMaps, etc.).
You can create them on the command line:
# kubectl create role my-role --verb=get --verb=list --resource=pods --resource=services
or something like:
# kubectl create -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-role
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - services
  verbs:
  - list
  - get
EOF
The adventurous can try:
# kubectl edit role/my-role
and run the risk of getting stuck in a loop of bad edits as you try to figure out what the valid syntax and/or options are. (Good times!)
RoleBindings
A RoleBinding simply ties a Role together with a (possibly not-yet-existing) User:
# kubectl create rolebinding my-role-binding-my-user --role=my-role --user=my-user
ClusterRoles and ClusterRoleBindings
Some entities live outside any NameSpace and need to be manipulated with Cluster-level permissions. Otherwise they function the same.
Users
Users, or rather the association of a User, through a RoleBinding, with the permissions granted in a Role, are a bit more interesting.
As noted, they are encapsulated in the form of a certificate, which means we need the whole palaver of certificate keys, requests, signing and issuing. Then we need to repeat it all when the certificate expires.
You'd imagine the enterprise wrappers save a bit of time, but what actually happens underneath?
In this example we'll use a built-in auto-signer. You can create your own signer (where I got stuck!) but we'll stick with the easy stuff.
Also the same names/words appear time and again so we'll try to distinguish everything with variables.
OpenSSL CSR
We need a Certificate Signing Request which, in turn, requires a Certificate key.
It appears that we can create a one-off OpenSSL CSR for a user which can be re-used on expiry of the signed Certificate. I guess we could conceptually add the idea of revoking the CSR (by deleting the file? Maybe something more complex.).
Anyway, some actual user, bob, wants access. We need to create an OpenSSL CSR for bob (and set up a few derived variables):
# user=bob
# user_key=${user}.key
# user_crt=${user}.crt
# user_csr=${user}.csr
# user_cfg=${user}.cfg
# openssl req -new -newkey rsa:4096 -nodes -keyout ${user_key} -out ${user_csr} -subj "/CN=${user}/O=devops"
The CommonName (CN) is the important bit: the CommonName is the User. The Organization (O=devops) maps onto a Kubernetes group.
There doesn't appear to be any association between the Kubernetes User and an operating system user account. If you have read access to a config file, you have those associated permissions. Erm, yes, that means the config file also encodes the Certificate's key.
In this regard, you'll see plenty of examples where the suggestion is that you copy admin.conf to your personal ~/.kube/config and simply not worry about permissions again. It's a strategy.
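Since the config file carries the private key, it deserves the same protection as an SSH key. A quick sketch of checking and fixing the permissions (the filename is illustrative; we just create an empty stand-in here):

```shell
# Stand-in for a generated user config; real ones come from the steps below
cfg=bob.cfg
touch ${cfg}

# Lock it down to the owner only, as you would an SSH private key
chmod 600 ${cfg}

# stat -c is GNU coreutils; BSD/macOS wants stat -f '%Lp' instead
perms=$(stat -c '%a' ${cfg})
if [ "${perms}" != "600" ]; then
    echo "WARNING: ${cfg} is mode ${perms}, expected 600" >&2
fi
rm -f ${cfg}
```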
Kubernetes CSR
Just to check, we could report the OpenSSL subject:
# openssl req -noout -subject -in ${user_csr}
subject=CN = bob, O = devops
and check that it matches our expected ${user}.
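That check can itself be scripted. A sketch, regenerating a CSR so it stands alone; note the subject spacing varies between OpenSSL versions ("CN = bob" vs "/CN=bob"), so we strip spaces before comparing:

```shell
user=bob
user_csr=${user}.csr

# Generate a CSR to inspect (normally already done above)
openssl req -new -newkey rsa:2048 -nodes -keyout ${user}.key \
    -out ${user_csr} -subj "/CN=${user}/O=devops" 2>/dev/null

# Normalise away the version-dependent spacing in the subject line
subject=$(openssl req -noout -subject -in ${user_csr} | tr -d ' ')

case "${subject}" in
    *CN=${user}*) echo "CSR CommonName matches ${user}" ;;
    *) echo "CSR CommonName does NOT match ${user}" >&2 ; exit 1 ;;
esac
rm -f ${user}.key ${user_csr}
```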
The Kubernetes CSR is an entity in the system:
# cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: ${user}
spec:
  request: $(cat ${user_csr} | base64 | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400
  usages:
  - client auth
EOF
There are a few things of interest here:
- the spec.request is the base64 encoding of the OpenSSL CSR file
- we are requesting a specific signer, kubernetes.io/kube-apiserver-client
- our expiration time is one day away (very annoying!)
- metadata.name is just an identifying token, the User is encoded in CommonName in the CSR
- spec.usages are client auth
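That base64-without-newlines dance for spec.request matters: base64 wraps its output at 76 columns by default, and the embedded newlines would corrupt the YAML. A quick local demonstration (GNU base64 also offers -w0 as a shortcut for the tr):

```shell
# Any payload longer than 57 bytes will trigger the line wrapping
payload=$(printf 'x%.0s' $(seq 1 200))

wrapped=$(printf '%s' "${payload}" | base64)
flat=$(printf '%s' "${payload}" | base64 | tr -d '\n')

echo "wrapped form spans $(printf '%s\n' "${wrapped}" | wc -l) lines"
echo "flat form spans    $(printf '%s\n' "${flat}" | wc -l) line"

# And the flat form still round-trips back to the original payload
printf '%s' "${payload}" > p.orig
printf '%s' "${flat}" | base64 --decode > p.back
cmp -s p.orig p.back && echo "round trip OK"
rm -f p.orig p.back
```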
Approval
The CSR will sit there looking stupid; try kubectl get csr:
NAME   AGE   SIGNERNAME                            REQUESTOR          REQUESTEDDURATION   CONDITION
bob    1s    kubernetes.io/kube-apiserver-client   kubernetes-admin   24h                 Pending
until we approve it:
# kubectl certificate approve ${user}
whereupon the CSR will become Approved but not yet Issued.
Signing
At some point, the specific signer we requested will wake up and sign the certificate. This should occur in a few seconds:
bob    1s    kubernetes.io/kube-apiserver-client   kubernetes-admin   24h                 Approved,Issued
If you requested a different signer then that signer needs to, er, sign the certificate. This could be a slow manual process.
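Rather than staring at kubectl get csr, you can poll until the condition appears. A sketch of the wait loop; the real check would be a kubectl query like the one in the comment, but here a stub that succeeds on the third attempt stands in for it so the loop runs anywhere (all names are illustrative):

```shell
# In real use the check would be something like:
#   kubectl get csr ${user} -o jsonpath='{.status.conditions[*].type}' | grep -q Approved
# Here a stub that succeeds on the third call stands in for it.
attempts=0
check() {
    attempts=$((attempts + 1))
    [ ${attempts} -ge 3 ]
}

tries=0
max_tries=10
until check ; do
    tries=$((tries + 1))
    if [ ${tries} -ge ${max_tries} ]; then
        echo "gave up after ${max_tries} attempts" >&2
        exit 1
    fi
    sleep 1
done
echo "condition satisfied after ${attempts} checks"
```

For a manual signer the max_tries/sleep budget would need to be rather more generous.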
Certificate Collection
The newly signed Certificate is still embedded in the Kubernetes system; we need to extract it. Here we can do a little JSON magic:
# kubectl get csr ${user} -o jsonpath='{.status.certificate}' | base64 --decode > ${user_crt}
Config Generation
The signed OpenSSL (X509!) Certificate isn't much use on its own, as kubectl wants a configuration file. Here we need to gather a few things:
(Obviously, you don't need to repeat these all the time!)
the cluster's name:
# cluster=$(kubectl config view -o jsonpath='{.clusters[0].name}')
the cluster's server address:
# server=$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}')
This could be complicated by having more than one master. We only have one at the moment...
the Kubernetes Cluster's CA Certificate -- reusing ${cluster} in case we have more than one:
# ca_crt=${cluster}-ca.crt
# kubectl config view -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' --raw | base64 --decode - > ${ca_crt}
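If kubectl isn't to hand, the CA data can be pulled out of the config file directly: it's just base64 sitting in YAML. A sketch against a synthesised minimal kubeconfig (the real file is typically /etc/kubernetes/admin.conf or ~/.kube/config, and the sed parsing assumes the usual single-cluster layout; all names here are made up):

```shell
# Synthesise a minimal kubeconfig with a stand-in "certificate"
ca_payload="-----BEGIN CERTIFICATE-----
MIIfake
-----END CERTIFICATE-----"
ca_b64=$(printf '%s' "${ca_payload}" | base64 | tr -d '\n')

cat > demo.conf <<EOF
apiVersion: v1
kind: Config
clusters:
- name: demo
  cluster:
    server: https://192.168.1.10:6443
    certificate-authority-data: ${ca_b64}
EOF

# Extract and decode the field -- the poor man's version of the
# kubectl config view jsonpath query above
sed -n 's/^ *certificate-authority-data: //p' demo.conf \
    | base64 --decode > demo-ca.crt

head -1 demo-ca.crt
rm -f demo.conf demo-ca.crt
```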
Now we can start constructing a config file where we pass --kubeconfig=${user_cfg} (rather than stamp over our own config file).
define the cluster in this config file:
# kubectl config set-cluster ${cluster} --server=${server} --certificate-authority=${ca_crt} --kubeconfig=${user_cfg} --embed-certs
add the User's credentials:
# kubectl config set-credentials ${user} --client-certificate=${user_crt} --client-key=${user_key} --embed-certs --kubeconfig=${user_cfg}
create the NameSpace -- if you're not using the default namespace. Let's suppose we have an opt_ns variable:
if ! kubectl get ns/${opt_ns} ; then
    kubectl create ns ${opt_ns}
fi
define the context -- here using a ${user}-${cluster} naming scheme:
# kubectl config set-context ${user}-${cluster} --cluster=${cluster} --namespace=${opt_ns} --user=${user} --kubeconfig=${user_cfg}
Notice that it binds in the NameSpace.
use the context:
# kubectl config use-context ${user}-${cluster} --kubeconfig=${user_cfg}
Now you are free to pass ${user_cfg} to the user as their ~/.kube/config file (or otherwise represented in KUBECONFIG).
And, remember, they'll be back again tomorrow for a new config file!