
TKG v1.4 LDAP (Active Directory) integration with Pinniped and Dex


LDAP integration with Pinniped and Dex is a subject that I have written about before, specifically with TKG v1.3. However, I recently had reason to deploy TKG v1.4 and noticed some nice new enhancements around LDAP integration that I thought were worth highlighting. One is the fact that you no longer need to have a web browser available in the environment where you are configuring LDAP credentials, which was a requirement in the previous version.

In this post, I will deploy a TKG v1.4 management cluster on vSphere. This environment uses the NSX ALB to provide IP addresses for both the TKG cluster control plane as well as Load Balancer services for applications. After deploying the TKG management cluster, the Pinniped and Dex services are converted from NodePort to Load Balancer. Then we will authenticate an LDAP user and assign a ClusterRoleBinding to that user so that they can work with the non-admin context of the management cluster. I will not cover how to deploy TKG with LDAP integration as this is covered in the earlier post and the steps are identical.
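By way of a quick reminder (the full walkthrough is in the earlier post), LDAP/Active Directory is enabled through settings in the management cluster configuration file. The snippet below is only a sketch with illustrative values for a rainpole.com domain; your LDAP host, bind DN, search bases and filters will obviously differ:

IDENTITY_MANAGEMENT_TYPE: ldap
LDAP_HOST: dc01.rainpole.com:636
LDAP_BIND_DN: cn=administrator,cn=Users,dc=rainpole,dc=com
LDAP_BIND_PASSWORD: <password>
LDAP_USER_SEARCH_BASE_DN: cn=Users,dc=rainpole,dc=com
LDAP_USER_SEARCH_FILTER: objectClass=person
LDAP_USER_SEARCH_USERNAME: userPrincipalName
LDAP_GROUP_SEARCH_BASE_DN: cn=Users,dc=rainpole,dc=com
LDAP_GROUP_SEARCH_FILTER: objectClass=group
LDAP_ROOT_CA_DATA_B64: <base64-encoded-LDAPS-CA-certificate>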

Let's begin by looking at the Pinniped and Dex services after the initial management cluster deployment. As mentioned, both are configured to use NodePort services. We will need to change this to Load Balancer.

$ kubectl get all -n pinniped-supervisor
NAME                                      READY   STATUS      RESTARTS   AGE
pod/pinniped-post-deploy-job-789cj        0/1     Error       0          14h
pod/pinniped-post-deploy-job-c7jfc        0/1     Completed   0          14h
pod/pinniped-supervisor-ff8467c76-qt9kz   1/1     Running     0          14h
pod/pinniped-supervisor-ff8467c76-vjh44   1/1     Running     0          14h

NAME                          TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
service/pinniped-supervisor   NodePort   100.65.148.115   <none>        443:31234/TCP   14h

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pinniped-supervisor   2/2     2            2           14h

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/pinniped-supervisor-ff8467c76   2         2         2       14h

NAME                                 COMPLETIONS   DURATION   AGE
job.batch/pinniped-post-deploy-job   1/1           3m53s      14h

 
$ kubectl get all -n tanzu-system-auth
NAME                       READY   STATUS    RESTARTS   AGE
pod/dex-657fdcb9f9-bhtxf   1/1     Running   0          14h

NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/dexsvc   NodePort   100.67.123.98   <none>        5556:30167/TCP   14h

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dex   1/1     1            1           14h
 
NAME                             DESIRED   CURRENT   READY   AGE
replicaset.apps/dex-657fdcb9f9   1         1         1       14h

Configure the Pinniped and Dex services to use Load Balancer services as per the documentation found here. Both Dex and Pinniped should then get external IP addresses provided by the NSX ALB.

$ cat pinniped-supervisor-svc-overlay.yaml
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Service", "metadata": {"name": "pinniped-supervisor", "namespace": "pinniped-supervisor"}})
---
#@overlay/replace
spec:
  type: LoadBalancer
  selector:
    app: pinniped-supervisor
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8443

#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Service", "metadata": {"name": "dexsvc", "namespace": "tanzu-system-auth"}}), missing_ok=True
---
#@overlay/replace
spec:
  type: LoadBalancer
  selector:
    app: dex
  ports:
    - name: dex
      protocol: TCP
      port: 443
      targetPort: https
 

$ cat pinniped-supervisor-svc-overlay.yaml | base64 -w 0
I0AgbG9hZCgiQHl0dDpvdmVybG.....wcwo=


$ kubectl patch secret ldap-cert-pinniped-addon -n tkg-system -p '{"data": {"overlays.yaml": "I0AgbG9hZCgiQHl0dDpvdmVyb...wcwo="}}'
secret/ldap-cert-pinniped-addon patched
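Incidentally, if you want to avoid copying and pasting the long base64 string, the encode and patch steps can be combined into a single command. This is just a convenience sketch assuming a bash shell and the same secret name as above:

$ kubectl patch secret ldap-cert-pinniped-addon -n tkg-system \
    -p "{\"data\": {\"overlays.yaml\": \"$(base64 -w 0 < pinniped-supervisor-svc-overlay.yaml)\"}}"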
 

$ kubectl get all -n pinniped-supervisor
NAME                                       READY   STATUS      RESTARTS   AGE
pod/pinniped-post-deploy-job-2x7hl         0/1     Completed   0          8m31s
pod/pinniped-post-deploy-job-r8hrh         0/1     Error       0          10m
pod/pinniped-post-deploy-job-wr7np         0/1     Error       0          9m19s
pod/pinniped-supervisor-5dcbd8d56f-g6qkd   1/1     Running     0          8m9s
pod/pinniped-supervisor-5dcbd8d56f-zhdfp   1/1     Running     0          8m9s

NAME                          TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
service/pinniped-supervisor   LoadBalancer   100.71.206.210   xx.xx.xx.18   443:30044/TCP   10m

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pinniped-supervisor   2/2     2            2           10m

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/pinniped-supervisor-5dcbd8d56f   2         2         2       10m

NAME                                 COMPLETIONS   DURATION   AGE
job.batch/pinniped-post-deploy-job   1/1           2m34s      10m

 
$ kubectl get all -n tanzu-system-auth
NAME                       READY   STATUS    RESTARTS   AGE
pod/dex-688567f8c4-kf5mx   1/1     Running   0          8m29s

NAME             TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
service/dexsvc   LoadBalancer   100.70.240.108   xx.xx.xx.19   443:31265/TCP   11m

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dex   1/1     1            1           11m

NAME                             DESIRED   CURRENT   READY   AGE
replicaset.apps/dex-688567f8c4   1         1         1       11m

The services are now of type Load Balancer and have external IP addresses (which I have deliberately obfuscated). The next step is to delete the pinniped post deploy job and allow it to restart. It will restart automatically. This can take some time (3-4 minutes). I usually run a watch (-w) to keep an eye on it.

$ kubectl get jobs -n pinniped-supervisor
NAME                       COMPLETIONS   DURATION   AGE
pinniped-post-deploy-job   1/1           3m53s      14h
 

$ kubectl delete jobs pinniped-post-deploy-job -n pinniped-supervisor
job.batch "pinniped-post-deploy-job" deleted


$ kubectl get jobs -n pinniped-supervisor -w
NAME                       COMPLETIONS   DURATION   AGE
pinniped-post-deploy-job   0/1                      0s
pinniped-post-deploy-job   0/1                      0s
pinniped-post-deploy-job   0/1           0s         0s
pinniped-post-deploy-job   1/1           11s        11s

You can now switch to the non-admin context of the cluster and try to access it. If you are on a desktop and you try to run a query, a browser will be launched and you can provide the LDAP credentials for the user that you wish to authenticate and give TKG cluster access to. If, however, like me, you are SSH'ed to a remote host and are running the kubectl get commands remotely in that session, you will see the following.

$ tanzu management-cluster kubeconfig get
You can now access the cluster by running 'kubectl config use-context tanzu-cli-ldap-cert@ldap-cert'
 
 
$ kubectl config use-context tanzu-cli-ldap-cert@ldap-cert
Switched to context "tanzu-cli-ldap-cert@ldap-cert".
 
 
$ kubectl get nodes
Error: no DISPLAY environment variable specified
^C

This is working as expected. In the past, you would have required a desktop with a browser to do the authentication. In TKG v1.4, there is a new feature which allows you to authenticate LDAP users in environments that do not have browsers (e.g. headless workstations) or when you are SSH'ed to an environment like I am above. This process is covered in the official documentation here. The first step is to set the environment variable TANZU_CLI_PINNIPED_AUTH_LOGIN_SKIP_BROWSER. You will then need to remove the earlier non-admin context and recreate it with the environment variable in place.

$ export TANZU_CLI_PINNIPED_AUTH_LOGIN_SKIP_BROWSER=true


$ kubectl config delete-context tanzu-cli-ldap-cert@ldap-cert
warning: this removed your active context, use "kubectl config use-context" to select a different one
deleted context tanzu-cli-ldap-cert@ldap-cert from /home/cormac/.kube/config


$ tanzu management-cluster kubeconfig get
You can now access the cluster by running 'kubectl config use-context tanzu-cli-ldap-cert@ldap-cert'


$ kubectl config use-context tanzu-cli-ldap-cert@ldap-cert
Switched to context "tanzu-cli-ldap-cert@ldap-cert".


$ kubectl get nodes
Please log in: https://xx.xx.xx.18/oauth2/authorize?access_type=offline&client_id=pinniped-cli&
code_challenge=8VbRBUHKhSK69gP4mq3G1dnd897tu0ShpsDvkuqi1Q0&code_challenge_method=S256&nonce=
263df470a232c24a929712565954a2d4&redirect_uri=http%3A%2F%2F127.0.0.1%3A39819%2Fcallback&
response_type=code&scope=offline_access+openid+pinniped%3Arequest-audience&state=c2c0517fab9affa8741d78d32acdd330

Note the "Please log in" message. You can now copy this URL and paste it on any host that has a browser but can still reach the Pinniped Supervisor service IP address. Notice that the redirect is a callback to 127.0.0.1 (localhost). This means a failure is expected, as the host with the browser won't respond to it, but that is okay. When you paste the link into a browser, you should see the following authentication prompt where you can add the LDAP user which you want to have access to the non-admin context of your cluster:

The browser will redirect to the IP address of the Dex service, but when it tries a callback to localhost, you get this error:

Again, this is expected. The next step is to take the callback URL that was provided to you in the browser (the browser which was unable to do the 127.0.0.1 callback) and run the following command on the host where you ran the initial kubectl get command:

$ curl -L 'http://127.0.0.1:35259/callback?code=4bExu5_NESSuaY7kSCCIGcn8arVwJiVUC19UTZL-
Jck.61FRn2BeHo7C8fYGbA0aranDsAFT3v0bRTTwq-TizA8&scope=openid+offline_access+pinniped%3A
request-audience&state=edd3196e00a3eb01403d2d4e2d918fa3'
you have been logged in and may now close this tab

Anyway, commands now work as an LDAP user, provided the cluster role binding created as per the official documentation here matches the credentials added for the LDAP user. If the cluster role binding does not exist, you will encounter the following error when attempting to query the non-admin cluster as a user who does not have any privileges:

$ kubectl get nodes
Error from server (Forbidden): nodes is forbidden: User "chogan@rainpole.com" cannot list resource 
"nodes" in API group "" at the cluster scope

To create it, switch back to the admin context, create the ClusterRoleBinding, switch again to the non-admin context and see if this LDAP user (who you have already authenticated via Dex/Pinniped) can now successfully interact with the cluster.

$ kubectl config use-context ldap-cert-admin@ldap-cert
Switched to context "ldap-cert-admin@ldap-cert".


$ kubectl config get-contexts
CURRENT   NAME                            CLUSTER     AUTHINFO              NAMESPACE
*         ldap-cert-admin@ldap-cert       ldap-cert   ldap-cert-admin
          tanzu-cli-ldap-cert@ldap-cert   ldap-cert   tanzu-cli-ldap-cert


$ cat chogan-crb.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: chogan
subjects:
  - kind: User
    name: chogan@rainpole.com
    apiGroup:
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
 

$ kubectl apply -f chogan-crb.yaml
clusterrolebinding.rbac.authorization.k8s.io/chogan created


$ kubectl get clusterrolebinding chogan
NAME     ROLE                        AGE
chogan   ClusterRole/cluster-admin   6m
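As an aside, I used a YAML manifest above, but the same binding could also be created imperatively with a single kubectl command (using the same user, chogan@rainpole.com, in my case) if you prefer:

$ kubectl create clusterrolebinding chogan --clusterrole cluster-admin --user chogan@rainpole.com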


$ kubectl config use-context tanzu-cli-ldap-cert@ldap-cert
Switched to context "tanzu-cli-ldap-cert@ldap-cert".


$ kubectl get nodes
NAME                             STATUS   ROLES                  AGE   VERSION
ldap-cert-control-plane-77g8v    Ready    control-plane,master   86m   v1.21.2+vmware.1
ldap-cert-md-0-b7f799d64-kgcqm   Ready    <none>                 85m   v1.21.2+vmware.1

The LDAP user (chogan@rainpole.com) is now able to manage the cluster using the non-admin context of the TKG management cluster.
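If you would rather hand the non-admin kubeconfig directly to the LDAP user than switch contexts yourself, it can also be exported to a standalone file. The file path below is only an example:

$ tanzu management-cluster kubeconfig get --export-file /tmp/ldap-cert-kubeconfig
$ kubectl get nodes --kubeconfig /tmp/ldap-cert-kubeconfig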

Useful tips

On one occasion, after deleting the Pinniped post deploy job, I still could not access the cluster. I got the following error:

$ kubectl get pods
Error: could not complete Pinniped login: could not perform OIDC discovery for 
"https://xx.xx.xx.16:31234": Get "https://xx.xx.xx.16:31234/.well-known/openid-configuration": 
dial tcp xx.xx.xx.16:31234: connect: connection refused
Error: pinniped-auth login failed: exit status 1
Error: exit status 1

Unable to connect to the server: getting credentials: exec: executable tanzu failed with exit code 1

To resolve this issue, I deleted the Pinniped post deploy job for a second time. Once that completed, the kubectl command worked as expected. On another occasion, I hit the following issue:

$ kubectl get nodes
Error: could not complete Pinniped login: could not perform OIDC discovery for 
"https://xx.xx.xx.18": Get "https://xx.xx.xx.18/.well-known/openid-configuration": 
x509: certificate has expired or is not yet valid: current time 2021-11-18T10:52:35Z is before 2021-11-18T10:54:18Z
Error: pinniped-auth login failed: exit status 1
Error: exit status 1

Unable to connect to the server: getting credentials: exec: executable tanzu failed with exit code 1

To resolve this issue, I had to make some changes to the TKG control plane VMs' time sync configuration on vSphere. I did this by editing the settings of the control plane VMs in vSphere and enabling the option to allow periodic time syncs to the ESXi host (this option is disabled by default). Afterwards, the kubectl command worked as expected. This is where an admin can modify the Synchronize Time with Host settings; I simply had to click Synchronize time periodically and the issue was fixed.
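I made the change in the vSphere Client UI, but the same flag can presumably be toggled from the command line with govc, something along the lines of the sketch below (untested in this environment; it assumes govc is installed and pointed at your vCenter, and uses the control plane VM name from earlier):

$ govc vm.change -vm ldap-cert-control-plane-77g8v -sync-time-with-host=true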
