
First Steps with TKGm Guest Clusters in VMware Cloud Director 10.3


In the previous article, I explained how to deploy Container Service Extension 3.1 with TKGm support in VMware Cloud Director 10.3. In this article, I'm looking at how the Tanzu Kubernetes Grid cluster is integrated into the Organization VDC and how the tenant can access and work with the Kubernetes cluster.

Prerequisites

This guide is written from the tenant's perspective and all actions can be performed by the tenant itself in the VMware Cloud Director Self Service Portal. However, it is required that the Service Provider has enabled "Tanzu Kubernetes Grid" support, as described in the previous article.

To deploy TKGm clusters, the tenant needs:

  • An Organization VDC
  • NSX-T backed Edge Gateway
  • TKGm Entitlement Rights Bundle
  • Routed Org Network (Self Service)
  • SNAT for the Routed Org Network to access Cloud Director

Deploy TKGm Kubernetes Cluster

The TKG cluster is deployed by the tenant. Log in to the VDC and navigate to More > Kubernetes Container Clusters, press NEW and follow the wizard.

You can add an SSH key in Step 2. This allows you to log in to the Kubernetes nodes as root.
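
If you added a key, you can later log in to any node directly over SSH. A quick sketch (the node address 192.168.0.13 is just an example from the routed org network and the key path is an assumption; use the private key that matches the public key from Step 2):

# ssh -i ~/.ssh/id_rsa root@192.168.0.13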

Currently, only single control plane clusters are supported.

Do NOT disable "Allow external traffic to be routed to this cluster"! This feature is required by TKGm. Please be aware that the master node is fully exposed to the external network (Internet).

When you finish the wizard, Cloud Director and CSE deploy and configure the Kubernetes master and worker VMs as a vApp. It also creates a DNAT rule using a free IP address from the Edge's IP address pool to fully expose the master node:

Please be aware that with this NAT configuration, every service of type NodePort you deploy in Kubernetes is accessible from the Internet.

Get Kubeconfig and Access the Kubernetes Cluster with kubectl

When the cluster has been deployed, you can download the kubeconfig from the UI:

You can also get the kubeconfig using the API with the following API call:

GET https://[URL]/api/cse/3.0/cluster/[Cluster ID]/config
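
If you want to do this by hand, a minimal curl sketch could look like the following. This assumes API version 36.0 and that the CSE endpoint accepts the standard vCloud session token; the token is taken from the x-vcloud-authorization header returned by the login call, and the bracketed values are placeholders:

# curl -sk -X POST -u '[user@org]:[password]' -H 'Accept: application/*+xml;version=36.0' -D - 'https://vcloud.virten.lab/api/sessions'
# curl -sk -H 'x-vcloud-authorization: [Token]' 'https://vcloud.virten.lab/api/cse/3.0/cluster/[Cluster ID]/config'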

I made a small bash script that automatically pulls the configuration. You can get the script here.

# wget https://raw.githubusercontent.com/fgrehl/virten-scripts/master/bash/vcd_get_kubeconfig.sh
# chmod +x vcd_get_kubeconfig.sh
# ./vcd_get_kubeconfig.sh -u [email protected] -p 'VMware1!' -v vcloud.virten.lab -c k8s-fgr > ~/k8s/k8s-fgr
# export KUBECONFIG=~/k8s/k8s-fgr

As mentioned earlier, the Kubernetes apiserver is exposed to the Internet and the public NAT address is automatically added to the downloaded kubeconfig. You should now be able to access the cluster with kubectl.

# kubectl get nodes
NAME        STATUS   ROLES                 AGE   VERSION
mstr-sqpd   Ready    control-plane,master   45h   v1.20.5+vmware.2
node-knp3   Ready    <none>                 45h   v1.20.5+vmware.2
node-zrce   Ready    <none>                 45h   v1.20.5+vmware.2

The first thing to check is whether all pods are up and running. All pods should be in Running state!

# kubectl get pods -A
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   antrea-agent-47j2t                     2/2     Running   0          94m
kube-system   antrea-agent-d7dms                     2/2     Running   0          91m
kube-system   antrea-agent-hslbg                     2/2     Running   0          89m
kube-system   antrea-controller-5cd95c574d-5z2np     1/1     Running   0          94m
kube-system   coredns-6598d898cd-9z55d               1/1     Running   0          94m
kube-system   coredns-6598d898cd-gf5qc               1/1     Running   0          94m
kube-system   csi-vcd-controllerplugin-0             3/3     Running   0          94m
kube-system   csi-vcd-nodeplugin-fwfz2               2/2     Running   0          90m
kube-system   csi-vcd-nodeplugin-xjz9l               2/2     Running   0          89m
kube-system   etcd-mstr-tfgn                         1/1     Running   0          94m
kube-system   kube-apiserver-mstr-tfgn               1/1     Running   0          94m
kube-system   kube-controller-manager-mstr-tfgn      1/1     Running   0          94m
kube-system   kube-proxy-8hbvz                       1/1     Running   0          89m
kube-system   kube-proxy-lsqxv                       1/1     Running   0          94m
kube-system   kube-proxy-r4twn                       1/1     Running   0          91m
kube-system   kube-scheduler-mstr-tfgn               1/1     Running   0          94m
kube-system   vmware-cloud-director-ccm-5489b6788c   1/1     Running   0          94m

The vmware-cloud-director-ccm pod is responsible for the Cloud Director communication. If antrea-controller, coredns, or csi-vcd-controllerplugin-0 are stuck in Pending state, you probably have a communication problem with Cloud Director. At this point, no workload pods can be started because without the Cloud Provider integration, all nodes are tainted:

# kubectl get events
LAST SEEN    TYPE     REASON           OBJECT                          MESSAGE
20s          Warning  FailedScheduling pod/webserver-559b886555-p4sjc  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
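
To see which taints are currently set, you can check the nodes directly (plain kubectl, nothing CSE-specific):

# kubectl describe nodes | grep -i taints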

If that is the case, you can run a shell on the ccm pod to verify VCD communication:

# kubectl exec -it vmware-cloud-director-ccm-5489b6788c-fc86k -n kube-system -- sh

Or get the logs from the pod:

# kubectl logs vmware-cloud-director-ccm-5489b6788c-fc86k -n kube-system

Deploy and Expose the First Container

As an example, I'll deploy an nginx webserver and expose it to the Internet using the ALB integration. This is a very simple task in Kubernetes:

# kubectl create deployment webserver --image nginx
# kubectl expose deployment webserver --port=80 --type=LoadBalancer

It should download the nginx image, launch a pod, and create a service of type LoadBalancer:

# kubectl get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
webserver    1/1     1            1           47m

# kubectl get service
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
kubernetes   ClusterIP      100.64.0.1       <none>         443/TCP        2d1h
webserver    LoadBalancer   100.67.6.30      203.0.113.13   80:32515/TCP   47m

From the output, you can see that it has automatically assigned the address 203.0.113.13 from the tenant's IP pool. This address is not directly configured on the ALB load balancer. Each service of type LoadBalancer you deploy uses an address from 192.168.8.2-192.168.8.100 to configure the NSX-ALB load balancer and then creates a NAT rule to forward a public address to the load balancer.
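
If you need the assigned external address for scripting, you can also read it straight from the service object (plain kubectl, using the webserver service from the example above):

# kubectl get service webserver -o jsonpath='{.status.loadBalancer.ingress[0].ip}'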

For more information about the VCD Cloud Provider, which also includes the NSX-ALB integration, refer to cloud-provider-for-cloud-director.

Deploy Internal Services

Currently, every service you deploy is automatically exposed to the Internet. You do not have an option to influence which IP address is assigned when you deploy a LoadBalancer. If you want to make a service available internally to your Org Network only, you have to deploy it as NodePort:

# kubectl expose deployment webserver --name=webserver2 --port=80 --type=NodePort

Check the service to learn the exposed port:

# kubectl get service
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
webserver2   NodePort       100.71.223.221   <none>         80:30871/TCP   9h

The webserver can now be accessed on all Kubernetes nodes on port 30871. If you want, you can configure an internal load balancer using the VCD UI. Just create a pool with all nodes as pool members. You can quickly get a list of all node addresses with the following command:

kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }'
192.168.0.13
192.168.0.15
192.168.0.14
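
From a machine inside the org network, you can then test the service against any of these nodes using the NodePort from above, for example:

# curl -I http://192.168.0.13:30871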

Persistent Storage with Named Disks

Persistent volumes in TKGm use the Named Disks feature in Cloud Director. The CSI driver documentation is available here. Communication with Cloud Director is already prepared by the Cloud Provider, so you just have to define a Storage Class. A storage class can be defined with the following YAML file. Please change storageProfile to fit your environment or just use "*" to use the default storage policy.

sc_gold.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
  name: gold
provisioner: named-disk.csi.cloud-director.vmware.com
reclaimPolicy: Delete
parameters:
  storageProfile: "StorageGold"
  filesystem: "ext4"

Then apply the YAML file:

# kubectl apply -f sc_gold.yaml
storageclass.storage.k8s.io/gold created
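
You can verify that the class has been registered:

# kubectl get storageclass gold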

Now create and apply a PVC and a debugger pod that mounts the PVC to /data:


pvc.yaml

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testpvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: "gold"
---
kind: Pod
apiVersion: v1
metadata:
  name: debugger
spec:
  volumes:
    - name: testpvc
      persistentVolumeClaim:
        claimName: testpvc
  containers:
    - name: debugger
      image: busybox
      command: ['sleep', '3600']
      volumeMounts:
        - mountPath: "/data"
          name: testpvc

You will find the PVC in Cloud Director > Data Centers > Datacenter > Storage > Named Disks.
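
You can also check the claim from the Kubernetes side; the PVC should be Bound and a corresponding PersistentVolume should have been created:

# kubectl get pvc testpvc
# kubectl get pv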

Run a shell in the pod to verify that the PVC has been mounted successfully:

# kubectl exec -it debugger -- sh
/ # df -h /data
Filesystem  Size    Used  Available  Use%  Mounted on
/dev/sdb    975.9M  2.5M  906.2M     0%    /data
/ # touch /data/hello
/ # ls -l /data
total 16
-rw-r--r--  1 root root     0 Nov 21 20:56 hello
drwx------  2 root root 16384 Nov 21 20:51 lost+found
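
When you are done testing, deleting the pod and the claim should also remove the named disk from Cloud Director, because the storage class uses reclaimPolicy Delete:

# kubectl delete pod debugger
# kubectl delete pvc testpvc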

