PowerProtect Data Manager 19.11 Kubernetes User Guide

About Tanzu Kubernetes guest clusters and Supervisor clusters

In addition to supporting the protection of Kubernetes clusters running directly on virtual machines in a vSphere environment, PowerProtect Data Manager supports the protection of Tanzu Kubernetes guest clusters in vSphere with Tanzu.

This functionality is provided by a Supervisor cluster which, unlike a regular upstream Kubernetes cluster, acts as a customized cluster for vCenter purposes and includes the VM operator service, the Cluster API, and the guest cluster controller. The Tanzu Kubernetes guest cluster, where the worker nodes and all applications reside, is controlled by and runs on the Supervisor cluster. A Supervisor cluster exists on each vSphere cluster.

When creating a Tanzu Kubernetes guest cluster running on a Supervisor cluster, a YAML manifest file is used to specify the number of control plane nodes and worker nodes to create in the guest cluster. The Tanzu Kubernetes Grid (TKG) guest cluster service then uses Cluster API services to create the guest cluster and, at the same time, uses VM operator services to create the virtual machines that make up the guest cluster.
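
For illustration, a minimal sketch of such a manifest is shown below, using the TanzuKubernetesCluster API (run.tanzu.vmware.com/v1alpha1). The cluster name, namespace, VM class, storage class, and distribution version are placeholders; actual values must match the VM classes, storage policies, and Tanzu Kubernetes releases available in your Supervisor cluster namespace.

  apiVersion: run.tanzu.vmware.com/v1alpha1
  kind: TanzuKubernetesCluster
  metadata:
    name: example-guest-cluster         # placeholder cluster name
    namespace: example-namespace        # Supervisor cluster namespace
  spec:
    distribution:
      version: v1.21                    # Tanzu Kubernetes release available in your environment
    topology:
      controlPlane:
        count: 3                        # number of control plane nodes
        class: best-effort-small        # VM class defined in the Supervisor cluster
        storageClass: example-storage-policy
      workers:
        count: 3                        # number of worker nodes
        class: best-effort-small
        storageClass: example-storage-policy

Applying the manifest with kubectl in the Supervisor cluster namespace starts the guest cluster creation described above.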

Protecting the Tanzu Kubernetes guest cluster involves two layers of interaction:

  • A Supervisor cluster running on the vSphere infrastructure acts as the controlling authority that allows you to create guest clusters. You can also create virtual machines directly on a Supervisor cluster, which provides the native functionality of Kubernetes (see the sketch after this list).
  • You can create an upstream Kubernetes cluster and specify how many control plane nodes you require.
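
As a hedged sketch of creating a virtual machine directly on a Supervisor cluster, the following manifest uses the VM Service VirtualMachine API (vmoperator.vmware.com/v1alpha1). The VM name, namespace, image, VM class, and storage class are placeholders; the image is assumed to come from a content library associated with the Supervisor cluster namespace.

  apiVersion: vmoperator.vmware.com/v1alpha1
  kind: VirtualMachine
  metadata:
    name: example-vm                    # placeholder virtual machine name
    namespace: example-namespace        # Supervisor cluster namespace
  spec:
    imageName: example-vm-image         # image from an associated content library
    className: best-effort-small        # VM class defined in the Supervisor cluster
    storageClass: example-storage-policy
    powerState: poweredOn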

Note the following differences in behavior between the protection of Kubernetes clusters deployed directly on vSphere and the protection of Kubernetes clusters deployed by vSphere with Tanzu:

  • The pods running in the Tanzu Kubernetes guest cluster do not have direct access to Supervisor cluster resources. Therefore, components running inside the guest cluster, such as the PowerProtect controller, cannot access Supervisor cluster resources.
  • A mapping is created for Persistent Volumes that are provisioned by the vSphere CSI driver on the guest cluster, so that virtual FCDs (First Class Disks) and Persistent Volumes created on the guest cluster are mapped to the Supervisor cluster that runs directly on the vSphere infrastructure (see the example claim after this list).
  • Because cProxy pods running in the Tanzu Kubernetes guest cluster do not have direct access to FCDs, a vProxy is deployed in vCenter to protect the guest cluster. This protection requires an external VM Direct engine dedicated to Kubernetes workloads. During protection of the guest cluster, CNDM locates this VM Direct engine and notifies the guest cluster to use it for backup and restore operations.

    Communication is then established between the PowerProtect controller and the Velero vSphere plug-in running in the guest cluster so that, once the backup is created, the vSphere plug-in can notify the Supervisor cluster's API server. The Supervisor cluster takes the FCD snapshot and returns the snapshot ID to the guest cluster. Once the PowerProtect controller becomes aware of the snapshot, a session is created on the vProxy virtual machine and on a pod in the Supervisor cluster namespace that has access to the FCDs, in order to move data from the FCD to the backup destination.
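
To make the Persistent Volume mapping described above concrete, the following is a minimal sketch of a PersistentVolumeClaim as it might be created on the guest cluster. The claim name, namespace, and storage class are placeholders; the storage class is assumed to be one synchronized from a vSphere storage policy. The vSphere CSI driver provisions a Persistent Volume for this claim, backed by an FCD that is visible to the Supervisor cluster.

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: example-app-data              # placeholder claim name
    namespace: example-app-namespace    # guest cluster namespace
  spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: example-storage-policy   # storage class synced from a vSphere storage policy
    resources:
      requests:
        storage: 10Gi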

