PowerProtect Data Manager 19.10 Kubernetes User Guide

Recommendations and considerations when using a Kubernetes cluster

Review the following information about the deployment, configuration, and use of a Kubernetes cluster as an asset source in PowerProtect Data Manager:

Add line to custom-ports file when not using port 443 or 6443 for Kubernetes API server

If a Kubernetes API server listens on a port other than 443 or 6443, an update is required to the PowerProtect Data Manager firewall to allow outgoing communication on the port being used. Before you add the Kubernetes cluster as an asset source, perform the following steps to ensure that the port is open:

  1. Log in to PowerProtect Data Manager, and change the user to root.
  2. Add a line to the file /etc/sysconfig/scripts/custom-ports that includes the port number that you want to open.
  3. Run the command service SuSEfirewall2 restart.

Perform this procedure again after any PowerProtect Data Manager update, restart, or server disaster recovery.
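
For example, if the Kubernetes API server listens on port 8443 (a hypothetical value), steps 2 and 3 look similar to the following sketch, assuming the custom-ports file lists one port per line:

    echo "8443" >> /etc/sysconfig/scripts/custom-ports    # append the API server port (8443 is an example)
    service SuSEfirewall2 restart                         # reload the firewall with the new rule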

Log locations for Kubernetes asset backup and restore operations

All session logs for Kubernetes asset protection operations are pulled into the /logs/external-components/k8s folder on the PowerProtect Data Manager host.
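
For example, to review recent session logs from a shell on the PowerProtect Data Manager host (the log file name shown is a placeholder):

    ls -lt /logs/external-components/k8s                  # list session logs, newest first
    tail -f /logs/external-components/k8s/<session-log>   # follow a log while a job runs (substitute an actual file name)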

PVC parallel backup and restore performance considerations

To limit the impact on system performance, PowerProtect Data Manager supports a maximum of five parallel namespace backups and two parallel namespace restores per Kubernetes cluster. PVCs within a namespace are backed up and restored sequentially.

You can queue up to 100 namespace backups across protection policies in PowerProtect Data Manager. For example, if protection policies submit 20 namespace backups for one cluster at the same time, five run concurrently and the remaining 15 wait in the queue.

Overhead of PowerProtect Data Manager components on Kubernetes cluster

At any time during a backup, the typical footprint of the PowerProtect Data Manager components (Velero, PowerProtect Controller, cProxy) is less than 2 GB of memory and four CPU cores. This usage is not sustained and occurs only during the backup window.

The following resource limits are defined on the PowerProtect pods that are part of the PowerProtect Data Manager stack (see the verification sketch after this list):

  • Velero maximum resource usage: 1 CPU core, 256 MiB memory
  • PowerProtect Controller maximum resource usage: 1 CPU core, 256 MiB memory
  • PowerProtect cProxy pods (maximum of 5 per cluster): Each cProxy pod typically consumes less than 300 MB memory and less than 0.8 CPU cores. These pods are created and terminated within the backup job.
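
To confirm these limits on a running cluster, you can inspect the pods with kubectl. A minimal sketch, assuming the components run in a namespace named powerprotect (the namespace and pod names can differ in your deployment):

    kubectl get pods -n powerprotect                   # list the PowerProtect Data Manager pods
    kubectl describe pod <pod-name> -n powerprotect    # review the Limits section for CPU and memory
    kubectl top pods -n powerprotect                   # observe live usage (requires metrics-server)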

Only Persistent Volumes with VolumeMode Filesystem supported

Backup and recovery of Kubernetes cluster assets in PowerProtect Data Manager is supported only for Persistent Volumes with the VolumeMode Filesystem.
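
To check the volume mode of the Persistent Volumes in a cluster, a kubectl query similar to the following can help; volumes that report Block are not eligible for protection:

    kubectl get pv -o custom-columns=NAME:.metadata.name,VOLUMEMODE:.spec.volumeMode
    # an empty VOLUMEMODE value means the default, which is Filesystem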

Objects using PVC scaled down before starting the restore

When PowerProtect Data Manager detects that a PVC is in use by a Pod, Deployment, StatefulSet, DaemonSet, ReplicaSet, or ReplicationController, the following activities occur before the PVC is restored to the original namespace or to an existing namespace:

  • PowerProtect Data Manager scales down any objects that use the PVC.
  • PowerProtect Data Manager deletes the DaemonSet and any Pods that use the PVCs.

Upon completion of the PVC restore, any objects that were scaled down are scaled back up, and any objects that were deleted are re-created. Ensure that you shut down any Kubernetes jobs that actively use the PVC before running a restore.

NOTE: If PowerProtect Data Manager is unable to revert these configuration changes because of a controller crash, Dell Technologies recommends that you delete the Pod, Deployment, StatefulSet, DaemonSet, ReplicaSet, or ReplicationController from the namespace, and then perform a Restore to Original again on the same namespace.
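
A minimal sketch of that manual cleanup, assuming the affected object is a Deployment named my-app in the namespace demo (both names are hypothetical):

    kubectl get deploy,sts,ds,rs,rc,pods -n demo    # identify objects that still reference the PVC
    kubectl delete deployment my-app -n demo        # remove the object that could not be reset
    # then run Restore to Original again on the same namespace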

Restore to a different existing namespace can result in a mismatch between the pod UID and the UIDs of persistent volume files

A PowerProtect Data Manager restore of files in persistent volumes restores the UID and GID along with the contents. When you restore to a different namespace that already exists and the pod consuming the persistent volume runs with restricted Security Context Constraints (SCC) on OpenShift, the UID assigned to the pod upon restore might not match the UID of the files in the persistent volumes. This mismatch might cause the pod to fail to start.

For namespaces with pods running with restricted SCC, Dell Technologies recommends one of the following restore options:

  • Restore to a new namespace, in which case PowerProtect Data Manager also restores the namespace resource.
  • Restore to the original namespace, if that namespace still exists.
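
To check for this condition on OpenShift before restoring, you can compare the UID range assigned to the target namespace with the numeric ownership of the restored files. A sketch, assuming a namespace named demo and a mount path of /data (both hypothetical):

    # UID range that restricted-SCC pods in the namespace receive
    oc get namespace demo -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}'
    # numeric UID/GID ownership of files, viewed from a pod that mounts the volume
    oc exec <pod-name> -n demo -- ls -ln /data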
